\section{Introduction} \label{sec:intro}
Stress tensors are widely used to describe the internal forces of matter in various fields of science, such as mechanical engineering and materials science.
In quantum systems as well, stress tensors have been investigated for many years, dating back to one of the earliest papers on quantum mechanics
\cite{Schrodinger1927, Pauli, Epstein1975, Bader1980, Bamzai1981a, Nielsen1983, Nielsen1985, Folland1986a, Folland1986b, Godfrey1988, Filippetti2000, Tachibana2001, Pendas2002, Rogers2002, Tachibana2004, Tachibana2005, Morante2006, Tao2008, Ayers2009, Tachibana2010,Jenkins2011,GuevaraGarcia2011,Tachibana2012}.
In the quantum mechanical context, several different definitions and applications of the stress tensor can be found in the literature.
For example, Ref.~\cite{Nielsen1983} and subsequent works focus on the stress tensor associated with forces on nuclei.
In contrast, the one we consider in this paper is the electronic stress tensor, which describes the internal forces acting on electrons in molecules, following Ref.~\cite{Tachibana2001}.
This electronic stress tensor has been used to investigate chemical bonds and reactions, and many interesting properties have been discovered
\cite{Tachibana2001,Tachibana2004,Tachibana2005, Szarek2007, Szarek2008, Szarek2009, Ichikawa2009a, Ichikawa2009b,Tachibana2010, Ichikawa2010, Ichikawa2011a, Ichikawa2011b, Ichikawa2011c}.
We briefly review our past works on the electronic stress tensor that are closely related to this paper.
In Ref.~\cite{Tachibana2004}, it was proposed that a covalent bond can be described by the eigenvalues and
eigenvectors of the electronic stress tensor.
Specifically, a bonding region with covalency can be characterized and visualized by the ``spindle structure'',
where the largest eigenvalue of the electronic stress tensor is positive and
the corresponding eigenvectors form a bundle of flow lines connecting the nuclei.
In Ref.~\cite{Szarek2007}, the eigenvalues of the electronic stress tensor of various homonuclear diatomic molecules
were investigated, and all metal and metalloid molecules studied there exhibit
a negative largest eigenvalue at the midpoint between the nuclei.
Ref.~\cite{Szarek2007} mentioned that this negativity of the largest eigenvalue may be connected to the metallic nature
of bonding; at the same time, however, it is known that a negative largest eigenvalue also appears between
pairs of atomic nuclei at very short distances, such as the C-C bond in C$_2$H$_2$ \cite{Tachibana2005,Szarek2007}.
In Refs.~\cite{Ichikawa2011a} and \cite{Ichikawa2011c}, Al$_4$ and Pd clusters were investigated,
and we found a ``pseudo-spindle structure'' \cite{Ichikawa2011c} between metal nuclei.
The pseudo-spindle structure is a negative-eigenvalue version of the spindle structure: more precisely,
it is the region between two atoms where the largest eigenvalue of the
electronic stress tensor is negative and the corresponding eigenvectors form a pattern
connecting them.
However, the pseudo-spindle structure is again also seen in C$_2$H$_2$ \cite{Tachibana2005,Ichikawa2011b}.
Since the pseudo-spindle structure is seen in many bonds between metal nuclei, it is tempting to associate it
with a metallic bond, but the pseudo-spindle structure alone does not seem sufficient to
capture all the features of a metallic bond.
In this paper, we discuss how a metallic bond may be described using the electronic stress tensor.
Conventional wisdom would say that it is not sensible to define ``a metallic bond'', because a metal is
defined only in the bulk, with many electrons and ions.
Our challenge is to capture some aspects of a metallic bond, or the metallicity of a chemical bond, using the electronic stress tensor.
As a starting point, we tackle this problem using small clusters of lithium, the simplest metal.
There are many studies in the literature on the nature of bonding in Li clusters, using methods such as
the generalized valence-bond method \cite{McAdon1985},
the electron localization function (ELF) \cite{Rousseau2000,Alikhani2006} and
the quantum theory of atoms in molecules (QTAIM) \cite{Gatti1987,Bersuker1993,Yepes2012}.
We would also like to see whether our stress tensor approach provides views complementary to these methods.
The structure of this paper is as follows.
Sec.~\ref{sec:calc} briefly summarizes our analysis method of electronic structures using the electronic stress tensor.
Sec.~\ref{sec:results} shows our results of stress tensor analysis on the Li clusters.
In Sec.~\ref{sec:dinuclear}, we discuss Li$_2$. We also discuss H$_2$ and LiH for comparison.
In Sec.~\ref{sec:clusters}, the Li clusters are analyzed and a method based on differential eigenvalues of the stress tensor is proposed.
In Sec.~\ref{sec:comparison}, that method is applied to hydrocarbon molecules and its effectiveness is tested.
Finally, Sec.~\ref{sec:conclusion} is devoted to our conclusion.
\section{Theory and calculation methods} \label{sec:calc}
In this section, we summarize the quantities and methods we use for our electronic stress tensor analysis.
Our stress tensor is based on the rigged quantum electrodynamics (QED) theory \cite{Tachibana2001}.
Rigged QED is ordinary QED (the theory of photons and electrons) equipped with degrees of freedom for atomic nuclei,
so that it can be applied to atomic and molecular systems.
Since this is a theory of quantum fields, there exist quantum operators defined at each point in spacetime.
By using the equation of motion for operators of various physical quantities, many interesting operator relations have been found
\cite{Tachibana2001,Tachibana2004,Tachibana2005,Tachibana2010}.
The one related to the electronic stress tensor is the equation of motion of the kinetic momentum density operator.
It turns out that the time derivative of the kinetic momentum density operator can be expressed as the sum of two force density operators:
the tension density operator $\Hat{\vec{\tau}}^S(\vec{r})$ and the Lorentz force density operator $\Hat{\vec{L}}(\vec{r})$.
$\Hat{\vec{\tau}}^S(\vec{r})$ can be expressed as the divergence of a $3\times 3$ tensor operator, which we call
the stress tensor density operator $\hat{\overleftrightarrow{\tau}}^{S}(\vec{r})$.
We obtain the quantum field theory version of the equilibrium equation for the stress tensor
by taking the expectation value of this equation of motion with respect to a stationary electronic state of an atom or molecule
(note that the expectation value of the time derivative of any operator with respect to a stationary state is zero):
$0 = \vec{\tau}^S(\vec{r}) + \langle \Hat{\vec{L}}(\vec{r}) \rangle$, or
$0 = \sum_l \partial_l \tau^{Skl}(\vec{r}) + \langle \Hat{L}^k(\vec{r})\rangle$,
where $\{k, l\} = \{1, 2, 3\}$,
$\langle ... \rangle$ denotes the expectation value,
$\vec{\tau}^S(\vec{r}) \equiv \langle \hat{\vec{\tau}}^S(\vec{r}) \rangle$ and
$\tau^{Skl}(\vec{r}) \equiv \langle \hat{\tau}^{Skl}(\vec{r}) \rangle$.
The expectation values for the stress tensor density and tension density can be expressed as
\begin{eqnarray}
\tau^{Skl}(\vec{r}) &=& \frac{\hbar^2}{4m}\sum_i \nu_i
\Bigg[\psi^*_i(\vec{r})\frac{\partial^2\psi_i(\vec{r})}{\partial x^k \partial x^l}-\frac{\partial\psi^*_i(\vec{r})}{\partial x^k} \frac{\partial\psi_i(\vec{r})}{\partial x^l} \nonumber\\
& & \hspace{4cm} +\frac{\partial^2 \psi^*_i(\vec{r})}{\partial x^k \partial x^l}\psi_i(\vec{r}) -\frac{\partial \psi^*_i(\vec{r})}{\partial x^l}\frac{\partial \psi_i(\vec{r})}{\partial x^k}\Bigg], \label{eq:stress} \\
\tau^{S k}(\vec{r})&=& \sum_l \partial_l \tau^{Skl}(\vec{r}) \nonumber \\
&=&\frac{\hbar^2}{4m}\sum_i \nu_i
\Bigg[\psi^*_i(\vec{r})\frac{\partial \Delta\psi_i(\vec{r})}{\partial x^k}-\frac{\partial\psi^*_i(\vec{r})}{\partial x^k} \Delta\psi_i(\vec{r}) \nonumber\\
& & \hspace{4cm} +\frac{\partial \Delta\psi^*_i(\vec{r})}{\partial x^k}\psi_i(\vec{r}) -\Delta \psi^*_i(\vec{r}) \frac{\partial \psi_i(\vec{r})}{\partial x^k}\Bigg],
\label{eq:tension}
\end{eqnarray}
where $\{k, l\} = \{1, 2, 3\}$, $m$ is the electron mass, $\psi_i(\vec{r})$ is the $i$th natural orbital and $\nu_i$ is its occupation number.
Since the stress tensor is a Hermitian matrix, its eigenvalues are real and its eigenvectors are orthogonal to each other.
We denote the eigenvalues by $\lambda_i$ $(i=1,2,3)$ with the ordering $\lambda_3 \ge \lambda_2 \ge \lambda_1$.
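As an illustration, the eigenvalue analysis at a single point reduces to diagonalizing a real symmetric $3\times 3$ matrix; the following is a minimal Python sketch (not part of the MRDFT package, and the numerical input is a hypothetical placeholder):
\begin{verbatim}
import numpy as np

def stress_eigensystem(tau):
    # tau: real symmetric 3x3 stress tensor density at one point.
    # numpy.linalg.eigh returns eigenvalues in ascending order,
    # i.e. (lambda_1, lambda_2, lambda_3), with orthonormal
    # eigenvectors as the columns of v.
    tau = np.asarray(tau, dtype=float)
    assert np.allclose(tau, tau.T)
    w, v = np.linalg.eigh(tau)
    return w, v

# Hypothetical values (atomic units) at a single grid point:
tau_example = np.array([[-0.002, 0.0,    0.0  ],
                        [ 0.0,  -0.002,  0.0  ],
                        [ 0.0,   0.0,    0.010]])
lam, vec = stress_eigensystem(tau_example)
# lam[2] > 0: tensile stress along the eigenvector vec[:, 2].
\end{verbatim}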
The concept of the ``Lagrange point'' was proposed in Ref.~\cite{Szarek2007}, and we use it as a point that characterizes
a bond between two atoms.
The Lagrange point $\vec{r}_L$ is the point where the tension density $\vec{\tau}^S(\vec{r})$
vanishes: $\tau^{S k}(\vec{r}_L)=0$.
We report the eigenvalues of the electronic stress tensor at this point in the following sections.
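Numerically, locating $\vec{r}_L$ amounts to a three-dimensional root search on the tension density field. A minimal sketch, assuming the tension density is available as a callable (the model field below is purely illustrative):
\begin{verbatim}
import numpy as np
from scipy.optimize import root

def find_lagrange_point(tension, r_a, r_b):
    # Search for tau^S(r) = 0 starting from the midpoint
    # of the two nuclei at positions r_a and r_b.
    r0 = 0.5 * (np.asarray(r_a, float) + np.asarray(r_b, float))
    sol = root(tension, r0, tol=1e-10)
    if not sol.success:
        raise RuntimeError("no zero of the tension density found")
    return sol.x

# Illustrative model field with a zero midway between the nuclei:
tension_model = lambda r: -np.asarray(r) * np.exp(-np.dot(r, r))
r_L = find_lagrange_point(tension_model, [-1.0, 0.0, 0.0],
                                         [ 1.0, 0.0, 0.0])
\end{verbatim}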
We also analyze the electronic structure of molecules using the kinetic energy density $n_T(\vec{r})$,
\begin{eqnarray}
n_{T}(\vec{r}) = - \frac{\hbar^2}{4m}
\sum_{i} \nu_i \left[ \psi_{i}^{*}(\vec{r}) \Delta \psi_{i}(\vec{r}) +
\Delta \psi_{i}^{*}(\vec{r}) \cdot \psi_{i}(\vec{r}) \right], \label{eq:KED}
\end{eqnarray}
which is defined in Ref.~\cite{Tachibana2001}.
Note that our definition of the kinetic energy density is not positive-definite.
Using this kinetic energy density, we can divide the whole space into three types of region:
the electronic drop region $R_D$ with $n_T(\vec{r}) > 0$,
where classically allowed motion of electrons is guaranteed and the electron density is amply accumulated;
the electronic atmosphere region $R_A$ with $n_T(\vec{r}) < 0$, where the motion of electrons is classically forbidden
and the electron density is dried up;
and the electronic interface $S$ with $n_T(\vec{r}) = 0$, the boundary between $R_D$ and $R_A$, which corresponds to a turning point.
The interface $S$ gives a clear image of the intrinsic shape of atoms and molecules and is therefore a particularly important region.
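In practice, this partition can be read off directly from the sign of $n_T(\vec{r})$ sampled on a grid; a minimal Python sketch (the tolerance used to tag points as lying near the interface is an arbitrary numerical choice):
\begin{verbatim}
import numpy as np

def classify_regions(n_T, tol=1e-12):
    # +1: electronic drop region R_D    (n_T > 0)
    # -1: electronic atmosphere R_A     (n_T < 0)
    #  0: near the electronic interface S (|n_T| <= tol)
    n_T = np.asarray(n_T)
    return np.where(n_T > tol, 1, np.where(n_T < -tol, -1, 0))
\end{verbatim}
For visualization, $S$ itself can then be extracted as the zero isosurface of $n_T$.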
The electronic structures used in this paper are obtained with Gaussian 09 \cite{Gaussian09} for cluster models
and with ABINIT \cite{ABINIT1,ABINIT2} for periodic models.
We use the Molecular Regional Density Functional Theory (MRDFT) package \cite{MRDFTv3} to compute the quantities, Eqs.~\eqref{eq:stress}-\eqref{eq:KED},
introduced in this section. Part of the visualization is done using the PyMOL Molecular Viewer program \cite{PyMOL}.
\section{Results and discussion} \label{sec:results}
\subsection{Li$_2$} \label{sec:dinuclear}
In this section, we analyze the smallest Li cluster, Li$_2$, by the electronic stress tensor.
For comparison, we also show the results of stress tensor analysis of H$_2$ and LiH.
The computation is performed by the coupled-cluster single-double (CCSD) method using the 6-311++G** basis set \cite{Raghavachari80b,Frisch84}.
Before we discuss Li$_2$, we review how the chemical bond of the hydrogen molecule is expressed by the electronic stress tensor
(Eq.~\eqref{eq:stress}).
In Fig.~\ref{fig:dist_H2}~(b), we plot the largest eigenvalue of the stress tensor and corresponding eigenvector on the plane including
the internuclear axis.
If the largest eigenvalue of the stress tensor is positive, the stress is called tensile,
and if it is negative, the stress is called compressive.
Namely, the sign of the largest eigenvalue tells whether electrons at a certain point in space
exhibit tensile or compressive stress.
The difference between tensile and compressive stress can be described as follows.
If we consider a fictitious plane at some point in an atomic or molecular system, there are two ways in which
the part on one side of the plane can act on the part on the other side.
One is when electrons on one side of the plane are ``pulled up'' by the electron field on the other side.
The other is when electrons on one side are ``pushed back'' by the electron field on the other side.
The former corresponds to tensile stress and the latter to compressive stress.
The direction of the eigenvector corresponds to the direction normal to the plane.
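This picture can be stated compactly via the standard traction relation of continuum mechanics (a textbook identity, not specific to the present formalism): the force per unit area transmitted across a plane with unit normal $\vec{n}$ is
\begin{eqnarray}
t^{k}(\vec{n}) = \sum_l \tau^{Skl}(\vec{r})\, n^l .
\end{eqnarray}
Taking $\vec{n}$ along the $i$th eigenvector $\vec{e}_i$ gives $\vec{t} = \lambda_i \vec{e}_i$, so the normal component of the traction is simply $\lambda_i$: electrons are pulled across the plane when $\lambda_i > 0$ (tensile) and pushed back when $\lambda_i < 0$ (compressive).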
In Fig.~\ref{fig:dist_H2}~(b), we can see that the region with positive eigenvalue spreads between the H atoms and,
in that region, the eigenvectors form a bundle of flow lines that connects the H nuclei.
Such a region is called ``spindle structure" \cite{Tachibana2004} and it is clearly seen in the figure.
This means that electrons close to one H nucleus are pulled up by the electron field close to the other H nucleus.
Such a structure can be associated with the formation of a tight bond and has a directionality suitable
for a bond between the two H nuclei.
Thus, it has been proposed that such a positive-eigenvalue region is a manifestation of
a strong covalent bond \cite{Tachibana2004}.
We now turn to Li$_2$, whose largest eigenvalue of the stress tensor and corresponding eigenvector
are shown in Fig.~\ref{fig:dist_Li2}~(b) in the same manner as for H$_2$.
The striking difference is that there is a region with a negative largest eigenvalue between the Li nuclei and there is no spindle structure.
We do not even see a ``pseudo-spindle structure" \cite{Ichikawa2011c}, the negative eigenvalue version of the spindle structure
({\it i.e.} the region with the eigenvectors forming a bundle of flow lines that connects two nuclei but with negative eigenvalues),
which is frequently seen between metallic atoms \cite{Ichikawa2011a, Ichikawa2011b, Ichikawa2011c}.
This means that the electron field pushes back electrons in the neighboring regions, with no solid directionality that could be
regarded as a bond axis.
This is similar to the stress tensor of a liquid.
A liquid has the ability to flow:
liquid particles are loosely bound and can move around one another freely.
Such properties are readily associated with electrons in a metal.
In fact, metallic bonding is often described in terms of
an ``electron sea'' and positively charged metal ions.
It may be unusual to say that Li$_2$ has a purely metallic bond with no trace of covalency,
but if we look at the electronic stress tensor, it is totally different from a covalent bond
and certainly has a feature that can be called metallic.
We will return to this point when we discuss larger Li clusters.
After seeing the difference between H$_2$ and Li$_2$ in terms of the electronic stress tensor,
it is interesting to see what the stress tensor of LiH looks like.
This is shown in Fig.~\ref{fig:dist_LiH}~(b).
It has both positive and negative regions between the H and Li nuclei:
the region close to H has a positive largest eigenvalue, while the region close to Li has a negative one.
In the region very close to the internuclear axis, there is some flow of eigenvectors
in the direction connecting the nuclei, but it is somewhat different from the spindle or pseudo-spindle structures.
Therefore, we can say that LiH has a bond with solid directionality, like the covalent bond of H$_2$,
but with a stronger part near H and a weaker part near Li.
It is tempting to associate this with an ionic bond, but we reserve judgment until
more examples have been examined.
Since we have discussed the analogy between the stress tensors of Li$_2$ and a liquid, it is important to investigate
the other two eigenvalues.
We plot all the three eigenvalues of the stress tensor for H$_2$, Li$_2$ and LiH along the internuclear axes in Fig.~\ref{fig:eig_Li2H2LiH}.
The bond length and the eigenvalues at the Lagrange point are summarized in Table \ref{tab:Li2H2LiH}.
Let us first look at H$_2$.
As shown in Fig.~\ref{fig:eig_Li2H2LiH} (a), around the midpoint of the two H nuclei,
the largest eigenvalue ($\lambda_3$) is positive and the smaller eigenvalues ($\lambda_1$ and $\lambda_2$)
are negative and degenerate.
The absolute values of the three eigenvalues are ${\cal O}(0.1)$ and so are the differences between $\lambda_3$ and $\lambda_1$ or $\lambda_2$.
Fig.~\ref{fig:eig_Li2H2LiH} (b) shows the case of Li$_2$, where the three eigenvalues are all negative
around the midpoint between the two Li nuclei.
The two larger eigenvalues ($\lambda_3$ and $\lambda_2$) are degenerate, and the smallest one ($\lambda_1$)
is smaller than them by only ${\cal O}(10^{-4})$.
Therefore, compared to the case of H$_2$, the three eigenvalues of Li$_2$ can be regarded as degenerate.
Such degeneracy in the eigenvalues of the stress tensor, suggesting a lack of directionality, solidifies the analogy with a liquid;
note that, in terms of the stress tensor, a liquid is characterized by three eigenvalues that are negative and degenerate.
This is in stark contrast with the case of H$_2$, whose eigenvalue pattern indicates the strong directionality of the bond.
The case of LiH is shown in Fig.~\ref{fig:eig_Li2H2LiH} (c).
The three eigenvalues are almost degenerate in the region close to Li, while in the region close to H the largest eigenvalue is positive
and the two smaller eigenvalues are negative and degenerate.
This differs from both Li$_2$ and H$_2$, and looks like a mixture of the two.
Such a feature could be a hint for considering how an ionic bond may be expressed by the electronic stress tensor,
but, again, we need more examples.
So far, we have analyzed the molecules at the equilibrium distances and discussed their nature of bonding.
We now try to extract more information by changing the internuclear distances.
In particular, if we compute the stress tensors at very long distances, we can study the nature of bonding
from the viewpoint of the bond formation.
The eigenvalues and eigenvectors of the stress tensor for H$_2$, Li$_2$ and LiH at various internuclear distances
are plotted in Figs.~\ref{fig:dist_H2}, \ref{fig:dist_Li2} and \ref{fig:dist_LiH}.
In the following discussion, the zero surface of the kinetic energy density (Sec.~\ref{sec:calc}, Eq.~\eqref{eq:KED}),
the electronic interface $S$, plays an important role.
As is explained in Sec.~\ref{sec:calc}, $S$ is the boundary between
the electronic drop region $R_D$ (where the electron density is accumulated)
and the electronic atmosphere region $R_A$ (where the electron density is dried up).
In the figures, $S$ is drawn by green dashed lines.
To identify which side of $S$ has positive or negative kinetic energy density ($R_D$ or $R_A$),
it is useful to remember that $R_D$ regions naturally exist around nuclei.
In the figures with very long internuclear distances, we see that $S$ exists around each nucleus,
indicating that the atoms do not form a molecule and exist independently.
Moving to the figures with shorter internuclear distances, we see that the $S$'s around the nuclei approach each other, merge,
and eventually form the $S$ of the molecule at the equilibrium distance.
Incidentally, the state when $S$'s just begin to merge, namely touch each other, is called
the ``intrinsic electronic transition state" \cite{Tachibana2001}.
We denote the internuclear distance at this state by $r^{\rm TS}$.
The notable fact is that, when the distance is so long that the $S$'s are well separated
(practically an infinite distance), we see the spindle structure in the $R_A$
between the two nuclei for all of H$_2$, Li$_2$ and LiH
(see Figs.~\ref{fig:dist_H2} (d), \ref{fig:dist_Li2} (h) and \ref{fig:dist_LiH} (d), respectively).
This means that electrons in the $R_D$'s near the nuclei are pulled up toward $R_A$ through $S$, resulting in the formation
of a Lewis electron pair.
In other words, all the molecules are covalently bonded at very long distances.
In the case of H$_2$, as shown in Fig.~\ref{fig:dist_H2}, the spindle structure continues to exist at shorter distances including $r^{\rm TS}_{\rm HH}=2.20\,{\rm \AA}$ and the equilibrium distance.
In the case of Li$_2$, more interesting things happen.
First, we observe that the spindle structure exists at the intrinsic electronic transition state
($r^{\rm TS}_{\rm LiLi} = 5.43\,{\rm \AA}$, Fig.~\ref{fig:dist_Li2} (g)).
Then, at 3.31\,${\rm \AA}$, the positive eigenvalue region vanishes and the spindle structure turns into
the pseudo-spindle structure.
(Note that, at slightly longer distance of 3.36\,${\rm \AA}$, the positive eigenvalue region vanishes on the
internuclear axis while positive areas remain away from the axis.
Namely, between 3.36\,${\rm \AA}$ and 3.31\,${\rm \AA}$, the positive region is not simply connected.)
That pseudo-spindle structure is broken at 2.78\,${\rm \AA}$, at which the three eigenvalues are degenerate (Fig.~\ref{fig:dist_Li2eig} (b)).
At shorter distances, we see a negative region with no particular directionality such as to connect two nuclei, especially at the equilibrium distance.
This is an example of what has been argued in Ref.~\cite{Tachibana2012}.
When there are two distant Li atoms, the tensile stress (as indicated by the spindle structure) pulls up electrons
in $R_D$ to the adjacent $R_A$ through the $S$ which separates them.
The consequence is the formation of a long-range Lewis electron pair.
As the distant pair of Li atoms comes closer, the spindle structure disappears, indicating that the Lewis pair becomes unbound.
This is considered to be a manifestation of metallicity.
In the case of LiH, at long distances, the $R_A$ between Li and H is characterized by the spindle structure as mentioned above;
in particular, at $r^{\rm TS}_{\rm LiH} = 3.38\,{\rm \AA}$
(Fig.~\ref{fig:dist_LiH} (c)), we see that the contact point of the two interfaces is covered by the spindle structure.
As the distance gets shorter, although the spindle structure is partly lost, the positive eigenvalue region remains around H.
A negative eigenvalue region develops close to Li but does not cover the whole space, unlike in the case of Li$_2$.
From the viewpoint of the electronic stress tensor, we may therefore conclude that LiH has a bond which is neither covalent nor metallic.
Whether we can categorize it as ionic or some other type of bonding is an interesting question, which will be investigated in the future.
\subsection{Lithium clusters} \label{sec:clusters}
The optimized structures for Li clusters, Li$_n$ $(n=3\sim 8)$ are shown in Fig.~\ref{fig:structure_Licluster}.
In the figure, atoms are connected when the Lagrange point (Sec.~\ref{sec:calc}) is found between them.
These structures are obtained by re-optimizing the structures reported in Ref.~\cite{Gardet1996}.
We performed optimization with the three lowest multiplicities for each cluster and adopted the one with the lowest energy,
which turned out to be a singlet for clusters with even numbers of atoms and a doublet for those with odd numbers.
The computation is performed by the CCSD method using the 6-311++G** basis set \cite{Raghavachari80b,Frisch84}.
Although computational setups are slightly different, the obtained structures are basically consistent with Ref.~\cite{Gardet1996}.
As for Li$_6$, Ref.~\cite{Gardet1996} has reported the structure with $D_{2h}$ symmetry while
we have obtained the one with higher symmetry of $D_{4h}$, which is consistent with Ref.~\cite{Florez2008}.
We first examine the largest eigenvalue of the stress tensor and corresponding eigenvector on the plane including
three atoms in the Li clusters.
They are plotted in Figs.~\ref{fig:eigvec_Licluster1} and \ref{fig:eigvec_Licluster2}.
We omit equivalent bonds due to the structural symmetry.
In these figures, every pair of atoms has a Lagrange point in between.
We focus on the electronic drop regions ({\it i.e.}~regions with positive kinetic energy density) between two atoms in each panel.
The common feature is that they all have a negative largest eigenvalue.
As for the eigenvectors, many regions do not show a clear pattern connecting the two atoms, just as in the case of Li$_2$.
Regions with rather clear pseudo-spindle structures are found, but they are not so common
(the 1-2 bond in Li$_6$ (Fig.~\ref{fig:eigvec_Licluster1} (f)),
the 1-2 bond in Li$_7$ (Fig.~\ref{fig:eigvec_Licluster2} (a)) and
the 1-2 bond in Li$_8$ (Fig.~\ref{fig:eigvec_Licluster2} (c) or (d))).
From the discussion in Sec.~\ref{sec:dinuclear},
in order to characterize a chemical bond by the electronic stress tensor,
it is important to know the pattern and degree of degeneracy of the three eigenvalues in addition to
the sign of the largest eigenvalue.
The three eigenvalues of stress tensor at the Lagrange points are summarized in Table \ref{tab:Licluster}.
From the table, we can reconfirm that the eigenvalues are all negative.
In addition, we see that the three eigenvalues are close to each other, as in the case of Li$_2$, at least at the Lagrange points.
In order to quantify the degeneracy pattern more clearly, we propose to use the following differential eigenvalues:
\begin{eqnarray}
\lambda_{D32} &\equiv& \lambda_3-\lambda_2, \\
\lambda_{D21} &\equiv& \lambda_2-\lambda_1.
\end{eqnarray}
Since our convention is $\lambda_3 \ge \lambda_2 \ge \lambda_1$,
$\lambda_{D32}$ and $\lambda_{D21}$ are non-negative.
For example, at the Lagrange point,
$(\lambda_{D32}, \lambda_{D21}) =$ $(0.394, 0)$ for H$_2$,
$(0, 2.37 \times 10^{-4})$ for Li$_2$
and $(8.52 \times 10^{-3}, 0)$ for LiH.
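Computing the differential eigenvalues from an eigenvalue triple is trivial; a short sketch (the input below is a hypothetical, nearly degenerate, all-negative triple of the Li-like pattern, not a value from our tables):
\begin{verbatim}
def differential_eigenvalues(lam):
    # lam: the three stress tensor eigenvalues (any order).
    l1, l2, l3 = sorted(lam)          # l3 >= l2 >= l1
    return l3 - l2, l2 - l1           # (lambda_D32, lambda_D21)

# Hypothetical Li-like triple: negative and nearly degenerate.
lam_D32, lam_D21 = differential_eigenvalues(
    (-1.2e-3, -1.0e-3, -1.0e-3))      # -> (0.0, 2.0e-4)
\end{verbatim}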
In Fig.~\ref{fig:diff_eig} (a), we plot $(\lambda_{D32}, \lambda_{D21})$ at the Lagrange points for the Li clusters.
The Li clusters have very small values of $\lambda_{D32}$ and $\lambda_{D21}$,
of the same order as $\lambda_{D21}$ of Li$_2$ and much smaller than $\lambda_{D32}$ of H$_2$.
Thus, we can consider the three eigenvalues of the Li clusters to be almost degenerate,
implying a high degree of isotropy (low degree of directionality) of the electronic stress tensor.
Together with their negativity, the stress tensor of the Li clusters is just like that of a liquid.
As we have mentioned in Sec.~\ref{sec:dinuclear}, electrons in metal are readily associated with particles in liquid.
By turning this argument around, we may consider that the negativity and degeneracy of the eigenvalues of
the electronic stress tensor characterize and quantify some aspects of the metallic nature of chemical bonding.
To support the validity of this idea, we here show some data for very small Na clusters, Na$_n$ $(n=2\sim 4)$.
The electronic structures are obtained with the same computational setup as for the Li clusters.
We performed geometry optimization starting from the structures reported in Refs.~\cite{Solovyov2002,Florez2008}.
The most stable structures turned out to be singlets for clusters with even numbers of atoms and doublets for those with odd numbers.
As for the symmetry of these structures, Na$_3$ and Na$_4$ are found to have $C_{2v}$ and $D_{2h}$ symmetry, respectively.
The results of the electronic stress tensor analysis are summarized in Table \ref{tab:Nacluster}.
From the data in the table, we readily see that the eigenvalues are all negative and almost degenerate, just as in the case of the Li clusters.
As another way to test this idea, we analyze the electronic stress tensor of periodic models for bulk Li and Na.
The body-centered cubic structure is adopted for both Li and Na with the lattice constants taken to be
3.491\,{\AA} and 4.225\,{\AA} respectively.
We use the norm-conserving pseudopotentials of Troullier-Martins type \cite{Troullier1991} and
the generalized-gradient approximation method by Perdew-Burke-Ernzerhof \cite{Perdew1996} for density
functional exchange-correlation interactions.
The kinetic energy cutoff of the plane-wave expansion is taken as 40.0 hartree, with a $2\times 2\times 2$ k-point set.
We compute the electronic stress tensor at the midpoint between two nearest-neighbor atoms.
We obtain $\lambda_3 = -0.468 \times 10^{-3}$ and $\lambda_2 = \lambda_1 = -0.716 \times 10^{-3}$ for Li,
and $\lambda_3 = \lambda_2 = -0.253 \times 10^{-3}$ and $\lambda_1 = -0.293 \times 10^{-3}$ for Na.
As for the differential eigenvalues,
$(\lambda_{D32}, \lambda_{D21}) =$ $(2.48 \times 10^{-4}, 0)$ for Li and $(0, 3.95 \times 10^{-5})$ for Na.
These results show that the negativity and degeneracy of the eigenvalues of the electronic stress tensor are found
in the bulk systems too, and that the degrees of negativity and degeneracy do not differ much from those of the clusters.
Here, some comments are in order regarding comparison with studies in the literature.
As shown just above, from the viewpoint of the electronic stress tensor, the chemical bonds in the small Li clusters
look similar to those in bulk Li. They possess compressive stress (negative eigenvalues) with no particular directionality
(almost degenerate eigenvalues), like a liquid, which can be considered one of the important features of a metal.
Interestingly, this is true even for Li$_2$.
Past studies such as Refs.~\cite{McAdon1985,Rousseau2000} show that multi-center bonds are ubiquitous
features of the Li clusters. In particular, Ref.~\cite{Rousseau2000} argues that
the metallic phase of Li is characterized by multi-center bonds and a very gentle variation of the ELF across space.
From this viewpoint, since Li$_2$ is not multi-centered by definition, Li$_2$ does not share a common feature
with bulk Li or the other Li clusters.
This point is in stark contrast with the stress tensor viewpoint.
We note that in Refs.~\cite{Gatti1987,Bersuker1993}, where Li$_2$ is studied in the framework of QTAIM,
Li$_2$ is viewed as a system in which two Li atoms are bonded to a pseudo-atom (the central nonnuclear attractor),
and the electrons in the central region are said to ``partially behave as mobile metallic electrons''.
The stress tensor viewpoint is closer to that of QTAIM in the sense that Li$_2$ already exhibits some aspects
of the chemical bonding that characterizes bulk Li.
\subsection{Comparison with other molecules} \label{sec:comparison}
In this section, we analyze hydrocarbon molecules and compare with the Li clusters,
especially regarding the negativity and degeneracy of the eigenvalues of the electronic stress tensor.
Ref.~\cite{Szarek2008} studied hydrocarbon molecules using the electronic stress tensor.
We use a similar set of molecules, but not exactly the same one.
The complete list of structures in our set, which consists of 48 molecules, is shown in the supplementary material
(Fig.~S1 and Table S1 \cite{SI1}).
The structures of these hydrocarbon molecules are obtained by optimization using the density functional theory (DFT) method with
B3LYP \cite{Lee1988,Becke1993}
as a functional and the 6-311++G** basis set.
As for the electronic stress tensor analysis, we repeat the same analyses that were carried out for the Li clusters
in the previous section.
The detailed data table for the hydrocarbon molecules like Table \ref{tab:Licluster} can be found in the supplementary material (Table S2 \cite{SI1}).
We begin by investigating the largest eigenvalue of stress tensor.
We find that most of the bonds have positive largest eigenvalues.
Negative largest eigenvalues are found in some of the triple C-C bonds,
as is mentioned in Sec.~\ref{sec:intro}.
In our data set, the triple C-C bonds in C$_2$HX with X being H, CH$_3$, C$_2$H$_5$, NH$_2$, OH, F, Cl and Br have
the largest eigenvalues which are negative and whose absolute values are ${\cal O}(10^{-3})$ or less
(the three eigenvalues for these bonds against the bond lengths are plotted in Fig.~S2 in the supplementary material \cite{SI1}).
It is known that the negativity of the largest eigenvalue in these triple bonds can be attributed to the very short bond length
and not to metallicity \cite{Tachibana2005,Szarek2008,Ichikawa2011b}.
The degeneracy pattern of the three eigenvalues of the electronic stress tensor in the hydrocarbon molecules is very different from
that of the Li clusters:
the two smaller eigenvalues are degenerate, $\lambda_1 \approx \lambda_2$,
while $\lambda_3$ takes much larger values, which are mostly positive.
Thus, if we only focus on the largest eigenvalue, these triple C-C bonds and the bonds in the Li clusters look similar.
However, by looking at the other two eigenvalues, we can clearly distinguish them.
As is done in the previous section, we compute the differential eigenvalues $\lambda_{D32}$ and $\lambda_{D21}$ at the Lagrange points
in the hydrocarbons.
These differential eigenvalues are plotted in Fig.~\ref{fig:diff_eig} (b), and we see that $\lambda_{D32}$ is much larger than $\lambda_{D21}$.
This means that the two smaller eigenvalues are nearly degenerate, while the largest one is much larger than the other two.
This distribution pattern is quite different from that of the Li clusters shown in Fig.~\ref{fig:diff_eig} (a).
Let us summarize our results using the differential eigenvalues.
The Li clusters are characterized by $\lambda_{D32} \ll 1$ and $\lambda_{D21} \ll 1$, whereas
the hydrocarbon molecules satisfy $\lambda_{D32} \gg \lambda_{D21}$ and $\lambda_{D21} \ll 1$.
In particular, $\lambda_{D32}({\rm Li}_n) \approx \lambda_{D21}({\rm Li}_n) \ll \lambda_{D32}({\rm h/c})$.
This means that the electronic stress tensor between atom pairs in the hydrocarbon molecules
shows directionality while it does not show much directionality in the case of the Li clusters.
In summary of this section, we find, first of all, that most bonds in the hydrocarbon molecules
differ from those in the Li clusters in the sense that the largest eigenvalue of the stress tensor is positive.
Although some of the triple C-C bonds have a negative largest eigenvalue, they do not
look like the bonds in the Li clusters because their eigenvalues are non-degenerate.
There is no hydrocarbon molecule whose electronic stress tensor is like that of a liquid,
at least in our hydrocarbon molecule set.
Thus, these results are not inconsistent with the idea proposed in the last section that the negativity and
degeneracy of the eigenvalues of the electronic stress tensor may characterize and quantify
some aspects of the metallicity of chemical bonding.
\section{Conclusion} \label{sec:conclusion}
In this paper, we have studied the electronic structure of small lithium clusters Li$_n$ ($n=2\sim 8$) using
the electronic stress tensor.
We have found that the degeneracy pattern of the three eigenvalues $\lambda_i$ $(i=1,2,3)$
($\lambda_3 \ge \lambda_2 \ge \lambda_1$) of the electronic stress tensor at the Lagrange points
of the Li clusters is very different from that of the hydrocarbon molecules,
which are covalently bonded.
Namely, the three eigenvalues of the Li clusters have almost the same values, while
in the hydrocarbon molecules the largest eigenvalue is much larger than the second largest,
which in turn has a value similar to the smallest one.
The former degeneracy pattern indicates that the bonds are not directional,
while the latter indicates a clear directionality of the bonds.
The negativity of the largest eigenvalue is not a unique feature of the Li clusters (some of the C-C triple bonds
also exhibit a negative largest eigenvalue), but this degeneracy is characteristic of Li.
We can consider that such difference in the degeneracy pattern reflects the metallic nature of bonding of the Li clusters
and the covalent nature of bonding of the hydrocarbon molecules.
Note that the negativity and degeneracy of the stress tensor, which are properties of a liquid,
are readily associated with the
traditional view of a metal as an ``electron sea''.
To describe the degeneracy pattern of the eigenvalues in a compact manner,
we have proposed to use the differential eigenvalues:
$\lambda_{D32} = \lambda_3-\lambda_2$ and $\lambda_{D21} = \lambda_2-\lambda_1$.
By using them, the degeneracy patterns of the eigenvalues can be
summarized as
$\lambda_{D32}({\rm Li}_n) \approx \lambda_{D21}({\rm Li}_n) \ll \lambda_{D32}({\rm h/c}) \sim {\cal O}(0.1)$.
It is of great interest whether this relation can be extended as
$\lambda_{D32}({\rm metallic}) \approx \lambda_{D21}({\rm metallic}) \ll \lambda_{D32}({\rm covalent})$
in general.
In this paper, we have only examined the Li clusters with limited numbers of atoms.
It is important to check this relation with larger Li clusters and also with other metal clusters.
Then, the role of the electronic stress tensor in describing metallicity would be elucidated, and
a more detailed classification of ``metallic bonds'' would become possible.
\noindent
\section*{Acknowledgment}
Theoretical calculations were partly performed at the Research Center for
Computational Science, Okazaki, Japan.
This work is supported in part by a Grant-in-Aid for Scientific Research (No.~22550011)
from the Ministry of Education, Culture, Sports, Science and Technology, Japan.
\section{Introduction}
\label{sec:intro}
Magnetized relativistic jets are important astrophysical phenomena, most notably in the context of gamma-ray bursts (GRBs), but also in active galactic nuclei and tidal disruption events. As a result, the dynamics of relativistic jets have been studied extensively, often in terms of the GRB central engine \citep{1999ApJ...524..262M, 2000ApJ...531L.119A, 2007ApJ...665..569M, 2009MNRAS.397.1153K, 2012MNRAS.423.3083M, 2013ApJ...767...19L}, but also in the largely engine-independent afterglow phase, when ejecta accelerated by the central engine has transferred its energy to a collimated blast wave \citep{1999ApJ...525..737R, 2000ApJ...541L...9K, 2001grba.conf..312G, 2002ApJ...571..779P, 2002bjgr.conf..146L, 2003ApJ...586..356Z, 2005ApJ...626..966P, 2007RMxAC..27..140G, 2010ApJ...722..235V, 2012ApJ...751...57D}. Despite these extensive studies, many fundamental questions remain unanswered. For example, afterglow jets are thought to be magnetized, as synchrotron emission necessitates a strong magnetic field, yet no clear mechanism has been demonstrated which robustly generates such a field. Additionally, current jet models are parameterized by a small handful of parameters \citep{2012ApJ...747L..30V}, which would seem to suggest a straightforward standardization of GRB afterglow light curves. However, GRB afterglows display a great deal of variety and variability, especially at early times, hence there likely exist additional important elements missing from simplified hydrodynamical models.
One avenue which potentially addresses these issues is vorticity generation behind the forward shock. Vorticity could both amplify magnetic fields via turbulent dynamo and produce variability in GRB light curves. Understanding where vorticity comes from and how much is present will help to complete the picture of how relativistic jets generate afterglow emission.
The source of vorticity is still unclear, but several mechanisms have been suggested. One possibility is small-scale Weibel instabilities in the plasma particles making up the shock itself \citep{2008ApJ...673L..39S}. However, such instabilities may have a short range of influence. Alternatively, vorticity can be generated when a shock overtakes high-density clumps in the interstellar medium (ISM) \citep{2007ApJ...671.1858S, 2008JFM...604..325G}, but it is unclear whether large enough clumps exist to make this a robust mechanism.
In this work, we consider the vorticity generated by Rayleigh-Taylor (RT) instability, as first suggested by \cite{2009ApJ...705L.213L}. After a GRB ejects a relativistic flow (ejecta), it expands and its thermal energy drops adiabatically until it is subdominant to the kinetic energy. The ejecta then coasts and becomes a very thin shell with width $\Delta r / r \sim 1 / \Gamma^2$, where $\Gamma$ is the Lorentz factor \citep{1999ApJ...513..669K}. When deceleration finally occurs, shocks are generated at the interface between ejecta and ISM. A forward shock pushes its way into the ISM, and a reverse shock pushes its way back into the ejecta. In the heated region between these two shocks resides the contact discontinuity, separating ejecta from ISM. This contact discontinuity is Rayleigh-Taylor unstable.
Nonrelativistic RT-unstable outflows were first studied by \cite{1992ApJ...392..118C}, both analytically and numerically. \cite{1996ApJ...465..800J} later performed a two-dimensional magnetohydrodynamics calculation which demonstrated how magnetic fields tend to align themselves along RT fingers. More recently, \cite{2010AnA...509L..10F} and \cite{2011MNRAS.415...83W} have demonstrated the importance of various microphysical processes at the shock front, and \cite{2010AnA...515A.104F} has performed 3D numerical calculations. To extend the nonrelativistic results into the relativistic regime, \cite{2010GApFD.104...85L} performed a stability analysis on the two-shock solution \citep{2006ApJ...645..431N} and found linear growth rates which could potentially be large enough to impact the forward shock.
In the first numerical studies of the relativistic case, \cite{2013ApJ...775...87D} found that Rayleigh-Taylor generates turbulence which could amplify magnetic fields to within a few percent of equipartition with the thermal energy density. However, in that work we found the turbulence remained confined within a region behind the forward shock and did not impact the forward shock, though turbulence did penetrate part of the energetic post-shock region.
In this letter we demonstrate that it is possible for the Rayleigh-Taylor turbulence to collide with the forward shock. As a result, the shock is perturbed and corrugated and significant turbulence is present everywhere behind it. This turbulence persists for a long time, until the shock becomes nonrelativistic, possibly due to the non-universality of the Blandford-McKee solution \citep{2000astro.ph.12364G}.
The key ingredient allowing the turbulence to collide with the forward shock is a softer equation of state. In fact, a softened equation of state has already been invoked in the nonrelativistic case to explain how Rayleigh-Taylor fingers can catch up to the forward shock in Type Ia supernovae \citep{2001ApJ...560..244B, 2010AnA...509L..10F, 2011MNRAS.415...83W}. In this case, the collision of the ejecta with the forward shock is apparent in images of supernova remnants \citep{2005AnA...433..229V, 2011AnA...532A.114K, 2012SSRv..173..369H}.
A softened equation of state can result in a reduced pressure gradient in the forward shock. This pressure gradient acts as a restoring force keeping the Rayleigh-Taylor fingers behind the forward shock, so if the pressure gradient is reduced significantly, the turbulence can collide with the forward shock. Therefore, if cooling removes a non-negligible fraction of the internal energy, it can reduce this pressure gradient, facilitating the collision of the turbulence with the shock.
There are several reasons why the equation of state of GRB jets is expected to be softer than an adiabatic $4/3$ law. Cosmic rays accelerated at the forward shock can carry away a significant amount of thermal energy, cooling the shock \citep{2011AnA...532A.114K, 2012ApJ...749..156O}, which may effectively result in a softer equation of state. Additionally, the shock is highly radiative, so that photon production also provides cooling.
Photon cooling can potentially impact the dynamics; for example, GRB 080319B was estimated to emit $\sim 10^{51}$ ergs in X-rays \citep{2008Natur.455..183R, 2009ApJ...691..723B}, which should be a non-negligible fraction of the energy in the blastwave. If this cooling is responsible for reduced pressure in the shock front, this might require some coupling between the leptons and the baryons. The cooling from cosmic rays has less certain observational constraints in the GRB context, but it has been found to be important dynamically for the nonrelativistic case of supernova remnants \citep{2011AnA...532A.114K, 2012APh....39...33A}.
In order to elucidate the effects of increased compressibility, we compare two Rayleigh-Taylor setups differing only in the adiabatic index: the usual relativistic $\gamma = 4/3$, and $\gamma = 1.1$ representing the case where cooling is dynamically important. It is straightforward to see that this reduced index should result in a lower pressure for fixed internal energy, since $P = (\gamma-1)\epsilon$. The difference between $4/3$ and $1.1$ can be envisioned as the difference between $P = \epsilon/3$ and $P = \epsilon/10$, so that for a given internal energy the pressure is reduced by about a factor of $3$. Such a change can be roughly interpreted as losing $2/3$ of the thermal energy to cooling. Therefore, this adiabatic index models a system which loses a non-negligible fraction of its internal energy. As we shall see, the reduced pressure allows Rayleigh-Taylor fingers to impact the forward shock, and generate plenty of vorticity for the entire time the shock is relativistic. Thus, the shock will continue to be corrugated as long as the relevant cooling processes are effective at softening the equation of state. The choice of $\gamma = 1.1$ to represent effects of cooling is a proof-of-concept which motivates further study using a more accurate cooling prescription.
\section{Numerical Set-Up}
\label{sec:numerics}
Our study entails numerically integrating the equations of relativistic hydrodynamics,
\begin{equation}
\partial_{\mu} ( \rho u^{\mu} ) = 0
\end{equation}
\begin{equation}
\partial_{\mu} ( ( \rho + \epsilon + P ) u^{\mu} u^{\nu} + P g^{\mu \nu} ) = 0
\end{equation}
where $\rho$ is proper density, $P$ is pressure, $\epsilon$ is the internal energy density, and $u^{\mu}$ is the four-velocity. We employ an adiabatic equation of state:
\begin{equation}
P = (\gamma - 1) \epsilon
\end{equation}
and we use relativistic units such that $c = 1$.
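For concreteness, the conversion from primitive variables $(\rho, \vec{v}, \epsilon)$ to the conserved variables evolved by codes of this type can be sketched as follows. This is a minimal Python sketch of the standard special-relativistic formulation with the adiabatic equation of state above; it is not taken from the JET code:
\begin{verbatim}
import numpy as np

def primitive_to_conserved(rho, v, eps, gamma=4.0/3.0):
    # rho: proper density, v: 3-velocity (c = 1),
    # eps: internal energy density, P = (gamma - 1) * eps.
    v = np.asarray(v, dtype=float)
    W = 1.0 / np.sqrt(1.0 - np.dot(v, v))   # Lorentz factor
    P = (gamma - 1.0) * eps
    h = rho + eps + P                       # enthalpy density
    D = rho * W                             # lab-frame mass density
    S = h * W**2 * v                        # momentum density
    tau = h * W**2 - P - D                  # energy minus rest mass
    return D, S, tau
\end{verbatim}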
We write the equations in spherical coordinates, and assume axisymmetry so that our calculation is two-dimensional (2D). Three-dimensional (3D) effects may also be important, as in the nonrelativistic case it has been found that the instability's growth is 30\% larger in 3D than in a 2D calculation \citep{2010AnA...515A.104F}. Thus, the 3D case will be an interesting complement to this work which we plan to address in the future.
In order to track the ``ejecta" and ``ISM" components of the flow, we also evolve a passive scalar, $X$, according to
\begin{equation}
\partial_\mu ( \rho X u^\mu ) = 0.
\end{equation}
Initially, we choose $X = 0$ in the ISM and $X = 1$ in the ejecta. This passive scalar is helpful for visualizing the turbulent mixing of ejecta with the ISM (Figure \ref{fig:pic}).
The calculation is performed using a novel moving-mesh code, JET \citep{2011ApJS..197...15D,2013ApJ...775...87D}. The JET code uses high-resolution shock-capturing methods, and is effectively Lagrangian in the radial dimension due to the radial motion of grid cells. In this study we use a resolution of $N_{\theta} = 800$ zones in polar angle, (meaning $\Delta \theta = 1.25 \times 10^{-4}$) and roughly $N_r \sim 8000$ zones radially. Previously we demonstrated accurate convergence of the JET code for the relativistic RT problem \citep{2013ApJ...775...87D}.
\subsection{Initial Conditions}
\label{sec:ics}
\begin{figure*}
\epsscale{1.0}
\plotone{fig1.eps}
\caption{ Snapshots of RT turbulence at time $t = l/\Gamma^{1/3} = 0.316 ~l$, using adiabatic index $\gamma = 4/3$ (left), and $\gamma = 1.1$ (right). As is clearly visible in the figure, the Rayleigh-Taylor turbulence does not collide with the forward shock in the $4/3$ case, but it does in the softer $\gamma = 1.1$ case.
\label{fig:pic} }
\end{figure*}
The system is parameterized by an explosion energy $E$, ejecta mass $M$, and ISM density $\rho_0$. We define a characteristic Lorentz factor $\Gamma = E/M$. For expediency we choose $\Gamma = 30$, and the constants $E$ and $\rho_0$ simply scale out of the problem, due to the scale invariance of the underlying hydrodynamical field equations. Our initial conditions consist of a cold expanding flow with kinetic energy $E$ and mass $M$ pushing its way into an ISM with density $\rho_0$. It begins long before a significant amount of the ISM has been swept up, at time $t_0 = 0.01 ~l$, where $l \equiv (E/\rho_0)^{1/3}$ is the Sedov length. This almost totally specifies the problem, except for the overall shape of the ejecta density profile, which we prescribe as follows based on 1D numerical calculations of relativistic fireballs \citep{2013ApJ...776L...9D}:
\begin{equation}
\rho( r , t_0 ) = \left\{ \begin{array}
{l@{\quad \quad}l}
{E \over 2 \pi t_0^3}{1 - R/t_0 \over 1 - r/t_0} & r < R \\
\rho_0 & \text{otherwise} \\
\end{array} \right.
\label{eqn:hubble}
\end{equation}
\begin{equation}
\vec v( r , t_0 ) = \left\{ \begin{array}
{l@{\quad \quad}l}
\vec r / t_0 & r < R \\
0 & \text{otherwise} \\
\end{array} \right.
\label{eqn:hubble-v}
\end{equation}
\begin{equation}
P( r , t_0 ) \ll \rho( r , t_0 )
\end{equation}
where we have defined
\begin{equation}
R = t_0 \left(1 - {1 \over 4 \Gamma^2}\right).
\end{equation}
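A minimal numerical sketch of these initial conditions on a radial grid (a hypothetical helper, not taken from the JET code; the small pressure floor representing the cold ejecta is an arbitrary choice):
\begin{verbatim}
import numpy as np

def initial_state(r, t0, E, Gamma, rho0):
    # r: array of radii, units with c = 1.
    R = t0 * (1.0 - 1.0 / (4.0 * Gamma**2))       # ejecta edge
    inside = r < R
    # np.minimum keeps the unused branch finite for r >= t0:
    rho_ej = (E / (2.0 * np.pi * t0**3)
              * (1.0 - R / t0) / (1.0 - np.minimum(r, R) / t0))
    rho = np.where(inside, rho_ej, rho0)
    v = np.where(inside, r / t0, 0.0)             # Hubble-like flow
    P = 1.0e-6 * rho                              # P << rho (cold)
    return rho, v, P
\end{verbatim}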
Our domain is axisymmetric, and extends from $\theta = 0$ to $\theta = 0.1$ with a reflecting boundary at $\theta = 0.1$. This angular size was chosen to represent a patch of a spherical outflow. During early times in the jet's evolution, while the Lorentz factor is larger than the inverse of the opening angle, causality prevents this choice of opening angle from making any difference in the dynamics. At late times, it is possible that jet spreading introduces an important dynamic to the turbulence, which we do not attempt to capture here.
\section{Results}
\label{sec:results}
\begin{figure}
\epsscale{1.0}
\plotone{fig2.eps}
\caption{ 1D profiles at $t = 0.316 ~l$ for the $\gamma = 1.1$ case. We plot proper density for a 1D calculation in spherical symmetry, and compare with the 2D version of the calculation. In 2D, we show the values of proper density along a radial slice at $\theta = 0.05$, as well as a spherically averaged profile. Additionally, we estimate the magnetic field strength using $\epsilon_{turb}$, the ratio of turbulent energy to thermal energy.
\label{fig:profile} }
\end{figure}
There is a clear difference in the dynamics between the $\gamma = 4/3$ and the $\gamma = 1.1$ case in Figure \ref{fig:pic}. In the $\gamma = 4/3$ case, the instability collides with the reverse shock, but does not overtake the forward shock. In the $\gamma = 1.1$ case, the softer equation of state results in lower pressures which allow the Rayleigh-Taylor turbulence to collide with the forward shock, corrugating it and pushing it forward. The entire heated region between forward and reverse shocks is turbulent. The corrugated shock front also causes further vorticity generation due to shock obliquity. In our calculation, we did not see any re-stabilization of the forward shock, which suggests that the turbulence should persist for as long as the soft equation of state is valid.
In Figure \ref{fig:profile} we plot a snapshot of the 1D profile at $t = \Gamma^{-1/3} ~l = 0.316 ~l$. Here we look at the $\gamma = 1.1$ case, comparing the turbulent 2D calculation with a 1D calculation performed assuming spherical symmetry. In 1D, we clearly see the forward shock, reverse shock, and contact discontinuity. In 2D, the contact discontinuity is totally disrupted, and the reverse shock has been pushed back further into the ejecta. For the 2D results, we plot a spherically averaged proper density, and additionally we plot the density measured along the radial line at $\theta = 0.05$. We see the turbulent variability exists everywhere between the forward and reverse shocks.
Turbulence quickly amplifies magnetic fields to rough equipartition with the turbulent kinetic energy density \citep{2003ApJ...597L.141H, 2004ApJ...612..276S, 2012PhRvL.108c5002B, 2013ApJ...769L..29Z}. Because this turbulence is present all the way up to the forward shock, magnetic fields amplified by the turbulence will facilitate synchrotron emission by the hot electrons in and behind the shock front.
Following the same strategy as in our previous work \citep{2013ApJ...775...87D}, we estimate the magnetic field strength by calculating the energy in turbulent fluctuations. This is characterized by the assumption
\begin{equation}
\epsilon_B \sim \epsilon_{turb},
\end{equation}
where $\epsilon_B$ is the local ratio of magnetic to thermal energy and $\epsilon_{turb}$ is the local ratio of turbulent to thermal energy. We calculate this ratio using essentially the same formula as in the previous work:
\begin{equation}
\epsilon_{turb} = { (\gamma - 1)(\langle \rho \rangle_{cons} - \langle \rho \rangle_{vol}) + \langle P \rangle_{cons} - \langle P \rangle_{vol} \over \langle P \rangle_{vol}}
\end{equation}
where brackets denote an average over angle, and the subscript ``vol" implies a simple volume average, whereas ``cons" implies a conservative average (mass, energy, and momentum are averaged, and proper density and pressure are calculated from these conserved quantities).
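Given the two kinds of angle-averaged profiles, evaluating the turbulent fraction is a one-line operation; a minimal sketch (the conserved-to-primitive inversion that produces the ``cons'' averages is omitted here):
\begin{verbatim}
def epsilon_turb(rho_cons, rho_vol, P_cons, P_vol, gamma=1.1):
    # "vol": plain volume averages of proper density and pressure;
    # "cons": density and pressure recovered from angle-averaged
    # conserved quantities (mass, momentum, energy).
    return ((gamma - 1.0) * (rho_cons - rho_vol)
            + P_cons - P_vol) / P_vol
\end{verbatim}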
The turbulent fraction is plotted in the lower panel of Figure \ref{fig:profile}. We note several important points. First, the entire region between the forward and reverse shocks has $\epsilon_{turb}>0$ and will therefore be magnetized. Secondly, the smallest values of $\epsilon_{turb}$ are of order 1\%, which by itself is enough to facilitate synchrotron emission. Third, the largest values of the magnetization are at the forward and reverse shocks, the same places where hot electrons are expected to produce synchrotron emission. At the shocks, the magnetization is somewhere between a few percent and ten percent. Finally, the reverse shock is significantly more magnetized than the forward shock, as $\epsilon_{turb} \sim 0.1$ near the reverse shock, and $\epsilon_{turb} \sim 0.025$ near the forward shock.
\begin{figure}
\epsscale{1.0}
\plotone{fig3.eps}
\caption{ X-ray afterglow light curves are calculated directly from the output data for adiabatic index $\gamma = 1.1$. We compare two methods: one assuming a constant magnetic energy fraction $\epsilon_B = 0.01$ and another directly estimating $\epsilon_B$ from the turbulence, equating $\epsilon_B = \epsilon_{turb}$.
\label{fig:afterglow} }
\end{figure}
It is now possible to calculate light curves using an estimate for the magnetic field taken directly from the turbulence calculation, rather than postulating a constant $\epsilon_B$. To generate an example light curve, we have calculated radiation from the blastwave using a simple synchrotron model. The synchrotron model is nearly identical to that of \cite{2010ApJ...722..235V}; however, we assume an optically thin medium and calculate the flux and observer time for each fluid element, then bin them to calculate a light curve. Therefore, we do not take into account absorption, but we do model emission, including electron cooling (assuming a global cooling time). The model requires us to choose specific values for parameters other than $\epsilon_B$, so we choose an isotropic equivalent energy $E_{iso} = 10^{53}$ ergs, an ISM density of 1 proton per $\text{cm}^3$, an electron energy fraction $\epsilon_e = 0.1$, a slope to the electron energy distribution $p = 2.5$, and a luminosity distance of $10^{28}$ cm. We chose an observer frequency in the X-ray band, at $10^{18}$ Hz. First, the 2D profiles are spherically averaged so as to produce a time series of 1D snapshots as in Figure \ref{fig:profile}. Next, this time series is fed into our synchrotron model, now assuming the flow is spherically symmetric (we will not see a jet break this way, as that might have made our results somewhat more difficult to interpret).
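The binning step of this procedure can be sketched as follows: each fluid element's emission is assigned an observer arrival time $t_{\rm obs} = t - r\mu$ (with $c = 1$ and $\mu$ the cosine of the angle to the line of sight) and accumulated into logarithmic time bins. This is a schematic of the bookkeeping only, not of the synchrotron emissivity model itself:
\begin{verbatim}
import numpy as np

def bin_light_curve(t_lab, r, mu, dL_nu, n_bins=200):
    # t_lab, r, mu, dL_nu: arrays over fluid elements; dL_nu is
    # each element's contribution to the specific luminosity.
    t_obs = t_lab - r * mu              # assumed positive here
    edges = np.logspace(np.log10(t_obs.min()),
                        np.log10(t_obs.max()), n_bins + 1)
    idx = np.digitize(t_obs, edges) - 1
    L = np.zeros(n_bins)
    for i, f in zip(idx, dL_nu):
        if 0 <= i < n_bins:
            L[i] += f
    return 0.5 * (edges[1:] + edges[:-1]), L
\end{verbatim}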
Sample light curves are plotted in Figure \ref{fig:afterglow}; one assuming a fixed $\epsilon_B = 0.01$, and the other calculated assuming the local formula $\epsilon_B = \epsilon_{turb}$ calculated directly from the averaged fields. The most remarkable part of Figure \ref{fig:afterglow} is the right-hand side, where the two light curves almost exactly coincide. This means the magnetic fields generated by Rayleigh-Taylor are very well approximated by a fixed magnetic energy fraction of $\epsilon_B = 0.01$. We expect, however, that this would not necessarily hold if we used a more realistic cooling model, which stopped impacting the dynamics after some timescale. In this case, the forward shock could re-stabilize and $\epsilon_{turb}$ would vanish at the shock. This could also happen during the non-relativistic phase of the afterglow, at which time the adiabatic index would grow to $5/3$, an effect which was not accounted for here.
The second interesting part of this plot is the left-hand side. In the grey curve corresponding to $\epsilon_B =$ constant, the initial rise is due to the transition from a coasting to a decelerating shell. This peak occurs at time $t_{\gamma} \sim \Gamma^{-2/3} ~l$. In the case where $\epsilon_B = \epsilon_{turb}$, the dynamics are identical, but the magnetic field does not turn on until a later time, about a factor of $5$ later in observer time, $t_B^{obs} = 5 t_\gamma^{obs}$ (we did not check the scaling with $\Gamma$ as we only performed the $\Gamma = 30$ case). This means that radiation from an outflow with initial Lorentz factor $\Gamma_0$ will peak at a time $t_{B}$, when the shell has decelerated to a Lorentz factor $\Gamma_B < \Gamma_0$ ($\sim 15$ in our case). If this peak were interpreted as occurring at $t_{\gamma}$, the Lorentz factor of the ejecta would be incorrectly estimated as $\Gamma_B$ instead of $\Gamma_0$. This point may be important for jet models with a baryon-loaded component \citep{1998ApJ...496..311P, 2002MNRAS.337.1349R}, and in general for the interpretation of early-time plateaus in GRB light curves \citep{2006ApJ...642..389N,2006ApJ...642..354Z,2014arXiv1402.5162V}.
\section{Discussion}
\label{sec:disc}
We demonstrate that Rayleigh-Taylor instabilities can generate vorticity and magnetic fields in GRB afterglow jets. The only ingredient necessary for this mechanism is an equation of state which is softer than the usual $\gamma = 4/3$ model. This soft equation of state represents a mechanism for energy loss which reduces the pressure in the forward shock so that RT fingers can collide with it. Several processes occur at the forward shock which act to cool it; cosmic rays and radiation, for example, may carry significant energy away from the shock. Regardless of what cools the shock front, this seemingly benign change to the dynamics can completely change the structure and magnetization of the blastwave.
We estimate a magnetic energy fraction of $\epsilon_B \sim 1\%$ in the forward shock, and $\sim 10\%$ in the reverse shock. We show that the choice $\epsilon_B =$ constant $= 0.01$ agrees surprisingly well with the late-time afterglow calculated assuming a local value of $\epsilon_B = \epsilon_{turb}$, although this result may change when more accurate models for cooling are employed in the calculation. Finally, we show that this magnetic field does not turn on until an observer time later than the deceleration time (a factor of $5$ later in our $\Gamma = 30$ case). This occurs at a time when the shock has decelerated to a Lorentz factor lower than its original value. This may be related to the observed early-time plateaus in GRB afterglows.
\acknowledgments
This research was supported in part by NASA through Chandra grant TM3-14005X and Fermi grant NNX13AO93G.
Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. We are grateful to Andrei Gruzinov, Hendrik van Eerten, Eliot Quataert and Carles Badenes for helpful comments and discussions.
\bibliographystyle{apj}
Since the observation of the Mott metal-insulator transition in Cr-doped V$_2$O$_3$,\cite{MT-exp-1}
many properties of the Mott transition have been studied both theoretically and experimentally.
Recent hot topics of the Mott transition include critical properties\cite{MT-critexp-1,MT-critexp-2,MI-crit-1,MTC-dcc-DMFT-1,docc-scaling,docc-scaling-2}
and dynamical characteristics.~\cite{rf-1,rf-2,rf-3,rf-4,rf-5,rf-6,rf-7}
It is well known that the doublon (doubly occupied site) plays the role of an order parameter in the Mott transition.\cite{docc-op}
Several theoretical groups have investigated the behavior of doublons as the electron correlation is varied near the Mott transition.
Understanding of its thermodynamic criticality has also progressed through scaling analyses of
the singularity in the doublon density near the critical end point of the Mott transition.\cite{MI-crit-1,MTC-dcc-DMFT-1,docc-scaling,docc-scaling-2}
Correlations between doublon and holon (vacant site) have also been studied theoretically, and it has been proposed that they form a bound state
in the insulating phase, with the Mott transition characterized by their binding and unbinding.\cite{dc-3,dc-4}
To study dynamical properties in the Mott transition, dynamical spin or charge susceptibility~\cite{rf-1,rf-2,rf-7,rf-3} and
optical conductivity~\cite{docc-scaling-2,rf-4,rf-5,rf-6} have been calculated for the Hubbard models by using
the dynamical mean field theory (DMFT).~\cite{DMFT}
A clear difference in the spin and charge dynamics has been reported
between the metallic and insulating phases.
However, the dynamics of doublons is not well understood.
This is the main issue of this paper and we will report our numerical study on the dynamical properties of doublons in the triangular-lattice Hubbard model.
The genuine Mott transition occurs inside the paramagnetic phase at a finite temperature, without an accompanying magnetic transition,
and its realization involves magnetic frustration.
In this letter, we numerically study the dynamics of doublons and holons in a frustrated Hubbard model.
Using cellular dynamical mean field theory (CDMFT),~\cite{CDMFT} we calculate and examine their dynamical correlations.
To this end, we employ the half-filled triangular-lattice Hubbard model.
Its Hamiltonian reads as
\begin{eqnarray}
H=-v\sum_{\langle i,j \rangle,\sigma}c_{i\sigma}^\dagger c_{j\sigma}+U\sum_{i}n_{i\uparrow}n_{i\downarrow}-\mu\sum_{i,\sigma}n_{i\sigma}.
\label{eq:H-1}
\end{eqnarray}
In this model, the transition occurs inside the paramagnetic region.~\cite{magphase-trian}
Note that the nearest-neighbor hopping integral is denoted by $v$ and the symbol $t$ is reserved for time.
$U$ is the on-site Coulomb repulsion and the chemical potential $\mu$ tunes electron density to half filling.
$ c_{i\sigma}^\dagger(c_{i\sigma})$ is an electron creation (annihilation) operator at site $i$ with spin $\sigma$
and the electron density operator is $n_{i\sigma}=c_{i\sigma}^\dagger c_{i\sigma}$.
We use the CDMFT with a three-site triangular cluster to calculate dynamical correlations.
We numerically obtain the single- and two-electron Green's functions inside the cluster by using the continuous-time quantum Monte Carlo (CTQMC) solver based on the strong coupling expansion.\cite{CTQMC}
Dynamical correlations are defined for real time $t$ by,
\begin{eqnarray}
S^{j}_{\rm oo'}(t)&=&\langle \hat{o}_{1}(t)\hat{o}'_{j}(0)\rangle,
\label{eq:dsf}
\end{eqnarray}
where the average is calculated for thermal equilibrium at temperature $T$.
$\hat{o}_{j}$ ($\hat{o}'_{j}$) is the operator at site $j$ inside the cluster for the doublon density $\hat{d}_{j}$, the holon density $\hat{h}_{j}$,
or the singly occupied density $\hat{s}_{j}$, defined as,
\begin{eqnarray}
\hat{d}_{j}=n_{j\uparrow}n_{j\downarrow}, \hat{h}_{j}=(1-n_{j\uparrow})(1-n_{j\downarrow}),\nonumber \\
\hat{s}_{j}=1-(\hat{d}_{j}+\hat{h}_{j})=n_{j\uparrow}+n_{j\downarrow}-2\hat{d}_{j}.
\label{eq:de}
\end{eqnarray}
In our results, we have checked that the three sites inside the cluster are always equivalent, and therefore no generality is lost by choosing site 1
for one of the operators $\hat{o}$.
For the other operator $\hat{o}'$, only whether the site $j$ equals 1 matters.
The $j$$=$$1$ case corresponds to on-site correlations and the $j$$=$$2$ case to nearest-neighbor correlations, while the choice $j$$=$$3$ is completely equivalent to $j$$=$$2$.
Note also that the dynamical correlations $S^{j}_{\rm oo'}(t)$ are in general complex valued.
Therefore, for their interpretation, it is more convenient to see,
\begin{eqnarray}
\Gamma^{j}_{\rm oo'}(t)&=&|S^{j}_{\rm oo'}(t)|-|\langle \hat{o}_{1}\rangle\langle \hat{o}'_{j}\rangle|,
\label{eq:dsf2}
\end{eqnarray}
rather than the more conventional $|S^{j}_{\rm oo'}(t)-\langle \hat{o}_{1}\rangle\langle \hat{o}'_{j}\rangle|$, since it is real and its sign is meaningful:
$\Gamma^{j}_{\rm oo'}(t)$$>$$0$ means an attractive correlation, and $\Gamma^{j}_{\rm oo'}(t)$$<$$0$ means a repulsive correlation.
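To illustrate how these quantities are estimated, the following Python fragment (a minimal sketch with equal-weight snapshots of occupation numbers; the actual CTQMC measurement works with weighted imaginary-time configurations) evaluates the local densities of eq.~(\ref{eq:de}) and the equal-time value $\Gamma^{j}_{\rm oo'}(0)$:
\begin{verbatim}
import numpy as np

def local_densities(n_up, n_dn):
    # n_up, n_dn: (n_samples, n_sites) arrays of 0/1 occupation numbers
    d = n_up * n_dn                     # doublon density
    h = (1 - n_up) * (1 - n_dn)         # holon density
    s = n_up + n_dn - 2 * d             # singly occupied density
    return d, h, s

def gamma0(o_site1, o_sitej):
    # Equal-time Gamma^j_{oo'}(0): positive means attraction,
    # negative means repulsion, following the sign convention above.
    return (abs(np.mean(o_site1 * o_sitej))
            - abs(np.mean(o_site1) * np.mean(o_sitej)))
\end{verbatim}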
The following is a brief sketch of the numerical procedure.
In the CDMFT, we obtain the corresponding correlation function for Matsubara frequency $S^{j}_{\rm oo'}(i\omega_n)$
by averaging over 512 imaginary-time MC samples.
We then perform analytic continuation $i\omega_{n}$$\rightarrow$$\omega$$+$$i 0$ based on the maximum entropy algorithm (MEM)\cite{MEM}
to calculate real frequency quantity $S^{j}_{\rm oo'}(\omega)$.
We finally perform Fourier transformation to obtain $S^{j}_{\rm oo'}(t)$.
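For the last step, a direct quadrature of the Fourier integral on the MEM frequency grid suffices; a minimal Python sketch (ours; the uniform grid and the sign convention of the transform are assumptions) is:
\begin{verbatim}
import numpy as np

def to_real_time(omega, S_omega, times):
    # S(t) = integral d(omega) exp(-i omega t) S(omega), evaluated as a
    # Riemann sum on the uniform real-frequency grid produced by the MEM.
    dw = omega[1] - omega[0]
    phase = np.exp(-1j * np.outer(times, omega))  # shape (n_t, n_omega)
    return (phase * S_omega).sum(axis=1) * dw
\end{verbatim}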
In what follows, we normalize $\omega$, $t$, $U$, and $T$ by the energy unit $v$.
\begin{figure}
\centering
\vspace{-3.5cm}
\centerline{\includegraphics[height=5.25in]{Fig.1.eps}}
\vspace{-3cm}
\caption{(Color online)
$U$-dependence of nearest-neighbor equal-time correlations defined by eq.~(\ref{eq:Poo}) at $T$=$0.08$.
(Inset) $U$-dependence of doublon density at $T$=$0.08$.
}
\label{fig:3}
\end{figure}
\begin{figure}
\centering
\vspace{0cm}
\vspace{-3.5cm}
\centerline{\includegraphics[height=5in]{Fig2-1.eps}}
\vspace{-7cm}
\centerline{\includegraphics[height=5in]{Fig2-2.eps}}
\vspace{-6.75cm}
\centerline{\includegraphics[height=5in]{Fig2-3.eps}}
\vspace{-3.25cm}
\caption{(Color online)
Nearest-neighbor dynamical correlations at $T$=$0.08$ between (a) doublon and holon,
(b) two doublons, and (c) two holons.
}
\label{fig:4}
\end{figure}
We start with confirming the finite-temperature Mott transition in the parameter space of $U$ and $T$.
We investigate the $U$-dependence of the doublon density, $d\equiv\frac{1}{N_{\rm c}}\sum_{i}\langle \hat{d}_{i}\rangle$, for various $T$'s.
Here, $N_{\rm c}=3$ is the cluster size.
Note that the half-filling condition determines the densities of holon and singly occupied sites as $h$$=$$d$, $s$$=$$1$$-$$2d$.
The inset of Fig.~\ref{fig:3} presents $d(U)$ at $T$=$0.08$.
$d(U)$ shows a jump and hysteresis, which is a characteristic of the first-order Mott transition.
At the higher temperature $T$=$0.15$, $d(U)$ shows no hysteresis and only a smooth change from insulator to metal.
Analyzing the singularity in $d(U)$ at various $T$'s, we determine the $U$$-$$T$ phase diagram,
which is consistent with the previous study.~\cite{phase}
A line of the first-order Mott transition terminates at the critical end point, $U^* $$\sim$$ 9.4$ and $T^*$$ \sim$$ 0.10$.
In the following, we will investigate dynamics with decreasing $U$ at the fixed temperature $T$=$0.08$ $<$ $T^*$,
where the first-order Mott transition occurs at $U_{\rm c}$$=$$9.4$.
We first examine the correlations between nearest-neighbor sites, $\Gamma^{2}_{\rm oo'}(t)$.
One important characteristic is the equal-time value at $t$$=$$0$.
We define the normalized equal-time correlation by,
\begin{eqnarray}
P^{\rm oo'}=\frac{\langle \hat{o}_{1}\hat{o}'_{2}\rangle}{\langle \hat{o}_{1}\rangle\langle \hat{o}'_{2}\rangle},
\label{eq:Poo}
\end{eqnarray}
where $\hat{o}$ and $\hat{o'}$ are $\hat{d}$, $\hat{h}$, or $\hat{s}$.
$P^{\rm oo'}$$>$$1$ means an attractive equal-time correlation between nearest-neighbor sites,
while $P^{\rm oo'}$$<$$1$ means repulsive equal-time correlation.
The main panel of Fig.~\ref{fig:3} shows their $U$-dependence for typical combinations of configurations.
The most important feature is the large value of $P^{\rm dh}$, which manifests a strong nearest-neighbor attraction between doublon and holon.
This attraction is noticeably enhanced in the insulating phase.
The opposite behavior is found for the correlations between two doublons, between two holons, and between a doublon and a singly occupied site:
they are repulsive to each other.
These repulsions are also enhanced in the insulating phase, but the enhancements are not as large
as that of the doublon-holon attraction.
The difference between $P^{\rm hh}$ and $P^{\rm dd}$ is due to the particle-hole asymmetry of the model, since the lattice is not bipartite.
It is natural that $P^{\rm dd}$$>$$P^{\rm hh}$, because the low-energy density of states is larger for positive energy; i.e., adding an electron is easier.
Dynamical correlations also exhibit clear differences between the metallic and insulating phases.
Figures~\ref{fig:4} (a), (b), and (c) show the dynamics of doublon-holon pair, doublon pair, and holon pair, respectively,
in the metallic ($U$=$8$) and insulating ($U$=$10.5$) phases at $|U-U_c|$$\sim$$1$.
The characteristic time scale of the short-time part is about $t$$\sim$$0.3$ in all cases, being somewhat longer for the doublon-holon pair.
The short-$t$ behavior of the doublon-holon pair is opposite to that of the other two cases.
The doublon-holon pair decays into other configurations as $t$ increases, and the corresponding pair lifetime becomes longer in the insulating phase.
In contrast, the initially suppressed doublon and holon pairs start to be dynamically formed with retardation.
The formation of doublon pairs occurs faster in the metallic phase than in the insulating phase,
while the formation of holon pairs shows the opposite behavior.
Another important difference is the long-$t$ behavior.
In the insulating phase, the correlations for $t$$>$$2$ are very small, almost vanishing.
The behavior in the metallic phase is in sharp contrast: the correlations persist up to long times $t$$\sim$$50$$-$$60$.
Moreover, we find that the $t$-dependence at the smaller $U$$=$$8$ shows many structures.
These features may be due to the larger charge fluctuations in the metallic phase.
To examine these structures, we have also investigated the complex $S^{2}_{\rm oo'}(t)$ and have found that the structure at $t$$\sim$$0.3$
corresponds to the point where its phase is $\pi$ or $0$.
To see more details, we have checked the correlations in the frequency domain ${S}^{2}_{\rm oo'}(\omega)$
as we will analyze later for the on-site doublon dynamics.
There are some low-$\omega$ peaks, in particular in the metallic phase, in addition to a broad peak around $\omega$=$U$.
The broad peak around $\omega$=$U$ corresponds to the incoherent dynamics due to excitations to the Hubbard band, and
the short-$t$ dynamics is dominated by the contribution of this broad peak.
On the other hand, we have found that the low-$\omega$ peaks dominate the long-$t$ dynamics and are absent in the insulating phase.
\begin{figure}
\centering
\vspace{-3cm}
\centerline{ \includegraphics[height=4.5in]{Fig3-1.eps}}
\vspace{-5.75cm}
\centerline{ \includegraphics[height=4.5in]{Fig3-2.eps}}
\vspace{-3.25cm}
\centerline{\includegraphics[height=4.5in]{Fig3-3.eps} }
\vspace{-1.75cm}
\caption{(Color online) On-site dynamical correlations of the doublon for two values of $U$. (a) $\Gamma^{1}_{\rm dd}(t)$.
(b) $S^{1}_{\rm dd}(\omega)$.
Inset: diamonds show the calculated data as a function of $t$, and lines are the contributions of the three fitted peaks around $\omega$=$0$ (``low"),
$\omega$$\sim$$1$ (``mid"), and $\omega$$\sim$$U$ (``high") at $U$=$8$.
(c) $U$-dependence of the intensities of the three parts of $S^{1}_{\rm dd}(\omega)$ normalized by the total intensity.
}
\label{fig:1}
\end{figure}
Next, we examine the variation of the on-site doublon dynamics.
As shown in Fig.~\ref{fig:1} (a), $\Gamma^{1}_{\rm dd}(t)$ decreases with time $t$ for both of $U$=$8$ and $10.5$.
However, this decrease is not monotonic, in particular at the smaller $U$=$8$, which indicates larger doublon fluctuations.
The relaxation time towards $\Gamma^{1}_{\rm dd}(t)$$=$$0$ is related to the doublon lifetime,
and we expect a longer lifetime in the metallic phase than in the insulating phase.
Indeed, the present result shows that the long-$t$ relaxation is slower at $U$=$8$ than at $U$=$10.5$, as expected.
An interesting characteristic is the two structures at $t$$\sim$$0.6$ and $2$, in particular in the metallic phase.
We investigate the complex $S^{1}_{\rm dd}(t)$, and find that the structure at $t$$\sim$$0.6$ corresponds to the point
where the phase is close to $\pi/2$, whereas that at $t$$\sim$$2$ corresponds to the point with phase $\pi$.
We also calculate the correlation in the frequency domain, ${S}^{1}_{\rm dd}(\omega)$, as shown in the main panel of Fig. \ref{fig:1} (b).
An important characteristic at $U$=$8$ is the three peaks in ${S}^{1}_{\rm dd}(\omega)$:
one is the large broad peak around $\omega$=$U$ (``high"), another is the very sharp peak around $\omega$=$0$ (``low"),
and the last is the small peak at $\omega$$\sim$$1$ (``mid").
The broad peak around $\omega$=$U$ corresponds to the incoherent dynamics due to the excitations to the Hubbard band,
and this persists from the insulating to metallic phase.
The sharp peak around $\omega$=$0$ is due to the dynamics of coherent quasiparticles.
This one and the small peak at $\omega$$\sim$$1$ disappear in the insulating phase, as shown in the result at $U$=$10.5$.
We estimate the intensity of the three peaks $I_{\rm low,mid,high}$ and examine their changes with $U$.
The high-$\omega$ broad and the $\omega$$\sim$$0$ sharp peaks are fitted by Gaussian functions to define $S^{1,\rm high(low)}_{\rm dd}(\omega)$
and the remaining part is denoted as $S^{1,\rm mid}_{\rm dd}(\omega)$.
$I_{\rm low,mid,high}$ are the intensities of these three parts.
We calculate their ratio to the total intensity $I_{\rm tot}$, and the results are shown in Fig.~\ref{fig:1}~(c).
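A sketch of this decomposition (our illustration using scipy's \texttt{curve\_fit}; the initial peak-position guesses are ad hoc assumptions) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(w, a, w0, s):
    return a * np.exp(-0.5 * ((w - w0) / s) ** 2)

def intensity_ratios(w, S, p0_low=(1.0, 0.0, 0.2), p0_high=(1.0, 8.0, 2.0)):
    # Fit the sharp omega ~ 0 and broad omega ~ U peaks with Gaussians;
    # the residual defines the "mid" part (initial guesses are ad hoc).
    two = lambda w, a1, m1, s1, a2, m2, s2: (gauss(w, a1, m1, s1)
                                             + gauss(w, a2, m2, s2))
    p, _ = curve_fit(two, w, S, p0=p0_low + p0_high)
    S_low, S_high = gauss(w, *p[:3]), gauss(w, *p[3:])
    S_mid = np.clip(S - S_low - S_high, 0.0, None)
    dw = w[1] - w[0]
    I = np.array([x.sum() * dw for x in (S_low, S_mid, S_high)])
    return I / (S.sum() * dw)     # (I_low, I_mid, I_high) / I_tot
\end{verbatim}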
In the metallic phase, $I_{\rm low}/I_{\rm tot}$ and $I_{\rm mid}/I_{\rm tot}$ decrease with increasing $U$,
while $I_{\rm high}/I_{\rm tot}$ increases.
At the Mott transition point, all the three intensities show a jump.
For larger $U$ in the insulating phase, only $I_{\rm high}/I_{\rm tot}$ takes a noticeable value.
We also calculate the real-time correlations $S^{1,\rm low,mid,high}_{\rm dd}(t)$ by the Fourier transformation
and the results are shown in the inset of Fig.~\ref{fig:1} (b).
One can see that the high-$\omega$ contribution is dominant at short $t$.
On the other hand, the long-$t$ dynamics is dominated by the low-$\omega$ part alone.
We find that the change from the short-$t$ to the long-$t$ behavior occurs around $t$$\sim$$2$.
In this letter, we have studied the dynamics of doublons in the triangular-lattice Hubbard model at half filling
by using the cellular dynamical mean field theory.
We have demonstrated that a nearest-neighbor doublon-holon pair exhibits a strong attraction in the insulating phase.
The nearest-neighbor pairs of doublons, of holons, and of a doublon and a singly occupied site show repulsive correlations,
but these are not enhanced as strongly as the doublon-holon attraction.
Calculating the on-site doublon dynamics, we have found quantitatively that doublons in the metallic phase have a longer lifetime than in the insulating phase.
Our results for the dynamical correlations show clear differences between the metallic and insulating phases
and demonstrate the complex dynamics of doublons in the metallic phase, which is associated with several excitation processes.
The authors are grateful to Kazumasa Hattori for helpful discussions.
The present work is supported by MEXT Grant-in-Aid for Scientific Research No.25400359,
and by Next Generation Supercomputing Project, Nanoscience Program, MEXT, Japan.
Numerical computation was performed with facilities at Supercomputer Center at ISSP and Information Technology Center,
University of Tokyo.
\section{Appendix}
\label{sec:appendix}
In this appendix, we prove \fref{lem:approx} and \fref{thm:knn}, which characterize the performance of NIBH in terms of $k$-nearest neighbor preservation. We also include additional numerical simulation results and a discussion of the empirical convergence of the NIBH algorithm.
\section*{Proof of \fref{lem:approx}}
\begin{lem} \label{lem:approx}
Let $x$ be a Gaussian random variable as $x \sim \mathcal{N}(\mu, \sigma^2)$. Define the distortion of the sigmoid approximation at $x$ as $\abs{h(x)-\sigma_\alpha(x)}$. Then, the expected distortion is bounded as
\begin{align*}
\mathbb{E}_x[ \abs{h(x)-\sigma_\alpha(x)} ] \leq \frac{1}{\sigma\sqrt{2\pi\alpha}} + 2 e^{-(\sqrt{\alpha} + c /\alpha \sigma^2)},
\end{align*}
where $c$ is a positive constant. As $\alpha$ goes to infinity, the expected distortion goes to $0$.
\end{lem}
\sloppy
\begin{proof}
It is easy to see that the largest distortion $\abs{h(x)-\sigma_\alpha(x)}$ occurs at $x = 0$. Therefore, among different values of $\mu$, $\mu = 0$ gives the largest expected distortion, since the density of $x$ peaks at $x = 0$. We thus bound the distortion by setting $\mu = 0$, which yields an upper bound on the distortion for $\mu \neq 0$.
By definition (1) in the main text, $h(x)$ can be written as
\begin{align*}
h(x) = \left \{ \begin{array}{ll}
1 &\text{if} \,\, x \geq 0 \\
0 & \text{otherwise}.
\end{array} \right.
\end{align*}
When $x \sim \mathcal{N}(0,\sigma^2)$, we have
\begin{align}
\notag & \mathbb{E}_x[ \abs{h(x)-\sigma_\alpha(x)} ] = \int_{-\infty}^\infty \abs{h(x)-\sigma_\alpha(x)} \mathcal{N}(x;0,\sigma^2) \mathrm{d}x \\
\label{eq:sym} & = 2 \int_0^\infty (h(x)-\sigma_\alpha(x)) \mathcal{N}(x;0,\sigma^2) \mathrm{d}x \\
\notag & = 2 \int_0^{x_0} (h(x)-\sigma_\alpha(x)) \mathcal{N}(x;0,\sigma^2) \mathrm{d}x \\
\notag & \quad + 2 \int_{x_0}^{\infty} (h(x)-\sigma_\alpha(x)) \mathcal{N}(x;0,\sigma^2) \mathrm{d}x \\
\label{eq:leq} & \leq \! 2 \! \int_0^{x_0} \frac{1}{2} \mathcal{N}(x;0,\sigma^2) \mathrm{d}x \! + \! 2 \! \int_{x_0}^{\infty} \frac{1}{1+e^{\alpha x_0}} \mathcal{N}(x;0,\sigma^2) \mathrm{d}x \\
\label{eq:apr} & \leq \frac{x_0}{\sqrt{2\pi} \sigma} + 2 \frac{e^{-c x_0^2/\sigma^2}}{1+e^{\alpha x_0}} \\
\label{eq:den} & \leq \frac{1}{\sigma \sqrt{2\pi\alpha}} + 2 e^{-(\sqrt{\alpha} + c /\alpha\sigma^2)} ,
\end{align}
where we set $x_0 = \frac{1}{\sqrt{\alpha}}$ and $c$ is a positive constant. In \fref{eq:sym}, we used the fact that $\sigma_\alpha(x)$ and $h(x)$ are symmetric with respect to the point $(0,\frac{1}{2})$. \fref{eq:leq} is given by the properties of the sigmoid function, \fref{eq:apr} is given by the Gaussian concentration inequality \cite{talagrand}, and \fref{eq:den} is given by the inequality $1/(1+e^{\alpha x_0}) \leq e^{-\alpha x_0}$. The fact that $\mathbb{E}_x[ \abs{h(x)-\sigma_\alpha(x)} ] \rightarrow 0$ as $\alpha \rightarrow \infty$ is obvious from the bound above.
\end{proof}
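\paragraph{Numerical check}
As a quick sanity check of the lemma (our addition, separate from the proof), the expected distortion can be estimated by Monte Carlo and compared against the leading $\frac{1}{\sigma\sqrt{2\pi\alpha}}$ term of the bound:
\begin{verbatim}
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)
sigma = 1.0
x = rng.normal(0.0, sigma, size=1_000_000)
h = (x >= 0).astype(float)          # the step nonlinearity h(x)
for alpha in (1.0, 10.0, 100.0):
    sig = expit(alpha * x)          # sigmoid relaxation sigma_alpha(x)
    empirical = np.abs(h - sig).mean()
    leading = 1.0 / (sigma * np.sqrt(2.0 * np.pi * alpha))
    print(alpha, empirical, leading)  # empirical stays below the bound
\end{verbatim}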
\section*{Proof of \fref{thm:knn}}
\begin{thm} \label{thm:knn}
Assume that all the data points are independently generated from a mixture of Gaussian distributions, i.e., $\vecx_i \sim \sum_{p=1}^P \pi_p \mathcal{N}(\mu_p,\Sigma_p)$.
Let $\vecx_0 \in \mathbb{R}^N$ denote a query data point in the ambient space, and the other data points $\vecx_i$ be ordered so that $d(\vecx_0,\vecx_1) < d(\vecx_0,\vecx_2) < \ldots < d(\vecx_0,\vecx_Q)$. Let $\delta$ denote the final value of the distortion parameter computed from any binary hashing algorithm, and let $c$ denote a positive constant.
Then, if $\mathbb{E}_x[\Delta_k] \geq 2 \delta + \sqrt{\frac{1}{c}\log \frac{Qk}{\epsilon}}$, the binary hashing algorithm preserves the $k$-nearest neighbors of a point with probability at least $1-\epsilon$.
\end{thm}
In order to prove this theorem, we need the following Lemma:
\begin{lem} \label{lem:sg}
Let $\vecx_0, \ldots, \vecx_N$ and $\Delta_k$ be defined as in \fref{thm:knn}. Then, there exists a constant $c$ such that $P(\Delta_k - \mathbb{E}_x[\Delta_k] < -t) \leq e^{-ct^2}$ for $t > 0$.
\end{lem}
\begin{proof}
Since the data points $\vecx_0$, $\vecx_k$ and $\vecx_{k+1}$ are independently generated from a finite mixture of Gaussian distributions, the random variable of their concatenation $\vecy = [\vecx_0^T, \vecx_k^T, \vecx_{k+1}^T]^T \in \mathbb{R}^{3N}$ is sub-Gaussian \cite{wainwrightpaper}. Then, we have
\begin{align*}
\Delta_k(\vecy) & = \|\vecx_0 - \vecx_{k+1}\|_2 - \|\vecx_0 - \vecx_k\|_2 \\
& = \| \Big( \begin{array}{ccc} \bI & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -\bI \end{array}\Big) \vecy \|_2 - \| \Big( \begin{array}{ccc} \bI & 0 & 0 \\ 0 & -\bI & 0 \\ 0 & 0 & 0 \end{array}\Big) \vecy \|_2 \\
& \leq \| \Big( \begin{array}{ccc} 2\bI & 0 & 0 \\ 0 & -\bI & 0 \\ 0 & 0 & -\bI \end{array}\Big) \vecy \|_2 \\
& \leq 2 \| \vecy \|_2,
\end{align*}
where the second-to-last step uses the triangle inequality, and the last step uses the Rayleigh-Ritz theorem \cite{hornjohnson} together with the fact that the maximum singular value of the matrix is $2$.
This result means that $\Delta_k(\vecy)$ is a Lipschitz function of $\vecy$.
Thus, by Talagrand's inequality \cite{talagrand}, we have that $P(\Delta_k - \mathbb{E}_x[\Delta_k] < -t) \leq e^{-ct^2}$ for some positive constant $c$ and $t > 0$, since $\vecy$ is sub-Gaussian.
\end{proof}
Now we are ready to prove \fref{thm:knn}.
\begin{proof}
Let $E$ denote the event that the set of top-$k$ nearest neighbors is not preserved in the Hamming space. Then, we have $E = \cup e_{m,n}$, where $e_{m,n}$ denotes the event that $d_H(\vecx_0,\vecx_m) > d_H(\vecx_0,\vecx_n)$ with $m \in \{1,\ldots,k\}$ and $n \in \{k+1,\ldots,Q\}$.
Then, using the union bound \cite{infotheory}, we have
\begin{align*}
P(E) & \leq \sum_{m,n} P(e_{m,n}) \leq k(Q-k) P(e_{k,k+1}) \\
& \quad = k(Q-k) P(d_H(\vecx_0,\vecx_k) > d_H(\vecx_0,\vecx_{k+1})) \\
& \quad = k(Q-k) P(d_H(\vecx_0,\vecx_{k+1}) < d_H(\vecx_0,\vecx_k)),
\end{align*}
where we have used the fact that the most probable event among all $e_{m,n}$ is the one corresponding to the order mismatch between the $k^\text{th}$ and $(k+1)^\text{th}$ nearest neighbors.
Now, note that the NIBH output $\delta$ satisfies $\max_{i,j} |d_H(\vecx_i,\vecx_j) - d(\vecx_i,\vecx_j)| \leq \delta$\footnote{Here we assume $\lambda = 1$ without loss of generality.}.
Observe that $\Delta_k = d(\vecx_0, \vecx_{k+1}) - d(\vecx_0, \vecx_k) \geq 2\delta$ is a sufficient condition for $d_H(\vecx_0,\vecx_{k+1}) \geq d_H(\vecx_0,\vecx_k)$, since
\begin{align*}
& d_H(\vecx_0,\vecx_{k+1}) - d_H(\vecx_0,\vecx_k) \\
& \quad \geq d(\vecx_0, \vecx_{k+1}) - \delta - d(\vecx_0, \vecx_k) - \delta \\
& \quad \geq 2\delta - 2\delta = 0,
\end{align*}
by the distortion bound stated above. This leads to
\begin{align*}
& P(d_H(\vecx_0,\vecx_{k+1}) < d_H(\vecx_0,\vecx_k)) \\
& \quad = 1 - P(d_H(\vecx_0,\vecx_{k+1}) \geq d_H(\vecx_0,\vecx_k)) \\
& \quad \leq 1 - P(\Delta_k \geq 2\delta) = P(\Delta_k < 2\delta).
\end{align*}
Therefore, combining all the above and \fref{lem:sg}, the probability that the $k$-nearest neighbor is not preserved is bounded by
\begin{align*}
P(E) & \leq k(Q-k) P(d_H(\vecx_0,\vecx_{k+1}) < d_H(\vecx_0,\vecx_k)) \\ & \leq k(Q-k)P(\Delta_k < 2\delta) \\
& = k(Q-k) P(\Delta_k - \mathbb{E}_x[\Delta_k] < -(\mathbb{E}_x[\Delta_k] - 2\delta) ) \\
& \leq k(Q-k) e^{-c(\mathbb{E}_x[\Delta_k] - 2\delta)^2} \\
& \leq kQ e^{-c(\mathbb{E}_x[\Delta_k] - 2\delta)^2}.
\end{align*}
Now, setting $kQ e^{-c(\mathbb{E}_x[\Delta_k] - 2\delta)^2} \leq \epsilon$, we obtain that the requirement for the $k$-nearest neighbors to be exactly preserved with probability at least $1-\epsilon$ is
\begin{align*}
\mathbb{E}_x[\Delta_k] \geq 2 \delta + \sqrt{\frac{1}{c}\log \frac{Qk}{\epsilon}}.
\end{align*}
\end{proof}
\paragraph{Remark}
Note that our bound on the number $k$ of nearest neighbors preserved depends on the final outcome of the NIBH algorithm through the value of $\delta$.
In order to relate our result to the number of binary hash functions $M$ required, we can make use of \citep[Thm.~1.10]{yanivplan}.
For a bounded set $K \subset \mathbb{R}^N$ with diameter $1$, let $M \geq C \delta_0^{-6} w(K)^2$, where $w(K) = \mathbb{E}_x[\sup_{\vecx \in K}\langle g,\vecx \rangle]$ denotes the Gaussian width of $K$ and $C$ is a constant.
Then, \citep[Thm.~1.10]{yanivplan} states that, with high probability, $h(x)$ as defined in (1) with a random matrix $\bW$ whose entries are independently generated from $\mathcal{N}(0,1)$ is a $\delta_0$-isometric embedding.
Therefore, if we initialize NIBH with such a random $\bW$, which is likely to be $\delta_0$-isometric, then empirically (see Figure~\ref{fig:convergance}) the NIBH algorithm will learn a better embedding that is $\delta$-isometric with $\delta < \delta_0$. Hence, the number of hash functions $M$ required for $k$-nearest neighbor preservation is at least $M \sim (\mathbb{E}_x[\Delta_k] - \sqrt{\log(kQ/\epsilon)})^{-6}w(\bX)^2$, assuming that the training dataset $\bX$ is properly normalized.
\section*{Empirical convergence of the NIBH algorithm}
Figure~\ref{fig:convergance} shows the empirical loss and the actual distortion parameter $\delta$ as a function of the iteration count $\ell$ in the NIBH algorithm as applied on $4950$ secants (i.e., $Q=$ 100) from the $\it{MNIST}$ dataset.
The behavior of the empirical loss function closely matches that of $\delta$ as they gradually converge.
The curve empirically confirms that minimizing the loss function in each iteration of NIBH (using the ADMM framework) directly penalizes the non-convex loss function (distortion parameter $\delta$).
After initializing the NIBH algorithm with random entries for $\bW$, the value of the distortion parameter $\delta$ drops significantly in the first few iterations, empirically confirming that NIBH
learns an embedding with a significantly lower distortion parameter after a few iterations.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{convergence1.pdf}\label{fig:convergence}
\caption{Empirical convergence behavior of the NIBH algorithm. Both the maximum distortion parameter $\delta$ and the loss function $||\lambda \vecv - \vecc||_{\infty}$ that approximates $\delta$ gradually decrease and converge as the number of iterations increases. We see that the loss function of NIBH closely matches the behavior of the actual distortion parameter in each iteration.}%
\label{fig:convergance}
\end{figure}
\section{Introduction} \label{sec:intro}
\sloppy
Hashing, one of the primitive operations in large-scale systems, seeks a low-dimensional binary embedding of a high-dimensional data set.
Such a binary embedding can increase the computational efficiency of a variety of tasks, including searching, learning, near-neighbor retrieval, etc.
One of the fundamental challenges in machine learning is the development of efficient hashing algorithms that embed data points into compact binary codes while preserving the geometry of the original dataset.
In this paper, we are interested in learning \emph{near-isometric} binary embeddings, i.e., hash functions that preserve the distances between data points up to a given distortion in the Hamming space.
More rigorously, let $\mathcal{Z}$ and $\mathcal{Y}$ denote metric spaces with metrics $d_\mathcal{Z}$ and $d_\mathcal{Y}$, respectively.
An embedding $f: \mathcal{Z} \rightarrow \mathcal{Y}$ is called near-isometric~\citep[Def.~1.1]{yanivplan} if, for \emph{every} pair of data points $\vecz_i, \vecz_j \in \mathcal{Z}$, we have
\begin{align*}
d_\mathcal{Z}(\vecz_i,\vecz_j) - \gamma \! \leq \! d_\mathcal{Y}(f(\vecz_i),f(\vecz_j)) \! \leq \! d_\mathcal{Z}(\vecz_i,\vecz_j) + \gamma,
\end{align*}
where $\gamma$ is called the isometry constant.
In words, $f$ is near-isometric if and only if the entries of the \emph{pairwise-distortion vector} containing the distance distortion between every pair of data points
$| d_\mathcal{Z}(\vecz_i,\vecz_j)-d_\mathcal{Y}(f(\vecz_i),f(\vecz_j))|, \forall i>j$ do not exceed the isometry constant $\gamma$.
A near-isometric embedding is approximately \emph{distance-preserving} in that the distance between any pairs of data points in the embedded space $\mathcal{Y}$ is approximately equal to their distance in the ambient space $\mathcal{Z}$~\cite{weinberger2006unsupervised,shaw2007minimum,numax}.
The simplest and most popular binary hashing scheme, {\em random projection}, simply projects the data into a lower-dimensional (lower-bit) random subspace and then quantizes to binary values.
Random projections are known to be near-isometric with high probability, due to the celebrated Johnson-Lindenstrauss (JL) lemma~\cite{lsh,andoni2014beyond,yanivplan}.
Algorithms based on the JL lemma belong to the family of probabilistic dimensionality reduction techniques; a notable example is locality sensitive hashing (LSH) \cite{lsh,andoni2014beyond}.
Unfortunately, theoretical results on LSH state that the number of bits required to guarantee an isometric embedding can be as large as the number of data points~\cite{lsh,yanivplan}.
Even in practice, LSH's requirement on the number of bits is impractically high for indexing many real-world, large-scale datasets \cite{lv2007multi}.
Consequently, several \emph{data-dependent} binary hashing algorithms have been developed that leverage the structure of the data to learn compact binary codes.
These methods enable a significant reduction in the number of bits required to index large-scale datasets compared to LSH; see~\cite{wang2014hashing} for a survey.
However, learning compact binary codes that preserve the local geometry of the data remains challenging.
These data-dependent hashing algorithms focus on the choice of the distortion measure.
Typically, after finding the appropriate distortion measure, the hash functions are learned by minimizing the \emph{average} distortion, i.e., the $\ell_2$-norm of the pairwise-distortion vector, which sums the distortion among all pairwise distances with equal weights.
\emph{Binary reconstructive embedding} (BRE) \cite{kulis2009learning}, for example, uses an optimization algorithm to directly minimize the average distortion in the embedded space.
\emph{Spectral Hashing} (SH) \cite{weiss2009spectral}, \emph{Anchor Graph Hashing} (AGH) \cite{liu2011hashing}, \emph{Multidimensional Spectral Hashing} (MDSH) \cite{weiss2012multidimensional}, and \emph{Scalable Graph Hashing} (SGH) \cite{jiang2015scalable} define notions of \emph{similarity} based on a function of $\ell_2$-distance between the data points ${\| \vecz_i - \vecz_j \|}_2$ and use spectral methods to learn hash functions that keep similar points sufficiently close.
Some other hashing algorithms first project the data onto its principal components, e.g., \emph{PCA Hashing} (PCAH) \cite{jolliffe2002principal}, which embeds with minimal average distortion, and then learn a rotation matrix to minimize the quantization error \cite{gong2011iterative} or balance the variance across the components (\emph{Isotropic Hashing} (IsoHash) \cite{kong2012isotropic}).
While minimizing the average distortion seems natural, this approach can sacrifice the preservation of certain pairwise distances in favor of others.
As we demonstrate below, this can lead to poor performance in certain applications, such as the preservation of nearest neighbors.
In response, in this paper, we develop a new data-driven hashing algorithm that minimizes the \emph{worst-case} distortion among the pairwise distances, i.e., the $\ell_{\infty}$-norm of the pairwise-distortion vector.
Figure~\ref{fig:LinfL2} illustrates the advantages of minimizing the $\ell_{\infty}$-norm of the pairwise-distortion vector instead of its $\ell_2$-norm.
Consider three clusters of points in a two-dimensional space.
We compute the optimal one-dimensional embeddings of the data points by minimizing the $\ell_\infty$-norm and the $\ell_2$-norm of the pairwise-distortion vector using a grid search over the angular orientation of the line that represents the embedding.
We evaluate the near-neighbor preservation of a given query point in the embedded space (shown in \fref{fig:LinfL2}~(b)).
For a query point $q$, located without loss of generality at the origin, the nearest neighbor ranking from the ambient space is destroyed by the $\ell_2$-optimal embedding, since the circle and square clusters overlap.
In contrast, the $\ell_{\infty}$-optimal embedding exactly preserves the rankings.
This illustration emphasizes the importance of preserving \emph{relevant} distances in data retrieval tasks. To minimize the average distortion, the $\ell_2$-optimal embedding (dashed red line) sacrifices the square--circle distances in favor of the square--star and circle--star distances, which contribute more to the $\ell_2$-norm of the pairwise distance distortion vector.
In contrast, the $\ell_{\infty}$-optimal embedding focuses on the hardest distances to preserve (i.e., the worst-case distortion), leading to an embedding with smaller isometry constant than the $\ell_2$-optimal embedding.
Preservation of these distances is critical for near-neighbor retrieval.
\begin{figure}[t]
\vspace{-0.1cm}
\centering
\includegraphics[width=0.45\textwidth]{fig1.pdf}\label{fig:LinfL2}
\vspace{-.1cm}
\caption{Comparison of the near-neighbor (NN) preservation performance of hashing based on minimizing the $\ell_{\infty}$-norm (worst-case distortion) vs.\ the $\ell_2$-norm (average distortion) of the pairwise distance distortion vector on an illustrative data set. (a)~For a dataset with three clusters (5 circles, 5 squares, and 60 stars), we found the optimal embeddings for both error metrics using grid search. (b)~The projection of the data points using the $\ell_{\infty}$-optimal embedding preserves three well-separated clusters; however, the projection using the $\ell_2$-optimal embedding mixes circles and squares, projecting them into a single cluster. For the query point $\text{q}=(0,0)$, all of its nearest neighbors NN(q) are preserved with the correct ordering using the worst-case distortion embedding but not the average distortion embedding.
}
\label{fig:str}
\vspace{-.5cm}
\end{figure}
\vspace{-0.2cm}
\subsection{Contributions}
\label{sec:contributions}
We make four distinct contributions in this paper.
First, conceptually, we advocate minimizing the worst-case distortion, which is formulated as an $\ell_\infty$-norm minimization problem, and show that this approach outperforms approaches based on minimizing the average, $\ell_2$-norm distortion in a range of computer vision and learning scenarios \cite{linfvsl2}.
Second, algorithmically, since $\ell_\infty$-norm minimization problems are computationally challenging, especially for large datasets, we develop two accelerated and scalable algorithms to find the optimal worst-case embedding.
The first, \emph{near-isometric binary hashing} (NIBH), is based on the alternating direction method of multipliers (ADMM) framework \cite{boydadmm}.
The second, NIBH-CG, is based on an accelerated greedy extension of the NIBH algorithm using the concept of \emph{column generation} \cite{dantzig1960decomposition}.
NIBH-CG can rapidly learn hashing functions from large-scale data sets that require the preservation of \emph{billions} of pairwise distances (e.g., \emph{MNIST}).
Third, theoretically, since current data-dependent hashing algorithms do not offer any probabilistic guarantees in terms of preserving near-neighbors, we develop new theory to prove that, under natural assumptions regarding the data distribution and with a notion of hardness of near-neighbor search, NIBH preserves the nearest neighbors with high probability.
Our analysis approach could be of independent interest for obtaining theoretical guarantees for other data-dependent hashing schemes.
Fourth, experimentally, we demonstrate the superior performance of NIBH as compared to ten state-of-the-art binary hashing algorithms using an exhaustive set of experimental evaluations involving six diverse datasets and three different performance metrics (near-isometry, Hamming distance ranking, and Kendall $\tau$ ranking performance).
In particular, we show that NIBH achieves the same distance preservation and Hamming ranking performance as state-of-the-art algorithms {\em while using up to $60\%$ fewer bits.}
Our experiments clearly show the superiority of the $\ell_\infty$-norm formulation over the more classical $\ell_2$-norm formulation that underlies many hashing algorithms, such as BRE and IsoHash.
Our formulation also outperforms recently developed techniques that assume more structure in their hash functions, such as \emph{Spherical Hamming Distance Hashing} (SHD) \cite{heo2012spherical} and \emph{Circulant Binary Embedding} (CBE) \cite{yu2014circulant}.
\vspace{-0.2cm}
\fussy
\section{Near-Isometric Binary Hashing}
\label{sec:algos}
The standard formulation for data-dependent binary hashing embeds a data point $\vecx\in\mathbb{R}^N$ into the low-dimensional Hamming space $\mathcal{H}=\{0,1\}^M$ by first multiplying it by an {\em embedding matrix} $\bW\in\mathbb{R}^{M\times N}$ and then quantizing the entries of the product $\bW\vecx$ to binary values:
\begin{align} \label{eq:hashfn}
h(\bW \vecx) = \frac{1+\text{sgn}(\bW \vecx)}{2}.
\end{align}
The function $\text{sgn}(\cdot)$ operates element-wise on the entries of $\bW \vecx$, transforming the real-valued vector $\bW \vecx$ into a set of binary codes depending on the sign of the entries in $\bW \vecx$.
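In code, \fref{eq:hashfn} is a one-liner; the following sketch (ours) fixes the convention for $\text{sgn}(0)$, which the definition leaves open:
\begin{verbatim}
import numpy as np

def binary_hash(W, X):
    # W: (M, N) embedding matrix, X: (Q, N) data points (rows).
    # Returns (Q, M) codes in {0,1}; sgn(0) is mapped to 1 here.
    return (X @ W.T >= 0).astype(np.uint8)
\end{verbatim}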
\subsection{Problem formulation}
\label{sec:prob}
\sloppy
Consider the design of an embedding $f$ that maps $Q$ high-dimensional data vectors $\mathcal{X}=\{\vecx_1,\vecx_2, \dots, \allowbreak \vecx_Q\}$ in the ambient space $\mathbb{R}^N$ into low-dimensional binary codes $\mathcal{H} = \{\vech_1,\vech_2,\dots,\vech_Q\}$ in the Hamming space with $\vech_i \in \{0,1\}^M$, where $\vech_i = f(\vecx_i)$, $i=1,\dots,Q$, and $M \ll N$.
Define the distortion of the embedding by
\begin{align*}
\delta= & \underset{\lambda>0}{\text{inf}} \sup_{(i,j)\in \Omega} |\lambda {d}_{H}(\vech_i,\vech_j) - {d}(\vecx_i,\vecx_j)|, \\
& \text{with} \quad \Omega = \{(i,j):i,j \in\{1,2,\dots,Q\},i>j \},
\end{align*}
where $d(\vecx_i,\vecx_j)$ denotes the Euclidean distance between the data points $\vecx_i$, $\vecx_j$, ${d}_{H}(\vech_i,\vech_j)$ denotes the Hamming distance between the binary codes $\vech_i$ and $\vech_j$, and $\lambda$ is a positive scaling variable.
The distortion $\delta$ measures the worst-case deviation from perfect isometry (i.e., optimal distance preservation) among all pairs of data points.
Define the \emph{secant set} $\mathcal{S}(\mathcal{X})$ as $\mathcal{S}(\mathcal{X}) = \{ \vecx_i - \vecx_j : (i,j) \in \Omega \}$, i.e., the set of all pairwise difference vectors in $\mathcal{X}$.
Let $|\mathcal{S}(\mathcal{X})| = |\Omega| = {Q(Q-1)}/{2}$ denote the size of the secant set.
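Given binary codes and the original data, $\delta$ itself is straightforward to evaluate: for fixed codes the objective is convex and piecewise linear in $\lambda$, so a bounded scalar search suffices. A minimal Python sketch (ours; the bracket on $\lambda$ is a crude assumption that is adequate for normalized data):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.spatial.distance import pdist

def distortion(H, X):
    # H: (Q, M) binary codes; X: (Q, N) ambient data points (rows).
    v = pdist(H.astype(float), metric="hamming") * H.shape[1]  # Hamming
    c = pdist(X, metric="euclidean")
    f = lambda lam: np.max(np.abs(lam * v - c))  # convex, piecewise linear
    res = minimize_scalar(f, bounds=(0.0, c.max() + 1.0), method="bounded")
    return res.fun, res.x                        # (delta, best lambda)
\end{verbatim}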
Note that the common distortion measure utilized in other hashing algorithms is the average distortion, i.e., $\sum_{i>j} (\lambda {d}_{H}(\vech_i,\vech_j) - {d}(\vecx_i,\vecx_j))^2/|\Omega|$.
We formulate the problem of minimizing the distortion parameter $\delta$ as the following optimization problem:
\begin{align*}
\underset{\bW,\lambda > 0}{\text{minimize}} \underset{(i,j)\in \Omega}{\sup} \abs{ \lambda \normtwo{h(\bW \vecx_i) - h(\bW \vecx_j)}^2 - \normtwo{\vecx_i - \vecx_j} },
\end{align*}
since the squared $\elltwo$-distance between a pair of binary codes is equivalent to their Hamming distance up to a scaling factor that can be absorbed into $\lambda$.
We can rewrite the above optimization problem as
\begin{align*}
(\text{P}^*) \quad \underset{\bW,\lambda>0}{\text{minimize}}\,\,\, \| \lambda \vecv'(\bW) - \vecc \|_\infty,
\end{align*}
\sloppy
where $\vecv' \in \mathbb{N}^{{Q(Q-1)}/{2}}$ is a vector containing the pairwise Hamming distances between the embedded data vectors $\normtwo{h(\vecx_i) - h(\vecx_j)}^2$, and $\vecc$ is a vector containing the pairwise $\elltwo$-distances between the original data vectors.
Intuitively, the $\ell_{\infty}$-norm objective optimizes the \emph{worst-case} distortion among all pairs of data points.
The problem $(\text{P}^*)$ is a combinatorial problem with complexity $\mathcal{O} (Q^{2M})$.
To overcome the combinatorial nature of the problem, we approximate the hash function $h(\cdot)$ by the sigmoid function (also known as the inverse logit link function) $\sigma(x)=(1+e^{-x})^{-1}$.
This enables us to approximate ($\text{P}^*$) by the following optimization problem:
\begin{align*}
(\text{P}) \quad \underset{\bW,\lambda>0}{\text{minimize}}\,\,\, \| \lambda \vecv(\bW) - \vecc \|_\infty,
\end{align*}
where $\vecv \in \mathbb{R}_+^{{Q(Q-1)}/{2}}$ is a vector containing the pairwise $\ell_2$ distances between the embedded data vectors after sigmoid relaxation $\normtwo{(1+e^{-\bW \vecx_i})^{-1} - (1+e^{-\bW \vecx_j})^{-1}}^2$.
Here the sigmoid function operates element-wise on $\bW \vecx_i$.
In practice we use a more general definition of the sigmoid function, defined as $\sigma_\alpha(x)=(1+e^{-\alpha x})^{-1}$, where $\alpha$ is the \emph{rate} parameter controlling how closely it approximates the non-smooth function $h(\cdot)$. The following lemma characterizes the quality of such an approximation (see the Appendix for a proof).
\begin{lem} \label{lem:approx}
Let $x$ be a Gaussian random variable as $x \sim \mathcal{N}(\mu, \sigma^2)$. Define the distortion of the sigmoid approximation at $x$ as $\abs{h(x)-\sigma_\alpha(x)}$. Then, the expected distortion is bounded as $\mathbb{E}_x[ \abs{h(x)-\sigma_\alpha(x)} ] \leq \frac{1}{\sigma\sqrt{2\pi\alpha}} + 2 e^{-(\sqrt{\alpha} + c /\alpha \sigma^2)}$,
where $c$ is a positive constant. As $\alpha$ goes to infinity, the expected distortion goes to $0$.
\end{lem}
\begin{remark}
As has been noted in the machine vision literature \cite{zoran2012natural}, a natural model for an image database is that its images are generated from a mixture of Gaussian distributions. \fref{lem:approx} bounds the deviation of the sigmoid approximation from the non-smooth hash function \fref{eq:hashfn} under this model.
\end{remark}
\subsection{Near-isometry and nearest neighbor preservation}
\label{sec:NNdelta}
Inspired by the definition of \emph{relative contrast} in \cite{he2012difficulty}, we define a generalized measure of data separability for $k$-NN preservation that we call the {\em $k$-order gap} $\Delta_k := d(\vecx_0, \vecx_{k+1}) - d(\vecx_0, \vecx_k)$, where $\vecx_0$ is a query point and $\vecx_k$ and $\vecx_{k+1}$ are its $k^\text{th}$ and $(k+1)^\text{th}$ nearest neighbors, respectively.
We formally show that if the data is highly separable ($\Delta_k$ is large), then the above approach preserves all $k$ nearest neighbors with high probability (see the Appendix for a proof and discussion).
\begin{thm} \label{thm:knn}
Assume that all the data points are independently generated from a mixture of Gaussian distributions, i.e., $\vecx_i \sim \sum_{p=1}^P \pi_p \mathcal{N}(\mu_p,\Sigma_p)$.
Let $\vecx_0 \in \mathbb{R}^N$ denote a query data point in the ambient space, and the other data points $\vecx_i$ be ordered so that $d(\vecx_0,\vecx_1) < d(\vecx_0,\vecx_2) < \ldots < d(\vecx_0,\vecx_Q)$. Let $\delta$ denote the final value of the distortion parameter computed from any binary hashing algorithm, and let $c$ denote a positive constant.
Then, if $\mathbb{E}_x[\Delta_k] \geq 2 \delta + \sqrt{\frac{1}{c}\log \frac{Qk}{\epsilon}}$, the binary hashing algorithm preserves all the $k$-nearest neighbors of a data point with probability at least $1-\epsilon$.
\end{thm}
\subsection{The NIBH algorithm}
\label{sec:admm}
We now develop an algorithm to solve the optimization problem (P).
We apply the alternating direction method of multipliers (ADMM) framework \cite{boydadmm} to construct an efficient algorithm to find a (possibly local) optimal solution of (P).
Note that (P) is non-convex, and therefore no standard optimization method is guaranteed to converge to a globally optimal solution in general.
We introduce an auxiliary variable $\vecu$ to arrive at the equivalent problem:
\begin{align} \label{eq:eqprob}
\underset{\bW,\vecu,\lambda>0}{\text{minimize}}\,\,\, \| \vecu \|_\infty \quad \text{subject to} \quad \vecu = \lambda \vecv(\bW) - \vecc.
\end{align}
The augmented Lagrangian form of this problem can be written as $\underset{\bW,\vecu,\lambda>0}{\text{minimize}}\,\,\, \| \vecu \|_\infty + \frac{\rho}{2} \normtwo{\vecu- \lambda \vecv(\bW) + \vecc + \vecy}^2,$
where $\rho$ is the scaling parameter in ADMM and $\vecy \in \mathbb{R}^{{Q(Q-1)}/{2}}$ is the Lagrange multiplier vector.
The NIBH algorithm proceeds as follows.
First, the variables $\bW$, $\lambda$, $\vecu$, and Lagrange multipliers $\vecy$ are initialized randomly.
Then, at each iteration, we optimize over each of the variables ${\bW, \vecu,}$ and ${\lambda}$ while holding the other variables fixed.
More specifically, in iteration $\ell$, we perform the following four steps until convergence:
\begin{itemize}[leftmargin=*]
\fussy
\item{\emph{Optimize over} $\bW$} via
$\bW^{(\ell+1)}\!\! \leftarrow \!\! \underset{\bW}{\text{arg min}} \frac{1}{2} \sum_{(i,j)\in\Omega} ( u_{ij}^{(\ell)} - \lambda^{(\ell)} \| \frac{1}{1+e^{-\bW \vecx_i}} - \frac{1}{1+e^{-\bW \vecx_j}} \|_2^2 + \normtwo{\vecx_i - \vecx_j} + y_{ij}^{(\ell)} )^2$,
where $\lambda^{(\ell)}$ denotes the value of $\lambda$ in the $\ell^\text{th}$ iteration.
We also use $u_{ij}^{(\ell)}$ and $y_{ij}^{(\ell)}$ to denote the entries in $\vecu^{(\ell)}$ and $\vecy^{(\ell)}$ that correspond to the pair $\vecx_i$ and $\vecx_j$ in the dataset $\mathcal{X}$.
We show in our experiments below that using the accelerated first-order gradient descent algorithm \cite{nest} to solve this subproblem results in good empirical convergence performance (see the Appendix).
\sloppy
\item{\emph{Optimize over} $\vecu$} while holding the other variables fixed; this corresponds to solving the proximal problem of the $\ell_\infty$-norm $\vecu^{(\ell+1)} \!\!\leftarrow \!\! \underset{\vecu}{\text{arg min}} \, \| \vecu \|_\infty \! + \! \frac{\rho}{2} \| \vecu \! - \! \lambda^{(\ell)} \! \vecv^{(\ell+1)} \! + \vecc + \! \vecy^{(\ell)} \|_2^2$.
We use the low-cost algorithm described in \cite{studertom} to perform the proximal operator update; a self-contained alternative evaluation of this operator is sketched after the list.
\fussy
\item{\emph{Optimize over} $\lambda$} while holding the other variables fixed; it corresponds to a positive least squares problem, where $\lambda$ is updated as $\lambda^{(\ell+1)}\!\! \leftarrow \!\! \underset{\lambda>0}{\text{arg min}} \, \frac{1}{2} \| \vecu^{(\ell+1)} - \lambda \vecv^{(\ell+1)} + \vecc + \vecy^{(\ell)} \|_2^2. $
We perform this update using the non-negative least squares algorithm \cite{fcnnls}.
\item{\emph{Update} $\vecy$} via $\vecy^{(\ell+1)} \!\! \leftarrow \!\! \vecy^{(\ell)} + \eta (\vecu^{(\ell+1)} - \lambda^{(\ell+1)}\vecv^{(\ell+1)}+\vecc)$,
where the parameter $\eta$ controls the dual update step size. \fussy
\end{itemize}
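The proximal step in the $\vecu$-update above admits a simple self-contained evaluation via the Moreau decomposition $\mathrm{prox}_{t\|\cdot\|_\infty}(\vecz) = \vecz - t\,\Pi_{\|\cdot\|_1 \leq 1}(\vecz/t)$; the following Python sketch (ours, built on the standard sorting-based $\ell_1$-ball projection, as a substitute for, not a reimplementation of, the algorithm of \cite{studertom}) illustrates it:
\begin{verbatim}
import numpy as np

def project_l1_ball(z, radius=1.0):
    # Euclidean projection onto {x : ||x||_1 <= radius} (sorting-based).
    if np.abs(z).sum() <= radius:
        return z.copy()
    u = np.sort(np.abs(z))[::-1]
    cssv = np.cumsum(u)
    k = np.arange(1, z.size + 1)
    rho = np.nonzero(u * k > cssv - radius)[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1.0)
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def prox_linf(z, t):
    # prox of t*||.||_inf via the Moreau decomposition
    return z - t * project_l1_ball(z / t, 1.0)

# u-update of the iteration above (with t = 1/rho):
#   u_new = prox_linf(lam * v - c_vec - y, 1.0 / rho)
\end{verbatim}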
\subsection{Accelerated NIBH for large-scale datasets}
\label{sec:nibhcg}
The ADMM-based NIBH algorithm is efficient for small-scale datasets (e.g., for secant sets of size $|\mathcal{S}(\mathcal{X})| < 5000$ or so).
However, the memory requirement of NIBH is quadratic in $|\mathcal{X}|$, which is problematic for applications involving large numbers of data points and secants.
In response, we develop an algorithm that approximately solves $(\text{P})$ while scaling very well to large-scale problems.
The key idea comes from classical results in optimization theory related to {\em column generation} (CG) \cite{dantzig1960decomposition,numax}.
The optimization problem \fref{eq:eqprob} is an $\ell_{\infty}$-norm minimization problem with an equality constraint on each secant.
The Karush-Kuhn-Tucker (KKT) condition for this problem states that, if strong duality holds, then the optimal solution is entirely specified by a (typically very small) portion of the constraints.
Intuitively, the secants corresponding to these constraints are the pairwise distances that are harder to preserve in the low-dimensional Hamming space.
We call the set of such secants the \emph{active} set.
In order to solve $(\text{P})$, it suffices to find the active secants and solve NIBH with a much smaller number of active constraints.
To leverage the idea of the active set, we iteratively run NIBH on a small subset of the secants, augmented with secants that violate the near-isometry condition, as detailed below (a schematic sketch is given at the end of this subsection):
\begin{itemize}
\item
Solve $(\text{P})$ with a small random subset $\mathcal{S}_0$ of all the secants $\mathcal{S}(\mathcal{X})$ using NIBH to obtain $\widehat{\bW}$, $\hat{\delta}$, and $\hat{\lambda}$, initial estimates of the parameters. Identify the active set $\mathcal{S}_\text{a}$. Fix $\lambda=\hat{\lambda}$ for the rest of the algorithm.
\item
Randomly select a new subset $\mathcal{S}_\text{v} \subset \mathcal{S}$ of secants that violate the near isometry condition using the current estimates of $\widehat{\bW}$, $\hat{\delta}$, and $\hat{\lambda}$. Then, form an augmented secant set $\mathcal{S}_\text{aug} = \mathcal{S}_\text{a} \cup \mathcal{S}_\text{v}$.
\item
Solve $(\text{P})$ with the secants in the set $\mathcal{S}_\text{aug}$ using the NIBH algorithm.
\end{itemize}
We dub this approach {\em NIBH-CG}.
NIBH-CG iterates over the above steps until no new violating secants are added to the active set.
Since the algorithm searches over all the secants for violating secants in each iteration before terminating, NIBH-CG ensures that all of the constraints are satisfied when it terminates.
A key benefit of NIBH-CG is that only the set of active secants (and not all secants) needs to be stored in memory.
This benefit leads to significant improvements in terms of memory complexity over competing algorithms, since the set of all secants quickly becomes large-scale and can exceed the system memory capacity in large-scale applications.
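To summarize the control flow, the following Python skeleton (ours; \texttt{solve\_nibh} and \texttt{is\_violated} stand for the ADMM solver above and the near-isometry test, both hypothetical placeholders here, and for brevity all previously selected secants are retained rather than re-extracting only the tight, active ones) captures the loop:
\begin{verbatim}
import numpy as np

def nibh_cg(secants, solve_nibh, is_violated, n0=500, batch=500, seed=0):
    # secants: list of secant vectors; solve_nibh(subset) returns a model
    # (e.g., W, lambda, delta); is_violated(model, s) tests one secant.
    rng = np.random.default_rng(seed)
    active = set(rng.choice(len(secants), size=min(n0, len(secants)),
                            replace=False).tolist())
    while True:
        model = solve_nibh([secants[i] for i in sorted(active)])
        violators = [i for i in range(len(secants))
                     if i not in active and is_violated(model, secants[i])]
        if not violators:           # all constraints satisfied: terminate
            return model
        picked = rng.choice(violators, size=min(batch, len(violators)),
                            replace=False)
        active.update(int(i) for i in picked)
\end{verbatim}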
\section{Experiments}
\label{sec:experiments}
\begin{figure*}[t]
\vspace{-0.1cm}
\centering
\includegraphics[width=1\textwidth]{fig2.pdf}\label{fig:str}
\vspace{-.5cm}
\caption{Comparison of the NIBH and NIBH-CG algorithms against several baseline binary hashing algorithms using three small-scale datasets with 4950 secants ($Q$ = 100). The performance of NIBH-CG closely follows that of NIBH, and both outperform all of the other algorithms in terms of the maximum distortion $\delta$ (superior distance preservation), mean average precision $\text{MAP}$ of training samples (superior nearest neighbor preservation), and Kendall $\tau$ rank correlation coefficient (superior ranking preservation).}
\label{fig:str}
\vspace{-.2cm}
\end{figure*}
\label{fig:deltatest}
In this section, we validate the NIBH and NIBH-CG algorithms via experiments on a range of synthetic and real-world datasets, including three small-scale, three medium-scale, and one large-scale dataset, with respect to three metrics.
We compare NIBH against ten state-of-the-art binary hashing algorithms, including
binary reconstructive embedding (BRE) \cite{kulis2009learning},
spectral hashing (SH) \cite{weiss2009spectral},
anchor graph hashing (AGH) \cite{liu2011hashing},
multidimensional spectral hashing (MDSH) \cite{weiss2012multidimensional},
scalable graph hashing (SGH) \cite{jiang2015scalable},
PCA hashing (PCAH) \cite{jolliffe2002principal},
isotropic hashing (IsoHash) \cite{kong2012isotropic},
spherical Hamming distance hashing (SHD) \cite{heo2012spherical},
circulant binary embedding (CBE) \cite{yu2014circulant},
and
locality-sensitive hashing (LSH) \cite{indyk1998approximate}.
\begin{figure*}[tp]
\centering
\includegraphics[width=0.8\textwidth]{fig3.pdf}\label{fig:deltatest}
\caption{Comparison of NIBH and NIBH-CG against several state-of-the-art binary hashing algorithms in preserving isometry on MNIST data. (a) NIBH-CG outperforms the other algorithms in minimizing the isometry constant on unseen data $\delta_{\text{test}}$. (b) NIBH and NIBH-CG provide a better isometry guarantee with a small sacrifice in universality.}
\end{figure*}
\subsection{Performance metrics and datasets}
\label{sec:perfmetric}
We compare the algorithms using the following three metrics:
{\emph{Maximum distortion}} $\delta= \underset{\lambda>0}{\text{inf}} ||\lambda \hat{\vecv} - \vecc||_{\infty}$, where the vector $\hat{\vecv}$ contains the pairwise Hamming distances between the learned binary codes.
This metric quantifies the distance preservation among all of the pairwise distances after projecting the training data in the ambient space into binary codes.
We also define the maximum distortion for unseen test data $\delta_{\text{test}}$, which measures the distance preservation on a hold-out test dataset using the hash function learned from the training dataset.
{\emph{Mean average precision}} (MAP) for near-neighbor preservation in the Hamming space. MAP is computed by first finding, for each query point in a hold-out test set, the set of $k$-nearest neighbors in the ambient space $\mathcal{L}^k$ and the corresponding set $\mathcal{L}_\text{H}^k$ in the Hamming space, and then calculating the average precision $\text{AP} = |\mathcal{L}^k \cap \mathcal{L}_\text{H}^k |/k$.
We then report MAP by calculating the mean value of $\text{AP}$ across all data points.
{\emph{Kendall $\tau$ ranking correlation coefficient}}. We first rank the set of $k$-nearest neighbors for each data point by increasing distance in the ambient space as $\mathcal{T}(\mathcal{L}^k)$ and in the Hamming space as $\mathcal{T}(\mathcal{L}_\text{H}^k)$.
The Kendall $\tau$ correlation coefficient is a scalar $\tau\in[-1,1]$ that measures the similarity between the two ranked sets $\mathcal{T}(\mathcal{L}^k)$ and $\mathcal{T}(\mathcal{L}_\text{H}^k)$ \cite{kendall1938new}.
The value of $\tau$ increases as the similarity between the two rankings increases and reaches the maximum value of $\tau=1$ when they are identical.
We report the average value of $\tau$ across all data points in the training dataset.
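Both ranking metrics are straightforward to compute from pairwise distance matrices; a minimal Python sketch (ours; for the $\tau$ comparison we rank the common ambient $k$-NN set under both metrics, one common variant) is:
\begin{verbatim}
import numpy as np
from scipy.stats import kendalltau

def knn_metrics(D_amb, D_ham, k=10):
    # D_amb, D_ham: (Q, Q) pairwise distances in ambient/Hamming spaces.
    Q = D_amb.shape[0]
    aps, taus = [], []
    for q in range(Q):
        nn_a = np.argsort(D_amb[q])[1:k + 1]    # skip the query itself
        nn_h = np.argsort(D_ham[q])[1:k + 1]
        aps.append(len(set(nn_a) & set(nn_h)) / k)
        tau, _ = kendalltau(D_amb[q, nn_a], D_ham[q, nn_a])
        taus.append(tau)
    return float(np.mean(aps)), float(np.nanmean(taus))
\end{verbatim}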
To compare the algorithms, we use the following standard datasets from computer vision:
\emph{Random} consists of independently drawn random vectors in $\mathbb{R}^{100}$ from a multivariate Gaussian distribution with zero mean and identity covariance matrix.
\emph{Translating squares} is a synthetic dataset consisting of $10 \times 10$ images that are translations of a $3\times 3$ white square on black background \cite{numax}.
\emph{MNIST} is a collection of 60,000 $28 \times 28$ greyscale images of handwritten digits
\cite{lecun1998mnist}.
\emph{Photo-Tourism} is a corpus of approximately 300,000 image patches, represented using scale-invariant feature transform (SIFT) features \cite{lowe2004distinctive} in $\mathbb{R}^{128}$ \cite{snavely2006photo}.
\emph{LabelMe} is a collection of over 20,000 images represented using GIST descriptors in $\mathbb{R}^{512}$ \cite{torralba2008small}.
\emph{Peekaboom} is a collection of 60,000 images represented using GIST descriptors in $\mathbb{R}^{512}$ \cite{torralba2008small}.
Following the experimental approaches of the hashing literature \cite{kulis2009learning,norouzi2011minimal}, we pre-process the data by subtracting the mean and then normalizing all points to lie on the unit sphere.
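In code, this pre-processing step reads (a minimal sketch):
\begin{verbatim}
import numpy as np

def preprocess(X):
    # Subtract the mean, then normalize every point to the unit sphere.
    X = X - X.mean(axis=0)
    return X / np.linalg.norm(X, axis=1, keepdims=True)
\end{verbatim}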
\vspace{-0cm}
\subsection{Small- and medium-scale experiments}
\label{sec:smalld}
We start by evaluating the performance of NIBH and NIBH-CG using a small-scale subset of the first three datasets.
Small-scale datasets enable us to compare the performance of NIBH vs.\ NIBH-CG to verify that they perform similarly.
They also help us assess the asymptotic behavior of the algorithms in preserving isometry, since the total number of secants is small compared to the bit budget of compact binary codes.
\paragraph{Experimental setup.}
We randomly select $Q=$ 100 data points from the \emph{Random}, \emph{Translating squares}, and \emph{MNIST} datasets.
We then apply the NIBH, NIBH-CG, and all the baseline algorithms on each dataset for different target binary code word lengths $M$ from 1 to 70 bits.
We set the NIBH and NIBH-CG algorithm parameters to the common choice of $\rho=1$ and $\eta=1.6$.
To generate a hash function of length $M$ for LSH, we draw $M$ random vectors from a Gaussian distribution with zero mean and an identity covariance matrix. We use the same random vectors to initialize NIBH and the other baseline algorithms.
In the near-neighbor preservation experiments, to show the direct advantage of minimizing the $\ell_{\infty}$-norm over the $\ell_2$-norm, we follow the exact procedure described in BRE \cite{kulis2009learning} to select the training secants, i.e., we apply the NIBH algorithm to only the lowest $5\%$ of the pairwise distances (which are set to zero as in BRE) combined with the highest $2\%$ of the pairwise distances.
We follow the \emph{continuation} approach \cite{wen2010fast} to set the value of $\alpha$.
We start with a small value of $\alpha$ (e.g., $\alpha=1$) to avoid becoming stuck in bad local minima, and then gradually increase $\alpha$ as the algorithm proceeds.
As the algorithm gets closer to convergence and has obtained a reasonably good estimate of the parameters $\bW$ and $\lambda$, we set $\alpha = 10$, which enforces a good approximation of the sign function (see Lemma \ref{lem:approx}).
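Schematically, the continuation loop has the following structure, where \texttt{inner\_solve} is a hypothetical stand-in for one pass of the NIBH optimizer at fixed $\alpha$, and the schedule values are illustrative:
\begin{verbatim}
def continuation_fit(X, W0, lam0, inner_solve, alphas=(1, 2, 5, 10)):
    # inner_solve(X, W, lam, alpha): hypothetical placeholder for one pass
    # of the NIBH optimizer at a fixed smoothing parameter alpha.
    W, lam = W0, lam0
    for alpha in alphas:  # start smooth (alpha = 1), end sharp (alpha = 10)
        W, lam = inner_solve(X, W, lam, alpha)
    return W, lam
\end{verbatim}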
\paragraph{Results.}
\vspace{-0.2cm}
The plots in the top row of Figure~\ref{fig:str} illustrate the value of the distortion parameter $\delta$ as a function of the number of projections (bits) $M$.
The performance of NIBH and NIBH-CG closely follow each other, indicating that NIBH-CG is a good approximation to NIBH.
Both NIBH and NIBH-CG outperform the other baseline algorithms in terms of the distortion parameter $\delta$.
Among these baselines, LSH has the lowest isometry performance since random projections are oblivious to the intrinsic geometry of the training dataset.
To achieve $\delta=$ 1, NIBH(-CG) requires $60\%$ fewer bits $M$ than CBE and BRE.
NIBH(-CG) also achieves better isometry performance asymptotically, reaching $\delta \approx$ 0.5 given a sufficient number of bits ($M\geq$ 70), while the performance of most of the other algorithms plateaus at $\delta=$ 1.
NIBH's superior near-isometry performance extends well to unseen data.
Figure \ref{fig:deltatest}(a) demonstrates that NIBH achieves the lowest isometry constant on a test dataset $\delta_{\text{test}}$ compared to other hashing algorithms.
Figure \ref{fig:deltatest}(b) further suggests that NIBH's superior isometry performance comes with smallest sacrifice to the universality of the hash functions.
\begin{figure*}[tp]
\vspace{-0.1cm}
\centering
\includegraphics[width=0.9\textwidth]{fig4.pdf}
\caption{Hamming ranking performance comparison on three medium-scale datasets ($Q=$ 1000). The top-50 neighbors are used to report MAP over a test set of the same size.}
\label{fig:btr}
\vspace{-0.5cm}
\end{figure*}
The plots in the middle and bottom rows of Figure~\ref{fig:str} show the average precision for retrieving training data and the Kendall $\tau$ correlation coefficient, respectively, as a function of the number of bits $M$.
We see that, as $M$ increases, NIBH preserves a higher percentage of nearest neighbors than the other baseline algorithms, with a better average ranking among the $k=$ 10 nearest neighbors.
Now we showcase the performance of NIBH-CG on three medium-scale, real-world datasets used in \cite{kulis2009learning,norouzi2011minimal}, including \emph{Photo-tourism}, \emph{LabelMe}, and \emph{Peekaboom} for the popular machine learning task of {\em data retrieval}.
From each dataset we randomly select $Q=$ 1000 training points, following the setup in BRE \cite{kulis2009learning}, and use them to train NIBH-CG and the other baseline algorithms.
We then randomly select a separate set of $Q=$ 1000 data points and use it to test the performance of NIBH-CG and the other baseline algorithms in terms of MAP with $k = 50$.
Figure~\ref{fig:btr} illustrates the performance of NIBH-CG on these datasets.
NIBH-CG outperforms all the baseline algorithms by large margins in Hamming ranking performance, in terms of MAP for the top-50 near-neighbors.
\subsection{Large-scale experiments}
\label{sec:larged}
We now demonstrate that NIBH-CG scales well to large-scale datasets.
We use the full \emph{MNIST} dataset with 60,000 training images and augment it with three rotated versions of each image (rotations of 90$^{\circ}$, 180$^{\circ}$, and 270$^{\circ}$) to create a larger dataset with $Q=$ 240,000 data points.
Next, we construct 4 training sets with 1,000, 10,000, 100,000, and 240,000 images out of this large set.
We train all algorithms with $M =$ 30 bits and compare their performance on a test set of 10,000 images.
BRE fails to execute on a standard desktop PC with 12 GB of RAM for training sets with more than 100,000 points due to the size of the secant set $|\mathcal{S}(\mathcal{X})|$.
The results for all algorithms are given in Table~\ref{tab:largescale}; we tabulate their performance in terms of MAP for the top-500 neighbors.
The performance of NIBH-CG is significantly better than the baseline algorithms and, moreover, improves as the size of the training set grows.
This emphasizes that NIBH-CG excels at large-scale problems thanks to its very small memory requirement; indeed, the memory requirement of NIBH-CG is linear in the number of \emph{active} secants rather than the total number of secants.
\vspace{-0.4cm}
\section{Discussion}
We have demonstrated that the worst-case, $\ell_{\infty}$-norm-based near-isometric binary hashing (NIBH) algorithm is superior to a wide range of algorithms based on the more traditional average-case, $\ell_2$-norm.
Despite its non-convexity and non-smoothness, NIBH admits an efficient optimization algorithm that converges to a high-performing local minimum.
Moreover, NIBH-CG, the accelerated version of NIBH, provides significant memory advantages over existing algorithms.
Our exhaustive experiments with six datasets, three metrics, and ten algorithms have shown that NIBH outperforms all of the state-of-the-art data-dependent hashing algorithms.
The results in this paper provide a strong motivation for exploring $\ell_{\infty}$-norm formulations in binary hashing.
\begin{table*}
\centering
\caption{Comparison of NIBH-CG against several baseline binary hashing algorithms on large-scale \emph{MNIST} datasets with over 28 billion secants $|\mathcal{S}(\mathcal{X})|$. We tabulate the Hamming ranking performance in terms of mean average precision (MAP) for different sizes of the dataset. All training times are in seconds.}
\scalebox{0.8}{
\label{tab:largescale}
\begin{tabular}{|c|c|c|c|c|c|}
\hline $M = 30$ bits & \multicolumn{4}{c|}{ MAP / Top-500 ( \emph{MNIST + rotations})} & \multicolumn{1}{c|}{ Training time } \\
\hline {Training size} $Q$ & $1K$ & $10K$ & $100K$ & $240K$ & $240K$\\
\hline {Secant size} $|\mathcal{S}(\mathcal{X})|$ & $500K$ & $50M$ & $5B$ & $28B$ & $28B$ \\
\hline
\hline \textbf{NIBH-CG} & $\bf{52.79}$ ($\pm 0.15$) & $\bf{54.69}$ ($\pm 0.18$) & $\bf{54.93}$ ($\pm 0.23$) & $\bf{55.52}$ ($\pm 0.11$) & 541.43 \\
\hline BRE & $48.33$ ($\pm 0.65$) & $50.67$($\pm 0.33$) & -- & -- & $18685.51$ \\
\hline CBE & $38.70$ ($\pm 1.18$) & $38.12$ ($\pm 1.34$) & $38.50$ ($\pm 2.05$) & $38.53$ ($\pm 0.83$) & 68.94\\
\hline SPH & $44.33$ ($\pm 0.74$) & $44.24$ ($\pm 0.61$) & $44.37$ ($\pm 0.71$) & $44.32$ ($\pm 0.63$) & 184.46 \\
\hline SH & $40.12$ ($\pm 0.00$) & $39.37$ ($\pm 0.00$) & $38.79$ ($\pm 0.00$) & $38.26$ ($\pm 0.00$) & 3.05 \\
\hline MDSH & $41.06$ ($\pm 0.00$) & $41.23$ ($\pm 0.00$) & $40.80$ ($\pm 0.00$) & $40.39$ ($\pm 0.00$) & 15.00 \\
\hline AGH & $45.81$ ($\pm 0.34$) & $47.78$ ($\pm$ 0.38) & $47.69$ ($\pm 0.41$) & $47.38$ ($\pm 0.32$) & 4.49 \\
\hline SGH & $51.32$ ($\pm 0.07$) & $51.33$ ($\pm 0.20$) & $51.01$ ($\pm 0.23$) & $50.66$ ($\pm 0.76$) & $5.89$ \\
\hline PCAH & $39.90$ ($\pm 0.00$) & $38.53$ ($\pm 0.00$) & $38.81$ ($\pm 0.00$) & $37.50$ ($\pm 0.00$) & $0.08$\\
\hline IsoHash & $50.91$ ($\pm 0.00 $) & $50.90$ ($\pm 0.00 $) & $50.72$ ($\pm 0.00 $) & $50.55$ ($\pm 0.00 $) & $2.82$ \\
\hline LSH & $33.69$ ($\pm 0.94 $) & $33.69$ ($\pm 0.94 $) & $33.69$ ($\pm 0.94 $) & $33.69$ ($\pm 0.94 $) & $2.29\times 10^{-4}$ \\
\hline
\end{tabular}}
\end{table*}
\section{Introduction}
\label{1}
Several types of dense relativistic matter exist in compact stars. For example, a relativistic electron
plasma forms and plays an essential role in white dwarfs. Also, electrons form a relativistic fluid inside
nuclear matter in the interior of neutron stars. If quark stars exist in nature, the corresponding dense
quark matter in the core will be a strongly coupled version of relativistic matter. Often, such matter is
subject to strong magnetic fields. In white dwarfs, e.g., the magnetic fields reach up to $10^{9}~\mbox{G}$,
while, in neutron stars, they may be up to $10^{15}~\mbox{G}$ \cite{astroreview,astroreview1}.
Relativistic matter in a strong magnetic field is also created in heavy ion collisions \cite{Skokov:2009qp}
that can lead to the chiral magnetic effect \cite{Kharzeev:2007tn}.
Many physical properties of the stellar matter under extreme conditions realized inside compact
stars are understood theoretically and could be tested to some extent through observational
data. However, as was pointed out in Refs.~\cite{FI1,Metlitski:2005pr,Gorbar:2009bm,Rebhan,
Basar:2010zd,Fukushima,Kim,Frolov}, the dense relativistic matter in a strong magnetic field may
hold some new theoretical surprises. In particular, a topological contribution in the axial current
at the lowest Landau level (LLL) was revealed in Ref.~\cite{Metlitski:2005pr}. More recently,
it was shown in Ref.~\cite{Gorbar:2009bm}, that in the normal phase of dense relativistic matter
in a magnetic field, there exists a contribution to the axial current associated with a relative shift
of the longitudinal momenta in the dispersion relations of opposite chirality fermions,
$k^3 \to k^3 \pm \Delta$, where the momentum $k^3$ is directed along the magnetic field and
$\Delta$ is the chiral shift parameter intimately connected with the induced axial current
$j^{3}_{5}$. Unlike the topological contribution in $j^{3}_{5}$ at the
LLL \cite{Metlitski:2005pr}, the dynamical one appears only in interacting matter and affects
the fermions in {\it all} Landau levels, including those around the Fermi surface. The induced axial
current and the shift of the Fermi surfaces of the left-handed and right-handed fermions are expected
to play an important role in transport and emission properties of matter in various types of
compact stars as well as in heavy ion collisions.
The main goal of this Letter is to study some general and subtle features of the dynamics with the
chiral shift parameter $\Delta$. One of such issues is whether the form of the induced axial $j^{3}_{5}$
current coincides with the result in the theory of noninteracting fermions in a magnetic field
\cite{Metlitski:2005pr} or whether it is affected by interactions (for related discussions, see
Refs.~\cite{Metlitski:2005pr,Gorbar:2009bm,Rebhan,Fukushima,Rubakov}). This question is
intimately related to that of the connection of the structure of the induced $j^{3}_{5}$ with
the axial anomaly \cite{ABJ}. By using the Nambu-Jona-Lasinio (NJL) model in a magnetic field,
it will be shown that while the dynamics responsible for the
generation of the chiral shift $\Delta$ essentially modifies the form of this current, it does {\it not}
affect the form of the axial anomaly: the latter is connected only with the topological part in the LLL
\cite{Ambjorn}. Moreover, while the topological contribution in the axial
current is generated in the infrared kinematic region (at the LLL), the contribution of $\Delta$
in this current is mostly generated in the ultraviolet, which implies that higher Landau levels
are primarily important in that case.
\section{Model: General properties}
\label{2}
As in Ref.~\cite{Gorbar:2009bm}, in order to illustrate this phenomenon in the clearest way,
we will utilize the simplest NJL model with one fermion flavor, whose
Lagrangian density is
\begin{eqnarray}
{\cal L} &=& \bar\psi \left(iD_\nu+\mu_0\delta_{\nu}^{0}\right)\gamma^\nu \psi
-m_{0}\bar\psi \psi +\frac{G_{\rm int}}{2}\left[\left(\bar\psi \psi\right)^2
+\left(\bar\psi i\gamma^5\psi\right)^2\right],
\label{NJLmodel}
\end{eqnarray}
where $m_{0}$ is the bare fermion mass and $\mu_0$ is the chemical potential. By
definition, $\gamma^5\equiv i\gamma^0\gamma^1\gamma^2\gamma^3$. The covariant
derivative $D_{\nu}=\partial_\nu + i e A_{\nu}$ includes the external gauge field
$A_{\nu}$, which is assumed to be in the Landau gauge, $A^{\nu}= x B \delta_{2}^{\nu}$
\cite{footnote1}. Here $B$ is the strength of the external magnetic field pointing in the
$z$-direction. The $(3+1)$-dimensional Lorentz symmetry in the model is explicitly broken down
to the $SO(2)$ symmetry of rotations around the $z$-axis in the presence of this magnetic field.
Also, except for parity ${\cal P}$, all the discrete symmetries ${\cal C}$, ${\cal T}$,
${\cal CP}$, ${\cal CT}$, ${\cal PT}$, and ${\cal CPT}$ are broken (here ${\cal C}$
and ${\cal T}$ are charge conjugation and time reversal, respectively).
In the chiral limit, $m_{0}=0$, this model possesses the chiral $U(1)_L\times U(1)_R$
symmetry. In the vacuum state ($\mu_0=0$), however, this chiral symmetry is known to be
spontaneously broken because of the magnetic catalysis phenomenon \cite{MC}. In essence,
such spontaneous breaking results from the enhanced pairing dynamics of fermions and
antifermions in the infrared. The enhancement results from the non-vanishing density of
states in the lowest Landau level that is subject to an effective dimensional reduction
$D\to D-2$. At a sufficiently large value of the chemical potential, the chiral symmetry is
expected to be restored. As we shall see below, this is indeed the case, but the corresponding
normal ground state is characterized by a nonzero chiral shift parameter $\Delta$.
We will analyze model (\ref{NJLmodel}) in the mean field approximation,
which is reliable in the weakly coupled regime when the dimensionless coupling constant
$g \equiv G_{\rm int}\Lambda^2/(4\pi^2) \ll 1$, where $\Lambda$ is an ultraviolet cutoff.
Note that here the coupling $g$ is defined in such a way that $g_{cr} =1$, where $g_{cr}$ is
the critical value for generating a fermion dynamical mass in the NJL model without
magnetic field.
In this approximation, the full fermion propagator does not allow any wave function
renormalization different from $1$. Thus, the general ansatz for the (inverse) full propagator
is given by \cite{Gorbar:2009bm}
\begin{eqnarray}
iG^{-1}(u,u^\prime) &=&\Big[(i\partial_t+\mu)\gamma^0 -
(\bm{\pi}\cdot\bm{\gamma})
+ i\tilde{\mu}\gamma^1\gamma^2
+\Delta\gamma^3\gamma^5
-m\Big]\delta^{4}(u- u^\prime),
\label{ginverse}
\end{eqnarray}
where $u=(t,\bm{r})$, $\pi^k = i (\partial^k + i e A^{k})$ is the canonical
momentum, $m$ is the constituent (medium-modified) fermion mass,
$\mu$ is an effective chemical potential in
the quasiparticle dispersion relation, $\tilde{\mu}$ is an anomalous magnetic moment, and
$\Delta$ is the chiral shift parameter. Note that $\mu$ in the full propagator may differ from
the thermodynamical chemical potential $\mu_0$ in the Lagrangian density (see below).
As is shown in Appendix \ref{AppPropagator}, in the mean field approximation one has
$\tilde{\mu} = 0$ in a self-consistent solution to the gap equation in this model.
Let us now demonstrate that one can get an important insight into the properties of the
solutions in this model already from the {\it form} of the gap equation for the parameters
$\mu$, $\Delta$, and $m$. As described in more detail in Appendix \ref{AppPropagator},
utilizing the approach based on the effective action for composite operators \cite{CJT},
one can show that the gap equation in the mean field approximation reduces to the
following set of equations:
\begin{eqnarray}
\mu - \mu_0 &=& -\frac{1}{2}G_{\rm int} \langle j^0\rangle ,
\label{gap-mu} \\
\Delta &=& -\frac{1}{2}G_{\rm int} \langle j_5^3\rangle ,
\label{gap-Delta} \\
m - m_0 &=& -G_{\rm int} \langle \bar\psi \psi\rangle ,
\label{gap-m}
\end{eqnarray}
where the chiral condensate $\langle \bar\psi \psi\rangle$, the vacuum expectation values
of the fermion density $j^0$ and the axial current density $j_5^3$ are
\begin{eqnarray}
\langle j^0\rangle &=& -\mbox{tr}\left[\gamma^0G(u,u)\right],
\label{charge} \\
\langle j_5^3\rangle &=& -\mbox{tr}\left[\gamma^3\gamma^5G(u,u)\right],
\label{current} \\
\langle \bar\psi \psi\rangle &=& -\mbox{tr}\left[G(u,u)\right].
\label{condensate}
\end{eqnarray}
Let us now consider the case of the normal phase in the chiral limit, when $m = m_0 = 0$
and $\langle \bar\psi \psi\rangle = 0$. It is realized when the chemical potential
$\mu_0 > m_{dyn}/\sqrt{2}$ \cite{Gorbar:2009bm}, where $m_{dyn}$ is a dynamical
fermion mass in a magnetic field at zero chemical potential and zero temperature. Let us
analyze Eqs.~(\ref{gap-mu}) and (\ref{gap-Delta}) in perturbation theory in the coupling
constant $g$. In the zero approximation, we have a theory of free fermions in a magnetic
field. To this order, $\mu=\mu_0$ and $\Delta=0$. However, even in this case the fermion
density $\langle j^0\rangle$ and the axial current density $\langle j_5^3\rangle$ are
nonzero. The former can be presented as a sum over the Landau levels:
\begin{eqnarray}
\langle j^0\rangle_0 &=& \frac{ \mu_0|eB|}{2\pi^2} +\frac{\mbox{sign}(\mu_0)|eB|}{\pi^2}
\sum_{n=1}^{\infty}
\sqrt{\mu_{0}^2-2n|eB|}
\theta\left(|\mu_0|-\sqrt{2n|eB|}\right),
\label{j}
\end{eqnarray}
and the latter is \cite{Metlitski:2005pr}
\begin{equation}
\langle j^3_5\rangle_0 = \frac{eB}{2\pi^2}\mu_0\, .
\label{MZ}
\end{equation}
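For orientation, the zeroth-order densities in Eqs.~(\ref{j}) and (\ref{MZ}) are straightforward to evaluate numerically; a minimal sketch (in units $\hbar=c=1$, with the sum truncated by the step function) is:
\begin{verbatim}
import numpy as np

def j0_free(mu, eB):
    # Free-fermion density in a magnetic field, Eq. (j).
    density = mu * abs(eB) / (2.0 * np.pi**2)    # lowest Landau level
    n = 1
    while mu**2 > 2.0 * n * abs(eB):             # theta-function cutoff
        density += (np.sign(mu) * abs(eB) / np.pi**2
                    * np.sqrt(mu**2 - 2.0 * n * abs(eB)))
        n += 1
    return density

def j35_free(mu, eB):
    # Topological LLL axial current density, Eq. (MZ).
    return eB * mu / (2.0 * np.pi**2)
\end{verbatim}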
Then, to the next order in the coupling constant, one finds from Eq.~(\ref{gap-Delta})
that $\Delta \propto G_{\rm int} \langle j^3_5\rangle_0 \neq 0$. Thus, in the normal phase of
this theory, there {\it necessarily} exists a nonzero shift parameter $\Delta$. In fact, this is
one of the main results of Ref.~\cite{Gorbar:2009bm}. Let us also emphasize that $\Delta$
is generated by perturbative dynamics, which is directly connected with the fact that the vanishing
$\Delta$ is not protected by any symmetry [recall that ${\cal C} =+1$, ${\cal P} =+1$, and
${\cal T} = -1$ for the axial current density $j_5^3$, and besides parity ${\cal P}$, all the discrete
symmetries are broken in model (\ref{NJLmodel})].
Similarly, one finds from Eq.~(\ref{gap-mu}) that
$\mu - \mu_0 \propto G_{\rm int}\langle j^0\rangle_0 \neq 0$, i.e., $\mu$ and $\mu_0$ are different.
One can trace the origin of this difference to the Hartree terms in the gap equation [see the last two
terms in Eq.~(\ref{gap})]. This seems to be robust in the NJL model with a local four-fermion interaction
and a chemical potential, associated with a global charge, such as a baryon (or lepton) charge for example.
Note that when the conserved charge is related to a gauge symmetry, as in the case of the electric charge,
the situation may be different. In that case, a neutrality condition imposed by the Gauss law
applies \cite{Kapusta}. The latter is necessary to ensure thermodynamic equilibrium
in a system. This is likely to result in $\mu^{(e)}=\mu_{0}^{(e)}$, where $\mu^{(e)}$ is the chemical
potential for electric charge. Note that usually there are chemical potentials of both types in dense
relativistic matter. While being of importance for potential applications in principle, we expect
that this fact will not change our main conclusion regarding the chiral shift parameter.
By noting from Eq.~(\ref{gap-Delta}) that the chiral shift $\Delta$ is induced by the axial
current, it is natural to ask whether $\Delta$ itself affects this current. The answer to
this question is affirmative \cite{Gorbar:2009bm}. Another natural question is whether
the divergence of this modified current satisfies the conventional anomaly equation
\cite{ABJ}. As will be shown in the next section, the answer to this question is also
affirmative.
\section{Induced axial current and axial anomaly}
\label{3}
In this section, using the gauge invariant point-splitting regularization, we study the influence
of the shift parameter $\Delta$ on the form of the axial current and the axial anomaly (for
reviews of this regularization, see Refs.~\cite{Peskin,Ioffe}). The analysis is model independent
and is based only on the form of the fermion propagator with $\Delta$ in an external
electromagnetic field. Our main conclusion is that while including $\Delta$ essentially
changes the form of the axial current, it does not modify the axial anomaly. Moreover, while
the contribution of the chemical potential in the axial current is generated in the infrared kinematic
region (at the LLL) \cite{Metlitski:2005pr}, the contribution of $\Delta$ in the current is
mostly generated in the ultraviolet (at all Landau levels).
\subsection{Axial current}
\label{3a}
We consider the case of a constant electromagnetic field. Since it is known that the axial anomaly
is insensitive to the chemical potential \cite{Son,Gavai}, the latter will be omitted. Then, the general
form of the fermion propagator is \cite{Schwinger}
\begin{equation}
G(u,u^{\prime}) = e^{i\Phi(u,u^{\prime})}\bar{G}(u - u^{\prime})
\end{equation}
with the Schwinger phase
\begin{equation}
\Phi(u,u^{\prime})=e\int_{u^{\prime}}^u dx^{\nu}A_{\nu},
\label{phase}
\end{equation}
where the integration is performed along the straight line. The translation invariant part
$\bar{G}(u - u^{\prime})$ depends only on the field strength $F_{\mu\nu}$.
Therefore, in the normal phase with $m=0$, the inverse propagator (\ref{ginverse})
(with $\tilde{\mu}=0$) can be rewritten as
\begin{eqnarray}
iG^{-1}&=&iD_{\nu}\gamma^{\nu}+\Delta\gamma^3\gamma^5
=\left(iD_{\nu}\gamma^{\nu}-\Delta s_\perp\gamma^3\right) {\cal P}^{-}_5+
\left(iD_{\nu}\gamma^{\nu}+\Delta s_\perp\gamma^3\right) {\cal P}^{+}_5\,,
\end{eqnarray}
where $s_\perp \equiv \mbox{sign}(eB)$, $D_{\nu}=\partial_{\nu}+ ieA_{\nu}$, and
${\cal P}^{\mp}_5=(1 \mp s_\perp \gamma^5)/2$. This equation implies that the
effective electromagnetic vector potential equals $\tilde{A}_{\nu}^{-}=A_{\nu}
+ (s_\perp \Delta /e)\delta_{\nu}^{3} $ and $\tilde{A}_{\nu}^{+}=A_{\nu} -
(s_\perp \Delta /e)\delta_{\nu}^{3}$ for the $-$ and $+$ chiral fermions,
respectively. Since the field strength $F_{\mu\nu}$ for $\tilde{A}_{\nu}^{\mp}$
is the same as for $A_{\nu}$, $\Delta$ affects only the Schwinger phase
(\ref{phase}):
\begin{eqnarray}
\Phi^{-}_{\Delta}(u,u^{\prime}) = \Phi(u,u^{\prime}) + s_{\perp}\Delta (u^3-u^{\prime\,3}),\\
\Phi^{+}_{\Delta}(u,u^{\prime}) = \Phi(u,u^{\prime}) - s_{\perp}\Delta
(u^3-u^{\prime\,3}).
\end{eqnarray}
Thus, we find
\begin{eqnarray}
G(u,u^{\prime})&=&
\exp[is_{\perp}\Delta (u^3-u^{\prime\,3})]\,{\cal P}^{-}_5\,G_0(u,u^{\prime})
+\exp[-i s_{\perp}\Delta (u^3-u^{\prime\,3})]\,{\cal
P}^{+}_5\,G_0(u,u^{\prime})\,, \label{propagator-transformed}
\end{eqnarray}
where $G_0$ is the propagator with $\Delta=0$. Note that $\Delta$ appears now only in the phase factors.
According to Eq.~(\ref{current}), the axial current density is equal to
\begin{equation}
\langle j^{\mu}_5(u)\rangle=-\mbox{tr}
\left[\gamma^{\mu}\gamma^5\,G(u,u+\epsilon)\right]_{\epsilon \to 0}.
\label{density1}
\end{equation}
On the other hand, the fermion propagator in an electromagnetic field has the
following singular behavior for $u^{\prime}-u=\epsilon \to 0$
\cite{Peskin,Ioffe}:
\begin{equation}
G_0(u,u+\epsilon)\simeq
\frac{i}{2\pi^2}\left[\frac{\hat{\epsilon}}{\epsilon ^4}
-\frac{1}{16\epsilon ^2}eF_{\mu\nu} \left(\hat{\epsilon}\sigma^{\mu\nu}
+\sigma^{\mu\nu}\hat{\epsilon } \right)\right], \label{asymptotics}
\end{equation}
where $\hat{\epsilon }=\gamma_{\mu}\epsilon^{\mu}$. Then using
Eqs.~(\ref{propagator-transformed}) -- (\ref{asymptotics}), we find
\begin{eqnarray}
\langle j^{\mu}_5 \rangle_{\rm singular} &=& \left.
\frac{i\epsilon^{\mu}s_\perp}{\pi^2\epsilon^4} \left(e^{- is_\perp
\Delta \epsilon^3}- e^{is_\perp \Delta \epsilon^3}\right) +
\frac{ieF_{\lambda\sigma}\epsilon_{\beta}\epsilon^{\beta\mu\lambda\sigma}}
{8\pi^2\epsilon^2} \left(e^{- i s_\perp \Delta \epsilon^3}+e^{i
s_\perp \Delta \epsilon^3}\right)\right|_{\epsilon \to 0}\,.
\label{axialcurrent}
\end{eqnarray}
Taking into account that the limit $\epsilon \to 0$ should be taken in this
equation symmetrically \cite{Peskin,Ioffe}, i.e.,
$\epsilon^{\mu}\epsilon^{\nu}/\epsilon^2 \to \frac{1}{4}g^{\mu\nu}$, and the
fact that its second term contains only odd powers of $\epsilon$, we arrive at
\begin{equation}
\langle j^{\mu}_5 \rangle_{\rm singular} =
- \frac{\Delta}{2\pi^2\epsilon^2} \delta^{\mu}_{3} \sim
\frac{\Lambda^2 \Delta }{2\pi^2}\delta^{\mu}_{3} \,. \label{chiral-current}
\end{equation}
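In more detail, the first term of Eq.~(\ref{axialcurrent}) yields this result as follows: expanding the phase factors, $e^{-is_\perp\Delta\epsilon^3}-e^{is_\perp\Delta\epsilon^3}=-2i\sin(s_\perp\Delta\epsilon^3)\simeq-2is_\perp\Delta\epsilon^3$, and the symmetric limit with $g^{33}=-1$ gives $\epsilon^{\mu}\epsilon^{3}/\epsilon^4\to g^{\mu 3}/(4\epsilon^2)=-\delta^{\mu}_{3}/(4\epsilon^2)$, so that
\begin{equation}
\frac{i\epsilon^{\mu}s_\perp}{\pi^2\epsilon^4}\left(-2is_\perp\Delta\epsilon^3\right)
=\frac{2\Delta\,\epsilon^{\mu}\epsilon^3}{\pi^2\epsilon^4}
\to -\frac{\Delta}{2\pi^2\epsilon^2}\,\delta^{\mu}_{3}\,.
\end{equation}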
It is clear that $- 1/\epsilon^2$ plays the role of a Euclidean ultraviolet cutoff
$\Lambda^2$. This feature of expression (\ref{chiral-current}) agrees with the
results obtained in Ref.~\cite{Gorbar:2009bm}. As was shown there, while the
contribution of each Landau level into the axial current density $\langle
j^{\mu}_5 \rangle$ is finite at a fixed $\Delta$, their total contribution is
quadratically divergent. However, the important point is that since the
solution of gap equation (\ref{gap-Delta}) for the dynamical shift $\Delta$
yields $\Delta \sim g\mu\, eB/\Lambda^2 $, the axial current density is
actually finite. The explicit expressions for $\Delta$ and $\langle j^{3}_5
\rangle$ are \cite{Gorbar:2009bm}:
\begin{eqnarray}
\Delta &\simeq & -g \mu\frac{eB}{\Lambda^2\left(1+2 a g\right)},
\label{Delta-vs-mu}
\\
\langle j^{3}_5 \rangle
&\simeq & \frac{eB}{2\pi^{2}} \mu + a \frac{\Lambda^2}{\pi^{2}} \Delta
\simeq \frac{eB}{2\pi^{2}}\frac{\mu}{\left(1+ 2 a g\right)},
\label{a}
\end{eqnarray}
where $a$ is a dimensionless constant of order one \cite{footnote1}, which is determined by the
regularization scheme used, and $g$ is the coupling constant defined in Sec.~\ref{2}. Note that
both the topological and dynamical contributions are included in $\langle j^{3}_5 \rangle$.
(Terms of higher order in powers of $|eB|/\Lambda^2$ are neglected in both expressions.)
In Ref.~\cite{Gorbar:2009bm}, a gauge noninvariant regularization (with a cutoff in
a sum over Landau levels) was used. One can show that the main features of the structure of the
axial current in model (\ref{NJLmodel}) remain the same also in the gauge invariant
proper-time regularization \cite{GMS}. In this regularization,
\begin{eqnarray}
\Delta &\simeq & -g \mu \frac{eB}{\Lambda^2 (1+g/2)}, \\
\langle j^{3}_5 \rangle &\simeq & \frac{eB}{2\pi^{2}} \mu +
\frac{\sqrt{\pi}}{2(2\pi l)^2}
\frac{e^{-s \Delta^2 }}{\sqrt{s}}
\mbox{erfi}(\sqrt{s} \Delta )\coth(eBs) \Bigg|_{s=1/\Lambda^2}
\simeq \frac{eB}{2\pi^{2}}\frac{\mu}{\left(1+g/2\right)},
\end{eqnarray}
where $\mbox{erfi}(x)\equiv -i\, \mbox{erf}(i x)$ is the imaginary error function.
Note that in this case the parameter $a$ equals $1/4$ [compare with Eq.~(\ref{a})].
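To see explicitly how the value $a=1/4$ arises, one can expand for small $\sqrt{s}\Delta$ and $eBs$, using $\mbox{erfi}(x)\simeq 2x/\sqrt{\pi}$ and $\coth(eBs)\simeq 1/(eBs)$, and assuming that $l$ denotes the magnetic length, $l^2=1/|eB|$ (for definiteness we take $eB>0$):
\begin{equation}
\frac{\sqrt{\pi}}{2(2\pi l)^2}\frac{e^{-s \Delta^2 }}{\sqrt{s}}
\,\mbox{erfi}(\sqrt{s} \Delta )\coth(eBs)
\simeq \frac{\sqrt{\pi}\,|eB|}{8\pi^2}\,\frac{2\Delta}{\sqrt{\pi}}\,\frac{1}{eBs}
\Bigg|_{s=1/\Lambda^2}
= \frac{\Lambda^2 \Delta}{4\pi^2}\,,
\end{equation}
which indeed corresponds to $a\Lambda^2\Delta/\pi^2$ with $a=1/4$.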
We conclude that interactions leading to the shift parameter $\Delta$ essentially change
the form of the induced axial current in a magnetic field. It is important to mention that
unlike the topological contribution in $\langle j^{\mu}_5 \rangle$ \cite{Metlitski:2005pr},
the dynamical one is generated by all Landau levels.
\subsection{Axial anomaly}
\label{3b}
In this subsection we will show that the shift parameter $\Delta$ does not affect
the axial anomaly.
In the gauge invariant point-splitting regularization, the divergence of the axial current
in the massless theory equals \cite{Peskin,Ioffe}
\begin{equation}
\partial_{\mu}j^{\mu}_5(u)=ie\epsilon^{\alpha}\bar{\psi}(u +\epsilon)
\gamma^{\mu}\gamma^5\psi(u)\left. F_{\alpha\mu}\right|_{ \epsilon \to 0}\,.
\label{divergence}
\end{equation}
Then, calculating the vacuum expectation value of the divergence of the axial current, we find
\begin{equation}
\langle \partial_{\mu}j^{\mu}_5(u) \rangle
=-ie\epsilon^{\alpha}F_{\alpha\mu}\mbox{tr}\left[\gamma^{\mu}\gamma^5
G(u,u+\epsilon) \right]_{\epsilon \to 0} =
ie\epsilon^{\alpha}F_{\alpha\mu} \langle j^{\mu}_5(u) \rangle\,,
\label{divergence1}
\end{equation}
where $G(u,u^{\prime})$ is the fermion propagator in Eq.~(\ref{propagator-transformed}).
Let us check that the presence of $\Delta$ in $G(u,u^{\prime})$, which modifies the axial
current, does not affect the standard expression for the axial anomaly.
We start by considering the first term in the axial current density in Eq.~(\ref{axialcurrent}):
\begin{equation}
\frac{i\epsilon^{\mu}s_{\perp}}{\pi^2\epsilon^4} \left(e^{- i
s_{\perp} \Delta \epsilon^3}-e^{i s_{\perp} \Delta \epsilon^3}\right)
\simeq \frac{2\Delta\epsilon^{\mu}\epsilon^3}{\pi^2\epsilon^4}
\left(1-\frac{\Delta^2\epsilon^2_3}{6}+...\right)\,.
\label{chiral-current-series}
\end{equation}
Its contribution to the right-hand side of Eq.~(\ref{divergence1}) is
\begin{equation}
\frac{2i\Delta\,\epsilon^{\alpha}\epsilon^{\mu}\epsilon^3}{\pi^2\epsilon^4}
\left(1-\frac{\Delta^2\epsilon^2_3}{6}+...\right)eF_{\alpha \mu} \,.
\label{new-term}
\end{equation}
Since this expression contains only odd powers of $\epsilon$, it gives zero contribution after
symmetric averaging over space-time directions of $\epsilon$.
Thus, only the second term in Eq.~(\ref{axialcurrent}) is relevant for the divergence of
axial current in Eq.~(\ref{divergence1}):
\begin{equation}
\langle \partial_{\mu}j^{\mu}_5(u) \rangle =
-\frac{e^2\epsilon^{\beta\mu\lambda\sigma}F_{\alpha\mu}F_{\lambda\sigma}
\epsilon^{\alpha}\epsilon_{\beta}}{8\pi^2\epsilon^2} \left(e^{- i s_{\perp} \Delta \epsilon^3}
+e^{i s_{\perp} \Delta \epsilon^3}\right) \to
-\frac{e^2}{16 \pi^2}
\epsilon^{\beta\mu\lambda\sigma}F_{\beta\mu}F_{\lambda\sigma}
\label{divergence-final}
\end{equation}
for $\epsilon\to 0$ and symmetric averaging over space-time directions of $\epsilon$. Therefore,
the presence of the shift parameter $\Delta$ indeed does not affect the axial anomaly.
\section{Discussion}
\label{discussion}
The emphasis in this Letter was on studying the structure of the induced axial
current and the chiral anomaly in the normal phase in magnetized relativistic matter.
Our conclusion is that there are two components in this current, the topological component,
induced only in the LLL, and the dynamical one provided by the chiral shift $\Delta$
(and generated in all Landau levels). While the former is intimately connected with
the axial anomaly, the latter does not affect the form of the anomaly at all. Thus
one can say that while the topological component of the current is anomalous,
the dynamical one is normal.
The present analysis was realized in the NJL model. It would be important to extend
it to renormalizable field theories, especially, QED and QCD. In connection with
that, we would like to note the following. The expression for the chiral shift parameter,
$\Delta \sim g\mu \, eB/\Lambda^2$, obtained in the NJL model implies that both fermion
density and magnetic field are necessary for the generation of $\Delta$. This feature
should also be valid in renormalizable theories. As for the cutoff $\Lambda$, it enters
the results only because of the nonrenormalizability of the NJL model. Similar studies of chiral
symmetry breaking in the vacuum ($\mu_0 = 0$) QED and QCD in a magnetic field show
that the cutoff scale $\Lambda$ is replaced by $\sqrt{|eB|}$ there \cite{QED}. Therefore,
one might expect that in QED and QCD with both $\mu$ and $B$ being nonzero, $\Lambda$
will be replaced by a physical parameter, such as $\sqrt{|eB|}$. This in turn suggests that
a constant chiral shift parameter $\Delta$ will become a running quantity that depends on the
longitudinal momentum $k^3$ and the Landau level index $n$.
It has been recently suggested in Refs.~\cite{Basar:2010zd,Kim} that a chiral magnetic spiral
solution is realized in the chirally broken phase in the presence of a strong magnetic field. Like
the present solution with the chiral shift parameter $\Delta$, the chiral magnetic spiral one
is anisotropic, but besides that it is also inhomogeneous. It is essential, however, that the solution
with the chiral shift is realized in the {\it normal} phase of matter, in which the fermion density and
the axial current density are non-vanishing. It would be interesting to clarify whether there is a
connection between these two solutions describing the dynamics in the two different phases
of magnetized relativistic matter.
In this Letter, we concentrated on the basic and delicate questions regarding the chiral shift
parameter $\Delta$, the induced axial current, and the axial anomaly, but did not address
many specific details regarding the dynamics, e.g., those related to the chiral asymmetry
of the Fermi surface \cite{Gorbar:2009bm} and a dependence of $\Delta$ on the temperature
and the current fermion mass. These issues, which are of great interest because of their
potential applications in neutron stars and in heavy ion collisions, will be considered
elsewhere \cite{GMS}.
\begin{acknowledgments}
The authors would like to thank V.~P.~Gusynin for fruitful discussions. The work of E.V.G. was
supported partially by the SCOPES Grant No. IZ73Z0-128026 of the Swiss NSF, by
Grant No. SIMTECH 246937 of the European FP7 program, and by the joint Grant RFFR-DFFD
No. F28.2/083 of the Russian Foundation for Fundamental Research and of the Ukrainian
State Foundation for Fundamental Research (DFFD). The work of V.A.M. was supported
by the Natural Sciences and Engineering Research Council of Canada. The work of I.A.S.
is supported in part by a start-up fund from the Arizona State University and by the U.S.
National Science Foundation under Grant No. PHY-0969844.
\end{acknowledgments}
\section{Introduction} \label{sec:introduction}
The parton distribution functions (PDFs) of the proton are best determined from global analysis of a wide variety of deep-inelastic scattering (DIS) and related hard-scattering data taken from both fixed-target experiments and colliders (HERA, the Tevatron, and most recently the LHC). Propagation of the experimental errors on the fitted data points to the uncertainties on the PDFs is a non-trivial task. The traditional Hessian method requires effective error inflation by a \emph{tolerance} parameter to accommodate minor inconsistencies between the fitted data sets. This means that the PDF uncertainties cannot be considered to be statistically rigorous, despite the r\^ole of PDF uncertainties as an important (and sometimes dominant) source of theoretical uncertainty on predicted quantities, such as the cross sections for Drell--Yan processes or Higgs boson production at the Tevatron and LHC~\cite{Watt:2011kp,Thorne:2011kq}. Moreover, the number of fitted parameters for error propagation in the Hessian method must be kept sufficiently small to avoid large correlations, often requiring several parameters to be held fixed and thereby introducing a potential parameterisation bias. Some insight into these problems may be gained using Monte Carlo techniques~\cite{Giele:1998gw,Giele:2001mr}, recently used in conjunction with a neural-network parameterisation by the NNPDF Collaboration (\cite{Ball:2011eq}, and references therein), where a large number $N_{\rm rep}\sim \mathcal{O}(10$--$1000)$ of fits are performed, each to a sample of replica pseudodata generated by shifting the original data points by random amounts dependent on the data errors. Then the PDF uncertainties can be calculated by simply taking the standard deviation of the resulting $N_{\rm rep}$ PDF sets.
In this paper we make a first study of the Monte Carlo approach to experimental error propagation within the context of the established ``MSTW 2008'' PDF determination~\cite{Martin:2009iq}. We retain the usual functional-form parameterisation and least-squares $\chi^2$-minimisation (using the Levenberg--Marquardt algorithm) rather than moving to the neural-network parameterisation and genetic-algorithm $\chi^2$-minimisation of the NNPDF approach~\cite{Ball:2011eq}. We focus on the most widely-used PDF determination at next-to-leading order (NLO) in the strong coupling $\alpha_S$, although the results would be expected to be similar at leading-order (LO) and at next-to-next-to-leading order (NNLO). Moreover, to avoid complications associated with simultaneously fitting $\alpha_S$ with the PDFs, throughout this paper we keep the value of $\alpha_S(M_Z^2)$ held fixed at the MSTW 2008 NLO best-fit value. First in section~\ref{sec:generation} we describe the Monte Carlo approach using data replicas and compare results to the usual Hessian method, then in section~\ref{sec:parambias} we explore potential parameterisation bias by increasing the number of free parameters. We then motivate the need for a tolerance parameter by fitting restricted data sets in section~\ref{sec:restricted} and by fitting idealised pseudodata in section~\ref{sec:theory}. In section~\ref{sec:random} we explain how to produce PDF sets randomly distributed in the space of parameters rather than in the space of data, which allows the inclusion of a suitable tolerance. As an example application of these random PDFs, in section~\ref{sec:reweighting} we demonstrate the use of Bayesian reweighting to study the effect of recent LHC data on the $W\to\ell\nu$ charge asymmetry~\cite{Chatrchyan:2011jz,Aad:2011dm}. Finally, we conclude in section~\ref{sec:conclusions}.
\section{Comparison of Hessian and Monte Carlo uncertainties} \label{sec:generation}
\subsection{Recap of the Hessian method}
The basic procedure for propagating experimental uncertainties in global PDF analyses using the Hessian method is discussed in detail in refs.~\cite{Pumplin:2001ct,Pumplin:2002vw,Martin:2002aw,Martin:2009iq}. Here, we briefly review the salient points. We assume that the global goodness-of-fit quantity, $\chi^2_{\rm global}$, is quadratic about the global minimum, which has $n$ best-fit parameters $\{a_1^0,\ldots,a_n^0\}$. In this case we can write
\begin{equation} \label{eq:hessian}
\Delta\chi^2_{\rm global} \equiv \chi^2_{\rm global} - \chi_{\rm min}^2 = \sum_{i,j=1}^n H_{ij}(a_i-a_i^0)(a_j-a_j^0),
\end{equation}
where the Hessian matrix $H$ has components
\begin{equation}
H_{ij} = \left.\frac{1}{2}\frac{\partial^2\chi^2_{\rm global}}{\partial a_i\partial a_j}\right|_{\rm min}.
\end{equation}
It is convenient to diagonalise the covariance (inverse Hessian) matrix, $C\equiv H^{-1}$, also known as the error matrix, and work in terms of the eigenvectors and eigenvalues. Since the covariance matrix is symmetric it has a set of orthonormal eigenvectors $\vec{v}_k$ defined by
\begin{equation} \label{eq:eigeq}
\sum_{j=1}^n C_{ij} v_{jk} = \lambda_k v_{ik},
\end{equation}
where $\lambda_k$ is the $k$th eigenvalue and $v_{ik}$ is the $i$th component of the $k$th orthonormal eigenvector ($k = 1,\ldots,n$). The parameter displacements from the global minimum can be expanded in a basis of rescaled eigenvectors $e_{ik}\equiv \sqrt{\lambda_k}v_{ik}$, that is,
\begin{equation} \label{eq:eigbasis}
a_i - a_i^0 = \sum_{k=1}^n e_{ik} z_k.
\end{equation}
Then it can be shown, using the orthonormality of $\vec{v}_k$, that eq.~\eqref{eq:hessian} reduces to
\begin{equation} \label{eq:hessiandiag}
\chi^2_{\rm global} = \chi^2_{\rm min} + \sum_{k=1}^n z_k^2,
\end{equation}
that is, $\sum_{k=1}^n z_k^2\le T^2$ is the interior of a hypersphere of radius $T$. Pairs of eigenvector PDF sets $S_k^\pm$ can then be produced to span this hypersphere, with parameters given by
\begin{equation} \label{eq:eigenstept}
a_i(S_k^\pm) = a_i^0 \pm t\,e_{ik}.
\end{equation}
In the quadratic approximation, $t=T\equiv(\Delta\chi^2_{\rm global})^{1/2}$, but particularly for the larger eigenvalues $\lambda_k$ there are significant deviations from the ideal quadratic behaviour, so in general $t$ is adjusted iteratively to give the desired value of $T$. Then asymmetric PDF uncertainties on a quantity $F$, which may be an individual PDF at particular values of $x$ and $Q^2$, or a derived quantity such as a cross section, can be calculated with the following ``master equations'':
\begin{align}
(\Delta F)_+ &= \sqrt{\sum_{k=1}^n \left\{{\rm max}\left[\;F(S_k^+)-F(S_0),\;F(S_k^-)-F(S_0),\;0\right]\right\}^2}, \label{eq:Fp} \\
(\Delta F)_- &= \sqrt{\sum_{k=1}^n \left\{{\rm max}\left[\;F(S_0)-F(S_k^+),\;F(S_0)-F(S_k^-),\;0\right]\right\}^2}, \label{eq:Fm}
\end{align}
where $S_0$ is the central PDF set. Symmetric PDF uncertainties can be calculated with
\begin{equation} \label{eq:symmunc}
\Delta F = \frac{1}{2}\sqrt{\sum_{k=1}^n \left[F(S_k^+)-F(S_k^-)\right]^2}.
\end{equation}
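For illustration, the diagonalisation in eqs.~\eqref{eq:eigeq}--\eqref{eq:eigenstept} and the master equations \eqref{eq:Fp}--\eqref{eq:symmunc} can be transcribed directly into code; the following is a minimal sketch in the quadratic approximation (so that $t=T$; in practice $t$ is adjusted iteratively, as noted above):
\begin{verbatim}
import numpy as np

def eigenvector_sets(H, a0, t=1.0):
    # Diagonalise the covariance matrix C = H^{-1} and build the rescaled
    # eigenvector basis e_ik = sqrt(lambda_k) v_ik.
    C = np.linalg.inv(H)
    lam, v = np.linalg.eigh(C)     # C v_k = lambda_k v_k
    e = v * np.sqrt(lam)           # column k is e_k = sqrt(lam_k) v_k
    # Parameters of the eigenvector PDF sets S_k^{+/-}.
    return [(a0 + t * e[:, k], a0 - t * e[:, k]) for k in range(len(a0))]

def pdf_uncertainties(F0, Fp, Fm):
    # Master equations: Fp[k] = F(S_k^+), Fm[k] = F(S_k^-), F0 = F(S_0).
    zero = np.zeros_like(Fp)
    dF_plus  = np.sqrt(np.sum(np.maximum.reduce([Fp - F0, Fm - F0, zero])**2))
    dF_minus = np.sqrt(np.sum(np.maximum.reduce([F0 - Fp, F0 - Fm, zero])**2))
    dF_sym   = 0.5 * np.sqrt(np.sum((Fp - Fm)**2))
    return dF_plus, dF_minus, dF_sym
\end{verbatim}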
Ideally, with the standard ``parameter-fitting'' criterion~\cite{Collins:2001es}, we would expect the errors to be given by the choice of tolerance $T=1$ for the $68\%$ (one-sigma) confidence-level (C.L.) limit or $T=1.64$ for the $90\%$ C.L.~limit~\cite{Nakamura:2010zzi}. This criterion is appropriate if fitting consistent data sets with ideal Gaussian errors to a well-defined theory. However, in practice, there are some inconsistencies between the independent fitted data sets, and unknown experimental and theoretical uncertainties, so the parameter-fitting criterion is not appropriate for global PDF analyses. Historically, the CTEQ~\cite{Pumplin:2002vw} and MRST~\cite{Martin:2002aw} groups defined $90\%$ C.L.~uncertainties using $T = \sqrt{100}$ and $T = \sqrt{50}$, respectively. Instead, the ``MSTW 2008'' analysis~\cite{Martin:2009iq} introduced a new ``dynamic'' determination of the tolerance, chosen separately for each eigenvector direction according to a ``hypothesis-testing'' criterion~\cite{Collins:2001es} to maintain an adequate description of each individual data set in the global fit. Therefore, the distance $t$ in eq.~\eqref{eq:eigenstept} was replaced by $t_k^\pm$, adjusted to give the desired $T_k^\pm$, with an average value of $\langle t_k^\pm\rangle\approx\langle T_k^\pm\rangle\approx 3$ for 68\% C.L.~uncertainties, and $\langle t_k^\pm\rangle\approx\langle T_k^\pm\rangle\approx 6$ for 90\% C.L.~uncertainties; see figure 10 of ref.~\cite{Martin:2009iq} for the individual $T_k^\pm$ values in the MSTW 2008 NLO fit.
\subsection{Generation of Monte Carlo replica sets} \label{sec:MCgen}
We generate replica data sets with the central values shifted according to
\begin{equation} \label{eq:MCgen}
D_{m,i} \to \left(D_{m,i}+R_{m,i}^{\rm uncorr.}\,\sigma_{m,i}^{\rm uncorr.}+\sum_{k=1}^{N_{\rm corr.}}R_{m,k}^{\rm corr.}\,\sigma_{m,k,i}^{\rm corr.}\right)\cdot\left(1+R_m^{\mathcal{N}}\,\sigma_m^{\mathcal{N}}\right).
\end{equation}
Here, ``$m$'' labels a particular data set, or a combination of data sets, with a common (fitted) normalisation $\mathcal{N}_m$, ``$i$'' labels the individual data points in that data set, and ``$k$'' labels the individual correlated systematic errors for a particular data set. The individual data points $D_{m,i}$ have uncorrelated (statistical and systematic) errors $\sigma_{m,i}^{\rm uncorr.}$ and correlated systematic errors $\sigma_{m,k,i}^{\rm corr.}$. Treating the correlated errors as uncorrelated leads to the alternative form used for most of the data sets in the MSTW 2008 fit:
\begin{equation} \label{eq:MCgenQuad}
D_{m,i} \to \left(D_{m,i}+R_{m,i}^{\rm uncorr.}\,\sigma_{m,i}^{\rm tot.}\right)\cdot\left(1+R_m^{\mathcal{N}}\,\sigma_m^{\mathcal{N}}\right),
\end{equation}
where the total error is simply obtained by adding all errors (except normalisation) in quadrature,
\begin{equation}
\left(\sigma_{m,i}^{\rm tot.}\right)^2 = \left(\sigma_{m,i}^{\rm uncorr.}\right)^2 + \sum_{k=1}^{N_{\rm corr.}}\left(\sigma_{m,k,i}^{\rm corr.}\right)^2.
\end{equation}
We shift the data points in a way to be as consistent as possible with the $\chi^2$ definition used in the MSTW 2008 fit~\cite{Martin:2009iq}. The random numbers $R_{m,i}^{\rm uncorr.}$ or $R_{m,k}^{\rm corr.}$ are obtained from a Gaussian distribution of mean zero and variance one. A complication arises with the treatment of normalisation uncertainties in the MSTW 2008 analysis, where a \emph{quartic} penalty term was used in the $\chi^2$ definition instead of the usual quadratic penalty term, cf.~eqs.~(35) and (37) of ref.~\cite{Martin:2009iq}. This modification was made to discourage large normalisation shifts in the fit. It was partly motivated by claims (see section 6.7.4 on ``Normalizations'', pg.~170 in ref.~\cite{Devenish:2004pb}) that, for many experiments, quoted normalisation uncertainties represent the limits of a box-shaped distribution rather than the standard deviation of a Gaussian distribution; see further discussion in section 5.2.1 of ref.~\cite{Martin:2009iq}. The quartic $\chi^2$ penalty term is small if the fitted normalisation $\mathcal{N}_m\in[1-\sigma_m^{\mathcal{N}},1+\sigma_m^{\mathcal{N}}]$, then it rises rapidly outside this range, with the effect that the normalisation uncertainty is perhaps closer to being described by a box-shaped distribution than by a Gaussian distribution (which would correspond to a quadratic $\chi^2$ penalty term). Therefore, by default we take $R_m^{\mathcal{N}}$ in eqs.~\eqref{eq:MCgen} and \eqref{eq:MCgenQuad} to be uniformly distributed in the interval $(-1,1)$, so that the normalisation $\mathcal{N}_m$ is uniformly distributed in the interval $(1-\sigma_m^{\mathcal{N}},1+\sigma_m^{\mathcal{N}})$. However, we have also looked at the effect of obtaining $R_m^{\mathcal{N}}$ from a Gaussian distribution or alternatively simply fixing $R_m^{\mathcal{N}}=0$, i.e.~the case of fixed data set normalisations. As expected, fixing normalisations in the data replicas generally gives slightly smaller PDF uncertainties, while assuming normalisation uncertainties to be Gaussian gives larger PDF uncertainties, particularly for the up-valence distribution. However, it is perhaps inconsistent to assume Gaussian uncertainties in the replica generation with a quartic penalty term in the $\chi^2$: changing to a quadratic penalty term would allow more freedom in the fitted normalisations and so the PDF parameters would move less, likely reducing the PDF uncertainty compared to the case of a quartic penalty term. The default treatment of uniform $R_m^{\mathcal{N}}\in(-1,1)$ is probably reasonable and is closer to the treatment of normalisation uncertainties in the $\chi^2$ definition than a Gaussian $R_m^{\mathcal{N}}$. The Hessian error propagation via eigenvector PDF sets includes theoretical uncertainties on the hadronisation corrections for the CDF jet data (treated as a correlated systematic) and the small modification for the nuclear corrections ($r_1$, $r_2$, $r_3$)~\cite{Martin:2009iq}. It is currently not obvious how to treat these theoretical uncertainties in the replica generation, so the effect on PDF uncertainties will be assumed to be small.
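As a concrete illustration, the simplified replica generation of eq.~\eqref{eq:MCgenQuad}, with the default uniform normalisation shifts, amounts to the following minimal sketch (the array inputs are hypothetical placeholders for a given data set):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def make_replica(D, sig_tot, sig_norm):
    # One replica of a data set: Gaussian shifts for the point-by-point
    # errors, uniform R^N in (-1, 1) for the normalisation.
    R_uncorr = rng.standard_normal(D.shape)
    R_norm = rng.uniform(-1.0, 1.0)
    return (D + R_uncorr * sig_tot) * (1.0 + R_norm * sig_norm)
\end{verbatim}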
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/inputxgTeq1.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/inputxsminusTeq1.eps}
\end{minipage}
\caption{Comparison of Hessian and Monte Carlo results at the input scale of $Q_0^2=1$~GeV$^2$ for the (a)~gluon distribution and (b)~strange asymmetry. Both results allow $n=20$ free PDF parameters and do not apply a tolerance (i.e.~$T=1$ in the Hessian case). The best-fit (solid curves) and Hessian uncertainty (shaded region) are in good agreement with the average and standard deviation (thick dashed curves) of the $N_{\rm rep} = 40$ Monte Carlo replica PDF sets (thin dotted curves).}
\label{fig:inputpdfs}
\end{figure}
We perform a separate PDF fit to each replica data set, then we can take the average $\langle F\rangle$ and standard deviation $\Delta F$ of an observable $F$ calculated with each PDF replica set, $\mathcal{S}_k$ ($k=1,\ldots,N_{\rm rep}$), that is,
\begin{align}
\langle F\rangle = \frac{1}{N_{\rm rep}}\sum_{k=1}^{N_{\rm rep}}F(\mathcal{S}_k), \label{eq:MCav} \\
\Delta F = \sqrt{\frac{N_{\rm rep}}{N_{\rm rep}-1}\left(\langle F^2\rangle-\langle F\rangle^2\right)}. \label{eq:MCsd}
\end{align}
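In code, eqs.~\eqref{eq:MCav} and \eqref{eq:MCsd} reduce to a short computation (a minimal sketch):
\begin{verbatim}
import numpy as np

def mc_average_and_error(F_replicas):
    # Mean and standard deviation over the N_rep replicas;
    # ddof=1 supplies the N_rep/(N_rep - 1) factor.
    F = np.asarray(F_replicas)
    return F.mean(), F.std(ddof=1)
\end{verbatim}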
The number of replicas $N_{\rm rep}$ is arbitrary, but in all cases we choose to generate $N_{\rm rep} = 40$ replica PDF sets, where this number is chosen to be equal to the number of eigenvector PDF sets mostly for practical reasons, i.e.~to demonstrate that the implementation of the Monte Carlo (MC) method does not necessarily require more computer resources than the Hessian method. It could easily be increased in further studies, but first indications are that $N_{\rm rep} = 40$ is sufficiently large to avoid significant fluctuations. To allow a fair comparison with the existing Hessian eigenvector PDF sets, we take $n=20$ free PDF parameters, i.e.~8 PDF parameters are held fixed at their global best-fit values, and we do not apply a tolerance, i.e.~we use the Hessian eigenvector PDF sets corresponding to $T=1$ (see section~6.6 of ref.~\cite{Martin:2009iq}). In figure~\ref{fig:inputpdfs} we show the input gluon distribution and strange asymmetry for the $N_{\rm rep} = 40$ MC replica PDF sets (thin dotted curves), and their average and standard deviation (thick dashed curves), and we compare to the best-fit and Hessian uncertainty (solid curves and shaded region). We find good agreement of the Hessian and MC results at all $x$ and $Q^2$ values, and for all parton flavours, as will be demonstrated more explicitly in the next section.
Similar comparisons between Hessian and MC results were performed in a fit only to the H1 data from HERA I on neutral- and charged-current $e^\pm p$ cross sections~\cite{Dittmar:2009ii}, but it is still reassuring that we find a similar good agreement in the context of a more complicated global fit. On the other hand, in section~6.6 of ref.~\cite{Martin:2009iq} we also performed a fit to a reduced data set consisting of a limited number of inclusive DIS data sets (BCDMS, NMC, H1, ZEUS) with fairly conservative cuts of $Q^2\ge9$~GeV$^2$ and $W^2\ge 15$~GeV$^2$, where eigenvector PDF sets were produced with $n=16$ free PDF parameters for both a dynamic tolerance and with $T=1$. We find that there are some differences between the MC results with $n=16$ free PDF parameters and the Hessian results with $T=1$. The approximate equivalence between the Hessian and MC methods may break down, therefore, when fitting a limited selection of discrepant data sets that are insufficient to unambiguously constrain all fitted parameters.
\section{Investigation of potential parameterisation bias} \label{sec:parambias}
Recall the MSTW 2008 NLO PDF parameterisation at the input scale $Q_0^2 = 1$ GeV$^2$~\cite{Martin:2009iq}:
\begin{align}
xu_v & \equiv xu-x\bar{u} = A_u\,x^{\mbox{\large $\boldsymbol{\color{red} \eta_1}$}} (1-x)^{\mbox{\large $\boldsymbol{\color{red} \eta_2}$}} (1 + \mbox{\Large $\boldsymbol{\color{red} \epsilon_u}$}\,\sqrt{x} + {\color{blue}\gamma_u}\,x), \\
xd_v & \equiv xd-x\bar{d} = A_d\,x^{\mbox{\large $\boldsymbol{\color{red} \eta_3}$}} (1-x)^{\mbox{\large $\boldsymbol{\color{red} \eta_4}$}} (1 + \mbox{\Large $\boldsymbol{\color{red} \epsilon_d}$}\,\sqrt{x} + {\color{blue}\gamma_d}\,x), \\
xS & \equiv 2x\bar{u}+2x\bar{d}+xs+x\bar{s} = \mbox{\Large $\boldsymbol{\color{red} A_S}$}\,x^{{\color{blue}\delta_S}} (1-x)^{\mbox{\large $\boldsymbol{\color{red} \eta_S}$}} (1 + \mbox{\Large $\boldsymbol{\color{red} \epsilon_S}$}\,\sqrt{x} + {\color{blue}\gamma_S}\,x), \\
x\Delta & \equiv x\bar{d} - x \bar{u} = \mbox{\Large $\boldsymbol{\color{red} A_\Delta}$}\,x^{\mbox{\Large $\boldsymbol{\color{red} \eta_\Delta}$}} (1-x)^{\eta_S+2} (1 + \mbox{\Large $\boldsymbol{\color{red} \gamma_\Delta}$}\,x + {\color{blue}\delta_\Delta}\,x^2), \\
xg &= A_g\,x^{\mbox{\large $\boldsymbol{\color{red} \delta_g}$}} (1-x)^{\mbox{\large $\boldsymbol{\color{red} \eta_g}$}} (1 + {\color{blue}\epsilon_g}\,\sqrt{x} + {\color{blue}\gamma_g}\,x) + {\color{blue}A_{g^\prime}}\,x^{\mbox{\large $\boldsymbol{\color{red} \delta_{g^\prime}}$}} (1-x)^{\mbox{\large $\boldsymbol{\color{red} \eta_{g^\prime}}$}}, \label{eq:gluon} \\
xs + x\bar{s} &= \mbox{\Large $\boldsymbol{\color{red} A_{+}}$}\,x^{\delta_S}\,(1-x)^{\mbox{\large $\boldsymbol{\color{red} \eta_{+}}$}} (1 + \epsilon_S\,\sqrt{x} + \gamma_S\,x), \label{eq:splus} \\
xs - x\bar{s} &= \mbox{\Large $\boldsymbol{\color{red} A_{-}}$}\,x^{0.2} (1-x)^{\mbox{\large $\boldsymbol{\color{red} \eta_{-}}$}} (1-x/x_0). \label{eq:sminus}
\end{align}
The parameters $A_u$, $A_d$, $A_g$ and $x_0$ were fixed by enforcing number- and momentum-sum rule constraints, while the other parameters were allowed to go free to determine the overall best fit. The 20 highlighted (red) parameters were those allowed to go free when producing the eigenvector PDF sets, where the other 8 (blue) parameters were held fixed, as for the MC results in the previous section. However, this is not in fact necessary in the MC approach where it is only needed to find the best fit for each replica data set, and the Hessian matrix is not used for error propagation. Therefore, we can perform MC replica fits with all 28 free parameters to examine the effect on PDF uncertainties of the greater freedom in parameterisation, and to explore the extent that the Hessian uncertainties are limited by the restricted parameterisation.
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/fracupvalence_1GeV.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/fracdownvalence_1GeV.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/fracantiup_1GeV.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/fracantidown_1GeV.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(e)\\
\includegraphics[width=\textwidth]{Figs/fracstrange_1GeV.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(f)\\
\includegraphics[width=\textwidth]{Figs/fracgluon_1GeV.eps}
\end{minipage}
\caption{Effect of $n=20\to28$ parameters on percentage PDF uncertainties at $Q^2=1~{\rm GeV}^2$.}
\label{fig:frac_1GeV}
\end{figure}
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/fracupvalence.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/fracdownvalence.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/fracantiup.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/fracantidown.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(e)\\
\includegraphics[width=\textwidth]{Figs/fracstrange.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(f)\\
\includegraphics[width=\textwidth]{Figs/fracgluon.eps}
\end{minipage}
\caption{Effect of $n=20\to28$ parameters on percentage PDF uncertainties at $Q^2=(100~{\rm GeV})^2$.}
\label{fig:frac}
\end{figure}
Recall~\cite{Martin:2009iq} that the reason to freeze several parameters before applying the Hessian method was to reduce the large correlations between some parameters, which would lead to severe breaking of the quadratic behaviour of $\Delta\chi^2$, meaning that linear error propagation would not be applicable. (A similar procedure was used in the CTEQ global fits; see, for example, section~5 of ref.~\cite{Tung:2006tb}.) We observed some departure from the ideal quadratic behaviour of $\Delta\chi^2$ even with only 20 parameters; see figures~5 and 6 of ref.~\cite{Martin:2009iq}. However, even with all 28 parameters free, the Hessian matrix is generally still positive-definite (has positive eigenvalues) and therefore we can still be relatively confident that the best fit is correctly determined. Note that we use the Levenberg--Marquardt algorithm for $\chi^2$-minimisation, which combines the advantages of the inverse-Hessian method and the steepest-descent method, and therefore simply finding the best fit is less reliant on accurate knowledge of the Hessian matrix compared to subsequent error propagation using the Hessian method.
In figure~\ref{fig:frac_1GeV} we show percentage uncertainties at the input scale $Q_0^2=1$~GeV$^2$, and in figure~\ref{fig:frac} we show percentage uncertainties after evolving to $Q^2=(100~{\rm GeV})^2$. We show only the uncertainties since the MC average is very close to the Hessian best-fit, with residual differences likely explained by statistical fluctuations. Again the MC uncertainties with $n=20$ input PDF parameters are in good agreement with the Hessian uncertainties with $\Delta\chi^2=1$, and both are much smaller than the 68\% C.L.~uncertainties including the dynamic tolerance. We show the effect of moving to $n=28$ input PDF parameters, which gives significantly larger $u_v$ and $d_v$ uncertainties mainly at low $x$ values (removing some unusual shapes in the $x$ dependence) and slightly larger gluon uncertainties around $x\sim 0.05$ in figure~\ref{fig:frac_1GeV}(f) and around $x\sim 0.01$ in figure~\ref{fig:frac}(f), but in all cases the MC uncertainties are still much smaller than the Hessian uncertainties at 68\% C.L.
One can see from the equations above that in going from a total of $20\to28$ input PDF parameters, the number of free parameters for both $xu_v$ and $xd_v$ goes from $3\to4$, for $xS$ ($\equiv 2x\bar{u}+2x\bar{d}+xs+x\bar{s}$) goes from $3\to5$, and for $xg$ goes from $4\to 7$. While there is perhaps some degree of parameterisation bias in the valence-quark distributions, the insensitivity of the sea-quark and gluon distributions to the relatively large increase in the number of free parameters suggests that parameterisation bias is likely to be small in those cases. Of course, an exception is the strange-quark and -antiquark distributions which are certainly constrained by the choice of parameterisation outside the limited data region ($0.01\lesssim x\lesssim 0.2$) of the CCFR/NuTeV dimuon cross sections. For example, the low-$x$ behaviour of $s$ and $\bar{s}$ is assumed to be the same as for $\bar{u}$ and $\bar{d}$, as suggested by arguments based on both Regge theory and perturbative QCD (see discussion in section~6.5.5 of ref.~\cite{Martin:2009iq}).
The study of potential parameterisation bias presented here is indicative rather than exhaustive. It could be followed up by a more involved study, for example, using Chebyshev polynomials along the lines of refs.~\cite{Pumplin:2009bb,Glazov:2010bw}. However, switching to an extremely flexible parameterisation brings the danger of fitting the statistical fluctuations of the data unless some method is used to enforce smoothness. We note that the limiting power-law behaviour as $x\to 0$ and $x\to 1$ is well-motivated by Regge theory and counting rules, respectively, and it is difficult to perceive a sensible alternative. More discussion and justification for the MSTW 2008 input parameterisation was given in section~6.5 of ref.~\cite{Martin:2009iq}.
\section{Fits to restricted data sets using data replicas} \label{sec:restricted}
Although we see little evidence for significant parameterisation bias in the MSTW 2008 \emph{global} fit, this might not be true for some ``non-global'' fits which tend to be constrained by parameterisation choices in the absence of relevant data. For example, the input parameterisation at $Q_0^2 = 1.9$~GeV$^2$ in the HERAPDF1.0~\cite{HERA:2009wt} or HERAPDF1.5 NLO~\cite{HERA:2010} analyses takes the form:
\begin{align*}
xu_v &= A_{u_v}\,x^{{\color{red} B_{q_v}}} (1-x)^{{\color{red} C_{u_v}}} (1 + {\color{red} E_{u_v}}\,x^2), \\
xd_v &= A_{d_v}\,x^{B_{q_v}} (1-x)^{{\color{red} C_{d_v}}}, \\
x\bar{u} &= {\color{red} A_{\bar{q}}}\,x^{{\color{red} B_{\bar{q}}}} (1-x)^{{\color{red} C_{\bar{u}}}}, \\
x\bar{d} &= A_{\bar{q}}\,x^{B_{\bar{q}}} (1-x)^{{\color{red} C_{\bar{d}}}}, \\
x\bar{s} &= 0.45\,x\bar{d}, \\
xs &= x\bar{s}, \\
xg &= A_g\,x^{{\color{red} B_g}} (1-x)^{{\color{red} C_g}}.
\end{align*}
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/fracupvalence_varyhera.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/fracdownvalence_varyhera.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/fracantiup_varyhera.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/fracantidown_varyhera.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(e)\\
\includegraphics[width=\textwidth]{Figs/fracstrange_varyhera.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(f)\\
\includegraphics[width=\textwidth]{Figs/fracgluon_varyhera.eps}
\end{minipage}
\caption{Effect on percentage PDF uncertainties of fitting subsets of MSTW 2008 global data.}
\label{fig:frac_varyhera}
\end{figure}
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/ratioupvalence_varyhera.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/ratiodownvalence_varyhera.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/ratioantiup_varyhera.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/ratioantidown_varyhera.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(e)\\
\includegraphics[width=\textwidth]{Figs/ratiostrange_varyhera.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(f)\\
\includegraphics[width=\textwidth]{Figs/ratiogluon_varyhera.eps}
\end{minipage}
\caption{Effect on PDFs of fitting subsets of MSTW 2008 global data.}
\label{fig:ratio_varyhera}
\end{figure}
There are only 10 parameters used to obtain the central fit and ``experimental'' uncertainties, although the more recent HERAPDF1.5 NNLO~\cite{HERA:2011} analysis introduces 4 more parameters (2 for $g$ and 1 each for $u_v,d_v$). The HERAPDF analyses additionally include ``model'' and ``parameterisation'' uncertainties that can be much larger than the ``experimental'' uncertainties. For example, quantities sensitive to the high-$x$ gluon distribution have a very large ``model'' uncertainty in the HERAPDF1.5 NNLO analysis due to variation of the minimum $Q^2$ cut~\cite{Watt:2012fj}.

Nevertheless, it is interesting to investigate the potentially more realistic constraint arising only from HERA data with a flexible parameterisation; see also similar studies by the NNPDF Collaboration~\cite{Ball:2011uy}. This would be difficult to achieve in the Hessian method, where several parameters would need to be held fixed to use the covariance matrix for error propagation, but it is straightforward using the MC method. We fit subsets of the global data included in the MSTW 2008 NLO analysis~\cite{Martin:2009iq}, specifically (i)~\emph{excluding} all HERA data (neutral-current $e^\pm p$ and charged-current $e^+ p$ cross sections, $F_2^{\rm charm}$, and inclusive jet production in DIS), (ii)~including \emph{only} HERA data, and (iii)~performing a ``collider'' fit, meaning data from HERA and the Tevatron (inclusive jet production, the $W\to\ell\nu$ charge asymmetry, and the $Z$ rapidity distribution) with no fixed-target data. The HERA-only fit uses the older separate H1 and ZEUS inclusive cross sections rather than the more precise combined HERA I data~\cite{HERA:2009wt} used in the HERAPDF fits. On the other hand, the public HERAPDF fits~\cite{HERA:2009wt,HERA:2010,HERA:2011} do not use data on $F_2^{\rm charm}$ or jet production. In all cases we use the MC method with $n=28$ free parameters wherever possible. However, for the HERA-only and HERA+Tevatron fits, there is no constraint at all on the strange asymmetry since the CCFR/NuTeV dimuon cross sections are missing, so we fix $s-\bar{s}$ at the global best-fit value, leaving $n=26$ free parameters.

The percentage uncertainties on the PDFs at $Q^2=(100~{\rm GeV})^2$ from the various fits are shown in figure~\ref{fig:frac_varyhera}. The results reflect what might na\"ively be expected. For example, removing HERA data gives a huge increase in the small-$x$ uncertainties for the sea-quarks and gluon, but the valence-quark uncertainties are almost unchanged. With only HERA data, the gluon and antiquarks are still well-constrained at small $x$, but not at large $x$, and there are huge uncertainties in the valence- and strange-quark distributions. Adding the Tevatron data helps, but the collider-only uncertainty is still much larger than in the global fit, so really we need data from HERA, the Tevatron \emph{and} the fixed-target experiments to obtain a meaningful result. The corresponding ratios to the global fit are shown in figure~\ref{fig:ratio_varyhera}. Here, we see that the uncertainty bands from fits to subsets of the global data do not always overlap with those from the global fit, implying some tension between the different data sets, and suggesting that some kind of error inflation (or \emph{tolerance}) is necessary.
A similar exercise was performed in the MSTW 2008 paper~\cite{Martin:2009iq} to a ``reduced'' data set, with a slightly more constrained parameterisation, and we find similar results if fitting the same ``reduced'' data set using the MC method.
\section{Fits to idealised consistent and inconsistent pseudodata} \label{sec:theory}
As a further exercise to examine potential data set inconsistency within the global fit, we generate idealised pseudodata from the best-fit theory predictions, i.e.~we replace $D_{m,i}$ by $T_{m,i}$ on the right-hand side of eqs.~\eqref{eq:MCgen} and \eqref{eq:MCgenQuad}, where $T_{m,i}$ are the theory predictions evaluated using the global best-fit parameters. The pseudodata are then simply given by the best-fit theory predictions with appropriate Gaussian noise added, and with uncertainties given by the genuine data uncertainties. We can then introduce deliberate inconsistencies into this idealised pseudodata and investigate the effect on the fitted PDFs. We choose the following deliberate inconsistencies, intended to simulate realistic, if somewhat large, incompatibilities that could potentially be present in the genuine data:
\begin{itemize}
\item We introduce a $Q^2$-dependent offset for the H1 and ZEUS inclusive neutral-current reduced cross sections, such that the pseudodata are multiplied by a factor of $\{1\pm0.005\log[Q^2/(10\,{\rm GeV}^2)]\}$, with the ``$+$'' sign for H1 and the ``$-$'' sign for ZEUS.
\item We generate the pseudodata for the CDF and D{\O} inclusive jet cross sections with a scale choice $\mu_R=\mu_F=p_T/2$, but fit it with $\mu_R=\mu_F=p_T$.
\item We normalise the CCFR/NuTeV dimuon cross sections downwards by 10\%.
\item We normalise the NuTeV/CHORUS $xF_3$ structure functions upwards by 5\%.
\item We introduce a rapidity-dependent offset for the CDF $Z$ rapidity distribution, such that the pseudodata are multiplied by a factor of $(1+0.03\,y_Z)$.
\item We introduce an $x$-dependent offset for the BCDMS/NMC/SLAC/E665 deuteron structure functions, intended to mimic a possible deuteron correction (see the sketch following this list), such that the $F_2^d$ data are multiplied by a factor of
\[
f(x) =
\begin{cases}
(1+0.005)\left[1-0.003\log^2(x/x_1)\right] & :\quad x<x_1\\
(1+0.005)\left[1-0.018\log^2(x/x_1)+3\cdot10^{-8}\log^{20}(x/x_1)\right] & :\quad x\ge x_1
\end{cases},
\]
where $x_1=\exp(-2.5)\simeq 0.0821$.
\item We introduce a $Q^2$-dependent offset for the BCDMS $F_2^p$ and $F_2^d$ structure functions, such that the pseudodata are multiplied by a factor of $\{1+0.01\log[Q^2/(1\,{\rm GeV}^2)]\}$.
\end{itemize}
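To make these offsets concrete, the following is a minimal Python sketch of several of the multiplicative factors listed above; the function names are illustrative, and $Q^2$ is understood in units of ${\rm GeV}^2$.
\begin{verbatim}
import numpy as np

X1 = np.exp(-2.5)  # ~0.0821, boundary of the deuteron correction

def hera_offset(Q2, sign):
    """Q^2-dependent offset for the inclusive NC reduced cross
    sections: sign = +1 for H1, -1 for ZEUS."""
    return 1.0 + sign * 0.005 * np.log(Q2 / 10.0)

def deuteron_offset(x):
    """x-dependent offset applied to the F2^d pseudodata,
    mimicking a possible deuteron correction."""
    L = np.log(x / X1)
    if x < X1:
        return 1.005 * (1.0 - 0.003 * L**2)
    return 1.005 * (1.0 - 0.018 * L**2 + 3e-8 * L**20)

def bcdms_offset(Q2):
    """Q^2-dependent offset for the BCDMS F2^p and F2^d data."""
    return 1.0 + 0.01 * np.log(Q2)

def cdf_z_offset(yZ):
    """Rapidity-dependent offset for the CDF Z rapidity data."""
    return 1.0 + 0.03 * yZ
\end{verbatim}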
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/ratioupvalence_inconsistent.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/ratiodownvalence_inconsistent.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/ratioantiup_inconsistent.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/ratioantidown_inconsistent.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(e)\\
\includegraphics[width=\textwidth]{Figs/ratiostrange_inconsistent.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(f)\\
\includegraphics[width=\textwidth]{Figs/ratiogluon_inconsistent.eps}
\end{minipage}
\caption{Effect on PDFs of fitting consistent or inconsistent idealised pseudodata.}
\label{fig:ratio_theory}
\end{figure}
In figures~\ref{fig:ratio_theory} and \ref{fig:frac_theory} we show the effect of fitting the genuine data, then the consistent or inconsistent idealised pseudodata, in each case using MC error propagation with $N_{\rm rep} = 40$ replica data sets and $n=20$ input PDF parameters, and we compare to the standard MSTW 2008 NLO fit with dynamic tolerance. Despite the central values of the PDFs from the inconsistent fit shifting by significant amounts, the percentage uncertainties in figure~\ref{fig:frac_theory} are remarkably almost identical whether fitting either the genuine data, the consistent pseudodata or the inconsistent pseudodata.
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/fracupvalence_inconsistent.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/fracdownvalence_inconsistent.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/fracantiup_inconsistent.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/fracantidown_inconsistent.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(e)\\
\includegraphics[width=\textwidth]{Figs/fracstrange_inconsistent.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(f)\\
\includegraphics[width=\textwidth]{Figs/fracgluon_inconsistent.eps}
\end{minipage}
\caption{Effect on percentage PDF uncertainties of fitting consistent or inconsistent pseudodata.}
\label{fig:frac_theory}
\end{figure}
The MC fit to perfectly consistent pseudodata gives $\chi^2_{\rm global}/N_{\rm pts.}=0.98\pm0.03$, which by construction is exactly unity up to the statistical fluctuation, and similarly for the individual data sets included in the global fit; see table~\ref{tab:chisq}. On the other hand, the MC fit to the inconsistent pseudodata gives $\chi^2_{\rm global}/N_{\rm pts.}=1.07\pm0.03$, so the fit quality has only deteriorated slightly, despite the central values of some PDFs shifting well outside their original uncertainty band; see figure~\ref{fig:ratio_theory}. This result is in contradiction to what seems to be a widely held view that a fit to inconsistent data should lead to a $\chi^2/N_{\rm pts.}\gg 1$. The values of the $\chi^2 / N_{\rm pts.}$ in table~\ref{tab:chisq} deviate further from unity for a few individual data sets such as BCDMS $F_2^d$, the NMC $F_2^d/F_2^p$ ratio, NuTeV $xF_3$ and the CDF $Z$ rapidity distribution, but not by such large amounts that the inconsistent fit would not be judged to be an ``acceptable'' fit. Despite the fairly significant $Q^2$-dependent offset of the H1 and ZEUS inclusive cross sections, amounting to almost 4\% at $Q^2=500$~GeV$^2$, there is only a slight increase in the $\chi^2$ values in going from the consistent to the inconsistent fit. Similarly, by looking at the MSTW08 fit to the genuine data in table~\ref{tab:chisq}, there are only a few individual data sets with values of $\chi^2 / N_{\rm pts.}$ significantly above unity, perhaps giving the false impression that there is not a large degree of incompatibility between individual data sets.
\begin{table}
\centering
{\footnotesize
\begin{tabular}{|l|c|c|c|}
\hline
Data set & MSTW08 & Fit consistent pseudodata & Fit inconsistent pseudodata \\ \hline
BCDMS $\mu p$ $F_2$ & $1.12$ & $0.96\pm0.13$ & $1.10\pm0.15$ \\
BCDMS $\mu d$ $F_2$ & $1.26$ & $0.99\pm0.13$ & $1.44\pm0.17$ \\
NMC $\mu p$ $F_2$ & $0.98$ & $0.96\pm0.12$ & $0.97\pm0.12$ \\
NMC $\mu d$ $F_2$ & $0.83$ & $1.00\pm0.12$ & $1.05\pm0.13$ \\
NMC $\mu n/\mu p$ & $0.88$ & $1.02\pm0.12$ & $1.25\pm0.13$ \\
E665 $\mu p$ $F_2$ & $1.08$ & $0.99\pm0.18$ & $0.99\pm0.18$ \\
E665 $\mu d$ $F_2$ & $1.01$ & $1.00\pm0.18$ & $1.02\pm0.18$ \\
SLAC $ep$ $F_2$ & $0.80$ & $0.97\pm0.22$ & $0.98\pm0.23$ \\
SLAC $ed$ $F_2$ & $0.78$ & $0.98\pm0.16$ & $1.03\pm0.18$ \\
NMC/BCDMS/SLAC $F_L$ & $1.22$ & $1.04\pm0.27$ & $1.04\pm0.27$ \\ \hline
E866/NuSea $pp$ DY & $1.24$ & $0.92\pm0.10$ & $0.98\pm0.10$ \\
E866/NuSea $pd/pp$ DY & $0.93$ & $0.86\pm0.35$ & $0.96\pm0.35$ \\ \hline
NuTeV $\nu N$ $F_2$ & $0.92$ & $0.93\pm0.19$ & $1.07\pm0.19$ \\
CHORUS $\nu N$ $F_2$ & $0.62$ & $1.01\pm0.24$ & $1.08\pm0.27$ \\
NuTeV $\nu N$ $xF_3$ & $0.89$ & $0.99\pm0.19$ & $1.42\pm0.22$ \\
CHORUS $\nu N$ $xF_3$ & $0.93$ & $0.89\pm0.21$ & $1.14\pm0.25$ \\
CCFR $\nu N\to \mu\mu X$ & $0.77$ & $0.98\pm0.14$ & $1.03\pm0.14$ \\
NuTeV $\nu N\to \mu\mu X$ & $0.46$ & $0.96\pm0.16$ & $1.00\pm0.17$ \\ \hline
H1 MB 99 $e^+p$ NC & $1.15$ & $0.87\pm0.44$ & $0.92\pm0.44$ \\
H1 MB 97 $e^+p$ NC & $0.66$ & $0.99\pm0.20$ & $1.01\pm0.20$ \\
H1 low $Q^2$ 96--97 $e^+p$ NC & $0.56$ & $1.00\pm0.15$ & $1.03\pm0.15$ \\
H1 high $Q^2$ 98--99 $e^-p$ NC & $0.97$ & $0.98\pm0.12$ & $1.00\pm0.12$ \\
H1 high $Q^2$ 99--00 $e^+p$ NC & $0.89$ & $1.02\pm0.10$ & $1.05\pm0.10$ \\
ZEUS SVX 95 $e^+p$ NC & $1.16$ & $0.94\pm0.25$ & $0.94\pm0.25$ \\
ZEUS 96--97 $e^+p$ NC & $0.60$ & $1.01\pm0.11$ & $1.04\pm0.11$ \\
ZEUS 98--99 $e^-p$ NC & $0.59$ & $0.98\pm0.14$ & $1.00\pm0.14$ \\
ZEUS 99--00 $e^+p$ NC & $0.70$ & $1.02\pm0.16$ & $1.05\pm0.16$ \\
H1 99--00 $e^+p$ CC & $1.04$ & $1.00\pm0.23$ & $1.03\pm0.24$ \\
ZEUS 99--00 $e^+p$ CC & $1.27$ & $0.95\pm0.20$ & $1.02\pm0.21$ \\
H1/ZEUS $ep$ $F_2^{\rm charm}$ & $1.29$ & $1.00\pm0.12$ & $1.00\pm0.12$ \\
H1 99--00 $e^+p$ incl.~jets & $0.78$ & $1.00\pm0.30$ & $1.03\pm0.30$ \\
ZEUS 96--97 $e^+p$ incl.~jets & $0.99$ & $1.07\pm0.26$ & $1.07\pm0.25$ \\
ZEUS 98--00 $e^\pm p$ incl.~jets & $0.56$ & $0.95\pm0.25$ & $0.98\pm0.26$ \\ \hline
D{\O} II $p\bar{p}$ incl.~jets & $1.04$ & $0.96\pm0.14$ & $1.03\pm0.15$ \\
CDF II $p\bar{p}$ incl.~jets & $0.73$ & $1.01\pm0.22$ & $1.08\pm0.23$ \\
CDF II $W\to \ell\nu$ asym. & $1.32$ & $1.00\pm0.30$ & $1.03\pm0.33$ \\
D{\O} II $W\to \ell\nu$ asym. & $2.51$ & $0.94\pm0.40$ & $1.08\pm0.47$ \\
D{\O} II $Z$ rap. & $0.68$ & $1.05\pm0.29$ & $1.07\pm0.30$ \\
CDF II $Z$ rap. & $1.70$ & $1.05\pm0.29$ & $1.62\pm0.43$ \\ \hline
\textbf{All data sets} & $\mathbf{0.93}$ & $\mathbf{0.98\pm0.03}$ & $\mathbf{1.07\pm0.03}$ \\
\hline
\end{tabular}
}
\caption{Values of $\chi^2 / N_{\rm pts.}$ for the data sets in various NLO global fits. The ``MSTW08'' column shows the best-fit numbers~\cite{Martin:2009iq}. The pseudodata numbers in the other two columns are the average and standard deviation of the $\chi^2 / N_{\rm pts.}$ over $N_{\rm rep} = 40$ replica fits. See ref.~\cite{Martin:2009iq} for data references.}
\label{tab:chisq}
\end{table}
In figures~\ref{fig:ratio_collider_theory} and \ref{fig:ratio_collider_inconsistent} we show the result of another study using the same consistent or inconsistent idealised pseudodata. First we show the PDFs obtained from fitting only the collider (HERA+Tevatron) subset of the pseudodata, then we show the effect of adding the remaining fixed-target pseudodata. In the ``theory'' case in figure~\ref{fig:ratio_collider_theory}, the fixed-target pseudodata are perfectly consistent with the collider pseudodata (by construction), so the global fit gives PDFs consistent with the collider fit, but with much smaller uncertainties. This is not true for the ``inconsistent'' case in figure~\ref{fig:ratio_collider_inconsistent}, where the global fit gives PDFs often lying outside the uncertainty band for the collider fit. The latter situation arises when fitting the genuine data in figure~\ref{fig:ratio_varyhera}, implying that the real collider data are inconsistent with the real fixed-target data. Note that the peculiar behaviour at large $x$ in figures~\ref{fig:ratio_collider_theory}(c,d) and \ref{fig:ratio_collider_inconsistent}(c,d) is due to the antiquark distributions going negative in the collider fit at high $x$ where there is no data constraint.
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/ratioupvalence_collider_theory.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/ratiodownvalence_collider_theory.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/ratioantiup_collider_theory.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/ratioantidown_collider_theory.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(e)\\
\includegraphics[width=\textwidth]{Figs/ratiostrange_collider_theory.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(f)\\
\includegraphics[width=\textwidth]{Figs/ratiogluon_collider_theory.eps}
\end{minipage}
\caption{Effect on PDFs of fitting consistent idealised pseudodata, either collider-only or global.}
\label{fig:ratio_collider_theory}
\end{figure}
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/ratioupvalence_collider_inconsistent.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/ratiodownvalence_collider_inconsistent.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/ratioantiup_collider_inconsistent.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/ratioantidown_collider_inconsistent.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(e)\\
\includegraphics[width=\textwidth]{Figs/ratiostrange_collider_inconsistent.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(f)\\
\includegraphics[width=\textwidth]{Figs/ratiogluon_collider_inconsistent.eps}
\end{minipage}
\caption{Effect on PDFs of fitting inconsistent idealised pseudodata, either collider-only or global.}
\label{fig:ratio_collider_inconsistent}
\end{figure}
The conclusion of these studies is that defining experimental uncertainties via $\Delta\chi^2_{\rm global}=1$ is overly optimistic for \emph{global} PDF analysis and that the more conservative ``dynamic'' tolerance~\cite{Martin:2009iq} based on a ``hypothesis-testing'' criterion~\cite{Collins:2001es} is much more appropriate.\footnote{A similar conclusion has been reached using very different arguments in ref.~\cite{Pumplin:2009sc}.} As a final example of a situation where we believe it would make sense to introduce a tolerance to account for a potential discrepancy between data sets, consider the recent ATLAS determination~\cite{Aad:2012sb} of the ratio of the strange-to-down sea-quark distributions, $r_s(x,Q^2)\equiv 0.5(s+\bar{s})/\bar{d}$, from a fit to inclusive $W^\pm$ and $Z$ differential cross sections at the LHC, combined with inclusive DIS data from HERA. This ratio took the surprising values of
\begin{equation*}
r_s\left(x=0.023,Q_0^2=1.9~{\rm GeV}^2\right)=1.00^{+0.25}_{-0.28}\quad\text{and}\quad r_s\left(x=0.013,Q^2=M_Z^2\right)=1.00^{+0.09}_{-0.10},
\end{equation*}
where the $r_s$ uncertainty is dominated by the experimental PDF uncertainty, determined using $\Delta\chi^2=1$, of $\pm0.20$ and $\pm0.07$, respectively. These values being consistent with unity indicate no strange suppression, contrary to previous determinations from CCFR/NuTeV dimuon cross sections ($\nu N\to \mu\mu X$), where the strange-quark distributions are suppressed to about half of the $\bar{d}$ and $\bar{u}$ distributions at the lower $Q^2$ value. Even the HERA DIS data included in the ATLAS analysis~\cite{Aad:2012sb} shows some tension with the result of no strange suppression; the $\chi^2$ for the HERA DIS data increases by 2.9 units in going from fixed $r_s(x,Q_0^2)=0.5$ to free $r_s$ with two extra parameters. The MSTW 2008 NNLO analysis~\cite{Martin:2009iq}, which included the CCFR/NuTeV dimuon cross sections, found central values and 68\% C.L.~PDF uncertainties (including the ``dynamic'' tolerance) of
\begin{equation*}
r_s\left(x=0.023,Q^2=1.9~{\rm GeV}^2\right)=0.48\pm0.04\quad\text{and}\quad r_s\left(x=0.023,Q^2=M_Z^2\right)=0.79\pm0.02.
\end{equation*}
Rescaling the experimental PDF uncertainty of the ATLAS determination~\cite{Aad:2012sb} by a tolerance of $\approx 3$, corresponding to $\Delta\chi^2\approx 9$, would be enough to bring it into agreement with the MSTW08 result. The conclusion that the uncertainty on $r_s$ in the ATLAS determination~\cite{Aad:2012sb} has been underestimated was also reached by the NNPDF Collaboration~\cite{Hartland:2012}.
\section{Random PDFs generated in space of fit parameters} \label{sec:random}
Given that we have now established that we need an appropriate tolerance, the question arises of how to include this into the MC method. We can introduce a tolerance in the generation of the data replicas simply by rescaling all experimental errors in eqs.~\eqref{eq:MCgen} and \eqref{eq:MCgenQuad} by $\langle t\rangle\approx\langle T\rangle\approx 3$, corresponding to the average tolerance for 68\% C.L.~uncertainties. We find that this simple approach, using $n=20$ input PDF parameters, reproduces the Hessian uncertainties with a dynamic tolerance surprisingly well for most parton flavours and kinematic regions. However, it is not possible to implement exactly the ``dynamic'' tolerance (different for each eigenvector direction) in the MC method, since no reference is being made to the eigenvectors of the covariance matrix.
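In terms of the replica-generation sketch given earlier, this simple rescaling amounts to a one-line change, for example:
\begin{verbatim}
# Rescale all experimental errors by the average tolerance <t> ~ 3.
replicas_tol = [make_replica(data, errors, tolerance=3.0)
                for _ in range(40)]
\end{verbatim}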
Instead of sampling the probability density by working in the space of data, we can produce random PDFs directly in the space of fit parameters.\footnote{We thank H.~Prosper for making this suggestion.} In fact, this was done in the original work of Giele and Keller~\cite{Giele:1998gw} using the covariance matrix of parameters from Alekhin's fit~\cite{Alekhin:1996za}. A convenient way to generate the random PDFs is to use the eigenvectors of the covariance matrix. Recall from eq.~\eqref{eq:eigbasis} that the parameter displacements from the global minimum can be expanded in a basis of the rescaled eigenvectors $e_{ik}\equiv \sqrt{\lambda_k}\,v_{ik}$, that is,
\begin{equation} \label{eq:component}
a_i - a_i^0 = \sum_{j=1}^n e_{ij}\,z_j,
\end{equation}
with $n=20$ the number of input PDF parameters. Usually the $\pm k$th eigenvector PDF set is defined by taking $z_j = \left(\pm t_j^\pm\right)\delta_{jk}$ in eq.~(\ref{eq:component}), that is, the usual eigenvector PDF sets are generated with input parameters:
\begin{equation} \label{eq:eigensteptdynamic}
a_i(S_k^\pm) = a_i^0 \pm t_k^\pm\,e_{ik}\qquad (k=1,\ldots,n),
\end{equation}
with $t_k^\pm$ adjusted to give the desired $T_k^\pm = (\Delta\chi^2_{\rm global})^{1/2}$. However, we can instead randomly sample the parameter space such that the $k$th random PDF set is generated with input parameters obtained by taking $z_j = \left(\pm t_j^\pm\right)|R_{jk}|$ in eq.~(\ref{eq:component}), that is,
\begin{equation} \label{eq:random}
a_i(\mathcal{S}_k) = a_i^0 + \sum_{j=1}^n e_{ij}\,\left(\pm t_j^\pm\right)\,|R_{jk}|\qquad (k=1,\ldots,N_{\rm pdf}),
\end{equation}
where $R_{jk}$ is a Gaussian-distributed random number of mean zero and variance one, and either $+t_j^+$ or $-t_j^-$ is chosen depending on the sign of $R_{jk}$. There are therefore $n=20$ random numbers $R_{jk}$ ($j=1,\ldots,n$) associated with the $k$th random PDF set ($k=1,\ldots,N_{\rm pdf}$). The number of random PDF sets $N_{\rm pdf}$ is arbitrary, but again we choose $N_{\rm pdf} = 40$ mostly for practical reasons. Each random PDF set has equal probability defined by the covariance matrix of fit parameters, and therefore statistical quantities such as the mean and standard deviation can easily be calculated using formulae such as eqs.~\eqref{eq:MCav} and \eqref{eq:MCsd} with the obvious replacement $N_{\rm rep}\to N_{\rm pdf}$. A comparison of the average and standard deviation of $N_{\rm pdf} = 40$ PDFs constructed with eq.~\eqref{eq:random} to the best-fit and Hessian uncertainty is made in figure~\ref{fig:ratio_random68clAll}. There is generally good agreement, with some shift of the average compared to the best-fit that can be attributed mostly to asymmetric tolerance values ($t_j^+\ne t_j^-$). We have verified this explanation by repeating the same studies without a tolerance ($T_j^\pm=1$). Alternative ad hoc treatments of the asymmetric tolerance values are possible. For example, if $t_j^+>t_j^-$ proportionally more random PDF sets could be produced for a ``$-$'' sign than for a ``$+$'' sign in eq.~\eqref{eq:random} so that the average would be closer to the best-fit, or one could simply symmetrise with the replacement $t_j^\pm\to (t_j^++t_j^-)/2$ in eq.~\eqref{eq:random}. However, since the degree of asymmetry is generally small, we will not explore these possibilities in practice.
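A minimal sketch of the sampling in eq.~\eqref{eq:random} is given below, with hypothetical inputs standing in for the best-fit parameters, rescaled eigenvectors and tolerances obtained from the actual fit.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=2)

def random_parameters(a0, e, t_plus, t_minus):
    """One random parameter vector, following the sampling formula
    above: the component along eigenvector direction j is
    (+t_j^+ or -t_j^-)*|R_j|, which equals t_j^{sign(R_j)} * R_j."""
    R = rng.standard_normal(len(a0))
    t = np.where(R >= 0.0, t_plus, t_minus)  # tolerance for this sign
    return a0 + e @ (t * R)

n = 20
a0 = np.zeros(n)           # hypothetical best-fit parameters a_i^0
e = np.eye(n)              # hypothetical rescaled eigenvectors e_ij
t_plus = np.full(n, 3.0)   # hypothetical tolerances t_j^+
t_minus = np.full(n, 2.5)  # hypothetical tolerances t_j^-

random_sets = [random_parameters(a0, e, t_plus, t_minus)
               for _ in range(40)]  # N_pdf = 40
\end{verbatim}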
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/ratioupvalence_random68clAll.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/ratiodownvalence_random68clAll.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/ratioantiup_random68clAll.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/ratioantidown_random68clAll.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(e)\\
\includegraphics[width=\textwidth]{Figs/ratiostrange_random68clAll.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(f)\\
\includegraphics[width=\textwidth]{Figs/ratiogluon_random68clAll.eps}
\end{minipage}
\caption{$N_{\rm pdf} = 40$ random sets generated with eq.~\eqref{eq:random} as a ratio to the best-fit PDF set.}
\label{fig:ratio_random68clAll}
\end{figure}
As some measure of the amount of statistical fluctuation, we produce another $N_{\rm pdf} = 40$ PDFs constructed with eq.~\eqref{eq:random} using different random numbers $R_{jk}$. The results are shown in figures~\ref{fig:ratio_random68cl} and \ref{fig:frac_random68cl} and we conclude that $N_{\rm pdf} = 40$ is enough to avoid significant fluctuations, although there is some moderate variation due to the limited statistics (for example, in $d_v$ at $x\sim0.1$).
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/ratioupvalence_random68cl.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/ratiodownvalence_random68cl.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/ratioantiup_random68cl.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/ratioantidown_random68cl.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(e)\\
\includegraphics[width=\textwidth]{Figs/ratiostrange_random68cl.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(f)\\
\includegraphics[width=\textwidth]{Figs/ratiogluon_random68cl.eps}
\end{minipage}
\caption{Comparison of best-fit and Hessian uncertainty to the average and standard deviation of two sets of $N_{\rm pdf} = 40$ PDFs generated with different random parameters given by eq.~\eqref{eq:random} and one set of $N_{\rm pdf} = 1000$ random PDFs generated with eq.~\eqref{eq:randomPDF}.}
\label{fig:ratio_random68cl}
\end{figure}
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/fracupvalence_random68cl.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/fracdownvalence_random68cl.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/fracantiup_random68cl.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/fracantidown_random68cl.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(e)\\
\includegraphics[width=\textwidth]{Figs/fracstrange_random68cl.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(f)\\
\includegraphics[width=\textwidth]{Figs/fracgluon_random68cl.eps}
\end{minipage}
\caption{Similar to figure~\ref{fig:ratio_random68cl} but percentage uncertainties rather than the ratio to the best-fit.}
\label{fig:frac_random68cl}
\end{figure}
In principle, there is some amount of non-linearity in going from the input PDF parameters $a_i$ to the input PDFs $f(x,Q_0^2)$, then to the evolved PDFs $f(x,Q^2)$ and to physical observables $F$ calculated using these evolved PDFs (for example, hadronic cross sections with a quadratic PDF dependence). However, we find that, in practice, the apparent degree of non-linearity is small; indeed, linearity is an assumption inherent in the Hessian method for propagating uncertainties. Making this assumption of linearity, an alternative and simpler way to generate random PDFs is to work with the existing eigenvector PDF sets directly at the level of the quantity of interest $F$, such as the evolved PDF or the hadronic cross section. Then we can build random values of $F$ according to\footnote{cf.~the studies of F.~De Lorenzi: see eq.~(3.1) of ref.~\cite{DeLorenzi:2010zt} or eq.~(6.1) of ref.~\cite{DeLorenzi:2011}.}
\begin{equation} \label{eq:randomPDF}
F(\mathcal{S}_k) = F(S_0) + \sum_{j=1}^{n}\left[F(S_j^\pm)-F(S_0)\right]\,|R_{jk}|\qquad (k=1,\ldots,N_{\rm pdf}),
\end{equation}
where $S_j^+$ or $S_j^-$ is chosen depending on the sign of $R_{jk}$. Note that for the case $F=a_i$ in eq.~\eqref{eq:randomPDF}, we have $a_i(S_0)\equiv a_i^0$, and inserting $a_i(S_j^\pm)$ from eq.~\eqref{eq:eigensteptdynamic} recovers eq.~\eqref{eq:random}. This construction of a random $F(\mathcal{S}_k)$ using eq.~\eqref{eq:randomPDF} can be done ``on the fly'' for an almost arbitrarily large value of $N_{\rm pdf}$, after the initial computation of $F(S_0)$ and $F(S_j^\pm)$ $(j=1,\ldots,n)$, which requires only $2n+1$ ($=41$ for the MSTW 2008 PDFs) evaluations of $F$. We choose $N_{\rm pdf}=1000$ for the results shown in figures~\ref{fig:ratio_random68cl} and \ref{fig:frac_random68cl}, although the results are similar with a much smaller value. Here we take ``$F$'' in eq.~\eqref{eq:randomPDF} to be the evolved PDF at $Q=100$~GeV for the particular parton flavour shown in each plot, then we construct $N_{\rm pdf}=1000$ values of $F(\mathcal{S}_k)$ and take the average and standard deviation, finding good agreement with the best-fit and Hessian uncertainty. Again, the slight shift of the average compared to the best-fit can be attributed mostly to asymmetric tolerance values, which we confirm by repeating the same exercise starting from eigenvector PDF sets generated with $\Delta\chi^2_{\rm global}=1$. As already mentioned, ad hoc modifications to the procedure could be adopted to better account for asymmetric tolerance values, but we choose not to explore these possibilities in this work given the relatively small size of the effect. For example, a symmetrised version of eq.~\eqref{eq:randomPDF} could be obtained using
\begin{equation} \label{eq:randomPDFsymm}
F(\mathcal{S}_k) = F(S_0) + \frac{1}{2}\sum_{j=1}^{n}\left\lvert F(S_j^+)-F(S_j^-)\right\rvert\,R_{jk}\qquad (k=1,\ldots,N_{\rm pdf}),
\end{equation}
analogous to the symmetric formula for PDF uncertainties given in eq.~\eqref{eq:symmunc}.
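Both constructions, eqs.~\eqref{eq:randomPDF} and \eqref{eq:randomPDFsymm}, are straightforward to implement; the sketch below assumes that the best-fit value $F(S_0)$ and the eigenvector values $F(S_j^\pm)$ have already been computed, with hypothetical numbers used here in their place.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=3)

def random_F(F0, F_plus, F_minus):
    """One random value of F: S_j^+ or S_j^- is chosen according
    to the sign of R_j, as in the first formula above."""
    R = rng.standard_normal(len(F_plus))
    F_pm = np.where(R >= 0.0, F_plus, F_minus)
    return F0 + np.sum((F_pm - F0) * np.abs(R))

def random_F_symm(F0, F_plus, F_minus):
    """Symmetrised version, as in the second formula above."""
    R = rng.standard_normal(len(F_plus))
    return F0 + 0.5 * np.sum(np.abs(F_plus - F_minus) * R)

F0 = 1.00                          # hypothetical F(S_0)
F_plus = F0 + 0.02 * np.ones(20)   # hypothetical F(S_j^+)
F_minus = F0 - 0.02 * np.ones(20)  # hypothetical F(S_j^-)

samples = [random_F(F0, F_plus, F_minus) for _ in range(1000)]
\end{verbatim}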
We note that an unsuccessful attempt to generate random PDFs directly in the space of fit parameters was made in section~6.5 of ref.~\cite{Pumplin:2009nk}. This attempt was flawed in that all random PDF sets were constructed with the unnecessary constraint of a fixed $\Delta\chi^2=100$, with the $n$ parameters distributed on the surface of an $n$-dimensional hypersphere using the eigenvectors as basis vectors, leading to an envelope of the random PDF sets covering a much smaller range than the usual Hessian uncertainty. By contrast, if we generate random PDF sets according to eq.~\eqref{eq:random}, then the $\Delta\chi^2$, or equivalently $t_j^\pm$, is only used to define the distance along a particular eigenvector direction. At a general point in parameter space, given by stepping along all eigenvector directions by a random amount, the $\Delta\chi^2$ is irrelevant and it can be very large. It is not necessary or desirable that each random PDF set should have $\Delta\chi^2$ below a certain value. A fixed $\Delta\chi^2$ will only be recovered in the specific (and very unlikely) case that $|R_{jk}| = \delta_{jk}$, then eq.~\eqref{eq:random} reduces to eq.~\eqref{eq:eigensteptdynamic}.
Another argument that a Monte Carlo approach in the space of fit parameters involves exploring a space too wide to be sampled efficiently with a small number of random PDFs was made in section~3.2.1 of ref.~\cite{Forte:2010dt}. There it was argued that if the probability distribution for each parameter is given as a histogram with three bins, say the one-sigma region around the central value and the two outer regions, then na\"ively one might expect the need to randomly sample $3^n\gtrsim 3\times10^9$ PDF sets for $n=20$ free parameters. However, the $n$ parameters are certainly not independent, and the complete correlation information is provided by the covariance matrix obtained from the global fit. Working in the basis of eigenvectors then provides an optimally efficient way to sample the parameter space randomly along each eigenvector direction.
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/conv_samerand_z0.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/conv_samerand_wpwmratio.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/conv_samerand_ttbar.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/conv_samerand_ggh120.eps}
\end{minipage}
\caption{Convergence of average and standard deviation of $N_{\rm pdf}$ random predictions as a function of $N_{\rm pdf}$, each time adding one more random prediction to the $N_{\rm pdf}-1$ previous random predictions, normalised to the best-fit prediction and compared to the Hessian uncertainty.}
\label{fig:conv_samerand}
\end{figure}
Nevertheless, it is instructive to perform a numerical exercise in order to explicitly demonstrate roughly how many random predictions are necessary to adequately sample the parameter space. We consider the 7~TeV LHC total cross sections for four typical processes corresponding to inclusive production of (a)~$Z^0$ bosons, (b)~$W^+$ relative to $W^-$ bosons, (c)~top-pairs and (d)~Standard Model Higgs bosons with $M_H=120$~GeV from gluon--gluon fusion. These four processes are chosen to sample a variety of parton flavours and momentum fractions $x$. We use the existing NLO calculations from ref.~\cite{Watt:2011kp} with the MSTW 2008 NLO best-fit and Hessian eigenvector PDF sets at 68\% C.L. For each of the four processes, we generate the minimal $N_{\rm pdf}=2$ random predictions computed using eq.~\eqref{eq:randomPDFsymm} for $F=\{\sigma_{Z^0},\sigma_{W^+}/\sigma_{W^-},\sigma_{t\bar{t}},\sigma_H\}$ and calculate the average and standard deviation. Then the number of random predictions, $N_{\rm pdf}$, is incremented by one, and the average and standard deviation recomputed, until $N_{\rm pdf}=1000$. The results are shown in figure~\ref{fig:conv_samerand} normalised to the best-fit prediction and compared with the symmetric Hessian uncertainty of eq.~\eqref{eq:symmunc}. We use the symmetrised formulae of eqs.~\eqref{eq:symmunc} and \eqref{eq:randomPDFsymm} to allow a direct comparison between the best-fit prediction and the average over the random predictions, without the complications arising from asymmetric tolerance values discussed elsewhere. We show a similar set of plots in figure~\ref{fig:conv_diffrand} where each value of $N_{\rm pdf}$ now corresponds to the average and standard deviation over $N_{\rm pdf}$ \emph{independent} random predictions. The results for adjacent $N_{\rm pdf}$ values therefore indicate the size of the statistical fluctuations, which decrease going to larger $N_{\rm pdf}$ values, but are still not completely negligible even for $N_{\rm pdf}\sim1000$. However, although there is little computational overhead in taking $N_{\rm pdf}$ to be very large when the random predictions are generated ``on the fly'', one would not expect to see noticeable differences when $N_{\rm pdf}$ is much larger than around 1000. In fact, the statistical fluctuations are very small compared to the PDF uncertainty for $N_{\rm pdf}\gtrsim 100$ and even $N_{\rm pdf}=40$ may be sufficiently accurate for many practical purposes.
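The running averages shown in these figures can be reproduced along the following lines, sketched here with dummy Gaussian predictions standing in for the actual cross sections.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=4)
preds = rng.normal(1.0, 0.02, size=1000)  # stand-ins for F(S_k)

running = []
for N in range(2, len(preds) + 1):
    sample = preds[:N]
    running.append((N, sample.mean(), sample.std(ddof=1)))
# plot mean +- standard deviation versus N, normalised to the
# best-fit value, to reproduce the style of the figures
\end{verbatim}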
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/conv_diffrand_z0.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/conv_diffrand_wpwmratio.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/conv_diffrand_ttbar.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/conv_diffrand_ggh120.eps}
\end{minipage}
\caption{Convergence of average and standard deviation of $N_{\rm pdf}$ random predictions as a function of $N_{\rm pdf}$, each time generating $N_{\rm pdf}$ \emph{independent} random predictions with different random numbers, normalised to the best-fit prediction and compared to the Hessian uncertainty.}
\label{fig:conv_diffrand}
\end{figure}
\section{Reweighting to describe the LHC $W\to\ell\nu$ charge asymmetry data} \label{sec:reweighting}
Updating a PDF set with new data using a Bayesian reweighting method based on statistical inference was originally proposed by Giele and Keller~\cite{Giele:1998gw} and later developed further by the NNPDF Collaboration~\cite{Ball:2010gb,Ball:2011gg}. Suppose we have a set of $N_{\rm pdf}$ random PDFs $\{\mathcal{S}_k\}$ with equal probability. It is irrelevant whether they are generated in the space of data (section~\ref{sec:MCgen}) or in the space of parameters (section~\ref{sec:random}). We can then apply the Bayesian reweighting technique exactly as for the NNPDF sets. The key formulae are summarised below, but we refer to refs.~\cite{Ball:2010gb,Ball:2011gg} for the derivation and more details of the method. We first compute the $\chi^2_k$ for the new data set (comprising $N_{\rm pts.}$ data points) using each $\mathcal{S}_k$, then we can calculate the mean value of any PDF-dependent quantity $F(\mathcal{S}_k)$ as:
\begin{equation}
\langle F\rangle_{\rm old} = \frac{1}{N_{\rm pdf}}\sum_{k=1}^{N_{\rm pdf}}F(\mathcal{S}_k),\quad
\langle F\rangle_{\rm new} = \frac{1}{N_{\rm pdf}}\sum_{k=1}^{N_{\rm pdf}}w_k(\chi^2_k)\,F(\mathcal{S}_k),
\end{equation}
where the \emph{weights} are given by
\begin{equation} \label{eq:weights}
w_k(\chi^2_k) = \frac{W_k(\chi^2_k)}{\frac{1}{N_{\rm pdf}}\sum_{j=1}^{N_{\rm pdf}}W_j(\chi^2_j)},\quad W_k(\chi^2_k) \equiv \left(\chi^2_k\right)^{\frac{1}{2}\left(N_{\rm pts.}-1\right)}\,\exp\left(-\frac{1}{2}\chi^2_k\right),
\end{equation}
with the denominator of $w_k(\chi^2_k)$ ensuring the normalisation condition:
\begin{equation}
\sum_{k=1}^{N_{\rm pdf}}w_k(\chi^2_k) = N_{\rm pdf}.
\end{equation}
Note that the expression for the weights in eq.~\eqref{eq:weights} differs from the original formula in ref.~\cite{Giele:1998gw} due to subtle arguments explained in ref.~\cite{Ball:2010gb}. The standard deviation $\Delta F$ after reweighting can be calculated using eq.~\eqref{eq:MCsd} with the trivial replacement $N_{\rm rep}\to N_{\rm pdf}$ and using the weighted averages $\langle F^2\rangle_{\rm new}$ and $\langle F\rangle_{\rm new}$. The effective number of random PDF sets left after reweighting, referred to as the ``Shannon entropy''~\cite{Ball:2010gb}, is given by
\begin{equation} \label{eq:Neff}
N_{\rm eff} = \exp\left(\frac{1}{N_{\rm pdf}}\sum_{k=1}^{N_{\rm pdf}}w_k\ln\left(\frac{N_{\rm pdf}}{w_k}\right)\right).
\end{equation}
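When implementing eqs.~\eqref{eq:weights} and \eqref{eq:Neff}, it is advisable to evaluate the weights in log space to avoid numerical overflow at large $N_{\rm pts.}$; a minimal sketch is given below.
\begin{verbatim}
import numpy as np

def weights(chi2, npts):
    """Normalised weights w_k: log W_k is
    (npts - 1)/2 * log(chi2_k) - chi2_k/2, and a common shift
    of log W_k cancels in the normalised w_k."""
    chi2 = np.asarray(chi2, dtype=float)
    logW = 0.5 * (npts - 1) * np.log(chi2) - 0.5 * chi2
    logW -= logW.max()
    W = np.exp(logW)
    return len(chi2) * W / W.sum()  # so that sum_k w_k = N_pdf

def n_eff(w):
    """Effective number of random PDF sets (Shannon entropy)."""
    N = len(w)
    w = w[w > 0.0]  # w*log(N/w) -> 0 as w -> 0
    return np.exp(np.sum(w * np.log(N / w)) / N)

def weighted_mean(F, w):
    """Weighted average <F>_new over the random PDF sets."""
    return np.sum(w * F) / len(w)
\end{verbatim}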
As a simple application of this reweighting technique, we will consider the 7~TeV LHC data from the 2010 running period on the $W\to\ell\nu$ charge asymmetry from CMS~\cite{Chatrchyan:2011jz} and ATLAS~\cite{Aad:2011dm}. The $W\to\ell\nu$ charge asymmetry is defined differentially as a function of the pseudorapidity $\eta_\ell$ of the charged-lepton from the $W$-boson decay, i.e.
\begin{equation}
A_\ell(\eta_\ell) = \frac{{\rm d}\sigma(\ell^+)/{\rm d}\eta_\ell-{\rm d}\sigma(\ell^-)/{\rm d}\eta_\ell}{{\rm d}\sigma(\ell^+)/{\rm d}\eta_\ell+{\rm d}\sigma(\ell^-)/{\rm d}\eta_\ell}.
\end{equation}
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/dsdeta_nlo_cms25.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/dsdeta_nlo_atlas.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/Kfactor_nlo_cms25.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/Kfactor_nlo_atlas.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(e)\\
\includegraphics[width=\textwidth]{Figs/lepton_nlo_cms25.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(f)\\
\includegraphics[width=\textwidth]{Figs/lepton_nlo_atlas.eps}
\end{minipage}
\caption{(a,b)~${\rm d}\sigma(\ell^\pm)/{\rm d}\eta_\ell$ distributions, (c,d)~$K$-factors, (e,f)~lepton charge asymmetry, for kinematic cuts corresponding to the (a,c,e)~CMS data~\cite{Chatrchyan:2011jz} and (b,d,f)~ATLAS data~\cite{Aad:2011dm}.}
\label{fig:Kfactors}
\end{figure}
We will consider the CMS data~\cite{Chatrchyan:2011jz} with charged-lepton transverse momentum cut of $p_T^\ell>25$~GeV in both the electron ($\ell=e$) and muon ($\ell=\mu$) channels. The ATLAS data~\cite{Aad:2011dm} combine the electron and muon channels with cuts of $p_T^\ell>20$~GeV, missing transverse energy $\not\mathrel{E}_T^\nu>25$~GeV and transverse mass $M_T=\sqrt{2p_T^\ell\not\mathrel{E}_T^\nu(1-\cos\Delta\phi_{\ell\nu})}>40$~GeV, where $\Delta\phi_{\ell\nu}$ is the azimuthal separation between the directions of the charged-lepton and neutrino. The pseudorapidity distributions, ${\rm d}\sigma(\ell^\pm)/{\rm d}\eta_\ell$, calculated from the public \textsc{dynnlo} code~\cite{Catani:2009sm} using the MSTW 2008 NLO best-fit PDFs with $\mu_R=\mu_F=M_W$, are shown in figure~\ref{fig:Kfactors}(a,b) for (a)~CMS cuts and (b)~ATLAS cuts. For LO kinematics ($p_T^W=0$) with zero $W$ width ($\Gamma_W=0$), then $p_T^\ell=\not\mathrel{E}_T^\nu$ and $M_T=2p_T^\ell$, and the predictions are identical for the CMS and ATLAS cuts, but not after accounting for NLO and finite $W$ width effects. In figure~\ref{fig:Kfactors}(c,d) we define a $K$-factor by taking the ratio of the \textsc{dynnlo} histograms, then we fit to quartic polynomials in $|\eta_\ell|$ to provide a convenient parameterisation and to smooth statistical fluctuations from the \textsc{vegas} multidimensional integration. A fast calculation of the $W\to\ell\nu$ charge asymmetry can then be obtained using a simple LO calculation with zero $W$ width (denoted ``\texttt{LEPTON}''), including the parameterised $K$-factors for ${\rm d}\sigma(\ell^\pm)/{\rm d}\eta_\ell$, making the assumption that the $K$-factors are independent of the PDF choice. In figure~\ref{fig:Kfactors}(e,f) we compare the \texttt{LEPTON} calculation, without and with the inclusion of $K$-factors, with the \textsc{dynnlo} histograms, finding good agreement (by construction). It can be seen that the NLO corrections and finite-width effects are very small over most of the $|\eta_\ell|$ range. The effect on the $W\to\ell\nu$ charge asymmetry of neglecting the PDF dependence of the $K$-factors should then be completely negligible. We have also computed the NNLO corrections using the \textsc{dynnlo} code and find them to be much smaller than the NLO corrections, but we will consider only NLO QCD in making comparisons to data, as done elsewhere in this paper.
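The smoothing of the $K$-factors amounts to a simple least-squares polynomial fit, sketched here with hypothetical binned values.
\begin{verbatim}
import numpy as np

# Hypothetical |eta_l| bin centres and NLO/LO cross-section
# ratios extracted from the DYNNLO histograms.
abs_eta = np.array([0.2, 0.6, 1.0, 1.4, 1.8, 2.2])
kfactor = np.array([1.02, 1.03, 1.03, 1.05, 1.07, 1.10])

# Quartic polynomial in |eta_l| to smooth the statistical
# fluctuations from the VEGAS integration.
coeffs = np.polyfit(abs_eta, kfactor, deg=4)
k_smooth = np.polyval(coeffs, abs_eta)
\end{verbatim}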
We will focus on demonstrating the reweighting technique rather than aiming to make an exhaustive study of the impact of the LHC data. With this aim in mind, we will not consider in this work the 2010 CMS data with $p_T^\ell>30$~GeV~\cite{Chatrchyan:2011jz}, the preliminary CMS measurements using 2011 data with $p_T^\mu>25$~GeV~\cite{CMS:muonasymmetry} or $p_T^e>35$~GeV~\cite{CMS:electronasymmetry}, or the recent LHCb measurements using 2010 data with $p_T^\mu>\{20,25,30\}$~GeV~\cite{Aaij:2012vn}. The ATLAS Collaboration~\cite{Aad:2011dm} provide the differential cross sections, ${\rm d}\sigma(\ell^\pm)/{\rm d}\eta_\ell$, separately for $W^+\to\ell^+\nu$ and $W^-\to\ell^-\bar{\nu}$ with the complete information on correlated systematic uncertainties, which is potentially more useful for PDF fits than simply the asymmetry $A_\ell(\eta_\ell)$. A future study could perhaps investigate the use of reweighting with the ATLAS $W^\pm$ (and $Z/\gamma^*$) differential cross sections rather than the asymmetry $A_\ell(\eta_\ell)$. In this study, we simply calculate the $\chi^2_k$ values with all experimental uncertainties added in quadrature.
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/cms25_randa.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/atlas_randa.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/ratio_upvalence-downvalence_randwgtsa_cms25.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/ratio_upvalence-downvalence_randwgtsa_atlas.eps}
\end{minipage}
\caption{Lepton charge asymmetry $A_\ell(\eta_\ell)$ predictions compared to (a)~CMS~\cite{Chatrchyan:2011jz} and (b)~ATLAS~\cite{Aad:2011dm} data, then change in $u_v-d_v$ after reweighting using (c)~CMS and (d)~ATLAS data.}
\label{fig:uvdv}
\end{figure}
In figure~\ref{fig:uvdv}(a,b) we compare the (a)~CMS and (b)~ATLAS data on the $W\to\ell\nu$ charge asymmetry to predictions using the MSTW 2008 NLO PDFs, firstly with the usual best-fit and Hessian uncertainty. We then generate $N_{\rm pdf}=1000$ random predictions for the asymmetry by taking $F=A_\ell(\eta_\ell)$ in eq.~\eqref{eq:randomPDF}, and take the average and standard deviation, giving results slightly different from the best-fit and Hessian uncertainty (mainly due to the asymmetric tolerance values). The $\chi^2$ values of the average $A_\ell(\eta_\ell)$, displayed in the plot legends, are then slightly larger than the $\chi^2$ of the best-fit predictions. Next we compute weights for each of the $N_{\rm pdf}$ predictions according to eq.~\eqref{eq:weights}, then finally we plot the weighted average and standard deviation in figure~\ref{fig:uvdv}(a,b). The $\chi^2$ of the weighted average $A_\ell(\eta_\ell)$ improves significantly compared to the unweighted average. The effective number of random predictions $N_{\rm eff}$ after reweighting, computed according to eq.~\eqref{eq:Neff}, is about half the original number for CMS and almost a quarter the original number for ATLAS. The most significant change in the predictions after reweighting is for $\eta_\ell\approx 0$ where $A_\ell(\eta_\ell)$ depends on the combination $u_v-d_v$ at momentum fractions $x$ slightly above $x\sim M_W/\sqrt{s}\sim 0.01$. In figure~\ref{fig:uvdv}(c,d) we plot this combination for $Q^2=(100~{\rm GeV})^2$ for the same three sets of predictions shown in figure~\ref{fig:uvdv}(a,b). We compare the best-fit and Hessian uncertainty with the unweighted/weighted average and standard deviation of $N_{\rm pdf}=1000$ random PDFs constructed by taking $F=x(u_v-d_v)(x,Q^2)$ in eq.~\eqref{eq:randomPDF}, with the \emph{same} random numbers $R_{jk}$ and weights $w_k$ used in figure~\ref{fig:uvdv}(a,b). As expected from figure~\ref{fig:uvdv}(a,b), the shift in $u_v-d_v$ is largest at $x\sim 0.01$, and the average value after reweighting using the ATLAS data even lies outside the original uncertainty band. There is also a distinct reduction in the size of the uncertainty band after reweighting.
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
(a)\\
\includegraphics[width=\textwidth]{Figs/weightsrand_cms25.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(b)\\
\includegraphics[width=\textwidth]{Figs/weightsrand_atlas.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(c)\\
\includegraphics[width=\textwidth]{Figs/chisqrand_cms25.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(d)\\
\includegraphics[width=\textwidth]{Figs/chisqrand_atlas.eps}
\end{minipage}
\begin{minipage}{0.5\textwidth}
(e)\\
\includegraphics[width=\textwidth]{Figs/probalpha_cms25.eps}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
(f)\\
\includegraphics[width=\textwidth]{Figs/probalpha_atlas.eps}
\end{minipage}
\caption{Distributions of (a,b)~$w_k$, (c,d)~$\chi^2_k/N_{\rm pts.}$, (e,f) $\mathcal{P}(\alpha)$, for (a,c,e)~CMS and (b,d,f)~ATLAS.}
\label{fig:reweight}
\end{figure}
The procedure just described is not completely unambiguous. Alternative prescriptions could be formulated which are equivalent in a linear approximation, but which might differ due to some degree of non-linearity. For example, rather than starting by generating random predictions for the asymmetry by taking $F=A_\ell(\eta_\ell)$ in eq.~\eqref{eq:randomPDF}, we could instead generate $N_{\rm pdf}=1000$ random PDF sets by taking $F=xf(x,M_W^2)$ in eq.~\eqref{eq:randomPDF}, where $f=\{g,d,u,s,c,b,\bar{d},\bar{u},\bar{s},\bar{c},\bar{b}\}$, then calculate $A_\ell(\eta_\ell)$ for each of these $N_{\rm pdf}$ random PDF sets, before calculating weights according to eq.~\eqref{eq:weights} as before. This alternative method will give slightly different results since $A_\ell(\eta_\ell)$ depends on $xf(x,M_W^2)$ in a non-linear manner. In figure~\ref{fig:reweight}(a,b) we compare the distribution of weights $w_k$ computed using the two different methods, using the same random numbers $R_{jk}$ to allow a direct comparison of individual weights with the same label $k$. The distribution of weights is very similar using the two methods. The individual weights typically agree to within a few percent and differ by only a few tens of percent in the worst cases. The values of $N_{\rm eff}$ agree to the nearest integer and the values of $\chi^2/N_{\rm pts.}$ agree to two decimal places. The plots of figure~\ref{fig:uvdv} produced using the alternative method are indistinguishable. We conclude that the degree of non-linearity is small and both techniques may be useful in practice. For example, it might be useful to first generate $N_{\rm pdf}=1000$ random PDF sets as grid files by taking $F=xf(x,Q^2)$ in eq.~\eqref{eq:randomPDF}, then these grid files can be processed in exactly the same way as the NNPDF grid files. On the other hand, that method would require substantial disk storage and would require the observable $A_\ell(\eta_\ell)$ to be evaluated $N_{\rm pdf}$ times, which is potentially time-consuming. With the first method described above, it is unnecessary to store intermediate grid files, and only $2n+1$ ($=41$ for the MSTW 2008 PDFs) evaluations of $A_\ell(\eta_\ell)$ are needed for the best-fit and $2n$ eigenvector PDF sets, exactly as for the usual computation of Hessian uncertainties. The first method will therefore be used for subsequent results.
The $\chi^2$ distribution of the new data set after reweighting can easily be histogrammed:
\begin{equation}
\mathcal{P}(\chi^2_a<\chi^2<\chi^2_b) = \frac{1}{N_{\rm pdf}}\sum_{k=1}^{N_{\rm pdf}}w_k(\chi_k^2)\,\Theta(\chi^2_k-\chi_a^2)\Theta(\chi^2_b-\chi_k^2),
\end{equation}
where the $\chi^2$ distribution before reweighting is trivially obtained by setting all weights $w_k$ equal to unity. Both these distributions are shown in figure~\ref{fig:reweight}(c,d). The plot legends indicate the mean $\chi^2$ and the standard deviation. The reweighting procedure shifts the $\chi^2$ distribution so that larger weights are given to the random predictions with $\chi^2_k/N_{\rm pts.}\sim 1$.
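In code, this is simply a weighted histogram; for example:
\begin{verbatim}
import numpy as np

def chi2_distribution(chi2, w, bins):
    """Probability per chi^2 bin after reweighting; passing
    w = np.ones_like(chi2) gives the unweighted distribution."""
    hist, edges = np.histogram(chi2, bins=bins, weights=w)
    return hist / len(chi2), edges
\end{verbatim}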
If we rescale the data uncertainties by a factor $\alpha$, then the probability density for the rescaling parameter $\alpha$ is given by~\cite{Ball:2010gb}
\begin{equation} \label{eq:palpha}
\mathcal{P}(\alpha) \propto \frac{1}{\alpha}\sum_{k=1}^{N_{\rm pdf}}W_k\left(\frac{\chi^2_k}{\alpha^2}\right),
\end{equation}
that is, the sum of the unnormalised weights given by eq.~\eqref{eq:weights} with the replacement $\chi^2_k\to \chi^2_k/\alpha^2$. The overall normalisation of eq.~\eqref{eq:palpha} can be determined from the condition that the integral of $\mathcal{P}(\alpha)$ over $\alpha$ gives unity. The probability distribution $\mathcal{P}(\alpha)$ is shown in figure~\ref{fig:reweight}(e,f). These plots suggest that the LHC data on $A_\ell(\eta_\ell)$ are somewhat inconsistent with the data in the MSTW 2008 NLO fit and that the uncertainties on the LHC $A_\ell(\eta_\ell)$ data should be rescaled by a factor 1.37 for CMS and 1.68 for ATLAS to achieve consistency, where these are the most probable values of $\alpha$. Conversely, a most probable value of $\alpha$ much less than 1 would suggest that the experimental uncertainties are overestimated to some extent. In that case, it might be desirable to repeat the reweighting procedure with the replacement $\chi^2_k\to \chi^2_k/\alpha^2$ in eq.~\eqref{eq:weights}, where $\alpha$ is the most probable value.
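Numerically, $\mathcal{P}(\alpha)$ in eq.~\eqref{eq:palpha} can be scanned over a grid of $\alpha$ values, again working in log space for numerical stability; a minimal sketch with dummy $\chi^2_k$ values is given below.
\begin{verbatim}
import numpy as np

def logsumexp(x):
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def log_p_alpha(alpha, chi2, npts):
    """log of the unnormalised P(alpha) defined above."""
    c = np.asarray(chi2, dtype=float) / alpha**2
    logW = 0.5 * (npts - 1) * np.log(c) - 0.5 * c
    return logsumexp(logW) - np.log(alpha)

chi2 = np.random.default_rng(seed=5).chisquare(30, size=1000)
npts = 30
alphas = np.linspace(0.5, 3.0, 251)
logp = np.array([log_p_alpha(a, chi2, npts) for a in alphas])
p = np.exp(logp - logp.max())
p /= np.trapz(p, alphas)           # normalise the integral to unity
alpha_star = alphas[np.argmax(p)]  # most probable rescaling factor
\end{verbatim}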
It is clear (see, for example, the discussion in ref.~\cite{Watt:2011kp}) that there is some considerable tension between the LHC $W\to\ell\nu$ charge asymmetry data and some of the data already included in the MSTW 2008 fit, such as the Tevatron $W\to\ell\nu$ asymmetry, the NMC $F_2^d/F_2^p$ ratio, and the E866/NuSea Drell--Yan $\sigma^{pd}/\sigma^{pp}$ ratio. Other tensions have been observed with the more recent and precise Tevatron data on the $W\to\ell\nu$ charge asymmetry, and partially resolved by more flexible nuclear corrections for deuteron structure functions~\cite{Thorne:2010kj} and extended parameterisation choices for the functional form of the input PDFs. Indeed, we note that the LHC asymmetry $A_\ell(\eta_\ell)$ depends on valence-quark parameterisations near $x\sim 0.01$, and the studies in section~\ref{sec:parambias} suggested that this is the single place where the MSTW 2008 parameterisation is likely to be inadequate. Further attempts to resolve these tensions will be necessary for any future update of the MSTW 2008 fit. Therefore, the reweighting technique is instructive, but does not indicate the ultimate impact of including the new data in a global PDF fit after closer scrutiny of potential sources of tension. Nevertheless, we hope that the new method presented in this section of generating random predictions on-the-fly from the existing eigenvector PDF sets, followed by subsequent Bayesian reweighting, will be useful for a wide range of potential studies by third parties from both the experimental and theoretical communities.
\section{Conclusions} \label{sec:conclusions}
We have made a first study of the Monte Carlo approach to experimental uncertainty propagation in the context of the MSTW 2008 NLO PDF fit~\cite{Martin:2009iq}, either using data replicas or alternatively working directly in parameter space. The main findings of this study are as follows:
\begin{itemize}
\item The Hessian method and the Monte Carlo method using data replicas are approximately equivalent in a global fit when using the same parameterisation and (lack of) tolerance, i.e.~$\Delta\chi^2=1$. Similar findings have previously been observed in a fit only to H1 data~\cite{Dittmar:2009ii}.
\item The Monte Carlo approach using data replicas is better suited to exploring parameterisation bias due to the potentially restrictive input functional form. Increasing the number of parameters from $20\to28$ has only a small effect on PDF uncertainties, with the exception of the valence-quark distributions at low $x$ values where there is a moderate increase in PDF uncertainties. This gives some confidence that, in general, PDF uncertainties in the MSTW 2008 fit are not significantly underestimated due to parameterisation bias, with the possible exception of the strange-quark and -antiquark distributions where the imposed parameterisation constraint is more severe due to the lack of available data constraints.
\item The previous findings raise the question of why the MSTW/CTEQ uncertainties (\emph{with} a tolerance) are similar to the NNPDF uncertainties (\emph{without} a tolerance)~\cite{Watt:2011kp}, given that the tolerance in the former does not appear to be compensating for a functional-form parameterisation that is more restrictive than the flexible neural-network parameterisation. One possibility is that the procedural uncertainties for NNPDF associated with splitting data into training and validation sets mimic the effect of a tolerance for MSTW/CTEQ (see the discussion in section 3.2 of ref.~\cite{DeRoeck:2011na}). Further investigation by the NNPDF Collaboration would be needed to clarify this possible explanation.
\item The Monte Carlo approach using data replicas is also better suited when making fits to limited data sets without the need to restrict the input parameterisation. We compared the global-fit PDFs to those extracted using a similar flexible parameterisation from more limited data sets either excluding HERA data, including only HERA data, or including only collider (HERA and Tevatron) data. The fits to limited data sets gave much larger PDF uncertainties for some parton combinations, implying that we need data from HERA, the Tevatron \emph{and} the fixed-target experiments to get a meaningful result. The PDF uncertainty bands from the fits to the limited data sets are not close to overlapping in many cases, implying that some kind of \emph{tolerance} is needed to accommodate inconsistencies between the various data subsets.
\item As a further exercise to examine the effect of data set inconsistency, we generated idealised pseudodata from the best-fit theory predictions, then introduced deliberate inconsistencies. The fractional PDF uncertainties were very similar whether fitting the real data, the consistent pseudodata or the inconsistent pseudodata. On the other hand, the central values obtained when fitting the inconsistent pseudodata were incompatible given the sizes of the uncertainty bands, even though the $\chi^2_{\rm global}$ only increased by around 10\% and the $\chi^2/N_{\rm pts.}$ for individual data sets did not deviate far above unity. Given that a good fit should have $\chi^2/N_{\rm pts.}$ approximately in the range $1\pm \sqrt{2/N_{\rm pts.}}$~\cite{Collins:2001es}, giving $1.0\pm0.1$ for $N_{\rm pts.}\sim 200$, it is far from obvious how to spot, in the real data, genuine inconsistencies of the size we introduced into the idealised pseudodata. It is definitely not the case that the PDF uncertainties will automatically expand to accommodate any inconsistencies. Again, this suggests the need for a tolerance to accommodate potential data set inconsistencies in the real data.
\item Having established the need for an appropriate tolerance, we pointed out that it could be introduced by rescaling all experimental uncertainties by a common factor (say, 3) in the generation of data replicas. However, the introduction of a ``dynamic'' tolerance for each eigenvector direction is not possible, since no use is made of the covariance matrix of fit parameters in the Monte Carlo error propagation.
\item Instead, we proposed sampling the covariance matrix of fit parameters by stepping along each eigenvector direction by a random amount, including the appropriate tolerance values. This method of generating random PDF sets is close to the usual generation of eigenvector PDF sets in the Hessian method where one steps along each eigenvector direction in turn by a fixed amount.
\item In fact, assuming linearity between the input PDF parameters and derived quantities such as evolved PDFs or hadronic cross sections, an assumption that is inherent in the Hessian method, then it is more convenient to generate random predictions on-the-fly from the existing eigenvector PDF sets.
\item As a simple example application to demonstrate the benefits of having randomly-distributed theory predictions, we used Bayesian reweighting to investigate the effect on the PDFs of recent LHC data on the $W\to\ell\nu$ charge asymmetry. Similar studies can now easily be performed by third parties using any PDF determination where eigenvector PDF sets are provided. The reweighting technique is therefore no longer limited only to using the PDF sets provided by the NNPDF Collaboration.
\end{itemize}
We conclude that the Monte Carlo method using data replicas is, on balance, not superior to the Hessian method in a global fit when using a conventional functional-form parameterisation of the input PDFs. In particular, one of the key benefits of the Monte Carlo approach, namely the use of Bayesian reweighting, can even be accomplished more efficiently using the existing eigenvector PDF sets. Therefore, any future update of the ``MSTW 2008'' analysis will continue to use the Hessian method with a ``dynamic'' tolerance.
\acknowledgments
We thank J.~Andersen, R.~Cousins, G.~Cowan, S.~Forte, S.~Lauritzen, L.~Lyons, A.~D.~Martin, R.~McNulty, H.~Prosper, J.~Pumplin, J.~Rojo, G.~Salam and W.~J.~Stirling for useful discussions. The work of R.S.T.~is supported partly by the London Centre for Terauniverse Studies (LCTS), using funding from the European Research Council via the Advanced Investigator Grant 267352.
\bibliographystyle{JHEP}
The present study concerns important issues on the design of
neuroimaging experiments where the pioneering functional magnetic
resonance imaging (fMRI) technology is employed to gain better
knowledge on how our brain reacts to mental stimuli. In such an fMRI
study, a sequence of tens or hundreds stimuli (e.g., images of
1.5-second flickering checkerboard) is presented to a human subject
while an fMRI scanner repeatedly scans the subject's brain to collect
data for making statistical inference about brain activity; see \citet
{Lazar2008bk51} for an overview of fMRI. The quality of such an
inference largely hinges on the amount of useful information contained
in the data, which in turn depends on the selected stimulus sequence
(i.e., fMRI design). The importance of identifying high-quality
experimental designs for fMRI and gaining insights into these designs
cannot be overemphasized.
In a seminal work, \citet{BuracasBoynton2002ar51} advocated the
use of the maximal length linear feedback shift
register sequences (or $m$-sequences) as fMRI designs for a precise
estimate of the hemodynamic response function (HRF);
the HRF models the effect over time of a brief stimulus on the relative
concentration of oxy- to deoxy-blood in the cerebral blood
vessels at a brain voxel (3D image unit), and is often studied for
gaining information about the underlying brain activity evoked by the stimulus.
The $m$-sequences have since then become very popular in practice. They are
also included as part of the ``good'' initial designs in the computer
algorithm of \citet{Kaoetal2009ar51} to facilitate the search for
multi-objective fMRI designs.
The computational results of \citet{BuracasBoynton2002ar51} and
\citet{Liu2004ar51} suggested
that the $m$-sequences can yield high statistical efficiencies in terms
of the $A$-optimality criterion; the $A$-criterion, which measures the
average variance of parameter estimates, is a design selection
criterion widely used in many fields including fMRI [e.g., \citet{Dale1999ar51, Fristonetal1999ar51}].
By focusing on the $D$-optimality criterion, \citet{Kao2014ar51}
proved the statistical optimality of
the binary $m$-sequences in estimating the HRF. As indicated there, the
binary $m$-sequences are special cases of the \textit{Hadamard
sequences} that can be generated from a certain type of Hadamard
matrices or difference sets (Section~\ref{sec2.3}); all these
designs are \mbox{$D$-optimal} in the sense of minimizing the generalized
variance of the HRF parameter estimates. While these designs are
expected to be $A$-optimal, there unfortunately is no theoretical proof
of this. One of our contributions here is to fill this void. We also
identify some new classes of optimal fMRI designs for the estimation of
the HRF.
Another common study objective is on comparing HRFs of two stimulus
types (e.g., pictures of familiar vs. unfamiliar faces). Some
computational results on optimal fMRI designs for this study objective
have been reported in \citet{WagerNichols2003ar51}, \citet
{Liu2004ar51}, \citet{Kaoetal2008ar51}, \citet{Kaoetal2009ar51} and \citet{Mausetal2010ar51}. However,
theoretical work on providing insightful knowledge to guide the
selection of designs is scarce. In their pioneering papers, \citet
{LiuFrank2004ar51} and \citet{Mausetal2010ar51}
approximated the frequency of each stimulus type that an $A$- or
$D$-optimal fMRI design should possess. However, designs attaining the
optimal stimulus frequency can still be sub-optimal since the onset
times and presentation order of the stimuli play a vital role. Working
on this research line, \citet{Kao2014ar52} provided a sufficient
condition for fMRI designs
to be universally optimal in the sense of \citet
{Kiefer1975ar2021}, and proposed to construct optimal designs for
comparing two HRFs via an extended $m$-sequence (or de Bruijn
sequence), a Paley difference set or a circulant partial Hadamard
matrix. A major limitation of this recent contribution is that the
proposed designs exist only when the design length $N$ is a multiple of
$4$. New developments on identifying optimal fMRI designs for other
practical $N$ are called for.
We consider the two previously described design problems to target
optimal fMRI designs for the estimation of the HRF of a stimulus type
and for the comparison of two HRFs. Our main idea for tackling these
design issues is by formulating them into \textit{circulant biased
weighing design} problems. With this approach, we are able to prove
that the Hadamard sequences are optimal in terms of a large class of
optimality criteria that include both $A$- and $D$-criteria. This holds
as long as the design length $N$ ($=4t-1$ for a positive integer $t$) of
such a design is sufficiently greater than the number of the HRF
parameters, $K$. For given $K$, a lower bound of $N$ for the design to
be both $A$- and $D$-optimal is also derived. This bound is easily
satisfied in typical fMRI experiments. In addition, we adapt and extend
previous results on (biased) weighing designs to identify some optimal
fMRI designs for estimating the HRF when $N = 4t$ and $N = 4t + 1$.
These results are further extended to cases where the study objective
is on the comparison of two HRFs. We note that the designs that we
present here exist in many design lengths for which optimal fMRI designs are
hitherto unidentified. These designs can be applied in practice or
serve as benchmarks to evaluate other designs; they help to enlarge the
library of high-quality fMRI designs.
The remainder of the paper is organized as follows. In Section~\ref{sec2}, we
provide relevant background information and present our main results on
optimal fMRI designs for estimating
the HRF. Our results on optimal fMRI designs for the comparison of two
HRFs are in Section~\ref{sec3}, and a conclusion is in Section~\ref{sec4}. Some proofs of
our results are presented in the \hyperref[app]{Appendix}.
\section{Designs for estimating the HRF} \label{sec2}
\subsection{Statistical model and design selection criteria} \label{sec2.1}
Consider an fMRI study where a mental stimulus such as a 1.5-second
flickering checkerboard
image [\citet{Boyntonetal1996ar51, Miezinetal2000ar51}] or a
painful heat stimulus [\citet{Worsleyetal2002ar51}]
is presented/applied to a subject at some of the $N$ time points in the
experiment. Let $y_n$ be the measurement of a brain voxel
collected by an fMRI scanner at the $n$th time point, $n=1,\ldots,N$.
We consider the following statistical model:
\begin{equation}
\qquad y_n = \gamma+ x_n h_1 + x_{n-1}
h_2 + \cdots+ x_{n-K+1} h_K +
\varepsilon_n,\qquad n = 1, \ldots, N. \label{Model1}
\end{equation}
Here, $\gamma$ is a nuisance parameter, $h_1$ represents the unknown
height of the hemodynamic response function, HRF,
at the stimulus onset time point and $h_k$ is the HRF height at the
$(k-1)${th} time point following the stimulus onset.
The pre-specified integer $K$ is such that the HRF becomes negligible
after $K$ time points. The value of $x_{n-k+1}$ in
model (\ref{Model1}) is set to $1$ if $h_k$ contributes to $y_n$ and
$x_{n-k+1} = 0$ otherwise, and $\varepsilon_n$ is noise.
Our first design goal is to find an fMRI design, $\db= (d_1, \ldots,
d_N)^T$, that allows the most precise least-squares estimate of the HRF
parameter vector, $\mathbf{h}=(h_1, \ldots, h_K)^T$; here,
$d_n=1$ when a
stimulus appears at the $n$th time point and $d_n =0$ indicates no
stimulus presentation at that time point, $n=1,\ldots,N$. For simplicity,
we adopt the following assumptions from previous studies [\citet{Kao2013ar51} and
references therein]; see also Kao (\citeyear{Kao2014ar51,Kao2014ar52}) for discussions on these
assumptions. First, the last $K-1$ elements of $\db$ are also presented
in the pre-scanning period, that is, before the collection of $y_1$.
With this assumption, the value of $x_n$ in model (\ref{Model1}) is
$d_n$ for $n =1,\ldots, N$, and $x_{n} = d_{N+n}$ for $n \leq0$. In
addition, while additional nuisance terms may be included in the model
at the analysis stage to, say, allow for a trend/drift of $\yb= (y_1,
\ldots, y_N)^T$, we do not assume this extra complication when deriving
our analytical results on identifying optimal designs. We also consider
independent noise, but our results remain true when $\operatorname{cov}(\epsb) =
\alpha\Ib_N + \betab\mathbf{j}^T_N + \mathbf{j}_N\betab^T$, where $\epsb=(\varepsilon
_1, \ldots, \varepsilon_N)^T$, $\alpha$ is a constant, $\betab$ is a
constant vector, $\Ib_{N}$ is the identity matrix of order $N$, and
$\mathbf{j}
_N$ is the vector of $N$ ones; see also \citet{Kushner1997ar51}.
Other correlation structures of $\epsb$ such as an autoregressive
process may be considered, and is a focus of our future study. We now
rewrite model (\ref{Model1}) in the following matrix form:
\begin{equation}
\yb= \gamma\mathbf{j}_N + \Xb_d \mathbf{h}+ \epsb, \label{Eq:Model}
\end{equation}
where $\Xb_d = [\db, \Ub\db, \ldots, \Ub^{K-1}\db]$, and
\begin{equation}
\Ub= \left[ \matrix{ \zerob^T_{N-1}
& 1
\vspace*{2pt}\cr
\Ib_{N-1} & \zerob_{N-1}}\right].
\label{Eq:U}
\end{equation}
The information matrix for $\mathbf{h}$ is
$\Mb_b(\Xb_d) = \Xb_d^T(\Ib_N - N^{-1}\Jb_N)\Xb_d$, where $\Jb_N
= \mathbf{j}
_N\mathbf{j}^T_N$. We also let $\Mb(\Xb_d)= \Xb_d^T\Xb_d$.
Our aim is to find a $\db\in\mathcal{D}
= \{0,1\}^N$
that minimizes some real function $\Phi\{\Mb_b(\Xb_d)\}$ of $\Mb
_b(\Xb
_d)$. We consider the $A$-optimality criterion, $\Phi_A\{\Mb\} = \operatorname{tr}\{
\Mb
^{-1}\}/K$ for a positive definite $\Mb$, and $D$-optimality criterion,
$\Phi_D\{\Mb\} = |\Mb|^{-1/K}$. In addition, we adopt below some other
notions of optimality of designs and information matrices.
Specifically, the universal optimality described in Definition~\ref
{Def:UO} is due to \citet{Kiefer1975ar2021}. The type 1 criteria
of \citet{Cheng1978ar2021} with the version of \citet
{Cheng2014ar2021}, the $\Phi_p$-optimality criteria of \citet
{Kiefer1974ar2021} for $p \geq0$, and the $(M,S)$-optimality
[\citet{EcclestonHedayat1974ar2021}] are also considered.
Throughout this work, we set the criterion value to $+\infty$ for
designs with a singular information matrix.
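For later numerical illustrations, the matrices $\Xb_d$ and $\Mb_b(\Xb_d)$ are easy to assemble; a minimal Python sketch (with illustrative names) is given below. It exploits the fact that, under our assumptions, $\Ub^k\db$ is simply $\db$ shifted down cyclically by $k$ positions.
\begin{verbatim}
import numpy as np

def design_matrices(d, K):
    # X_d = [d, Ud, ..., U^{K-1} d], built via cyclic shifts of d.
    d = np.asarray(d, dtype=float)
    N = d.size
    X = np.column_stack([np.roll(d, k) for k in range(K)])
    M = X.T @ X                      # M(X_d)
    s = X.sum(axis=0)                # column sums, i.e. j_N' X
    Mb = M - np.outer(s, s) / N      # M_b(X_d) = X'(I - J/N) X
    return X, M, Mb
\end{verbatim}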
\begin{definition}\label{Def:UO}
A design $\db$ is said to be universally optimal over a design class if
it minimizes $\Phi\{\Mb_b(\Xb_d)\}$ for all convex functions $\Phi$
such that
(i) $\Phi\{c\Mb\}$ is nonincreasing in $c>0$ and (ii) $\Phi(\Pb\Mb
\Pb
^T) = \Phi(\Mb)$ for any $\Mb$ and any orthogonal matrix $\Pb$.
\end{definition}
\begin{definition}\label{Def:Type1} A design $\db$ is said to be
optimal over a design class with respect to all the type 1 criteria if
it minimizes $\Phi_{(f)}\{\Mb_b(\Xb_d)\} = \sum_{i=1}^K f(\lambda
_i(\Mb
_b (\Xb_d)))$ for any real-valued function $f$ defined on $[0, \infty)$
such that
(i) $f$ is continuously differentiable in $(0, \infty)$ with $f' <0$,
$f'' > 0$, and $f''' <0$ and (ii) $\lim_{x \rightarrow0^+} f(x) = f(0)
= \infty$. Here $\lambda_i(\Mb_b(\Xb_d))$ is the $i$th greatest
eigenvalue of $\Mb_b(\Xb_d)$, $i=1,\ldots,K$.
\end{definition}
\begin{definition}\label{Def:Phip} A design $\db$ is said to be $\Phi
_p$-optimal over a design class for a given $p \geq0$ if
it minimizes
\[
\Phi_p\bigl\{\Mb_b(\Xb_d)\bigr\} = \cases{
\bigl|\Mb_b(\Xb_{d})\bigr|^{-1/K},
& \quad $\mbox{for $p=0$;}$
\vspace*{2pt}\cr
\bigl[ \operatorname{tr}\bigl\{\Mb^{-p}_b(\Xb_d)\bigr\}/K
\bigr]^{1/p}, &\quad $\mbox{for $p \in(0, \infty)$;}$
\vspace*{2pt}\cr
\lambda_1\bigl(\Mb^{-1}_b(
\Xb_d)\bigr), &\quad $\mbox{when $p=\infty$,}$ }
\]
where $\lambda_i(\Mb_b(\Xb_d))$ is defined as in Definition~\ref{Def:Type1}.
\end{definition}
\begin{definition}\label{Def:MSopt} A matrix $\Mb^*$ is said to be
$(M,S)$-optimal over a class $\mathcal{M}$
of nonnegative definite matrices if (i) $\operatorname{tr}\{\Mb^*\} = \max_{M \in
\mathcal{M}} \operatorname{tr}\{\Mb\}$, and (ii) $\operatorname{tr}\{(\Mb^*)^2\} = \min_{M \in
\mathcal{M}_m} \operatorname{tr}\{\Mb^2\}$, where $\mathcal{M}_m \subset\mathcal{M}$
consists of all the matrices having the same trace as $\Mb^*$.
\end{definition}
Furthermore, we only consider optimality criteria $\Phi$ such that
\begin{equation}
\qquad\mbox{if}\quad \Phi(\Mb_1)\leq\Phi(\Mb_2),\qquad\mbox{then }
\Phi(c\Mb _1)\leq\Phi (c\Mb_2)\qquad\mbox{for all }c>0.
\label{Eq:c}
\end{equation}
\subsection{Circulant biased weighing designs} \label{sec2.2}
Our strategy for finding optimal fMRI designs is by taking advantage of
the link between these designs and circulant biased weighing designs.
A biased weighing design problem concerns the selection of a design for
efficient estimation of the weights of $K$ objects in $N$ weighings on
a spring/chemical balance
that has an unknown systematic bias. A~spring balance weighing design
(SBWD) is specified by
a $\Wb\in\{0, 1\}^{N\times K}$, where the $(n,k)${th} element of
$\Wb
$ indicates that the $k$th object is placed on the balance~($1$), or
absent ($0$) in the $n$th weighing. Such a design is called \textit
{circulant} if $\Wb$ is a circulant matrix. The information matrix
$\Mb
_b(\Wb)$ for the $K$ weights is equal to $\Wb^T(\Ib_N - N^{-1}\Jb
_N)\Wb
$. For each fMRI design $\db$, the matrix $\Xb_d$ clearly defines a
circulant SBWD. Thus, the fMRI design issue formulated earlier is a
sub-problem of the optimal SBWD problem: selecting an optimal design
among circulant SBWDs.
A chemical balance weighing design (CBWD) is specified by a $\bar{\Wb}
\in\{-1, 0, 1\}^{N\times K}$, where the $(n,k)${th} element of $\bar
{\Wb}$ indicates that the $k$th object is placed on the left pan
($-1$), right pan ($+1$), or absent ($0$) in the $n$th weighing.
Each SBWD matrix $\Wb$ can be transformed into a CBWD matrix $\tilde
{\Wb
}$ via $\tilde{\Wb}=\pm(\Jb_{N,K}-2\Wb)$, where $\Jb_{N,K} =
\mathbf{j}_N\mathbf{j}
_K^T$; that is, 0 and~1 are replaced by 1 and $-1$, or $-1$ and~1,
respectively. Given an fMRI design $\db\in\mathcal{D}$, let $\tilde
{\db
}=\pm(\mathbf{j}_N-2\db)$ and $\Xb_{\tilde{d}} = [\tilde
{\db}, \Ub\tilde{\db},
\ldots, \Ub^{K-1}\tilde{\db}]$, where $\Ub$ is defined as in (\ref
{Eq:U}); then, $\tilde{\db}\in\tilde{\mathcal{D}}=\{-1,1\}^N$, and
$\Xb
_{\tilde{d}}$ is a circulant CBWD matrix. Specifically, if we write
$\Xb
_d$ as $\Wb$, then $\Xb_{\tilde{d}}=\tilde{\Wb}$.
\citet{Cheng2014ar2021} showed that
\begin{equation}
\Mb_b(\Wb)=\tfrac{1}{4}\Mb_b(\tilde{\Wb})\qquad\mbox{for all }\Wb\in \{0, 1\} ^{N\times K}. \label{Eq:information}
\end{equation}
We thus have the following result.
\begin{lemma} \label{Lemma:biasedWD}
For any $\Phi$ satisfying (\ref{Eq:c}) and any $\Wb_1,\Wb_2\in\{0,
1\}
^{N\times K}$, $\Phi(\Mb_b(\Wb_1))\leq\Phi(\Mb_b(\Wb_2))$ if and only
if $\Phi(\Mb_b(\tilde{\Wb}_1))\leq\Phi(\Mb_b(\tilde{\Wb}_2))$.
Therefore,
an fMRI design $\db^* \in\mathcal{D}$ is $\Phi$-optimal for estimating
the HRF if and only if
\[
\Phi\bigl\{\Mb_b(\Xb_{\tilde{d}^*})\bigr\} = \min
_{\tilde{d} \in\tilde
{\mathcal
{D}}} \Phi\bigl\{\Mb_b(\Xb_{\tilde{d}})\bigr\}\qquad
\mbox{where } \tilde{\db }^* = \pm \bigl(\mathbf{j}_N - 2
\db^*\bigr).
\]
\end{lemma}
Lemma~\ref{Lemma:biasedWD} reduces the problem of finding optimal fMRI
designs for estimating an HRF to that of identifying optimal biased
circulant CBWDs without zero entries. In Section~\ref{sec3}, we
establish the connection of optimal designs for comparing two HRFs to
that of optimal biased circulant CBWDs allowing zero entries.
\subsection{Main results} \label{sec2.3}
Following Lemma~\ref{Lemma:biasedWD}, we tackle our first fMRI design
issue by working on circulant CBWDs ($\Xb_{\tilde{d}}$ for $\tilde
{\db}
\in\tilde{\mathcal{D}}$) that contain no zero. We have the following
result for such CBWDs.
\begin{lemma} \label{Lemma_mod3}
For $\tilde{\db} \in\tilde{\mathcal{D}}$, $\Mb(\Xb_{\tilde{d}})
= \Xb
^T_{\tilde{d}}\Xb_{\tilde{d}}$ has
diagonal elements equal to $N$ and off-diagonal elements congruent to
$N$ modulo 4. When $N$ is odd,
$\Mb_b(\Xb_{\tilde{d}}) \preceq\Mb(\Xb_{\tilde{d}}) - N^{-1}\Jb
_K$ and
the equality holds if $\mathbf{j}_N^T\tilde{\db} = \pm1$;
here, $\preceq$ is
the L\"owner ordering, that is, $\Mb_1 \preceq\Mb_2 $ if $\Mb_2 -\Mb
_1$ is
nonnegative definite.
\end{lemma}
\begin{pf}
All the diagonal elements of $\Mb(\Xb_{\tilde{d}})$ are clearly
$\tilde
{\db}^T\tilde{\db} = N$. In addition, for $q, r \in\{-1, +1\}$, let
$n^{(rq)}_k$ be the number of times $(\tilde{d}_{n-k}, \tilde
{d}_n) = (q,r)$, where $\tilde{d}_{n}$ is the $n$th element of
$\tilde{\db}$ for $n=1,\ldots, N$, and $\tilde{d}_n$ is set to
$\tilde
{d}_{N+n}$ when $n \leq0$. We have, for any $i \neq j$ and $k=|i-j|$,
the $(i,j)${th} element of $\Mb(\Xb_{\tilde{d}})$ is $n^{(++)}_k
+ n^{(--)}_k - (n^{(+-)}_k + n^{(-+)}_k) = N -
4n^{(-+)}_k$, and is thus congruent to $N$ modulo 4. Note that
the above equality is a consequence of the fact that $\Xb_{\tilde{d}}$
is a circulant matrix. Moreover, $\Mb_b(\Xb_{\tilde{d}}) = \Mb(\Xb
_{\tilde{d}}) - a^2 N^{-1}\Jb_K$, where $a = \mathbf{j}_N^T\tilde{\db}$ with
$a^2 \geq1$ if $N$ is odd. Thus, $\Mb(\Xb_{\tilde{d}}) - N^{-1}\Jb
_K -
\Mb_b(\Xb_{\tilde{d}}) = (a^2 - 1)N^{-1}\Jb_K$, and our claim follows.
\end{pf}
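As a minimal illustration of the lemma, take $N = 3$, $K = 2$ and $\tilde{\db} = (1, 1, -1)^T$, so that $\mathbf{j}_3^T\tilde{\db} = 1$; then
\[
\Xb_{\tilde{d}} = \left[ \matrix{ 1 & -1 \vspace*{2pt}\cr 1 & 1 \vspace*{2pt}\cr -1 & 1} \right],\qquad
\Mb(\Xb_{\tilde{d}}) = \left[ \matrix{ 3 & -1 \vspace*{2pt}\cr -1 & 3} \right],\qquad
\Mb_b(\Xb_{\tilde{d}}) = \Mb(\Xb_{\tilde{d}}) - \tfrac{1}{3}\Jb_2,
\]
where the off-diagonal entry $-1$ is indeed congruent to $N = 3$ modulo $4$, and the equality case of the lemma applies.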
We now provide some results for obtaining optimal circulant biased
CBWDs with no zero, and hence, optimal fMRI designs for estimating the HRF.
For cases with $N=4t-1\ (\geq4)$, the following lemma due to \citet
{Cheng1992ar2021} is useful.
\begin{lemma} \label{Lemma_C1992}
Let $\mathcal{M}^N$ be a set of $K$-by-$K$ symmetric and nonnegative
definite matrices,
$\mathcal{M}^N_m \subset\mathcal{M}^N$ be the set of matrices that
have the maximum trace over $\mathcal{M}^N$, and $\mathcal{M}^N_{ms}$
be the set of $\Mb$ that minimize $\operatorname{tr}(\Mb^2)$ over $\mathcal{M}^N_m$.
Suppose $A_N = \max_{M \in\mathcal{M}^N} \operatorname{tr}\{\Mb\}$ and $B_N = \min_{M
\in\mathcal{M}^N_m} \operatorname{tr}\{\Mb^2\}$ are such that \textup{(a)} $\lim_{N\rightarrow
\infty} A_N = \infty$, and \textup{(b)} for some $L >0$, $|B_N - K^{-1}A^2_N|
\leq L$ for all $N$. In addition, let $\lambda_i(\Mb)$ be as in
Definition~\ref{Def:Type1}, and
$\Phi_{(g)}\{\Mb\} = \sum_{i=1}^K g(\lambda_i(\Mb))$ for a
real-function $g$ satisfying the following two conditions: \textup{(i)} $g$ is
thrice continuously differentiable in a neighborhood of 1 with $g'(1) <
0$ and $g''(1) > 0$ and \textup{(ii)} for any $c > 0$, there are constants
$\alpha(c) > 0$ and $\beta(c)$ such that $g(cx) = \alpha(c) g(x) +
\beta
(c)$ for all $x$. Then there exists an $N_0(K, g)$ such that whenever
$N \geq N_0(K, g)$, for any $\Mb^*\in\mathcal{M}^N_{ms}$, we have
$\Phi
_{(g)}\{\Mb^*\} < \Phi_{(g)}\{\Mb\}$ for all $\Mb\notin\mathcal
{M}^N_{ms}$.
\end{lemma}
We note that when $g(x) = -\log x$, $\Phi_{(g)}$ is equivalent to the
$D$-criterion, or
equivalently, the $\Phi_p$-criterion in Definition~\ref{Def:Phip} with
$p=0$. This and the other $\Phi_p$-criteria satisfy conditions (i) and
(ii) in
Lemma~\ref{Lemma_C1992}; see also \citet{Cheng1992ar2021}. We thus
have the following result on $\Phi_p$-optimal circulant biased CBWDs and
$\Phi_p$-optimal fMRI designs for $N=4t -1$.
\begin{theorem} \label{thmm1}
Let $N = 4t - 1$, $p_0 > 0$, and $\tilde{\db}^* \in\tilde{\mathcal
{D}}$ be such that\break
$\Mb_b(\Xb_{\tilde{d}^*}) = (N+1)[\Ib_K - N^{-1}\Jb_K]$. Then there
exists an $N_0(K, p_0)$ such that, whenever
$N \geq N_0(K, p_0)$, $\tilde{\db}^*$ is $\Phi_p$-optimal over
$\tilde
{\mathcal{D}}$, and $\db^* = (\mathbf{j}_N \pm\tilde{\db
}^*)/2$ is $\Phi
_p$-optimal over $\mathcal{D}$
for any $p \in[0, p_0]$.
\end{theorem}
\begin{pf}
We first work on $\Mb(\Xb_{\tilde{d}}) - N^{-1}\Jb_K$ for $\tilde
{\db}
\in\tilde{\mathcal{D}}$. Following Lemma \ref{Lemma_mod3}, the
diagonal elements of $\Mb(\Xb_{\tilde{d}}) - N^{-1}\Jb_K$ are $N_b =
N-N^{-1}$, and the $(i,j)${th} element of this matrix is
$(c_{i,j} - N^{-1})$ with $c_{i,j} = 3 (\mathrm{mod}\ 4)$ for $i\neq j$. Thus,
$\operatorname{tr}\{[\Mb(\Xb_{\tilde{d}}) - N^{-1}\Jb_K]^2\}$ is minimized when
$c_{i,j} = -1$ for all $i \neq j$. This implies the $(M,S)$-optimality
of $\Mb(\Xb_{\tilde{d}^*}) - N^{-1}\Jb_K$ over $\tilde{\mathcal
{M}}_b =
\{\Mb(\Xb_{\tilde{d}}) - N^{-1}\Jb_K \mid\tilde{\db} \in\tilde
{\mathcal{D}} \}$. In addition, it can be seen that $A_N = KN_b$, and
$B_N = KN_b^2 +(1+N^{-1})^2K(K-1)$, where $A_N$ and $B_N$ are defined
as in Lemma~\ref{Lemma_C1992}. Therefore, $\lim_{N\rightarrow\infty}
A_N = \infty$, and $|B_N - K^{-1}A^2_N| = |KN_b^2 +(1+N^{-1})^2K(K-1) -
KN^2_b| = (1 + N^{-1})^2K(K-1)$ is bounded above by a positive number
for all $N$. Following Lemmas \ref{Lemma_mod3}, and \ref{Lemma_C1992},
we then have, when $N \geq N_0(K, p_0)$ for some $N_0(K, p_0)$,
\begin{eqnarray*}
\Phi_{p_0}\bigl\{\Mb_b(\Xb_{\tilde{d}^*})\bigr\} &=&
\Phi_{p_0}\bigl\{\Mb(\Xb _{\tilde
{d}^*}) - N^{-1}
\Jb_K\bigr\}
\\
& \leq&\Phi_{p_0}\bigl\{\Mb(\Xb_{\tilde{d}}) - N^{-1}
\Jb_K\bigr\} \leq\Phi_{p_0}\bigl\{\Mb_b(
\Xb_{\tilde{d}})\bigr\}
\end{eqnarray*}
for any $\tilde{\db} \in\tilde{\mathcal{D}}$; here $\Phi_{p_0}$ is
defined as in Definition~\ref{Def:Phip}. The $\Phi_p$-optimality of
$\tilde{\db}^*$ over $\tilde{\mathcal{D}}$ for $p \in[0, {p_0}]$ then
follows from Corollary~3.3 of \citet{Cheng1987ar2021} and the fact
that $\Mb_b(\Xb_{\tilde{d}^*})$ has two eigenvalues, with the smaller
one having multiplicity $1$. Moreover, with Lemma~\ref{Lemma:biasedWD},
we obtain the $\Phi_p$-optimality of $\db^*$ over $\mathcal{D}$.
\end{pf}
In Theorem~\ref{4546465545485}, we provide an $N_0(K, 1)$ for a design to be
$\Phi_1$-optimal (i.e., $A$-optimal).
Our approach for deriving this bound for $N$ is analogous to that of
\citet{GalilKiefer1980ar2021} and
\citet{SatheShenoy1989ar2021}. A proof is provided in the \hyperref[app]{Appendix}.
\begin{theorem}\label{4546465545485}
Consider the same conditions as in Theorem~\ref{thmm1}. Let $N_0(K, 1)$
be the greatest real root of the cubic function
$c(x) = 2x^3 + (10-7K)x^2 + 2(2K-5)(K-1)x + 4K^2 - 7K$. If $K \geq4$
and $N \geq N_0(K, 1)$, then $\tilde{\db}^*$
is $\Phi_p$-optimal over $\tilde{\mathcal{D}}$, and $\db^*$ is
$\Phi
_p$-optimal over $\mathcal{D}$ for $0\leq p \leq1$.
\end{theorem}
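Since $N_0(K, 1)$ is defined through the roots of a cubic, it is easily evaluated numerically; the following Python sketch does so, and for $K = 9$ it returns $N_0(9, 1) \approx 21.34$, the value quoted in Example~\ref{examp1} below.
\begin{verbatim}
import numpy as np

def N0_K1(K):
    # Greatest real root of
    # c(x) = 2x^3 + (10-7K)x^2 + 2(2K-5)(K-1)x + 4K^2 - 7K.
    coeffs = [2.0, 10.0 - 7.0 * K,
              2.0 * (2 * K - 5) * (K - 1), 4.0 * K ** 2 - 7.0 * K]
    return max(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9)

print(N0_K1(9))   # ~21.34; the smallest admissible N = 4t - 1 is 23
\end{verbatim}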
Recently, \citet{Kao2014ar51} studied the efficiency of Hadamard
sequences, $\db_H$, in estimating $\mathbf{h}$ of model
(\ref{Eq:Model}).
A Hadamard sequence is a binary sequence constructed from a normalized
Hadamard matrix $\Hb\in\{-1,1\}^{(N+1) \times(N+1)}$ that contains a
circulant core $\tilde{\Hb}$. Such an $\Hb$ is such that $\Hb\Hb^T =
(N+1)\Ib_{N+1}$, the elements of its first row and column are all $1$,
and the bottom-right $N$-by-$N$ sub-matrix $\tilde{\Hb}$ is a circulant
matrix. These Hadamard matrices are known to exist when $N$ is a prime,
a product of twin primes, or $2^r - 1$ for an integer $r > 1$. They can
be easily generated by, for example, the Paley, Singer, or twin prime
power difference sets [\citet{GolombGong2005bk101,
Horadam2007bk101}]. Any column of the circulant core $\tilde{\Hb}$
is a vertex, $\tilde{\db}_H$, of the hypercube $\tilde{\mathcal{D}}
= \{
-1, 1\}^N$, and $\db_H = (\mathbf{j}_N -\tilde{\db}_H)/2$
forms a Hadamard
sequence. The popularly used binary $m$-sequences [\citet
{BuracasBoynton2002ar51}] can be constructed by the same method
when $N = 2^r - 1$, and are thus special cases of $\db_H$. The $\db_H$
has design length $N = 4t - 1$ for some integer $t$, and our results
can be applied to establish the $A$- and $D$-optimality of these
designs as stated in the following corollary.
\begin{corollary} \label{cor1}
A Hadamard sequence $\db_H$ of length $N$ is $A$- and $D$-optimal for
estimating the HRF if $K \geq4$ and $N \geq N_0(K, 1)$.
Here, $N_0(K, 1)$ is defined as in Theorem~\ref{4546465545485}.
\end{corollary}
\begin{pf} For $\db_H$, let $\tilde{\db}_H = \mathbf{j}_N- 2\db_H$. Then it
can be seen from the
construction of $\db_H$ that $\Xb_{\tilde{d}_H}$ is a circulant matrix
consisting of $K$ distinct columns of the circulant core of a
normalized Hadamard matrix. Consequently, $\Mb_b(\Xb_{\tilde{d}_H}) =
(N+1)[\Ib_K - N^{-1}\Jb_K]$.
Our claim then follows from Lemma~\ref{Lemma:biasedWD} and Theorem~\ref{4546465545485}.
\end{pf}
Our results so far are for cases with $N = 4t - 1$. For $N = 4t$, if
there exists a $\tilde{\db}$ with $\Mb_b(\Xb_{\tilde{d}}) = N\Ib_K$,
then $\tilde{\db}$ is universally optimal over $\tilde{\mathcal{D}}$,
and the corresponding $\db= (\mathbf{j}_N \pm\tilde{\db
})/2$ is universally
optimal in estimating the HRF over
all fMRI\vspace*{1pt} designs. This fact follows directly from Proposition~1$'$ of
\citet{Kiefer1975ar2021}. We note that $\tilde{\db}$ is
universally optimal whenever the columns of $\Xb_{\tilde{d}}$
are pairwise orthogonal, and are all orthogonal to $\mathbf{j}_N$. The
transpose of such a matrix $\Xb_{\tilde{d}}$ is called a \textit
{circulant partial Hadamard matrix} by \citet
{Craigenetal2013ar101}. Clearly, the corresponding $\Xb_{d}$ with
$\db= (\mathbf{j}_N \pm\tilde{\db})/2$ forms a
two-symbol, $N$-run,
$K$-factor \textit{circulant orthogonal array (OA)} whose strength is
$\geq2$; see \citet{Hedayatetal1999bk101} for an overview of
OAs. A circulant partial Hadamard matrix, and thus a circulant OA, can
be obtained by a computer search [\citet{Linetal2014ar51,
Lowetal2005ar101}]. Here, we provide a systematic method for
constructing a universally optimal $\db$.
\begin{theorem}\label{thmm3}
Let $\db_{1,g, H} \in\mathcal{D}$ be obtained by inserting a $0$ to a
run of $g$ $0$'s in a Hadamard sequence $\db_H$.
If $K \leq g + 1$, then $\db_{1,g,H}$ is universally optimal for
estimating $\mathbf{h}$ of model (\ref{Eq:Model}).
\end{theorem}
\begin{pf} Without loss of generality, we assume that a run of $g$
$0$'s appears in the tail of $\db_H$, and $\db_{1,g, H}$ is obtained by
adding a $0$ to this run of $0$'s. Suppose $K \leq g + 1$, and $\tilde
{\db}_{1,g, H} =\mathbf{j}_N - 2\db_{1,g, H}$. It can be
seen that $\Xb_{\tilde
{d}_{1,g, H}}$ is an $N$-by-$K$ circulant
orthogonal array
whose columns are some $K$ distinct columns of a Hadamard matrix. The
columns of $\Xb_{\tilde{d}_{1,g, H}}$ are thus pairwise orthogonal and,
by the construction, orthogonal to $\mathbf{j}_N$, so that
$\Mb_b(\Xb_{\tilde{d}_{1,g, H}}) = N\Ib_K$; the universal optimality of
$\db_{1,g, H}$ then follows from Proposition~1$'$ of
\citet{Kiefer1975ar2021}.
\end{pf}
For $N = 4t + 1$, Theorem~4.1 of \citet{Cheng2014ar2021} provides
guidance on the selection of $\Phi_{(f)}$-optimal biased CBWDs for
any type 1 criterion $\Phi_{(f)}$.
We describe this result in Lemma~\ref{lemma5} with our notation. It is
interesting to note that, under our setting, a simple alternative proof
of Lemma~\ref{lemma5} can be achieved by
utilizing Theorem~2.1 of \citet{Cheng1980ar2021} that is slightly
rephrased in Lemma~\ref{C1980} below.
\begin{lemma}\label{C1980}
Let $\Mb^*$ be a symmetric matrix with eigenvalues $\lambda_1(\Mb^*) >
\lambda_2(\Mb^*) = \lambda_3(\Mb^*) = \cdots= \lambda_{K}(\Mb^*)
> 0$
and $\mathcal{M}$ be a set of nonnegative definite symmetric matrices.
If the following
conditions are satisfied, then\break $\Phi_{(f)}\{\Mb^*\} \leq\Phi_{(f)}\{
\Mb
\}$ for any $\Mb\in\mathcal{M}$ and any type 1 criterion $\Phi_{(f)}$:
\begin{longlist}[(a)]
\item[(a)] $\operatorname{tr}\{\Mb^*\} \geq \operatorname{tr}\{\Mb\}$ for any $\Mb\in\mathcal{M}$;
\item[(b)] for any $\Mb\in\mathcal{M}$, $\operatorname{tr}\{\Mb^*\} - \sqrt
{[K/(K-1)][\operatorname{tr}\{(\Mb^*)^2\} - (\operatorname{tr}\{\Mb^*\})^2/K]} \geq$ $\operatorname{tr}\{\Mb\} -
\sqrt{[K/(K-1)][\operatorname{tr}\{\Mb^2\} - (\operatorname{tr}\{\Mb\})^2/K]}$.
\end{longlist}
\end{lemma}
Note that since the condition $\lim_{x \rightarrow0^+} f(x) = f(0) =
\infty$ is required in Definition~\ref{Def:Type1}, there is no need to
verify (2.2) in Theorem~2.1 of \citet{Cheng1980ar2021}; see
Theorem~2.3 of \citet{Cheng1978ar2021}.
\begin{table}[b]
\caption{A Hadamard sequence $\db_H$, and a $\db_{1,g,H}$ for
estimating $\mathbf{h}$ with $K \leq9$}\label{tab1}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcl@{}}
\hline
& $\bolds{N}$ & \multicolumn{1}{c@{}}{\textbf{Design}} \\
\hline
$\db_H$ & 151 & 1 0 0 1 0 0 1 1 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 1 1 0 1 1 1 0 1 0 0 1 0 1\\
&& 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 1 1 1 1 0 1 1 0 0 1 1 0 1 0 1 1 1 0 0 1 1\\
&& 0 1 0 1 0 1 0 1 0 0 1 1 0 0 0 1 0 1 0 0 1 1 0 0 1 0 0 0 0 1 1 0 1 0 1 1\\
&& 1 1 0 1 1 1 1 1 0 1 0 1 1 0 1 0 0 0 1 0 0 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1\\
&& 0 0 1 1 0 1 1 \\[3pt]
$\db_{1,g,H}$ & 132 & 1 0 1 0 0 0 1 0 1 0 1 0 0 0 1 0 0 1 1 1 0 0 1 1 1 0 1 0 0 1 1 1 1 0 0 0\\
&& 0 1 0 0 1 0 1 0 0 0 0 1 0 0 1 1 0 0 1 0 1 1 0 0 0 0 0 0 0 0 0 1 1 1 1 1\\
&& 1 1 1 0 0 1 0 1 1 0 0 1 1 0 1 1 1 1 0 1 0 1 1 0 1 1 1 1 0 0 0 0 1 1 0 1\\
&& 0 0 0 1 1 0 0 0 1 1 0 1 1 1 0 1 0 1 0 1 1 1 0 1 \\
\hline
\end{tabular*}
\end{table}
\begin{lemma}\label{lemma5}
Let $N = 4t + 1$, and $\tilde{\db}^* \in\tilde{\mathcal{D}}$ be
such that
$\Mb_b(\Xb_{\tilde{d}^*}) = (N-1)[\Ib_K + N^{-1}\Jb_K]$.
Then $\tilde{\db}^*$ is optimal over $\tilde{\mathcal{D}}$, and
$\db^*
= (\mathbf{j}_N \pm\tilde{\db}^*)/2$ is optimal for
estimating the HRF in
terms of any type 1 criterion.
\end{lemma}
\begin{pf}
From Lemma~\ref{Lemma_mod3}, we have that the diagonal elements of\break
$\Mb
(\Xb_{\tilde{d}}) = \Xb^T_{\tilde{d}}\Xb_{\tilde{d}}$ are $N$,
and the
off-diagonal elements are congruent to $1$ modulo $4$.
In addition, $\Mb_b(\Xb_{\tilde{d}}) \preceq\Mb(\Xb_{\tilde{d}}) -
N^{-1}\Jb_K$, and the equality holds when $\mathbf{j}^T_N\tilde{\db} = \pm1$.
It can then be easily seen that $\Mb(\Xb_{\tilde{d}^*}) - N^{-1}\Jb_K$
is $(M,S)$-optimal over $\tilde{\mathcal{M}}_b = \{\Mb(\Xb_{\tilde{d}})
- N^{-1}\Jb_K \mid\tilde{\db} \in\tilde{\mathcal{D}}\}$, and
conditions (a) and (b) in Lemma~\ref{C1980} are satisfied if we replace
$\Mb^*$, $\Mb$ and $\mathcal{M}$ there by $\Mb(\Xb_{\tilde{d}^*}) -
N^{-1}\Jb_K$, $\Mb(\Xb_{\tilde{d}}) - N^{-1}\Jb_K$, and $\tilde
{\mathcal{M}}_b$,
respectively. Consequently, for any type 1 criterion $\Phi_{(f)}$, we have
$\Phi_{(f)}\{\Mb_b(\Xb_{\tilde{d}^*})\} = \Phi_{(f)}\{\Mb(\Xb
_{\tilde
{d}^*}) - N^{-1}\Jb_K\} \leq\Phi_{(f)}\{\Mb(\Xb_{\tilde{d}}) -
N^{-1}\Jb_K\}
\leq\Phi_{(f)}\{\Mb_b(\Xb_{\tilde{d}})\}$. The optimality of
$\tilde
{\db}^*$ thus follows. By (\ref{Eq:information}), this argument also
applies to $\db^*$.
\end{pf}
We now provide a systematic method for constructing optimal fMRI
designs for cases with $N = 4t + 1$, followed by an example
on an application of our results in this section.
\begin{theorem}\label{thmm4}
Let $\db_{2,g, H} \in\mathcal{D}$ be obtained by inserting two $0$'s
to a run of $g$ $0$'s in a Hadamard sequence $\db_H$.
If $K \leq g + 1$, then $\db_{2,g, H}$ is optimal for estimating
$\mathbf{h}$
of model (\ref{Eq:Model}) for all type 1 criteria.
\end{theorem}
\begin{pf} With a similar argument as in the proof of Theorem~\ref
{thmm3}, we have that, when $K \leq g+1$ and $\tilde{\db}_{2,g, H} =
\mathbf{j}
_N - 2\db_{2,g, H}$, $\Mb_b(\Xb_{\tilde{d}_{2,g, H}}) = (N-1)[\Ib
_K +
N^{-1}\Jb_K]$.
Our claim then follows from Lemma~\ref{lemma5}.
\end{pf}
\begin{example}\label{examp1}
Consider an experiment where the stimulus can possibly occur every $4$
seconds. Then $N = 4(38) - 1 = 151$ corresponds
to a 10-minute experiment and $K = 9$ corresponds to a 32-second HRF. A
$\db_H =(d_{1,H},\ldots, d_{N,H})^T$ can be obtained by a Paley difference
set [\citet{Paley1933ar101}]. This is to set $d_{n,H}=0$ if $(n-1)
\in\{x^2\ (\mathrm{mod}\ N) \mid x = 1, \ldots, (N-1)/2\}$ and $d_{n,H}=1$,
otherwise. The obtained design $\db_H$ is presented in Table~\ref
{tab1}. It\vadjust{\goodbreak} is both $A$- and $D$-optimal for estimating $\mathbf{h}$ of model
(\ref{Eq:Model}) since $N > N_0(K, 1) = 21.34$.
Following Theorem~\ref{thmm3}, we may insert a $0$ to the longest run
of $0$'s in the $\db_H$ presented in Table~\ref{tab1} to yield a
universally optimal design. The resulting design can accommodate a $K
\leq8$. For $K = 9$, we obtain a universally optimal $\db_{1,g,H}$ by
extending a $\db_H$ of length $N = 131$. This $\db_{1,g,H}$ is
presented in Table~\ref{tab1}. We also obtain a $\db_{2,g,H}$ by
inserting another $0$ into the longest run of $0$'s in $\db_{1,g,H}$.
Following Theorem~\ref{thmm4}, this $\db_{2,g,H}$ is optimal for any
type 1 criterion in estimating $\mathbf{h}$ with $K \leq9$.
\end{example}
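The constructions in this example are easy to reproduce. The Python sketch below (reusing \texttt{design\_matrices} from the sketch in Section~\ref{sec2.1}) builds $\db_H$ from the Paley difference set, inserts the extra $0$'s, and checks the resulting information matrices; the scan for the longest run of $0$'s first rotates the sequence cyclically, which leaves $\Mb_b$ unchanged, so that no run wraps around the end. The final two checks rely on $K \leq g+1$, which holds here since, as noted above, the extended $N = 131$ sequence accommodates $K \leq 9$.
\begin{verbatim}
def paley_sequence(N):
    # d_H for a prime N = 4t - 1: d_{n,H} = 0 iff n - 1 is a
    # nonzero quadratic residue modulo N.
    qr = {(x * x) % N for x in range(1, (N - 1) // 2 + 1)}
    return np.array([0 if n in qr else 1 for n in range(N)])

def insert_zeros(d, m):
    # Rotate cyclically so the sequence starts with a 1 (this leaves
    # M_b unchanged), then insert m extra 0's into the longest run of 0's.
    d = np.roll(d, -int(np.argmax(d)))
    s = ''.join(map(str, d))
    run = max(s.split('1'), key=len)
    start = s.index(run)
    return np.array(list(d[:start]) + [0] * m + list(d[start:])), len(run)

K, I, J = 9, np.eye(9), np.ones((9, 9))
d_H = paley_sequence(151)                       # the d_H of Table 1
_, _, Mb = design_matrices(1 - 2 * d_H, K)      # work with dtilde = j - 2d
print(np.allclose(Mb, 152 * (I - J / 151)))     # expect True

d1, g = insert_zeros(paley_sequence(131), 1)    # N = 132, universally optimal
d2, _ = insert_zeros(paley_sequence(131), 2)    # N = 133, type 1 optimal
_, _, Mb1 = design_matrices(1 - 2 * d1, K)
_, _, Mb2 = design_matrices(1 - 2 * d2, K)
print(np.allclose(Mb1, 132 * I),
      np.allclose(Mb2, 132 * (I + J / 133)))    # expect True True
\end{verbatim}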
It is noteworthy that, by replacing $0$ and $1$ with $1$ and $2$,
respectively, the $\db_{1,g,H}$ in Table~\ref{tab1} is equivalent to
the design of the same design length in Table~3.1 of \citet
{Kao2014ar52}. The use of such a design whose elements are $1$ or
$2$ is discussed in the next section.
\section{Designs for contrasts between HRFs} \label{sec3}
We now consider optimal fMRI experimental designs for studies where the
objective is on comparing HRFs of two stimulus types.
For this situation, \citet{Kao2014ar52} presented some optimal
designs for $N=4t$ by considering the
following extension of model (\ref{Eq:Model}):
\begin{equation}
\yb= \gamma\mathbf{j}_N + \Xb_{u, 1} \mathbf{h}_1 + \Xb_{u, 2} \mathbf{h}_2
+ \epsb, \label{Eq:Model2}
\end{equation}
where $\mathbf{h}_q = (h_{q1}, \ldots, h_{qK})^T$ is the
vector of the $K$
unknown HRF heights of the $q${th}-type stimulus, $\Xb_{u,q}$
is the 0-1 design matrix obtained from the selected fMRI design $\mathbf{u}
=(u_1, \ldots, u_N)^T$ with $u_n \in\{0, 1, 2\}$, $q = 1, 2$,
and the remaining terms are as in (\ref{Eq:Model}). Specifically, $u_n
= q > 0$ indicates that
a stimulus of the $q${th} type appears at the $n$th time point, and
$u_n = 0$ if no
stimulus is present. In addition, for $q = 1, 2$, $\Xb_{u,q} =
[\deltab
_q, \Ub\deltab_q, \ldots, \Ub^{K-1}\deltab_q]$, where $\Ub$ is defined
in (\ref{Eq:U}), and the $n$th element of $\deltab_q$ is $1$ if $u_n
= q$, and is $0$ otherwise. The main interest lies in $\zetab= \mathbf{h}_1 -
\mathbf{h}_2$, and we may rewrite model (\ref{Eq:Model2}) as
\begin{equation}
\yb= \gamma\mathbf{j}_N + \Eb_{u}\etab+
\Fb_{u} \zetab + \epsb, \label{Eq:Model3}
\end{equation}
where $\Eb_{u} = (\Xb_{u,1} + \Xb_{u,2})/2$, $\etab=\mathbf{h}_1 + \mathbf{h}_2$, $\Fb
_{u} = (\Xb_{u,1} - \Xb_{u,2})/2$, and all the remaining terms are as
in (\ref{Eq:Model2}). The aim is thus to obtain a design $\mathbf{u}\in\{
0, 1, 2\}^{N}$ so that $\Phi\{\Mb_u\}$ is minimized, where $\Mb_u =
\Fb
^T_{u}(\Ib_N - \omega\{[\mathbf{j}_N, \Eb_{u}]\})\Fb
_{u}$ and $\omega\{\Ab\}$
is the orthogonal projection matrix onto the space spanned by the
columns of the matrix $\Ab$. The following lemma can be easily proved.
\begin{lemma} \label{lemma6}
For a given design $\mathbf{u}\in\{0, 1, 2\}^N$, let $\bar
{\db}_u=(\bar
{d}_{u,1}, \ldots, \bar{d}_{u,N})^T \in\bar{\mathcal{D}} = \{-1,
0, 1\}
^N$ be defined as $\bar{d}_{u,n} = 0$, $1$ and $-1$ when $u_n = 0$, $1$
and $2$, respectively. Then
\[
\Mb_u \preceq\Fb^T_{u}\bigl(
\Ib_N - N^{-1}\Jb_N\bigr)\Fb_{u} =
\Xb^T_{\bar
{d}_u}\bigl(\Ib_N - N^{-1}
\Jb_N\bigr)\Xb_{\bar{d}_u}/4,
\]
where $\Xb_{\bar{d}_u} = [\bar{\db}_u, \Ub\bar{\db}_u, \ldots,
\Ub
^{K-1}\bar{\db}_u]$, and $\Ub$ is as in (\ref{Eq:U}).
In addition, $\Mb_u = \Xb^T_{\bar{d}_u}(\Ib_N - N^{-1}\Jb_N)\Xb
_{\bar
{d}_u}/4$ if $\mathbf{u}$ contains no zero.
\end{lemma}
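The quantities in Lemma~\ref{lemma6} can also be computed directly. In the Python sketch below (names illustrative, \texttt{design\_matrices} as in Section~\ref{sec2.1}), the first function evaluates $\Mb_u$ for a three-symbol design $\mathbf{u}$, and the second evaluates the upper bound of the lemma.
\begin{verbatim}
def M_u(u, K):
    # Information matrix for zeta = h_1 - h_2 in the reparameterised model.
    u = np.asarray(u)
    N = u.size
    X1 = np.column_stack([np.roll((u == 1).astype(float), k)
                          for k in range(K)])
    X2 = np.column_stack([np.roll((u == 2).astype(float), k)
                          for k in range(K)])
    E, F = (X1 + X2) / 2.0, (X1 - X2) / 2.0
    A = np.column_stack([np.ones(N), E])
    P = A @ np.linalg.pinv(A)          # projection onto span{j_N, E_u}
    return F.T @ (np.eye(N) - P) @ F

def Mu_bound(u, K):
    # X'_{dbar}(I - J/N) X_{dbar} / 4, where dbar maps u = 0, 1, 2
    # to 0, +1, -1; equality holds when u contains no zero.
    dbar = np.where(u == 0, 0.0, np.where(u == 1, 1.0, -1.0))
    return design_matrices(dbar, K)[2] / 4.0
\end{verbatim}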
Our approach for obtaining optimal fMRI designs for comparing HRFs is
by working on the upper bound of
$\Mb_u$ provided in Lemma~\ref{lemma6}. Specifically, we would like to
obtain a $\bar{\db}_u \in\bar{\mathcal{D}}$, or equivalently
a circulant CBWD $\Xb_{\bar{d}_u} \in\{-1, 0, 1\}^{N\times K}$, that
minimizes
$\Phi\{\Mb_b(\Xb_{\bar{d}_u})\}$.
As pointed out at the end of Section~\ref{sec2.2}, unlike the case
of estimating an HRF, here we also need to consider circulant CBWDs
with zero entries. If the obtained $\bar{\db}_u$
contains no zero, then the corresponding $\mathbf{u}$ is
$\Phi$-optimal. To
identify such a $\bar{\db}_u$, we consider the following lemma. For
convenience,
we omit the subscript of $\bar{\db}_u$ hereinafter, but its dependence
on $\mathbf{u}$ should be clear.
\begin{lemma} \label{lemma7}
Suppose $\bar{\db} \in\bar{\mathcal{D}}$ contains $r$ zeros, and
$\mathbf{j}
^T_N\bar{\db} = a$. We have the following results:
\begin{longlist}[(ii)]
\item[(i)] If $N = 4t - 1$, and $\Mb_b(\Xb_{\bar{d}}) = (N+1)[\Ib
_K -
N^{-1}\Jb_K]$, then
$a^2 = 1$, $r = 0$, and $\Mb(\Xb_{\bar{d}}) = \Xb^T_{\bar{d}}\Xb
_{\bar
{d}} = (N+1)\Ib_K - \Jb_K$.
\item[(ii)] If $N = 4t + 1$, and $\Mb_b(\Xb_{\bar{d}}) = (N-1)[\Ib
_K +
N^{-1}\Jb_K]$, then
$a^2 = 1$, $r = 0$, and $\Mb(\Xb_{\bar{d}}) = (N-1)\Ib_K + \Jb_K$.
\end{longlist}
\end{lemma}
\begin{pf}
We work only on (i) here. A similar argument can be applied to prove~(ii). For (i),
we have $\Mb_b(\Xb_{\bar{d}}) = \Mb(\Xb_{\bar{d}}) - (a^2/N) \Jb
_K =
(N+1)[\Ib_K - N^{-1}\Jb_K]$. Since each diagonal element of $\Mb(\Xb
_{\bar{d}})$ is an integer that is not greater than $N$, it can be seen
that $a^2 \leq1$. If $a = 0$, then $\Xb^T_{\bar{d}}\Xb_{\bar{d}} =
(N+1)[\Ib_K - N^{-1}\Jb_K]$. This leads to a contradiction since the
diagonal elements of the latter matrix are $(N+1)(1- N^{-1})$.
Therefore, $a^2 = 1$, $\Mb(\Xb_{\bar{d}}) = (N+1)\Ib_K - \Jb_K$
and $r
= 0$.
\end{pf}
The first main result in this section is an extension of
Theorem~\ref{thmm1}. We note that $\bar{N}_0(K, p_0)$ in Theorem~\ref
{thmm5} may not be
the same as $N_0(K, {p_0})$ in Theorem~\ref{thmm1}.
\begin{theorem} \label{thmm5}
Suppose $\bar{\db}^* \in\bar{\mathcal{D}}$ is a vector with $N = 4t-1$
elements, and it satisfies
$\Mb_b(\Xb_{\bar{d}^*}) = (N+1)[\Ib_K - N^{-1}\Jb_K]$. For any positive
number ${p_0}$,
there exists an $\bar{N}_0(K, {p_0})$ such that, if $N \geq\bar
{N}_0(K, {p_0})$,
then $\bar{\db}^*$ is $\Phi_p$-optimal over $\bar{\mathcal{D}}$
for any
$p \in[0, {p_0}]$.
\end{theorem}
\begin{pf}
Let $r$ be the number of zeros in $\bar{\db} \in\bar{\mathcal
{D}}$. It
is clear that
$\operatorname{tr}\{\Mb(\Xb_{\bar{d}})-N^{-1}\Jb_K\} = (N - r) - K/N$ is maximized
when $r = 0$. With Lemma~\ref{lemma7},
we can easily see that $\Mb(\Xb_{\bar{d}^*})-N^{-1}\Jb_K$ is
$(M,S)$-optimal over $\bar{\mathcal{M}}_b = \{\Mb(\Xb_{\bar{d}}) -
N^{-1}\Jb_K \mid\bar{\db} \in\bar{\mathcal{D}}\}$. Following Lemma~\ref{Lemma_C1992} and Corollary~3.3 of
\citet{Cheng1987ar2021}, $\bar{\db}^*$ is $\Phi_p$-optimal over
$\bar{\mathcal{D}}$ for $p \in[0, {p_0}]$ when $N \geq\bar{N}_0(K,
{p_0})$ for some $\bar{N}_0(K, {p_0})$.
\end{pf}
An explicit lower bound $N_0(K, 1)$ for the $A$-criterion was given in
Theorem~\ref{4546465545485}. We show in the following theorem that one can take
$\bar{N}_0(K, 1)=N_0(K, 1)$.
\begin{theorem}\label{thmm5.1}
Let $\bar{\db}^* \in\bar{\mathcal{D}}$, where $N = 4t-1$, be such that
$\Mb_b(\Xb_{\bar{d}^*}) = (N+1)[\Ib_K - N^{-1}\Jb_K]$. If $K\geq
4$ and
$N \geq N_0(K, 1)$, where $N_0(K, 1)$ is as given in Theorem~\ref{4546465545485},
then $\bar{\db}^*$ is $A$-optimal (and $\Phi_p$-optimal for all $p
\in
[0,1]$) over $\bar{\mathcal{D}}$.
\end{theorem}
\begin{pf}
By Theorem~\ref{4546465545485}, it is enough to show that if $K\geq4$ and $N
\geq N_0(K, 1)$, then for any $\bar{\db}\in\bar{\mathcal{D}}$ that has
at least one zero entry, $\Phi_1\{\Mb_b(\Xb_{\bar{d}^*})\}\leq\Phi
_1\{
\Mb_b(\Xb_{\bar{d}})\}$. If $\bar{\db}\in\bar{\mathcal{D}}$ has at
least one zero entry, then each diagonal entry of $\Mb_b(\Xb_{\bar
{d}})$ is at most $N-1$; thus $\Phi_1\{\Mb_b(\Xb_{\bar{d}})\}\geq
K/(N-1)$. On the other hand, since $\Mb_b(\Xb_{\bar{d}^*})=(N+1)[\Ib_K
- N^{-1}\Jb_K]$ has two distinct eigenvalues $N+1$ and $(N+1)(N-K)/N$,
with multiplicity $K-1$ and 1, respectively, we have $\Phi_1\{\Mb
_b(\Xb
_{\bar{d}^*})\}=(K-1)/(N+1)+N/[(N+1)(N-K)]$. It follows that $\Phi_1\{
\Mb_b(\Xb_{\bar{d}^*})\}\leq\Phi_1\{\Mb_b(\Xb_{\bar{d}})\}$ provided
$(K-1)/(N+1)+N/[(N+1)(N-K)]\leq K/(N-1)$. The latter is the same as
$N\geq2K-1$. Therefore, it remains to show that $2K-1\leq N_0(K, 1)$.
Since $N_0(K, 1)$ is the greatest real root of the cubic function
$c(x) = 2x^3 + (10-7K)x^2 + 2(2K-5)(K-1)x + 4K^2 - 7K$, $c(x)>0$ for
all $x>N_0(K, 1)$. One can verify that $c(2K-1)<0$ when $K\geq4$. It
follows that in this case $2K-1<N_0(K, 1)$.
\end{pf}
For $N = 4t$, circulant OAs or equivalently circulant partial Hadamard
matrices described in Section~\ref{sec2.3} can be used to construct
$\bar{\db}^*$
that has $\Mb_b(\Xb_{\bar{d}^*}) = N\Ib_{K}$. Such a $\bar{\db
}^*$ can
be easily seen to be universally optimal in $\bar{\mathcal{D}}$.
Theorem~\ref{thmm6} below
helps to identify some optimal $\bar{\db}$ for $N = 4t + 1$.
For deriving this theorem, we again consider Lemmas \ref{C1980} and
\ref{lemma5}.
\begin{theorem}\label{thmm6}
For $N = 4t+1$, let $\bar{\db}^* \in\bar{\mathcal{D}}$ have
$\Mb_b(\Xb_{\bar{d}^*}) = (N-1)[\Ib_K + N^{-1}\Jb_K]$.
Then, $\bar{\db}^*$ is optimal over $\bar{\mathcal{D}}$ for any
type 1
criterion.
\end{theorem}
\begin{pf}
$\Mb_b(\Xb_{\bar{d}^*})$ has two nonzero eigenvalues, and the smaller
eigenvalue has multiplicity $K-1$. It can also be seen that condition
(a) in Lemma~\ref{C1980} is satisfied by $\bar{\db}^*$. In addition,
$\operatorname{tr}\{\Mb_b(\Xb_{\bar{d}^*})\} = K(N - N^{-1})$, and $\operatorname{tr}\{\Mb
^2_b(\Xb
_{\bar{d}^*})\} - (\operatorname{tr}\{\Mb_b(\Xb_{\bar{d}^*})\})^2/K
= K(K-1)(1 - N^{-1})^2$. Thus, condition (b) of Lemma~\ref{C1980}
is satisfied if and only if
\begin{equation}\qquad
K\bigl(N - N^{-1}\bigr) - A_{\bar{d}} \geq\sqrt{
\frac{K}{K-1}} \biggl[\bigl(1 - N^{-1}\bigr)\sqrt{K(K-1)} - \sqrt
{B_{\bar{d}} - \frac{A^2_{\bar{d}}}{K}} \biggr] \label{Eq:condc}
\end{equation}
for $\bar{\db} \in\bar{\mathcal{D}}$, where $A_{\bar{d}} = \operatorname{tr}\{
\Mb
_b(\Xb_{\bar{d}})\}$, and
$B_{\bar{d}} = \operatorname{tr}\{\Mb^2_b(\Xb_{\bar{d}})\}$. Clearly, (\ref{Eq:condc})
holds for the class of $\bar{\db} \in\bar{\mathcal{D}}$ that satisfy
$A_{\bar{d}} \leq K(N - N^{-1}) -\sqrt{K/(K-1)}(1 - N^{-1})\sqrt
{K(K-1)} = K(N-1)$. Thus, all $\bar{\db}$'s in this class are
outperformed by $\bar{\db}^*$ with respect to any type 1 criterion. For
any other $\bar{\db}$, let $r$ be the number of zeros
in $\bar{\db}$ and $a=\mathbf{j}_N^T \bar{\db}$. Then we
have $A_{\bar{d}} >
K(N-1)$, and $(1 - r - a^2/N) > 0$ since $A_{\bar{d}} = K[(N-r) -
a^2/N]$. Consequently, $r = 0$, and $\bar{\db}$ contains no zero.
Following Lemma~\ref{lemma5}, $\bar{\db}^*$ is also optimal over the
class of designs with no zero for any type 1 criterion. Our claim then follows.
\end{pf}
With these results, we can derive the following theorem for identifying
some optimal fMRI designs for studying contrasts between two HRFs.
\begin{theorem}\label{thmm7}
Suppose $\db_H$ is a Hadamard sequence, $\db_{1,g, H}$ is defined as in
Theorem~\ref{thmm3}, $\db_{2,g, H}$ is
as in Theorem~\ref{thmm4}, and $\Mb_u$ is the information matrix for
$\zetab$ in model (\ref{Eq:Model3}) for a design $\mathbf{u}\in\{0, 1, 2\}
^N$. We have the following results:
\begin{longlist}[(a)]
\item[(a)] Suppose $N = 4t - 1$, and $\mathbf{u}^* = \mathbf{j}_N + \db_H$ or $\mathbf{u}^*
= 2\mathbf{j}_N - \db_H$. If ${p_0} >0$, $p \in[0,
{p_0}]$, and
$N \geq\bar{N}_0(K, {p_0})$ for some $\bar{N}_0(K, {p_0}) > 0$, then
$\Phi_p\{\Mb_{u^*}\} \leq\Phi_p\{\Mb_u\}$ for any $\mathbf{u}\in\{0, 1, 2\}
^N$; here, $\Phi_p$ is defined as in Definition~\ref{Def:Phip}. For
$K\geq4$, we can take $\bar{N}_0(K, 1) $ to be the $N_0(K, 1)$ given
in Theorem~\ref{4546465545485}.
\item[(b)] Suppose $N = 4t$, and $\mathbf{u}^* = \mathbf{j}_N + \db_{1,g,H}$ or $\mathbf{u}
^* = 2\mathbf{j}_N - \db_{1,g,H}$. If $K \leq g + 1$, and
$\Phi$ is any
criterion satisfying the conditions in Definition~\ref{Def:UO}, then
$\Phi\{\Mb_{u^*}\} \leq\Phi\{\Mb_u\}$ for any $\mathbf{u}\in\{0, 1, 2\}^N$.
\item[(c)] Suppose $N = 4t +1$, and $\mathbf{u}^* = \mathbf{j}_N + \db_{2,g,H}$ or
$\mathbf{u}^* = 2\mathbf{j}_N - \db_{2,g,H}$. If
$K \leq g + 1$, and $\Phi_{(f)}$ is
any type 1 criterion defined in Definition~\ref{Def:Type1}, then $\Phi
_{(f)}\{\Mb_{u^*}\} \leq\Phi_{(f)}\{\Mb_u\}$ for any $\mathbf{u}\in\{0, 1,
2\}^N$.
\end{longlist}
\end{theorem}
\begin{pf}
For all the designs $\mathbf{u}^*$ in (a), (b) and (c), we
have $\Mb_{u^*} =
\Xb^T_{\bar{d}_{u^*}}(\Ib_N - N^{-1}\Jb_N)\Xb_{\bar{d}_{u^*}}/4$, where
$\bar{\db}_{u^*}$ is defined as in Lemma~\ref{lemma6}. When $\mathbf{u}^* = \mathbf{j}
_N + \db_H$ (or, resp., $\mathbf{u}^* = 2\mathbf{j}_N - \db_H$), we have $\Xb
_{\bar{d}_{u^*}} = \Xb_{\bar{d}_H}$ with $\bar{\db}_H = \mathbf{j}_N - 2\db_H$
(or, resp., $\bar{\db}_H = 2\db_H - \mathbf{j}_N$). We thus have that,
if $N \geq\bar{N}_0(K, {p_0})$ and $p \in[0, {p_0}]$, then
\[
\Phi_p\{\Mb_{u^*}\} = \Phi_p\bigl\{
\Mb_b(\Xb_{\bar{d}_H})/4\bigr\} \leq \Phi_p\bigl\{
\Mb_b(\Xb_{\bar{d}_u})/4\bigr\} \leq\Phi_p\{
\Mb_u\}
\]
for any $\mathbf{u}\in\{0, 1, 2\}^N$. This completes the
proof for (a).
Similar arguments can be used to prove (b) and (c) and are omitted.
\end{pf}
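As a concrete check of part (a), take $\mathbf{u}^* = \mathbf{j}_N + \db_H$ with the Paley sequence of length $N = 151$ from the earlier sketches. Since $\mathbf{u}^*$ contains no zero, Lemma~\ref{lemma6} gives $\Mb_{u^*} = \Mb_b(\Xb_{\tilde{d}_H})/4 = \frac{N+1}{4}(\Ib_K - N^{-1}\Jb_K)$, which is easily confirmed numerically:
\begin{verbatim}
u_star = 1 + paley_sequence(151)     # entries in {1, 2}: u* = j_N + d_H
K, N = 9, 151
target = 0.25 * (N + 1) * (np.eye(K) - np.ones((K, K)) / N)
print(np.allclose(M_u(u_star, K), target))   # expect True
\end{verbatim}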
\section{Conclusion}\label{sec4}
Neuroimaging experiments utilizing the pioneering fMRI technology are
widely conducted in a variety of research fields for gaining better knowledge
about human brain functions. One of the key steps to ensure the success
of such an experiment is to judiciously select an optimal fMRI design.
Existing studies on obtaining optimal fMRI designs primarily focus on
computational approaches. However, insightful analytical results, while
important, are
rather scarce and scattered. To address this important issue, we
conduct a systematic analytical study to characterize some
optimal fMRI designs for
estimating the HRF of a stimulus type and for comparing HRFs of two
stimulus types. Under certain conditions, we show that the popularly
used binary $m$-sequences
as well as the more general Hadamard sequences are optimal in some
statistically meaningful senses. We also identify several new classes
of high-quality fMRI designs
and present systematic methods for constructing them. These designs
exist in many design lengths where good fMRI designs have not been
reported previously.
There are, however, many research challenges that need to be overcome.
For example, our results provide good designs for design lengths of
$N = 4t - 1$, $4t$ and $4t+1$. A~future research topic of interest is
identifying optimal fMRI designs for cases with $N = 4t + 2$. In
addition, our experience
indicates that the designs that we present here remain quite efficient
under some violations of model assumptions [cf. \citet{Kao2014ar51}].
Nevertheless, it still is of interest to analytically study optimal
designs for other situations (e.g., with an autoregressive error
process). Extending current results to
cases with a greater number of stimulus types is also of interest for
future research. Many research opportunities exist in this new and
wide-open research area.
\begin{appendix}\label{app}
\section*{Appendix: A proof of Theorem~\texorpdfstring{\protect\ref{4546465545485}}{2.2}}
For $N = 3\ (\mathrm{mod}\ 4) \geq4$ and $K \geq4$, we consider the following
set of $K$-by-$K$ nonnegative definite matrices:
\[
\Xi_{N,K} = \bigl\{ \Eb_K = \bigl((e_{ij})
\bigr)_{i,j = 1, \ldots, K} \mid e_{ij} = 3\ (\mathrm{mod}\ 4)\ \forall i,j,
e_{ii} = N, \Eb_K \succ N^{-1}\Jb_K
\bigr\},
\]
where $\Eb_K \succ N^{-1}\Jb_K$ indicates that $\Eb_K - N^{-1}\Jb
_K$ is
positive definite. With Lemma~\ref{Lemma_mod3}, it can be seen that
$\Mb(\Xb_{\tilde{d}}) \in\Xi_{N,K}$ for any $\tilde{\db} \in
\tilde
{\mathcal{D}}$ having a nonsingular
$\Mb_b(\Xb_{\tilde{d}})$. The idea for proving Theorem~\ref{4546465545485} is
then to show that
an $\Eb_K \in\Xi_{N,K}$ minimizing $\operatorname{tr}\{[\Eb_K - N^{-1}\Jb
_K]^{-1}\}$
is similar (with some permutations of rows and columns)
to a block matrix $\Bb\in\Xi_{N,K}$ to be defined in Definition~\ref
{DefA1} below. We also will show in Lemma~\ref{lemmaA3} that, when the
condition in Theorem~\ref{4546465545485} is satisfied,
we have $\operatorname{tr}\{[\Mb(\Xb_{\tilde{d}^*}) - N^{-1}\Jb_K]^{-1}\} = \min_{B
\in\mathcal{B}} \operatorname{tr}\{[\Bb- N^{-1}\Jb_K]^{-1}\}$, where
$\mathcal{B} \subset\Xi_{N,K}$ is the set of all block matrices. With
these facts and Lemma~\ref{Lemma_mod3}, we have
\begin{eqnarray}\label{Eq:phi1}
&&\Phi_1\bigl\{\Mb_b(\Xb_{\tilde{d}^*})\bigr\}\nonumber\\
&&\qquad=
\Phi_1\bigl\{\Mb(\Xb_{\tilde
{d}^*}) - N^{-1}
\Jb_K \bigr\} = \min_{E_k \in\Xi_{N,K}} \Phi_1\bigl
\{\Eb_K - N^{-1}\Jb_K \bigr\}
\\
&&\qquad\leq\min_{\tilde{d} \in\tilde{\mathcal{D}}} \Phi_1\bigl\{\Mb(\Xb
_{\tilde
{d}}) - N^{-1}\Jb_K \bigr\} \leq \min
_{\tilde{d} \in\tilde{\mathcal{D}}} \Phi_1\bigl\{\Mb_b(\Xb
_{\tilde
{d}})\bigr\}. \nonumber
\end{eqnarray}
Our claim in Theorem~\ref{4546465545485} then follows from (\ref{Eq:phi1}), and
Corollary~3.3 of \citet{Cheng1987ar2021}. This approach is similar
to that of \citet{SatheShenoy1989ar2021} and \citet
{GalilKiefer1980ar2021}, where
weighing designs under the unbiasedness assumption are considered. We
now present the details
of our proof.
\begin{definition}\label{DefA1}
A block matrix $\Bb\in\Xi_{N,K}$ is of the form
\[
\Bb= \bigoplus_{i = 1}^m \bigl[(N-3)
\Ib_{r_i} + 4 \Jb_{r_i} \bigr] - \Jb_K
\]
for an integer $m \in\{1, \ldots, K\}$ representing the number of
``blocks.'' Here, $\bigoplus$ is the matrix direct sum, and $r_1, r_2,
\ldots, r_m$ are the block sizes of $\Bb$ that satisfy $r_i \geq1$, and
$\sum_{i=1}^m r_i = K$.
\end{definition}
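For instance, for $K = 4$, $m = 2$ and $r_1 = r_2 = 2$, one may verify
that the corresponding block matrix is
\[
\Bb= \left[
\matrix{
N & 3 & -1 & -1
\vspace*{2pt}\cr
3 & N & -1 & -1
\vspace*{2pt}\cr
-1 & -1 & N & 3
\vspace*{2pt}\cr
-1 & -1 & 3 & N }
\right],
\]
whose entries are all congruent to $3$ modulo $4$, as required for
membership in $\Xi_{N,K}$.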
For these block matrices, the following result is an extension of
Theorem~2.1(a) of \citet{Masaro1988ar2021} and
equation (1.1) of \citet{SatheShenoy1989ar2021}. Clearly, this
result also implies that
$\operatorname{tr}\{\Bb^{-1}_b\}$ is invariant to a rearrangement of the block sizes
$r_1, \ldots, r_m$, which
facilitates the derivation of the subsequent results; here $\Bb_b =
\Bb
- N^{-1}\Jb_K$.
\begin{lemma} \label{lemmaA1}
Let $\mathcal{B} \subset\Xi_{N,K}$ be the set of all block matrices. Then
for $\Bb\in\mathcal{B}$,
we have
\begin{eqnarray}\label{TwoEqu}
\operatorname{tr}\bigl\{\Bb^{-1}_b\bigr\} &=& \sum
_{i=1}^m L_i^{-1} +
\frac{K-m}{N - 3} + \frac
{\sum_{i=1}^m r_i L_i^{-2} }{(1+N^{-1})^{-1} - \sum_{i=1}^m r_i
L_i^{-1}}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&=&\frac{1}{4t} \Biggl\{K - \sum_{i=1}^m
\frac{r_i}{t + r_i} + \frac{t
\sum_{i=1}^m {r_i}/{(t + r_i)^2}}{{4}/{(1+N^{-1})}
- \sum_{i=1}^m {r_i}/{(t + r_i)}} \Biggr\},
\end{eqnarray}
where $L_i = N - 3 + 4 r_i$, and $t = (N-3)/4 \geq1$.
\end{lemma}
\begin{pf} Letting $\Cb_{r_i} = (N-3)\Ib_{r_i} + 4 \Jb_{r_i}$, we have
$\Bb_b = \bigoplus_{i = 1}^m \Cb_{r_i} - (1 + N^{-1})\Jb_K$, and
\begin{eqnarray*}
\Bb^{-1}_b &=& \Biggl[ \bigoplus
_{i = 1}^m \Cb_{r_i} - \bigl(1 +
N^{-1}\bigr)\mathbf{j} _K\mathbf{j}'_K \Biggr]^{-1}
\\
&=&\bigoplus_{i = 1}^m
\Cb^{-1}_{r_i} + \frac{(1 + N^{-1})
[\bigoplus_{i = 1}^m \Cb^{-1}_{r_i} ]\mathbf{j}_K\mathbf{j}'_K [\bigoplus_{i
= 1}^m \Cb^{-1}_{r_i} ]}{1 - (1 + N^{-1}) \sum_{i=1}^m \mathbf{j}'_{r_i}
\Cb^{-1}_{r_i}\mathbf{j}_{r_i}}.
\end{eqnarray*}
The two equalities in (\ref{TwoEqu}) can then be derived by some
simple algebra.
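For instance, since $\Cb_{r_i}$ has the eigenvalue $N-3$ with
multiplicity $r_i - 1$ and the eigenvalue $L_i = N - 3 + 4r_i$ with
multiplicity $1$, one may use the identities
\[
\Cb^{-1}_{r_i} = \frac{1}{N-3} \biggl[\Ib_{r_i} - \frac{4}{L_i}\Jb_{r_i}
\biggr],\qquad \Cb^{-1}_{r_i}\mathbf{j}_{r_i} = L_i^{-1}\mathbf{j}_{r_i},
\]
so that $\operatorname{tr}\{\Cb^{-1}_{r_i}\} = (r_i - 1)/(N-3) + L_i^{-1}$,
$\mathbf{j}'_{r_i}\Cb^{-1}_{r_i}\mathbf{j}_{r_i} = r_i/L_i$ and
$\mathbf{j}'_{r_i}\Cb^{-2}_{r_i}\mathbf{j}_{r_i} = r_i/L_i^2$;
substituting these quantities into the above expression for
$\Bb^{-1}_b$ yields the first equality in (\ref{TwoEqu}).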
\end{pf}
The following lemma indicates that a block matrix minimizing the trace
of $\Bb_b^{-1} = [\Bb- N^{-1}\Jb_K]^{-1}$ can be found over a small
subset of $\mathcal{B}$
[cf. Theorem~2.1(b) of \citet{Masaro1988ar2021}].
\begin{lemma} \label{lemmaA2}
Let $\mathcal{B}_s \subset\mathcal{B}$ be the set of block matrices
having blocks of only one size
or two contiguous sizes. Then
\[
\min_{\Bb\in\mathcal{B}_s} \operatorname{tr}\bigl\{ \Bb_b^{-1} \bigr
\} = \min_{
\Bb
\in\mathcal{B}} \operatorname{tr}\bigl\{ \Bb^{-1}_b
\bigr\}.
\]
\end{lemma}
\begin{pf}
Among the block matrices that yield the minimum $\operatorname{tr}\{ \Bb^{-1}_b \}$
over $\mathcal{B}$, let $\Bb_m$ be the one with the smallest number of
blocks, $m$. Clearly, we only need to consider cases where $m \geq2$.
Without loss of generality (see also the statement above Lemma~\ref
{lemmaA1}), we may assume that the first two block sizes $r_1$ and
$r_2$ are, respectively, the largest and the smallest block sizes among
the $m$ block sizes.
With Lemma~\ref{lemmaA1}, we can then write
$\operatorname{tr}\{\Bb_{m,b}^{-1}\} = [K + f(x)]/4t$, where $\Bb_{m,b} = \Bb_m -
N^{-1}\Jb_K$,
\begin{eqnarray*}
f(x) &=& \frac{t [{x}/{(t+x)^{2}}+{(r-x)}/{(t+r-x)^2} +
\beta
]}{
{4}/{(1+N^{-1})} - [ {x}/{(t + x)} + {(r-x)}/{(t+r-x)} +
\alpha ]} \\
&&{}- \biggl[\frac{x}{t + x} + \frac{r-x}{t+r-x} +
\alpha \biggr],
\end{eqnarray*}
$r = r_1 + r_2$, $x = r_1$ (or $r_2$) with $0 < x < r$, $\alpha= \sum_{i = 3}^{m} r_i/(t+r_i)$, and
$\beta= \sum_{i = 3}^{m} r_i/(t+r_i)^2$. We note that this expression
of $\operatorname{tr}\{\Bb_b^{-1}\}$
applies to any block matrix, and that $\alpha= \beta= 0$ for block
matrices with two or fewer blocks. Since the
number of blocks of $\Bb_m$ is $m \geq2$, we have $\operatorname{tr}\{ \Bb^{-1}_{m,b}
\} < \operatorname{tr}\{ \Bb^{-1}_b \}$ for those block matrices $\Bb$ with only one
block; thus,
$f(x) < f(0) = f(r)$. With some simple algebra similar to that of
\citet
{Masaro1988ar2021}, we also have
\[
f(x) = h(y) = \frac{ay^2 + b}{c y^2 + d}\quad \mbox{and}\quad f'(x) =
h'(y) = \frac{2(ad-bc)y}{(cy^2 + d)^2},
\]
where $y = x - 0.5 r \in(-0.5r, 0.5r)$, and $a, b, c, d$ are some
constants. Along with the fact that $f(x) < f(0) = f(r)$, and $f$ is
symmetric about $0.5r$, the minimum of $f(x)$ occurs when $x$ is the
integer closest to $0.5r$. Consequently, $r_1 = r_2$ when $r$ is even, and
$r_1 = r_2 +1$ when $r$ is odd.
\end{pf}
With these results, we can now work on $\mathcal{B}_{0,s} \subset
\mathcal{B}_s$ that consists of block matrices having
$m (< K)$ block sizes with $r_1 -1 = \cdots= r_v - 1 = r_{v+1} =
\cdots= r_{m} = r \geq1$, and $v \geq1$. We note that,
for any $m_0 \geq1$ and $r_0 \geq2$, a block matrix having $(m, v, r)
= (m_0, 0, r_0)$ can be treated as a block matrix with
$(m, v, r) = (m_0, m_0, r_0 -1)$, and is thus in $\mathcal{B}_{0,s}$;
see also \citet{SatheShenoy1989ar2021}. Consequently,
the only block matrix in $\mathcal{B}_s$ that is left out from
$\mathcal
{B}_{0,s}$ is $\Bb^* = (N+1)\Ib_K - \Jb_K$, which has
$(m, v, r) = (K, 0, 1)$, or equivalently, $(m, v, r) = (K, K, 0)$.
Under the condition described in the following lemma, we have $\operatorname{tr}\{
(\Bb
^* - N^{-1}\Jb_K)^{-1} \} \leq \operatorname{tr}\{ (\Bb_s - N^{-1}\Jb_K)^{-1} \}$
for any other $\Bb_s \in\mathcal{B}_s$.
\begin{lemma} \label{lemmaA3}
Let $\Bb^* = (N+1)\Ib_K - \Jb_K$, $\Bb_s \in\mathcal{B}_{0,s}$ be
a
previously described block matrix,
$\Bb^*_{b} = \Bb^* - N^{-1}\Jb_K$, $\Bb_{s,b} = \Bb_{s} -
N^{-1}\Jb_K$,
and $N_0(K, 1)$ be the greatest real root of the cubic function
$c(x) = 2x^3 + (10-7K)x^2 + 2(2K-5)(K-1)x + 4K^2 - 7K$. If $N \geq
N_0(K, 1)$, then
\[
\operatorname{tr}\bigl\{ \bigl(\Bb_b^*\bigr)^{-1} \bigr\} \leq \operatorname{tr}\bigl\{
\Bb_{s,b}^{-1} \bigr\}.
\]
\end{lemma}
\begin{pf}
Let $u = m-v$, and $\Bb$ be obtained by replacing a block of size $r +
1$ in $\Bb_s$ with a block of size $r$
and a block of size $1$. From (\ref{TwoEqu}), we have
\[
\operatorname{tr}\bigl\{\Bb_{s,b}^{-1} \bigr\} = \frac{K-m}{N - 3} +
\frac{u - 1}{L} + \frac
{v -
1}{L + 4} + \frac{(2L + 4)(1+N^{-1})^{-1} - K}{g(v)},
\]
where $L = N - 3 + 4r$, and $g(v) = 4v(r+1) + (L+4)\xi$, and $\xi=
[L(1+N^{-1})^{-1}-K]$. In addition,
\begin{eqnarray*}
\operatorname{tr}\bigl\{\Bb_b^{-1} \bigr\} &=& \frac{K-m-1}{N - 3} +
\frac{u}{L} + \frac{v -
2}{L + 4} + \frac{1}{N+1}
\\
&&{}+ \frac{(2L + 4)(1+N^{-1})^{-1} - K + 16r(r-1)(N+1)^{-2}}{g(v) -
4r(N+1+L)(N+1)^{-1}}.
\end{eqnarray*}
Since $g(v) \geq g(1)$, we have
\begin{eqnarray*}
&&\operatorname{tr}\bigl\{\Bb_{s,b}^{-1} - \Bb_b^{-1}
\bigr\}
\\
&&\qquad = \frac{4r}{L(N-3)} + \frac{(2L + 4)(1+N^{-1})^{-1} - K}{g(v)} +\frac
{1}{L+4} -
\frac{1}{N+1}
\\
&&\qquad\quad{} - \frac{(2L + 4)(1+N^{-1})^{-1} - K + 16r(r-1)(N+1)^{-2}}{g(v) -
4r(N+1+L)(N+1)^{-1}}
\\
&&\qquad\geq\frac{4r}{L(N-3)} + \frac{(2L + 4)(1 + N^{-1})^{-1} - K}{g(1)}
\\
&&\qquad\quad{} - \frac{(2L+4 - 4r)(1+N^{-1})^{-1} - K}{(N+1)\xi-4(r-1)} = \Delta (r). \label{Eq:Ineq}
\end{eqnarray*}
The equality holds when $v = 1$. With some algebra,
we have
\begin{eqnarray*}
\Delta(r) &=& \frac{8r(N+12r-11)(N+1)^{-1}c(N)}{L(N-3)[(N+1)\xi-
4(r-1)][(L+4)\xi+ 4(r+1)]}
\\
&&{} + 16r(r-1) \\
&&\hspace*{-2pt}\quad{}\times\frac{N^2 - 7N + 12 - 8(1+N^{-1})^{-1} +
16(r-1)^2N(3N-1)(N+1)^{-2}}{L(N-3)[(N+1)\xi- 4(r-1)][(L+4)\xi+
4(r+1)]}
\\
&&{} + 16r(r-1) \\
&&\hspace*{-2pt}\quad{}\times\frac{4(r-1)(N+1)^{-1}[(7N-K)(N-K) + 5(N^2 - 1) +
8N]}{L(N-3)[(N+1)\xi- 4(r-1)][(L+4)\xi+ 4(r+1)]},
\end{eqnarray*}
where $c(N) = 2N^3 + (10-7K)N^2 + 2(2K-5)(K-1)N + 4K^2 - 7K$.
With $N = 3\ (\mathrm{mod}\ 4) \geq4$ and $K \geq4$, it can be seen that, when
$N \geq N_0(K, 1)$, we have $c(N) \geq0$, $\Delta(r) > 0$ for $r > 1$,
and $\Delta(1) \geq0$. Consequently, for a $\Bb_s$, we either (i) find
a $\Bb\notin\mathcal{B}_s$ with
$\operatorname{tr}\{\Bb_{s,b}^{-1}\} > \operatorname{tr} \{\Bb_b^{-1}\}$, or (ii) can keep splitting
each block of size $r+1 = 2$ in $\Bb_s$
into two blocks of size $1$ without increasing the objective function
($\operatorname{tr}\{\Bb_b^{-1}\}$).
For the first case, $\Bb_s (\neq\Bb^*)$ is obviously not an optimal
block matrix
of our interest. For the second case, we can continue the process until
$\Bb= \Bb^*$. Our claim thus follows.
\end{pf}
The results we have so far suggest that, under the condition of Lemma~\ref{lemmaA3}, $\Bb^* = (N+1)\Ib_K - \Jb_K$ minimizes $\operatorname{tr}\{\Bb
_b^{-1}\}
$ over all block matrices $\Bb$. With the following
Lemma~\ref{lemmaA4}, our proof of Theorem~\ref{4546465545485} is then complete.
\begin{lemma} \label{lemmaA4}
Let $\Bb^* = (N+1)\Ib_K - \Jb_K$. If $N \geq N_0(K, 1)$ for the $N_0(K,
1)$ defined in Lemma~\ref{lemmaA3}, then
\[
\operatorname{tr}\bigl\{\bigl(\Bb_b^*\bigr)^{-1}\bigr\} = \min
_{\Eb_K \in\Xi_{N,K}} \operatorname{tr}\bigl\{\bigl[\Eb_K - N^{-1}\Jb
_K\bigr]^{-1}\bigr\}.
\]
\end{lemma}
The proof of Lemma~\ref{lemmaA4} is lengthy, but otherwise is a
simple extension of that of Theorem~2.2 of \citet{SatheShenoy1989ar2021}.
The main idea is to show that an $\Eb^*_K \in\Xi_{N,K}$
minimizing $\operatorname{tr}\{\Eb_{b,K}^{-1}\}$ is similar to a block matrix after
some permutations of rows and columns. Lemma~\ref{lemmaA4}
then follows from Lemma~\ref{lemmaA3}. To that end, we need the
following lemmas, which are extensions of
results in \citet{SatheShenoy1989ar2021}. Lemma~\ref{lemmaS1} is
a well-known result, and
the proof is omitted. We also use the following notation: $\Eb^*_K \in
\Xi_{N,K}$ is a
matrix such that $\operatorname{tr}\{(\Eb^*_{b,K})^{-1}\} = \min_{\Eb_K \in\Xi_{N,K}}
\operatorname{tr}\{\Eb^{-1}_{b,K}\}$,
$N_b = N - N^{-1}$, $3_b = 3 - N^{-1}$, $c_b = c - N^{-1}$ for some $c
= 3\ (\mathrm{mod}\ 4)$, and $\mub_{b,i} = \mub_i - N^{-1} \mathbf{j}_{K-2}$ and $\mub
_{b,j}= \mub_j - N^{-1} \mathbf{j}_{K-2}$ for some $\mub
_i$ and $\mub_j$ whose
elements are congruent to 3 modulo 4. In addition, $a_{b,i,j} = \mub
^T_{b,i}\Eb^{-1}_{b,K-2}\mub_{b,j}$, $b_{b, i, j} = \mub
^T_{b,i}\Eb
^{-2}_{b,K-2}\mub_{b,j}$, $A_{b, i, j} = N_b - a_{b, i, j}$,
$z_{b,i,j}(c_b) = c_b - a_{b,i,j}$,
and
\begin{eqnarray}\label{Eq:fij}
&&f_{b,i,j}(c_b)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad = \frac{(A_{b,i,i} + A_{b,j,j}) + A_{b,i,i}
b_{b,j,j}+A_{b,j,j}b_{b,i,i} - 2b_{b,
i,j}z_{b,i,j}(c_b)}{A_{b,i,i}A_{b,j,j} - z^2_{b,i,j}(c_b)}.
\end{eqnarray}
\begin{lemma}\label{lemmaS1}
Let $\Eb= ((\Eb_{ij}))$ for $i, j = 1, 2$ be a partitioned positive
definite matrix, where
$\Eb_{11}$ and $\Eb_{22}$ are square matrices. We have
\[
\operatorname{tr}\bigl\{\Eb^{-1}\bigr\} = \operatorname{tr}\bigl\{\Eb^{-1}_{22}
\bigr\} + \operatorname{tr}\bigl\{\Vb\bigl[\Ib+ \Eb_{12}\Eb _{22}^{-2}
\Eb_{21}\bigr]\bigr\},
\]
where $\Vb= (\Eb_{11} - \Eb_{12}\Eb_{22}^{-1}\Eb_{21})^{-1}$.
\end{lemma}
\begin{lemma}\label{lemmaS2}
$\operatorname{tr}\{(\Eb^*_{b,K})^{-1}\} <
\operatorname{tr}\{(\Eb^*_{b,K-1})^{-1}\} + (N-3)^{-1}$.
\end{lemma}
\begin{pf}
For $K = 2$, $\operatorname{tr}\{\Eb^{-1}_{b, 2}\} = 2N_b/(N_b^2 - c_b^2)$ is
minimized when $c = -1$, or equivalently, $c_b = -1 - N^{-1}$.
Thus,
\[
\operatorname{tr}\bigl\{\bigl(\Eb^*_{b, 2}\bigr)^{-1}\bigr\} - \operatorname{tr}\bigl\{\bigl(\Eb^*_{b, 1}\bigr)^{-1}\bigr\} =
\frac{2N_b}{N_b^2 - (1+N^{-1})^2} - N_b^{-1} = \frac
{N^2-2N+2}{(N+1)(N-1)(N-2)} < \frac{1}{N-3}.
\]
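Indeed, writing $N_b = (N^2-1)/N$ and $1 + N^{-1} = (N+1)/N$, a direct
computation gives
\[
N_b^2 - \bigl(1+N^{-1}\bigr)^2 = \frac{(N+1)^2(N-2)}{N},\qquad
\frac{2N_b}{N_b^2 - (1+N^{-1})^2} = \frac{2(N-1)}{(N+1)(N-2)},
\]
and subtracting $N_b^{-1} = N/[(N-1)(N+1)]$ yields the displayed ratio,
which is less than $(N-3)^{-1}$ since
$(N+1)(N-1)(N-2) - (N^2-2N+2)(N-3) = 3N^2 - 9N + 8 > 0$.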
Suppose $\operatorname{tr}\{(\Eb^*_{b,K-1})^{-1}\} < \operatorname{tr}\{(\Eb^*_{b,K-2})^{-1}\} +
(N-3)^{-1}$.
We would like to show that $\operatorname{tr}\{(\Eb^*_{b,K})^{-1}\} < \operatorname{tr}\{(\Eb
^*_{b,K-1})^{-1}\} + (N-3)^{-1}$. To that end, we write
\[
\Eb^*_{b,K-1} = \left[
\matrix{
N_b & \mub^T_{b,j}
\vspace*{2pt}\cr
\mub_{b,j} & \Eb_{b,K-2} }
\right].
\]
With Lemma~\ref{lemmaS1} and the fact that $\Eb^*_{b,K-1}$ is positive
definite, we have
$b_{b,j,j} > 0$ and
\begin{eqnarray*}
A^{-1}_{b,j,j} &<& A^{-1}_{b,j,j}(1+b_{b,j,j})
= \operatorname{tr}\bigl\{\bigl(\Eb^*_{b,K-1}\bigr)^{-1}\bigr\} - \operatorname{tr}\bigl\{
\Eb_{b,K-2}^{-1}\bigr\}
\\
& \leq &\operatorname{tr}\bigl\{\bigl(\Eb^*_{b,K-1}\bigr)^{-1}\bigr\} - \operatorname{tr}
\bigl\{\bigl(\Eb^*_{b,K-2}\bigr)^{-1}\bigr\} <
(N-3)^{-1}.
\end{eqnarray*}
Thus, $A_{b, j, j} = N_b - a_{b, j, j} > N - 3$, and this in turn implies
$z_{b,j,j}(3_b) = 3_b - a_{b, j, j} > 0$. Following this fact and some
simple algebra, we can show that
the following matrix $\Eb_{b,K}(j)$, which is obtained by adding a row
and a column to
$\Eb^*_{b,K-1}$, is positive definite, and is thus in $\Xi_{N,K}$:
\[
\Eb_{b,K}(j) = \left[
\matrix{
N_b & 3_b & \mub^T_{b,j}
\vspace*{2pt}\cr
3_b & N_b & \mub^T_{b,j}
\vspace*{2pt}\cr
\mub_{b,j} & \mub_{b,j} & \Eb_{b,K-2} }
\right].
\]
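In detail, the Schur complement of $\Eb_{b,K-2}$ in $\Eb_{b,K}(j)$ is
\[
\left[
\matrix{
A_{b,j,j} & z_{b,j,j}(3_b)
\vspace*{2pt}\cr
z_{b,j,j}(3_b) & A_{b,j,j} }
\right],
\]
which is positive definite since $z_{b,j,j}(3_b) > 0$ and $A_{b,j,j} -
z_{b,j,j}(3_b) = N_b - 3_b = N - 3 > 0$; the positive definiteness of
$\Eb_{b,K}(j)$ then follows from that of $\Eb_{b,K-2}$.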
With Lemma~\ref{lemmaS1}, we also have
\begin{eqnarray}\label{Eq:Ej}
\operatorname{tr}\bigl\{\Eb^{-1}_{b,K}(j)\bigr\} &=& \operatorname{tr}\bigl\{
\Eb_{b,K-2}^{-1}\bigr\} + f_{b,j,j}(3_b)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&=&\operatorname{tr}\bigl\{\bigl(\Eb^*_{b,K-1}\bigr)^{-1}\bigr\} -
A^{-1}_{b,j,j}(1+b_{b,j,j}) + f_{b,j,j}(3_b).
\end{eqnarray}
By noting that $f_{b,j,j}(3_b)$ in (\ref{Eq:fij}) can be written as
\[
f_{b,j,j}(3_b) = \frac{2[z_{b,j,j}(3_b) +
(N-3)(1+b_{b,j,j})]}{(N-3)(N-3 + 2z_{b,j,j}(3_b))},
\]
we can show that $f_{b,j,j}(3_b) - A^{-1}_{b,j,j}(1+b_{b,j,j}) < (N-3)^{-1}$.
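To see this, abbreviate $A = A_{b,j,j}$, $z = z_{b,j,j}(3_b)$ and $b =
b_{b,j,j}$, and recall that $A = N - 3 + z$. A direct computation gives
\[
f_{b,j,j}(3_b) - A^{-1}(1+b) = \frac{2z}{(N-3)(N-3+2z)} + \frac
{(N-3)(1+b)}{(N-3+2z)(N-3+z)},
\]
and the claimed bound is equivalent to $(1+b)(N-3) < N - 3 + z = A$,
which is exactly the inequality $A^{-1}_{b,j,j}(1+b_{b,j,j}) <
(N-3)^{-1}$ established above.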
The proof is then completed by the fact that
\begin{eqnarray*}
\operatorname{tr}\bigl\{\bigl(\Eb^*_{b,K}\bigr)^{-1}\bigr\} &\leq& \operatorname{tr}\bigl\{
\Eb^{-1}_{b,K}(j)\bigr\} =\operatorname{tr}\bigl\{\bigl(\Eb
^*_{b,K-1}\bigr)^{-1}\bigr\} + f_{b,j,j}(3_b)
- A^{-1}_{b,j,j}(1+b_{b,j,j})
\\
&<& \operatorname{tr}\bigl\{\bigl(\Eb^*_{b,K-1}\bigr)^{-1}\bigr\} +
(N-3)^{-1}.
\end{eqnarray*}
\upqed\end{pf}
\begin{lemma}\label{lemmaS3}
Write $\Eb^*_{b,K} = \Eb^*_K - N^{-1}\Jb_K$ in the form of
\begin{equation}
\Eb^*_{b,K} = \left[\matrix{
N_b & c_b & \mub^T_{b,i}
\vspace*{2pt}\cr
c_b & N_b & \mub^T_{b,j}
\vspace*{2pt}\cr
\mub_{b,i} & \mub_{b,j} & \Eb_{b,K-2}}
\right] = \left[ \matrix{ N & c &
\mub^T_{i}
\vspace*{2pt}\cr
c & N & \mub^T_{j}
\vspace*{2pt}\cr
\mub_{i} & \mub_{j} & \Eb_{K-2}}
\right] - N^{-1}\Jb_K. \label{Eq:Estar}
\end{equation}
We have \textup{(i)}
$\operatorname{tr}\{(\Eb^*_{b,K})^{-1}\} = \operatorname{tr}\{\Eb^{-1}_{b,K-2}\} + f_{b,i,j}(c_b)$ and
\textup{(ii)} $f_{b,i,j}(c_b) < 2(N-3)^{-1}$.
\end{lemma}
\begin{pf}
We first replace $\Eb$, $\Eb_{11}$ and $\Eb_{22}$ in Lemma~\ref
{lemmaS1} by
$\Eb^*_{b,K}$,
\[
\Eb_{11} = \left[ \matrix{ N_b &
c_b
\vspace*{2pt}\cr
c_b & N_b }
\right],
\]
and $\Eb_{b, K-2}$, respectively. This allows us to verify (i). In
addition, we have from Lemma~\ref{lemmaS2} that
\begin{eqnarray*}
f_{b,i,j}(c_b) &= &\operatorname{tr}\bigl\{\bigl(\Eb^*_{b,K}
\bigr)^{-1}\bigr\} - \operatorname{tr}\bigl\{\Eb^{-1}_{b,K-2}\bigr\}
\leq \operatorname{tr}\bigl\{\bigl(\Eb^*_{b,K}\bigr)^{-1}\bigr\} - \operatorname{tr}\bigl\{
\bigl(\Eb^*_{b,K-2}\bigr)^{-1}\bigr\}
\\
&\leq& \operatorname{tr}\bigl\{\bigl(\Eb^*_{b,K}\bigr)^{-1}\bigr\} - \operatorname{tr}\bigl
\{\bigl(\Eb^*_{b,K-1}\bigr)^{-1}\bigr\} + \operatorname{tr}\bigl\{ \bigl(\Eb
^*_{b,K-1}\bigr)^{-1}\bigr\} - \operatorname{tr}\bigl\{\bigl(
\Eb^*_{b,K-2}\bigr)^{-1}\bigr\}
\\
& <& 2(N-3)^{-1}.
\end{eqnarray*}
This proves (ii).
\end{pf}
We now are ready to prove that $\Eb^*_K$ is similar to a block matrix.
This is done by considering the expression of $\Eb^*_{b,K}$ in (\ref
{Eq:Estar}).
We then will show that if $|c| > 3$, then $f_{b,i,j}(c_b) \geq
2(N-3)^{-1}$, which contradicts Lemma~\ref{lemmaS3}(ii), a
consequence of Lemma~\ref{lemmaS2}. Since the same argument can
be applied after
permuting the rows and columns of $\Eb^*_{b,K}$, $\Eb^*_K$ must have
off-diagonal elements equal to $-1$ or $3$, and
is thus similar to a block matrix. We begin this procedure by deriving
some useful results.
With Lemma~\ref{lemmaS3}(i) and equation (\ref{Eq:Ej}), we have
\begin{eqnarray*}
\operatorname{tr}\bigl\{\bigl(\Eb^*_{b,K}\bigr)^{-1}\bigr\} &= &\operatorname{tr}\bigl\{
\Eb^{-1}_{b,K-2}\bigr\} + f_{b,i,j}(c_b)
\\
&\leq&\min\bigl\{\operatorname{tr}\bigl\{\Eb^{-1}_{b,K-2}\bigr\} +
f_{b,i,i}(3_b), \operatorname{tr}\bigl\{\Eb ^{-1}_{b,K-2}
\bigr\} + f_{b,j,j}(3_b) \bigr\}.
\end{eqnarray*}
Thus, $f_{b,i,j}(c_b) \leq\min\{f_{b,i,i}(3_b), f_{b,j,j}(3_b)\}$.
Let $z_{b,g}(c_b) = \sqrt{z_{b,i,i}(c_b)z_{b,j,j}(c_b)}$; then
\begin{eqnarray*}
&&\frac{z_{b,i,i}(3_b) b_{b,j,j} + z_{b,j,j}(3_b)b_{b,i,i}}{2}\\
&&\qquad\geq \sqrt{z_{b,i,i}(3_b)
b_{b,j,j}z_{b,j,j}(3_b)b_{b,i,i}}
\\
&&\qquad= \sqrt{z_{b,i,i}(3_b)z_{b,j,j}(3_b)}
\sqrt{b_{b,i,i}b_{b,j,j}} \geq z_{b,g}(3_b)|b_{b,i,j}|.
\end{eqnarray*}
The last inequality is due to the Cauchy--Schwarz inequality
[see also, Theorem~14.10.1 of \citet{Harville1997bk31}]. With the same
reason, we also have $a^2_{b,i,j} \leq a_{b,i,i}a_{b,j,j}$. Thus,
for $|c| > 3$,
\begin{eqnarray*}
\bigl|z_{b,i,j}(c_b)\bigr| &=& |c_b - a_{b,i,j}|
\geq|c| - \bigl|N^{-1}\bigr| - |a_{b,i,j}| > 3_b -
\sqrt{a_{b,i,i}a_{b,j,j}}
\\
&\geq&3_b - \frac{a_{b,i,i} + a_{b,j,j}}{2} = \frac{3_b - a_{b,i,i} +
3_b - a_{b,j,j}}{2}\\
& =&
\frac{z_{b,i,i}(3_b) + z_{b,j,j}(3_b)}{2}.
\end{eqnarray*}
We note that $a_{b,i,i}$ and $a_{b,j,j}$ are positive since $\Eb
_{b,K-2}$ is positive definite.
Let $z_{b,a}(c_b) = (z_{b,i,i}(c_b) + z_{b,j,j}(c_b))/2$. We have, for
$|c| > 3$,
$|z_{b,i,j}(c_b)| > z_{b,a}(3_b)\geq z_{b,g}(3_b)$. It also can be
easily seen that
\[
f_{b,j,j}(3_b) = \frac{2[A_{b,j,j} + (N-3)b_{b,j,j}]}{(N-3)[A_{b,j,j} +
z_{b,j,j}(3_b)]}.
\]
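Indeed, setting $i = j$ and $c_b = 3_b$ in (\ref{Eq:fij}) gives
\[
f_{b,j,j}(3_b) = \frac{2A_{b,j,j} + 2b_{b,j,j}[A_{b,j,j} -
z_{b,j,j}(3_b)]}{[A_{b,j,j} - z_{b,j,j}(3_b)][A_{b,j,j} +
z_{b,j,j}(3_b)]},
\]
and the displayed form follows from $A_{b,j,j} - z_{b,j,j}(3_b) = N_b -
3_b = N-3$.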
With these facts and some algebra similar to that in \citet
{SatheShenoy1989ar2021},
we can see that, for $|c| > 3$,
\begin{eqnarray*}
&&\bigl[A_{b,i,i}A_{b,j,j} - z^2_{b,i,j}(c_b)
\bigr] f_{b,i,j}(c_b)
\\
&&\qquad= \bigl[(N-3)^2 +2(N-3)z_{b,a}(3_b)+
z^2_{b,g}(3_b) - z^2_{b,i,j}(c_b)
\bigr]f_{b,i,j}(c_b)
\\
&&\qquad\geq2\bigl[N-3 + z_{b,a}(3_b)\bigr]\\
&&\qquad\quad{} +
\bigl[(N-3)z_{b,a}(3_b) + z^2_{b,g}(3_b)
- z^2_{b,i,j}(c_b)\bigr] f_{b,i,j}(c_b).
\end{eqnarray*}
The first equality is due to $A_{b,i,j} = N - c + z_{b,i,j}(c_b)$. This
in turn leads to
$f_{b,i,j}(c_b) \geq2(N-3)^{-1}$. With Lemma~\ref{lemmaS3}(ii), we
thus can conclude that $|c| \leq3$ and
$\Eb^*_K$ is similar to a block matrix. The proof of Lemma~\ref
{lemmaA4} is then completed by using Lemma~\ref{lemmaA3}.
\end{appendix}
\section{Introduction}
Modeling flow in fractured porous media has drawn great attention in the past decades, being fundamental for addressing many environmental and energy problems, such as water resources management, isolation of radioactive waste and ground water contamination.
Given the wide applications of fractured models in practice, many advances have been made in the design of efficient numerical methods for fractured porous media.
In \cite{Martin05}, a mixed finite element method is developed and error estimates are also proved.
Later, a mixed finite element method on non-matching grids is considered in \cite{DAngelo12}.
In \cite{Chave18}, a hybrid-high order method is analyzed on fairly general meshes.
The error estimates proposed therein show that the method is fully robust with respect to the heterogeneity of the permeability coefficients.
In \cite{Antonietti19}, a discontinuous Galerkin approximation for flows in fractured porous media on polytopal grids is analyzed, where optimal convergence estimates in mesh-dependent energy norm are derived on fairly general meshes possibly including elements with unbounded number of faces.
In addition to the aforementioned methods, we also mention other methods that have been developed for fractured porous media, see \cite{Lipnikov08, Angot09, Sandve12, Berrone13, Fumagalli13, Benedetto14, KBrenner16, AntoniettiMF16,chen2016,chen2017, KBrenner17, Delpra17,Boon18, Formaggia18}.
Staggered discontinuous Galerkin (DG) methods were initially introduced to solve wave propagation problems \cite{EricEngquistwave06,ChungWave}.
The salient features of the staggered DG method make it desirable for practical applications, and its applications to various partial differential equations important for both science and engineering have been considered in \cite{ChungKimWid13, EricCiarYu13, KimChungLee13, Cheung15, LeeKim16, ChungQiu17, ChungParkLina18, LinaParkconvection19}.
Recently, staggered DG methods have been successfully designed on fairly general meshes possibly including hanging nodes for the Darcy law and the Stokes equations, respectively \cite{LinaPark, LinaParkShin}.
They have been further developed, with essential modifications, to solve the coupled Stokes-Darcy problem and the Brinkman problem \cite{LinaParkcoupledSD,LinaEricLam20}.
The staggered DG methods designed therein enjoy many desirable features, including:
1) they can be flexibly applied to fairly general meshes with possible inclusion of hanging nodes, and the hanging nodes can be simply incorporated in the construction of the method;
2) superconvergence can be obtained, which delivers one order higher convergence once a proper postprocessing scheme is designed;
3) local mass conservation is preserved, which is highly appreciated in the simulation of multiphase flow.
In addition, the mass matrix is block diagonal, which is desirable when explicit time stepping schemes are used;
4) no numerical flux or penalty term is needed, in contrast to other DG methods.
The purpose of this paper is to develop and analyze a staggered DG method for the coupled bulk-fracture model stemming from the modeling of flows in fractured porous media, allowing more general meshes such as elements with arbitrarily small edges.
The flexibility of the staggered DG method in handling fairly general meshes and its preservation of physical properties indeed make it an attractive candidate for this kind of problem.
In this paper we propose a discretization which combines a staggered DG approximation for the problem in the bulk domains with a conforming finite element approximation on the fracture.
Unlike the strategies employed in \cite{DAngelo12,Chave18}, we impose the coupling conditions by replacing all the terms involving the jump and average of the flux by the corresponding pressure terms, which compensates for the degrees of freedom of the bulk pressure across the fracture.
The existence and uniqueness of the resulting system are proved and a rigorous error analysis is carried out.
In particular, we prove the convergence estimates under a weaker assumption on the polygonal mesh by exploiting some novel strategies.
Research in this direction has drawn great attention; see \cite{Beiraoda17,BrennerSung18,CaoChen18,Antonietti19,CaoChen19} for works considering general polygonal elements allowing arbitrarily small edges.
The primary difficulty in the \textit{a priori} error estimates lies in the fact that the $L^2$ error of the flux is coupled with the energy error of the fracture pressure, which would naturally lead to suboptimal convergence for the $L^2$ error of the flux.
To overcome this issue, we construct a Ritz projection for the fracture pressure so that the term causing the suboptimal convergence vanishes.
Moreover, we are able to show that the Ritz projection superconverges to the numerical approximation of the fracture pressure. Then, without resorting to a duality argument, we achieve optimal convergence for the $L^2$ errors of the fracture pressure and the bulk pressure, respectively.
It is noteworthy that our error estimates are shown to be fully robust with respect to the heterogeneity and anisotropy
of the permeability coefficients, which is a desirable feature for fractured flow simulation.
The theoretical findings are verified by a series of numerical tests.
In particular, the numerical tests indicate that our method is robust to anisotropic meshes.
We emphasize that our method allows general meshes with arbitrarily small edges; thus it can be easily adapted to solve problems on unfitted grids.
In fact, we only need to update the interface elements by connecting the intersection points between the background grid and the fracture, whereby the resulting grids are again fitted to the fracture and can thus be naturally embedded into our current framework.
Therefore, this paper focuses on the fitted-mesh case, which contains the heart of the novelty, to keep the presentation clear.
The rest of this paper is organized as follows.
In the next section, we describe the model problem and derive the staggered DG formulation for the bulk region coupled with a standard conforming Galerkin formulation inside the fracture.
In addition, some fundamental ingredients are given in order to prove the \textit{a priori} error estimates.
In Section~\ref{sec:error}, \textit{a priori} error estimates in the $L^2$ norm are derived for the bulk flux, the bulk pressure and the fracture pressure, where a discrete trace inequality is proved.
Then several numerical experiments are given in Section~\ref{sec:numerical} to confirm the theoretical findings, where various tests including elements with small edges and anisotropic meshes are demonstrated.
Finally, a conclusion is given.
\section{Description of staggered DG method}
In this section we first describe the governing equations modeling Darcy flows in fractured porous media.
Then staggered DG discretization is derived for the model problem under consideration.
Finally, we introduce some technical results that are vital for subsequent sections.
\subsection{Model problem}
We consider a porous medium saturated by an incompressible fluid that occupies the space region $\Omega\subset \mathbb{R}^2$ and is crossed by a single fracture $\Gamma$.
Here, $\Omega_B:=\Omega\backslash \bar{\Gamma}$ represents the bulk region and can be decomposed as $\Omega_B:=\Omega_{B,1}\cup \Omega_{B,2}$.
In addition, we denote by $\partial \Omega_B:=\bigcup_{i=1}^2 \partial \Omega_{B,i}\backslash \bar{\Gamma}$ and denote by $\partial \Gamma$ the boundary of fracture $\Gamma$.
$\bm{n}_\Gamma$ denotes a unit normal vector to $\Gamma$ with a fixed orientation.
The schematic of the bulk and fracture domain is illustrated in Figure~\ref{fig:bulkdomain}.
Without loss of generality, we assume in the following that the subdomains are numbered so that $\bm{n}_\Gamma$ coincides with the outward normal direction of $\Omega_{B,1}$.
In the bulk region, we model the motion of the incompressible fluid by Darcy's law in mixed form, so that the pressure $p: \Omega_B\rightarrow \mathbb{R}$ and the flux $\bm{u}: \Omega_B \rightarrow \mathbb{R}^2$ satisfy
\begin{align}
\bm{u}+K\nabla p & =\bm{0}\quad \mbox{in}\;\Omega_B,\label{eq:bulk1} \\
\nabla \cdot \bm{u} & =f\quad \mbox{in}\;\Omega_B,\label{eq:bulk2} \\
p & =p_0\quad \mbox{on}\; \partial \Omega_B.
\end{align}
Here, $p_0\in H^{\frac{1}{2}}(\partial \Omega_B)$ is the boundary pressure, and $K:\Omega_B \rightarrow \mathbb{R}^{2\times 2}$ is the bulk permeability tensor, which is assumed to be symmetric and piecewise constant.
Further, we assume that $K$ is uniformly elliptic, so that there exist two strictly positive real numbers $K_1$ and $K_2$ satisfying, for almost every $x\in \Omega_B$ and all $\bm{z}\in \mathbb{R}^2$ such that $|\bm{z}|=1$,
\begin{equation*}
0<K_1\leq K(x) \bm{z}\cdot \bm{z}\leq K_2.
\end{equation*}
Inside the fracture, we consider the motion of the fluid as governed by Darcy's law in primal form, so that the fracture pressure $p_\Gamma: \Gamma\rightarrow \mathbb{R}$ satisfies
\begin{equation}
\begin{aligned}
-\nabla_t \cdot (K_\Gamma \nabla_t p_\Gamma)
& =\ell_{\Gamma}f_\Gamma+[\bm{u}\cdot \bm{n}_\Gamma]
& & \mbox{in}\; \Gamma, \\
p_\Gamma
& = g_\Gamma
& & \mbox{on}\; \partial \Gamma,
\end{aligned}
\label{eq:fracture}
\end{equation}
where $f_\Gamma\in L^2(\Gamma)$ and $K_\Gamma: = \kappa_\Gamma^*\ell_\Gamma$ with $\kappa_\Gamma^*: \Gamma\rightarrow \mathbb{R}$ and $\ell_\Gamma:\Gamma\rightarrow \mathbb{R}$ denoting the tangential permeability and thickness of the fracture, respectively.
The quantities $\kappa_\Gamma^*$ and $\ell_\Gamma$ are assumed to be piecewise constants.
Here, $\nabla_t\cdot$ and $\nabla_t$ denote the tangential divergence and gradient operators along $\Gamma$, respectively.
For the sake of simplicity, we assume $p_0=0$ and $g_\Gamma=0$ in the analysis.
The above problems are coupled by the following interface conditions
\begin{equation}
\begin{aligned}
\eta_\Gamma \{\bm{u}\cdot\bm{n}_\Gamma\} & =[p] & & \mbox{on}\;\Gamma, \\
\alpha_\Gamma[\bm{u}\cdot\bm{n}_\Gamma] & =\{p\}-p_\Gamma & & \mbox{on}\;\Gamma,
\end{aligned}\label{eq:interface}
\end{equation}
where we set
\begin{equation*}
\eta_\Gamma: =\frac{\ell_\Gamma}{\kappa_\Gamma^n},\quad \alpha_\Gamma:=\eta_\Gamma(\frac{\xi}{2}-\frac{1}{4}).
\end{equation*}
Here $\xi\in (\frac{1}{2},1]$ is a model parameter, and $\kappa_\Gamma^n: \Gamma\rightarrow \mathbb{R}$ represents the normal permeability of the fracture, which is assumed to be a piecewise constant.
As in the bulk domain, we assume that there exists positive constants $\kappa_1^*,\kappa_2^*,\kappa_1^n,\kappa_2^n$ such that, almost everywhere on $\Gamma$,
\begin{equation*}
\kappa_1^*\leq \kappa_\Gamma^*\leq \kappa_2^*,\quad \kappa_1^n\leq \kappa_\Gamma^n\leq \kappa_2^n.
\end{equation*}
Also, $[\cdot]$ and $\{\cdot\}$ are jump and average operators, respectively, and their precise definitions can be found in the next subsection.
The well-posedness of the coupled problem for $\xi\in (\frac{1}{2},1]$ has been proved in \cite{Martin05}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{mesh_schematic.eps}
\caption{Illustration of bulk and fracture domain.}
\label{fig:bulkdomain}
\end{figure}
\begin{remark}[Neumann boundary conditions]
\rm{When the fracture tip is immersed in the domain $\Omega_B$, the boundary condition at the immersed tip can be modeled as a homogeneous Neumann boundary condition, see \cite{Angot09}.
For both the bulk and fracture domains, the Neumann boundary condition can be treated as a natural boundary condition.
Since the analysis for such a boundary condition is parallel to that for Dirichlet boundary conditions, we only consider the latter for simplicity.}
\end{remark}
Before closing this subsection, we introduce some notations that will be employed throughout the paper.
Let $D\subset \mathbb{R}^d,$ $d=1,2$, we adopt the standard notations for the Sobolev spaces $H^s(D)$ and their associated norms $\|\cdot\|_{s,D}$, and semi-norms $|\cdot|_{s,D}$ for $s\geq 0$.
The space $H^0(D)$ coincides with $L^2(D)$, for which the norm is denoted as $\|\cdot\|_{D}$.
We use $(\cdot,\cdot)_D$ to denote the inner product for $d=2$ and $\langle\cdot,\cdot\rangle_D$ for $d=1$.
If $D=\Omega$, the subscript $\Omega$ will be dropped unless otherwise mentioned.
In the sequel, we use $C$ to denote a generic positive constant which may have different values at different occurrences.
\subsection{Staggered DG method}
In this subsection, we begin with introducing the construction of our staggered DG spaces, in line with this we then present the staggered DG method for the model problem \eqref{eq:bulk1}-\eqref{eq:interface}.
We consider a family of meshes $\mathcal{T}_u$ made of disjoint polygonal (primal) elements which are aligned with the fracture $\Gamma$, so that no element $T\in \mathcal{T}_u$ is cut by $\Gamma$.
Note that, since $\Omega_{B,1}$ and $\Omega_{B,2}$ are disjoint, each element $T$ belongs to one of the two subdomains.
The edges of the decomposition $\mathcal{T}_u$, excluding those lying on the fracture $\Gamma$, are called primal edges, and their collection is denoted by $\mathcal{F}_u$.
We use $\mathcal{F}_u^0$ to denote the set of interior primal edges, that is, the edges in $\mathcal{F}_{u}$ that do not lie on $\partial\Omega_B$.
In addition, we use $\mathcal{F}_h^\Gamma$ to denote the one-dimensional mesh of the fracture $\Gamma$.
For the construction of staggered DG method, we decompose each element $T \in \mathcal{T}_u$ into the union of triangles by connecting the interior point $\nu$ of $T$ to all the vertices.
Here the interior point $\nu$ is chosen as the center point for simplicity.
We denote the union of these sub-triangles by $S(\nu)$ to indicate that the triangles share the common vertex $\nu$.
In addition, the resulting simplicial sub-meshes are denoted as $\mathcal{T}_h$.
Moreover, some additional edges are generated in the subdivision process due to the connection of $\nu$ to all the vertices of the primal element, and these edges are denoted by $\mathcal{F}_p$.
For each triangle $\tau\in \mathcal{T}_h$, we let $h_\tau$ be the diameter of $\tau$ and $h=\max\{h_\tau, \tau\in \mathcal{T}_h\}$.
In addition, we define $\mathcal{F}:=\mathcal{F}_{u}\cup \mathcal{F}_{p}$ and $\mathcal{F}^{0}:=\mathcal{F}_{u}^{0}\cup \mathcal{F}_{p}$.
The construction for general meshes is illustrated in Figure~\ref{grid}, where the black solid lines are edges in $\mathcal{F}_{u}$ and black dotted lines are edges in $\mathcal{F}_{p}$.
Finally, we construct the dual mesh.
For each interior edge $e\in \mathcal{F}_{u}^0$, we use $D(e)$ to represent the dual mesh, which is the union of the two triangles in $\mathcal{T}_h$ sharing the edge $e$.
For each edge $e\in(\mathcal{F}_{u}\backslash\mathcal{F}_{u}^0)\cup \mathcal{F}_h^\Gamma$, we use $D(e)$ to denote the triangle in $\mathcal{T}_h$ having the edge $e$, see Figure~\ref{grid}.
For each edge $e$, we define a unit normal vector $\bm{n}_{e}$ as follows:
If $e\in \mathcal{F}\backslash \mathcal{F}^{0}$, then $\bm{n}_{e}$ is the unit normal vector of $e$ pointing towards the outside of $\Omega$.
If $e\in \mathcal{F}^{0}$, an interior edge, we then fix $\bm{n}_{e}$ as one of the two possible unit normal vectors on $e$.
When there is no ambiguity, we use $\bm{n}$ instead of $\bm{n}_{e}$ to simplify the notation.
Typical analysis for polygonal elements usually requires the following mesh regularity assumptions (cf. \cite{Beir13,Cangiani16}):
\begin{description}
\item[Assumption (A)] Every element $S(\nu)$ in $\mathcal{T}_{u}$ is star-shaped with respect to a ball of radius $\geq \rho_S h_{S(\nu)}$, where $\rho_S$ is a positive constant and $h_{S(\nu)}$ denotes the diameter of $S(\nu)$.
\item[Assumption (B)] For every element $S(\nu)\in \mathcal{T}_{u}$ and every edge $e\in \partial S(\nu)$, it satisfies $h_e\geq \rho_E h_{S(\nu)}$, where $\rho_E$ is a positive constant and $h_e$ denotes the length of edge $e$.
\end{description}
Assumptions (A) and (B) guarantee that the triangulation $\mathcal{T}_h$ is shape regular.
However, Assumption (B) excludes elements with arbitrarily small edges, which are of interest in practical applications.
Thus, in this paper, we will show the convergence estimates by assuming only Assumption (A).
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{mesh_schematic1.eps}\hspace{1em}
\includegraphics[width=0.35\textwidth]{mesh_schematic2.eps}
\caption{Schematic of the primal mesh $S(\nu)$, the dual mesh $D(e)$ and the primal simplicial sub-meshes.}
\label{grid}
\end{figure}
Let $k\geq 0$ be the order of approximation. For every $\tau \in \mathcal{T}_{h}$ and $e\in\mathcal{F}$, we define $P^{k}(\tau)$ and $P^{k}(e)$ as the spaces of polynomials of degree less than or equal to $k$ on $\tau$ and $e$, respectively.
For $q$ and $\bm{v}$ belonging to the broken Sobolev space, the jump $[q]\mid_e$ and the jump $[\bm{v}\cdot\bm{n}]\mid_e$ over $e\in \mathcal{F}^0\cup \mathcal{F}_h^\Gamma$ are defined respectively as
\begin{equation*}
[q]=q_{1}-q_{2}, \quad [\bm{v}\cdot\bm{n}]=\bm{v}_{1}\cdot\bm{n}-\bm{v}_{2}\cdot\bm{n},
\end{equation*}
where $q_{i}=q\mid_{\tau_{i}}$, $\bm{v}_{i}=\bm{v}\mid_{\tau_{i}}$ and $\tau_{1}$, $\tau_{2}$ are the two triangles in $\mathcal{T}_h$ having the edge $e$.
Moreover, for $e\in \mathcal{F}\backslash \mathcal{F}^0$, we define $[q]=q_1$.
In the above definitions, we assume $\bm{n}$ is pointing from $\tau_1$ to $\tau_2$.
Similarly, we define the average $\{q\}\mid_e$ and the average $\{\bm{v}\cdot\bm{n}\}\mid_e$ over $e\in \mathcal{F}^0\cup \mathcal{F}_h^\Gamma$ by
\begin{align*}
\{q\}=\frac{q_{1}+q_{2}}{2}, \quad \{\bm{v}\cdot\bm{n}\}=\frac{\bm{v}_{1}\cdot\bm{n}+\bm{v}_{2}\cdot\bm{n}}{2},
\end{align*}
where $q_{i}=q\mid_{\tau_{i}}$, $\bm{v}_{i}=\bm{v}\mid_{\tau_{i}}$ and $\tau_{1}$, $\tau_{2}$ are the two triangles in $\mathcal{T}_h$ having the edge $e$.
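For later use, we also record the elementary identity
\begin{equation*}
[q (\bm{v}\cdot\bm{n})]=[q]\{\bm{v}\cdot\bm{n}\}+\{q\}[\bm{v}\cdot\bm{n}],
\end{equation*}
which follows directly from the above definitions and will be used to rewrite the bilinear form $b_h^*$ below.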
Next, we will introduce some finite dimensional spaces.
First, we define the following locally $H^{1}(\Omega)$ conforming space $S_h$:
\begin{equation*}
S_{h}:=\{q :
q\mid_{\tau}\in P^{k}(\tau)\;
\forall \tau \in \mathcal{T}_{h};\;[q]\mid_e=0\;\forall e\in\mathcal{F}_{u}^{0};
\;q\mid_{\partial \Omega_B}=0\}.
\end{equation*}
Notice that, if $q\in S_h$, then $q\mid_{D(e)}\in H^1(D(e))$ for each edge $e\in (\mathcal{F}_{u}\cup \mathcal{F}_h^\Gamma)$ and no continuity is imposed across $e\in \mathcal{F}_h^\Gamma$ for function $q\in S_h$.
The discrete $H^1$-norm for $S_h$ is defined as follows:
\begin{equation*}
\|q\|_Z^2=\sum_{\tau\in \mathcal{T}_h}\|\nabla q\|_{0,\tau}^2+\sum_{\tau\in \mathcal{T}_h}\sum_{e\in \mathcal{F}_{p}\cap \partial \tau}\frac{h_e}{2|\tau|}\|[q]\|_{0,e}^2,
\end{equation*}
where $|\tau|$ represents the area of the triangle $\tau\in \mathcal{T}_h$.
Note that the scaling in the second term used here is different from that of \cite{LinaPark}, and this modification enables us to show the convergence estimates without Assumption (B).
We specify the degrees of freedom for $S_h$ similar to that of \cite{ChungWave}.
(SD1) For $e\in \mathcal{F}_u$, we have
\begin{equation*}
\phi_e(q): = \langle q,p_k\rangle_e\quad \forall p_k\in P^{k}(e).
\end{equation*}
(SD2) For $\tau\in \mathcal{T}_h$, we have
\begin{equation*}
\phi_\tau(q): = (q,p_{k-1})_\tau \quad \forall p_{k-1}\in P^{k-1}(\tau).
\end{equation*}
(SD3) For $e\in \mathcal{F}_h^\Gamma$, we have for each $i=1,2$
\begin{equation*}
\phi_{e}^i(q): = \langle q\mid_{\Omega_{B,i}},p_k\rangle_e \quad \forall p_k\in P^k(e).
\end{equation*}
Note that in the original staggered DG method, the finite dimensional space for the pressure is continuous over all the primal edges, in which case (SD3) coincides with (SD1).
In this paper we consider Darcy flow with a fracture, where the pressure is discontinuous across the fracture; thereby (SD3) cannot coincide with (SD1).
Proceeding analogously to Lemma~2.2 of \cite{ChungWave}, we can show that any function $q\in S_h$ is uniquely determined by the degrees of freedom (SD1)-(SD3), which is omitted here for simplicity.
We next define the following locally $H(\mbox{div};\Omega)-$conforming space $\bm{V}_h$:
\begin{equation*}
\bm{V}_{h}=\{\bm{v}:
\bm{v}\mid_{\tau} \in P^{k}(\tau)^{2}\;\forall \tau \in \mathcal{T}_{h};\;
[\bm{v}\cdot\bm{n}]\mid_e=0\;\forall e\in \mathcal{F}_{p}\}.
\end{equation*}
Note that if $\bm{v}\in \bm{V}_h$, then $\bm{v}\mid_{S(\nu)}\in H(\textnormal{div};S(\nu))$ for each $S(\nu)\in \mathcal{T}_{u}$.
We equip $\bm{V}_h$ with the following discrete $L^2$ norm
\begin{equation*}
\|\bm{v}\|_{X'}^2=\|\bm{v}\|_0^2+\sum_{\tau\in \mathcal{T}_h}\sum_{e\in \mathcal{F}_{p}\cap \partial \tau}\frac{|\tau|}{2h_e}\|\bm{v}\cdot \bm{n}\|_{0,e}^2.
\end{equation*}
The degrees of freedom for $\bm{V}_h$ can be defined below.
(VD1) For each edge $e\in \mathcal{F}_{p}$, we have
\begin{equation*}
\psi_e(\bm{v}):=\langle\bm{v}\cdot\bm{n}, p_k\rangle_e \quad \forall p_k\in P^k(e).
\end{equation*}
(VD2) For each $\tau\in \mathcal{T}_h$, we have
\begin{equation*}
\psi_\tau(\bm{v}):=(\bm{v}, \bm{p}_{k-1})_\tau \quad \forall \bm{p}_{k-1}\in P^{k-1}(\tau)^2.
\end{equation*}
Finally, we define a finite dimensional subspace of $H^1_0(\Gamma)$ by
\begin{equation*}
W_h=\{q_\Gamma: q_\Gamma\in H^1_0(\Gamma)\;|\; q_\Gamma\mid_e \in P^k(e), \forall e\in \mathcal{F}_h^\Gamma\}.
\end{equation*}
With the above preparations, we can now derive our staggered DG method by following \cite{LinaPark,LinaParkShin}.
Multiplying \eqref{eq:bulk1} by $\bm{v}\in \bm{V}_h$ and performing integration by parts, we can obtain
\begin{align*}
& (K^{-1}\bm{u},\bm{v})_{\Omega_B}+\sum_{e\in \mathcal{F}_u}\langle p, [\bm{v}\cdot \bm{n}]\rangle_e+\sum_{e\in \mathcal{F}_h^\Gamma}\langle [p],\{\bm{v}\cdot \bm{n}\}\rangle_e \\
& \;+\sum_{e\in \mathcal{F}_h^\Gamma}\langle \{p\},[\bm{v}\cdot \bm{n}]\rangle_e-\sum_{\tau\in \mathcal{T}_h}(p, \nabla \cdot \bm{v})_\tau=0,
\end{align*}
where the staggered continuity property of $\bm{v}$ is integrated into the derivation.
Similarly, multiplying \eqref{eq:bulk2} by $q\in S_h$ and performing integration by parts yield
\begin{equation}
\sum_{e\in \mathcal{F}_p}\langle\bm{u}\cdot \bm{n}, [q]\rangle_e+\sum_{e\in \mathcal{F}_h^\Gamma} \langle [\bm{u}\cdot \bm{n} ],\{q\}\rangle_e+\sum_{e\in \mathcal{F}_h^\Gamma} \langle \{\bm{u}\cdot \bm{n}\},[q]\rangle_e-\sum_{\tau\in \mathcal{T}_h}(\bm{u}, \nabla q)_\tau=(f,q)_{\Omega_B}\label{eq:uterm}.
\end{equation}
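Note that the interface conditions \eqref{eq:interface} can be solved for the flux terms on $\Gamma$, namely
\begin{equation*}
\{\bm{u}\cdot \bm{n}_\Gamma\}=\frac{1}{\eta_\Gamma}[p],\qquad [\bm{u}\cdot \bm{n}_\Gamma]=\frac{1}{\alpha_\Gamma}\bigl(\{p\}-p_\Gamma\bigr),
\end{equation*}
where we take $\bm{n}=\bm{n}_\Gamma$ on the edges of $\mathcal{F}_h^\Gamma$.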
Substituting these relations into \eqref{eq:uterm}, we can recast the above formulation as
\begin{equation*}
\sum_{e\in \mathcal{F}_p}\langle\bm{u}\cdot \bm{n}, [q]\rangle_e+\sum_{e\in \mathcal{F}_h^\Gamma}\langle \frac{1}{\alpha_\Gamma}(\{p\}-p_\Gamma),\{q\}\rangle_e+\sum_{e\in \mathcal{F}_h^\Gamma}\langle \frac{1}{\eta_\Gamma}[p],[q]\rangle_e-\sum_{\tau\in \mathcal{T}_h}(\bm{u}, \nabla q)_\tau=(f,q)_{\Omega_B}.
\end{equation*}
As for the fracture model \eqref{eq:fracture}, we multiply by $q_{\Gamma}\in W_h$ and replace the jump term $[\bm{u}\cdot \bm{n}_\Gamma]$ by utilizing \eqref{eq:interface}, which implies
\begin{equation*}
\langle K_\Gamma\nabla_t p_{\Gamma}, \nabla_t q_\Gamma\rangle_\Gamma-\sum_{e\in \mathcal{F}_h^\Gamma}\langle\frac{1}{\alpha_\Gamma}(\{p\}-p_{\Gamma}),q_\Gamma\rangle_e
=\langle\ell_\Gamma f_\Gamma, q_\Gamma\rangle_\Gamma.
\end{equation*}
Thereby we obtain the following discrete formulation for the model problem \eqref{eq:bulk1}-\eqref{eq:fracture}:
Find $(\bm{u}_h, p_h, p_{\Gamma,h})\in \bm{V}_h\times S_h\times W_h$ such that
\begin{equation}
\begin{split}
(K^{-1} \bm{u}_h, \bm{v})_{\Omega_B}+b_h^*(p_h, \bm{v})&=0,\\
-b_h(\bm{u}_h, q)+\sum_{e\in \mathcal{F}_h^\Gamma}\langle \frac{1}{\alpha_\Gamma}(\{p_h\}-p_{\Gamma,h}),\{q\}\rangle_e+\sum_{e\in \mathcal{F}_h^\Gamma}\langle \frac{1}{\eta_\Gamma}[p_h],[q]\rangle_e&=(f,q)_{\Omega_B},\\
\langle K_\Gamma\nabla_t p_{\Gamma,h}, \nabla_t q_{\Gamma}\rangle_\Gamma-\sum_{e\in \mathcal{F}_h^\Gamma}\langle\frac{1}{\alpha_\Gamma}(\{p_h\}-p_{\Gamma,h}),q_{\Gamma}\rangle_e
&=\langle\ell_\Gamma f_\Gamma, q_{\Gamma}\rangle_\Gamma,\\
\forall (\bm{v},q,q_{\Gamma})\in \bm{V}_h\times S_h\times W_h,
\end{split}
\label{eq:discrete}
\end{equation}
where the bilinear forms are defined by
\begin{align*}
b_h(\bm{u}_h, q) & =- \sum_{e\in \mathcal{F}_p}\langle \bm{u}_h\cdot\bm{n},[q]\rangle_e+\sum_{\tau\in \mathcal{T}_h}(\bm{u}_h,\nabla q)_\tau, \\
b_h^*(p_h, \bm{v}) & =\sum_{e\in \mathcal{F}_u^0}\langle p_h,[\bm{v}\cdot\bm{n}]\rangle_e-\sum_{\tau\in \mathcal{T}_h}(p_h,\nabla \cdot\bm{v})_\tau+\sum_{e\in \mathcal{F}_h^\Gamma}\langle[p_h],\{\bm{v}\cdot\bm{n}\}\rangle_e
+\sum_{e\in \mathcal{F}_h^\Gamma}\langle\{p_h\},[\bm{v}\cdot\bm{n}]\rangle_e \\
& =\sum_{e\in \mathcal{F}_u^0}\langle p_h,[\bm{v}\cdot\bm{n}]\rangle_e
-\sum_{\tau\in \mathcal{T}_h}(p_h,\nabla \cdot\bm{v})_\tau
+\sum_{e\in \mathcal{F}_h^\Gamma}\langle[p_h(\bm{v}\cdot\bm{n})],1\rangle_e.
\end{align*}
Summing up the equations in \eqref{eq:discrete} yields the following formulation: Find $(\bm{u}_h,p_h,p_{\Gamma,h})\in \bm{V}_h\times S_h\times W_h$ such that
\begin{equation}
\begin{split}
&(K^{-1}\bm{u}_h,\bm{v})_{\Omega_B}+b_h^*(p_h,\bm{v})-b_h(\bm{u}_h,q)+\sum_{e\in \mathcal{F}_h^\Gamma}\langle \frac{1}{\alpha_\Gamma}(\{p_h\}-p_{\Gamma,h}),\{q\}-q_{\Gamma}\rangle_e\\
&\;+\sum_{e\in \mathcal{F}_h^\Gamma}\langle \frac{1}{\eta_\Gamma}[p_h],[q]\rangle_e
+\langle K_\Gamma\nabla_t p_{\Gamma,h}, \nabla_t q_{\Gamma}\rangle_\Gamma=(f,q)_{\Omega_B}+\langle\ell_\Gamma f_\Gamma, q_{\Gamma}\rangle_\Gamma,\\
&\hskip 8cm\forall (\bm{v},q,q_{\Gamma})\in \bm{V}_h\times S_h\times W_h.
\end{split}
\label{eq:weak}
\end{equation}
Integration by parts reveals the following adjoint property
\begin{equation}
b_h(\bm{v},q)=b_h^*(q, \bm{v})\quad \forall (\bm{v},q)\in \bm{V}_h\times S_h.\label{eq:adjoint}
\end{equation}
\begin{remark}
\rm{
In the derivation we employ the interface conditions \eqref{eq:interface} to replace all the terms corresponding to $\bm{u}$ on the fracture $\Gamma$ by $p$ and $p_\Gamma$, which is different from existing methods such as the hybrid high-order method and mixed finite element method \cite{Chave18,DAngelo12}.
Our methodology is based on the fact that the degrees of freedom for the bulk pressure (SD3) are defined with respect to the primal edges on the fracture.
We also emphasize that the discrete velocity $\bm{u}_h$ can be made both locally and globally mass conservative by a suitable postprocessing (cf. \cite{ChungCockburn14}).
Moreover, the use of conforming finite element to discretize the equations in the fracture is made just for simplicity, other discretization techniques can be exploited.
}
\end{remark}
\begin{lemma}Under Assumption (A), we have the following inf-sup condition
\begin{align}
\inf_{q\in S_h}\sup_{\bm{v}\in \bm{V}_h}\frac{b_h(\bm{v},q)}{\|\bm{v}\|_{X'}\|q\|_Z}\geq C.\label{eq:inf-sup}
\end{align}
\end{lemma}
\begin{proof}
The proof of this lemma follows ideas similar to those of Theorem~3.2 of \cite{ChungWave}; we simplify it by direct application of the degrees of freedom (VD1)-(VD2).
In addition, our proof here only relies on Assumption (A), thanks to the modified norms $\|\cdot\|_Z$ and $\|\cdot \|_{X'}$ defined above.
Let $q\in S_h$. It suffices to find $\bm{v}\in \bm{V}_h$ such that
\begin{equation*}
b_h(\bm{v},q)\geq C \|q\|_Z^2\quad \mbox{and} \quad \|\bm{v}\|_{X'}\leq C \|q\|_Z.
\end{equation*}
Recall that
\begin{equation}
b_h(\bm{v}, q) =- \sum_{e\in \mathcal{F}_p}\langle \bm{v}\cdot\bm{n},[q]\rangle_e+\sum_{\tau\in \mathcal{T}_h}(\bm{v},\nabla q)_\tau.\label{bh-recall}
\end{equation}
We define $\bm{v}$ by using degrees of freedom (VD1)
\begin{equation*}
\langle \bm{v}\cdot \bm{n},p_k\rangle_e=-\sum_{\substack{\tau\in\mathcal{T}_h\\\tau\subset D(e)}}\frac{h_e}{2|\tau|}\langle[q],p_k\rangle_e\quad\forall p_k\in P^k(e),\;e\in\mathcal{F}_p,
\end{equation*}
and (VD2)
\begin{equation*}
(\bm{v},\bm{p}_{k-1})_\tau = (\nabla q,\bm{p}_{k-1})_\tau\quad\forall\bm{p}_{k-1}\in P^{k-1}(\tau)^2,\;\tau\in\mathcal{T}_h,
\end{equation*}
which together with \eqref{bh-recall} yields
\begin{equation*}
b_h(\bm{v}, q)=- \sum_{e\in \mathcal{F}_p}\langle \bm{v}\cdot\bm{n},[q]\rangle_e+\sum_{\tau\in \mathcal{T}_h}(\bm{v},\nabla q)_\tau=\|q\|_{Z}^2.
\end{equation*}
On the other hand, scaling arguments imply
\begin{equation*}
\|\bm{v}\|_{X'}\leq C \|q\|_Z.
\end{equation*}
This completes the proof.
\end{proof}
Finally, we introduce the following interpolation operators, which play an important role in later analysis.
We define the interpolation operator $I_h: H^1(\Omega_B)\rightarrow S_h$ by
\begin{equation*}
\begin{split}
\langle I_h w-w,\psi\rangle_e&=0 \quad \forall \psi\in P^k(e),\;e\in \mathcal{F}_{u},\\
\langle (I_hw-w)|_{\Omega_{B,i}},\psi\rangle_{e}&=0\quad \forall \psi\in P^k(e),\;e\in \mathcal{F}_h^\Gamma,\;i=1,2,\\
(I_hw-w,\psi)_\tau&=0 \quad \forall \psi\in P^{k-1}(\tau),\;\tau\in \mathcal{T}_h
\end{split}
\end{equation*}
and the interpolation operator $J_h: H^\delta(\Omega_B)^2\rightarrow \bm{V}_h,\delta>1/2$ by
\begin{equation*}
\begin{split}
\langle(J_h \bm{v}-\bm{v})\cdot \bm{n},\phi\rangle_e&=0 \quad \forall \phi\in P^k(e),\;e\in \mathcal{F}_{p},\\
(J_h\bm{v}-\bm{v}, \bm{\phi})_\tau&=0 \quad \forall \bm{\phi}\in P^{k-1}(\tau)^2,\; \tau\in \mathcal{T}_h.
\end{split}
\end{equation*}
The definition of the interpolation operators implies that
\begin{align*}
b_h(\bm{u}-J_h\bm{u},w) & =0 \quad \forall w\in S_h, \\
b_h^*(p-I_hp,\bm{v}) & =0 \quad \forall \bm{v}\in \bm{V}_h.
\end{align*}
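Indeed, for $w \in S_h$ we have $[w]\mid_e\in P^k(e)$ for each $e\in \mathcal{F}_p$ and $\nabla w\mid_\tau\in P^{k-1}(\tau)^2$ for each $\tau\in \mathcal{T}_h$, so both terms in
\begin{equation*}
b_h(\bm{u}-J_h\bm{u}, w)=- \sum_{e\in \mathcal{F}_p}\langle (\bm{u}-J_h\bm{u})\cdot\bm{n},[w]\rangle_e+\sum_{\tau\in \mathcal{T}_h}(\bm{u}-J_h\bm{u},\nabla w)_\tau
\end{equation*}
vanish by the defining moment conditions of $J_h$; the identity for $b_h^*$ follows analogously from the definition of $I_h$.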
Notice that if $w$ is continuous on $\Gamma$, then $I_hw$ is also continuous on $\Gamma$.
The interpolation operators $I_h$ and $J_h$ satisfy: (1) they are locally defined for each element $\tau\in \mathcal{T}_h$; (2) for $p_k\in P^k(\tau)$ and $\bm{p}_k\in P^{k}(\tau)^2$, we have $I_hp_k=p_k$ and $J_h\bm{p}_k = \bm{p}_k$.
The following error estimates are clearly satisfied on the reference element $\hat{\tau}$ by using (1) and (2) (see \cite{Ciarlet78}).
\begin{equation*}
\begin{split}
\|\hat{\bm{v}}-J_h\hat{\bm{v}}\|_{0,\hat{\tau}}&\leq C \|\hat{\bm{v}}\|_{k+1,\hat{\tau}},\\
\|\hat{q}-I_h\hat{q}\|_{0,\hat{\tau}}&\leq C \|\hat{q}\|_{k+1,\hat{\tau}},\\
\|\nabla (\hat{q}-I_h\hat{q})\|_{0,\hat{\tau}}&\leq C \|\hat{q}\|_{k+1,\hat{\tau}},
\end{split}
\end{equation*}
where $\hat{\bm{v}}$ and $\hat{q}$ are the corresponding variables of $\bm{v}$ and $q$ on the reference element $\hat{\tau}$.
In addition, under Assumption (A), the maximum angles in $\mathcal{T}_h$ are uniformly bounded away from $\pi$ (although shape regularity is not guaranteed).
Then we can proceed as in Theorem~2.1 of \cite{Apel99} to obtain the following anisotropic error estimates for $\tau\in \mathcal{T}_{h}$:
\begin{equation}
\begin{split}
\|\bm{v}-J_h\bm{v}\|_{0,\tau}&\leq C h_\tau^{k+1}\|\bm{v}\|_{k+1,\tau}\quad \forall \bm{v}\in H^{k+1}(\tau)^2,\\
\|q-I_hq\|_{0,\tau}&\leq C h_\tau^{k+1}\|q\|_{k+1,\tau}\quad \forall q\in H^{k+1}(\tau).
\end{split}
\label{eq:interpolationIhJh-local}
\end{equation}
Here, generic constants $C$ possibly depend on $\rho_S$ in Assumption~(A) but not on $\rho_E$ in Assumption~(B).
We next introduce the standard nodal interpolation operator $\pi_h: H^1(\Gamma)\rightarrow W_h$, which satisfies for $q_\Gamma\in H^{k+1}(\Gamma)$
\begin{equation*}
\begin{split}
\|q_\Gamma-\pi_h q_\Gamma\|_{0,e}&\leq C h_e^{k+1}\|q_\Gamma\|_{k+1,e},\\
\|\nabla_t (q_\Gamma-\pi_h q_\Gamma)\|_{0,e}&\leq C h_e^{k}\|q_\Gamma\|_{k+1,e},
\end{split}
\end{equation*}
where $e\in \mathcal{F}_h^\Gamma$.
\section{Error analysis}\label{sec:error}
In this section, we present the unique solvability of the discrete system \eqref{eq:discrete} and the convergence estimates for all the variables involved under Assumption (A).
As the $L^2$ error of $\bm{u}$ is coupled with the energy error of $p_\Gamma$, a suboptimal convergence rate would result if a standard interpolation operator for $p_\Gamma$ were exploited.
As such, we employ a Ritz projection, which enables us to achieve optimal convergence estimates.
\begin{theorem}[stability]\label{thm:stability}
Under Assumption (A), the discrete system \eqref{eq:discrete} admits a unique solution $(\bm{u}_h, p_h, p_{\Gamma,h})\in \bm{V}_h\times S_h\times W_h$.
Furthermore, there exists a positive constant $C$ independent of $h$ but possibly depending on $\rho_S$ and the problem data such that
\begin{equation}
\begin{split}
&\|K^{-\frac{1}{2}}\bm{u}_h\|_{0,\Omega_B}^2+\|p_h\|_{0,\Omega_B}^2+\sum_{e\in \mathcal{F}_h^\Gamma}\|\eta_\Gamma^{-\frac{1}{2}}[p_h]\|_{0,e}^2
+\|K_\Gamma^{\frac{1}{2}}\nabla_t p_{\Gamma,h}\|_{0,\Gamma}^2+\sum_{e\in \mathcal{F}_h^\Gamma}\|\alpha_\Gamma^{-\frac{1}{2}}(\{p_h\}-p_{\Gamma,h})\|_{0,e}^2\\
&\leq C \Big(\|f\|_{0,\Omega_B}^2+\|\ell_\Gamma f_\Gamma\|_{0,\Gamma}^2\Big).
\end{split}
\label{eq:stability}
\end{equation}
\end{theorem}
\begin{proof}
Since \eqref{eq:weak} is a square linear system, existence follows from uniqueness; thus, it suffices to show uniqueness. Taking $\bm{v}=\bm{u}_h,q=p_h,q_{\Gamma}=p_{\Gamma,h}$ in \eqref{eq:weak} yields
\begin{align*}
& \|K^{-\frac{1}{2}}\bm{u}_h\|_{0,\Omega_B}^2+\|K_\Gamma^{\frac{1}{2}}\nabla_t p_{\Gamma,h}\|_{0,\Gamma}^2
+\sum_{e\in \mathcal{F}_h^\Gamma}\|\alpha_\Gamma^{-\frac{1}{2}}(\{p_h\}-p_{\Gamma,h})\|_{0,e}^2
+\sum_{e\in \mathcal{F}_h^\Gamma}\|\eta_\Gamma^{-\frac{1}{2}}[p_h]\|_{0,e}^2 \\
& \leq C \Big(\|f\|_{0,\Omega_B}\|p_h\|_{0,\Omega_B}+\|\ell_\Gamma f_\Gamma\|_{0,\Gamma}\|p_{\Gamma,h}\|_{0,\Gamma}\Big).
\end{align*}
On the other hand, an application of the discrete Poincar\'{e}-Friedrichs inequality on anisotropic meshes (cf. \cite{DuanTan11}) leads to
\begin{equation*}
\|p_h\|_{0,\Omega_B}\leq C \|p_h\|_Z.
\end{equation*}
In view of the inf-sup condition \eqref{eq:inf-sup}, the discrete adjoint property \eqref{eq:adjoint}, and \eqref{eq:discrete}, we have
\begin{equation*}
C\|p_h\|_{0,\Omega_B}\leq C\|p_h\|_Z\leq \sup_{\bm{v}_h\in \bm{V}_h}\frac{b_h(\bm{v}_h, p_h)}{\|\bm{v}_h\|_{0,\Omega_B}}=\sup_{\bm{v}_h\in \bm{V}_h} \frac{b_h^*(p_h,\bm{v}_h)}{\|\bm{v}_h\|_{0,\Omega_B}}=\sup_{\bm{v}_h\in \bm{V}_h}\frac{(K^{-1}\bm{u}_h,\bm{v}_h)}{\|\bm{v}_h\|_{0,\Omega_B}}\leq \|K^{-1}\bm{u}_h\|_{0,\Omega_B}.
\end{equation*}
Moreover, $p_{\Gamma,h}\in H^1_0(\Gamma)$ and the Poincar\'{e} inequality imply that
\begin{equation*}
\|p_{\Gamma,h}\|_{0,\Gamma}\leq C \|\nabla_t p_{\Gamma,h}\|_{0,\Gamma}.
\end{equation*}
Combining the above estimates with Young's inequality, we can infer that
\begin{align*}
& \|K^{-\frac{1}{2}}\bm{u}_h\|_{0,\Omega_B}^2+\|p_h\|_{0,\Omega_B}^2+\|K_\Gamma^{\frac{1}{2}}\nabla_t p_{\Gamma,h}\|_{0,\Gamma}^2
+\sum_{e\in \mathcal{F}_h^\Gamma}\|\eta_\Gamma^{-\frac{1}{2}}[p_h]\|_{0,e}^2+\sum_{e\in \mathcal{F}_h^\Gamma}\|\alpha_\Gamma^{-\frac{1}{2}}(\{p_h\}-p_{\Gamma,h})\|_{0,e}^2 \\
& \leq C \Big(\|f\|_{0,\Omega_B}^2+\|\ell_\Gamma f_\Gamma\|_{0,\Gamma}^2\Big),
\end{align*}
which gives the desired estimate \eqref{eq:stability}.
Here, $C$ depends on the permeability $K$ and $K_\Gamma$.
The uniqueness follows immediately by setting $f=f_\Gamma=0$.
\end{proof}
Next, we introduce the Ritz projection $\Pi_h^p p_\Gamma\in W_h$, which is defined by
\begin{equation}
\langle K_\Gamma\nabla_t \Pi_h^pp_\Gamma, \nabla_t q_{\Gamma,h}\rangle_\Gamma
= \langle K_\Gamma\nabla_t p_\Gamma, \nabla_t q_{\Gamma,h}\rangle_\Gamma
\quad \forall q_{\Gamma,h}\in W_h.\label{eq:ritz-projection}
\end{equation}
It is well-posed by the Riesz representation theorem.
Then taking $q_{\Gamma,h}=\pi_hp_\Gamma-\Pi_h^pp_\Gamma$ in \eqref{eq:ritz-projection} yields
\begin{equation*}
\|K_\Gamma^{1/2}\nabla_t (p_\Gamma-\Pi_h^pp_\Gamma)\|_{0,\Gamma}^2
=\langle K_\Gamma \nabla_t ( p_\Gamma-\Pi_h^pp_\Gamma), \nabla_t (p_\Gamma-\pi_hp_\Gamma)\rangle_\Gamma,
\end{equation*}
which implies
\begin{equation*}
\|K_\Gamma^{\frac{1}{2}}\nabla_t (p_\Gamma-\Pi_h^pp_\Gamma)\|_{0,\Gamma}
\leq \|K_\Gamma^{\frac{1}{2}}\nabla_t (p_\Gamma-\pi_hp_\Gamma)\|_{0,\Gamma}
\leq C \Big(\sum_{e\in \mathcal{F}_h^\Gamma}h_e^{2k}\|K_{\Gamma}^{\frac{1}{2}}p_\Gamma\|_{k+1,e}^2\Big)^{1/2}.
\end{equation*}
Next, we show the $L^2$ error estimate for $\|p_\Gamma-\Pi_h^pp_\Gamma\|_{0,\Gamma}$. Consider the dual problem
\begin{equation}
\begin{aligned}
-\nabla_t \cdot (K_\Gamma\nabla_t \phi )
&= p_\Gamma-\Pi_h^p p_\Gamma
&&\mbox{on} \;\Gamma,\\
\phi
&=0
&&\mbox{on}\;\partial \Gamma,
\end{aligned}\label{eq:dual-poissonfracture1}
\end{equation}
which satisfies the following elliptic regularity estimate (cf. \cite{ChuHou10})
\begin{align*}
(\sum_{e\in \mathcal{F}_h^\Gamma}\|K_\Gamma\phi\|_{2,e}^2)^{1/2}\leq C \|p_\Gamma-\Pi_h^p p_\Gamma\|_{0,\Gamma}.
\end{align*}
Multiplying \eqref{eq:dual-poissonfracture1} by $p_\Gamma-\Pi_h^p p_\Gamma$ and integrating by parts, we can obtain
\begin{align*}
\|p_\Gamma-\Pi_h^p p_\Gamma\|_{0,\Gamma}^2= \langle K_\Gamma \nabla_t \phi, \nabla_t (p_\Gamma-\Pi_h^p p_\Gamma)\rangle_\Gamma.
\end{align*}
Owing to \eqref{eq:ritz-projection}, we can bound the above equation by
\begin{align*}
\|p_\Gamma-\Pi_h^p p_\Gamma\|_{0,\Gamma}^2
&= \langle K_\Gamma\nabla_t (\phi-\pi_h\phi), \nabla_t (p_\Gamma-\Pi_h^p p_\Gamma)\rangle_\Gamma\\
&\leq \|K_\Gamma\nabla_t (\phi-\pi_h\phi)\|_{0,\Gamma}\|\nabla_t (p_\Gamma-\Pi_h^p p_\Gamma)\|_{0,\Gamma}\\
&\leq C h(\sum_{e\in \mathcal{F}_h^\Gamma}\|K_{\Gamma}\phi\|_{2,e}^2)^{\frac{1}{2}}\|\nabla_t (p_\Gamma-\Pi_h^p p_\Gamma)\|_{0,\Gamma}\\
&\leq C h\|p_\Gamma-\Pi_h^p p_\Gamma\|_{0,\Gamma}\|\nabla_t (p_\Gamma-\Pi_h^p p_\Gamma)\|_{0,\Gamma}.
\end{align*}
Thus
\begin{align}
\|p_\Gamma-\Pi_h^p p_\Gamma\|_{0,\Gamma}\leq C K_{\Gamma,\min}^{-\frac{1}{2}}h^{k+1}\Big(\sum_{e\in \mathcal{F}_h^\Gamma}\|K_{\Gamma}^{\frac{1}{2}}p_\Gamma\|_{k+1,e}^2\Big)^{1/2}.\label{eq:dual-L2f}
\end{align}
With the help of the Ritz projection, we now derive the \textit{a priori} error estimates.
Note that the following theorem states the optimal convergence for the $L^2$ error of the flux, $\|K^{-\frac{1}{2}}(\bm{u}-\bm{u}_h)\|_{0,\Omega_B}$, and superconvergence for the semi-$H^1$ error of the pressure on the fracture, $\|K^{\frac{1}{2}}_\Gamma\nabla_t(\Pi_h^pp_\Gamma-p_{\Gamma,h})\|_{0,\Gamma}$.
\begin{theorem}\label{thm:energyError}
Under Assumption (A), there exists a positive constant $C$ independent of $h$ and of the problem data, but possibly depending on $\rho_S$ such that
\begin{equation*}
\begin{aligned}
& \|K^{-\frac{1}{2}}(J_h \bm{u}-\bm{u}_h)\|_{0,\Omega_B}+\|K_\Gamma^{\frac{1}{2}}\nabla_t (\Pi_h^p p_\Gamma-p_{\Gamma,h})\|_{0,\Gamma}\\
&+\Big(\sum_{e\in \mathcal{F}_h^\Gamma}\|\eta_\Gamma^{-\frac{1}{2}}[I_hp-p_h]\|_{0,e}^2\Big)^{\frac{1}{2}}+\Big(\sum_{e\in \mathcal{F}_h^\Gamma}\|\alpha_\Gamma^{-\frac{1}{2}}(\{I_hp-p_h\}-(\Pi_h^p p_\Gamma-p_{\Gamma,h}))\|_{0,e}^2\Big)^{\frac{1}{2}}\\
& \;\leq C \Big(\|K^{-\frac{1}{2}}(\bm{u}-J_h\bm{u})\|_{0,\Omega_B}^2
+\|\alpha_\Gamma^{-\frac{1}{2}}(p_\Gamma-\Pi_h^p p_\Gamma)\|_{0,\Gamma}^2\Big)^{\frac{1}{2}}.
\end{aligned}
\end{equation*}
\end{theorem}
\begin{proof}
Our discrete formulation \eqref{eq:discrete} is consistent by its derivation.
Therefore, we have the following error equations
\begin{align}
(K^{-1}(\bm{u}-\bm{u}_h), \bm{v})_{\Omega_B}+b_h^*(p-p_h,\bm{v})
& =0,\label{eq:err1} \\
-b_h(\bm{u}-\bm{u}_h, q)
+\sum_{e\in \mathcal{F}_h^\Gamma}\langle \frac{1}{\alpha_\Gamma}(\{p-p_h\}-(p_\Gamma-p_{\Gamma,h})),\{q\}\rangle_e
+\sum_{e\in \mathcal{F}_h^\Gamma}\langle \frac{1}{\eta_\Gamma}[p-p_h],[q]\rangle_e
& =0,\label{eq:err2} \\
\langle K_\Gamma \nabla_t (p_\Gamma-p_{\Gamma,h}), \nabla_t q_{\Gamma}\rangle_\Gamma
-\sum_{e\in \mathcal{F}_h^\Gamma}\langle\frac{1}{\alpha_\Gamma}\{p-p_h\}, q_{\Gamma}\rangle_e
+\sum_{e\in \mathcal{F}_h^\Gamma}\langle\frac{1}{\alpha_\Gamma}( p_\Gamma-p_{\Gamma,h}), q_{\Gamma}\rangle_e
& =0,\label{eq:err3}
\end{align}
for all $(\bm{v}, q, q_{\Gamma})\in \bm{V}_h\times S_h\times W_h$.
Taking $\bm{v}=J_h \bm{u}-\bm{u}_h, q= I_hp-p_h, q_{\Gamma} = \Pi_h^p p_\Gamma-p_{\Gamma,h}$ in \eqref{eq:err1}--\eqref{eq:err3} and adding the resulting equations, we obtain
\begin{equation}
\begin{aligned}
& (K^{-1} (J_h\bm{u}-\bm{u}_h),J_h\bm{u}-\bm{u}_h)_{\Omega_B}
+\langle K_\Gamma \nabla_t (\Pi_h^p p_\Gamma-p_{\Gamma,h}),\nabla_t (\Pi_h^p p_\Gamma-p_{\Gamma,h})\rangle_\Gamma \\
& \;+\sum_{e\in \mathcal{F}_h^\Gamma}\langle\frac{1}{\eta_\Gamma}[p-p_h],[I_hp-p_h]\rangle_e
+\sum_{e\in \mathcal{F}_h^\Gamma}\langle\frac{1}{\alpha_\Gamma}(\{p-p_h\}-(p_\Gamma-p_{\Gamma,h})),
\{I_hp-p_h\}-(\Pi_h^p p_\Gamma-p_{\Gamma,h})\rangle_e \\
& \qquad=(K^{-1}(J_h\bm{u}-\bm{u}),J_h\bm{u}-\bm{u}_h)_{\Omega_B}
+\langle K_\Gamma \nabla_t (\Pi_h^p p_\Gamma-p_{\Gamma}),\nabla_t (\Pi_h^p p_\Gamma-p_{\Gamma,h})\rangle_\Gamma.
\end{aligned}
\label{eq:error-divide}
\end{equation}
It then follows from the properties of $I_h$ and $J_h$, the orthogonality \eqref{eq:ritz-projection} of the Ritz projection (which makes the term $\langle K_\Gamma\nabla_t (\Pi_h^p p_\Gamma- p_{\Gamma}),\nabla_t (\Pi_h^p p_\Gamma-p_{\Gamma,h})\rangle_\Gamma$ vanish) and the Cauchy--Schwarz inequality that
\begin{align*}
& \|K^{-\frac{1}{2}}(J_h \bm{u}-\bm{u}_h)\|_{0,\Omega_B}^2
+\|K_\Gamma^{\frac{1}{2}}\nabla_t (\Pi_h^p p_\Gamma-p_{\Gamma,h})\|_{0,\Gamma}^2
+\sum_{e\in \mathcal{F}_h^\Gamma}\|\eta_\Gamma^{-\frac{1}{2}}[I_hp-p_h]\|_{0,e}^2 \\
& \;+\sum_{e\in \mathcal{F}_h^\Gamma}\|\alpha_\Gamma^{-\frac{1}{2}}(\{I_hp-p_h\}-(\Pi_h^p p_\Gamma-p_{\Gamma,h}))\|_{0,e}^2 \\
& =(K^{-1}(J_h\bm{u}-\bm{u}),J_h\bm{u}-\bm{u}_h)_{\Omega_B}
+\langle K_\Gamma\nabla_t (\Pi_h^p p_\Gamma- p_{\Gamma}),\nabla_t (\Pi_h^p p_\Gamma-p_{\Gamma,h})\rangle_\Gamma \\
& \;+\sum_{e\in \mathcal{F}_h^\Gamma}\langle \frac{1}{\alpha_\Gamma}(p_\Gamma-\Pi_h^p p_\Gamma),
\{I_hp-p_h\}-(\Pi_h^p p_\Gamma-p_{\Gamma,h})\rangle_e \\
& \leq C \Big(\|K^{-\frac{1}{2}}(J_h\bm{u}-\bm{u})\|_{0,\Omega_B} \|K^{-\frac{1}{2}}(J_h\bm{u}-\bm{u}_h)\|_{0,\Omega_B} \\
& \;+\sum_{e\in \mathcal{F}_h^\Gamma}\|\alpha_\Gamma^{-\frac{1}{2}}(\{I_hp-p_h\}-(\Pi_h^p p_\Gamma-p_{\Gamma,h}))\|_{0,e}
\|\alpha_\Gamma^{-\frac{1}{2}}(p_\Gamma-\Pi_h^p p_\Gamma)\|_{0,e}\Big).
\end{align*}
Combined with Young's inequality (absorbing the two right-hand factors that also appear on the left-hand side), this leads to
\begin{align*}
& \|K^{-\frac{1}{2}}(J_h \bm{u}-\bm{u}_h)\|_{0,\Omega_B}^2
+\|K_\Gamma^{\frac{1}{2}}\nabla_t (\Pi_h^p p_\Gamma-p_{\Gamma,h})\|_{0,\Gamma}^2
+\sum_{e\in \mathcal{F}_h^\Gamma}\|\eta_\Gamma^{-\frac{1}{2}}[I_hp-p_h]\|_{0,e}^2 \\
& \;+\sum_{e\in \mathcal{F}_h^\Gamma}\|\alpha_\Gamma^{-\frac{1}{2}}(\{I_hp-p_h\}-(\Pi_h^p p_\Gamma-p_{\Gamma,h}))\|_{0,e}^2 \\
& \;\leq C \Big(\|K^{-\frac{1}{2}}(\bm{u}-J_h\bm{u})\|_{0,\Omega_B}^2
+\|\alpha_\Gamma^{-\frac{1}{2}}(p_\Gamma-\Pi_h^p p_\Gamma)\|_{0,\Gamma}^2\Big).
\end{align*}
Taking square roots completes the proof.
\end{proof}
\begin{corollary}\label{cor:L2err}
Assume that $(\bm{u}\mid_\tau, p\mid_\tau,p_{\Gamma}\mid_e)\in H^{k+1}(\tau)^2\times H^{k+1}(\tau)\times H^{k+1}(e)$ for $\tau\in \mathcal{T}_h$ and $e\in \mathcal{F}_h^\Gamma$. Then, under the assumptions of Theorem~\ref{thm:energyError}, there exists a positive constant $C$, independent of $h$ and of the problem data but possibly depending on $\rho_S$, such that
\begin{align*}
\|K^{-\frac{1}{2}}(\bm{u}-\bm{u}_h)\|_{0,\Omega_B}&\leq C \Big(\sum_{\tau\in \mathcal{T}_h} K_\tau^{-1}h_\tau^{2(k+1)}\|\bm{u}\|_{k+1,\tau}^2 +\alpha_1^{-1}K_{\Gamma,\min}^{-1}h^{2(k+1)}\Big(\sum_{e\in \mathcal{F}_h^\Gamma}\|K_{\Gamma}^{\frac{1}{2}}p_\Gamma\|_{k+1,e}^2\Big)\Big)^{\frac{1}{2}},\\
\|p_\Gamma-p_{\Gamma,h}\|_{0,\Gamma}&\leq C\Big(K_{\Gamma,\min}^{-\frac{1}{2}}\Big(\sum_{\tau\in \mathcal{T}_h} K_\tau^{-1}h_\tau^{2(k+1)}\|\bm{u}\|_{k+1,\tau}^2+
\alpha_1^{-1}\sum_{e\in\mathcal{F}_h^\Gamma}h_e^{2(k+1)}\|p_\Gamma\|_{k+1,e}^2\Big)^{\frac{1}{2}}\\
&\;+K_{\Gamma,\min}^{-\frac{1}{2}}h^{k+1}\Big(\sum_{e\in \mathcal{F}_h^\Gamma}\|K_{\Gamma}^{\frac{1}{2}}p_\Gamma\|_{k+1,e}^2\Big)^{\frac{1}{2}}\Big)
\end{align*}
and
\begin{align*}
\|p-p_h\|_{0,\Omega_B}&\leq C \Big(K_1^{-1}\Big(\sum_{\tau\in \mathcal{T}_h} K_\tau^{-1}h_\tau^{2(k+1)}\|\bm{u}\|_{k+1,\tau}^2 +\alpha_1^{-1}\sum_{e\in\mathcal{F}_h^\Gamma}h_e^{2(k+1)}\|p_\Gamma\|_{k+1,e}^2\Big)+\sum_{\tau\in \mathcal{T}_h}h_\tau^{2(k+1)}\|p\|_{k+1,\tau}^2\Big)^{\frac{1}{2}},
\end{align*}
where $\alpha_1:=\min\{\alpha_\Gamma\}$, $K_{\Gamma,\min}:=\min\{K_\Gamma\}$ and $K_\tau$ is the smallest eigenvalue of $K\mid_\tau$.
In addition, we also have the following superconvergent results
\begin{align*}
\|I_hp-p_h\|_Z\leq C K_1^{-\frac{1}{2}} \Big(\sum_{\tau\in \mathcal{T}_h} K_\tau^{-1}h_\tau^{2(k+1)}\|\bm{u}\|_{k+1,\tau}^2 +\alpha_1^{-1}K_{\Gamma,\min}^{-1}h^{2(k+1)}\Big(\sum_{e\in \mathcal{F}_h^\Gamma}\|K_{\Gamma}^{\frac{1}{2}}p_\Gamma\|_{k+1,e}^2\Big)\Big)^{\frac{1}{2}}.
\end{align*}
\end{corollary}
\begin{proof}
Since $\Pi_h^p p_\Gamma-p_{\Gamma,h}$ belongs to $H^1_0(\Gamma)$, we have from the Poincar\'{e} inequality and Theorem~\ref{thm:energyError} that
\begin{align*}
\|\Pi_h^pp_\Gamma-p_{\Gamma,h}\|_{0,\Gamma}\leq C\|\nabla_t(\Pi_h^pp_\Gamma-p_{\Gamma,h})\|_{0,\Gamma}\leq CK_{\Gamma,\min}^{-\frac{1}{2}}\|K_\Gamma^{\frac{1}{2}}\nabla_t(\Pi_h^pp_\Gamma-p_{\Gamma,h})\|_{0,\Gamma},
\end{align*}
which, together with the triangle inequality, \eqref{eq:dual-L2f} and Theorem~\ref{thm:energyError}, implies
\begin{align*}
\|p_\Gamma-p_{\Gamma,h}\|_{0,\Gamma}&\leq C\Big(K_{\Gamma,\min}^{-\frac{1}{2}}\Big(\sum_{\tau\in \mathcal{T}_h} K_\tau^{-1}h_\tau^{2(k+1)}\|\bm{u}\|_{k+1,\tau}^2+
\alpha_1^{-1}\sum_{e\in\mathcal{F}_h^\Gamma}h_e^{2(k+1)}\|p_\Gamma\|_{k+1,e}^2\Big)^{\frac{1}{2}}\\
&\;+K_{\Gamma,\min}^{-\frac{1}{2}}h^{k+1}\Big(\sum_{e\in \mathcal{F}_h^\Gamma}\|K_{\Gamma}^{\frac{1}{2}}p_\Gamma\|_{k+1,e}^2\Big)^{1/2}\Big).
\end{align*}
In addition, we have from the approximation property \eqref{eq:interpolationIhJh-local} and Theorem~\ref{thm:energyError} that
\begin{align*}
\|K^{-\frac{1}{2}}(\bm{u}-\bm{u}_h)\|_{0,\Omega_B}\leq C \Big(\sum_{\tau\in \mathcal{T}_h} K_\tau^{-1}h_\tau^{2(k+1)}\|\bm{u}\|_{k+1,\tau}^2 +\alpha_1^{-1}K_{\Gamma,\min}^{-1}h^{2(k+1)}\Big(\sum_{e\in \mathcal{F}_h^\Gamma}\|K_{\Gamma}^{\frac{1}{2}}p_\Gamma\|_{k+1,e}^2\Big)\Big)^{\frac{1}{2}}.
\end{align*}
Finally, it follows from the inf-sup condition \eqref{eq:inf-sup} and \eqref{eq:err1} that
\begin{align*}
\|I_hp-p_h\|_Z&\leq C\|K^{-1}(\bm{u}-\bm{u}_h)\|_{0,\Omega_B}\\
&\leq C K_1^{-\frac{1}{2}} \Big(\sum_{\tau\in \mathcal{T}_h} K_\tau^{-1}h_\tau^{2(k+1)}\|\bm{u}\|_{k+1,\tau}^2 +\alpha_1^{-1}K_{\Gamma,\min}^{-1}h^{2(k+1)}\Big(\sum_{e\in \mathcal{F}_h^\Gamma}\|K_{\Gamma}^{\frac{1}{2}}p_\Gamma\|_{k+1,e}^2\Big)\Big)^{\frac{1}{2}},
\end{align*}
which, combined with the discrete Poincar\'{e} inequality, yields
\begin{align*}
\|I_hp-p_h\|_{0,\Omega_B}&\leq C \|I_hp-p_h\|_Z\\
&\leq C K_1^{-\frac{1}{2}} \Big(
\sum_{\tau\in \mathcal{T}_h} K_\tau^{-1}h_\tau^{2(k+1)}\|\bm{u}\|_{k+1,\tau}^2
+\alpha_1^{-1}K_{\Gamma,\min}^{-1}h^{2(k+1)}\Big(\sum_{e\in \mathcal{F}_h^\Gamma}\|K_{\Gamma}^{\frac{1}{2}}p_\Gamma\|_{k+1,e}^2\Big)\Big)^{\frac{1}{2}}.
\end{align*}
Thus, by the triangle inequality and the approximation property of $I_h$,
\begin{align*}
\|p-p_h\|_{0,\Omega_B}&\leq C \Big(K_1^{-1} \Big(\sum_{\tau\in \mathcal{T}_h} K_\tau^{-1}h_\tau^{2(k+1)}\|\bm{u}\|_{k+1,\tau}^2 +\alpha_1^{-1}K_{\Gamma,\min}^{-1}h^{2(k+1)}\Big(\sum_{e\in \mathcal{F}_h^\Gamma}\|K_{\Gamma}^{\frac{1}{2}}p_\Gamma\|_{k+1,e}^2\Big)\Big)\\
&\;+
\sum_{\tau\in\mathcal{T}_h}h_\tau^{2(k+1)}\|p\|_{k+1,\tau}^2\Big)^{\frac{1}{2}}.
\end{align*}
\end{proof}
\begin{remark}
\rm{
The introduction of the Ritz projection is crucial for deriving optimal convergence estimates. Our methodology is based on the observation that the second term on the right-hand side of \eqref{eq:error-divide} is the troublesome one, and that it vanishes once the Ritz projection is exploited. Moreover, thanks to the Ritz projection we achieve optimal convergence estimates for the $L^2$ errors of the bulk pressure and the fracture pressure without resorting to the duality argument usually required by standard methods.
In addition, we obtain optimal convergence estimates for all the variables, which are fully robust with respect to the heterogeneity of $K$ and $K_\Gamma$, and to the anisotropy of the bulk permeability. Our analysis also weakens the usual assumptions on the polygonal mesh. All these desirable features make our method a good candidate for practical applications.
}
\end{remark}
\section{Numerical experiments}\label{sec:numerical}
In this section, we present several numerical experiments to confirm the validity of the \textit{a priori} error estimates derived for our method.
To demonstrate the robustness of our method with respect to general meshes, we employ regular polygonal grids, polygonal grids with small edges and anisotropic grids.
In addition, to verify that our method can handle more complicated problems, we test its performance in the case where the background grid is not aligned with the fracture.
\subsection{Convergence test}\label{subsec:example1}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{ex1_sol.eps}
\includegraphics[width=0.45\textwidth]{ex3_sol.eps}
\caption{Graphs of solutions $p$ and $p_\Gamma$ for Example~\ref{subsec:example1} (left) and Example~\ref{subsec:example3} (right).}
\label{fig:solshape}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.32\textwidth]{mesh_tri.eps}
\includegraphics[width=0.32\textwidth]{mesh_rect.eps}
\includegraphics[width=0.32\textwidth]{mesh_poly.eps}
\caption{Uniform triangular (left), rectangular (center), polygonal (right) meshes with comparable mesh sizes for Example~\ref{subsec:example1}. Here, dashed lines represent dual edges and red lines are the fracture $\Gamma$.}
\label{fig:mesh}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.32\textwidth]{convhist_iso_p.eps}
\includegraphics[width=0.32\textwidth]{convhist_iso_u.eps}
\includegraphics[width=0.32\textwidth]{convhist_iso_pG.eps}
\includegraphics[width=0.32\textwidth]{convhist_anisK_p.eps}
\includegraphics[width=0.32\textwidth]{convhist_anisK_u.eps}
\includegraphics[width=0.32\textwidth]{convhist_anisK_pG.eps}
\caption{Convergence history for the isotropic case (top row) and the anisotropic case (bottom row) of Example~\ref{subsec:example1} with $k=1,2,3$.
Right triangles indicate theoretical convergence rates.
Solid, dotted, and dashed lines correspond to errors on triangular, rectangular, and polygonal meshes, respectively.}
\label{fig:convhist}
\end{figure}
In this example, we verify the theoretical convergence order given in Corollary~\ref{cor:L2err}.
Let $\Omega=(0,1)\times(0,1)$ be the unit square in $\mathbb{R}^2$ with a fracture $\Gamma = \{x=0.5\}\times (0,1)$ at the middle.
Then, $\Omega_{B,1}=(0,0.5)\times(0,1)$ and $\Omega_{B,2}=(0.5,1)\times(0,1)$.
The solutions $p$ and $p_\Gamma$ are defined by
\begin{equation*}
p=\begin{cases}
\sin(4x)\cos(\pi y) & \mbox{when }(x,y)\in\Omega_{B,1}, \\
\cos(4x)\cos(\pi y) & \mbox{when }(x,y)\in\Omega_{B,2},
\end{cases}\quad p_\Gamma = \frac{3}{4}\cos(\pi y)(\cos(2)+\sin(2)),
\end{equation*}
and the profiles of $p$ and $p_\Gamma$ are depicted in Figure~\ref{fig:solshape}.
To demonstrate that our method handles anisotropic permeability, we consider two values of $\kappa_\Gamma^n$
\begin{equation*}
\kappa_\Gamma^n=\begin{cases}0.01&\text{for isotropic case},\\1&\text{for anisotropic case}.\end{cases}
\end{equation*}
Other physical parameters are chosen as $\xi=3/4$, $\ell_\Gamma=0.01$, $K_\Gamma=1$ and
\begin{equation*}
K = \left(
\begin{array}{cc}
\kappa_\Gamma^n/(2\ell_\Gamma) & 0\\
0 & 1 \\
\end{array}
\right).
\end{equation*}
The boundary conditions and source terms can be derived from the definition.
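For instance, assuming the bulk model takes the usual mixed form $\bm{u}=-K\nabla p$ and $\nabla\cdot\bm{u}=f$ in $\Omega_B$, a direct computation on $\Omega_{B,1}$ gives
\begin{equation*}
f=-\nabla\cdot(K\nabla p)=\Big(\frac{8\kappa_\Gamma^n}{\ell_\Gamma}+\pi^2\Big)\sin(4x)\cos(\pi y),
\end{equation*}
and the same expression with $\sin(4x)$ replaced by $\cos(4x)$ holds on $\Omega_{B,2}$.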
Numerical tests are performed with three mesh types: uniform triangular, uniform rectangular, and quasi-uniform polygonal meshes; see Figure~\ref{fig:mesh}.
The quasi-uniform polygonal meshes are centroidal Voronoi tessellations (CVT) generated by Lloyd's algorithm with target mesh size $h$ (cf. \cite{Talischi12}).
We pre-allocate fixed generators for Voronoi cells near the fracture so that the resulting mesh aligns with the fracture $\Gamma$.
In Figure~\ref{fig:convhist}, we report the convergence history of $\|p-p_h\|_{0,\Omega_B}, \|\bm{u}-\bm{u}_h\|_{0,\Omega_B}$ and $\|p_\Gamma-p_{\Gamma,h}\|_{0,\Gamma}$ against the number of degrees of freedom for polynomial orders $k=1,2,3$, for both the isotropic and the anisotropic case. We can observe optimal convergence rates $\mathcal{O}(h^{k+1})$ regardless of the choice of $K$, which confirms the theoretical findings.
In addition, the polygonal and rectangular meshes outperform the triangular meshes in terms of accuracy.
Together with the geometric flexibility of polygonal meshes, this indicates that they are well suited to the simulation of the physical problems under consideration.
\subsection{Robustness to small edges}\label{subsec:example2}
\begin{figure}[t]
\centering
\includegraphics[width=0.32\textwidth]{mesh_smallEdge_org.eps}
\includegraphics[width=0.32\textwidth]{mesh_smallEdge_ref.eps}
\includegraphics[width=0.32\textwidth]{mesh_smallEdge.eps}
\caption{Schematic of perturbation. $2\times2$ squares (left), two rectangles and two pentagons after perturbation with $d=0.1\times h_e$ (center), and a resulting mesh from a uniform rectangular mesh with $h_e=2^{-3}$ and $d=0.1\times h_e$.
The dashed circle is the ball, described in Assumption (A), of a pentagon.}
\label{fig:mesh_smallEdge}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.32\textwidth]{SvsR_p.eps}
\includegraphics[width=0.32\textwidth]{SvsR_u.eps}
\includegraphics[width=0.32\textwidth]{SvsR_pG.eps}
\caption{Convergence history with uniform rectangular meshes (solid lines) and perturbed meshes with $d=0.001\times h_e$ (dashed lines).}
\label{fig:convhist_smallEdge}
\end{figure}
In Corollary~\ref{cor:L2err}, \textit{a priori} error estimates for $\|\bm{u}-\bm{u}_h\|_{0,\Omega_B}$, $\|p-p_h\|_{0,\Omega_B}$ and $\|p_\Gamma-p_{\Gamma,h}\|_{0,\Gamma}$ are derived without Assumption (B).
Therefore, the accuracy of all three variables in $L^2$ error should be independent of the existence of small edges.
To demonstrate this, we design a mesh by perturbing a uniform rectangular mesh.
For each $2\times2$ block of squares, we replace the common vertex of the four squares by a small edge with length $\sqrt{2}d$, so that we obtain two quadrilaterals and two pentagons, see Figure~\ref{fig:mesh_smallEdge}.
By taking $d$ small enough, the difference between the mesh sizes of the uniform rectangular mesh and its perturbation is negligible.
While the perturbation preserves $\rho_S$ in Assumption (A), $\rho_E$ in Assumption (B) is changed from $1/\sqrt{2}$ to $d/h_e$ for the perturbed mesh.
For numerical tests, we consider the same solution used in Example~\ref{subsec:example1}.
The convergence history against the number of degrees of freedom is reported in Figure~\ref{fig:convhist_smallEdge} for polynomial orders $k=1,2,3$.
Here, we use the uniform rectangular mesh and perturbed mesh with $d=0.001\times h_e$.
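Note that in this regime the small edges have length $\sqrt{2}\times10^{-3}h_e$, roughly three orders of magnitude below the mesh size, so that $\rho_E$ in Assumption (B) degenerates while the mesh size is essentially unchanged.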
We can observe that the difference in accuracy between the numerical solutions on uniform rectangular meshes and on perturbed meshes is negligible for all three variables.
Moreover, since the perturbation introduces additional degrees of freedom through the small edges, the remaining difference can be attributed to the increased number of degrees of freedom.
Therefore, we conclude that our method is free from Assumption (B).
\subsection{Anisotropic meshes}\label{subsec:example3}
\begin{figure}[t]
\centering
\includegraphics[width=0.32\textwidth]{mesh_anis_rect.eps}
\includegraphics[width=0.32\textwidth]{mesh_anis_poly.eps}
\caption{Mapped rectangular (left) and polygonal (right) meshes from (quasi-)uniform meshes used in Example~\ref{subsec:example3}. Polygons near $y=1$ are highly anisotropic, so that Assumption (A) is not satisfied.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.32\textwidth]{AvsU_p.eps}
\includegraphics[width=0.32\textwidth]{AvsU_u.eps}
\includegraphics[width=0.32\textwidth]{AvsU_pG.eps}
\caption{Convergence history with uniform rectangular meshes (solid lines) and anisotropic meshes (dashed lines) for $k=1,2,3$. Reg. and Map. in the legends are abbreviations of regular and mapped meshes, respectively.}
\label{fig:convhist_anisotropic}
\end{figure}
In this example, we investigate the reliability of the proposed method on anisotropic meshes.
Consider the solution $p$ and $p_\Gamma$ defined by
\begin{equation*}
p=\begin{cases}
\exp(10y)\sin(4x)\sin(\pi y) & \mbox{when }(x,y)\in\Omega_{B,1}, \\
\exp(10y)\cos(4x)\sin(\pi y) & \mbox{when }(x,y)\in\Omega_{B,2},
\end{cases}\quad p_\Gamma = \frac{3}{4}\exp(10y)\sin(\pi y)(\cos(2)+\sin(2)).
\end{equation*}
The domain $\Omega$, fracture $\Gamma$, and other physical constants are chosen as for the isotropic case of Example~\ref{subsec:example1}.
Notice that the solutions $p$ and $p_\Gamma$ exhibit a boundary layer along $y=1$, see Figure~\ref{fig:solshape}.
Consider the mapping $A:(0,1)\times(0,1)\rightarrow(0,1)\times(0,1)$ defined by $(x,y)\mapsto (x,\sin(\pi y/2))$.
This maps a regular mesh on $(0,1)\times(0,1)$ to a highly anisotropic mesh near $y=1$.
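Concretely, the vertical stretching factor of $A$ is $\frac{\partial}{\partial y}\sin(\pi y/2)=\frac{\pi}{2}\cos(\pi y/2)$, which vanishes as $y\to1$; for instance, the top row of a uniform mesh of size $h$ is mapped to elements of width $h$ but height $1-\cos(\pi h/2)=\mathcal{O}(h^2)$, so the aspect ratio blows up as $h\to0$.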
The resulting mapped meshes violate Assumption (A) since, near $y=1$, the ratio of the radius of the inscribed ball to the element diameter tends to $0$ as $h$ tends to $0$.
Again, we report the convergence history against the number of degrees of freedom on mapped rectangular and polygonal meshes for $k=1,2,3$, see Figure~\ref{fig:convhist_anisotropic}.
For reference, we also include numerical results with uniform rectangular meshes.
We can observe optimal convergence for all three variables even when highly anisotropic meshes are employed.
Also, since the mapped meshes distribute elements densely near the boundary layer, they give more accurate results than the uniform meshes; conversely, the lower accuracy of the uniform meshes can be attributed to their element distribution, which does not resolve the layer.
\subsection{Unfitted general meshes}\label{subsec:example4}
\begin{figure}[t]
\centering
\includegraphics[width=0.32\textwidth]{background.eps}
\includegraphics[width=0.32\textwidth]{unfitted.eps}
\includegraphics[width=0.32\textwidth]{unfitted_mag.eps}
\caption{Underlying polygonal mesh ($\mathcal{T}_u$, left), modified mesh ($\tilde{\mathcal{T}}_u$) (center) and its magnified view with dual edges (right). The modified mesh contains both sliver elements and small edges.}
\label{fig:mesh_unfitted}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.32\textwidth]{convhist_fit_p.eps}
\includegraphics[width=0.32\textwidth]{convhist_fit_u.eps}
\includegraphics[width=0.32\textwidth]{convhist_fit_pG.eps}
\caption{Convergence history with fitted (solid lines) and unfitted (dashed lines) meshes.}
\label{fig:convhist_unfitted}
\end{figure}
In Example~\ref{subsec:example1}, we generate fitted polygonal meshes with pre-fixed generators near $\Gamma$.
However, this approach is often impractical because of the complex, or even \textit{a priori} unknown, geometry of the fracture.
Therefore, numerical methods utilizing unfitted meshes are preferred.
Examples~\ref{subsec:example2} and \ref{subsec:example3} suggest that the proposed method is reliable and accurate even when polygonal meshes with small edges or sliver elements are employed.
This allows us to use unfitted meshes with our method without additional treatment such as mesh aggregation \cite{badia2018} or removal of small edges.
Let $\mathcal{T}_u$ be a polygonal mesh on $\Omega$ which is generated independently of $\Gamma$.
For each polygon $T\in \mathcal{T}_u$ with $T\cap \Gamma\neq \emptyset$, we split $T$ into $\{T_i\}$ so that $T_i\subset \Omega_{B,i}$ for each $i$.
This induces a new mesh $\tilde{\mathcal{T}}_u$ in which each $T\in\tilde{\mathcal{T}}_u$ is contained in exactly one $\Omega_{B,i}$.
Figure~\ref{fig:mesh_unfitted} shows an example of background mesh $\mathcal{T}_u$ and updated mesh $\tilde{\mathcal{T}}_u$ with its induced simplicial sub-meshes $\mathcal{T}_h$.
The convergence history with unfitted meshes is depicted in Figure~\ref{fig:convhist_unfitted}.
We also report the convergence history with quasi-uniform polygonal meshes generated with pre-fixed generators as in Figure~\ref{fig:mesh}.
As expected from the observations made in Examples~\ref{subsec:example2} and \ref{subsec:example3}, the proposed method gives optimally convergent numerical approximations for all three variables.
Moreover, the accuracy of the numerical approximations with unfitted meshes is similar to that with fitted meshes for the bulk variables $p_h$ and $\bm{u}_h$.
While the accuracy of $p_{\Gamma,h}$ with unfitted meshes is slightly lower than with fitted meshes, the difference is moderate considering the flexibility of the method.
\subsection{Quarter five-spot problem}\label{subsec:qfs}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{qfs_prob.eps}
\includegraphics[width=0.45\textwidth]{p_slice.eps}
\caption{Domain configuration (left) and pressure profile along $x=y$ for Example~\ref{subsec:qfs}.\label{fig:qfs_prob}}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{p_permeable.eps}
\includegraphics[width=0.45\textwidth]{p_impermeable.eps}
\caption{Pressure profile for Example~\ref{subsec:qfs} with permeable (left) and impermeable (right) fracture.\label{fig:qfs_p}}
\end{figure}
We conclude this section with a quarter five-spot problem.
A quarter five-spot problem emerges from petroleum engineering \cite{chen1998,petipjeans1999} and is frequently used to validate numerical algorithms \cite{Chave18, Antonietti19}.
Consider a unit square domain $\Omega$ with a diagonal fracture $\Gamma=\{(x,y):x+y=1\}$ with thickness $\ell_\Gamma=0.01$, see Figure~\ref{fig:qfs_prob}.
We set the boundary condition
\begin{equation*}
\bm{u}\cdot\bm{n}=0\text{ on }\partial\Omega_1\backslash\Gamma,\quad p=0\text{ on }\partial\Omega_2\backslash\Gamma.
\end{equation*}
We model the injection and production by the source term
\begin{equation*}
f = 10.1\Big(\tanh\left(200(0.2 - (x^2+y^2)^{\frac{1}{2}})\right)-\tanh\left(200(0.2-((x-1)^2+(y-1)^2)^{\frac{1}{2}})\right)\Big)
\end{equation*}
so that we have an injection well at $(0,0)$ and a production well at $(1,1)$.
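Note that, since $\tanh(200\,t)\approx\pm1$ away from $t=0$, we have $f\approx 20.2$ within distance $0.2$ of $(0,0)$, $f\approx-20.2$ within distance $0.2$ of $(1,1)$, and $f\approx0$ elsewhere, so that the injection and production rates balance.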
Permeability for the bulk domain is chosen as $K=\bm{I}_{2\times 2}$.
As in \cite{Antonietti19}, we perform two numerical experiments: (1) Permeable fracture, $\kappa_\Gamma^n=1$ and $\kappa_\Gamma^*=100$; (2) Impermeable fracture, $\kappa_\Gamma^n=10^{-2}$ and $\kappa_\Gamma^*=1$.
The background mesh is chosen as a uniform rectangular mesh with $h\approx2^{-6}$ and we use cubic polynomials.
The bulk pressures are depicted in Figure~\ref{fig:qfs_p} and their profiles along the line $x=y$ are displayed in Figure~\ref{fig:qfs_prob}.
For both the permeable and the impermeable fracture, the pressure attains its largest value at the injection well $(0,0)$ and its smallest value at the production well $(1,1)$.
The difference between the two pressure profiles is most pronounced near the fracture.
Compared to the permeable case, the impermeable fracture produces a significant jump of the pressure, see Figures~\ref{fig:qfs_prob} and \ref{fig:qfs_p}.
The profile produced with our method is qualitatively similar to that from \cite{Antonietti19}.
\section{Conclusion}\label{sec:conclusion}
In this paper we propose and analyze a staggered DG method combined with a standard conforming finite element method for the bulk and fracture problems, allowing general polygonal elements, possibly with arbitrarily small edges.
We impose the interface conditions by replacing the average and jump terms of the flux with the corresponding pressure terms, which guarantees the stability of the method.
The novel contributions of this paper are twofold.
First, a convergence analysis allowing arbitrarily small edges is delivered, which sheds new light on the analysis of staggered DG methods for other physical problems.
Second, an optimal $L^2$ error estimate for the flux, robust with respect to the heterogeneity and anisotropy of the permeability coefficients, is proved with the help of the Ritz projection. The numerical experiments presented indicate that our method is accurate and efficient, and can handle anisotropic meshes without losing convergence order.
The proposed method has the flexibility of treating general meshes and can be naturally adapted to solve problems on unfitted background grids.
\section*{Acknowledgements}
The research of Eric Chung is partially supported by the Hong Kong RGC General Research Fund (Project numbers 14304217 and 14302018), CUHK Faculty of Science Direct Grant 2018-19 and NSFC/RGC Joint Research Scheme (Project number HKUST620/15). The research of Eun-Jae Park is supported by
NRF-2015R1A5A1009350 and NRF-2019R1A2C2090021.
The first attempts to prove the ribbon--slice conjecture led to a 3--dimensional characterization of slice knots. Kawauchi, Shibuya and Suzuki \cite{KSS} proved that slice knots are exactly those that bound a normal singular disk in the 3--sphere with no clasp and no triple point of a certain type, called here borromean. We generalize this result to a 3--dimensional characterization of the slice genus, proving that the slice genus of a knot is the minimal genus of a normal singular surface in the 3--sphere, with no clasp and no borromean triple point, bounded by the knot.
It was proved by Kaplan \cite{Kaplan} that any knot bounds a normal singular disk with no clasp. This allowed Murakami and Sugishita \cite{MS} to define the $T$--genus of a knot as the minimal number of borromean triple points on such a disk. They proved that the $T$--genus is a concordance invariant and an upper bound for the slice genus. Further, they showed that the mod--$2$ reduction of the $T$--genus coincides with the Arf invariant. From these properties, they deduced the value of the $T$--genus for several knots for which the difference between the $T$--genus and the slice genus is 0 or 1. This raises the question of whether this difference can be greater than one; we prove that it can be arbitrarily large. For this, we show that the $T$--genus is an upper bound for the 4--dimensional positive clasp number and we use a recent result of Daemi and Scaduto \cite{DaSca} that states that the difference between the 4--dimensional positive clasp number and the slice genus can be arbitrarily large.
We introduce the ribbon $T$--genus of a knot, defined as the minimal number of borromean triple points on an immersed disk bounded by the knot with no clasp and no non-borromean triple point. In \cite{KMS}, Kawauchi, Murakami and Sugishita proved that the $T$--genus of a knot equals its $\Delta$--distance to the set of slice knots. We prove the ribbon counterpart of it, namely that the ribbon $T$--genus of a knot equals its $\Delta$--distance to the set of ribbon knots. As a consequence, providing a knot with distinct $T$--genus and ribbon $T$--genus would imply the existence of a non-ribbon slice knot.
We generalize the definition and properties of the $T$--genus to algebraically split links. In addition, we express Milnor's triple linking number of an algebraically split 3--component link as the algebraic intersection of three disks bounded by the three components, that intersect only along ribbons and borromean triple points. We also give an elementary proof, in the setting of non-split links, that the difference between the $T$--genus and the slice genus can be arbitrarily large, computing the $T$--genus on a family of cabled borromean links. Finally, we discuss the case of colored links.
\vspace{1ex}
\noindent\textbf{Conventions.}
We work in the smooth category. All manifolds are oriented. Boundaries of oriented manifolds are oriented using the ``outward normal first'' convention.
\vspace{1ex}
\noindent\textbf{Acknowledgments.}
I wish to thank Emmanuel Wagner for motivating conversations and Jae Choon Cha for an interesting suggestion.
\section{Definitions and main statements}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [scale=0.6]
\begin{scope} [xshift=-5cm]
\draw[fill=gray!40,color=gray!40] (0,0) circle (2 and 1);
\draw[fill=gray!70,color=gray!70] (-0.5,0) -- (-0.5,2) -- (0.5,2) -- (0.5,0);
\draw[fill=gray!70,color=gray!70] (0.5,-2) -- (0.5,-0.99) -- (0,-1) -- (-0.5,-0.99) -- (-0.5,-2);
\foreach \x in {-0.5,0.5} {
\draw (\x,0) -- (\x,2);
\draw (\x,-0.98) -- (\x,-2);
\draw[dashed] (\x,-0.95) -- (\x,-0.05);}
\draw[color=gray!40] (-2,0) arc (-180:0:2 and 1);
\end{scope}
\begin{scope} [scale=0.8]
\tour
\draw (-0.3,-1.3) -- (-0.5,-0.2);
\draw (-0.4,-0.7) node[right] {$i$};
\draw[rotate=240] (-1.43,-1.4) -- (1.43,-1.4);
\draw (-1,1) node[right] {$b$};
\end{scope}
\draw (-2.5,-2.8) node {ribbon};
\begin{scope}[xshift=6.5cm,yscale=0.5,xscale=0.7]
\draw[fill=gray!40] (-2,-3) arc (-90:90:3);
\draw[fill=white] (2,3) arc (90:270:3);
\draw[fill=gray!60] (2,3) arc (90:270:3);
\draw[fill=white] (-2,-3) arc (-90:0:3) -- (-2,0);
\draw[gray!40,fill=gray!40] (-2,-3) arc (-90:0:3) -- (-2,0);
\draw (-2,-3) arc (-90:0:3);
\draw[gray!40,line width=2pt] (-2,0) -- (-1.01,0);
\end{scope}
\begin{scope} [xshift=11cm,scale=0.8]
\tour
\draw (-1.5,-1.3) -- (-0.5,-0.5);
\draw (-1.5,1.3) -- (-0.5,0.5);
\end{scope}
\draw (8.7,-2.8) node {clasp};
\end{tikzpicture}
\end{center}
\caption{Lines of double points and their preimages}
\label{figribbonclasp}
\end{figure}
If $\Sigma$ is a compact surface immersed in $S^3$, the self-intersections of $\Sigma$ are lines of double points, which possibly intersect along triple points. The lines of double points are of two kinds: ribbons and clasps (see Figure~\ref{figribbonclasp}). A {\em ribbon} is a line of double points whose preimages by the immersion are a {\em $b$--line} properly immersed in $\Sigma$ and an {\em $i$--line} immersed in the interior of $\Sigma$. A {\em clasp} is a line of double points that is not a ribbon.
When three ribbons meet at a triple point, there are again two possibilities. We say that the triple point is {\em borromean} if its three preimages are intersections of a $b$--line and an $i$--line (see Figure~\ref{figtriplepoints}). We will also consider surfaces with {\em branch points}, namely points that have a neighborhood as represented in Figure~\ref{figbranchedpt}.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [scale=0.6]
\draw (0,0) circle (2);
\draw[red] (-1.43,-1.4) -- (1.43,-1.4);
\draw[green] (0,-1.8) -- (0,-1);
\draw[rotate=120,green] (-1.43,-1.4) -- (1.43,-1.4);
\draw[rotate=120,blue] (0,-1.8) -- (0,-1);
\draw[rotate=240,blue] (-1.43,-1.4) -- (1.43,-1.4);
\draw[rotate=240,red] (0,-1.8) -- (0,-1);
\draw (0,-1.4) node[above right] {$\scriptstyle{p_1}$};
\draw (-1,0.2) node {$\scriptstyle{p_2}$};
\draw (1,0.2) node {$\scriptstyle{p_3}$};
\draw (0,-2.8) node {borromean};
\begin{scope} [xshift=6cm]
\draw (0,0) circle (2);
\draw[red] (-1.43,-1.4) -- (1.43,-1.4);
\draw [blue] (0,-1.8) -- (0,-1);
\draw[rotate=240,blue] (-1.43,-1.4) -- (1.43,-1.4);
\draw[green] (-1.43,1.4) -- (1.43,1.4);
\draw[red] (-0.3,0) -- (0.5,0);
\draw[green] (0.1,-0.4) -- (0.1,0.4);
\draw (0,-1.4) node[above right] {$\scriptstyle{p_1}$};
\draw (-0.6,0.9) node {$\scriptstyle{p_2}$};
\draw (0.6,0.3) node {$\scriptstyle{p_3}$};
\draw (0,-2.8) node {non borromean};
\end{scope}
\end{tikzpicture}
\caption{Triple points on a disk\vspace{0.8ex}\\{\footnotesize The picture represents the singular set of the disk on its preimage.\\ The points $p_1$, $p_2$ and $p_3$ are the three preimages of a triple point $p$.}} \label{figtriplepoints}
\end{center}
\end{figure}
\begin{samepage}
A compact surface is:
\begin{itemize}
\item {\em normal singular} if it is immersed in $S^3$ except at a finite number of branch points,
\item {\em ribbon} if it is immersed in $S^3$ with no clasp and no triple point,
\item {\em $T$--ribbon} if it is immersed in $S^3$ with no clasp and no non-borromean triple point,
\item {\em slice} if it is smoothly properly embedded in $B^4$.
\end{itemize}
\end{samepage}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw (-3,-1) -- (3,-1) -- (5,1) -- (-1,1) -- (-3,-1);
\draw[thick,gray,dashed] (0,0) node {$\scriptstyle{\bullet}$} -- (3.3,0);
\draw[thick,gray] (3.3,0) -- (4,0);
\draw (4,0) .. controls +(0.5,0.5) and +(0.5,0.5) .. (3.5,0.5) .. controls +(-0.5,-0.5) and +(-0.5,-0.5) .. (4,0);
\draw (3.5,-0.2) -- (0,0) -- (3.84,0.705);
\end{tikzpicture}
\caption{A branched point} \label{figbranchedpt}
\end{center}
\end{figure}
Beyond ribbons and clasps, the different types of double point lines that appear on a normal singular surface are closed lines, namely {\em circles}, or intervals, called {\em branched ribbons} if one endpoint is branched and {\em branched circles} if the two endpoints are branched, see Figure~\ref{figsingularities}, where the preimages are drawn. Like for a ribbon, the two preimages of a branched ribbon are naturally divided into a {\em $b$--line} that contains a boundary point and an {\em $i$--line} that does not. For a (branched) circle, one may call one preimage the {\em $b$--line} and the other the {\em $i$--line}. A normal singular surface with such namings assigned to the preimages of each (branched) circle is said to be {\em marked}. A triple point on a normal singular surface is {\em borromean} if its three preimages are intersections of a $b$--line and an $i$--line.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [scale=0.5]
\begin{scope}
\tour
\draw (-0.3,-1.3) -- (-0.5,-0.2);
\draw (-0.4,-0.7) node[right] {$i$};
\draw[rotate=240] (-1.43,-1.4) -- (1.43,-1.4);
\draw (-1,1) node[right] {$b$};
\draw (0.2,-2.8) node {ribbon};
\end{scope}
\begin{scope} [xshift=7cm]
\tour
\draw (-0.3,-1.3) -- (0,0.2) -- (-2,0);
\draw (-0.2,-0.5) node[right] {$i$};
\draw (-1,0) node[above] {$b$};
\draw (0.5,-2.8) node {branched ribbon};
\end{scope}
\begin{scope} [xshift=14cm]
\tour
\draw (-0.2,-1.2) circle (0.4);
\draw (-0.8,0.3) circle (0.5);
\draw (0.3,-2.8) node {circle};
\end{scope}
\begin{scope} [xshift=21cm]
\tour
\draw (-0.4,-1.2) .. controls +(0.7,1) and +(0.7,-1) .. (-0.4,1) .. controls +(-0.7,-1) and +(-0.7,1) .. (-0.4,-1.2);
\draw (0.5,-2.8) node {branched circle};
\end{scope}
\end{tikzpicture}
\caption{Double points lines on a normal singular surface (preimages)} \label{figsingularities}
\end{center}
\end{figure}
Given a link $L=L_1\sqcup\dots\sqcup L_n$ in $S^3$, a {\em complex for $L$} is a union of compact surfaces $\Sigma=\cup_{1\leq i\leq n}\Sigma_i$ such that $\partial\Sigma_i=L_i$ for all $i$.
The {\em genus} of such a complex is the sum of the genera of its components. The {\em slice genus $g_s(L)$} (resp. {\em ribbon genus $g_r(L)$}) of a link $L$ is the minimal genus of a slice (resp. ribbon) complex for $L$. Note that $g_s(L)\leq g_r(L)$. These invariants are well-defined (finite) for {\em algebraically split links}, namely links whose components have trivial pairwise linking numbers. The following result generalizes the characterization of slice knots by Kawauchi--Shibuya--Suzuki \cite[Corollary~6.7]{KSS}.
\begin{theo}[Corollary~\ref{corCharSliceGenus}]
The slice genus of an algebraically split link $L$ equals the minimal genus of a marked normal singular complex for $L$ with no clasp and no borromean triple point.
\end{theo}
The {\em $T$--genus} $T_s(L)$ (resp. {\em ribbon $T$--genus $T_r(L)$}) is the minimal number of borromean triple points on a marked normal singular disks complex (resp. $T$--ribbon disks complex) with no clasp for $L$; obviously $T_s(L)\leq T_r(L)$. Note that these numbers may be undefined. Kaplan \cite{Kaplan} proved that any algebraically split link bounds a $T$--ribbon disks complex. In particular, the (ribbon) $T$--genus of an algebraically split link is well-defined. It follows that any of the four invariants $g_s,g_r,T_s,T_r$ is well-defined on a link if and only if this link is algebraically split (see for instance Corollary~\ref{corgenusTgenus}).
In \cite{MS}, Murakami and Sugishita proved that the $T$--genus is an upper bound for the slice genus. We generalize this result to algebraically split links and prove the ribbon counterpart of it.
\begin{theo}[Corollary~\ref{corgenusTgenus}]
For any algebraically split link $L$, $g_s(L)\leq T_s(L)$ and $g_r(L)\leq T_r(L)$.
\end{theo}
The proof we give relies on the expression of the $T$--genus in terms of $\Delta$--distance, which is the distance on the set of links defined by the $\Delta$--move (see Figure~\ref{figdeltamove}).
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [scale=0.3]
\newcommand{\DM}{
\draw (-3,-1) -- (3,-1);
\draw[color=white,line width=8pt,rotate=120] (-3,-1) -- (0,-1);
\draw[rotate=120] (-3,-1) -- (3,-1);
\draw[color=white,line width=8pt,rotate=240] (-3,-1) -- (0,-1);
\draw[rotate=240] (-3,-1) -- (3,-1);
\draw[color=white,line width=8pt] (-3,-1) -- (0,-1);
\draw (-3,-1) -- (1,-1);}
\DM
\draw[<->] (4,0) -- (5,0);
\begin{scope} [xshift=9cm,yshift=1cm,rotate=60]
\DM
\end{scope}
\end{tikzpicture}
\end{center}
\caption{$\Delta$--move} \label{figdeltamove}
\end{figure}
The {\em $\Delta$--slicing number $s_{\scriptscriptstyle{\Delta}}(L)$} (resp. {\em $\Delta$--ribboning number $r_{\scriptscriptstyle{\Delta}}(L)$}) is the minimal number of $\Delta$--moves necessary to change $L$ into a slice (resp. ribbon) link. Note that these are well-defined if and only if $L$ is algebraically split (see Theorem~\ref{thMuNa}). Kawauchi, Murakami and Sugishita \cite{KMS} proved that the $T$--genus of a knot equals its $\Delta$--slicing number. We generalize this as follows.
\begin{theo}[Theorem~\ref{thDeltaDistance}]
For any algebraically split link $L$, $T_s(L)=s_{\scriptscriptstyle{\Delta}}(L)$ and $T_r(L)=r_{\scriptscriptstyle{\Delta}}(L)$.
\end{theo}
It is a natural generalization of the ribbon--slice question to ask whether the $T$--genus and the ribbon $T$--genus always coincide. The above result shows that it is an equivalent question.
\begin{corollary}
The slice knots are all ribbon if and only if $T_s(K)=T_r(K)$ for any knot $K$. The same holds for algebraically split links with a given number of components.
\end{corollary}
In \cite{MS}, Murakami and Sugishita proved that the $T$--genus of knots is a concordance invariant whose mod--$2$ reduction is the Arf invariant --- we generalize this to algebraically split links. They deduced the value of the $T$--genus for several knots satisfying $T_s-g_s=0,1$. The question arises then to know if this difference can be arbitrarily large. In Section~\ref{secex}, we provide a family of non-split links $B_n$ with $g_s(B_n)=1$ and $T_s(B_n)=n$. These links are constructed by cabling one component of the borromean link.
Nevertheless, this is not fully satisfying in that it is based on increasing the number of components of the link. To get the answer in the setting of knots, we will compare the $T$--genus with the $4$--dimensional positive clasp number. The {\em $4$--dimensional clasp number} $c_4(L)$ of a link $L$ is the smallest number of transverse double points on an immersed disks complex in $B^4$ bounded by $L$. The {\em $4$--dimensional positive/negative clasp numbers} $c_4^+(L)$ and $c_4^-(L)$ are defined similarly by counting the positive/negative double points. We also consider here a balanced version of this invariant, which is the most natural in the comparison with the $T$--genus. The {\em balanced $4$--dimensional clasp number} $c_4^b(L)$ of a link $L$ is the smallest number of positive (or negative) transverse double points on an immersed disks complex in $B^4$, bounded by $L$, with trivial self-intersection number. We have the following immediate inequalities (note that a positive or negative transverse double point can be added to an immersed surface in $B^4$ without modifying its boundary).
$$c_4^+(L),c_4^-(L)\leq c_4^b(L)\leq c_4(L)\leq 2c_4^b(L)$$
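These inequalities can be justified as follows: an immersed disks complex realizing $c_4(L)$, with $p$ positive and $n$ negative double points, becomes balanced after adding $|p-n|$ double points of the minority sign, so that
$$c_4^b(L)\leq\max(p,n)\leq p+n=c_4(L);$$
conversely, a balanced complex realizing $c_4^b(L)$ has exactly $c_4^b(L)$ double points of each sign, which gives both $c_4^\pm(L)\leq c_4^b(L)$ and $c_4(L)\leq 2c_4^b(L)$.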
Note that, among all the invariants introduced at that point, only $c_4^\pm$ depends on the orientation, in the case of a link with at least two components.
It is well known that the $4$--dimensional clasp number is an upper bound for the slice genus --- a transverse double point can be smoothed at the cost of adding one to the genus of the surface. For knots, this can be improved as follows.
\begin{lem}[Lemma~\ref{lemmagscb}]
For any knot $K$, $g_s(K)\leq c_4^b(K)$.
\end{lem}
We will see in Section~\ref{secex} that this lemma does not hold for algebraically split links. In contrast, we have the following.
\begin{theo}[Corollary~\ref{corTgenusclaspnb}]
For any algebraically split link $L$, $c_4^b(L)\leq T_s(L)$.
\end{theo}
In \cite{DaSca}, Daemi and Scaduto bounded from below the difference $c_4^+(K)-g_s(K)$ for the connected sums of copies of the knot $7_4$.
\begin{theorem}[Daemi--Scaduto]
For any positive integer $n$, $c_4^+(\sharp^n7_4)-g_s(\sharp^n7_4)\geq n/5$.
\end{theorem}
\begin{corollary}
The difference between the $T$--genus and the slice genus, for knots in $S^3$, can be arbitrarily large.
\end{corollary}
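Indeed, combining the inequality $c_4^+\leq c_4^b$ with Corollary~\ref{corTgenusclaspnb}, we get
$$T_s(\sharp^n7_4)-g_s(\sharp^n7_4)\geq c_4^+(\sharp^n7_4)-g_s(\sharp^n7_4)\geq n/5.$$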
This raises the question of what can be the difference between the $T$--genus and the balanced 4--dimensional clasp number. In \cite{Miller}, Miller proves the existence of knots with arbitrarily large slice genus and trivial positive and negative 4--dimensional clasp numbers, which implies, since $T_s\geq g_s$, that the difference $T_s-c_4^{\pm}$ can be arbitrarily large.
\begin{question}
Can the difference $T_s-c_4^b$ be arbitrarily large for knots?
\end{question}
In Section~\ref{secex}, we propose a family of knots that could realize such an unbounded difference, namely the family of twist knots. The 4--dimensional clasp number equals $1$ for all non-slice twist knots; I would conjecture that their $T$--genus grows linearly with the number of twists.
\section{Surfaces, cobordisms and projections}
In this section, we present some manipulations on surfaces and cobordisms and we deduce a comparison of the $T$--genus with the slice genus and the balanced $4$--dimensional clasp number, and a characterization of the slice genus.
On the preimage of a marked normal singular compact surface, a preimage of a triple point is a {\em triple point of type ($b$--$i$)} if it is the intersection of a $b$--line and an $i$--line.
\begin{proposition} \label{propborrotoclasp}
Let $\Sigma$ be a marked normal singular compact surface in $S^3$ with $b$ borromean triple points and no clasp. Then $\Sigma$ is the radial projection of a properly immersed surface in $B^4$ with $b$ positive and $b$ negative double points.
\end{proposition}
\begin{proof}
Let $\tilde{\Sigma}$ be a compact surface and let $\iota:\tilde{\Sigma}\to S^3$ be a map which is an immersion except at a finite number of branch points, such that $\iota(\tilde\Sigma)=\Sigma$. We will define a radius function on $\tilde\Sigma$ in order to immerse it in $B^4$. Let $r:\tilde\Sigma\to(0,1]$ be a smooth function that sends:
\begin{itemize}
\item $\partial\tilde\Sigma$ onto $1$,
\item branch points and triple points of type ($b$--$i$) onto $\frac12$,
\item other points in an $i$--line into $(0,\frac12)$,
\item other points in a $b$--line into $(\frac12,1)$.
\end{itemize}
Note that, within the three preimages of a non-borromean triple point, only one has type ($b$--$i$), so that only one is sent onto $\frac12$. The map from $\tilde\Sigma$ to $B^4=\frac{S^3\times[0,1]}{(x,0)\sim(y,0)}$ given by $x\mapsto\left(\iota(x),r(x)\right)$ is an embedding except at borromean triple points which are still triple points. It remains to modify the radius function around these borromean triple points.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\foreach \x in {0,5.5,9,12.5} {\draw[xshift=\x cm] (0,2) node[left] {$0$} -- (0,0) node[left] {$1$} -- (2,0) (0,1) node[left] {$r$};}
\draw (1,0) node[below] {$\tilde\ell_{ij}$};
\draw[thick] (2,0) arc (7:173:1) (2,1.76) arc (-7:-173:1) (1,0.86) node {$\scriptstyle\bullet$};
\draw[very thick,->] (3,1) -- (4,1);
\begin{scope} [xshift=5.5cm]
\draw (1,0) node[below] {$\tilde\ell_{12}$};
\draw[thick,rounded corners=1pt] (2,0) arc (7:70:1) arc (20:160:0.37) arc (110:173:1) (2,1.76) arc (-7:-70:1) arc (-20:-160:0.37) arc (-110:-173:1);
\end{scope}
\begin{scope} [xshift=9cm]
\draw (1,0) node[below] {$\tilde\ell_{23}$};
\draw[thick,rounded corners=2pt] (2,0) arc (7:173:1) (2,1.76) arc (-7:-70:1) arc (20:160:0.37) arc (-110:-173:1);
\end{scope}
\begin{scope} [xshift=12.5cm]
\draw (1,0) node[below] {$\tilde\ell_{31}$};
\draw[thick,rounded corners=2pt] (2,0) arc (7:70:1) arc (-20:-160:0.37) arc (110:173:1) (2,1.76) arc (-7:-173:1);
\end{scope}
\end{tikzpicture}
\caption{Modification of the radius function around a triple point\vspace{0.8ex}\\{\footnotesize The line $\ell_{ij}$ is the ribbon line on $\Sigma$ containing $p$\\whose preimage $\tilde\ell_{ij}$ is made of an $i$--line containing $p_i$ and a $b$--line containing $p_j$.}} \label{figmodifradius}
\end{center}
\end{figure}
Take a smooth function $f:D^2\to[0,\varepsilon]$, where $\varepsilon>0$, such that $f=0$ on a collar neighborhood of $S^1$, $f(0)=\varepsilon$ and~$0$ is the only critical point of $f$. Fix a borromean triple point $p$ on $\Sigma$ and write $p_1,p_2,p_3$ for its preimages, choosing the indices so that the $i$--line containing $p_1$ has the same image as the $b$--line containing~$p_2$. Now consider small disk neighborhoods of $p_1$ and $p_2$ and define a new radius function $r'$ by $r'=r-f$ in the neighborhood of $p_1$, $r'=r+f$ in the neighborhood of $p_2$ and $r'=r$ elsewhere. This desingularizes the triple point but adds two double points of opposite signs with preimages on the $i$--line containing $p_1$ and on the $b$--line containing $p_2$, see Figure~\ref{figmodifradius}. If the neighborhoods and $\varepsilon$ are small enough, no other singularity is created. Performing a similar modification around each borromean triple point, we finally get the required immersion.
\end{proof}
\begin{corollary} \label{corTgenusclaspnb}
For any algebraically split link $L$, $c_4^b(L)\leq T_s(L)$.
\end{corollary}
For knots, this corollary and the following lemma give a relation between the slice genus and the $T$--genus. Nevertheless, the lemma does not hold for algebraically split links and we will give alternative proofs of this relation.
\begin{lemma} \label{lemmagscb}
For any knot $K$, $g_s(K)\leq c_4^b(K)$.
\end{lemma}
\begin{proof}
Take a disk $\Sigma$ properly immersed in $B^4$ with $\partial\Sigma=K$ that realizes $c_4^b(K)$. We will desingularize $\Sigma$ by tubing once for each pair of double points with opposite signs. Fix a pair $(p_+,p_-)$ of respectively a positive and a negative self-intersection point of $\Sigma$. Take a path $\gamma$ on $\Sigma$ joining $p_+$ and $p_-$, whose interior does not meet the singularities of $\Sigma$. Inside a neighborhood of $\gamma$ in $B^4$, one can find a solid tube $C=\gamma\times D^2$ where $\gamma\times\{0\}=\gamma$, such that $C$ meets the leaf of $\Sigma$ transverse to $\gamma$ around $p_+$ (resp. $p_-$) along $p_+\times D^2$ (resp. $p_-\times D^2$), and $C$ meets the leaf of $\Sigma$ containing $\gamma$ along $\gamma$. Now remove from $\Sigma$ the interior of the disks $p_\pm\times D^2$ and reglue $\gamma\times S^1$ instead. The sign condition ensures that the new surface is again oriented.
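Note that each such tubing removes two open disks and glues in an annulus, so it decreases the Euler characteristic by $2$ and increases the genus by $1$; after desingularizing the $c_4^b(K)$ pairs of double points in this way, we obtain an embedded surface in $B^4$ of genus $c_4^b(K)$ bounded by $K$, whence $g_s(K)\leq c_4^b(K)$.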
\end{proof}
Proposition~\ref{propborrotoclasp} says in particular that the boundary of a marked normal singular genus--$g$ complex with no clasp and no borromean triple point bounds a genus--$g$ slice complex. We now prove the converse in order to get a three-dimensional characterization of the slice genus.
\begin{theorem} \label{thprojection}
Let $S$ be a properly embedded compact surface in $B^4$. Then $S$ is isotopic, via an isotopy of properly embedded surfaces, to a surface whose radial projection in $S^3$ is a marked normal singular compact surface with no clasp and no borromean triple point.
\end{theorem}
\begin{proof}
Up to isotopy, we can assume that the radius function on $S$ is a Morse function whose index $\ell$ critical points take the value $\varepsilon_\ell$ with $0<\varepsilon_0<\varepsilon_1<\varepsilon_2<1$. Hence $S$ can be reorganized as a PL surface with:
\begin{itemize}
\item at $r=\varepsilon_0$, a family $\Delta$ of disjoint disks,
\item at $r\in(\varepsilon_0,\varepsilon_1)$, the boundary $\partial\Delta$,
\item at $r=\varepsilon_1$, $\partial\Delta\cup B$, where $B$ is a disjoint union of embedded bands $[0,1]\times[0,1]$ that meet $\partial\Delta$ exactly along $[0,1]\times\{0,1\}$,
\item at $r\in(\varepsilon_1,\varepsilon_2)$, the split union of $L=\partial S$ with a trivial link,
\item at $r=\varepsilon_2$, the disjoint union of $L$ with a disjoint union $D$ of disks bounded by the above trivial link,
\item at $r\in(\varepsilon_2,1]$, the link $L$.
\end{itemize}
Projecting radially $S$ on $S^3$, we obtain a complex $\Sigma=\Sigma'\cup D$, where $\Sigma'$ is a ribbon complex --- the projection of $\Delta\cup B$ --- and $D$ is a family of disjoint disks, disjoint from $L$, that we can assume transverse to $\Sigma'$. Denote by $\tilde\Sigma=\tilde\Sigma'\cup\tilde D$ the preimage of $\Sigma$ with the corresponding decomposition.
Note that a double point $p\in\partial D$ is a double point of $\Sigma'$, that all branch points of $\Sigma$ lie on $\partial D$, and that branch points are simple. We now analyse the lines of double points on $\Sigma$ and their preimages. Let $\gamma\subset\Sigma$ be the closure of a line of double points. Note that it cannot have an endpoint in $\mathrm{Int}(D)$ since $D$ is disjoint from $L$.
\paragraph{\underline{Case 1}:} $\gamma$ does not contain any branch point.
Let $\tilde\gamma$ be one of its two preimages. If $\tilde\gamma\subset\tilde D$, then $\gamma$ is a circle and we mark $\tilde\gamma$ as a $b$--line.
If $\tilde\gamma$ meets both $\tilde D$ and $\tilde\Sigma'$, then it contains an interval properly embedded in $\tilde D$. It has to be continued on both sides by $b$--lines of $\tilde\Sigma'$. These end either on $\partial\tilde\Sigma$ or on $\partial\tilde D$. In the latter case, they are continued by an interval properly embedded in $\tilde D$. Iterating, we see that $\tilde\gamma$ is either an interval with two endpoints on $\partial\tilde\Sigma$ or a circle. In both cases, we mark it as a $b$--line. Now assume $\tilde\gamma\subset\tilde\Sigma'$. If it is an interval, it has no endpoint on $\partial\tilde D$, so that it is a $b$--line or an $i$--line of $\tilde\Sigma'$. If it is a circle, either the other preimage of $\gamma$ meets $\tilde D$, in which case we mark $\tilde\gamma$ as an $i$--line, or the two preimages of $\gamma$ are contained in $\tilde\Sigma'$ and we assign them different markings.
\paragraph{\underline{Case 2}:} $\gamma$ contains a single branch point $p$.
Let $\tilde\gamma$ be the whole preimage of $\gamma$ and let $\tilde p$ be the preimage of $p$. Near $p$, $\gamma$ lies in $\Sigma'\cap D$. Let $q$ be the first point of $\partial D$ reached from $p$. The preimages of $q$ are $\tilde q_1\in\partial\tilde D$ and $\tilde q_2\in\mathrm{Int}(\tilde\Sigma')$, see Figure~\ref{figSingleBranchPoint}. The curve $\tilde\gamma$ joins $\tilde p$ to $\tilde q_1$ inside $\tilde D$ and to $\tilde q_2$ inside $\tilde\Sigma'$. It is continued from $\tilde q_1$ by a $b$--line in $\tilde\Sigma'$ and from $\tilde q_2$ by an $i$--line in $\tilde\Sigma'$. The $b$--line may end on $\partial\tilde D$, in which case we iterate the argument, or on $\partial\tilde\Sigma$. Finally, the preimage of $\gamma$ containing $\tilde q_1$ is a $b$--line ending on $\partial\tilde\Sigma$ and the preimage containing $\tilde q_2$ is an $i$--line contained in $\tilde\Sigma'$.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [scale=0.8]
\draw[very thick,dashed] (4,-2) -- (5,-2) (3,2) -- (4,2);
\draw[very thick] (3,2) -- (0,2) arc (90:270:2) -- (4,-2) node[above] {$\partial\tilde\Sigma$};
\draw[fill=gray!30] (0,0.5) circle (0.8) (2,-0.5) circle (0.6);
\draw (-0.1,0.95) node {$\tilde D$};
\draw (-0.8,0.5) node {$\scriptstyle{\bullet}$} node[above left] {$\tilde{p}$} -- (0.8,0.5) node {$\scriptstyle{\bullet}$} node[above right] {$\tilde{q}_1$};
\draw (-0.8,0.5) .. controls +(-0.5,-0.5) and +(0,0.5) .. (-1.2,-0.5) node {$\scriptstyle{\bullet}$} node[left] {$\tilde{q}_2$} .. controls +(0,-0.2) and +(-0.3,0.3) .. (-0.8,-1) node {$\scriptstyle{\bullet}$} .. controls +(0.3,-0.3) and +(-0.5,0) .. (0,-1.5) node {$\scriptstyle{\bullet}$} .. controls +(0.5,0) and +(-0.3,-0.3) .. (1,-1) node {$\scriptstyle{\bullet}$};
\draw (0.8,0.5) .. controls +(0.5,0) and +(0,0.5) .. (2,0.1) node {$\scriptstyle{\bullet}$} .. controls +(0,-0.5) and +(-0.5,0) .. (2.6,-0.5) node {$\scriptstyle{\bullet}$} .. controls +(0.5,0) and +(0,0.5) .. (3,-2) node {$\scriptstyle{\bullet}$};
\end{tikzpicture}
\caption{Case of a single branch point $p$} \label{figSingleBranchPoint}
\end{center}
\end{figure}
\paragraph{\underline{Case 3}:} $\gamma$ contains two branch points, {\em i.e.} it is an interval with branch points as endpoints. The two preimages of $\gamma$ are intervals in $\tilde\Sigma$ with the same endpoints. Analyzing the situation from a branch point as in the previous case, we see that one preimage of $\gamma$ is contained in $\tilde\Sigma'$, while the other meets $\tilde D$. We mark the first one as an $i$--line and the second one as a $b$--line.
Finally $\Sigma$ is a marked normal singular complex. Let $p$ be a triple point of $\Sigma$. Since $\Sigma'$ is a ribbon complex, $p$ must have at least one preimage $\tilde p$ in $\mathrm{Int}(\tilde D)$. It follows from the previous discussion that no $i$--line meets $\mathrm{Int}(\tilde D)$, so that $\tilde p$ is the intersection of two $b$--lines. Hence $p$ is non-borromean.
\end{proof}
\begin{corollary} \label{corCharSliceGenus}
The slice genus of an algebraically split link $L$ equals the minimal genus of a marked normal singular complex for $L$ with no clasp and no borromean triple point.
\end{corollary}
A {\em cobordism} from a link $L$ to a link $L'$ is a surface $S$ properly embedded in $S^3\times[0,1]$, such that $\partial S=L'\times\{1\}- L\times\{0\}$; the links $L$ and $L'$ are said to be {\em cobordant}. The cobordism $S$ is {\em strict} if it has genus $0$ and distinct components of $L$ belong to distinct components of $S$; note that the relation induced on links is not symmetric. A cobordism is a {\em concordance} if $S$ is a disjoint union of annuli and each annulus has one boundary component in $S^3\times\{0\}$ and the other in $S^3\times\{1\}$; the links $L$ and $L'$ are then said to be {\em concordant}.
\begin{theorem} \label{thconcordance}
Let $\Sigma$ be a marked normal singular compact surface in $S^3$ with $b$ borromean triple points and no clasp. Let $S$ be a compact surface properly embedded in $S^3\times[0,1]$ such that $\partial S\cap\left(S^3\times\{1\}\right)=-\partial\Sigma\times\{1\}$. Then, up to an isotopy of $S$ fixing the boundary, the image of $S\cup\left(\Sigma\times\{1\}\right)$ by the projection $S^3\times[0,1]\twoheadrightarrow S^3$ is again a marked normal singular compact surface with $b$ borromean triple points and no clasp.
\end{theorem}
\begin{proof}
The proof is essentially the same as that of Theorem~\ref{thprojection}. That proof still works if the slice of the surface at $r=\varepsilon_0$ is not a disjoint union of disks but a marked normal singular complex. One can ask that, in the projected complex, the bands added at $r=\varepsilon_1$ avoid the singularities of this complex and that the union of disks $D$ avoids its triple points. The same discussion then shows that the projection can be marked as a normal singular complex with as many borromean triple points as the complex at $r=\varepsilon_0$. Here, we consider $S$ in~$S^3\times[0,1]$ with $0<\varepsilon_2<\varepsilon_1<\varepsilon_0=1$, the complex at $r=\varepsilon_0$ being $\Sigma\times\{1\}$, and we project on $S^3\times\{0\}$.
\end{proof}
\begin{corollary} \label{corconcordance}
The $T$--genus of algebraically split links is a concordance invariant.
\end{corollary}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [scale=0.4]
\begin{scope} [yshift=0.5cm]
\foreach \t in {0,120,240} {
\draw[rotate=\t] (-1.77,0.27) arc (135:350:2.5);
\draw[rotate=\t] (2.46,-1.07) arc (10:105:2.5);}
\end{scope}
\draw (5.5,0) node {$\sim$};
\begin{scope} [xshift=12cm,scale=0.5]
\draw (0,-6) .. controls +(-3,3) and +(0,-5) .. (-3,3) .. controls +(0,4) and +(-2,2) .. (0,6);
\draw[white,line width=5pt] (-5.9,2) -- (5.9,2);
\draw (-5.9,2) -- (5.9,2);
\draw (-6.5,-1.6) -- (-6.5,3) .. controls +(0,5) and +(-5,0) .. (0,8.5) .. controls +(5,0) and +(0,5) .. (6.5,3) -- (6.5,-1.6);
\draw[white,line width=5pt] (-7.1,2) .. controls +(-2,0) and +(0,2) .. (-10,0) .. controls +(0,-2) and +(-2,0) .. (-6.5,-2) -- (6.5,-2);
\draw (-7.1,2) .. controls +(-2,0) and +(0,2) .. (-10,0) .. controls +(0,-2) and +(-2,0) .. (-6.5,-2) -- (6.5,-2);
\draw (7.1,2) .. controls +(2,0) and +(0,2) .. (10,0) .. controls +(0,-2) and +(2,0) .. (6.5,-2);
\draw (-6.5,-2.4) .. controls +(0,-5) and +(-5,0) .. (0,-8.5) ..controls +(5,0) and +(0,-5) .. (6.5,-2.4);
\draw[white,line width=5pt] (0,-6) .. controls +(2,-2) and +(0,-4) .. (3,-3) .. controls +(0,5) and +(3,-3) .. (0,6);
\draw (0,-6) .. controls +(2,-2) and +(0,-4) .. (3,-3) .. controls +(0,5) and +(3,-3) .. (0,6);
\end{scope}
\begin{scope} [xshift=25cm,scale=0.5]
\draw[blue!10,fill=blue!10] (-6.5,-2.1) -- (-6.5,3) .. controls +(0,5) and +(-5,0) .. (0,8.5) .. controls +(5,0) and +(0,5) .. (6.5,3) -- (6.5,-2.1) .. controls +(0,-5) and +(5,0) .. (0,-8.5) .. controls +(-5,0) and +(0,-5) .. (-6.5,-2.1);
\draw[green,fill=green!20] (0,-6) .. controls +(2,-2) and +(0,-4) .. (3,-3) .. controls +(0,5) and +(3,-3) .. (0,6);
\draw[red!30,fill=red!30] (-6.5,2) .. controls +(-3,0) and +(0,2) .. (-10,0) .. controls +(0,-2) and +(-2,0) .. (-6.5,-2) -- (2,-2) -- (0,0) -- (-6.5,0) -- (-6.5,2);
\draw[red!30,fill=red!30] (6.5,2) .. controls +(3,0) and +(0,2) .. (10,0) .. controls +(0,-2) and +(2,0) .. (6.5,-2) --(3,-2) -- (3,0) -- (6.5,0);
\draw[vert,dashed] (0,-6) .. controls +(-3,3) and +(0,-5) .. (-3,3) .. controls +(0,4) and +(-2,2) .. (0,6);
\draw[red,dashed] (-6,2) -- (6,2) (2.2,-2) -- (2.5,-2);
\draw[blue,dashed] (-6.5,-1.9) -- (-6.5,0) (6.5,0) -- (6.5,-1.9);
\draw[red] (-7.1,2) .. controls +(-2,0) and +(0,2) .. (-10,0) .. controls +(0,-2) and +(-2,0) .. (-6.5,-2) -- (2,-2) (3.5,-2) -- (6.5,-2) .. controls +(2,0) and +(0,-2) .. (10,0) .. controls +(0,2) and +(2,0) .. (7.1,2);
\draw[blue] (-6.5,-2.1) .. controls +(0,-5) and +(-5,0) .. (0,-8.5) ..controls +(5,0) and +(0,-5) .. (6.5,-2.1) (-6.5,0) -- (-6.5,3) .. controls +(0,5) and +(-5,0) .. (0,8.5) .. controls +(5,0) and +(0,5) .. (6.5,3) -- (6.5,0);
\draw[green] (0,-6) .. controls +(2,-2) and +(0,-4) .. (3,-3) .. controls +(0,5) and +(3,-3) .. (0,6);
\draw[gray,densely dashed,thick] (2,-2) -- (-2,2) (0,-6) -- (0,6) (-6.5,0) -- (6.5,0);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{The borromean link and a $T$--ribbon disks complex for it} \label{figDisksBorromean}
\end{figure}
The next result says that a marked normal singular surface in $S^3$ can be pushed into $S^3\times[0,1]$ in order to isolate the borromean triple points.
\begin{proposition} \label{propcob}
Let $\Sigma$ be a marked normal singular surface in $S^3$ with $b$ borromean triple points and no clasp. Then there is a surface $S$ properly embedded in $S^3\times[0,1]$ and a disjoint union $\Delta$ of $b$ $T$--ribbon disks complexes as represented in Figure~\ref{figDisksBorromean} such that $S\cap(S^3\times\{1\})=-\partial\Delta$, $S\cap(S^3\times\{0\})=\partial\Sigma$ and the image of $S\cup\Delta$ by the projection $S^3\times[0,1]\twoheadrightarrow S^3$ is $\Sigma$.
\end{proposition}
\begin{proof}
Let $\iota:\tilde{\Sigma}\to S^3$ be an immersion except at a finite number of branch points, with image~$\Sigma$. We will define a height function on $\tilde\Sigma$ in order to immerse it in $S^3\times[0,1]$ as a cobordism from $\partial\Sigma$ to a split union of borromean links.
First define a subsurface $\tilde C_1$ of $\tilde\Sigma$ as the disjoint union of:
\begin{itemize}
\item a small disk around each triple point of type ($b$--$b$), namely the intersection of two $b$--lines,
\item for each $b$--line, a small disk around a point of the line between any two consecutive triple points of type ($b$--$i$),
\item for each closed $b$--line with no triple point, a small disk around any point of the circle,
\item at each branch point, a small disk meeting the $b$--line along an open interval admitting the branch point as an endpoint,
\item a collar neighborhood of $\partial\Sigma$.
\end{itemize}
Set $\tilde\Sigma_1=\tilde\Sigma\setminus\mathrm{Int}(\tilde C_1)$, see Figure~\ref{figpushcobborro}. In restriction to $\tilde\Sigma_1$, $\iota$ is an immersion whose image is a $T$--ribbon genus--$0$ surface such that any ribbon line contains at most one triple point. Now define $\tilde\Sigma_2$ as a neighborhood in $\tilde\Sigma_1$ of all the $i$--lines. Set $\tilde C_2=\tilde\Sigma_1\setminus\mathrm{Int}(\tilde\Sigma_2)$. Then $\tilde\Sigma_2$ is a disjoint union of disks. On some of them, the restriction of $\iota$ has no singularity; define $\tilde C_3$ as the union of these disks with a collar neighborhood of the others. Set $\tilde\Sigma_3=\tilde\Sigma_2\setminus\mathrm{Int}(\tilde C_3)$. The restriction of $\iota$ to $\tilde\Sigma_3$ has, on each disk, one $b$--line and one $i$--line intersecting once.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [scale=0.4]
\begin{scope}
\draw[fill=vert!60] (5,4) ellipse (5 and 4);
\draw[vert,fill=gray!10] (5,4) ellipse (4.7 and 3.7);
\draw[vert,fill=vert!60] (1.57,3.3) circle (0.3) (4.9,6.3) circle (0.3) (6,4.2) circle (0.3) (7.6,1.8) circle (0.3) (5.48,2.67) circle (0.3);
\draw[very thick] (5,4) ellipse (5 and 4) (1,7.5) node {$\tilde\Sigma$};
\draw (0.15,3) -- (9.85,5) (1,6.4) -- (2,0.8)
(3,5) -- (4,2) (8,6) -- (9,4)
(3,7.65) arc (-160:-17:2) (3.5,5.5) arc (180:10:1.5)
(3,2.5) -- (5.5,3) -- (5,0)
(3,1.5) circle (0.4) (7.5,2.5) circle (0.7);
\draw[vert] (9,0.5) node {$\tilde C_1$};
\end{scope}
\begin{scope} [xshift=14cm]
\draw[fill=vert!60] (5,4) ellipse (5 and 4);
\draw[vert,fill=gray!10] (3,1.5) circle (0.6);
\draw[vert,fill=gray!10,rounded corners=2pt] (3.2,2.35) -- (3.6,2.4) -- (3.5,2.8) -- (2.7,2.7) -- (2.8,2.3) -- (3.2,2.35)
(4.57,2.6) -- (4.95,2.65) -- (4.85,3.05) -- (4.1,2.95) -- (4.2,2.55) -- (4.57,2.6)
(3,4.1) -- (3.35,3.15) -- (3.85,3.2) -- (3.2,5.2) -- (2.7,5.1) -- (3,4.1)
(3.7,2.4) -- (3.9,1.8) -- (4.25,1.8) -- (3.8,3.1) -- (3.5,3) -- (3.7,2.4)
(8,5.5) -- (8.2,5) -- (8.7,5.1) -- (8.2,6.2) -- (7.7,6.1) -- (8,5.5)
(8.6,4.27) -- (8.8,3.8) -- (9.35,3.85) -- (9,4.6) -- (8.5,4.5) -- (8.6,4.27)
(3.38,6) arc (160:105:1.7) -- (4.65,6.75) arc (105:185:1.3) -- (3.3,5.3) arc (185:160:1.7)
(6.3,6.62) arc (40:0:1.7) -- (6.25,5.5) arc (0:80:1.3) -- (5.3,7.2) arc (80:40:1.7);
\draw[very thick] (5,4) ellipse (5 and 4) (0.8,7.5) node {$\tilde\Sigma_1$};
\draw (0.15,3) -- (9.85,5) (1,6.4) -- (2,0.8);
\draw (3,5) -- (3.55,3.35) (3.7,2.9) -- (4,2) (8,6) -- (8.4,5.2) (8.8,4.4) -- (9,4);
\draw (3,7.65) arc (-160:-17:2) (3.5,5.5) arc (180:115:1.5) (5.5,6.9) arc (70:10:1.5);
\draw (2.9,2.5) -- (3.4,2.6) (4.25,2.75) -- (4.75,2.85) (5.5,3) -- (5,0);
\draw (3,1.1) arc (270:0:0.4) (7.5,2.5) circle (0.7);
\draw[very thick,fill=white] (1.57,3.3) circle (0.3) (4.9,6.3) circle (0.3) (6,4.2) circle (0.3) (7.6,1.8) circle (0.3) (5.48,2.67) circle (0.3);
\draw[vert] (9,0.5) node {$\tilde C_2$};
\end{scope}
\begin{scope} [xshift=28cm]
\draw[very thick,fill=vert!60,yshift=-1cm] (3,1.5) circle (0.65);
\draw[very thick,fill=vert!60,rounded corners=2pt,xshift=-0.8cm,yshift=-0.75cm,scale=1.1] (3.2,2.35) -- (3.6,2.4) -- (3.5,2.8) -- (2.7,2.7) -- (2.8,2.3) -- (3.2,2.35);
\draw[very thick,fill=vert!60,rounded corners=2pt,xshift=0.03cm,yshift=-0.8cm,scale=1.1] (4.57,2.6) -- (4.95,2.65) -- (4.85,3.05) -- (4.1,2.95) -- (4.2,2.55) -- (4.57,2.6);
\draw[very thick,fill=vert!60,rounded corners=2pt,yshift=-0.9cm,xshift=-0.8cm,scale=1.2] (3.7,2.4) -- (3.9,1.8) -- (4.25,1.8) -- (3.8,3.1) -- (3.5,3) -- (3.7,2.4);
\draw[very thick,fill=vert!60,rounded corners=2pt]
(8.6,4.27) -- (8.8,3.8) -- (9.35,3.85) -- (9,4.6) -- (8.5,4.5) -- (8.6,4.27)
(8,5.5) -- (8.2,5) -- (8.7,5.1) -- (8.2,6.2) -- (7.7,6.1) -- (8,5.5);
\draw[very thick,fill=vert!60,rounded corners=3pt,xshift=-0.3cm]
(2.9,3.73) -- (3.15,2.95) -- (4.1,3) -- (3.4,5.4) -- (2.4,5.3) -- (2.9,3.73);
\draw[vert,fill=gray!10,rounded corners=2pt,xshift=-0.3cm]
(3,4.1) -- (3.35,3.15) -- (3.85,3.2) -- (3.2,5.2) -- (2.7,5.1) -- (3,4.1);
\draw[xshift=-0.3cm] (3,3.55) -- (3.9,3.7) (3,5) -- (3.55,3.35);
\begin{scope} [xshift=0.5cm]
\draw[very thick,fill=vert!60,rounded corners=3pt]
(3.15,6) arc (160:95:1.9) -- (4.75,6.5) arc (105:185:1.2) -- (3.1,5.1) arc (190:160:1.9)
(6.54,6.74) arc (40:-5:1.9) -- (6.1,5.3) arc (-2:83:1.3) -- (5.1,7.4) arc (90:40:1.9);
\draw[vert,fill=gray!10,rounded corners=2pt]
(3.38,6) arc (160:105:1.7) -- (4.65,6.75) arc (105:185:1.3) -- (3.3,5.3) arc (185:160:1.7)
(6.3,6.62) arc (40:0:1.7) -- (6.25,5.5) arc (0:80:1.3) -- (5.3,7.2) arc (80:40:1.7);
\draw (3.5,5.5) arc (180:115:1.5) (5.5,6.9) arc (70:10:1.5) (3.7,6.8) arc (-130:-105:2) (5.6,6.4) arc (-70:-43:2);
\draw (1,7.5) node {$\tilde\Sigma_2$};
\draw[vert] (5,0.5) node {$\tilde C_3$};
\end{scope}
\end{scope}
\end{tikzpicture}
\caption{Pushing a normal singular surface} \label{figpushcobborro}
\end{center}
\end{figure}
\begin{samepage}
Let $h:\tilde\Sigma\to[0,1]$ be a smooth function that sends:
\begin{itemize}
\item $\partial\tilde\Sigma$ onto $0$, $\partial\tilde\Sigma_1$ onto $\frac13$ and $\partial\tilde\Sigma_2$ onto $\frac23$,
\item $\mathrm{Int}(\tilde C_1)$ into $(0,\frac13)$, $\mathrm{Int}(\tilde C_2)$ into $(\frac13,\frac23)$ and $\mathrm{Int}(\tilde C_3)$ into $(\frac23,1)$,
\item $\tilde\Sigma_3$ onto $1$.
\end{itemize}
\end{samepage}
Finally define an immersion $\iota':\tilde\Sigma\to S^3\times[0,1]$ by $\iota'(p)=\big(\iota(p),h(p)\big)$. Set $S=\iota'\left(\tilde\Sigma\setminus\mathrm{Int}(\tilde\Sigma_3)\right)$ and $\Delta=\iota'(\tilde\Sigma_3)$.
\end{proof}
The above results provide a characterization of the $T$--genus which generalizes a result of Kawauchi--Murakami--Sugishita in the case of knots \cite{KMS}.
\begin{corollary} \label{corcobborro}
The $T$--genus of an algebraically split link $L$ is the smallest integer~$b$ such that there is a strict cobordism from $L$ to a split union of $b$ borromean links.
\end{corollary}
\begin{proof}
If $\Sigma$ is a marked normal singular disks complex for $L$ with $b$ borromean triple points and no clasp, then Proposition~\ref{propcob} gives a strict cobordism from $L$ to a split union of $b$ borromean links. Conversely, given such a cobordism $S$, define $\Sigma$ as the disjoint union of disks bounded by the $b$ borromean links as in Figure~\ref{figDisksBorromean} and apply Theorem~\ref{thconcordance} to get a marked normal singular disks complex for $L$ with $b$ borromean triple points and no clasp.
\end{proof}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [xscale=0.25,yscale=0.2]
\draw[white,fill=blue!10] (-6.5,-2.1) -- (-6.5,3) .. controls +(0,5) and +(-5,0) .. (0,8.5) .. controls +(5,0) and +(0,5) .. (6.5,3) -- (6.5,-2.1) .. controls +(0,-5) and +(5,0) .. (0,-8.5) .. controls +(-5,0) and +(0,-5) .. (-6.5,-2.1);
\draw[green,fill=green!20] (0,-6) .. controls +(2,-2) and +(0,-4) .. (3,-3) .. controls +(0,5) and +(3,-3) .. (0,6);
\draw[white,fill=red!30] (-7,2) .. controls +(-2,0) and +(0,2) .. (-10,0) .. controls +(0,-2) and +(-2,0) .. (-6.5,-2) .. controls +(4,0) and +(2,-2) .. (-3,0) -- (-6,0);
\draw[white,fill=red!30] (7,2) .. controls +(2,0) and +(0,2) .. (10,0) .. controls +(0,-2) and +(2,0) .. (6.5,-2) .. controls +(-2,0) and +(2,-2) .. (3,0) -- (6,0);
\draw[white,fill=red!30] (-7.5,0) .. controls +(0.3,0) and +(0,-0.3) .. (-7,0.5) -- (-7,3) .. controls +(0,6) and +(-5,0) .. (0,9) .. controls +(5,0) and +(0,6) .. (7,3) -- (7,0.5) .. controls +(0,-0.3) and +(-0.3,0) .. (7.5,0) -- (5.5,0) .. controls +(0.3,0) and +(0,-0.3) .. (6,0.5) -- (6,3) .. controls +(0,4) and +(5,0) .. (0,8) .. controls +(-5,0) and +(0,4) .. (-6,3) -- (-6,0.5) .. controls +(0,-0.3) and +(-0.3,0) .. (-5.5,0);
\draw[vert,dashed] (0,-6) .. controls +(-3,3) and +(0,-5) .. (-3,3) .. controls +(0,4) and +(-2,2) .. (0,6);
\draw[red,dashed] (-5.9,2) -- (5.9,2);
\draw[red,dashed] (-3,0) .. controls +(-1,1) and +(-1,0) .. (-1,1) .. controls +(2,0) and +(-1,1) .. (3,0);
\draw[blue,dashed] (-6.5,-1.9) -- (-6.5,3) .. controls +(0,5) and +(-5,0) .. (0,8.5) .. controls +(5,0) and +(0,5) .. (6.5,3) -- (6.5,-1.9);
\draw[red] (-7.1,2) .. controls +(-2,0) and +(0,2) .. (-10,0) .. controls +(0,-2) and +(-2,0) .. (-6.5,-2) .. controls +(4,0) and +(2,-2) .. (-3,0);
\draw[red] (7.1,2) .. controls +(2,0) and +(0,2) .. (10,0) .. controls +(0,-2) and +(2,0) .. (6.5,-2) .. controls +(-2,0) and +(2,-2) .. (3,0);
\draw[red] (-7.5,0) .. controls +(0.3,0) and +(0,-0.3) .. (-7,0.5) -- (-7,3) .. controls +(0,6) and +(-5,0) .. (0,9) .. controls +(5,0) and +(0,6) .. (7,3) -- (7,0.5) .. controls +(0,-0.3) and +(-0.3,0) .. (7.5,0);
\draw[blue] (-6.5,-2.1) .. controls +(0,-5) and +(-5,0) .. (0,-8.5) ..controls +(5,0) and +(0,-5) .. (6.5,-2.1);
\foreach \s in {-1,1} {
\draw[red] (6*\s,3) .. controls +(0.4*\s,-0.2) and +(-0.4*\s,-0.2) .. (7*\s,3);
\draw[red] (3*\s,7.8) .. controls +(0.2*\s,0.4) and +(0.2*\s,-0.4) .. (3*\s,8.9);}
\draw[gray,densely dashed,thick] (-3,0) -- (-5.5,0) .. controls +(-0.3,0) and +(0,-0.3) .. (-6,0.5) -- (-6,3) .. controls +(0,4) and +(-5,0) .. (0,8) .. controls +(5,0) and +(0,4) .. (6,3) -- (6,0.5) .. controls +(0,-0.3) and +(0.3,0) .. (5.5,0) -- (3,0);
\draw[gray,densely dashed,thick] (-1,1) -- (-2,2) (0,-6) -- (0,6);
\end{tikzpicture}
\caption{Ribbon complex for the borromean link} \label{figSeifertBorro}
\end{center}
\end{figure}
\begin{corollary} \label{corslicegenusTgenus}
For any algebraically split link $L$, $g_s(L)\leq T_s(L)$.
\end{corollary}
\begin{proof}
Figure~\ref{figSeifertBorro} shows that the borromean link bounds a slice complex of genus $1$. Take a strict cobordism from $L$ to a split union of $b=T_s(L)$ borromean links and complete it into a slice complex of genus $b$ for $L$ by gluing a genus--$1$ slice complex to each borromean link.
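In terms of genera, the strict cobordism contributes $0$ and each of the $b$ glued complexes contributes $1$, so that the resulting slice complex for $L$ has genus $0+b\cdot 1=T_s(L)$, whence $g_s(L)\leq T_s(L)$.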
\end{proof}
\section{$T$--genera and $\Delta$--distance}
The $\Delta$--move on links represented in Figure~\ref{figdeltamove} was introduced by Murakami and Nakanishi \cite{MuNa}. Note that it preserves the linking numbers between the components of the link. The following result gives the converse \cite[Theorem~1.1]{MuNa}.
\begin{theorem}[Murakami--Nakanishi] \label{thMuNa}
Two links are related by a sequence of $\Delta$--moves if and only if they have the same number of components and their components have the same pairwise linking numbers.
\end{theorem}
A $\Delta$--move can be realized by {\em gluing a borromean link}, see Figure~\ref{figGlueBorro}. This will be useful for relating the $T$--genera and the $\Delta$--distance. The equality $T_s(K)=s_\Delta(K)$ for a knot $K$ was given in \cite[Theorem 2]{KMS}.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{scope} [yshift=0.8cm,scale=0.35]
\foreach \t in {0,120,240} {
\draw[rotate=\t] (-1.77,0.27) arc (135:350:2.5);
\draw[rotate=\t] (2.46,-1.07) arc (10:105:2.5);}
\draw (0,-6) node {Borromean link};
\end{scope}
\begin{scope} [xshift=7cm,scale=0.17]
\foreach \t in {0,120,240} {
\draw[rotate=\t] (-1.77,0.27) arc (135:225:2.5);
\draw[rotate=\t] (1.77,-3.27) arc (315:350:2.5);
\draw[rotate=\t] (2.46,-1.07) arc (10:105:2.5);
\draw[rotate=\t,line width=10pt,white] (-14,-6) -- (-7,-6);
\draw[rotate=\t] (-1.77,-3.27) .. controls +(3,-3) and +(3,0) .. (-7,-6) -- (-14,-6);
\draw[rotate=\t] (1.77,-3.27) .. controls +(-3,-3) and +(-3,0) .. (7,-6) -- (14,-6);}
\draw[line width=10pt,white] (-14,-6) -- (-8,-6);
\draw (-14,-6) -- (-7,-6);
\draw (0,-13) node {Gluing a borromean link};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Borromean link and $\Delta$--move} \label{figGlueBorro}
\end{figure}
\begin{theorem} \label{thDeltaDistance}
For any algebraically split link $L$, $T_s(L)=s_{\scriptscriptstyle{\Delta}}(L)$ and $T_r(L)=r_{\scriptscriptstyle{\Delta}}(L)$.
\end{theorem}
\begin{proof}
The link $L$ can be obtained from a slice link, which bounds a marked normal singular disks complex $\Sigma$ with no borromean triple point and no clasp, by $s_{\scriptscriptstyle{\Delta}}(L)$ $\Delta$--moves. Realize each of these $\Delta$--moves by gluing a borromean link. The borromean link bounds a complex of three disks that intersect along three ribbons, with a single borromean triple point, see Figure~\ref{figDisksBorromean}. Hence, when gluing a borromean link, we glue to $\Sigma$ such a complex of three disks with three bands. The bands may meet $\Sigma$ and create new ribbons, but we can assume that no other kind of singularity is added. Thus we obtain a marked normal singular disks complex for $L$ with $s_{\scriptscriptstyle{\Delta}}(L)$ borromean triple points and no clasp, proving $T_s(L)\leq s_{\scriptscriptstyle{\Delta}}(L)$. Similarly, $L$ can be obtained from a ribbon link, which bounds a ribbon disks complex, by $r_{\scriptscriptstyle{\Delta}}(L)$ $\Delta$--moves, so that $T_r(L)\leq r_{\scriptscriptstyle{\Delta}}(L)$. It remains to prove the reverse inequalities.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{scope}
\draw (0.5,0) node {$\varnothing$} (4,0) circle (0.5);
\node (A+) at (1,0.3) {};
\node (A-) at (1,-0.3) {};
\node (B+) at (3,0.3) {};
\node (B-) at (3,-0.3) {};
\draw[->] (A+) -- (B+) node[above,pos=0.5] {{\em birth}};
\draw[->] (B-) -- (A-) node[below,pos=0.5] {{\em death}};
\end{scope}
\begin{scope} [xshift=9cm]
\foreach \x in {0,5.4} {
\draw[dashed] (\x-1.4,0.5) arc (90:270:0.5) (\x,0.5) arc (90:-90:0.5);}
\draw (-1.4,0.5) arc (90:-90:0.5) (0,0.5) arc (90:270:0.5);
\foreach \s in {-1,1} {
\draw (4,0.5*\s) .. controls +(0.3,0) and +(-0.3,0) .. (4.7,0.1*\s) .. controls +(0.3,0) and +(-0.3,0) .. (5.4,0.5*\s);}
\node (A+) at (1,0.3) {};
\node (A-) at (1,-0.3) {};
\node (B+) at (3,0.3) {};
\node (B-) at (3,-0.3) {};
\draw[->] (A+) -- (B+) node[above,pos=0.5] {{\em fusion}};
\draw[->] (B-) -- (A-) node[below,pos=0.5] {{\em fission}};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Birth, death, fusion and fission moves} \label{figfusionfission}
\end{figure}
We shall prove that $T_s(L)$ can be reduced by gluing a borromean link. There is a strict cobordism $S$ from $L$ to the split union of $t=T_s(L)$ borromean links, which we denote $B^t$. This cobordism can be rearranged into the following pattern, where $\mathcal{O}^k$ is a trivial $k$--component link, $J$ is a link and the different steps are described in Figure~\ref{figfusionfission}.
\begin{center}
\begin{tikzpicture} [xscale=1.5]
\node (0) at (0,0) {$L$};
\node (1) at (2,0) {$L\sqcup\mathcal{O}^k$};
\node (2) at (4,0) {$J$};
\node (3) at (6,0) {$B^t\sqcup\mathcal{O}^\ell$};
\node (4) at (8,0) {$B^t$};
\draw[->] (0) -- (1) node[below,pos=0.5] {{\em births}};
\draw[->] (1) -- (2) node[below,pos=0.5] {{\em fusions}};
\draw[->] (2) -- (3) node[below,pos=0.5] {{\em fissions}};
\draw[->] (3) -- (4) node[below,pos=0.5] {{\em deaths}};
\end{tikzpicture}
\end{center}
We can assume that each connected component of $S$ contains one component of $L$ and one component of $J$. One can get a trivial 3--component link as a band sum of two borromean links, see Figure~\ref{figBorroToTrivial}. Perform such a band sum, gluing a borromean link $B$ to our link $B^t$. The bands gluing $B$ to $B^t$ can be glued before the fissions and deaths, hence be glued to $J$. Then these bands can be slid to be glued onto parts of the link $L$. Hence we can start by gluing $B$ to $L$ and then perform the rest of the cobordism. This provides a strict cobordism from $L\sharp B$ to $B^{t-1}\sqcup\mathcal{O}^3$. Filling in the components of $\mathcal{O}^3$ with disks, we finally get a strict cobordism from the connected sum $L\sharp B$ to $B^{t-1}$. In other words, we get a cobordism from a link $L'$, obtained from $L$ by gluing a borromean link, to the disjoint union of $t-1$ borromean links; in particular $T_s(L')\leq T_s(L)-1$. It follows that $s_{\scriptscriptstyle{\Delta}}(L)\leq T_s(L)$.
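Spelling out the induction: write $L^{(0)}=L$ and let $L^{(j+1)}$ be obtained from $L^{(j)}$ as above, so that $T_s(L^{(j)})\leq t-j$. The link $L^{(t)}$ then bounds a marked normal singular disks complex with no clasp and no borromean triple point, hence is slice by Corollary~\ref{corCharSliceGenus}. Since each step is a single $\Delta$--move,
\[
s_{\scriptscriptstyle{\Delta}}(L)\leq 1+s_{\scriptscriptstyle{\Delta}}\big(L^{(1)}\big)\leq\dots\leq t+s_{\scriptscriptstyle{\Delta}}\big(L^{(t)}\big)=t=T_s(L).
\]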
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [yshift=0.8cm,scale=0.3]
\foreach \s in {1,-1} {
\foreach \t in {90,210,330} {
\draw[xshift=6*\s cm,xscale=-\s,rotate=\t] (-1.77,0.27) arc (135:350:2.5);
\draw[xshift=6*\s cm,xscale=-\s,rotate=\t] (2.46,-1.07) arc (10:105:2.5);}
\draw[white,line width=5pt] (2*\s,-1) -- (2*\s,1);
\draw[white,line width=5pt] (5.5*\s,3.5) -- (7*\s,3.8);
\draw[white,line width=5pt] (5.5*\s,-3.5) -- (7*\s,-3.8);}
\foreach \s in {1,-1} {
\draw[rounded corners=3pt] (-2.2,1*\s) -- (-2.1,0.7*\s) -- (2.1,0.7*\s) -- (2.2,1*\s);
\foreach \t in {-1,1} {
\draw (-7*\t,3.8*\s) .. controls +(0.5*\t,0) and +(-5*\t,0) .. (0,6*\s);
\draw (-5.5*\t,3.5*\s) .. controls +(-0.3*\t,0.2*\s) and +(-4*\t,0) .. (0,5*\s);}}
\end{tikzpicture}
\end{center}
\caption{A trivial $3$--component link as a band sum of two borromean links} \label{figBorroToTrivial}
\end{figure}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [scale=0.6]
\newcommand{\tourr}{
\draw[very thick,dashed] (4,-2) node[above] {$\partial\tilde\Sigma$} -- (5,-2) (3,2) -- (4,2);}
\begin{scope}
\tourr
\draw[very thick] (3,2) -- (0,2) arc (90:270:2) -- (4,-2);
\draw (0,2) -- (1,-2) (2,2) node[above] {$\tilde\gamma$} -- (2.5,-2) (-0.5,1) -- (2.5,1.5) (1.8,-1) -- (2.8,-1.2);
\draw[dashed] (-1.6,-1) node[above right] {$\tilde\eta$} -- (2.25,0);
\end{scope}
\draw (7,0) node {$\rightsquigarrow$};
\begin{scope} [xshift=11cm]
\tourr
\draw[very thick] (3,2) -- (0,2) arc (90:210:2) .. controls +(0.2,-0.2) and +(-0.4,0.4) .. (2.4,0) .. controls +(0.4,-0.4) and +(-0.2,0.2) .. (-1.41,-1.41) arc (225:270:2) -- (4,-2);
\draw (0,2) -- (0.58,-0.32) (0.7,-0.8) -- (1,-2) (2,2) -- (2.24,0.08) (2.28,-0.24) -- (2.5,-2) (-0.5,1) -- (2.5,1.5) (1.8,-1) -- (2.8,-1.2);
\end{scope}
\end{tikzpicture}
\caption{Sliding $\partial\Sigma$ along a path} \label{figSlideBoundary}
\end{center}
\end{figure}
Now, take a $T$--ribbon disks complex $\Sigma$ for $L$ with $b=T_r(L)$ borromean triple points. Let $\tilde\Sigma$ be its preimage. Assume there is a ribbon $\gamma$ on $\Sigma$ which contains more than one triple point; denote by $\tilde\gamma$ the corresponding $b$--line on $\tilde\Sigma$. Take a path $\tilde\eta$ on $\tilde\Sigma$ joining a point of $\partial\tilde\Sigma$ to a point of $\tilde\gamma$ between two preimages of triple points, such that $\tilde\eta$ does not meet any $i$--line --- which is possible since the $i$--lines are disjoint; denote by $\eta\subset\Sigma$ the image of $\tilde\eta$. Slide the boundary of $\Sigma$ along $\eta$, see Figure~\ref{figSlideBoundary}. This results in an isotopy of $L$. Since $\tilde\eta$ may intersect some $b$--lines, some ribbons may have been divided into more ribbons, but no other singularity appears. Moreover, $\gamma$ has been divided into ribbons with fewer triple points. Iterating, we can assume that any ribbon on $\Sigma$ contains at most one triple point.
Fix a triple point $p$ of $\Sigma$. For each preimage $\tilde p$ of $p$, take a neighborhood in $\tilde\Sigma$ of the union of the corresponding $i$--line with a part of the corresponding $b$--line joining $\tilde p$ to $\partial\tilde\Sigma$, see Figure~\ref{figRemoveTriplePoint}. These neighborhoods are three disks with only three ribbons meeting at a borromean triple point. Cutting these three disks amounts to cutting a borromean link. The same modification can be performed by a single $\Delta$--move. This implies $r_{\scriptscriptstyle{\Delta}}(L)\leq T_r(L)$.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{scope} [xscale=0.5,yscale=0.3]
\draw[gray!40,fill=gray!40] (1.5,0) .. controls +(0,2) and +(0,-1) .. (0.6,2.3) .. controls +(0,1) and +(-0.4,-0.2) .. (2,3.8) .. controls +(0.4,0.2) and +(0,1) .. (3.4,3.7) .. controls +(0,-1) and +(0,3) .. (2.5,0);
\draw[very thick] (0,0) -- (4,0) (4,6) node[above] {$\partial\tilde\Sigma$} -- (0,6);
\draw[very thick,dashed] (-1,0) -- (5,0) (5,6) -- (-1,6);
\draw (2,6) -- (2,0) (1,2.5) -- (3,3.5);
\end{scope}
\draw (4,0.9) node {$\sim$};
\begin{scope} [xshift=6cm,xscale=0.5,yscale=0.3]
\draw[gray!40,fill=gray!40] (3,0) -- (1,0) .. controls +(1,0) and +(0,1) .. (1,-1.5) .. controls +(0,-1) and +(-1,0) .. (2,-3) .. controls +(1,0) and +(0,-1) .. (3,-1.5) .. controls +(0,1) and +(-1,0) .. (3,0);
\draw[very thick] (0,0) -- (1,0) .. controls +(1,0) and +(0,1) .. (1,-1.5) .. controls +(0,-1) and +(-1,0) .. (2,-3) .. controls +(1,0) and +(0,-1) .. (3,-1.5) .. controls +(0,1) and +(-1,0) .. (3,0) -- (4,0) (4,6) node[above] {$\partial\tilde\Sigma$} -- (0,6);
\draw[very thick,dashed] (-1,0) -- (1,0) (3,0) -- (5,0) (5,6) -- (-1,6);
\draw (2,6) -- (2,-3) (1.5,-1.8) -- (2.5,-1.3);
\end{scope}
\end{tikzpicture}
\caption{Isolate a borromean triple point} \label{figRemoveTriplePoint}
\end{center}
\end{figure}
\end{proof}
We recover the result of Corollary~\ref{corslicegenusTgenus} and get the ribbon counterpart of it.
\begin{corollary} \label{corgenusTgenus}
For any algebraically split link $L$, $g_s(L)\leq T_s(L)$ and $g_r(L)\leq T_r(L)$.
\end{corollary}
\begin{proof}
By Theorem~\ref{thDeltaDistance}, $T_r(L)=r_{\scriptscriptstyle{\Delta}}(L)$, so that $L$ can be obtained from a ribbon link, bounding a ribbon disks complex, by a sequence of $T_r(L)$ $\Delta$--moves. We have seen that a $\Delta$--move can be realized by gluing a borromean link; at the level of complexes, this can be achieved by gluing the genus--$1$ ribbon complex represented in Figure~\ref{figSeifertBorro}. The same holds for the slice version.
\end{proof}
\section{Arf invariant and Milnor's triple linking number}
Recall that the Arf invariant is a $\mathbb{Z}/2\mathbb{Z}$--valued concordance invariant of knots which takes the value $0$ on the trivial knot and the value $1$ on the trefoil knot. The following result \cite[Theorem 2]{Robertello} makes it possible to extend the Arf invariant to algebraically split links (Robertello gives this result more generally for the so-called proper links).
\begin{theorem}[Robertello]
Let $L$ be an algebraically split link. If there are strict cobordisms from two knots $K$ and $K'$ to $L$, then the Arf invariants $\mathrm{Arf}(K)$ and $\mathrm{Arf}(K')$ are equal.
\end{theorem}
For an algebraically split link $L$, define the {\em Arf invariant} of $L$ as $\mathrm{Arf}(L)=\mathrm{Arf}(K)$ for any knot $K$ such that there is a strict cobordism from $K$ to $L$. The following result was established in \cite{KMS} in the case of knots.
\begin{proposition} \label{propArf}
Let $L$ be an algebraically split link. Let $\Sigma$ be a marked normal singular disks complex for $L$ with no clasp and $b$ borromean triple points. Then $\mathrm{Arf}(L)=b\bmod 2$.
\end{proposition}
\begin{proof}
Let $K$ be a knot obtained from $L$ by merging the components, so that there is a strict cobordism from $K$ to $L$. By Proposition~\ref{propcob}, there is a strict cobordism from $L$ to a split union $B^b$ of $b$ borromean links. Composing these cobordisms gives a strict cobordism from $K$ to $B^b$. Hence $\mathrm{Arf}(K)=\mathrm{Arf}(L)=\mathrm{Arf}(B^b)$. Now, the trefoil knot can be obtained from the borromean link by merging the three components, see Figure~\ref{figBorroToTrefoil}, hence $\mathrm{Arf}(B)=1$. This concludes the proof, since the Arf invariant is additive under connected sum and thus under split union.
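Explicitly, additivity gives
\[
\mathrm{Arf}(L)=\mathrm{Arf}(B^{b})=\sum_{i=1}^{b}\mathrm{Arf}(B)=b\bmod 2 .
\]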
\end{proof}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [scale=0.35]
\begin{scope}
\foreach \t in {90,210,330} {
\draw[rotate=\t] (-1.77,0.27) arc (135:350:2.5);
\draw[rotate=\t] (2.46,-1.07) arc (10:105:2.5);}
\foreach \s in {1,-1} {
\draw[white,line width=5pt] (0.5,3.5*\s) -- (-1,3.8*\s) (3,2*\s) -- (3.8,1*\s);
\draw (-1,3.8*\s) .. controls +(0.5,0) and +(-1.3,0.7*\s) .. (3,5.2*\s) .. controls +(1.8,-\s) and +(-0.3,0.5*\s) .. (3.8,1*\s);
\draw (0.5,3.47*\s) .. controls +(-0.3,0.2*\s) and +(-0.9,0.5*\s) .. (2.5,4.3*\s) .. controls +(1.3,-0.7*\s) and +(0.5,-0.4*\s) .. (3,2*\s);}
\end{scope}
\draw (8,0) node {$\sim$};
\begin{scope} [xshift=15cm]
\foreach \t in {90,210,330} {
\draw[rotate=\t] (-1.8,0.3) .. controls +(-2.4,-2.6) and +(-1,-3.1) .. (1.1,0.1);}
\end{scope}
\end{tikzpicture}
\end{center}
\caption{A trefoil knot as a band sum of a borromean link} \label{figBorroToTrefoil}
\end{figure}
This result provides an interesting geometric realization of the Arf invariant of an algebraically split link $L$: the Arf invariant of $L$ is not only the parity of $T_s(L)$, but more generally the parity of the number of borromean triple points on any marked normal singular disks complex for $L$ with no clasp. This was first noticed by Kaplan for knots and $T$--ribbon disks \cite[Theorem 5.3]{Kaplan}.
We now give an expression of Milnor's triple linking number in terms of borromean triple points. The Milnor invariant $\mu(L)$ of an ordered $3$--component link $L=L_1\sqcup L_2\sqcup L_3$ is a $\mathbb{Z}$--valued concordance invariant introduced by Milnor; we refer the reader to \cite{Milnor} (see also \cite{Meilhan}) for its definition and main properties. Given a marked singular disks complex $D=D_1\cup D_2\cup D_3$ for $L$ with no clasp, define the {\em borromean triple point number} $n_{bt}(D)$ of $D$ as the number of borromean triple points of $D$ involving its three components, counted with signs. Note that changing the orientation of one component of $L$, as well as permuting two components, changes the sign of $n_{bt}(D)$; this also holds for $\mu(L)$.
\begin{proposition} \label{propMilnor}
Let $L=L_1\sqcup L_2\sqcup L_3$ be an ordered algebraically split 3--component link. Let $D=D_1\cup D_2\cup D_3$ be a marked singular disks complex for $L$ with no clasp. Then $\mu(L)=n_{bt}(D)$.
\end{proposition}
\begin{proof}
Proposition~\ref{propcob} provides a strict cobordism $S\subset S^3\times[0,1]$ from $L\subset S^3\times\{0\}$ to a split union $B^t$ of $t=T_s(L)$ borromean links bounding a $T$--ribbon complex $R^t\subset S^3\times\{1\}$, such that $S\cup R^t$ projects onto $D$. This cobordism can be worked out as in the proof of Theorem~\ref{thDeltaDistance} in order to get a strict cobordism, from a connected sum $L\sharp B$ to a sublink $B^{t-1}$ of $B^t$ made of $t-1$ borromean links, which projects to a marked singular disks complex for $L\sharp B$. Note that the sign of the remaining borromean triple points is unchanged. Note also that the connected components of $L$ involved in the connected sum correspond to the connected components of $D$ involved in the cancelled triple point.
If the borromean link is glued along one or two components of the initial link, then the invariance of $\mu$ under link-homotopy --- a relation which allows isotopy and self-crossing changes of each component --- shows that the value of $\mu(L)$ is unchanged, as is the value of $n_{bt}(D)$. If the three components of the borromean link are glued to the three components of the initial link, then it is a result of Krushkal \cite[Lemma 9]{Krushkal} that Milnor's triple linking number is additive under such a gluing. Hence $\pm1$ is added to $\mu(L)$, depending on the orientations; the same value is added to $n_{bt}(D)$.
Repeat this operation until there is no more borromean triple point on $D$. At that stage, $n_{bt}(D)=0$ and $L$ has been turned into a slice link, so that $\mu(L)=0$ since $\mu$ is a concordance invariant. It follows that the initial values of $n_{bt}(D)$ and $\mu(L)$ were equal.
\end{proof}
\begin{corollary}
Let $L=L_1\sqcup L_2\sqcup L_3$ be an ordered algebraically split 3--component link. Then $|\mu(L)|\leq T_s(L)$.
\end{corollary}
\begin{corollary} \label{corMilnor}
Let $L=L_1\sqcup L_2\sqcup L_3$ be an ordered algebraically split 3--component link. Let $D=D_1\cup D_2\cup D_3$ be a $T$--ribbon disks complex for $L$. Then $\mu(L)$ equals the algebraic intersection number $\langle D_1,D_2,D_3\rangle$.
\end{corollary}
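For instance, for the borromean link $B$ with the $T$--ribbon disks complex $D=D_1\cup D_2\cup D_3$ of Figure~\ref{figDisksBorromean}, the three disks meet at a single borromean triple point, so that
\[
\mu(B)=\langle D_1,D_2,D_3\rangle=\pm1,
\]
the sign depending on the orientations and on the ordering of the components.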
\section{Examples} \label{secex}
Let $B_n$ be the link obtained by 0--cabling $(n-1)$ times a component of the borromean link~$B$, see Figure~\ref{figlinkB4}.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\foreach \x in {1,2,3,4} {\draw (\x,0) ellipse (0.2 and 0.5);}
\draw[rounded corners=6pt] (0,0) -- (0,-1) -- (5,-1) -- (5,1) -- (0,1) -- (0,0);
\draww{ (-0.1,0.3) arc (90:270:0.3) -- (0.3,-0.3) (5.1,0.3) arc (90:-90:0.3) -- (4.3,-0.3);}
\foreach \x in {0,1,...,3} {\foreach \y in {-0.3,0.3} {
\draww{(\x +0.3,\y) -- (\x +1,\y);}}}
\draw (0.1,0.3) -- (0.3,0.3) (4.3,0.3) -- (4.9,0.3);
\end{tikzpicture}
\caption{The link $B_4$} \label{figlinkB4}
\end{center}
\end{figure}
\begin{proposition}
For all $n>0$, $T_r(B_n)=T_s(B_n)=n$ and $g_r(B_n)=g_s(B_n)=1$.
\end{proposition}
\begin{proof}
First note that the borromean link $B$ satisfies $T_r(B)=T_s(B)=1$: Figure~\ref{figDisksBorromean} gives a $T$--ribbon disks complex for $B$ with one borromean triple point, and $B$ is not slice since $|\mu(B)|=1$.
In Figure~\ref{figSeifertBorro}, taking $n$ parallel copies of the green disk provides a genus--1 ribbon complex for~$B_n$, so that $g_r(B_n)\leq1$. Moreover, $B_n$ is not slice since it has a non-slice sublink, namely $B$, thus $g_s(B_n)\geq1$.
In Figure~\ref{figlinkB4}, the obvious disks bounded by the components of $B_n$ define a $T$--ribbon disks complex with exactly $n$ borromean triple points, giving $T_r(B_n)\leq n$. Now, take a marked normal singular disks complex $\mathcal D=D\cup D'\cup\left(\cup_{1\leq i\leq n}D_i\right)$ for $B_n$ with no clasp, where the disks $D_1,\dots,D_n$ are bounded by the parallel components of $B_n$. For any $1\leq i\leq n$, the number of borromean triple points on $\mathcal D$ defined by the intersection $D\cap D'\cap D_i$ is at least 1 since $B$ is not slice: indeed, by Proposition~\ref{propMilnor}, these triple points, counted with signs, compute the triple linking number $\mu=\pm1$ of the borromean sublink bounded by $D\cup D'\cup D_i$. Hence $T_s(B_n)\geq n$.
\end{proof}
Let $K_n$ be the {\em twist knot} defined by $n$ half-twists, see the left-hand side of Figure~\ref{figtwistknots}. It is easy to see that the genus of $K_n$ is~$1$. On the other hand, one can check that $K_0$ and $K_4$ are slice. A famous result of Casson and Gordon \cite{CG} states that these are the only slice twist knots.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [scale=0.4]
\newcommand{\crossing}[2]{
\begin{scope} [xshift=#1cm,yshift=#2cm]
\draw (0,0) .. controls +(0,1) and +(0,-1) .. (1,2);
\draww{(1,0) .. controls +(0,1) and +(0,-1) .. (0,2);}
\end{scope}}
\begin{scope}
\foreach \y in {0,2,...,6} {\crossing0\y}
\draw (1,0) .. controls +(0,-2) and +(0,-7) .. (-2,4) .. controls +(0,4) and +(-1,0) .. (0,8.2) -- (0.8,8.2);
\draww{(0,0) .. controls +(0,-2) and +(0,-7) .. (3,4) .. controls +(0,4) and +(1,0) .. (1.2,8.2);}
\draw (1,8) -- (1,8.4) .. controls +(0,1) and +(0,1) .. (0,8.4);
\draw (-1,3) node[rotate=90] {{\tiny $n$ half-twists}};
\draw[gray] (-0.3,3) node {$\left\lbrace\resizebox{0cm}{1.8cm}{\phantom{rien}}\right.$};
\end{scope}
\node (A) at (4,3.5) {}; \node (B) at (9,3.5) {};
\draw[->] (A) -- (B) node[below,pos=0.5] {{\small $\Delta$--move}};
\begin{scope} [xshift=12.5cm]
\foreach \y in {0,2,...,6} {\crossing0\y}
\draw (1,0) .. controls +(0,-2) and +(0,-7) .. (-2,4) .. controls +(0,2) and +(-1,0) .. (-0.2,6);
\draww{(0,0) .. controls +(0,-2) and +(0,-7) .. (3,4) .. controls +(0,2) and +(1,0) .. (1.2,6) -- (0.2,6);}
\draw (1,8) .. controls +(0,1) and +(0,1) .. (0,8);
\end{scope}
\draw (17,3.5) node {$=$};
\begin{scope} [xshift=20cm,yshift=1.5cm]
\foreach \y in {0,2} {\crossing0\y}
\draw (1,0) .. controls +(0,-1) and +(0,-4) .. (-1.5,2) .. controls +(0,2) and +(-1,0) .. (0,4.2) -- (0.8,4.2);
\draww{(0,0) .. controls +(0,-1) and +(0,-4) .. (2.5,2) .. controls +(0,2) and +(1,0) .. (1.2,4.2);}
\draw (1,4) -- (1,4.4) .. controls +(0,1) and +(0,1) .. (0,4.4);
\end{scope}
\end{tikzpicture}
\caption{A twist knot and a $\Delta$--move on it} \label{figtwistknots}
\end{center}
\end{figure}
\begin{theorem}[Casson--Gordon]
For $n\neq0,4$, $g_s(K_n)=1$.
\end{theorem}
\begin{lemma}
For $n\neq0,4$, $c_4^b(K_n)=c_4(K_n)=1$.
\end{lemma}
\begin{proof}
It is a simple and well-known fact that the unknotting number bounds the 4--dimensional clasp number from above. Clearly, the unknotting number of a non-trivial twist knot equals~$1$; moreover, $c_4^b(K_n)$ and $c_4(K_n)$ are positive since $K_n$ is not slice for $n\neq0,4$.
\end{proof}
\begin{proposition}
For all $n\geq0$, $|T_r(K_{n+2})-T_r(K_n)|=|T_s(K_{n+2})-T_s(K_n)|=1$.
\end{proposition}
\begin{proof}
For $n\geq0$, Figure~\ref{figtwistknots} shows that $K_{n+2}$ can be turned into $K_n$ by a single $\Delta$--move, which modifies $T_s$ and $T_r$ by at most one according to Theorem~\ref{thDeltaDistance}. Proposition~\ref{propArf} shows that they are indeed modified: a single $\Delta$--move changes the parity of the number of borromean triple points, hence the Arf invariant, so that $T_s$ and $T_r$ must change parity.
\end{proof}
\begin{corollary}
For all $n>1$, $T_r(K_{2n-1})\leq n$ and $T_r(K_{2n})\leq n-2$.
\end{corollary}
\begin{proof}
The move of Figure~\ref{figtwistknots} changes $K_1$ into the trivial knot, so that $T_r(K_1)=1$. The knot $K_4$ is slice, so that $T_r(K_4)=0$.
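Explicitly, combining this with the previous proposition gives, for $n>1$,
\[
T_r(K_{2n-1})\leq T_r(K_1)+(n-1)=n\qquad\textrm{and}\qquad T_r(K_{2n})\leq T_r(K_4)+(n-2)=n-2.
\]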
\end{proof}
The following result shows the failure of Lemma~\ref{lemmagscb} for algebraically split links.
\begin{lemma}
Let $L$ be the split union of a non-slice twist knot with its mirror image. Then $g_s(L)=2$ and $c_4^b(L)=1$.
\end{lemma}
\begin{proof}
The slice genus follows from that of non-slice twist knots. Since $L$ is not slice, $c_4^b(L)>0$. Now, each component of $L$ bounds a disk in $B^4$ with exactly one double point and the two disks can be chosen to be the mirror image of each other, in order to get two double points with opposite sign.
\end{proof}
This lemma implies that the difference $g_s-c_4^b$, and thus $T_s-c_4^b$, can be arbitrarily large for split links: take the split union of arbitrarily many copies of the link $L$ in the lemma.
\section{Colored links}
A {\em colored link} is a link $L$ in $S^3$ with a given partition into sublinks $L=L_1\sqcup\dots\sqcup L_\mu$. A {\em colored complex for $L$} is a union of compact surfaces $\Sigma=\cup_{1\leq i\leq \mu}\Sigma_i$ such that $\partial\Sigma_i=L_i$ for all $i$. As previously, we have notions of normal singular, ribbon, $T$--ribbon and slice complexes, as well as a notion of marking.
Given two links $K=\sqcup_{i=1}^kK_i$ and $J=\sqcup_{j=1}^\ell J_j$, where the $K_i$ and $J_j$ are knots, the {\em linking number of $K$ and $J$} is $\textrm{\textnormal{lk}}(K,J)=\sum_{i=1}^k\sum_{j=1}^\ell\textrm{\textnormal{lk}}(K_i,J_j)$.
A colored link $L$ is {\em algebraically $c$--split} if its sublinks have trivial pairwise linking numbers. We will generalize Kaplan's result, proving that any algebraically $c$--split colored link bounds a $T$--ribbon genus--$0$ colored complex. Although Kaplan's proof generalizes to this setting, we present here an alternative proof based on Theorem~\ref{thMuNa}.
We start with preliminary results.
\begin{lemma} \label{lemmaLinkingsA}
Fix a positive integer $n$ and integers $\ell_{ij}$ for $1\leq i<j\leq n$. There exists a link $K$ with $n$ connected components $K_i$ such that $\textrm{\textnormal{lk}}(K_i,K_j)=\ell_{ij}$ for all $i<j$ and $K$ bounds a genus--$0$ compact surface embedded in $S^3$.
\end{lemma}
\begin{proof}
Take a trivial link $K$ with $n$ components $K_i$. It bounds an embedded genus--$0$ surface~$\Sigma$, compact and connected. For given $i<j$, the linking $\textrm{\textnormal{lk}}(K_i,K_j)$ can be modified as follows. Take a band $B=[0,1]\times[0,1]$ on $\Sigma$ such that $\{0\}\times[0,1]=B\cap K_i$, $\{1\}\times[0,1]=B\cap K_j$ and $(0,1)\times[0,1]\subset\mathrm{Int}(\Sigma)$. Twist the band $B$ around $\left\lbrace\frac12\right\rbrace\times[0,1]$, see Figure~\ref{figTwist}. This adds the number of twists to the linking number $\textrm{\textnormal{lk}}(K_i,K_j)$ without modifying the other linking numbers.
\end{proof}
\begin{figure} [htb]
\begin{center}
\begin{tikzpicture} [xscale=0.6,yscale=0.3]
\begin{scope}
\draw[color=gray!35,fill=gray!35,rounded corners=5pt] (0,-3) -- (1,-1) -- (5,-1) -- (6,-3) -- (6,3) -- (5,1) -- (1,1) -- (0,3) --(0,-3);
\draw[color=gray!35,fill=gray!35] (0,-3) -- (1,-1) -- (5,-1) -- (6,-3) -- (6,3) -- (5,1) -- (1,1) -- (0,3) --(0,-3);
\draw[rounded corners=5pt] (0,-3) -- (1,-1) -- (5,-1) -- (6,-3) (6,3) -- (5,1) -- (1,1) -- (0,3);
\draw (0.3,-0.8) node {$\Sigma$};
\draw[->] (6,3) -- (5.5,2) node[above left] {$K_i$};
\draw[-<] (6,-3) -- (5.5,-2) node[below left] {$K_j$};
\end{scope}
\draw[->,very thick] (7.5,0) -- (8.5,0);
\begin{scope} [xshift=10cm]
\draw[color=gray!35,fill=gray!35] (0,3) -- (1.5,0) -- (0,-3) (6,3) -- (4.5,0) -- (6,-3);
\foreach \x in {0,1,2} {
\draw[color=gray!35,fill=gray!35,rounded corners=10pt,xshift=\x cm] (1.5,0) -- (2,1) -- (2.5,0) (2.5,0) -- (2,-1) -- (1.5,0);}
\draw[rounded corners=10pt] (0,-3) -- (1.4,-0.2) (1.6,0.2) -- (2,1) -- (3,-1) -- (3.4,-0.2) (3.6,0.2) -- (4,1) -- (6,-3) (6,3) -- (4.6,0.2) (4.4,-0.2) -- (4,-1) -- (3,1) -- (2.6,0.2) (2.4,-0.2) -- (2,-1) -- (0,3);
\end{scope}
\end{tikzpicture}
\caption{Twisting a band in $\Sigma$} \label{figTwist}
\end{center}
\end{figure}
\begin{lemma} \label{lemmaLinkingsB}
Let $L=\sqcup_{1\leq i\leq\mu}L_i$ be an algebraically $c$--split colored link. There exists an algebraically $c$--split colored link $J=\sqcup_{1\leq i\leq\mu}J_i$ whose knot components have the same pairwise linking numbers as those of $L$ and which bounds a $T$--ribbon genus--$0$ colored complex in $S^3$.
\end{lemma}
\begin{proof}
First, thanks to Lemma~\ref{lemmaLinkingsA}, define $J$ as the split union of links $J_i$ that realize the pairwise linking numbers of the $L_i$ and bound embedded genus--$0$ surfaces $\Sigma_i$ in $S^3$. Write $J_i$ as the disjoint union of knots $J_i=\sqcup_{1\leq \ell\leq k_i}J_{i\ell}$. Fix $i,j,\ell,m$ such that $1\leq i<j\leq\mu$, $1\leq\ell\leq k_i$ and $1\leq m<k_j$. Take a band in $\Sigma_j$ joining $J_{jm}$ to $J_{jk_j}$. Link this band around $J_{i\ell}$ in order to realize the desired linking $\textrm{\textnormal{lk}}(J_{i\ell},J_{jm})$, see Figure~\ref{figLink}. This adds ribbon intersections on the complex $\cup_{i=1}^\mu\Sigma_i$. Then, for $\ell<k_i$, realize the linking $\textrm{\textnormal{lk}}(J_{i\ell},J_{jk_j})$ using a band on $\Sigma_i$ joining $J_{i\ell}$ to $J_{ik_i}$. Since $J$ remains algebraically $c$--split, all linking numbers are finally realized.
\end{proof}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [xscale=0.6,yscale=0.3]
\begin{scope}
\draw[color=gray!30,fill=gray!30] (0,0) -- (2,0) -- (2,10) -- (0,10);
\draw (0,10) node[below right] {$\Sigma_i$};
\draw[->] (2,10) -- (2,7) (2,0) -- (2,7) node[right] {$J_{i\ell}$};
\draw[color=gray!50,fill=gray!50] (4,0) -- (5.5,0) -- (5.5,10) -- (4,10);
\draw (4,10) node[below right] {$\Sigma_j$};
\draw[->] (4,0) -- (4,3) (4,10) -- (4,3) node[left] {$J_{jm}$};
\draw[->] (5.5,10) -- (5.5,5) (5.5,0) -- (5.5,5) node[right] {$J_{jk_j}$};
\end{scope}
\draw[->,very thick] (9,5) -- (10,5);
\begin{scope} [xshift=12.5cm]
\draw[color=gray!50,fill=gray!50] (0.6,7.5) .. controls +(0,-1) and +(0,1) .. (4,5) -- (5.5,5) .. controls +(0,1) and +(0,-1) .. (1.6,7.5)
(0.6,2.5) .. controls +(0,-1) and +(0,1) .. (4,0) -- (5.5,0) .. controls +(0,1) and +(0,-1) .. (1.6,2.5);
\draw (0.6,7.5) .. controls +(0,-1) and +(0,1) .. (4,5) (5.5,5) .. controls +(0,1) and +(0,-1) .. (1.6,7.5)
(0.6,2.5) .. controls +(0,-1) and +(0,1) .. (4,0) (5.5,0) .. controls +(0,1) and +(0,-1) .. (1.6,2.5);
\draw[color=gray!30,fill=gray!30] (0,0) -- (2,0) -- (2,10) -- (0,10);
\draw[dashed] (0.6,7.5) .. controls +(0,-1) and +(0,1) .. (4,5) (5.5,5) .. controls +(0,1) and +(0,-1) .. (1.6,7.5)
(0.6,2.5) .. controls +(0,-1) and +(0,1) .. (4,0) (5.5,0) .. controls +(0,1) and +(0,-1) .. (1.6,2.5);
\draw[gray!50] (2,1.383) -- (2.1,1.336) (2,1.955) -- (2.1,1.89) (2,6.383) -- (2.1,6.336) (2,6.955) -- (2.1,6.89);
\draw[gray!30,fill=gray!30] (1.9,0) -- (2,0) -- (2,10) -- (1.9,10);
\draw (2,0) -- (2,2.8) (2,3.8) -- (2,7.8) (2,8.8) -- (2,10);
\draw[color=gray!50,fill=gray!50] (4,10) .. controls +(0,-1) and +(0,1) .. (0.6,7.5) -- (1.6,7.5) .. controls +(0,1) and +(0,-1) .. (5.5,10)
(4,5) .. controls +(0,-1) and +(0,1) .. (0.6,2.5) -- (1.6,2.5) .. controls +(0,1) and +(0,-1) .. (5.5,5);
\draw (4,10) .. controls +(0,-1) and +(0,1) .. (0.6,7.5) (1.6,7.5) .. controls +(0,1) and +(0,-1) .. (5.5,10)
(4,5) .. controls +(0,-1) and +(0,1) .. (0.6,2.5) (1.6,2.5) .. controls +(0,1) and +(0,-1) .. (5.5,5);
\end{scope}
\end{tikzpicture}
\caption{Linking a band in $\Sigma_j$ around a component of $J_i$} \label{figLink}
\end{center}
\end{figure}
\begin{proposition} \label{propAlgTrivial}
Any algebraically $c$--split colored link bounds a $T$--ribbon genus--$0$ colored \mbox{complex.}
\end{proposition}
\begin{proof}
Let $L$ be an algebraically $c$--split colored link. Lemma~\ref{lemmaLinkingsB} provides another algebraically $c$--split colored link $J$ with the same pairwise linking numbers, that bounds a $T$--ribbon genus--$0$ colored complex $\Sigma$. Theorem~\ref{thMuNa} says that $L$ can be obtained from $J$ by a sequence of $\Delta$--moves. Realize these $\Delta$--moves by gluing borromean links to $J$ and the associated $T$--ribbon disks complexes to $\Sigma$. This provides a $T$--ribbon genus--$0$ colored complex for $L$.
\end{proof}
We consider a colored version of the invariants studied above, namely we define these invariants from colored complexes, and we add a superscript $c$ to distinguish them from the non-colored version. It follows from Proposition~\ref{propAlgTrivial} that the slice and ribbon genera and the $T$--genera of an algebraically $c$--split colored link are well-defined. Most of the results we have seen remain true in the colored setting, with the same proof. We collect them in the next statement.
A {\em colored cobordism} from a link $L$ to a link $L'$ is a disjoint union in $S^3\times[0,1]$ of genus--$0$ cobordisms from the colored sublinks of $L$ to the colored sublinks of $L'$. In this definition, some sublinks of the colored links may be empty.
\begin{theorem}
For any algebraically $c$--split colored link $L$,
\begin{itemize}
\item the colored slice genus of $L$ equals the minimal genus of a marked normal singular colored complex for $L$ with no clasp intersection and no borromean triple point,
\item the $T$--genus of $L$ is the smallest integer~$b$ such that there is a colored cobordism from $L$ to a split union of $b$ borromean links with any coloring,
\item $T_s^c(L)=s_{\scriptscriptstyle{\Delta}}^c(L)$ and $T_r^c(L)=r_{\scriptscriptstyle{\Delta}}^c(L)$,
\item $g_s^c(L)\leq T_s^c(L)$ and $g_r^c(L)\leq T_r^c(L)$,
\item $c_4^{b,c}(L)\leq T_s^c(L)$.
\end{itemize}
\end{theorem}
\begin{proof}
The first point is a corollary of Theorem~\ref{thprojection}.
The second point follows from Theorem~\ref{thconcordance} and Proposition~\ref{propcob}.
The proof of Theorem~\ref{thDeltaDistance} works in the colored setting and gives the third point; the fourth is a corollary of it. Finally the fifth point is a corollary of Proposition~\ref{propborrotoclasp}.
\end{proof}
Let $L$ and $L'$ be colored links. A {\em colored concordance} from $L$ to $L'$ is a disjoint union of concordances between the sublinks of $L$ and $L'$. Note that the relations in the next result are also satisfied by the slice genus.
\begin{theorem}
Let $L$ and $J$ be algebraically $c$--split colored links. Let $\widehat J$ be the colored link with the same underlying link as $J$ and a different color for each knot component.
\begin{itemize}
\item If $L$ and $J$ are related by a colored cobordism, then $T_s(L)\leq T_s(\widehat J)$.
\item If $L$ and $J$ are related by a colored concordance, then $T_s(L)=T_s(J)$.
\end{itemize}
\end{theorem}
\begin{proof}
First assume $L$ and $J$ are cobordant and define $\Sigma$ as the union of a cobordism from $L$ to $J$ with a marked normal singular disks complex $S$ for $\widehat J$ that realizes $T_s(\widehat J)$. Since $S$ is made of disks, after removing closed components if necessary, $\Sigma$ is a marked normal singular genus--$0$ complex for $L$ with at most $T_s(\widehat J)$ borromean triple points.
Now assume $L$ and $J$ are concordant and do the same with a concordance from $L$ to $J$ and a marked normal singular genus--$0$ complex $S$ for $J$ that realizes $T_s(J)$. Once again, $\Sigma$ has genus~$0$, so that $T_s(L)\leq T_s(J)$. Similarly $T_s(J)\leq T_s(L)$.
\end{proof}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture} [scale=0.7]
\draw[rounded corners=20pt] (1,0) -- (1,1.2) -- (-1,1.2) -- (-1,-1.2) -- (0,-1.2);
\begin{scope} [yscale=0.8]
\draww{(1,-1) arc (270:90:1) .. controls +(1,0) and +(-1,0) .. (3,-1);}
\draww{ (3,-1) arc (-90:90:1) .. controls +(-1,0) and +(1,0) .. (1,-1);}
\end{scope}
\draww{[rounded corners=20pt] (0,-1.2) -- (1,-1.2) -- (1,0);}
\draw[very thick,->] (5,0) -- (6,0) node[above] {$\Delta$} -- (7,0);
\draw[rounded corners=20pt] (9,1.2) -- (8,1.2) -- (8,-1.2) -- (12,-1.2) -- (12,0);
\begin{scope} [xshift=9cm,yscale=0.8]
\draww{ (1,-1) arc (270:90:1) .. controls +(1,0) and +(-1,0) .. (3,-1);}
\draww{(3,-1) arc (-90:90:1) .. controls +(-1,0) and +(1,0) .. (1,-1);}
\end{scope}
\draww{[rounded corners=20pt] (12,0) -- (12,1.2) -- (9,1.2);}
\end{tikzpicture}
\caption{A $\Delta$--move on a Hopf link} \label{figHopf}
\end{center}
\end{figure}
We end with a point which does not generalize to colored links. In the non-colored setting, the parity of the number of borromean triple points on a marked normal singular complex is fixed: it depends only on the link. A consequence of this fact is that a $\Delta$--move performed on an algebraically split link always modifies the link. Figure~\ref{figHopf} shows a $\Delta$--move on the Hopf link that leaves it unchanged. It follows that a colored Hopf link with a single color bounds $T$--ribbon genus--$0$ complexes with any number of borromean triple points.
In a recent paper \cite{grl} the phenomenological implications of a class of
string motivated supersymmetric models based on level-1 heterotic string
models with Wilson line breaking of the underlying E$_{6}$ symmetry were
investigated. These models (henceforth referred to as Extended Minimal
Supersymmetric Models (EMSSM)) contain additional vector-like
representations filling out complete five and ten dimensional
representations of $SU(5)$ even though the gauge group is just that of the
Standard Model\footnote{%
To be precise, these models contain additional vector-like states $I+\bar{I}$, where $I$, $\bar{I}$ denote complete representations of $SU(5)$. The
components $\psi $ of the additional complete $SU(5)$ representations
transform under the $SU(3)$ and $SU(2)$ groups as follows: $\psi =d^{c}:(\bar{3},1)$, $\psi =l:(1,2)$ in the case where $I$ is the 5-dimensional
representation of $SU(5)$, and $\psi =e^{c}:(1,1)$, $\psi =u^{c}:(\bar{3},1)$, $\psi =q:(3,2)$ in the case where $I$ is the 10-dimensional
representation of $SU(5)$}
building \cite{witold1};\ the Wilson line breaking necessary to break the $%
E_{6}$ symmetry in level-1 theories offers an elegant way out of the
doublet-triplet splitting problem encountered in GUTs. It can also lead to
the standard unification of the gauge couplings without the need of a Grand
Unified Group below the compactification scale\footnote{%
Alternative constructions were developed which can lead to non-standard U(1)
normalisation even in level-1 theory - see discussion in ref. \cite{dienes1}}%
. In this case the string prediction for the scale of unification of the
gauge couplings may be directly compared with the value obtained continuing
the gauge couplings up in energy providing the possibility of a quantitative
test of string unification including gravity.
The effect of the additional vector-like states on $\alpha _{3}(M_{z})$ and
the value of the unification scale was explored in detail in\cite{grl}. It
was found that, working to two-loop order in the gauge sector, the
unification scale is systematically increased by a small factor (about 3 or
less, a function of the value of the unified coupling and of the number $%
n=(n_{5}+3n_{10})/2$ of pairs \footnote{$n_{5}=N_{5}+N_{\overline{5}}$ and $%
n_{10}=N_{10}+N_{\overline{10}}$} of additional complete $SU(5)$
representations), while the strong coupling was systematically increased,
taking it further from the experimental value \cite{particledata}, $\alpha
_{3}(M_{z})=0.118\pm 0.003$. The increase in the unification scale does not,
in fact, take it closer to the weakly coupled heterotic string prediction
\cite{scale} because the increase is correlated with an increase in the
unified coupling which always happens when additional matter is added and
the heterotic string prediction also increases with the unified coupling. As
a result there is still a discrepancy\footnote{Ways to reconcile
this scale discrepancy have been reviewed by Dienes \cite{dienes2}.}
of a factor of 10-20.
However, in this analysis the effects of Yukawa couplings on
the running of the gauge couplings were not included and we seek to include
them in this paper. An immediate difficulty arises because in theories with
additional massive vectorlike states there may be many new Yukawa couplings
involving these heavy states. We shall consider two basic possibilities
which give an indication of the general possibilities and uncertainties. In
the first we include only the Yukawa couplings present in the MSSM\ which
are responsible for the third generation masses. In the second we include
couplings between massive Higgs doublets and the light quarks and leptons.
This model goes some way towards explaining the light quark mass matrix via
mixing in the Higgs sector \cite{ibanezross} as well as explaining the
masses of the third generation, in a ``fixed-point-scenario'' \cite{sun}. We
also comment on how our results would be affected if further couplings
involving massive states were included.
The paper is organized as follows. In Section 2 we review the effect of
heavy thresholds on the running coupling constants and we analyze the
renormalisation group evolution (RGE) predictions for the gauge couplings
with Yukawa coupling effects included. In Section 3 we derive an analytic
form for $\alpha _{3}(M_{z})$ and the unification scale, as well as the
decoupling scale of the extra matter, and make some numerical estimates. In
Section 4 we consider the case of large $n$. Finally, in Section 5 we
consider the case of unification at strong coupling. We discuss the
differences between the (quasi)-fixed point approach \cite{sun} and the
results of the two-loop perturbative approach. Our conclusions are presented in the
last section.
\section{Heavy thresholds' contribution from the integrated NSVZ beta
function}
In this section we make use of the Novikov-Shifman-Vainshtein-Zakharov
(NSVZ) beta function \cite{NSVZ,murayama2} for the gauge couplings, which
was derived using holomorphicity arguments and the instanton calculus. We
integrate it in the presence of the additional heavy states of the EMSSM\
model and then use its two-loop approximation to perform further
calculations.
The integrated expression for the running of the gauge couplings, in the
absence of the heavy thresholds due to the extra heavy states we consider,
was presented in ref. \cite{shifman}; it was deduced without actually
integrating the beta function, but by using physical arguments and the
properties of the holomorphic gauge coupling. From that, the integrated form
in the presence of our additional heavy thresholds can easily be
``guessed''. For clarity, we prefer to re-derive it here by direct
integration. For a fuller discussion the reader is referred to the
literature \cite{murayama2,shifman,murayama1}. We have \cite{NSVZ}
\begin{equation}
\beta (\alpha )^{NSVZ}\equiv \frac{d\alpha }{d(\ln \mu )}=-\frac{\alpha ^{2}%
}{2\pi }\left[ 3T(G)-\sum_{\sigma }^{{}}T(R_{\sigma })(1-\gamma _{\sigma
}^{NSVZ})\right] \left( {1-T(G)\frac{\alpha }{2\pi }}\right) ^{-1}
\label{shifmanbetaalpha}
\end{equation}
with the definition ($\mu $ is the running scale)
\[
\gamma _{\sigma }^{NSVZ}=-\frac{d\ln Z_{\sigma }}{d\ln \mu }
\]
and where $T(G)$ and $T(R_{\sigma })$ represent the Dynkin indices for the
adjoint representation and for the $R_{\sigma }$ representation respectively
(not necessarily the fundamental one), whose values are given in the
Appendix. The above sum runs over {\it all} matter fields $\sigma $ in
representation $R_{\sigma }$; this includes, for example, the extra heavy
states (which we called $\psi $, see footnote 1), in addition to the low
energy spectrum of the MSSM. This gives
\begin{equation}
2\pi \frac{d\alpha ^{-1}}{d\ln \mu }+T(G)\frac{d\ln \alpha }{d\ln \mu }%
=3T(G)-\sum_{\sigma }T(R_{\sigma })(1-\gamma _{\sigma }^{NSVZ})
\end{equation}
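To pass from eq.(\ref{shifmanbetaalpha}) to this form, multiply both sides by $-\frac{2\pi }{\alpha ^{2}}\left( 1-T(G)\frac{\alpha }{2\pi }\right) $ and use
\[
\frac{d\alpha ^{-1}}{d\ln \mu }=-\frac{1}{\alpha ^{2}}\frac{d\alpha }{d\ln
\mu }\ ,\qquad \frac{d\ln \alpha }{d\ln \mu }=\frac{1}{\alpha }\frac{d\alpha
}{d\ln \mu }\ .
\]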
which can be integrated from the high scale $M$ down to the low scale $\mu $
with $\mu >\Lambda _{Susy}$ to give
\begin{equation}
2\pi (\alpha ^{-1}(M)-\alpha ^{-1}(\mu ))+T(G)\ln \frac{\alpha (M)}{\alpha
(\mu )}=3T(G)\ln \frac{M}{\mu }-\sum_{\sigma }\int_{\mu }^{M}T(R_{\sigma
})(1-\gamma _{\sigma }^{NSVZ})d(\ln {\tilde{\mu}}) \label{part}
\end{equation}
In the above equation $\alpha $ stands for any gauge group coupling. Any
extra heavy state $\psi $, with mass $\mu _{\psi }(\mu _{\psi })$ larger
than $\mu $, decouples at its {\it physical mass scale} in the running of
the {\it canonical} gauge coupling \cite{murayama2,murayama1}; therefore,
below the scale $\mu _{\psi }$, $\gamma _{\psi }=0$.
From eq.(\ref{part}) we obtain the result
\begin{eqnarray}
\alpha ^{-1}(\mu )=\alpha ^{-1}(M) &+&\frac{-3T(G)}{2\pi }\ln \frac{M}{\mu
\left( \frac{\alpha (M)}{\alpha (\mu )}\right) ^{1/3}} \nonumber \\
&+&\frac{1}{2\pi }\sum_{\phi }^{{}}T(R_{\phi })\ln \frac{M}{\mu }+\frac{1}{%
2\pi }\sum_{\phi }^{{}}T(R_{\phi })\ln \frac{Z_{\phi }(M)}{Z_{\phi }(\mu )}
\nonumber \\
&+&\frac{1}{2\pi }\sum_{\psi }^{{}}T(R_{\psi })\ln \frac{M}{\mu _{\psi }}+%
\frac{1}{2\pi }\sum_{\psi }^{{}}T(R_{\psi })\ln \frac{Z_{\psi }(M)}{Z_{\psi
}(\mu _{\psi })}
\end{eqnarray}
where the $\phi $'s stand for an MSSM-like spectrum. At this point we make
the observation that
\[
\mu _{\psi }Z_{\psi }(\mu _{\psi })=\mu _{\psi }^{o}
\]
which is the mass renormalisation equation, where $\mu _{\psi }^{o}$
represents the bare mass of the $SU(5)$ component $\psi $. For the case of
a grand unified group model, the gauge invariance principle requires the
mass terms $\mu _{\psi }\psi \overline{\psi }$ to be invariant, which is
possible only if all bare masses are equal to a common value, $\mu _{g}$.
We also consider, without any loss of generality, that $Z_{\psi
}(M_{g})=Z_{\phi }(M_{g})=1$, where $M_{g}$ is the value of the unification
scale. Taking $M=M_{g}$, we find
\begin{eqnarray}
\alpha ^{-1}(\mu ) &=&\alpha ^{-1}(M_{g})+\frac{-3T(G)}{2\pi }\ln \frac{M_{g}%
}{\mu \left( \frac{\alpha (M_{g})}{\alpha (\mu )}\right) ^{1/3}}+\sum_{\phi
}^{{}}\frac{T(R_{\phi })}{2\pi }\ln \frac{M_{g}}{\mu Z_{\phi }(\mu )}+\frac{n%
}{2\pi }\ln \frac{M_{g}}{\mu _{g}} \label{integrated} \\
&&+\frac{T(R_{\Sigma })}{2\pi }\ln \frac{M_{g}}{\mu _{\Sigma }} \nonumber
\end{eqnarray}
with $n=(n_{5}+3n_{10})/2$, $\;n_{5}=N_{5}+N_{\overline{5}}$ and
$n_{10}=N_{10}+N_{\overline{10}}$. The last term in eq.(\ref{integrated})
could stand for the Higgs in the adjoint representation or for the $SU(3)$
Higgs triplet\footnote{These exotic Higgs effects will not be considered
further in this paper.}.
This formula is valid to all orders in perturbation theory, as long as
Supersymmetry is not broken. This equation was used, in this form, as a
check of the intermediate results obtained in \cite{grl}.
In two-loop order for the running of the gauge couplings, the $Z$ factors
depend on the mean mass of the extra heavy states, which can be taken to be
their common bare mass $\mu_g$, since the difference would be of three-loop
order. Hence, in two-loop order, $\mu _{g}$ is the only mass scale
associated with the heavy states in eq.(\ref{integrated}). Therefore the
decoupling of the heavy states happens, in two-loop order, at $\mu _{g}$
and not at each of the physical masses of the component fields of the
massive $SU(5)$ representations.
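The decoupling pattern just described is easy to illustrate numerically.
The following Python sketch (all input values are purely illustrative, not
fitted) runs the three gauge couplings at one loop from $M_{g}$ downwards,
with the $n$ heavy pairs contributing only above $\mu _{g}$:
\begin{verbatim}
import math

b = [33.0/5.0, 1.0, -3.0]   # one-loop MSSM coefficients b_1, b_2, b_3
n = 5                       # n = (n_5 + 3 n_10)/2, illustrative value

def alpha_at(mu, alpha_g, M_g, mu_g):
    """One-loop alpha_i(mu), run down from alpha_g at M_g; the n
    heavy pairs contribute only between mu_g and M_g."""
    out = []
    for bi in b:
        inv = 1.0/alpha_g
        # above mu_g the heavy states are active: coefficient b_i + n
        inv += (bi + n)/(2.0*math.pi)*math.log(M_g/max(mu, mu_g))
        # below mu_g only the MSSM spectrum runs
        if mu < mu_g:
            inv += bi/(2.0*math.pi)*math.log(mu_g/mu)
        out.append(1.0/inv)
    return out

# illustrative scales in GeV
print(alpha_at(91.2, alpha_g=0.1, M_g=2.0e16, mu_g=1.0e12))
\end{verbatim}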
\section{Analytical Results from Renormalisation Group Evolution}
In this section we consider the effects of the Yukawa couplings on the
running of the gauge couplings and their effects on the predictions for the
unification scale, the strong coupling at electroweak scale and the scale
where the heavy states decouple. When the Yukawa couplings are negligible,
we recover our previous two-loop analytical results \cite{grl}.
To keep our approach general and to exhibit the effects Yukawa couplings
have on the running of the gauge couplings, it is convenient to use eq.(\ref
{integrated}) for the running of the gauge couplings. We should mention that
in two-loop order this formula has no regularisation ambiguities, which only
arise at three loops and beyond \cite{jones}. The advantage of eq.(\ref
{integrated}) for the evolution of the couplings is that it expresses the
running of the gauge couplings in terms of the wavefunction renormalisation
coefficients $Z_{\phi }$. Unlike the gauge wavefunction contribution, which
is one-loop exact, one still needs perturbation theory to compute the matter
wavefunction coefficients $Z_{\phi }$ order by order. However, for a
two-loop approximation for the gauge couplings, only a one-loop calculation
of $Z_{\phi }$ is required, simplifying the calculation. Their evolution from
the unification scale where they are normalised to unity\footnote{$Z_{\phi }$
depend on the unification scale!}, to the decoupling scale $\mu _{g}$ of the
heavy states is affected by the presence of $n=(n_{5}+3n_{10})/2$ {\it pairs}
of complete $SU(5)$ multiplets which affects the anomalous dimensions of the
matter fields\footnote{%
The running of the gauge couplings depends on $n_{5}+3n_{10}$ only and not
separately on $n_{5}$ or $n_{10}$ \cite{grl}.}.
Below the supersymmetry breaking scale we have to include the effect of
the low energy supersymmetric thresholds. We denote by $\delta _{i}$ their
effect at one loop, which corresponds to a two-loop effect if the splitting
of the super-partner masses is of radiative origin. This term is needed
because eq.(\ref{integrated}) is valid only as long as supersymmetry is
unbroken\footnote{$\delta _{i}$ include regularisation scheme conversion
factors as well.}.
With these considerations we rewrite eq.(\ref{integrated}) for the three
gauge couplings ($j$ is a generation index) in the following way:
\begin{eqnarray} \label{integratedalpha}
\alpha _{i}^{-1}(M_{z}) &=&-\delta _{i}+\alpha _{g}^{-1}+\frac{b_{i}}{2\pi }
\ln \frac{M_{g}}{M_{z}}+\frac{n}{2\pi }\ln \frac{M_{g}}{\mu _{g}}-\frac{
\beta _{i,H_{1}}}{2\pi }\ln Z_{H_{1}}(M_{z})-\frac{\beta _{i,H_{2}}}{2\pi }
\ln Z_{H_{2}}(M_{z}) \nonumber \\
&&-\frac{\beta _{i,g}}{2\pi }\ln \left[ \frac{\alpha _{g}}{\alpha _{i}(M_{z})%
}\right] ^{1/3}-\sum_{j=1}^{3}\sum_{\phi _{j}}{}\frac{\beta _{i,\phi _{j}}}{%
2\pi }\ln Z_{\phi _{j}}(M_{z})
\end{eqnarray}
where $b_{1}=33/5$, $\,b_{2}=1$, $\,b_{3}=-3$ and where $\beta _{i,\phi
_{j}}\equiv T(R_{\phi _{j}}^{i})$, $i=\{1,2,3\}$, are the contributions to
one-loop beta function\footnote{%
We also used that the one-loop beta function is $b=-3 T(G)+ \sum T(R_\phi)$,
where the sum runs over all chiral supermultiplets in representation $R_\phi$%
.} of the matter fields $\phi _{j}$ ($j$ = generation index), while $\beta
_{i,g}\equiv -3T^{i}(G)$ is the one-loop contribution of the pure gauge
(+gaugino) sector; the Higgs (+higgsino) sector contribution is included
separately via the terms proportional to $\beta _{i,H_{1,2}}$. Thus
we have
\begin{equation}
\beta _{i,\phi _{j}}=\left(
\begin{array}{ccccc}
\frac{3}{10} & \frac{1}{10} & \frac{3}{5} & \frac{4}{5} & \frac{1}{5} \\
& & & & \\
\frac{1}{2} & \frac{3}{2} & 0 & 0 & 0 \\
& & & & \\
\ 0 & 1 & 0 & \frac{1}{2} & \frac{1}{2}
\end{array}
\right) _{i,\phi _{j}}\;\;\;\;\;\;\;\beta _{i,g}=\left(
\begin{array}{c}
0 \\
\\
-6 \\
\\
-9
\end{array}
\right) ;\;\;\;\;\;\;\;\beta _{i,H_{1,2}}=\left(
\begin{array}{c}
\frac{3}{10} \\
\\
\frac{1}{2} \\
\\
0
\end{array}
\right)
\end{equation}
independent of the values of $j$. The field $\phi _{j}$ runs over the set $%
\phi _{j}=\{l_{L},q_{L},e_{R},u_{R},d_{R}\}_{j}$, in this order, with $j$ as
generation index.
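As a small consistency check (a sketch, not needed for the derivation), the
one-loop coefficients $b_{i}$ can be recovered from these tables by summing
the matter contributions over three generations and two Higgs doublets and
adding $\beta _{i,g}$:
\begin{verbatim}
from fractions import Fraction as F

# beta_{i,phi} per generation for (l_L, q_L, e_R, u_R, d_R)
beta_phi = [
    [F(3,10), F(1,10), F(3,5), F(4,5), F(1,5)],   # U(1), GUT normalised
    [F(1,2),  F(3,2),  F(0),   F(0),   F(0)  ],   # SU(2)
    [F(0),    F(1),    F(0),   F(1,2), F(1,2)],   # SU(3)
]
beta_g = [F(0), F(-6), F(-9)]     # -3 T(G) for each group
beta_H = [F(3,10), F(1,2), F(0)]  # one Higgs doublet

# b_i = -3T(G) + 3 generations of matter + 2 Higgs doublets
b = [beta_g[i] + 3*sum(beta_phi[i]) + 2*beta_H[i] for i in range(3)]
print(b)   # [Fraction(33, 5), Fraction(1, 1), Fraction(-3, 1)]
\end{verbatim}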
\subsection{$\alpha _{3}(M_{Z})$, $M_{g}$ and $\mu _{g}$}
To compute the two-loop values for the strong coupling at the electroweak
scale, the unification scale and the decoupling scale $\mu _{g}$ of the
extra massive states\footnote{$\mu_g$ is the decoupling scale only in
two-loop order.}, we use the two-step method developed in reference
\cite{grl}. The first step eliminates the dependence on the low energy
supersymmetric thresholds by expressing our results as a change to the MSSM
predictions for the strong coupling and the unification scale; this is
possible because, to two-loop order for the RGE, the $\delta _{i}$'s are the
same in both cases. Note that this means that our results will be expressed
in terms of $\alpha _{g}^{o}$, $M_{g}^{o}$ and $\alpha _{i}^{o}(M_{z})$
which {\it all} have the $\delta _{i}$ dependence {\it included}.
The second step is more technical and consists of replacing the arguments of
the two-loop logarithms by their one-loop approximation. This will, of
course, generate the evolution of the couplings correct to two-loop order.
To implement the first step note that in the MSSM we have
\begin{eqnarray}
\alpha _{i}^{o-1}(M_{z}) &=&-\delta _{i}+\alpha _{g}^{o-1}+\frac{b_{i}}{2\pi
}\ln \frac{M_{g}^{o}}{M_{z}}-\frac{\beta _{i,H_{1}}}{2\pi }\ln
Z_{H_{1}}^{o}(M_{z})-\frac{\beta _{i,H_{2}}}{2\pi }\ln Z_{H_{2}}^{o}(M_{z})
\nonumber \\
&&-\frac{\beta _{i,g}}{2\pi }\ln \left[ \frac{\alpha _{g}^{o}}{\alpha
_{i}^{o}(M_{z})}\right] ^{1/3}-\sum_{j=1}^{3}\sum_{\phi _{j}}{}\frac{\beta
_{i,\phi _{j}}}{2\pi }\ln Z_{\phi _{j}}^{o}(M_{z}) \label{mssm}
\end{eqnarray}
where we employed the index ``o'' to label all MSSM related quantities, to
distinguish them from those of the extended model, which introduces
additional heavy states. Note that the $Z^{o}$ are normalised to unity at
$M_{g}^{o}$: $Z^{o}(M_{g}^{o})\equiv Z^{o}(0)=1$.
As is commonly done in all ``bottom-up'' approaches, we impose the
conditions $\alpha _{1}(M_{z})=\alpha _{1}^{o}(M_{z})$ and $\alpha
_{2}(M_{z})=\alpha _{2}^{o}(M_{z})$, both equal to their experimental
values. We now determine the change of the strong coupling at the
electroweak scale; from (\ref{integratedalpha}), (\ref{mssm}) we get that
($i=1,2$):
\begin{eqnarray}
0 &=&\alpha _{g}^{-1}-\alpha _{g}^{o-1}+\frac{b_{i}}{2\pi }\ln \frac{M_{g}}{%
M_{g}^{o}}+\frac{n}{2\pi }\ln \frac{M_{g}}{\mu _{g}}-\frac{\beta _{i,H_{1}}}{%
2\pi }\ln \left[ \frac{Z_{H_{1}}(M_{z})}{Z_{H_{1}}^{o}(M_{z})}\right] -\frac{%
\beta _{i,H_{2}}}{2\pi }\ln \left[ \frac{Z_{H_{2}}(M_{z})}{%
Z_{H_{2}}^{o}(M_{z})}\right] \nonumber \\
&&-\frac{\beta _{i,g}}{2\pi }\ln \left[ \frac{\alpha _{g}}{\alpha _{g}^{o}}%
\right] ^{1/3}-\sum_{j=1}^{3}\sum_{\phi _{j}}{}\frac{\beta _{i,\phi _{j}}}{%
2\pi }\ln \left[ \frac{Z_{\phi _{j}}(M_{z})}{Z_{\phi _{j}}^{o}(M_{z})}\right]
\label{dif12} \\
&& \nonumber \\
\delta \alpha _{3}^{-1}(M_{z}) &=&\alpha _{g}^{-1}-\alpha _{g}^{o-1}+\frac{%
b_{3}}{2\pi }\ln \frac{M_{g}}{M_{g}^{o}}+\frac{n}{2\pi }\ln \frac{M_{g}}{\mu
_{g}}-\frac{\beta _{3,H_{1}}}{2\pi }\ln \left[ \frac{Z_{H_{1}}(M_{z})}{%
Z_{H_{1}}^{o}(M_{z})}\right] -\frac{\beta _{3,H_{2}}}{2\pi }\ln \left[ \frac{%
Z_{H_{2}}(M_{z})}{Z_{H_{2}}^{o}(M_{z})}\right] \nonumber \\
&&-\frac{\beta _{3,g}}{2\pi }\ln \left[ \frac{\alpha _{g}\alpha
_{3}^{o}(M_{z})}{\alpha _{g}^{o}\alpha _{3}(M_{z})}\right]
^{1/3}-\sum_{j=1}^{3}\sum_{\phi _{j}}{}\frac{\beta _{3,\phi _{j}}}{2\pi }\ln %
\left[ \frac{Z_{\phi _{j}}(M_{z})}{Z_{\phi _{j}}^{o}(M_{z})}\right]
\label{al3}
\end{eqnarray}
where by $\delta \alpha _{3}^{-1}(M_{z})$ we denoted the (two-loop induced)
difference $\delta \alpha _{3}^{-1}(M_{z})=1/\alpha _{3}(M_{z})-1/\alpha
_{3}^{o}(M_{z})$. We note that the term $\ln(\alpha^o_3(M_z)/\alpha_3(M_z)$
can simply be neglected in the two-loop approximation as $\alpha^o_3(M_z)$
and $\alpha_3(M_z)$ are equal in one-loop, and therefore this term would
bring a higher order correction. We solve the system of
eqs. (\ref{dif12}), (\ref{al3})
for $\delta \alpha _{3}^{-1}(M_{z})$, $M_{g}$ and $\mu _{g}$ in
function of $\alpha _{g}$ to get:
\begin{equation}
\delta \alpha _{3}^{-1}(M_{z})=-\frac{\sigma _{g}}{2\pi }\ln \left[ \frac{%
\alpha _{g}}{\alpha _{g}^{o}}\right] ^{\frac{1}{3}}-\sum_{j=1}^{3}\sum_{\phi
_{j}}\frac{\sigma _{\phi _{j}}}{2\pi }\ln \left[ \frac{Z_{\phi _{j}}(M_{z})}{%
Z_{\phi _{j}}^{o}(M_{z})}\right] -\frac{\sigma _{H_{1}}}{2\pi }\ln \left[
\frac{Z_{H_{1}}(M_{z})}{Z_{H_{1}}^{o}(M_{z})}\right] -\frac{\sigma _{H_{2}}}{%
2\pi }\ln \left[ \frac{Z_{H_{2}}(M_{z})}{Z_{H_{2}}^{o}(M_{z})}\right]
\label{mat1}
\end{equation}
where $\sigma _{\phi _{j}}=\beta _{1,\phi
_{j}}(b_{2}-b_{3})/(b_{1}-b_{2})+\beta _{2,\phi
_{j}}(b_{3}-b_{1})/(b_{1}-b_{2})+\beta _{3,\phi _{j}}$ and we have a similar
definition for $\sigma _{H_{1}}$, $\sigma _{H_{2}}$ and $\sigma _{g}$. Thus
\begin{equation}
\sigma =\left\{ l_{L_{j}}:\frac{-9}{14},\;\;q_{L_{j}}:\frac{-3}{2},\;\;H_{1}:%
\frac{-9}{14},\;H_{2}:\frac{-9}{14},\;\;e_{R_{j}}:\frac{3}{7},\;\;u_{R_{j}}:%
\frac{15}{14},\;\;d_{R_{j}}:\frac{9}{14},\;\;g:\frac{9}{7}\right\}
\end{equation}
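These values follow directly from the definition of $\sigma _{\phi _{j}}$;
an exact-arithmetic check (a sketch using Python fractions) is:
\begin{verbatim}
from fractions import Fraction as F

b1, b2, b3 = F(33,5), F(1), F(-3)
c1, c2 = (b2 - b3)/(b1 - b2), (b3 - b1)/(b1 - b2)   # 5/7 and -12/7

def sigma(beta):        # beta = (beta_1, beta_2, beta_3) of a field
    return c1*beta[0] + c2*beta[1] + beta[2]

fields = {
    'l_L': (F(3,10), F(1,2), F(0)),  'q_L': (F(1,10), F(3,2), F(1)),
    'e_R': (F(3,5),  F(0),   F(0)),  'u_R': (F(4,5),  F(0), F(1,2)),
    'd_R': (F(1,5),  F(0),   F(1,2)),'g'  : (F(0),   F(-6), F(-9)),
}
for name, beta in fields.items():
    print(name, sigma(beta))  # -9/14, -3/2, 3/7, 15/14, 9/14, 9/7
\end{verbatim}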
We observe that $\sigma _{\phi _{j}}$ is negative for $SU(2)$ doublets and
positive for $SU(2)$ singlets; for the same sign of the associated
logarithmic factor, these will therefore have opposite effects on the value
of $\alpha _{3}(M_{z})$. Moreover, the contribution to the strong coupling
depends on whether the $Z$'s are larger or smaller than unity, which is
determined by the relative size of the gauge and Yukawa contributions to
their running. This should become clearer later. The overall effect on the
strong coupling depends, however, on the {\it relative} magnitude of the
wavefunction coefficients $Z$ of the extended model compared to those of
the MSSM, $Z^{o}$.
Using the notation $\Delta \beta _{12,\phi _{j}}=\beta _{1,\phi _{j}}-\beta
_{2,\phi _{j}}$ and $\Delta \beta _{12,H_{1,2}}=\beta _{1,H_{1,2}}-\beta
_{2,H_{1,2}}$ the change in the unification scale factor is given by (see
eqs.(\ref{dif12}), (\ref{al3})):
\begin{equation}
\ln \frac{M_{g}}{M_{g}^{o}}=\frac{15}{14}\ln \left[ \frac{\alpha _{g}}{%
\alpha _{g}^{o}}\right] ^{\frac{1}{3}}+\frac{5}{28}\sum_{j=1}^{3}\sum_{\phi
_{j}}^{{}}\Delta \beta _{12,\phi _{j}}\ln \left[ \frac{Z_{\phi _{j}}(M_{z})}{%
Z_{\phi _{j}}^{o}(M_{z})}\right] +\left\{ \frac{5}{28}\Delta \beta
_{12,H_{1}}\ln \left[ \frac{Z_{H_{1}}(M_{z})}{Z_{H_{1}}^{o}(M_{z})}\right]
+H_{1}\leftrightarrow H_{2}\right\} \label{mat2}
\end{equation}
Note that $\Delta \beta _{12,\phi _{j}}$ is negative for $SU(2)$ doublets
and positive for $SU(2)$ singlets (just like $\sigma _{\phi _{j}}$) and
hence, for the same sign of the logarithmic factor, these terms will drive
the unification scale in opposite directions. Moreover, comparing
eqs.(\ref{mat1}) and (\ref{mat2}), we observe that the unification scale
$M_g$ of eq.(\ref{mat2}) is increased for positive $\Delta\beta_{12}$,
while the strong coupling of eq.(\ref{mat1}) is also
increased\footnote{$\Delta\beta_{12,\phi_j}$ has the same sign as
$\sigma_{\phi_j}$.} for a given (positive) sign of the logs. Thus, we
already see it is difficult to decrease the strong coupling and increase
the unification scale {\it simultaneously}. One could argue that the $Z$
factors have an implicit unification scale dependence as well, and that our
above explanation does not apply; however this dependence comes in under
the log, and is therefore very mild. Note that this applies independently
of the type of Yukawa interaction, implicitly present in the values of the
$Z$ coefficients. This seems to be generic to models which consider {\it
complete} additional representations; it was also explored in \cite{grl},
in the absence of Yukawa effects, with the same conclusions. The above
discussion should become clearer later when a more quantitative study is
made.
Finally, the decoupling scale $\mu _{g}$ is given by (see eqs.(\ref{dif12}),
(\ref{al3})):
\begin{equation}
\ln \frac{\mu _{g}}{M_{g}^{o}}=\frac{2\pi }{n}\left[ \frac{1}{\alpha _{g}}-%
\frac{1}{\alpha _{g}^{o}}\right] +\frac{\Omega _{g}}{n}\ln \left[ \frac{%
\alpha _{g}}{\alpha _{g}^{o}}\right] ^{\frac{1}{3}}+\sum_{j=1}^{3}\sum_{\phi
_{j}}^{{}}\frac{\Omega _{\phi _{j}}}{n}\ln \left[ \frac{Z_{\phi _{j}}(M_{z})%
}{Z_{\phi _{j}}^{o}(M_{z})}\right] +\left\{ \frac{\Omega _{H_{1}}}{n}\ln %
\left[ \frac{Z_{H_{1}}(M_{z})}{Z_{H_{1}}^{o}(M_{z})}\right]
+H_{1}\leftrightarrow H_{2}\right\} \label{mat3}
\end{equation}
where
\begin{equation}
\Omega _{F}=\frac{b_{2}^{\prime }\beta _{1,{F}}-b_{1}^{\prime }\beta _{2,{F}}%
}{b_{1}-b_{2}};\;\;\;\;\;\Omega _{g}=\frac{b_{2}^{\prime }\beta _{1,{g}%
}-b_{1}^{\prime }\beta _{2,{g}}}{b_{1}-b_{2}};
\end{equation}
with $F\equiv \{\phi _{j},H_{1},H_{2}\}$ and $b_{i}^{\prime }=b_{i}+n$.
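For a quick feel for these coefficients, $\Omega _{F}$ can be evaluated
directly from the one-loop data above; for example, for $q_{L}$ at the
illustrative value $n=5$ one finds $\Omega _{q_{L}}=-3$ exactly:
\begin{verbatim}
from fractions import Fraction as F

b1, b2 = F(33,5), F(1)

def omega(beta1, beta2, n):
    b1p, b2p = b1 + n, b2 + n          # b_i' = b_i + n
    return (b2p*beta1 - b1p*beta2)/(b1 - b2)

# q_L carries beta_1 = 1/10 and beta_2 = 3/2
print(omega(F(1,10), F(3,2), 5))       # -> -3
\end{verbatim}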
To simplify our expressions further we need to compute the wavefunction
renormalisation coefficients. Above the mass of the additional states in
the extended model the running of these coefficients is altered, implying
that the anomalous dimensions of the matter fields are changed. We have
(with $t=1/(2\pi )\,\ln (\mbox{scale})$)
\begin{equation}
\frac{d}{dt}\ln Z_{F}(t)\equiv -4\pi \gamma_{F}=2\sum_{k=1}^{3}C_{k}({F})
\alpha _{k}(t)+\sum_{\nu =\tau ,b,t}^{{}}{\tilde{{\cal A}}}_{\nu }({F}%
)y_{\nu }(t) +{\cal O}(\alpha ^{2}) \label{Zs}
\end{equation}
In the above equation $C_{k}({F})$ is the second order Casimir operator of
the field ${F}$ for the group $SU(k)$ and is generation independent. The
coefficients ${\tilde{{\cal A}}}_{\nu }({F})$ depend on the type of
superpotential of the model. For the various cases we consider, their
values are given in the Appendix.
The above equation can easily be integrated to give full analytical
expressions for the wavefunction coefficients, valid in one-loop
approximation which is consistent with a two-loop running for the gauge
couplings. The solution is of the type:
\begin{equation}
Z_{F}(t)=Z_{F}^{G}(t)\times Z_{F}^{Y}(t) \label{solutie}
\end{equation}
where $Z_{F}^{G}$ and $Z_{F}^{Y}$ are given by the gauge and Yukawa running
parts respectively, with $Z_{F}^{G}(0)=1$ and $Z_{F}^{Y}(0)=1$. $Z_{F}^{G}$
is determined by the gauge group; for the extended model we get from eq.(\ref
{Zs}) and (\ref{solutie}) that
\begin{eqnarray}
Z_{F}^{G}(M_{z}) &=&\prod_{k=1}^{3}\left[ \frac{\alpha _{g}}{\alpha _{k}(\mu
_{g})}\right] ^{-\frac{2C_{k}(F)}{b_{k}^{\prime }}}\left[ \frac{\alpha
_{k}(\mu _{g})}{\alpha _{k}(M_{z})}\right] ^{-\frac{2C_{k}(F)}{b_{k}}}
\nonumber \\
&=&\prod_{k=1}^{3}\left[ \frac{\alpha _{g}}{\alpha _{k}(\mu _{g})}\right] ^{%
\frac{2C_{k}(F)}{b_{k}}\frac{n}{b_{k}^{\prime }}}\left[ \frac{\alpha _{g}}{%
\alpha _{k}(M_{z})}\right] ^{-\frac{2C_{k}(F)}{b_{k}}} \label{zs}
\end{eqnarray}
Note that for one-loop running of the coefficients $Z$ (consistent with a
two-loop approximation for the gauge couplings), the decoupling scale of
the heavy states $\psi $ can be considered to be $\mu _{g}$ rather than a
physical mean mass of the multiplet, as the difference is a radiative
effect and therefore represents a higher (two-loop) correction to the
$Z$'s, or a three-loop correction to the gauge couplings. This justifies
the second equality written above.
Similarly, we have that in the MSSM,
\begin{equation} \label{zgmssm}
Z^{o G}_{F}(M_z)=\prod_{k=1}^{3} \left[\frac{\alpha_g^o}{\alpha^o_k(M_z)}%
\right]^{-\frac{2 C_k(F)}{b_k}}
\end{equation}
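A minimal numerical sketch of eqs.(\ref{zs}) and (\ref{zgmssm}) is given
below; the coupling values are hypothetical placeholders, and the Casimirs
quoted for $q_{L}$ assume the usual conventions ($C_{2}=3/4$ for an $SU(2)$
doublet, $C_{2}=4/3$ for an $SU(3)$ triplet, and $(3/5)Y^{2}$ with
$Y(q_{L})=1/6$ for the $U(1)$ factor):
\begin{verbatim}
b = [33.0/5.0, 1.0, -3.0]

def ZG(C, alpha_g, a_mug, a_mz, n):
    """Gauge part Z_F^G(M_z) of eq.(zs); C, a_mug, a_mz are
    3-vectors for U(1), SU(2), SU(3); b' = b + n above mu_g."""
    z = 1.0
    for k in range(3):
        z *= (alpha_g/a_mug[k])**(-2.0*C[k]/(b[k] + n))
        z *= (a_mug[k]/a_mz[k])**(-2.0*C[k]/b[k])
    return z

C_qL  = [1.0/60.0, 3.0/4.0, 4.0/3.0]  # quadratic Casimirs of q_L
a_mug = [0.060, 0.055, 0.050]         # hypothetical alpha_k(mu_g)
a_mz  = [0.017, 0.034, 0.118]         # hypothetical alpha_k(M_z)
print(ZG(C_qL, 0.08, a_mug, a_mz, n=5))   # a value below unity
\end{verbatim}
The MSSM analogue $Z^{oG}_{F}$ of eq.(\ref{zgmssm}) is obtained from the
same product with $n=0$, for which the two factors combine into a single
running range.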
The coefficients $Z_{F}^{Y}$ can also be expressed in a form similar to
that of $Z_{F}^{G}$ (eq.(\ref{zs})), in terms of Yukawa and gauge couplings
at the initial and final scales only; for this one only needs the one-loop
running of the Yukawa couplings. From it, any Yukawa coupling can be
expressed (see Appendix) in terms of the derivative of $\ln y_{\tau,b,t}$
and the derivative of $\ln {\alpha_i}$, the latter obtained by replacing
$\alpha_i$ with $(1/\tilde b_i)\,d\ln\alpha_i/dt$, consistent with one-loop
running for the Yukawa couplings. Hence the integral $\int y_{\nu}\,dt$ can
be performed analytically, and the differential equation for the $Z^Y_F$'s
(similar to (\ref{Zs}) without the gauge term) can be integrated to give
the $Z^Y_F$'s in one-loop approximation.
A final comment is in order here: we see that the powers entering
eq.(\ref{zs}) depend on $n$ as $n/(b_{i}+n)$. This means that in the limit
of large $n$ the contribution of these terms to the running of the
couplings brings in only a small dependence on $n$. We will later see that
$\alpha _{g}/\alpha _{k}(\mu _{g})$ itself is also relatively stable with
respect to $n$. Next,
from the running of the gauge couplings eqs. (\ref{integrated}) and (\ref
{integratedalpha}) we also observe that the {\it explicit} $n$ dependence
comes in under same form for all the couplings as $n/(2\pi )\ln (M_{g}/\mu
_{g})$, which means that the {\it relative} behaviour of the gauge couplings
evolution is affected only through $Z$ factors. With $\alpha _{1}(M_{z})$
and $\alpha _{2}(M_{z})$ fixed to their experimental values, and the above
observations we conclude that the predictions for the strong coupling at
electroweak scale will be stable for large $n$. This observation applies to
other predictions we will make as well, such as the unification scale and
the bare mass, $\mu _{g}$. In the absence of Yukawa coupling effects this
behaviour was shown in \cite{grl}. We will see that it remains true in
their presence too, although a dependence on the particular type of
superpotential might be expected.
\subsection{Analysis for 3rd generation Yukawa couplings only.}
We will consider first the case when the superpotential is the same in both
MSSM and in the extended model. In eqs. (\ref{mat1}), (\ref{mat2}), (\ref
{mat3}), we replace the gauge part of $Z$ coefficients, $Z_{F}^{G}$, with
their one-loop analytical expressions, consistent with two-loop running for
the gauge couplings. For the Yukawa part of $Z$ coefficients, $Z_{F}^{Y}$,
from their evolution equation
\begin{equation} \label{matrixA}
\frac{d}{dt}\ln Z_{F}^{Y}(t)=\sum_{\nu =\tau ,b,t}^{{}}{\cal A}_{\nu }({F}%
)y_{\nu }(t)+{\cal O}(\alpha ^{2})
\end{equation}
after using the relations among the coefficients ${\cal A}_{\nu }(F)$ (not
affected by the presence of the extra matter\footnote{Their expressions are
presented in the Appendix.}), one can get the following relations, valid at
any scale between $M_g$ and $M_z$:
\begin{eqnarray}
Z_{Q_{3}}^{Y} &=&\left( Z_{b_{R}}^{Y}Z_{t_{R}}^{Y}\right) ^{1/2} \nonumber
\label{wfr} \\
Z_{L_{3}}^{Y} &=&{Z_{\tau _{R}}^{Y}}^{1/2} \nonumber \\
Z_{H_{1}}^{Y} &=&{Z_{b_{R}}^{Y}}^{3/2}{Z_{\tau _{R}}^{Y}}^{1/2} \nonumber \\
Z_{H_{2}}^{Y} &=&{Z_{t_{R}}^{Y}}^{3/2}
\end{eqnarray}
where we used, without any loss of generality, the convention $Z^Y(0)=1$ for
any field. Making use of these relations, we get from eq.(\ref{mat1}) that
\begin{eqnarray}
\delta \alpha _{3}^{-1}(M_{z}) &=&-\frac{470}{77\pi }\ln \left[ \frac{\alpha
_{g}}{\alpha _{g}^{o}}\right] +\sum_{j=1}^{3}\ln \left\{ 1+\frac{%
b_{j}^{\prime }}{n}\left[ \frac{\alpha _{g}}{\alpha _{g}^{o}}-1\right]
\right\} ^{\omega _{j}} \nonumber \label{res4} \\
&&+\frac{3}{28\pi }\left[ 5\ln \frac{Z_{b_{R}}^{Y}}{Z_{b_{R}}^{oY}}+\ln
\frac{Z_{\tau _{R}}^{Y}}{Z_{\tau _{R}}^{oY}}+3\ln \frac{Z_{t_{R}}^{Y}}{%
Z_{t_{R}}^{oY}}\right]
\end{eqnarray}
where the $Z^Y$, $\,Z^{oY}$ factors are evaluated at the $M_z$ scale and
where we defined
\begin{equation}\label{omegaaa}
\omega _{j}=\left\{ \frac{-2}{11\pi }\frac{n}{b_{1}^{\prime }},\frac{81}{%
14\pi }\frac{n}{b_{2}^{\prime }},\frac{2}{7\pi }\frac{n}{b_{3}^{\prime }}%
\right\} _{j}
\end{equation}
and $\delta \alpha _{3}^{-1}(M_{z})=\alpha _{3}^{-1}(M_{z})-\alpha
_{3}^{o-1}(M_{z})$. To get the above formula we replaced the terms of the
form $\ln \left( \alpha _{g}/\alpha _{j}(\mu _{g})\right) $ by
\begin{equation}
\ln \left[ \frac{\alpha _{g}}{\alpha _{j}(\mu _{g})}\right] =\ln \left[
1+\alpha _{g}\frac{b_{j}^{\prime }}{2\pi }\ln \left( \frac{M_{g}}{\mu _{g}}%
\right) \right] =\ln \left[ 1+\frac{b_{j}^{\prime }}{n}\left( \frac{\alpha
_{g}}{\alpha _{g}^{o}}-1\right) \right]
\end{equation}
where we made use of the fact that $\alpha _{g}^{-1}-\alpha
_{g}^{o-1}+n/(2\pi )\ln (M_{g}/\mu _{g})=0$ in one-loop order \footnote{%
In fact $\alpha _{g}^{-1}-\alpha _{g}^{o-1}+b_{i}/(2\pi )\ln \left(
M_{g}/M_{g}^{o}\right) +n/(2\pi )\ln \left( M_{g}/\mu _{g}\right) =0$ in one
loop and we further have $\ln \left( M_{g}/M_{g}^{o}\right) =0$ in one-loop
because the change of unification scale is a two-loop effect.} which is
obtained from equations (\ref{dif12}) and (\ref{al3}) after
noticing\footnote{We also have that $\delta \alpha _{3}^{-1}$ is non-zero
at two loops only, being $0$ at one loop.} that, in the one-loop
approximation, all wavefunction renormalisation coefficients are equal to 1.
Similar results hold for the unification scale and for the decoupling scale
of the extra matter. We have from eqs.(\ref{mat2}), (\ref{wfr})
\begin{equation}
\frac{M_{g}}{M_{g}^{o}}=\left[ \frac{\alpha _{g}}{\alpha _{g}^{o}}\right] ^{%
\frac{31}{21}}\prod_{j=1}^{3}\left\{ 1+\frac{b_{j}^{\prime }}{n}\left[ \frac{%
\alpha _{g}}{\alpha _{g}^{o}}-1\right] \right\} ^{\rho _{j}}\left[ \frac{%
Z_{b_{R}}^{oY}}{Z_{b_{R}}^{Y}}\right] ^{\frac{1}{7}}\left[ \frac{Z_{\tau
_{R}}^{Y}}{Z_{\tau _{R}}^{oY}}\right] ^{\frac{1}{14}}\left[ \frac{%
Z_{t_{R}}^{oY}}{Z_{t_{R}}^{Y}}\right] ^{\frac{1}{28}} \label{res1}
\end{equation}
where the $Z^Y$, $\,Z^{oY}$ factors are evaluated at the $M_z$ scale and
\begin{equation}
\rho _{j}=\left\{ \frac{1}{12}\frac{n}{b_{1}^{\prime }},{\frac{-39}{28}}{%
\frac{n}{b_{2}^{\prime }}},{\frac{4}{21}}{\frac{n}{b_{3}^{\prime }}}\right\}
_{j}
\end{equation}
Finally from eq.(\ref{mat3}) and (\ref{wfr})
\begin{equation}
\frac{\mu _{g}}{M_{g}^{o}}=\left[ \frac{\alpha _{g}}{\alpha _{g}^{o}}\right]
^{\frac{31}{21}+\frac{2336}{231n}}\exp \left[ \frac{2\pi }{n}\left( \alpha
_{g}^{-1}-\alpha _{g}^{o-1}\right) \right] \prod_{j=1}^{3}\left\{ 1+\frac{%
b_{j}^{\prime }}{n}\left[ \frac{\alpha _{g}}{\alpha _{g}^{o}}-1\right]
\right\} ^{\sigma _{j}}\left[ \frac{Z_{\tau _{R}}^{Y}}{Z_{\tau _{R}}^{oY}}%
\right] ^{r_{1}}\left[ \frac{Z_{b_{R}}^{Y}}{Z_{b_{R}}^{oY}}\right] ^{r_{2}}%
\left[ \frac{Z_{t_{R}}^{Y}}{Z_{t_{R}}^{oY}}\right] ^{r_{3}} \label{res2}
\end{equation}
with
\begin{equation}
r_{1}={\frac{1}{14}-\frac{3}{7n}},\;\;\;r_{2}={\frac{-1}{7}- \frac{23}{14n}}%
,\;\;\;r_{3}={\frac{-1}{28}-\frac{43}{28n}}
\end{equation}
and
\begin{equation}
\sigma _{j}=\left\{ {\frac{11n-7}{132\,b_{1}^{\prime }}}, {\frac{-3(111+13n)%
}{28\,b_{2}^{\prime }}}, {\frac{4(22+n)}{21\,b_{3}^{\prime }}}\right\}_{j}
\end{equation}
In the above expressions, eqs.(\ref{res4}), (\ref{res1}), (\ref{res2}), the
$Z^{Y}$ factors are evaluated at the electroweak scale and normalised to
unity at the unification scale $M_{g}$, and the same is true for the $Z^{o}$
coefficients\footnote{These are normalised to unity at $M_{g}^{o}$!}. If we
set all ``Yukawa'' wavefunction coefficients $Z_{F}^{Y}$ and $Z_{F}^{oY}$
equal to unity, we recover the results \cite{grl} we obtained previously in
the absence of any Yukawa couplings\footnote{In ref. \cite{grl} the results
of the extended model were compared with those of the MSSM {\it without}
top, bottom and tau Yukawa effects.}. As the coefficients $Z_{F}^{Y}\geq 1$
(unlike $Z_{F}^{G}\leq 1$) (see eqs.(\ref{Zs}), (\ref{matrixA})), we note
that their effect is to decrease the strong coupling at the electroweak
scale. However, we must evaluate the effect of these coefficients $Z^{Y}$
{\it relative} to those of the MSSM, $Z^{oY}$; this will be done below, by
expressing them in terms of Yukawa couplings, assuming the same {\it low
energy input} for the Yukawa couplings in both the EMSSM and MSSM models.
This means we take the same values at the electroweak scale for the top,
bottom and $\tau $ Yukawa couplings in both models. Using the expressions
of $Z^{Y}$ and $Z^{oY}$, the result will be expressed as a function of
ratios of the Yukawa couplings, following the details presented at the end
of the previous subsection, after eq.(\ref{zgmssm}).
\noindent For the strong coupling eq.(\ref{res4}) we obtain the following result: $%
1/\alpha _{3}(M_{z})=1/\alpha _{3}^{o}(M_{z})+\delta \alpha _{3}^{-1}(M_{z})$
with
\begin{eqnarray} \label{deltaalphastrong}
\delta \alpha _{3}^{-1}(M_{z}) &=&-\frac{11563}{2013\pi }\ln \left[ \frac{%
\alpha _{g}}{\alpha _{g}^{o}}\right] +\sum_{j=1}^{3}\ln \left\{ 1+\frac{%
b_{j}^{\prime }}{n}\left[ \frac{\alpha _{g}}{\alpha _{g}^{o}}-1\right]
\right\} ^{{\cal U}_{j}} \nonumber \\
&&+\frac{3}{854\pi }\left\{ 4\ln \left[ \frac{y_{\tau }(0)}{y_{\tau }(M_{z})}%
\right] +45\ln \left[ \frac{y_{b}(0)}{y_{b}(M_{z})}\right] +23\ln \left[
\frac{y_{t}(0)}{y_{t}(M_{z})}\right] \right\} \nonumber \\
&&-\frac{3}{854\pi }\left\{ 4\ln \left[ \frac{y_{\tau }^{o}(0)}{y_{\tau
}^{o}(M_{z})}\right] +45\ln \left[ \frac{y_{b}^{o}(0)}{y_{b}^{o}(M_{z})}%
\right] +23\ln \left[ \frac{y_{t}^{o}(0)}{y_{t}^{o}(M_{z})}\right] \right\}
\end{eqnarray}
and
\begin{equation}
{\cal U}_{j}=\left\{ \frac{-2923}{14091\pi }\frac{n}{b_{1}^{\prime }},\frac{%
4293}{854\pi }\frac{n}{b_{2}^{\prime }},\frac{130}{183\pi }\frac{n}{%
b_{3}^{\prime }}\right\} _{j}
\end{equation}
$j=\{1,2,3\}$. The terms which involve $\alpha_g/\alpha_g^o$ on the r.h.s.
of eq.(\ref{deltaalphastrong}) drive $\alpha_3(M_z)$ above its MSSM value.
We also see that, imposing the same electroweak scale values for the Yukawa
couplings in both models, the overall effect of the Yukawa couplings
depends on $\ln ({y_{\tau }(0)}/{y_{\tau }^{o}(0)})$,
$\ln ({y_{b}(0)}/{y_{b}^{o}(0)})$ and $\ln ({y_{t}(0)}/{y_{t}^{o}(0)})$.
The contribution of these terms is negative and hence $\alpha _{3}(M_{z})$
is further increased from its MSSM value. The reason for this is that the
running of the Yukawa couplings is similar in both models\footnote{i.e.,
they have the same anomalous dimension coefficients.}, but in the extended
model they decrease faster, because the gauge couplings in the EMSSM at any
scale are increased from their corresponding MSSM values. This means that,
for a fixed low energy input for the Yukawa couplings and the $SU(2)$ and
$U(1)$ gauge couplings, the net effect due to the presence of the extra
multiplets is to {\it decrease} the Yukawa couplings at the unification
scale while increasing the value of the unified coupling. Hence
$y(0)<y^{o}(0)$ for the top, bottom and tau Yukawa couplings, and
$\alpha_3(M_z)$ is increased. Note that this holds when we have the same
superpotential in both models, which means that the coefficients in front
of the Yukawa couplings in their one-loop running equations are the same as
those of the MSSM.
The Yukawa effects enter eq.(\ref{deltaalphastrong}) as logs of ratios
$y_\lambda(0)/y_\lambda(Q)$ with a positive sign in front; this might no
longer be true if the superpotential of the extended model is changed.
Still, it seems that a domain with large initial Yukawa couplings
$y_\lambda(0)$ would favour a decrease of the strong coupling at the
electroweak scale, particularly when they are larger than their electroweak
scale values. This would in turn favour a fast rate of approach towards
infrared fixed points for ratios of Yukawa to any gauge coupling
\cite{sun}. However, getting a large initial Yukawa coupling with the same
low energy input values as in the MSSM is not easy, due to the competing
effect of the gauge couplings mentioned above, which seems difficult to
avoid. In cases with a unified coupling larger than in the MSSM, as happens
in our model due to the extra heavy states, the gauge couplings increase
with the scale, causing the Yukawa couplings to decrease as the scale
increases. This could eventually be compensated for by increasing the
values of the coefficients in front of the Yukawa couplings in their
one-loop running equations. This is possible if the Yukawa structure is
richer than in the MSSM; such a particular case will be explored in the
next subsection.
The unification scale is given by (see eq.(\ref{res1}))
\begin{equation}
\frac{M_{g}}{M_{g}^{o}}=\left[ \frac{\alpha _{g}}{\alpha _{g}^{o}}\right] ^{%
\frac{1711}{1098}}\prod_{j=1}^{3}\left\{ 1+\frac{b_{j}^{\prime }}{n}\left[
\frac{\alpha _{g}}{\alpha _{g}^{o}}-1\right] \right\} ^{f_{j}}\prod_{\lambda
=\tau ,b,t}\left[ \frac{y_{\lambda }(0)}{y_{\lambda }(M_{z})}\frac{%
y_{\lambda }^{o}(M_{z})}{y_{\lambda }^{o}(0)}\right] ^{g_{\lambda }}
\label{res1y}
\end{equation}
where
\begin{equation}
f_{j}=\left\{ \frac{1133}{15372}\frac{n}{b_{1}^{\prime }},\frac{-2277}{1708}%
\frac{n}{b_{2}^{\prime }},\frac{32}{549}\frac{n}{b_{3}^{\prime }}\right\}
_{j}
\end{equation}
and
\begin{equation}
g_{\lambda }=\left\{ \frac{93}{1708},\frac{-128}{1708},\frac{1}{1708}%
\right\} _{\lambda } \label{mg}
\end{equation}
with $\lambda =\{\tau,b,t\}$ in this order. The presence of the factor
\begin{equation}
K=\prod_{\lambda =\tau ,b,t}\left[ \frac{y_{\lambda }(0)}{y_{\lambda }(M_{z})%
}\frac{y_{\lambda }^{o}(M_{z})}{y_{\lambda }^{o}(0)}\right] ^{g_{\lambda }}
\end{equation}
in the expression for $M_{g}/M_{g}^{o}$ prevents us from simplifying this
analytical expression any further. To make predictions one needs, just as
in the previous case, the initial values (at the unification scale) of the
Yukawa couplings and their values at the electroweak scale. Since the
latter correspond to the masses of the third family it is reasonable to
take them to be the same in the MSSM and in the extended model.
Hence $K$ depends on $y_{\lambda }(0)/y_{\lambda }^{o}(0)$ only. This
dependence is very mild, however, because the powers $g_{\lambda }$ are
very small; therefore, to a good approximation, we have
\begin{equation}
K\approx 1
\end{equation}
We checked this numerically\footnote{%
For generic values for Yukawa couplings for high and low $\tan\beta$ case
see ref. \cite{kazakov}.} by using the one-loop running for Yukawa couplings
with the one-loop running for the gauge couplings, for the MSSM case and
for the extended model (in the presence of the extra matter), and observed
that the ratios $y_{\lambda }^{o}(0)/y_{\lambda }(0)$ are of order 10,
giving $K=1$--$2$, with larger $K$ (up to 1.8) for a larger unified
coupling. The main contribution to increasing the value of $K$ comes from
the bottom quark, as $K$ contains a negative power of
$y_b(0)/y^o_b(0)\leq 1$, giving an enhancement factor larger than unity,
but still very close to one due to the small $g_\lambda$, as mentioned
(this is further suppressed by the tau contribution). For the $n=1,2,3$
case the same discussion about the validity of these results as that
presented in \cite{grl} applies. Hence we have a clear prediction for
the unification scale,
\begin{equation}
\frac{M_{g}}{M_{g}^{o}}\approx \left[ \frac{\alpha _{g}}{\alpha _{g}^{o}}%
\right] ^{\frac{1711}{1098}}\prod_{j=1}^{3}\left\{ 1+\frac{b_{j}^{\prime }}{n%
}\left[ \frac{\alpha _{g}}{\alpha _{g}^{o}}-1\right] \right\} ^{f_{j}}
\end{equation}
This increase of the unification scale is very close to that found in
reference \cite{grl}; for $\alpha_g\leq 10\,\alpha_g^o\approx 0.4$ it is
less than $\approx 3.5$ and depends on the values of $n$, with the largest
value for the smallest $n$. For large $n$ the results are stable, as we
previously mentioned. We conclude that the effects of the Yukawa couplings
on the scale are small and thus there is no need for a further numerical
calculation.
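Since $K\approx 1$, this prediction is straightforward to evaluate; a short
sketch (with an illustrative MSSM input $\alpha _{g}^{o}=0.04$) is:
\begin{verbatim}
b = [33.0/5.0, 1.0, -3.0]
f_num = [1133.0/15372.0, -2277.0/1708.0, 32.0/549.0]

def Mg_ratio(alpha_g, alpha_g0, n):
    """M_g/M_g^o with K set to 1 (gauge effects only)."""
    bp = [bi + n for bi in b]
    r = (alpha_g/alpha_g0)**(1711.0/1098.0)
    for j in range(3):
        f_j = f_num[j]*n/bp[j]
        r *= (1.0 + bp[j]/n*(alpha_g/alpha_g0 - 1.0))**f_j
    return r

for ag in (0.1, 0.2, 0.3):
    print(ag, Mg_ratio(ag, 0.04, n=30))
\end{verbatim}
For the inputs scanned, the increase stays well below the bound of
$\approx 3.5$ mentioned above.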
Finally, the bare mass of the heavy states is given by
\begin{equation}
\frac{\mu _{g}}{M_{g}^{o}}=\left[ \frac{\alpha _{g}}{\alpha _{g}^{o}}\right]
^{\frac{1711}{1098}+\frac{104039}{12078n}}\exp {\left[ \frac{2\pi }{n}\left(
\frac{1}{\alpha _{g}}-\frac{1}{\alpha _{g}^{o}}\right) \right] }%
\prod_{j=1}^{3}\left\{ 1+\frac{b_{j}^{\prime }}{n}\left[ \frac{\alpha _{g}}{%
\alpha _{g}^{o}}-1\right] \right\} ^{{\cal R}_{j}}\prod_{\lambda =\tau ,b,t}%
\left[ \frac{y_{\lambda }(0)}{y_{\lambda }(M_{z})}\frac{y_{\lambda
}^{o}(M_{z})}{y_{\lambda }^{o}(0)}\right] ^{{\cal S}_{\lambda }} \nonumber
\label{res3}
\end{equation}
\begin{equation}
{\cal R}_{j}=f_{j}+\left\{ \frac{10909}{169092b_{1}^{\prime }},\frac{-15339}{%
1708b_{2}^{\prime }},\frac{1460}{549b_{3}^{\prime }}\right\} _{j}
\end{equation}
\begin{equation}
{\cal S}_{\lambda }=g_{\lambda }+\left\{ \frac{-187}{1708n},\frac{-716}{1708n}%
,\frac{-755}{1708n}\right\} _{\lambda }
\end{equation}
with $j=\{1,2,3\}$ and $\lambda =\{\tau ,b,t\}$. We again take the Yukawa
couplings at the $M_z$ scale to be equal in both the MSSM and our extended
model. The dependence of the scale $\mu_g$ on the Yukawa couplings
$y_\lambda(0)/y^o_\lambda(0)$ is again weak for large $n$, and therefore
the results stay close to those of reference \cite{grl}. We would like to
note that our results for the unification scale, the strong coupling and
the decoupling scale $\mu_g$ were all computed in terms of the Yukawa
couplings at the unification and electroweak scales. The values of these
Yukawa couplings can be determined numerically from their one-loop running
only, and we do not need any numerical work for the gauge couplings'
running, thus simplifying the calculation of $M_g$, $\,\alpha_3(M_z)$ and
$\,\mu_g$.
\subsection{Family symmetric Yukawa couplings.}
The predictions we make for the strong coupling and the
unification scale depend on the
type of superpotential we assume; this is so because the running of Yukawa
couplings (and therefore their effect on the running of the gauge couplings)
depends on the type of superpotential. We considered in the previous section
a MSSM type of superpotential and its predictions. We consider now a
different superpotential, to show how these predictions change. The pattern
of the results we present could prove helpful in designing models with
better phenomenological predictions.
We now turn to a discussion of the implications of a model designed to give
an acceptable structure for all quark and lepton masses, including the
light generations \cite{ibanezross}. In this case the new superpotential
has the following form:
\begin{equation} \label{fam_sym_pot}
W=%
\sum_{i,j=1}^{3}(h_{ij}^{u}Q_{i}U_{j}H_{2}^{ij}+h_{ij}^{d}Q_{i}
D_{j}H_{1}^{ij} +h_{ij}^{l}L_{i}e_{j}H_{1}^{ij})
\end{equation}
where the structure in the light quark mass matrix is driven via mixing in
the Higgs sector so that the two light Higgs doublets of the MSSM are
mixtures of the Higgs doublets $H_{1}^{ij}$ and $H_{2}^{ij}$. This form
is able to reproduce the values of the masses of the third generation for
the case where the Yukawa couplings are given by their infra-red fixed
point values in terms of the gauge couplings \cite{sun}. We use this as our
starting point and take the fixed point values for the $h_{ij}^{u}$. This
means they are all equal, independent of $i,j$. Similarly we take
$h_{ij}^{d}$ and $h_{ij}^{l}$ to be $i,j$ independent and at their fixed
points. With this we find
\begin{equation} \label{w1}
\frac{d}{dt}\ln Z_{F}^{Y}(t)=\sum_{\nu =\tau ,b,t}^{{}}{\cal B} _{\nu }({F}%
)y_{\nu }(t)+{\cal O}(\alpha ^{2})
\end{equation}
where ${\cal B_{\nu }({F})}$ is presented in the Appendix. From
eq.(\ref{w1}) we obtain the following relations, valid at any scale
${\cal M}\geq\mu_g$:
\begin{eqnarray}
Z_{Q_{j}}^Y({\cal M})&=&\left[{Z_{b_{R}}^{Y}({\cal M})} {Z_{t_{R}}^{Y}({\cal %
M})} \right] ^{1/2} \\
{Z_{L_{j}}^{Y}({\cal M})}&=& \left[ {Z_{\tau _{R}}^{Y}({\cal M})} \right]%
^{1/2} \nonumber \\
{Z_{H_{1}^{ij}}^{Y}({\cal M})} &=& \left[ {Z_{b_{R}}^{Y}({\cal M})} \right]%
^{1/2} \left[ {Z_{\tau _{R}}^{Y}({\cal M})} \right]^{1/6} \nonumber \\
{Z_{H_{2}^{ij}}^{Y}({\cal M})} &=& \left[ {{Z_{t_{R}}^{Y}}({\cal M})} \right]%
^{1/2} \nonumber
\end{eqnarray}
For the case $M_Z\leq{\cal M}\leq \mu_g$ the above type of superpotential
is no longer valid and an MSSM-like superpotential applies. This is so
because the extra heavy states we consider, among which are all the Higgs
fields\footnote{This implies that the number of extra heavy states is
larger than 5.} of eq.(\ref{fam_sym_pot}), decouple at $\mu_g$. Below this
scale, only the third generation's Yukawa couplings give important
contributions to the wavefunction renormalisation coefficients
$Z^Y_{\phi_k}$, while for the first two generations the factors
$Z_{\phi_j}$, $\,j=\{1,2\}$, evolve only through their gauge contributions,
$Z_{\phi_j}^G$, $\,j=\{1,2\}$. We thus have that for
$M_Z\leq{\cal M}\leq \mu_g$
\begin{equation} \label{ww1}
\frac{d}{dt}\ln Z_{F}^{Y}(t)=\sum_{\nu =\tau ,b,t}^{{}}{\cal A} _{\nu }({F}%
)y_{\nu }(t)+{\cal O}(\alpha ^{2})
\end{equation}
which, after integration below $\mu_g$, gives that
\begin{eqnarray}
\frac{Z_{Q_{3}}^Y({\cal M})}{Z_{Q_{3}}^{Y}(\mu_g)} &=&\left[ \frac{%
Z_{b_{R}}^{Y}({\cal M})}{ Z_{b_{R}}^{Y}(\mu_g)} \frac{Z_{t_{R}}^{Y}({\cal M})%
}{Z_{t_{R}}^{Y}(\mu_g)} \right] ^{1/2} \\
\frac{Z_{L_{3}}^{Y}({\cal M})}{Z_{L_{3}}^{Y}({\mu_g})} &=& \left[ \frac{%
Z_{\tau _{R}}^{Y}({\cal M})}{Z_{\tau _{R}}^{Y}(\mu_g)} \right]^{1/2}
\nonumber \\
\frac{Z_{H_{1}}^{Y}({\cal M})}{Z_{H_{1}}^{Y}(\mu_g)} &=& \left[ \frac{%
Z_{b_{R}}^{Y}({\cal M})}{Z_{b_{R}}^{Y}({\mu_g})} \right]^{3/2} \left[ \frac{%
Z_{\tau _{R}}^{Y}({\cal M})}{Z_{\tau_{R}}^{Y}(\mu_g)} \right]^{1/2}
\nonumber \\
\frac{Z_{H_{2}}^{Y}({\cal M})}{Z_{H_{2}}^{Y}({\mu_g})} &=& \left[ \frac{{%
Z_{t_{R}}^{Y}}({\cal M})}{{Z_{t_{R}}^{Y}}({\mu_g})} \right]^{3/2} \nonumber
\end{eqnarray}
Note also the similarity with eq.(\ref{wfr}), as expected, with the
difference that the initial condition for any $Z^Y$ is now set at the scale
$\mu_g$ (with $Z^Y(\mu_g)\not=1$!) rather than at the unification scale.
Following the procedure outlined above we obtain, in this case, the
following change to the strong coupling:
\begin{eqnarray}
\delta \alpha _{3}^{-1}(M_{z}) &=&-\frac{470}{77\pi }\ln \left[ \frac{\alpha
_{g}}{\alpha _{g}^{o}}\right] +\sum_{j=1}^{3}\ln \left\{ 1+\frac{%
b_{j}^{\prime }}{n}\left[ \frac{\alpha _{g}}{\alpha _{g}^{o}}-1\right]
\right\} ^{\omega _{j}} \nonumber \label{WWW} \\
&&+\frac{9}{28\pi }\left[ \ln Z_{b_{R}}^{Y}(\mu _{g})-\ln
Z_{t_{R}}^{Y}(\mu_{g})-\frac{1}{3}\ln Z_{\tau _{R}}^{Y}(\mu _{g})\right]
\nonumber \\
&&+\frac{3}{28\pi} \left[5\ln\frac{Z_{b_{R}}^{Y}(M_{z})}{Z_{b_{R}}^{Y}(\mu_g)%
}+ \ln \frac{Z_{\tau_{R}}^{Y}(M_{z})}{Z_{\tau_{R}}^{Y}(\mu_g)} +3\ln \frac{%
Z_{t_{R}}^{Y}(M_z)}{Z_{t_{R}}^{Y}(\mu_g)}\right] \nonumber \\
&&-\frac{3}{28\pi }\left[ 5\ln {Z_{b_{R}}^{oY}(M_{z})}+\ln{Z_{\tau
_{R}}^{oY}(M_{z})}+3\ln {Z_{t_{R}}^{oY}(M_{z})}\right]
\end{eqnarray}
where $\omega _{j}$ is given in eq.(\ref{omegaaa})
and $\delta \alpha _{3}^{-1}(M_{z})=\alpha _{3}^{-1}(M_{z})-\alpha
_{3}^{o-1}(M_{z})$. Setting all $Z^Y$ equal to unity in (\ref{WWW}) gives
the previous result \cite{grl}, where only gauge effects on $\alpha_3(M_z)$
were considered, leading to an increase of its value from the MSSM value.
We also see that the Yukawa effects give in this case two negative signs in
the square bracket of eq.(\ref{WWW}) involving the $Z^{Y}(\mu _{g})$
factors, which make its contribution rather small and negative; for
comparison, see the result of eq.(\ref{res4}) for the MSSM superpotential.
This contribution, had the signs been positive, was expected to be the
largest, as the $Z$ factors have a steeper running between $M_g$ (where
they are equal to unity) and $\mu_g$ than between $\mu_g$ and $M_z$, due to
the larger coefficients in (\ref{w1}) than in (\ref{ww1}). The second-last
and last square brackets in eq.(\ref{WWW}) give a small correction to the
strong coupling and in fact they largely cancel each other. We can further
express $\delta\alpha_3^{-1}(M_z)$ as a function of the Yukawa couplings
evaluated at $M_g$, at $\mu_g$ and at the electroweak scale.
\begin{eqnarray} \label{deltaalpha}
\delta\alpha^{-1}_3(M_z)&=& -\frac{11563}{2013\pi}\ln\left[\frac{\alpha_g}{%
\alpha_g^o}\right] +\sum_{i=1}^{3}{\cal Z}_j\ln \left\{ 1+\frac{b_{j}^{\prime
}}{n} \left[\frac{\alpha_g}{\alpha_{g}^{o}}-1\right]\right\} \nonumber \\
&&+\left\{\frac{243}{1022\pi}\ln\left[\frac{y_b(0)}{y_b(\mu_g)}\right] -%
\frac{45}{511\pi}\ln\left[\frac{y_{\tau}(0)}{y_{\tau}(\mu_g)}\right] -\frac{%
225}{1022\pi}\ln\left[\frac{y_t(0)}{y_t(\mu_g)}\right]\right\} \nonumber \\
&&+\left\{\frac{135}{854\pi}\ln \left[\frac{y_b(\mu_g)}{y_b(M_z)}\right] +%
\frac{6}{427\pi}\ln\left[\frac{y_\tau(\mu_g)}{y_\tau(M_z)}\right] +\frac{69}{%
854\pi}\ln\left[\frac{y_t(\mu_g)}{y_t(M_z)}\right]\right\} \nonumber \\
&&-\left\{\frac{135}{854\pi}\ln \left[\frac{y_b^o(0)}{y_b^o(M_z)}\right] +%
\frac{6}{427\pi}\ln\left[\frac{y_\tau^o(0)}{y_\tau^o(M_z)}\right] +\frac{69}{%
854\pi}\ln\left[\frac{y_t^o(0)}{y_t^o(M_z)}\right]\right\}
\end{eqnarray}
with ${\cal Z}_j$ given by
\begin{equation}
{\cal Z}_j= \left\{\frac{1}{\pi}\left(\frac{-2923}{14091}+\frac{351}{365
b_1^{\prime}}\right), \frac{1}{\pi}\left(\frac{4293}{854}+\frac{-6129}{1022
b_2^{\prime}}\right), \frac{1}{\pi}\left(\frac{130}{183}+\frac{486}{511
b_3^{\prime}}\right)\right\}_j
\end{equation}
The terms in (\ref{deltaalpha}) which contain $\alpha_g/\alpha_g^o$ have an
overall increasing effect on the strong coupling, driving it towards values
larger than in the MSSM, as can be seen by simply plotting their sum with
respect to $\alpha_g$.
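A few lines of code suffice for this; the sketch below evaluates the
$\alpha _{g}$-dependent terms of eq.(\ref{deltaalpha}) for illustrative
inputs (a negative value of $\delta \alpha _{3}^{-1}$ means
$\alpha _{3}(M_{z})$ is pushed above its MSSM value):
\begin{verbatim}
import math

b = [33.0/5.0, 1.0, -3.0]

def dalpha3_inv_gauge(alpha_g, alpha_g0, n):
    """alpha_g-dependent part of eq.(deltaalpha) only."""
    bp = [bi + n for bi in b]
    Zc = [(-2923.0/14091.0 + 351.0/(365.0*bp[0]))/math.pi,
          ( 4293.0/854.0   - 6129.0/(1022.0*bp[1]))/math.pi,
          (  130.0/183.0   +  486.0/(511.0*bp[2]))/math.pi]
    r = alpha_g/alpha_g0
    s = -11563.0/(2013.0*math.pi)*math.log(r)
    for j in range(3):
        s += Zc[j]*math.log(1.0 + bp[j]/n*(r - 1.0))
    return s

for ag in (0.05, 0.1, 0.2, 0.3):
    print(ag, dalpha3_inv_gauge(ag, 0.04, n=30))  # negative values
\end{verbatim}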
We must therefore analyse the effect the Yukawa couplings have in
eq.(\ref{deltaalpha}). Their effect on $\alpha_3(M_z)$ is not always
opposite to that of the gauge couplings. In the expression above we see
three types of contributions. The first one is due to Yukawa effects
between the unification scale and the (effective) decoupling scale $\mu_g$,
and arises from the family symmetric Yukawa couplings we chose above this
scale. We can {\it assume} that the values of $y_\lambda(0)$ are larger
than $y_\lambda(\mu_g)$; this would be expected given that we considered
family symmetric couplings, which is possible if they flow to infrared
fixed point values, something favoured by large initial Yukawa
couplings\footnote{Considering $y_\lambda(0)$ smaller than
$y_\lambda(\mu_g)$ is not allowed in this model because a rapid flow to
infrared fixed points was assumed here, and hence generation independent
Yukawa couplings are possible for $y_\lambda(Q)/y_\lambda(0)\leq 1$, with
$Q\leq M_g$.}. The overall effect would therefore be to increase the strong
coupling, due to the negative signs of the tau and top terms in the first
curly bracket of eq.(\ref{deltaalpha}), which dominate the bottom
contribution. This is unlike equation (\ref{deltaalphastrong}), where the
logs of the Yukawa ratios had a positive sign in front (but were themselves
negative). Here they are positive, but the sign in front is not positive
for all of the bottom, tau and top couplings.
The second contribution comes from terms like
$y_\lambda(\mu_g)/y_\lambda(M_z)$ and the third contribution comes from the
MSSM terms, $y^o_\lambda(0)/y^o_\lambda(M_z)$. These terms largely cancel
because the running of the Yukawa couplings below the $\mu_g$ scale is the
same in the MSSM and the EMSSM, and we have the same electroweak values for
$y_\lambda(M_z)$ and $y^o_\lambda(M_z)$ (these are inputs of our analysis).
Since the Yukawa effects in the MSSM, which enter as ratios
$y^o_\lambda(0)/y^o_\lambda(M_z)$, are known to be small, and the scale
$\mu_g$ is in general heavy\footnote{The extra states are not protected by
any chiral symmetry and therefore their mass is heavy.}, we conclude that
the last two curly brackets of eq.(\ref{deltaalpha}) do not solve the
problem of how to reduce the small discrepancy between the predicted value
of the strong coupling and its experimental value.
To conclude, it is not easy to reduce the strong coupling and increase the
unification scale significantly, and it seems that the observation we made
after eq.(\ref{mat2}) is true in general for models which consider only {\it %
complete} additional representations.
\section{The large $n$ limit.}
Our analysis has so far assumed that perturbation theory works well up to
the unification scale, and that the presence of a given number of states
does not affect its convergence. However, for a large number of states {\it %
and} a large unified coupling one should carefully consider the limits of
the two-loop perturbative expansion we applied in this work as well as in
ref. \cite{grl}. It is the purpose of this section to explore the
phenomenological implications for the case when these limits are reached, in
the absence of Yukawa effects.
In the case of large $n$ and large $\alpha _{g}$ the perturbative expansion
breaks down. An estimate of where breakdown occurs can be obtained by
comparing two-loop beta function terms with three-loop terms\footnote{For
this, expand the denominator of eq.(\ref{shifmanbetaalpha}) to get the
three-loop terms.} or, equivalently, one-loop terms in the expansion of the
anomalous dimensions of the fields with two-loop terms. Although the higher
order terms have an additional power of the coupling, they are also
proportional to $n$, and in the large $n$ limit this compensates for the
additional coupling. This happens for $n\alpha \approx n^{2}\alpha
^{2}/(4\pi )$, or $n\alpha \approx {\cal O}(4\pi )$. In the case of large
$n$ the perturbation series in $\alpha $, as well as the perturbation
series for the anomalous dimensions of the chiral fields, can be resummed
\cite{largeN} to leading order ${\cal O}(1/n)$. This calculation was done
by Jones in \cite{largeN} and the reader is referred to this work for full
details. We will use these results for the resummed anomalous dimensions of
the light fields of the spectrum for the case where the Yukawa couplings
are ignored.
The anomalous dimension for any matter field is given by
\begin{equation}
\gamma _{F}=-\sum_{k=1}^{3}\frac{\alpha _{k}}{2\pi }G\left( \frac{\alpha
_{k}n}{4\pi }\right) C_{k}(R_{F})
\end{equation}
Here $F$ stands for a matter field in the representation $R_{F}$ of the
gauge group $SU(k)$\footnote{For the $U(1)$ factor, $C_{k}(R_{F})$ is
$\frac{3}{5}Y^{2}$, where $Y$ is the usual weak hypercharge \cite{largeN}.}.
The function $G$ is defined by
\begin{equation}
G(\epsilon )=\frac{1}{2}\frac{\Gamma (3-2\epsilon )}{\Gamma (2-\epsilon )^{2}%
}\frac{\sin (\pi \epsilon )}{\pi \epsilon }
\end{equation}
and has a pole for $\epsilon =3/2$ which sets the radius of convergence of
the {\it resummed} series. For $G=1$ we recover the results of eq.(\ref{Zs}%
). The running of the gauge couplings is now given by
\begin{eqnarray}
\alpha _{i}^{-1}(M_{z})& =-\delta _{i}+\alpha _{g}^{-1}+\frac{b_{i}}{2\pi }%
\ln \left[ \frac{M_{g}}{M_{z}}\right] +\frac{n}{2\pi }\ln \left[ \frac{M_{g}%
}{\mu _{g}}\right] +\frac{1}{4\pi }\sum_{j=1}^{3}\frac{n}{b_{j}^{\prime }}%
\left[ 2\delta _{ij}\lambda _{j}-\frac{b_{ij}}{b_{j}}\right] \ln \left[
\frac{\alpha _{g}}{\alpha _{j}(\mu _{g})}\right] \nonumber \\
& \nonumber \\
& +\frac{1}{4\pi }\sum_{j=1}^{3}\frac{b_{ij}}{b_{j}}\ln \left[ \frac{\alpha
_{g}}{\alpha _{j}(M_{z})}\right] +{\cal K}_{i} \label{shifman}
\end{eqnarray}
\begin{table}[tbp]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$n$ & $\alpha_g$ & $M_g/M_g^o$ & $\alpha_3(M_z)$ \\ \hline\hline
30 & 0.1 & 1.359 & 0.12700 \\ \hline
30 & 0.2 & 1.760 & 0.12796 \\ \hline
30 & 0.3 & 2.043 & 0.12850 \\ \hline
40 & 0.1 & 1.355 & 0.12698 \\ \hline
40 & 0.2 & 1.745 & 0.12786 \\ \hline
40 & 0.3 & 2.019 & 0.12835 \\ \hline
50 & 0.1 & 1.353 & 0.12696 \\ \hline
50 & 0.2 & 1.737 & 0.12780 \\ \hline
50 & 0.3 & 2.006 & 0.12828 \\ \hline
60 & 0.1 & 1.351 & 0.12695 \\ \hline
60 & 0.2 & 1.732 & 0.12777 \\ \hline
60 & 0.3 & 1.999 & 0.12823 \\ \hline
\end{tabular}
\end{center}
\caption{The values of the ratio $M_{g}/M_{g}^{o}$ and of the strong
coupling $\protect\alpha _{3}(M_{Z})$ as functions of (large) $n$ and of
$\protect\alpha _{g}$. We always have $(n\protect\alpha _{g})/(4\protect\pi
)<3/2$.}
\label{table:1}
\end{table}
where $b_{ij}$ and $b_{j}$ are the two-loop and one-loop beta function
coefficients of the MSSM, and where $\lambda_1=0$, $\,\lambda_2=2$,
$\,\lambda_3=3$. The ${\cal K}_{i}$ are the resummed corrections induced in
the large $n$ limit, ignoring Yukawa effects, and are given by
\begin{equation}
{\cal K}_{i}=\frac{1}{4\pi }\sum_{j=1}^{3}(b_{ij}-2\lambda _{j}\delta
_{ij}b_{j})\frac{1}{b_{j}^{\prime }}\int_{\alpha _{j}(\mu _{g})}^{\alpha
_{g}}\frac{d\alpha _{j}}{\alpha _{j}}\left[ -1+G\left( \frac{\alpha _{j}n}{%
4\pi }\right) \right] \label{kapa}
\end{equation}
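Both $G(\epsilon )$ and the logarithmic integral entering ${\cal K}_{i}$
are straightforward to evaluate numerically; a minimal sketch, using a
simple trapezoidal rule in $\ln \alpha $ and illustrative endpoints for the
couplings, is:
\begin{verbatim}
import math

def G(eps):
    """Resummation function; its pole at eps = 3/2 sets the radius
    of convergence of the resummed series."""
    if eps == 0.0:
        return 1.0
    return (0.5*math.gamma(3.0 - 2.0*eps)/math.gamma(2.0 - eps)**2
            *math.sin(math.pi*eps)/(math.pi*eps))

def K_log_integral(a_lo, a_hi, n, steps=2000):
    """int d(ln alpha) [G(alpha n/(4 pi)) - 1] from a_lo to a_hi."""
    la, lb = math.log(a_lo), math.log(a_hi)
    h = (lb - la)/steps
    total = 0.0
    for i in range(steps + 1):
        a = math.exp(la + i*h)
        w = 0.5 if i in (0, steps) else 1.0
        total += w*h*(G(a*n/(4.0*math.pi)) - 1.0)
    return total

print(G(0.0), G(0.5))                    # G -> 1 as eps -> 0
print(K_log_integral(0.05, 0.3, n=30))   # illustrative endpoints
\end{verbatim}
The full ${\cal K}_{i}$ then follow by multiplying such integrals by the
prefactors $(b_{ij}-2\lambda _{j}\delta _{ij}b_{j})/(4\pi b_{j}^{\prime })$
and summing over $j$.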
The effect of ${\cal K}_{i}$'s on the unification scale and the strong
coupling may now be readily computed to give
\begin{equation}
\frac{M_{g}}{M_{g}^{o}}=e^{\frac{5\pi }{14}(-{\cal K}_{1}+{\cal K}_{2})}%
\left[ \frac{\alpha _{g}}{\alpha _{g}^{o}}\right] ^{31/21}\left[ \frac{%
\alpha _{3}(M_{z})}{\alpha _{3}(M_{z})^{o}}\right] ^{4/21}\left[ \frac{%
\alpha _{g}}{\alpha _{1}(\mu _{g})}\right] ^{n/(12b_{1}^{\prime })}\left[
\frac{\alpha _{g}}{\alpha _{2}(\mu _{g})}\right] ^{-39n/(28b_{2}^{\prime })}%
\left[ \frac{\alpha _{g}}{\alpha _{3}(\mu _{g})}\right] ^{4n/(21b_{3}^{%
\prime })} \label{emg}
\end{equation}
and
\begin{equation}
\alpha _{3}^{-1}(M_{z})=\alpha _{3}^{o-1}(M_{z})-\frac{470}{77\pi }\ln \left[
\frac{\alpha _{g}}{\alpha _{g}^{o}}\right] +\sum_{j=1}^{3}\ln \left[ \frac{%
\alpha _{g}}{\alpha _{j}(\mu _{g})}\right] ^{\omega _{j}}-\frac{17}{14\pi }%
\ln \left[ \frac{\alpha _{3}(M_{z})}{\alpha _{3}^{o}(M_{z})}\right] +\left(
{\cal K}_{3}-\frac{12}{7}{\cal K}_{2}+\frac{5}{7}{\cal K}_{1}\right)
\label{alst}
\end{equation}
with $\omega_j$ given in eq.(\ref{omegaaa}).
To use these equations it is necessary to compute the values of $\alpha
_{j}(\mu _{g})$. We do this by writing the equations equivalent to
eq.(\ref{shifman}) for $\alpha _{i}^{-1}(\mu _{g})$ and eliminating the
term $n/(2\pi )\ln \left( M_{g}/\mu _{g}\right) $ between these equations
for $\alpha _{i}$. This gives
\begin{equation}
\alpha _{i}^{-1}(\mu _{g})=\alpha _{g}^{-1}+\frac{5}{28}\frac{b_{i}^{\prime }%
}{n}\left( b_{2}{\cal K}_{1}-b_{1}{\cal K}_{2}\right) +{\cal K}_{i}-\frac{%
b_{i}^{\prime }}{n}\left[ \frac{1}{\alpha _{g}}-\frac{1}{\alpha _{g}^{o}}%
\right] -\frac{2336}{231n}\frac{b_{i}^{\prime }}{2\pi }\ln \left[ \frac{%
\alpha _{g}}{\alpha _{g}^{o}}\right] +
\sum_{j=1}^{3}\ln \left[ \frac{\alpha _{g}}{\alpha _{j}(\mu _{g})}\right]
^{v_{ij}} \label{alphas}
\end{equation}
with
\begin{equation}
v_{ij}=\frac{1}{4\pi b_{j}^{\prime }}(b_{ij}+2n\lambda _{j}\delta _{ij})+%
\frac{b_{i}^{\prime }}{2\pi }\left\{ \frac{7}{132b_{1}^{\prime }},\frac{333}{%
28b_{2}^{\prime }},\frac{-88}{21b_{3}^{\prime }}\right\} _{j}
\end{equation}
Note that the quantities ${\cal K}_{i}$ depend on $\alpha _{j}(\mu _{g})$,
as seen from eq.(\ref{kapa}). For given $\alpha _{g}$ and large $n$ we
solve (\ref{alphas}) to get the $\alpha _{j}(\mu _{g})$, which are then
used to compute the ratio $M_{g}/M_{g}^{o}$ and $\alpha _{3}(M_{z})$ of
eqs.(\ref{emg}), (\ref{alst}). The results, for values of $n$ and $\alpha
_{g}$ which avoid the pole in $G$, are presented in Table \ref{table:1}.
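A sketch of this procedure is given below; it solves eq.(\ref{alphas}) by
fixed-point iteration with the ${\cal K}_{i}$ corrections dropped for
simplicity, and uses the standard MSSM two-loop coefficients $b_{ij}$;
convergence is not guaranteed for extreme inputs:
\begin{verbatim}
import math

b   = [33.0/5.0, 1.0, -3.0]
bij = [[199.0/25.0, 27.0/5.0, 88.0/5.0],  # standard MSSM two-loop
       [  9.0/5.0,  25.0,     24.0    ],  # gauge matrix
       [ 11.0/5.0,   9.0,     14.0    ]]
lam   = [0.0, 2.0, 3.0]
extra = [7.0/132.0, 333.0/28.0, -88.0/21.0]  # n-piece of v_{ij}

def alphas_at_mug(alpha_g, alpha_g0, n, itmax=200):
    """Iterate eq.(alphas) for alpha_j(mu_g), K_i terms dropped."""
    bp = [x + n for x in b]
    a = [alpha_g]*3                       # initial guess
    for _ in range(itmax):
        new = []
        for i in range(3):
            inv = (1.0/alpha_g - bp[i]/n*(1.0/alpha_g - 1.0/alpha_g0)
                   - 2336.0/(231.0*n)*bp[i]/(2.0*math.pi)
                     *math.log(alpha_g/alpha_g0))
            for j in range(3):
                v = ((bij[i][j] + 2.0*n*lam[j]*(i == j))
                     /(4.0*math.pi*bp[j])
                     + bp[i]/(2.0*math.pi)*extra[j]/bp[j])
                inv += v*math.log(alpha_g/a[j])
            new.append(1.0/inv)
        if max(abs(x - y) for x, y in zip(new, a)) < 1e-12:
            break
        a = new
    return a

print(alphas_at_mug(0.2, 0.04, n=30))
\end{verbatim}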
We see that the effects of a large number of states are very small and the
results for $M_{g}/M_{g}^{o}$ and $\alpha _{3}(M_{z})$ are in general
insensitive to the variation of (large) $n$ and stay close to the MSSM
values.
\section{The case of strong coupling.}
Our analysis to date has assumed that the couplings (even at large $n$)
remain in the perturbative domain between the electroweak scale and the
unification scale. However, non-perturbative physics could be important
{\it below the unification scale} and in this section we consider this
possibility. In this case one apparently loses all predictive power,
because the full beta functions are known only perturbatively and the
perturbation series does not work beyond the scale of non-perturbative
physics. In \cite{sun} it was
argued that this is not the case because the {\it ratios} of the gauge
couplings are driven towards stable infra-red fixed points. Since the
couplings are initially large, the rate of flow to the infrared fixed point
should be rapid. As a result, the low-energy values of these ratios are
insensitive to their initial values and to the non-perturbative effects
(provided the couplings initially lie in the domain of attraction of the
fixed points), giving a reliable prediction for $\alpha _{3}(M_{z})$ even in
this case. Using the fixed points (FP) of the one-loop RGE to determine the
boundary conditions of the gauge couplings at the decoupling scale of the
additional massive states, it was found that a low value of $\alpha
_{3}(M_{z})$ is predicted, very close to the mean experimental value and in
much better agreement than the perturbative MSSM result.
Here we wish to consider corrections to the above ``fixed point'' result in
order to estimate the precision of the result and to establish whether the
apparent improvement over the perturbative case is significant. To do this
we shall continue to use the perturbative solution, but only in the domain
where it is applicable. As we shall see, in this region one may observe the
RG flow towards the fixed points, but there are calculable corrections. The first
point to make is that even using the improved perturbation sum discussed in
the last section one requires\footnote{We will however ask that
$n\alpha \leq {\cal O}(4\pi)$ instead, because we will not
use the resummed perturbation series, and this relation marks the limit
where it becomes important.}
$n\alpha \leq {\cal O}(6\pi )$. We can only calculate effects below the
non-perturbative domain, i.e. below the scale $M_{o}$ at which the gauge
couplings enter the perturbative domain; as we have just remarked, this
domain starts at quite small values of $\alpha $. Thus our error estimates
will necessarily be somewhat rough. There are two corrections to
the ``fixed point'' calculation, deriving from the relations among the gauge
couplings at the decoupling scale\footnote{%
In two-loop running of the gauge couplings the effective decoupling scale is
$\mu _{g}$, the bare mass of the heavy spectrum.} $\mu _{g}$, and from the
uncertainty in their exact values as they leave the non-perturbative region
to enter the perturbative domain. Here we analyse them briefly.
The first correction arises because, even if one sticks to the one-loop beta
functions, due to the finite energy range involved the ratios of couplings
are not driven quite to the fixed point. Therefore the boundary condition we
used for the evolution of the couplings below $\mu _{g}$ is {\it not} the
``fixed-point'' ratio \cite{sun}
\begin{equation}
R_{ji}(\mu _{g})\equiv \frac{\alpha _{j}(\mu _{g})}{\alpha _{i}(\mu _{g})}%
=R_{ji}^{\ast } \label{fixedpoint}
\end{equation}
with $R_{ji}^{\ast }=b_{i}^{\prime }/b_{j}^{\prime }$. Instead, the boundary
condition for the running of the gauge couplings below the decoupling scale%
\footnote{%
Below this scale an MSSM-like spectrum applies.} is a ``quasi-fixed-point''
(QFP) relation for the ratio of the gauge couplings. This relation exactly
takes into account a one-loop running for the gauge couplings above $\mu
_{g} $ scale (just as in FP case) {\it and} the {\it finite} range of
energy. This QFP relation is
\begin{equation}
R_{ji}(\mu _{g})\equiv \frac{\alpha _{j}(\mu _{g})}{\alpha _{i}(\mu _{g})}=%
\frac{R_{ji}^{\ast }}{1-\left[ 1-\frac{R_{ji}^{\ast }}{R_{ji}(M_{o})}\right]
\frac{\alpha _{i}(\mu _{g})}{\alpha _{i}(M_{o})}} \label{qfp}
\end{equation}
This can be easily deduced by integrating the one-loop differential
equations for the gauge couplings above $\mu _{g}$; an explicit derivation is
given below. Using this QFP boundary condition for
the gauge couplings evolution below $\mu _{g}$ scale, one gets a value for $%
\alpha _{3}(M_{z})$ which is typically
smaller than that of FP case. The results
depend on the value of $R_{ij}(M_{o})$, which enters (\ref{qfp}), but this
dependence is relatively weak and the prediction for $\alpha _{3}(M_{z})$
stays below MSSM value. This is somewhat expected, as we know that two-loop
corrections in MSSM increase $\alpha _{3}(M_{z})$ from its rather good one
loop prediction\footnote{%
In MSSM a one-loop calculation gives $\alpha _{3}(M_{z})\approx 0.117$ while
two-loop calculation increases it to $\approx 0.126$}, taking it away from
the current experimental upper limit. As eq. (\ref{qfp}) ignores two-loop
evolution effects between $\mu _{g}$ and $M_{o}$, we get a smaller
$\alpha _{3}(M_{z})$ than in MSSM.
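For completeness, here is the short derivation of eq. (\ref{qfp}) referred to
above (a sketch, assuming pure one-loop running with coefficients
$b_{i}^{\prime }$ between $\mu _{g}$ and $M_{o}$; the overall sign convention
for the $b_{i}^{\prime }$ drops out of the ratios). Integrating the one-loop
equations gives
\begin{equation}
\alpha _{i}^{-1}(\mu _{g})=\alpha _{i}^{-1}(M_{o})+\frac{b_{i}^{\prime }}{%
2\pi }\ln \frac{M_{o}}{\mu _{g}}
\end{equation}
and eliminating the common logarithm between the $i$ and $j$ equations yields
\begin{equation}
\alpha _{j}^{-1}(\mu _{g})-\alpha _{j}^{-1}(M_{o})=\frac{1}{R_{ji}^{\ast }}%
\left[ \alpha _{i}^{-1}(\mu _{g})-\alpha _{i}^{-1}(M_{o})\right]
\end{equation}
with $R_{ji}^{\ast }=b_{i}^{\prime }/b_{j}^{\prime }$. Multiplying by
$\alpha _{i}(\mu _{g})$ and using
$R_{ji}(M_{o})=\alpha _{j}(M_{o})/\alpha _{i}(M_{o})$ one finds
\begin{equation}
\frac{1}{R_{ji}(\mu _{g})}=\frac{1}{R_{ji}^{\ast }}\left\{ 1-\left[ 1-\frac{%
R_{ji}^{\ast }}{R_{ji}(M_{o})}\right] \frac{\alpha _{i}(\mu _{g})}{\alpha
_{i}(M_{o})}\right\}
\end{equation}
which is eq. (\ref{qfp}).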
This brings us to the second correction which consists of two-loop or higher
order (gauge and Yukawa) effects between $\mu _{g}$ and $M_{o}$. This
affects the relation (\ref{qfp}) and therefore the low energy predictions
for the strong coupling. We will consider
here only the gauge effects. Their effect
is to change the expression of $R_{ji}(\mu _{g})$ such as to increase the
value of the strong coupling at electroweak scale.
The increase overcomes the QFP
decrease in $\alpha _{3}(M_{z})$, bringing it closer to MSSM prediction.
This conclusion is subject to the corrections due to the unknown values of
the gauge couplings at $M_{o}$ (or equivalently $Z(M_{o})$), where they
enter the perturbative domain as we lower the scale, but there is a
systematic effect increasing the value of $\alpha _{3}(M_{z})$ even for
sizeable changes in the boundary conditions.
Our analysis is limited because the range between $\mu _{g}$ and $M_{o}$ is
quite small and the {\it calculable} corrections to the fixed point result
correspondingly small. However this does not mean the full non-perturbative
correction to the fixed point result is small. An estimate of these
corrections, coming from the higher-order and quasi-fixed-point terms
discussed above, may be obtained by noting that the fixed point has a
focussing effect on the perturbative flow, which reduces
the contribution of such effects by the factor $\frac{\alpha _{i}(\mu _{g})}{%
\alpha _{i}(M_{o})}$ (cf eq(\ref{qfp})). Using this factor with $\alpha
_{i}(M_{o})$ limited by the perturbative condition $n\alpha \leq {\cal O}%
(4\pi )$ together with the assumption of $O(1)$ deviations from the QFP
boundary condition we arrive at a reasonably conservative estimate for these
errors of ${\cal O}(\pm 0.01)$ for $n=12$, increasing for larger
values of $n$.
\section{Summary and Conclusions}
For the case of perturbative evolution of the gauge and Yukawa couplings up
to the unification scale, we derived an analytical method to determine the
two-loop unification predictions for the value of unification scale, the
intermediate mass scale and the value of the
strong coupling at $M_{z}$ in models
with the MSSM spectrum augmented by additional massive representations
filling out complete $SU(5)$ representations. The effects of the two-loop
terms involving Yukawa couplings are in general relatively small and
model-dependent. For the models we considered, $\alpha _{3}(M_{z})$ cannot
be lowered below the MSSM value, even with Yukawa effects present, keeping $%
\alpha _{3}(M_{z})$ above the mean experimental value. However the sign of
the effect is not universal, so it is possible a richer structure in Yukawa
sector could reduce the strong coupling.
We also showed that the unification scale
is not changed significantly by the top, bottom and tau Yukawa coupling
effects. Using the large-$n$ resummed perturbation series we showed that the
results are stable in the limit of inclusion of a large number of complete $%
SU(5)$ representations and stay close to MSSM predictions. Finally we
considered the case that unification occurs at strong coupling with
perturbative analysis breaking down {\it below} the unification scale.
Because of the fixed point structure in the RGEs (at one-loop level) one may
still make a prediction for $\alpha _{3}(M_{z})$. However threshold and two
loop effects above the decoupling scale for the heavy states can be
sizeable. As a result the low estimate for $\alpha _{3}(M_{z})$ obtained
using fixed point boundary conditions at the decoupling scale is increased
for large $n$ towards the MSSM value. Overall we estimate an irreducible
error in the determination of $\alpha _{3}(M_{z})$ of $O(0.01)$
coming from the residual sensitivity to the initial values of the ratios of
the couplings as they enter the perturbative domain and from two-loop
corrections to $R_{ji}(\mu _{g})$ above $\mu _{g}$.
\section{Acknowledgments}
D.G. gratefully acknowledges the financial support from the part of
University of Oxford and Oriel College (University of Oxford). G. A.-C.
acknowledges the financial support for his work on this project from
P.P.A.R.C., the Foundation Blanceflor Boncompagni-Ludovisi and the Swiss
National Science Foundation.
\section{Appendix}
We have the following expression for $T(R_\sigma)$ in the fundamental
representation:
\begin{equation}
\delta^{ab}T(R)=Tr(T^aT^b)=\frac{1}{2}\delta^{ab}
\end{equation}
For a given flavour, $T(R_\sigma)=1/2$ ($%
T(R_\sigma)=1$ for conjugate pairs of fields).\newline
\noindent For the adjoint representation we have
\begin{equation}
\left(T^a\right)_{bc}=-if^{abc}
\end{equation}
and therefore, for $SU(N)$ group
\begin{equation}
\delta^{ab}T(G)=f^{acd}f^{bcd}=N\delta^{ab}
\end{equation}
where the structure constants $f^{abc}$ are given by
\begin{equation}
\left[T^a,T^b\right]=if^{abc} T^c
\end{equation}
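These normalizations are easy to verify numerically. A minimal sketch (in
Python, taking the $SU(2)$ fundamental $T^{a}=\sigma ^{a}/2$ as an example;
the same lines work for any explicit set of generators) reproduces
$T(R)=1/2$ and $T(G)=N$:
\begin{verbatim}
import numpy as np

# SU(2) fundamental generators T^a = sigma^a / 2
sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]])
T = sig / 2

# T(R): Tr(T^a T^b) = (1/2) delta^{ab}
print(np.einsum('aij,bji->ab', T, T).real)   # 0.5 * identity

# structure constants from [T^a,T^b] = i f^{abc} T^c
comm = (np.einsum('aij,bjk->abik', T, T)
        - np.einsum('bij,ajk->abik', T, T))
f = 2 * np.einsum('abik,cki->abc', comm, T).imag

# T(G): f^{acd} f^{bcd} = N delta^{ab}
print(np.einsum('acd,bcd->ab', f, f))        # 2 * identity (N=2)
\end{verbatim}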
The values of ${\cal A}$ used in the text (\ref{matrixA}), in the context of
an MSSM-like superpotential with only the top, bottom and tau Yukawa
couplings, are given by (see for example \cite{bjorkman_jones})
\begin{equation}
{\cal A}_{\nu}(F)=\left(
\begin{array}{crcrcrcrcrcrcrcr}
& l_{L_3} & q_{L_3} & e_{R_3} & u_{R_3} & d_{R_3} & H_1 & H_2 \\
\\
\nu=\tau:\; & -1 & 0 & -2 & 0 & 0 & -1 & 0 \\
\nu=b: \; & 0 & -1 & 0 & 0 & -2 & -3 & 0 \\
\nu=t: \; & 0 & -1 & 0 & -2 & 0 & 0 & -3
\end{array}
\right)
\end{equation}
The running of Yukawa couplings in one-loop, for a MSSM like superpotential
can be rewritten, using \cite{bjorkman_jones}, as follows (we ignore the
first two generations' contribution)
\begin{equation}
\left(
\begin{array}{c}
y_\tau(t) \\
\\
y_b(t) \\
\\
y_t(t)
\end{array}
\right) =\left(
\begin{array}{crcrcr}
\frac{35}{122} & \frac{-9}{61} & \frac{3}{122} \\
\\
\frac{-3}{61} & \frac{12}{61} & \frac{-2}{61} \\
\\
\frac{1}{122} & \frac{-2}{61} & \frac{21}{122}
\end{array}
\right) \left(
\begin{array}{c}
\frac{d \ln y_\tau}{dt} \\
\\
\frac{d \ln y_b}{dt} \\
\\
\frac{d \ln y_t}{dt}
\end{array}
\right) +\left(
\begin{array}{crcrcr}
\frac{143}{305} & \frac{30}{61} & \frac{-40}{61} \\
\\
\frac{-23}{915} & \frac{21}{61} & \frac{160}{183} \\
\\
\frac{136}{915} & \frac{27}{61} & \frac{136}{183} \\
\end{array}
\right) \left(
\begin{array}{c}
\alpha_1(t) \\
\\
\alpha_2(t) \\
\\
\alpha_3(t)
\end{array}
\right)+{\cal O}(\alpha^2)
\end{equation}
For the family-symmetric superpotential used in the text (\ref{fam_sym_pot}),
\begin{equation}
W=\sum_{i,j=1}^{3}(h_{ij}^{u}Q_{i}U_{j}H_{2}^{ij}
+h_{ij}^{d}Q_{i}D_{j}H_{1}^{ij}+h_{ij}^{l}L_{i}e_{j}H_{1}^{ij})
\end{equation}
we have that
\begin{equation}
{\cal B}_{\nu}(F)=\left(
\begin{array}{crcrcrcrcrcrcrcr}
& l_{L_j} & q_{L_j} & e_{R_j} & u_{R_j} & d_{R_j} & H_1 & H_2 \\
\nu=\tau:\; & -3 & 0 & -6 & 0 & 0 & -1 & 0 \\
\nu=b: \; & 0 & -3 & 0 & 0 & -6 & -3 & 0 \\
\nu=t: \; & 0 & -3 & 0 & -6 & 0 & 0 & -3
\end{array}
\right)
\end{equation}
For the above class of superpotential, the Yukawa couplings running in one
loop can be written as follows
\begin{equation}
\left(
\begin{array}{c}
y_\tau(t) \\
\\
y_b(t) \\
\\
y_t(t)
\end{array}
\right) =\left(
\begin{array}{crcrcr}
\frac{15}{146} & \frac{-2}{73} & \frac{1}{146} \\
\\
\frac{-2}{219} & \frac{20}{219} & \frac{-5}{219} \\
\\
\frac{1}{438} & \frac{-5}{219} & \frac{13}{146} \\
\end{array}
\right) \left(
\begin{array}{c}
\frac{d \ln y_\tau}{dt} \\
\\
\frac{d \ln y_b}{dt} \\
\\
\frac{d \ln y_t}{dt}
\end{array}
\right) +\left(
\begin{array}{crcrcr}
\frac{13}{73} & \frac{18}{73} & \frac{-8}{73} \\
\\
\frac{7}{1095} & \frac{13}{73} & \frac{80}{219} \\
\\
\frac{232}{3285} & \frac{15}{73} & \frac{232}{657} \\
\end{array}
\right) \left(
\begin{array}{c}
\alpha_1(t) \\
\\
\alpha_2(t) \\
\\
\alpha_3(t)
\end{array}
\right)+{\cal O}(\alpha^2)
\end{equation}
\section{Introduction}
High energy p+p, p+A and A+A collisions at RHIC and LHC address
many interesting questions.
Some fundamental ones concern the relationship between quantum
field theory and hydrodynamics, thermalization,
and between quantum entanglement, decoherence,
and thermodynamic entropy production.
According to our present understanding, within
a short time of order $1-2$ fm/$c$ or less, collision systems reach a
state which can be approximated by a thermal medium characterized
by local thermodynamic properties.
The microscopic processes leading to this state
are still somewhat controversial, the main problem being that
no controlled calculational technique in QCD is
applicable to the strongly time dependent, rapidly evolving, far off
equilibrium, and strongly coupled medium produced by the collisions.
Although there exist conflicting observations, there
seems to be wide agreement that two complementary approaches
have been quite successful in modeling and analyzing the pre-hydro
($< 0.1-0.2$ fm/$c$) and later ($> 1-2$ fm/$c$) phases of such collisions,
namely AdS/CFT (or gauge/gravity) duality and relativistic viscous
hydrodynamics.
As was shown in Ref.~\cite{vanderSchee:2013pia} for smooth shocks,
i.e.\ neglecting initial state fluctuations,
the interpolation between these two regimes is quite smooth.
This observation suggests that potential
theoretical concerns regarding the use of
AdS/CFT duality to model the strongly coupled dynamics
of QCD plasma prior to the onset of a hydrodynamic regime
are not so severe as to foreclose the phenomenological utility
of this approach.
AdS/CFT duality, in its simplest form, allows one to solve
the dynamics of maximal supersymmetric $SU(N)$ Yang-Mills theory
($\mathcal{N}\,{=}\,4$ SYM) in the limit of large $N$ and large 't Hooft coupling.
Such a system, of course, is not QCD.
Therefore, much energy has been invested in recent years
characterizing the differences in the dynamics
between a real quark-gluon plasma (at accessible temperatures)
and the non-Abelian plasma of $\mathcal{N}\,{=}\,4$ SYM.
In Ref.~\cite{Waeber:2018bea} we extended earlier literature
on finite 't Hooft coupling corrections
\cite{Buchel:2008ac,Buchel:2008sh,Stricker:2013lma}
by evaluating corrections to the lowest
electromagnetic quasinormal mode (QNM) frequencies.
The inverse of the
imaginary part of the lowest QNM frequency gives a characteristic
thermalization time and is, therefore, especially relevant for the
dynamics of HICs.
In a different work \cite{Endrodi:2018ikq},
we compared the response of QCD and $\mathcal{N}\,{=}\,4$ SYM plasmas to
a background magnetic field, and with an appropriately
calibrated comparison found remarkably little difference between
the behavior of QCD and that of conformal $\mathcal{N}\,{=}\,4$ SYM over
a wide range of temperature and magnetic field.
Lattice gauge theory studies \cite{Panero:2009tv,Bali:2013kia}
have shown that meson masses and some meson
coupling constants scale trivially with the number of colors
all the way from $N=3$ to $N=\infty$.
In these studies,
modeling QCD plasma at experimentally relevant temperatures
using large-$N$, strongly coupled $\mathcal{N}\,{=}\,4$ SYM theory
works much better than might have been expected {\em a priori}.
Based on these results, we surmise that strongly coupled
$\mathcal{N}\,{=}\,4$ SYM, to which AdS/CFT duality is most easily applied,
describes the early phase of HICs not only
qualitatively but also semi-quantitatively at a useful level of accuracy.%
\footnote
{%
To be clear, this assumption applies to observables sensitive to
typical thermal momentum scales in the plasma and not, for example,
to measurements of high transverse momentum particle or jet production.
}
This evidence-based hypothesis forms the basis for the considerations
in our present work.
It is remarkable how well viscous hydrodynamics describes HICs.
Various hydrodynamics codes achieve a close to perfect agreement with an
enormous amount of experimental data in spite of uncomfortably large
spatial gradient terms. Considerations that help explain this
unexpected success include the distinction between
``hydrodynamization'' and genuine ``thermalization,''
and the fact that hydrodynamics has attractor properties which set in
long before true local equilibration is reached
\cite{Berges:2013fga,Romatschke:2017vte,Romatschke:2017acs}.
A rather disquieting
consequence of this line of argument is that while overwhelming
experimental evidence supports hydrodynamic behavior and suggests
early hydrodynamization, there is little experimental evidence for early
genuine thermalization. The latter is, however, required to fulfill the
core premise of high energy heavy ion physics, namely that
nuclear collisions allow us to investigate the equilibrated
quark-gluon plasma that filled the early universe.
We try in this contribution to add one piece of information to
this complicated problem.
\begin{figure}
\includegraphics[width=8.6cm]{HIC1.pdf}
\caption
{%
Sketch of a peripheral HIC.}
\label{fig:collision}
\end{figure}
High energy heavy ion experiments have generated many surprising
experimental observations which call for microscopic explanations.
One is the degree of similarity between high multiplicity p+p collisions
and A+A collisions.
Another surprise is the extent to which
the usual cartoons illustrating HICs,
such as Fig.~\ref{fig:collision}, involving smooth energy densities
for the colliding nuclei are quite misleading.
Instead, modern hydrodynamics codes typically start from initial conditions
with extremely large fluctuations of energy and entropy density.
(See figure~1 in Ref.~\cite{Bernhard:2016tnd} for a typical example.)
These initial conditions are required to
explain the large observed odd azimuthal flow moments,
see e.g., Ref.~\cite{Acharya:2018zuq}.
If odd moments solely arose from statistical
fluctuations of the hydrodynamic fluid itself,
then for symmetric collisions such as Pb+Pb,
the odd flow coefficients $v_3$, $v_5$, etc., should be very
significantly suppressed compared to the even coefficients
$v_2$, $v_4$, etc., which is not the case.
Here, as usual, the flow moments $\{ v_{\rm n} \}$ are defined by
the azimuthal dependence of the produced particle distribution,
\begin{equation}
\label{eq:flowdud}
E \, \frac{\mathrm{d}^{3}N}{\mathrm{d}p^3}
=
\frac{1}{2\pi} \,
\frac{\mathrm{d}^{2}N}{p_{\rm T}\, \mathrm{d} \ensuremath{p_{\mathrm{T}}}{}\, \mathrm{d} y}
\Bigl(1+2\sum_{\rm n=1}^{\infty}v_{\rm n} \cos[{\rm n}(\varphi - \Psi_{\rm n})]\Bigr),
\end{equation}
with $E$ the energy, $p$ momentum, \ensuremath{p_{\mathrm{T}}}{} transverse
momentum, $\varphi$ the azimuthal angle, $y$ the pseudorapidity
of the particle, and $\Psi_{\rm n}$ the $n$-th harmonic symmetry
plane angle.
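For orientation, the $v_{\rm n}$ and $\Psi_{\rm n}$ of a single event can be
estimated from the measured azimuthal angles with a simple $Q$-vector; the
following sketch (Python) is purely illustrative and ignores the
finite-multiplicity and non-flow corrections applied in real analyses:
\begin{verbatim}
import numpy as np

def flow_coefficients(phi, n_max=6):
    # Q_n = < exp(i n phi) >; v_n = |Q_n|, Psi_n = arg(Q_n)/n
    ns = np.arange(1, n_max + 1)
    Q = np.exp(1j * np.outer(ns, phi)).mean(axis=1)
    return np.abs(Q), np.angle(Q) / ns
\end{verbatim}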
Hence typical AdS shock wave calculations involving smooth
initial energy densities are too idealized.
Even symmetric Pb+Pb collisions should be characterized by
initial energy densities which are asymmetric owing to the
presence of independent and substantial transverse variations.
This was one of the motivations for our study \cite{Waeber:2019nqd}
of idealized but highly asymmetric planar shock collisions.
A further phenomenologically relevant aspect is that
AdS/CFT models of collisions, in the leading infinite coupling limit,
tend to predict surprisingly short hydrodynamization and equilibration times\footnote{The model of Ref.~\cite{Balasubramanian:2010ce} cited here does not sharply differentiate between hydrodynamization and equilibration times. Here and henceforth we also use the term ``equilibration'' synonymously with ``thermalization''.},
0.3 fm/$c$ or less \cite{Balasubramanian:2010ce}.
While there is no consensus among perturbative, i.e.\ weak coupling, estimates of the thermalization time scale $\tau_{\rm therm}$, all of them suggest it to be substantially longer.
For example, an early result \cite{Baier:2000sb} gave
$\tau_{\rm therm}>\alpha^{-13/5}Q_s^{-1}>4$~fm/$c$,
and a more recent result based on a different analysis \cite{Berges:2014yta}
is $\tau_{\rm therm}\sim 2$~fm/$c$.
In \cite{Romatschke:2016hle} it is argued from the perspective of hydrodynamics
that thermalization might possibly never be reached in a realistic heavy-ion collision.
In our earlier work \cite{Waeber:2018bea} using a specific resummation {\em ansatz},
we found that for realistic finite values of the 't Hooft coupling
of QCD, the imaginary part of the lowest QNM is roughly halved,
corresponding to a doubling of the predicted equilibration time.
In the present contribution we will argue that for fluctuations
large enough to generate the observed $v_3$ values,
the AdS predictions for the hydrodynamization times of individual transverse ``pixels'' are
significantly lengthened,
but stay much smaller than the $\tau_{\rm therm}$ found in perturbative calculations. However, we will
also argue that complete thermalization could take much longer as it
requires equilibration between these ``pixels''.
Over the years many different hydrodynamics codes have been developed,
improved and fine-tuned to describe the experimental data. Their relative
advantages and disadvantages are the topic of specialized workshops.
We do not want to enter this discussion here. Rather, we will focus on
just one relatively recent study \cite{Bernhard:2016tnd} which is especially
systematic with respect to the properties we are interested in.
We leave it to the authors of other studies to decide whether our conclusions
are also relevant for their work.
In the following section we briefly review those results of
Ref.~\cite{Bernhard:2016tnd} which are important for us
and then discuss how these compare with the results of our
AdS/CFT study \cite{Waeber:2019nqd}.
The present contribution was separated from
Ref.~\cite{Waeber:2019nqd} because, unlike the
well defined results of Ref.~\cite{Waeber:2019nqd} which
should stand the test of time,
the following discussion of phenomenological consequences
depends crucially on the comparison of results from hydrodynamics
codes to experimental data and is subject to
far more uncertain interpretation.
In particular, it is not yet feasible to perform numerical
gravity calculations with initial conditions which fully
mimic the strong transverse fluctuations of energy and entropy densities
that appear to be present in real collisions.
In Section 3 we discuss implications for peripheral collisions
and for p+A collisions. Section 4 is devoted to another aspect,
namely the time dependence of the apparent horizon in asymmetric
collisions and a comparison to what is known about the time dependence
of entropy for classical and quantum gauge theories.
A final section holds a few concluding remarks.
\section{The role of fluctuations in heavy ion collisions}
The arguments in favor of strong fluctuations in the initial state
of heavy ion collisions (HICs) are manifold, both theoretical and experimental.
In, e.g., Ref.~\cite{Muller:2011bb} (see also~\cite{Lappi:2017skr}) the initial fluctuations
in transverse energy density were calculated in the Color Glass
Condensate (CGC) model.
It was argued that for typical pixels these can be larger than 50\%.
On the experimental side we have
already noted the surprisingly large values $v_3$ observed,
e.g., in Ref.~\cite{Acharya:2018zuq}.
Different models vary in their assumptions, including
those which concern fluctuations.
We will follow Ref.~\cite{Bernhard:2016tnd}
whose Fig.~1 shows several typical examples of initial state fluctuations.
The basic assumption of that paper, which we also adopt,
is that one may think of the initial state of a HIC as arising from a
sum over many isolated collisions of often vastly asymmetric ``pixels''.
Because these asymmetries are so large, holographic calculations for
smooth symmetric shock wave collisions are insufficient.
Extending the AdS treatment to include realistic fluctuations is somewhat subtle because
it relates to basic questions of what is exactly meant by ``decoherence'' and ``thermalization.''
While the fundamental T-invariance of QCD seems to imply the absence of any decoherence, this
is no longer true if specific probes of only limited spatial extent are considered. All standard
observables for high energy heavy-ion collisions do exactly that, probing only transverse scales
which are much smaller than the nuclear radii, be it $1/Q_s$ or, e.g., individual hadron radii.
Therefore, real life heavy ion experiments always imply coarse graining,
see again Fig.~1 of
\cite{Bernhard:2016tnd}, which circumvents this T-invariance argument.
Basically, all experimental observables are insensitive to quantum correlations beyond
the scales mentioned above.
The description of detailed properties of collisions of highly
non-uniform nuclei by viscous relativistic hydrodynamics has been
the topic of many careful and interesting investigations, far
too many to review in this short note. Let us only mention
Ref.~\cite{Welsh:2016siu}, where it was highlighted that the inclusion
of realistic fluctuations is even more important if one studies
collision systems like p+A. The fact that we limit our discussion
here to Ref.~\cite{Bernhard:2016tnd} should not be interpreted as any
form of judgment on the relative value of the various models but
just as reflection of our inability to do justice to all of them.
One of the standard procedures, also adopted here, is to describe
all collisions by means of a ``nuclear thickness function'' $T$
which is usually assumed to be a superposition of Gaussians of a
certain width $w$ in the transverse plane.
In Ref.~\cite{Bernhard:2016tnd}
the randomly generated participant thickness function $\widetilde T(x,y)$ is
constructed as follows:
\begin{subequations}
\begin{eqnarray}
\widetilde T(x,y) &=& \sum_{i=1}^{N_{\rm part}}~~\gamma_i~T_p(x{-}x_i,y{-}y_i)
\\
T_p(x,y)&=& \frac{1}{2\pi w^2} \, \exp\left( -\frac{x^2+y^2}{2w^2} \right)
\end{eqnarray}
\label{eq:Ttilde}%
\end{subequations}
where the coefficients $\gamma_i$ are chosen according to the Gamma
probability distribution,
\begin{equation}
P(\gamma)
= \frac{k^k}{\Gamma(k)} \, \gamma^{k-1}e^{-k\gamma} \,,
\label{eq:2.2}
\end{equation}
with the parameter $k=1.4$ and the mean value of $\gamma$ set to unity.
The coefficients $\{\gamma_i\}$ simulate
the statistical fluctuations of the initial state,
while $\{ x_i, y_i \}$ are the random participant locations
in the transverse plane. (See Ref.~\cite{Bernhard:2016tnd} for details.)
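A minimal numerical sketch of this construction (Python; grid, width and
participant positions below are illustrative placeholders, not the tuned
values of Ref.~\cite{Bernhard:2016tnd}) reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def participant_thickness(xs, ys, centers, w=0.5, k=1.4):
    # Gamma-fluctuated Gaussians as in the equations above:
    # gamma_i ~ Gamma(shape k, scale 1/k), i.e. unit mean
    gam = rng.gamma(k, 1.0/k, size=len(centers))
    X, Y = np.meshgrid(xs, ys, indexing='ij')
    T = np.zeros_like(X)
    for (x0, y0), g in zip(centers, gam):
        T += g * np.exp(-((X-x0)**2 + (Y-y0)**2)/(2*w**2))
    return T / (2*np.pi*w**2)

# e.g. 20 participants scattered over a 10 fm x 10 fm window
grid = np.linspace(-5, 5, 200)
Ttilde = participant_thickness(grid, grid,
                               rng.normal(0, 2, (20, 2)))
\end{verbatim}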
When we refer to ``pixels'' we mean independent transverse areas
with a radius of order $1/Q_s$, where $Q_s$ is the saturation scale
of order 1--2 GeV in the initial state and not those areas with
which hydrodynamics is initialized (with mean radius $w$), which
we call ``patches'' for distinction.
The typical radius of a ``pixel''
is 0.1--0.2 fm, while that of a hydrodynamical ``patch'' is 0.4--1.2 fm,
see table 1 in Ref.~\cite{Moreland:2018gsh}.
However, in line
with our comment above regarding the attractor properties of hydrodynamics,
the initialization time for hydrodynamics can be chosen more or
less at will in the range $0.1-1.5$ fm/$c$
(see again Table 1 in Ref.~\cite{Moreland:2018gsh}).
In Ref.~\cite{Bernhard:2016tnd} the hydrodynamics code
was actually initialized at $t \,{=}\, 0$~fm/$c$ which resulted in the fit
value $w\,{=}\, 0.5$~fm. This value for the patch size $w$ probably has little
physical meaning, as hydrodynamics is definitely not applicable at
$t \,{=}\, 0$~fm/$c$, but this is irrelevant for our discussion since
the distribution $P(\gamma)$ is independent
of $w$ and so are the resulting fluctuations.
We will argue below that the relevant time scales for hydrodynamization and thermalization
depend only on the distribution function $P(\gamma)$.
The physical idea behind the thickness function is that due to
length contraction and time dilation partons in the colliding nuclei are coherent in
the longitudinal direction but incoherent in transverse distance beyond
a characteristic length scale which is typically chosen as the inverse
saturation scale $1/Q_s\sim 0.2$~fm.
What is phenomenologically
important is the assumption made about how the local initial entropy
density $s$ (or related energy density) depends on the thickness functions of
two colliding transverse pixels.
Very little is known about this and models differ widely.
The authors of Ref.~\cite{Bernhard:2016tnd}, therefore, use the
flexible parameterisation
\begin{equation}
s(x,y) ~\sim~ \left( \frac{\widetilde T^p_A +\widetilde T^p_B}{2}\right)^{1/p}
\end{equation}
which covers a wide range of possibilities.
\begin{equation}
s~\sim~ \left\{
\begin{array}{lll}
\max(\widetilde T_A ,\widetilde T_B) \,, & p\rightarrow +\infty;&\\[2pt]
(\widetilde T_A +\widetilde T_B)/2 \,, & p=+1;& ({\rm arithmetic})\\[2pt]
(\widetilde T_A \, \widetilde T_B)^{1/2} \,, & p=0;& ({\rm geometric})\\[2pt]
2(\widetilde T^{-1}_A +\widetilde T^{-1}_B)^{-1} \,,
~~~~~& p=-1;~~~~~& ({\rm harmonic})\\[2pt]
\min(\widetilde T_A ,\widetilde T_B) \,, & p\rightarrow -\infty,&
\end{array}
\right.
\end{equation}
and simply vary the value of $p$ to find the one for which they obtain the best
overall agreement with the data.
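For concreteness, the whole family of means can be coded in a few lines (a
sketch; positive, array-valued thickness functions are combined elementwise):
\begin{verbatim}
import numpy as np

def gen_mean(TA, TB, p):
    # generalized mean; p -> 0 gives the geometric mean,
    # p -> +/- infinity approaches max / min
    if p == 0:
        return np.sqrt(TA * TB)
    return (0.5 * (TA**p + TB**p))**(1.0/p)
\end{verbatim}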
This phenomenological analysis clearly favors the geometric mean ($p=0$),
see Fig.~9 of Ref.~\cite{Bernhard:2016tnd}, so that
\begin{equation}
s(x,y) \sim \left[ \widetilde T_A(x,y) \,\widetilde T_B(x,y)\right]^{1/2}
\,.
\label{eq:s(x,y)}
\end{equation}
This dependence of the initial entropy density on the geometric mean
of the thickness functions $\widetilde T_A, \widetilde T_B$
is especially noteworthy in light of analogous results from
studies of holographic collisions \cite{Waeber:2019nqd}.
This study found that in asymmetric collisions
the rapidity dependence of the proper energy density
of the produced plasma, near the onset of the hydrodynamic regime,
is well described by the geometric mean of the produced proper energy densities
in the corresponding symmetric collisions.
Moreover, the hydrodynamization time hypersurface
was found to be an almost perfect proper time hypersurface
(i.e., the boundary of the hydrodynamic regime is essentially a hyperbola)
whose value depends exclusively on the energy scale set by the
geometric mean of the longitudinally integrated energy densities
of the colliding pixels \cite{Waeber:2019nqd,Chesler:2015fpa},%
\footnote
{%
Explicitly, $\int dz \> T^{00}_A(z) \equiv \mu_A^3 \, N_{\rm c}^2/(2\pi^2)$,
etc.
}
\begin{equation}
\tau_{\rm hydro} \approx 2/\sqrt{\mu_A \, \mu_B} \,.
\label{eq:thydro}
\end{equation}
\begin{figure}
\includegraphics[width=8.6cm]{Pbar.pdf}
\caption
{%
The probability distribution $\overline P(\tau)$
for $k=1.4$, in units where $\mu = 1$.
}
\label{fig:Pbar}
\end{figure}
To connect this holographic result for hydrodynamization time to the
model (\ref{eq:s(x,y)}) of fluctuating initial conditions,
we regard the energy scale $\mu_A$ of a given pixel of projectile $A$
as equal to the average scale $\mu$ times the fluctuating amplitude
$\gamma_i$ of some participant lying within this pixel,
and likewise for projectile $B$.
One may then compute the resulting probability distribution
$\overline P(\tau_{\rm hydro})$
of the hydrodynamization time $\tau_{\rm hydro}$.
The result is
\begin{eqnarray}
\overline P(\tau)
&\equiv&
\int_0^\infty \int_0^\infty d\gamma_A \> d\gamma_B \>
P(\gamma_A) \, P(\gamma_B) \,
\delta\big(\tau - \tfrac 2{\mu \sqrt{\gamma_A \gamma_B}}\big)
\nonumber \\
&=&
\frac 4{\tau}
\left(\frac {2k}{\mu\tau} \right)^{2k}
K_0\Big(\frac{4k}{\mu\tau}\Big) \bigm/
\Gamma(k)^2 \,,
\end{eqnarray}
where $K_0$ denotes a modified Bessel function.
This distribution is plotted in Fig.~\ref{fig:Pbar}.
It is peaked at $\tau = 1.7/\mu$, a little below the value
of $2/\mu$ which, from Eq.~(\ref{eq:thydro}), would be the
holographic prediction in the absence of fluctuations.
Glancing at Fig.~\ref{fig:Pbar} and focusing on the peak
in the distribution, one might think that the
main effect of including initial state fluctuations is merely
to induce relatively modest fluctuations in the hydrodynamization
time so that it becomes, crudely,
$\tau_{\rm hydro} \approx (2\pm 1)/\mu$.
This conclusion, however, is wrong due to the slow
power-law decrease of the distribution $\overline P$ with increasing
time $\tau$, which reflects the non-analytic behavior
of the adopted form (\ref{eq:2.2}) of
$P(\gamma)$ as $\gamma \to 0$.
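Explicitly, for $\mu\tau \gg 4k$ the small-argument behavior
$K_0(x) \simeq \ln(2/x) - \gamma_E$ implies
$\overline P(\tau) \propto \tau^{-(2k+1)} \ln(\mu\tau)$,
i.e., a decay with power $2k{+}1 = 3.8$ for $k=1.4$,
slow enough to push the mean and the upper percentiles well above the peak.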
The median of the distribution for $\tau_{\rm hydro}$
is $2.76/\mu$ (for $k = 1.4$),
substantially larger than the peak value.
The mean hydrodynamization time is given by
\begin{subequations}
\begin{eqnarray}
\bar \tau_{\rm hydro}
\equiv
\int_0^\infty d\tau \> \overline P(\tau) \, \tau
&=&
\frac{2k}{\mu} \, \frac{\Gamma(k{-}\tfrac {1}{2})^2}{\Gamma(k)^2}
\\ &=&
4.06 / \mu \,,
\end{eqnarray}
\end{subequations}
with the numeric value specific to $k = 1.4$,
while the rms deviation is
\begin{subequations}
\begin{eqnarray}
\Delta \bar \tau_{\rm hydro}
&\equiv &
\left[
\int_0^\infty d\tau \> \overline P(\tau) \,
(\tau - \bar\tau_{\rm hydro})^2
\right]^{1/2}
\\ &=&
\frac{2k}{\mu}\left[ \frac{1}{(k{-}1)^{2}}
- \frac{\Gamma(k{-}\tfrac{1}{2})^{4}}{\Gamma(k)^{4}} \right]^{1/2} =
5.70 / \mu \,,
\end{eqnarray}
\end{subequations}
considerably larger than the mean value.
There is a 70\% probability that the hydrodynamization time
is larger than the non-fluctuating estimate of $2/\mu$,
a 45\% probability that $\tau_{\rm hydro}$ is larger than $3/\mu$,
and a 30\% probability that it is larger than $4/\mu$.
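These statistics are straightforward to reproduce with a small Monte Carlo
sketch (Python; sample size and seed are arbitrary, and the sample standard
deviation converges slowly because of the heavy tail of $\overline P$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
k, mu, N = 1.4, 1.0, 2_000_000
gA = rng.gamma(k, 1.0/k, N)      # unit-mean Gamma amplitudes
gB = rng.gamma(k, 1.0/k, N)
tau = 2.0/(mu*np.sqrt(gA*gB))    # tau_hydro per pixel pair

print(np.median(tau))                  # ~ 2.76 / mu
print(tau.mean(), tau.std())           # ~ 4.06 / mu, ~ 5.70 / mu
print([np.mean(tau > c/mu) for c in (2, 3, 4)])  # ~ 0.70, 0.45, 0.30
\end{verbatim}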
This simple model predicts that fluctuations in the transverse plane
lead to large non-Gaussian fluctuations in the hydrodynamization time.
For an individual collision of two pixels it doubles, on
average, the hydrodynamization time,
thereby converting the typical time scale of $0.2-0.3$ fm/$c$ predicted
by AdS/CFT modeling with smooth initial conditions
\cite{Balasubramanian:2010ce}
to the range $0.4-0.6$ fm/$c$.
Moreover, it has consequences for
thermalization of the complete nuclear system. Complete
equilibration would require equilibration between different
pixels.
However, with significant variations in energy density and hence
effective local temperature across the transverse extent of the fireball,
such variations can only relax via hydrodynamic processes whose
time scale grows linearly with the length scale of variations and
exceeds the relevant initial hydrodynamization time.
The fact that fluctuations are definitely not smoothed out at the initial
hydrodynamization time explains why large fluctuations can still persist
around 1 fm/$c$ and generate a sizable triangular flow component $v_3$.
In other words, hadron phenomenology as encoded in hydrodynamics codes like
\cite{Bernhard:2016tnd} appears to be compatible with AdS/CFT modeling
only because of large transverse fluctuations.
\section{Peripheral collisions}
\begin{figure}
\includegraphics[width=8.6cm]{HIC2.pdf}
\caption
{Sketch of a peripheral heavy ion collision.}
\label{fig:collision2}
\end{figure}
One original motivation for our investigation was the observation that,
if hydrodynamization occurred significantly more slowly in the
asymmetric fringe regions (the orange areas in Fig.~\ref{fig:collision})
in peripheral collisions, then this would cause the hydrodynamized overlap region
(the inner red region in Fig.~\ref{fig:collision}) to be slimmer and thus increase $v_2$.
It turns out, however, that this effect is rather small and can be
essentially neglected in comparison with the large fluctuation effects
just discussed.
To illustrate this, we consider in this section a very crude model
that approximates the energy and entropy densities of each
colliding nucleus as homogeneous within its Lorentz contracted
spherical volume, such that the
asymmetry for a given pixel is exclusively determined by geometry.
The nuclei move in the $\pm z$ direction and the reaction plane is the
$z{-}y$ plane.
The transverse energy densities at a given transverse
position indicated by the dashed line in Fig.~\ref{fig:collision2}
are then given by
\begin{eqnarray}
\mu_1^3
&=&
M_N A_1 \left( \tfrac{4\pi}{3} R_1^3\right)^{-1} \,
2\gamma_1 \rho_1 \,,
\nonumber \\
\mu_2^3
&=&
M_N A_2 \left( \tfrac{4\pi}{3} R_2^3 \right)^{-1} \,
2\gamma_2 \rho_2 \,,
\end{eqnarray}
where $A_{1,2}$ are the two nucleon numbers, $R_{1,2}$ the two nuclear radii,
$\gamma_{1,2}$ the Lorentz factors of each nucleus,
and $y_{1,2}\leq R_{1,2}$ the transverse distances from
the centers of the nuclei within the reaction plane,
while $x_{1}=x_{2}\leq R_{1}$ is the transverse distance orthogonal to it. We define $\rho_i \equiv \sqrt{R_i^2-y_i^2-x_1^2}$.
Finally, $y_b \equiv y_1+y_2$ is the impact parameter.
The resulting geometric mean of the scale parameter is
\begin{eqnarray}
\mu &=& \sqrt{\mu_1\mu_2}
= \Big[ \frac{9M_N^2 A_1A_2}{4\pi^2 R_1^2R_2^2}~ \gamma_1\gamma_2 ~\Big]^{{1}/{6}}\Big[\frac{\rho_1 \rho_2}{R_1 R_2}\Big]^{1/6}
\nonumber \\
&\propto& \left[ \left(1-\frac{y_1^2+x_1^2}{R_1^2}\right)
~\left(1-\frac{(y_b-y_1)^2+x_1^2}{R_2^2}\right) \right]^{{1}/{12}} \,.
\label{eq:geom}
\end{eqnarray}
As shown in Fig.~\ref{fig:Theta},
this function goes to zero as $x_1^2{+}y_1^2\rightarrow R_1^2$,
or $x_1^2{+}y_2^2\rightarrow R_2^2$, so abruptly
that the impact on $v_2$ is negligible.
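For orientation: even at $z = 0.9$, i.e., 90\% of the way out to the edge of
a nucleus, a single bracket in eq. (\ref{eq:geom}) contributes
$(1-z^2)^{1/12} = (0.19)^{1/12} \approx 0.87$, so $\mu$ is reduced by only
about 13\% and the hydrodynamization time $\tau_{\rm hydro} \approx 2/\mu$
grows by merely $\sim 15\%$.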
\begin{figure}
\includegraphics[width=8.6cm]{phaeno_waeber_2_fig1.pdf}
\caption
{%
The function $(1-z^2)^{\frac{1}{12}}$ which,
in the model (\ref{eq:geom}),
controls the dependence of the scale $\mu$ on the
distance from the nuclear boundaries.
}
\label{fig:Theta}
\end{figure}
Let us add that it follows also from the geometric mean of the
energy scales that the difference between
smooth $A+A$ and smooth $p+A$ collisions,
i.e., neglecting effects due to fluctuations, is not dramatic.
For $p+A$ the momentum scale is smaller by a factor $A^{1/6}$
compared to $A+A$ and thus the hydrodynamization time is larger
by that factor, about 2.5 for Pb, increasing $0.1-0.2$ fm/$c$ to $0.25-0.5$ fm/$c$.
Thus, also in this case, fluctuations have the strongest
impact on the hydrodynamization time,
implying that $A+A$ and $p+A$ collisions should have similar
properties because, in a holographic description,
the essential difference of incident nucleons versus nuclei
is solely encoded in their respective energy densities.
We mention in passing that in Ref.~\cite{Waeber:2019nqd} it was
found that in the forward direction of the heavier nucleus in an asymmetric
collision the matter density is larger and thus hydrodynamization happens
slightly earlier.
\section{Entropy production in classical and quantum gauge theories}
It is possible to extend the motivation for the present
contribution to a grander scale. Decoherence, entropy production and
hydrodynamization or thermalization are intensely discussed also in other
fields like quantum gravity and quantum computing. AdS/CFT duality
has the potential to connect all of these fields. It was established
in recent years that there exists an intimate connection to quantum
error correction schemes while by construction AdS/CFT combines quantum
gravity and quantum field theory.
In principle, the connection to QCD and HICs opens the very attractive
possibility for experimental tests of theoretical predictions because the
number of final state hadrons per rapidity interval $dN/dy$ is taken as
a measure of thermodynamic entropy in the final state after all interactions
have stopped. Obviously this is, however, only helpful if the differences
between CFT and QCD are minor.
As entropy production is equivalent to information loss, this discussion
centers on the question in which sense information can get lost under
unitary time evolution and how this potential information loss in the
boundary theory is related to the generation of the Bekenstein-Hawking
entropy of the formed large black branes in the (AdS$_5\times$S$^5$) bulk.
There exists a fundamental difference with respect to ergodic
properties in the relation between (AdS$_5\times$S$^5$)/CFT and
QCD for high and low temperature, which is most obvious from the
fact that (AdS$_5\times$S$^5$)/CFT is conformal and QCD below the
confinement-deconfinement transition temperature is not. However,
there are strong indications that far above this pseudocritical
temperature the classical solutions of Einstein's equation on
the AdS side produce results that match perfectly relativistic
hydrodynamics which, in turn, provides a near-perfect description of
the experimentally observed properties of the high energy phase of
HICs, see e.g. \cite{vanderSchee:2013pia}. Also, it was shown that on
the string theory side the leading quantum corrections are not large
\cite{Buchel:2008ac,Buchel:2008sh,Stricker:2013lma,Waeber:2018bea}.
The latter appears to be a general trend. In principle, classical chaos and
quantum chaos can differ fundamentally but it seems that for the high
temperature phase of QCD they do not.
Here we want to address only one aspect of this extensive
discussion to which our calculations may add some insight. For
classical ergodic theories the coarse-grained entropy grows
linearly in time with a rate given by the Kolmogorov-Sinai entropy
$h_{KS}=\sum_i \lambda_i\Theta(\lambda_i)$ defined as the sum of
all positive Lyapunov exponents $\lambda_i$:
\begin{equation}
dS_{\rm class}=h_\mathrm{KS}~dt
\label{eq:1}
\end{equation}
Because the definition of entropy is ambiguous in a
non-equilibrium setting one can ask the question: ``For which definition
of entropy in the quantum theory does one observe linear growth with
boundary time?''. We will not address the much deeper question of whether
the condition of linear growth is required or at least well motivated.
Quantum chaos can be described quantitatively in terms of exponential
growth of out-of-time-order correlators (OTOCs)\footnote{
As pointed out by the authors of \cite{Rozenbaum_2019}, the identification of the exponential growth of OTOCs with Lyapunov exponents depends on the specific choice of initial states.
The general relation between the growth rate of OTOCs and classical Lyapunov exponents is both nontrivial and so far not fully understood. }
\begin{equation}
C(t) \sim \Big\langle \left[ \hat W(t),\hat V(0) \right]^2 \Big\rangle ~\sim~\exp(2\lambda_Lt)
\label{eq:3}
\end{equation}
of suitable operators $\hat W$, $\hat V$ \cite{Larkin,Almheiri:2013hfa,Maldacena:2015waa},
where in the semi-classical limit $\lambda_L$ should be close to the largest classical Lyapunov exponent.
For many models this behavior was indeed established. For example in Ref.~\cite{Stanford:2015owe} it was found for a weakly coupled matrix $\Phi^4$ theory that
\begin{equation}
\lambda_L\approx 0.031~\lambda^{3/2}~T
\label{eq:4}
\end{equation}
with the 't~Hooft parameter $\lambda = Ng_{YM}^2$.
Kitaev \cite{kitaev} found for a setting similar to the Sachdev-Ye model \cite{sachdev} that the Maldacena-Shenker-Stanford (MSS) bound
\cite{Maldacena:2015waa} gets saturated for infinite coupling strength:
\begin{equation}
\lambda_L \to 2\pi T \qquad {\rm for}\quad \lambda \to \infty .
\label{eq:5}
\end{equation}
In \cite{Buividovich:2018scl} the BFSS matrix model was investigated
numerically and it was found, as in all other investigations known to us,
that the leading exponent for a quantum theory stays below the MSS bound,
often even substantially so.
It is indisputable that classical Yang-Mills theories show classical
chaotic behavior, i.e. after an initial phase, which depends on the chosen
initial conditions, a period of linear growth of the coarse-grained entropy sets in,
followed by saturation at the thermal equilibrium value. Numerical studies
of classical Yang-Mills theories showing this behavior can be found, e.g.,
in Ref.~\cite{Muller:1992iw,Biro:1994bi,Bolte:1999th,Iida:2013qwa}.
In \cite{Biro:1994sh} it was conjectured that the largest Lyapunov exponent
in a lattice discretized classical SU(N) Yang-Mills theory at weak
coupling is given by
\begin{equation}
\lambda_L~\approx~0.175 N g_{YM}^2 T = 0.175 \lambda T,
\end{equation}
based on numerical simulations for $N = 2,3$ \cite{Muller:1992iw,Gong:1992yu}.
In \cite{Kunihiro:2010tg} phenomenological arguments were given that for real QCD
at the 't~Hooft coupling $\lambda \approx 11.3$ relevant to the early stage of a HIC,
the largest Lyapunov exponent is of the order
\begin{equation}
\lambda_L~\approx~0.3\, T ~\ll~ 2\pi T
\label{eq:6}
\end{equation}
significantly below the MSS bound.
There exist many more publications worth mentioning in this context.
A very recent example dealing with QCD is, e.g.,
\cite{Akutagawa:2019awh}, while in Ref.~\cite{Bianchi:2017kgb} it is
argued that, at least for entanglement entropy, bosons, and a quadratic
Hamiltonian, a linear growth of entropy with the Kolmogorov-Sina\"i
entropy can be derived also for quantum systems.
Using the coarse graining approach of Husimi distributions,
the authors of Ref.~\cite{Kunihiro:2008gv} argued that
the growth rate of the coarse-grained Wehrl entropy of a quantum system
is equal to the Kolmogorov-Sina\"i entropy of its classical counterpart.
Numerical AdS$_5\times$S$^5$ calculations confirm this picture and
add a specific angle. Here, the classical Einstein equation is solved on
AdS space-time for boundary conditions that mimic, e.g., HICs
\cite{Chesler:2008hg,Chesler:2010bi,Chesler:2013lia,Chesler:2015wra}.
In this setting the size of the apparent horizon is the appropriate measure
for the produced entropy (see e.g. \cite{Engelhardt:2017aux}).
We will elaborate on this point in more detail below.\footnote{The
event horizon is not suitable as it depends on the entire future history.
}
Fig. 3 in Ref.~\cite{Chesler:2008hg} shows
for a specific setting that the apparent horizon grows linearly with
time for an appreciable period, just as expected.
Shock wave collisions in $\mathcal{N}=4$ SYM theory, which can be computed
via their gravitational dual description in AdS$_5$, are believed to behave in many
qualitative aspects very similar to HICs.
The AdS/CFT duality links the volume element of the apparent horizon to the entropy density\footnote{Strictly speaking this relation is only valid in the equilibrium case.
As in e.g. \cite{Wilke_da, Grozdanov:2016zjj} we exploit this relation also off-equilibrium
and use it to define the entropy(-density) in this case. In general the dictionary giving an interpretation of geometric objects in Anti-de Sitter space in terms of the boundary field theory is far from being complete and remains elusive for many cases, see e.g. \cite{Engelhardt:2017wgc}}
$s$ times an infinitesimal spatial boundary volume $ d^3x$.
Using this relation, the longitudinally integrated entropy density in symmetric collisions
in five dimensional AdS space was calculated, e.g., in Ref.~\cite{Grozdanov:2016zjj}
(see dashed curves of Fig. 5 in ~\cite{Grozdanov:2016zjj}).
Our results for the entropy production (longitudinally integrated, given in units of $\mu^2$)
during symmetric collisions are presented in Fig.~\ref{fig:app_horizon_sym}.
They are seen to correspond closely to the findings in \cite{Grozdanov:2016zjj}.
In Fig.~\ref{fig:app_horizon_asym} and Fig.~\ref{fig:comparison} we compare this
to the analogous computation during asymmetric collisions.
The code used in this calculation is described in detail in Ref.~\cite{Waeber:2019nqd}.
For large enough times the growth of the area
of the apparent horizon is close to linear, as expected, with a slight
superimposed oscillation which probably averages out over sufficiently
long time periods.\footnote{%
The small wiggling around linear growth seen in
Figs.~\ref{fig:app_horizon_sym} and \ref{fig:app_horizon_asym} can be explained by damped oscillations induced by the lowest quasinormal mode and can thus be expected to fade off for larger times.
} The observation that the rate of growth of
the apparent horizon area is almost identical for the two cases shown
in Figs.~\ref{fig:app_horizon_sym} and \ref{fig:app_horizon_asym} is
relevant for the physics of HICs because the longitudinal thickness of
both ions in the overlap region is very asymmetric in some regions of
the transverse plane. However, the gradient of the linear growth should
be independent of these initial conditions.
\begin{figure}[h]
\includegraphics[width=8.6cm]{entropy_sym.pdf}
\caption
{%
The entropy $S$ (per unit transverse area on the boundary), produced during a symmetric collision of thin
gravitational shockwaves in AdS$_5$ (both shocks have width $ w=0.075/\mu$,
where $w$ is the width of the single Gaussian shock waves before the collision),
as a function of time $t$, which is given in units of $[t]=[\mu^{-1}]$, where $\mu^3$
is the transverse energy density of the shock fronts. The gauge/gravity duality relates
the entropy density $s$ to the volume element of the apparent horizon.
To estimate the entropy production we integrate over the longitudinal coordinate.
$S$ is given in units of $\mu^2$. For large enough times linear growth seems to be
a good approximation. The shock fronts touch at $\mu t=0$. The linear fit, plotted as
a red dashed line, is included to guide the eye. Due to the finitely sized spatial box,
in which we study the gravitational collision, we could not follow the time evolution
long enough to observe a potential saturation regime for the entropy (see e.g. \cite{Tsai:2010yb}).}
\label{fig:app_horizon_sym}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=8.6cm]{entropy_asym.pdf}
\caption
{%
The analogue of Fig.~\ref{fig:app_horizon_sym}, but for an asymmetric collision of shock waves with widths $w_+=0.075/\mu$ (right moving) and $w_- = 0.25/\mu$ (left moving).
\label{fig:app_horizon_asym}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=8.6cm]{entropy_compare.pdf}
\caption
{%
Comparison of the symmetric (red, dashed, Fig.~\ref{fig:app_horizon_sym})
and asymmetric (gray blue, Fig.~\ref{fig:app_horizon_asym}) cases.
\label{fig:comparison}
\end{figure}
Thus, at large times linear entropy growth seems to connect not only
classical and quantum chaos for QFTs but also their holographic dual
description. Obviously, these similarities could well be accidental
at our present stage of understanding, but they are sufficiently intriguing
to warrant further research.
\section{Conclusion}
In this note we have argued that detailed holographic calculations
for asymmetric collisions are highly relevant for any quantitative
description of realistic HICs. We have performed such calculations and
have found that:
\begin{itemize}
\item
The characteristic large fluctuations in transverse energy and entropy
densities, which are required in hydrodynamic descriptions to explain
the observed large event-by-event fluctuations of flow observables,
delay hydrodynamization and equilibration in the holographic description so strongly that the fluctuations
are still significant at times of 1--2 fm/$c$ when hydrodynamics becomes
definitely applicable.
\item
In contrast, the transverse dependence of energy densities
in peripheral collisions has only a minor impact on the hydrodynamization
time, such that it is well motivated to initialize hydrodynamics for the
entire system simultaneously.
\item
p+A and A+A collisions do not show major differences in hydrodynamization properties.
\item
The long time linear growth of the apparent horizon is very similar for
symmetric and asymmetric collisions which supports its interpretation
as entropy.
\end{itemize}
Finally, it should be noted that while the delayed hydrodynamization and equilibration caused
by initial state fluctuations is helpful to explain the observed $v_3$ fluctuations,
it also increases concerns that thermalization might not happen as rapidly
as is usually assumed in the interpretation of the medium modifications of
``hard'' probes.
\section*{Acknowledgements}
BM acknowledges support from the U.S. Department of Energy
grant DE-FG02-05ER41367.
LY acknowledges support from the U.S. Department of Energy
grant DE-SC\-0011637. SW acknowledges support from the Elite Network of Bavaria in form of a Research Scholarship grant. AS and SW acknowledge support by BMBF (Verbundprojekt 05P2018,
05P18WRCA1).
\section{Introduction}
The well-known measurements of the Anisotropies of the Cosmic
Microwave Background Radiation by WMAP \cite{WMAP}, in combination with
the supernovae type Ia observations \cite{SNIa},
imply that the evolution of
the universe is dominated by dark energy, with a state parameter that
is strongly constrained. Among the most popular scenarios
to explain the data is to assume the
existence of an inflationary universe with a
very small cosmological constant
$\Lambda$. In principle, possible contributions to
dark energy can also be provided from topological defects
which are produced at phase transitions in the universe \cite{vilenkin}.
An interesting possibility, for instance, would have been that,
such contributions are provided by
domain walls \cite{wallsref} associated with the breaking of
discrete symmetries (which arise commonly in a wide class of
particle physics models).
Yet another possibility is that there is no discrete
symmetry at all. Even then, there could be nearby minima
separated by a potential barrier, with initial conditions that result
in both minima getting populated with non-zero probabilities.
In this case we do not have an exact domain wall configuration,
but (as will become more obvious later) it still makes sense
to talk about approximate domain walls that interpolate,
in a broader sense, between
basins of attraction of nearby local minima.
And, in fact, we will show that the dynamics and
evolution of the network of inhomogeneities is very similar in both
situations - with exact and approximate domain walls.
As a specific example of the behaviour of the second type we take
run-away potentials which appear in models of dynamical
supersymmetry breaking, and play an important role in
modern attempts at non-perturbative supersymmetry breaking
and moduli stabilisation. In fact, it has been
pointed out by Dine \cite{Dine},
that spatially inhomogeneous field configurations may evolve differently
in the expanding Robertson-Walker background
than the homogeneous mode. The inhomogeneities may help to stabilise
the moduli (such as the dilaton or radion)
at shallow but finite minima, thus avoiding the Steinhardt--Brustein
\cite{stein-brus} and Buchm\"uller \cite{buchm}
effects. At the same time,
the energy density inhomogeneities of such
configurations
may contribute to the
shape of the power spectrum of CMBR.
In the case of TeV scale supersymmetry breaking this
contribution would be unobservable, but the issue of finding the right vacuum
remains a valid question independently of the mass scale associated with a run-away potential.
\section{Cosmological problems with wall networks and their possible resolution}
There are three main problems in cosmological scenarios that involve a
significant abundance of domain walls:
(i) Domain walls that could potentially contribute to Dark Energy,
generally predict an equation of state with
$-2/3 < w_X < -1/3$, which would be ruled out from the commonly quoted
upper bound $w_X < -0.78$ at $95\%$ c.l.
(ii) Domain walls that could enhance the Cold Dark Matter Spectrum
are in general associated with unacceptably large
fluctuations of the CMBR (Cosmic Microwave Background Radiation),
for the range of parameters that would have been relevant for
the formation of structure.
For a horizon-size bubble at a redshift $z_{a}$, with surface energy density $\sigma$,
the generated anisotropies are given by
\begin{equation}
\delta T / T \sim G_{N} \sigma R_{H} (z_{a}).
\label{eq:tin}
\end{equation}
(iii) Domain walls in the simplest class of models that evade
problem (ii) do not stay around sufficiently long to produce density
fluctuations that can grow into the observed structures \cite{PRS,Sarkar,Coulson}.
The first problem has in fact been addressed in a very convincing way
in \cite{ADWRO},
where the assumptions made in the
choice of priors of the data analysis have been questioned.
In fact, it has been shown that,
for lower values of
the Hubble parameter ($h<0.65$, as indicated by
Sunyaev-Zeldovich and time delays for gravitational lensing
observations), and for higher values of the matter density
($\Omega_m > 0.35$, in agreement with measurements of the
temperature-luminosity relation of distant clusters observed with
the XMM-Newton satellite),
domain walls in an inflationary universe
can provide a good fit to the WMAP data.
In previous papers \cite{LR}, \cite{LLOR}, we have
proposed and tested two main frameworks that may naturally
arise in standard model extensions for which domain walls can
lead to the formation of structure, enhancing the standard
cold dark matter spectrum in an inflationary universe,
while still be compatible with CMBR. These are the following:
a) Schemes where the walls are unstable, due to a non-degeneracy
of the minima of the potential
(as appears naturally in a wide class of superstring models \cite{LR}).
For a large range of possible parameters, the walls are
expected to annihilate before recombination.
In this way, although structure can be generated
and subsequently grow
in consistency with the observations, no unacceptable
distortions to the cosmic microwave background radiation
are produced.
b) A second possibility is that,
if one of the minima
of the potential of the scalar field is favoured,
then a biased phase transition occurs.
As a first step, we showed why
such a bias may be expected in post-inflationary,
out-of-equilibrium phase transitions \cite{LLOR}.
The idea is that, if the interactions of a field are {\em very weak},
this will not be confined at the top of its potential, but
will in fact be centered around a classical value
that will be closer to one of the minima of the potential.
Quantum fluctuations will move it, but, nevertheless, the bias
(offset) will remain. Then, percolation theory indicates that
there is a range of natural initial conditions
for which walls of finite size (and not of horizon size)
are produced inside a sea of the dominant vacuum.
While not very accurate, percolation theory allowed
us to formulate a qualitative picture
of the spatial
distribution of the wall-driven overdensities
in a post-inflationary
universe, and to account for
the whole range of large scale structure observations.
In addition, by studying wall-driven fluctuations at small scales,
it has been possible to reproduce the observed
distribution of quasars \cite{LHei}.
Subsequently, elaborate numerical simulations seemed to indicate that
despite the biasing of the minima, the walls either disappear too fast,
or stay around for too long \cite{Coulson}, implying that they have to be very
soft if they are not to lead to unacceptable distortions of the microwave background
radiation.
In this work, we will give specific examples where
this need not be the case, firstly in biased double well
potentials with
non-degenerate minima and secondly, for the runaway potentials that can be
expected in a wide class of supersymmetry breaking models, based on
gaugino condensates.
This complements the literature on the subject and
raises additional possibilities to those that
have been considered in the recent years
(\cite{Mats} - \cite{Eto}).
\section{Basic Framework for Out-of-Equilibrium, Biased Phase Transitions}
An elaborate numerical study of the dynamics of domain wall
networks in the case of a scalar field whose potential
has two degenerate minima that occur with the same probability,
has been provided by Press, Ryden and
Spergel, \cite{PRS}, who
showed that such networks would rapidly evolve into long domain walls
stretching across the universe whose surface area, and, hence, energy density,
persisted for a long time. This resulted
in a rapid domination of the energy density
of the universe by these walls and in unacceptably
large distortions in the CMBR. Such an
initial distribution on a lattice can be described statistically using
percolation theory.
On a three dimensional square lattice, there is a critical
probability, $p_{c}=0.311$, above which the associated vacuum will percolate
across the entire lattice \cite{ovr}.
It is easy to see that, by initializing both vacua
with a probability $p=1/2$, both vacua propagate across the lattice.
Since domain walls lie on the interface between the two different vacua, this
implies the formation of domain walls which extend across the entire
universe. This gives a clear mathematical explanation for the Press, Ryden and
Spergel result. However, if for one vacuum $p<p_{c}$, then this
vacuum would form finite clusters in the percolating sea of the other
one. The domain walls would then be small, finite bags
which would disappear relatively rapidly.
Similar effects would hold in the case of potentials with non-degenerate minima
\cite{LR}.
Here, the true minimum will be at some
stage energetically favoured, and domain walls will disappear.
If a phase transition is triggered by fluctuations in a system in
thermal equilibrium, and the vacua are truly degenerate,
one expects the population probabilities
of each vacuum to be equal.
However, non-equilibrium phase transitions, which can occur in
realistic models of the early universe, generically lead to a biased
choice of vacuum state.
Indeed, an out-of-equilibrium scalar field $\phi$ living on an inflating
de Sitter space, observed over a physical volume $\ell^3$,
breaks into a classical and a quantum piece
\begin{equation}
\phi = \phi_c + \phi_q
\label{1}
\end{equation}
where $\phi_c$ satisfies the classical equation of motion
and $\phi_q$ represents de Sitter space quantum
fluctuations.
This is illustrated in Figure 1.
\begin{figure}[!h]
\begin{center}
\includegraphics*[height=3cm]{pic.eps}
\end{center}
\caption{\it Schematic illustration of biased,
out-of-equilibrium phase transitions.
The initial mean value of the field is shifted towards
one of the minima, which occurs with a higher probability.}
\end{figure}
During inflation, and long afterwards, the Hubble term
is very large
compared to the
curvature of the potential and, thus,
to a very good approximation,
$\phi_c = \vartheta$ where $\vartheta$ is
an arbitrary
constant (to
next order, there is a tiny damped velocity
$\dot \phi_c
\sim \frac{V}{H \upsilon}$).
On the other hand, $\phi_q$
represents
quantum
fluctuations of the scalar field in de Sitter space.
These fluctuations
result in the formation of a weakly inhomogeneous
quasi-classical random field. After inflation ends, the FRW horizon,
$\ell_c=1/H$, grows and fluctuations with scales less than the
horizon are smoothed out. Thus $\ell_c$ acts as an UV cut-off in
the momentum distribution of this random field. In a spatial
region of length $l$, the distribution
of the fluctuations around $\vartheta$ can be calculated and is
given by \begin{equation}
P(\phi)=\frac{1}{\sqrt{2 \pi} \sigma_{\ell}} \exp{(-\frac{(\phi -
\vartheta)^2}
{2 \sigma^2_{\ell}})}
\label{prob}
\end{equation}
where
\begin{equation}
\sigma^2_{\ell} = \frac{H_i^2}{4\pi^2} \ln \left(
\frac{\ell}{\ell_c} \right)
\label{3}
\end{equation}
(and, as has been discussed in \cite{LLOR}, for fields produced towards the
end of inflation, one can ensure that
the longwavelength
components in the
Fourier
decomposition of the random field $\phi_q$ do not introduce
correlations between the values of the random field at distant
points that would be unacceptable for our discussion).
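For concreteness, a minimal C++ sketch of this step (function and parameter names are ours) draws initial field values from the gaussian distribution (\ref{prob}) with the variance (\ref{3}):
\begin{verbatim}
#include <cmath>
#include <random>

// Variance of the quasi-classical random field:
// sigma_l^2 = H_i^2 / (4 pi^2) * ln(l / l_c).
double sigma2(double H_i, double l, double l_c) {
    const double pi = 3.14159265358979323846;
    return H_i * H_i / (4.0 * pi * pi) * std::log(l / l_c);
}

// One realisation of phi = phi_c + phi_q, drawn around the
// classical offset (bias) vartheta.
double sample_phi(double vartheta, double H_i, double l, double l_c,
                  std::mt19937 &rng) {
    std::normal_distribution<double> gauss(
        vartheta, std::sqrt(sigma2(H_i, l, l_c)));
    return gauss(rng);
}
\end{verbatim}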
Clearly, such transitions can only occur in a system that is too
weakly coupled to achieve thermal equilibrium.
A number of such biased
transitions have been investigated, including those occurring in very
light scalar fields in de Sitter space \cite{ovr}.
The two disconnected minima are
denoted by $(+)$ and $(-)$ respectively.
At the phase transition, the field has some finite correlation length
(like the inverse Ginzburg temperature in
case of a transition triggered by thermal fluctuations) over which
the post-transition vacuum
is chosen coherently, denoted by $\Lambda$.
One can approximate the initial
spatial structure of the vacuum produced during the transition
by first dividing space into cells of volume $\Lambda^{d}$, where $d$ is the
dimension,
and second, by assuming that choices of the new vacua are made independently
in each cell, giving the $(+)$-vacuum with probability $p$ and
the $(-)$-vacuum with probability $1-p$, where $0\leq p\leq 1/2$.
Whenever the vacua in neighboring cells are different,
a domain wall will form
which interpolates between them, and so, typically,
a complicated spatial network of domain
walls will form.
Of course, an arbitrary spatial superposition of domain walls,
such as that produced by the mechanism described above,
is not a solution of the equations of motion and
cannot be stable. However, such a superposition
represents physical initial conditions, the
subsequent evolution of which is governed by the dynamics of the
theory. Subject to this dynamics, the initially static domain
walls acquire non-zero velocities, oscillate under their
surface tension, and interact with one another. This will be discussed in subsequent sections,
where we will summarise the main ingredients,
but also motivation
and naturalness of out-of-equilibrium,
biased phase transitions,
where the bias can come from: \\
(i) small differences in
the energy of the minima of the potential, and \\
(ii) different probabilities to reach these minima.
\section{Dynamics of the Scalar Field and Wall Network}
How do we study the behaviour of a ``biased network''?
The scalar field is initialized by randomly setting
it equal to $-\phi_0$ or $+\phi_{0}$ at
each lattice site, with bias probability $p$ for $+\phi_0$
and $1-p$ for $-\phi_0$ with $0\leq p \leq 1/2$. The
lattice resolution corresponds to the initial field correlation length,
and on physical scales above the resolution cut-off the field will
have a white noise power spectrum (yielding results similar to percolation theory).
In this section, we follow the study presented in \cite{PRS} and subsequently extended in
\cite{Coulson}. The dynamics of the scalar field, $\phi$, is
determined by the equation of motion. This has the form
\begin{equation}\label{r_ruchu_t}
\frac{\partial^{2}\phi}{\partial t^{2}} + \frac{3}{a} \frac{\partial
a}{\partial t} \frac{\partial \phi}{\partial t} -
\frac{1}{a^2}\nabla^{2}\phi = - \frac{\partial V}{\partial \phi}.
\end{equation}
and, introducing the conformal time $\eta$
(with $d\eta = \frac{dt}{a(t)}$), it becomes
\begin{equation}
\frac{\partial^{2}\phi}{\partial \eta^{2}} + 2\frac{\partial a}{\partial t}
\frac{\partial \phi}{\partial \eta} - \nabla^{2}\phi =
-a^{2}\frac{\partial V}{\partial \phi}.
\label{eqmot}
\end{equation}
In the above,
$a$ is the scale factor of the universe ($a \sim \eta$
in the radiation era, and $a\sim \eta^2$ in the matter era),
$V$ is the scalar potential and the spatial gradients are with
respect to co-moving co-ordinates.
The energy density and pressure of the field are then:
\begin{equation}
\rho = \frac{1}{2}\left(\frac{\partial\phi}{\partial t}\right)^{2}
+ \frac{1}{2a^2}|\nabla\phi|^2 + V(\phi),
\end{equation}
\begin{equation}
p = \frac{1}{2}\left(\frac{\partial\phi}{\partial t}\right)^{2}
- \frac{1}{6a^2}|\nabla\phi|^2 - V(\phi).
\end{equation}
The scalar potential
determines the topology of the vacuum manifold.
A typical choice is a
$\phi^4$ potential
\begin{equation}
\label{eq:potential}
V(\phi)= V_0 \left(\frac{\phi^2}{\phi_0^2} -1\right)^2
\end{equation}
with the two degenerate vacua, $\phi=\pm\phi_0$, separated by a
potential barrier $V_0$.
One can define a physical domain wall thickness $w_0$ given by
\begin{equation}
\label{eq:thick}
w_0 \equiv \pi \frac{\phi_0}{\sqrt{2V_0}}.
\end{equation}
The ratio of the wall thickness to the horizon size (${\cal H}^{-1} =
\left( \frac{1}{a} \frac{\partial a}{\partial \eta} \right)^{-1}$)
at the time of the phase transition
\begin{equation}
\label{eq:eta0}
W_0 \equiv \frac{w_0}{a(\eta_0)} \frac{1}{\eta_0}
\left. \frac{ d \ln{a}}{ d \ln{\eta}}\right|_{\eta_0}
\end{equation}
then sets $\eta_0$, the conformal time of the phase transition
and the time at which we begin the simulation (one needs
the walls to be thinner than the horizon in order to study their
dynamics, namely $w_0 \ll H^{-1}$).
Here we assume that the expansion is dominated by some smooth component,
filling the universe. The equation of state of this component is
$p = \alpha \rho$. This gives
$ a(t) = a_0 t^{\frac{2}{3(\alpha+1)}}$, and
$d\eta = \frac{dt}{a(t)}$, $\eta =
t^\frac{3\alpha+1}{3(\alpha+1)}$. Also,
$$
a(\eta) \sim \eta^\frac{2}{3\alpha +1} \sim \eta ^\omega.
$$
The static domain wall solution of the equation of motion is
\cite{PRS}
\begin{equation}
\phi = \phi_{0}\tanh \left[\frac{\sqrt{2V_0}}{\phi_0}a(z-z_0) \right]
\equiv \phi_0 \tanh \left[\frac{a(z-z_0)}{w_0}\right]
\end{equation}
and for a non-static wall (boosted with a velocity $v_0$):
\begin{equation}
\phi = \phi_{0}\tanh\left[\frac{\gamma_0
}{w_0}a(z-z_0-v_{0}t)\right],
\end{equation}
where $\gamma_0 \equiv (1-a^2 v^2_0)^{-\frac{1}{2}}$.
The energy density and the surface density of the walls are
\begin{equation}
\rho(z) = \frac{\gamma^2_0 V_0}{2}
{\rm{sech}}^4\left[\frac{\gamma_0}{w_0}a(z-z_0-v_0t)\right]
\end{equation}
\begin{equation}
\sigma = a\int_{-\infty}^{+\infty} \rho(z)dz = \frac{2\gamma_0 V_0
w_0}{3}.
\end{equation}
Finally, during the expansion, the velocity of the wall changes
according to
\begin{equation}\label{spowalnianie_scian}
\gamma(t)v(t) \sim a(t)^{-3}.
\end{equation}
Let us now pass to run-away potentials, of the form
$$ V(s) = \frac{1}{2s} {\left( A(2s+N_1) e^{-\frac{s}{N_1}} - B(2s+N_2) e^{-\frac{s}{N_2}}\right)}^2.$$
In this case we do not have an analytic solution of the domain-wall type,
interpolating between finite and run-away minima. However,
we may still use the domain-wall language to describe the
distribution of energy and topology of the vacuum.
We shall refer to as `domain walls' the non-equilibrium configurations which appear
on the lattice, interpolating between different field values at neighbouring lattice sites;
we can then identify the position of these generalised walls with the links between
the lattice sites occupied by different vacua. In the case of the run-away vacuum,
we shall simply determine whether a field value at a given site
belongs to the classical domain of attraction of that vacuum.
To further develop an intuitive feeling about the evolution
of the system we shall define a domain wall width, demanding that
it should correspond to a distance in configuration space
over which the field gradient is of the order of the
potential energy of the local
maximum that separates the vacua. In other words,
$$\Delta = \frac{|\phi_{max} - \phi_{min}|}{\sqrt{V(\phi_{max}) - V(\phi_{min})}}, $$
where $\phi_{max}$ denotes the position
of the maximum separating the domains of
attraction of the finite minimum, $\phi_{min}$, and
of the run-away minimum ($\phi \rightarrow
\infty$).
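For a given potential, this definition translates into a one-line routine; a minimal sketch (names are ours) is:
\begin{verbatim}
#include <cmath>

// Generalised domain-wall width:
// Delta = |phi_max - phi_min| / sqrt( V(phi_max) - V(phi_min) ),
// with phi_max the local maximum separating the finite minimum
// phi_min from the run-away direction.
template <class Potential>
double wall_width(const Potential &V, double phi_min, double phi_max) {
    return std::fabs(phi_max - phi_min)
         / std::sqrt(V(phi_max) - V(phi_min));
}
\end{verbatim}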
\section{Numerical procedure: description and testing}
The numerical implementation of the equation of motion
(\ref{eqmot}) is non-trivial.
It involves discretisation of the equation of motion (Appendix I), which allows
treating the domain wall network numerically. Moreover, the calculations are very
time-consuming, unless an optimisation of the time step is applied. We propose
in Appendix II such a technique, which very significantly improves the
efficiency of the code, and allows us to go to larger lattices and
higher accuracies. The role of the size of the lattice is discussed in Appendix III.
There are additional considerations to be made:
to start with, the factor $a^2$ on the right hand side of the equation
of motion
makes the effective potential barrier grow with the expansion.
The result is that, in comoving coordinates, the width of the walls
decreases like $a^{-1}$, which is $\eta^{-1}$
for radiation dominated and
$\eta^{-2}$ for a matter dominated universe.
This implies that, on any reasonably sized
grid, it is impossible to ensure that the walls would be visible on
the lattice to the end of a calculation, when the horizon size
is roughly the grid size.
To appropriately represent walls, their width should be
of the order of a few lattice sites during the whole simulation
(in our case, we require the walls to be about five lattice
sites wide since, if they become too wide, we lose the resolution).
However, we know that the dynamics of the walls
does not depend on their width once they get created and separated from each other
\cite{PRS}, while the total surface energy and surface tension
also do not depend on the width.
As a result, one can consider a
generalization of the equation of motion, which may force the walls
to maintain a constant co-moving thickness while otherwise
not altering their dynamics. This modified equation is
\begin{equation}\label{r_ruchu_uog}
\frac{\partial^{2}\phi}{\partial \eta^{2}}
+ \frac{\alpha}{\eta} \left(\frac{d\ln{a}}{d \ln {\eta}} \right)
\frac{\partial \phi}{\partial \eta}
- \nabla^{2}\phi = -a^{\beta}\frac{\partial V}{\partial \phi},
\end{equation}
and, for $\alpha=\beta=2$, we recover the initial equation of motion.
If $\beta=0$ the walls will have constant comoving width.
This choice does alter the scaling of the adiabatic effects of the Hubble
expansion ($\beta =0$), but this effect can be compensated by a proper choice of
$\alpha$. It turns out that \cite{PRS}
\begin{equation}\label{skalowanie_fi}
\langle \phi - \phi_0 \rangle_{\rm{rms}} \sim a^{-\frac{\alpha}{2}
- \frac{\beta}{4}}.
\end{equation}
Thus, for $\beta = 0$, we have to set $\alpha = 3$
in order to have the same scaling of the deviation of $\phi$ from $\phi_{0}$.
In addition,
\begin{equation}\label{spowalnianie_uog}
\gamma v \sim a^{-\alpha -\frac{\beta}{2}}.
\end{equation}
Having obtained a consistent set of equations, the next step is to understand
the relevant range of initial conditions and parameters.
As in previous works, we will set
$\phi_0=1$. The scalar field initial conditions
are then chosen using the prescription described above for various
bias probabilities,
$p$. That is, in the following we will use percolation theory with allowed
field
values of $\pm1$ at any lattice site. It is of interest to compare the
evolution of
the network initialized with two-point initial conditions with
those initialized with various continuous distributions. We have done
this for a
uniform distribution which gives probability $1-p$ of choosing $\phi$ between
$-1$ and $0$ and probability $p$ of choosing between $0$ and $1$,
and with a gaussian
distribution $P(\phi)$ such that $\int_{0}^{+\infty} d\phi P(\phi)=p$.
In general, the surface area of the initial network, measured at $\eta=\eta_0$,
is largest in the case of the two-point, percolative distribution.
However, after a few steps of dynamical
evolution the network stabilizes, and its important characteristics,
such as surface energy or kinetic energy and their time evolution,
become indistinguishable for a fixed bias $p$. Hence, the sharp initial
conditions
of percolation theory also give a good approximation to initial conditions
softened by smooth distribution functions. This justifies the use of the pure
two-point percolation theory initial conditions in this paper.
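As an illustration, a minimal C++ sketch of these initialisation prescriptions (two-point, uniform and biased gaussian; all function names are ours, and the gaussian offset is obtained numerically rather than in closed form) could read:
\begin{verbatim}
#include <cmath>
#include <random>
#include <vector>

// Two-point (percolative) initial conditions: phi = +phi0 with
// probability p and phi = -phi0 with probability 1-p at each site.
void init_two_point(std::vector<double> &phi, double phi0, double p,
                    std::mt19937 &rng) {
    std::bernoulli_distribution coin(p);
    for (double &f : phi) f = coin(rng) ? phi0 : -phi0;
}

// Uniform initial conditions: phi in (0,1) with probability p,
// phi in (-1,0) with probability 1-p.
void init_uniform(std::vector<double> &phi, double p,
                  std::mt19937 &rng) {
    std::bernoulli_distribution coin(p);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (double &f : phi) f = coin(rng) ? u(rng) : -u(rng);
}

// Mean mu of a unit-width gaussian such that P(phi > 0) = p,
// found by bisection on the cumulative distribution.
double gaussian_offset(double p) {
    double lo = -10.0, hi = 10.0;
    for (int i = 0; i < 100; ++i) {
        double mu = 0.5 * (lo + hi);
        double prob_pos = 0.5 * std::erfc(-mu / std::sqrt(2.0));
        if (prob_pos > p) hi = mu; else lo = mu;
    }
    return 0.5 * (lo + hi);
}
\end{verbatim}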
We will set the initial field ``velocity'', $\dot{\phi}$, to be zero
everywhere on the lattice (in previous work
the results were found to be insensitive
to small initial velocities with respect to the energy of the barrier
\cite{Coulson}; this
was checked by repeated simulations with $\dot{\phi}$
chosen from a uniform distribution of velocities between $-1$ and $+1$,
i.e. $\sim {\cal O}\left(\phi_0/\eta_0\right)$).
Simulations are run in the radiation dominated epoch, with
an initial time, $\eta_0=1$, unless stated otherwise.
We chose a wall thickness $w_0=5$, and a ratio $W_0=5$ (see eq.~(\ref{eq:eta0})).
This value is used to ensure that the wall thickness is well
above the lattice resolution scale (recall $\Delta x=1$), while also
ensuring that for most of the dynamical range of the simulation, the wall--wall
separation exceeds the wall thickness.
We tested our code with the simplest case
one can study, namely the double well potential, for which
\[
V(\phi) = V_0 \left( \left(\frac{\phi}{\phi_0}\right)^2 -1\right)^2
\]
($ V(\phi) = m \phi^2 +
\lambda \phi^4$ up to a constant, with $m = -2V_0/{\phi_0}^{2} $,
$\lambda = V_0/{\phi_0}^{4}$).
Looking at the potential and the first derivative (Figure 2) we see that, while the
potential is symmetric (even), the first derivative is asymmetric (odd):
\begin{figure}[!h]
\begin{center}
\epsfbox{hat_gr1.eps}
\end{center}
\caption{\it Shape and first derivative of the double well potential}
\label{etykieta4}
\end{figure}
This is a special situation, and this symmetry is violated in the case of the
exponential potential discussed later on.
For the numerical part, we followed
\cite{PRS} and \cite{Coulson}, setting the values of the
parameters $\alpha$ and $\beta$ to reproduce the original equation
of motion, equation~(\ref{eqmot}). In the runs the domain
walls maintained a constant physical rather than co-moving
thickness, and so problems of available dynamic range become
important. The prescription used is as follows:
The simulation is run on a $512\times 512$ lattice, with our
standard field initial conditions and $p=0.5$. Using a wall thickness
of $w_0=25$, the initial conditions were evolved
with the standard parameter values of $\alpha=3$ and $\beta=0$,
until a time when
the wall-wall separation exceeded the wall thickness; that is, a
time $\eta=2w_0$. The equation of motion was then changed, to set
$\alpha=\beta=2$ until a time $\eta=250$. The co-moving thickness
of the walls at the end of the simulation was then $5$ lattice sites.
To compare, the run was repeated, this time leaving the standard
parameter values fixed throughout the simulation.
Looking at the evolution of the energy density
of the network of domain walls in three dimensions,
(in a radiation dominated epoch), we confirm the following results
\cite{Coulson}.
$\bullet$
For $p<0.311$, the critical threshold of percolation theory,
only one vacuum percolates the
lattice, and isolated bags of one vacuum
are to be found in a percolating sea of the
more dominant vacuum. These bags rapidly decay under their
surface tension.
$\bullet$
For $0.5>p>0.311$ both vacua percolate, leading to
an initial network of infinite
(lattice sized) domain walls. However,
these also rapidly decompose into vacuum
bags which then decay.
$\bullet$
Only in the exact $p=0.5$ case is long-term scaling
seen, with the walls eventually dominating the energy density
of the universe.
Thus, a network of domain walls forming well before matter-radiation
equality can be sufficiently massive to contribute significantly
to large scale structure formation on comoving scales
less than $\sim 20$~Mpc. However, such a network
will decay before photon decoupling.
An immediate question is whether superhorizon fluctuations
can be of any relevance. In principle,
such a possibility can arise, particularly in a universe with
a significant hot dark matter component, and
scale invariant primordial perturbations
induced by an earlier inflationary epoch. However,
similarly to \cite{Coulson}, we found that,
for much of the range of biases,
wall networks turned out to be cosmologically innocuous,
as their energy density
exponentially decays with a characteristic time of only
a few expansion times. Simulations were made in both
3-dim and 2-dim; in the
latter case walls stay around longer but, still,
for any significant scaling
of the network before the ultimate exponential decay,
fine-tuning of $p$ close to $1/2$ is required.
In principle, one has to consider also the possibility
of having bias in the initial velocities.
However, for the double well potentials, modifications from such effects are
negligible. The reason is that in this case the field is perfectly reflected
to the minima by the external barriers; this however is not what happens for
the runaway potentials that we will proceed to discuss, as well as for periodic
potentials.
\section{Biased Potentials with non-degenerate minima}
In the previous section, we summarised the expectations for
potentials with degenerate minima. However,
the behaviour of domain walls can change radically in the case that the
minima of the potential are not exactly degenerate, so that the false vacuum is unstable.
Before studying the domain wall dynamics that are to be expected,
we would first like to discuss the naturalness of such solutions.
This problem has been studied extensively in \cite{LR},
where specific realisations of such scenarios have been proposed:
In realistic standard model extensions, and particularly
in superstring models,
there are usually several discrete groups
$Z_N$. The fields
in the theory then transform as $\alpha^r$, r=0,1,..,N-1, where
$\alpha$ is the $N^{th}$ root of unity, and the effective
potential is constructed from $Z_N$ invariant combinations of the
fields. In non-supersymmetric models, for example,
the Lagrangian of a complex scalar field $\Phi$, transforming as
$\alpha$ has the form:
\begin{equation}
L = \partial_{\mu} \Phi \, \partial^{\mu}
\Phi^{*}
+ \mu^{2} \mid\Phi\mid^{2}
- \frac {\lambda \mid\Phi\mid^{4}}{4}
+\lambda^{\prime}(\frac{\Phi^{N}}{M^{N-4}}
+ \frac{\Phi^{* N}}{M^{N-4}})+..
\label{eq:e1b}
\end{equation}
where the coupling $\lambda$ is made real by absorbing its phase
in the field $\Phi$.
The coupling $\lambda'$ is of order unity.
The non-renormalisable terms of dimension
$>4$ arise because we have an effective field theory generated
by physics at some (high) scale, M.
If $\mu^2$ is positive,
the effective potential for $\Phi$ has a
minimum for non-vanishing value of the modulus and leads to
spontaneous symmetry breaking of the discrete $Z_N$ group. In
this case it is convenient
to reparametrize $\Phi$ as
\begin{equation}
\Phi = (\rho + \upsilon)
\,e\,^{i \,(\theta / \upsilon + \alpha) },
\label{eq:e2}
\end{equation}
where $\upsilon e^{i \alpha}$ is the v.e.v. (vacuum expectation
value) of $\Phi$, while $\rho$ and $\theta$ are real scalar
fields. The potential of the field $\theta$ is then \cite{LR}
\begin{equation}
V ( \theta ) = \frac{2 \lambda^{\prime }\upsilon^{N}}{M^{N-4}}
\,
\cos(\frac{\theta N}{\upsilon } + N \alpha)
\label{eq:e3}
\end{equation}
and the value of the pseudo-Goldstone mass is given by
\begin{equation}
m^{2} = \frac{N^{2} V_{0}}{\upsilon^{2}}
\cos(N \alpha)\, \, ,
\,\,\,\,\,V_{0} \equiv \lambda'
\frac{2 \upsilon^{N}}{M^{N-4}}.
\label{eq:e4}
\end{equation}
The potential of equation~(\ref{eq:e3}) has an N-fold degeneracy
corresponding to $\theta /v\rightarrow\theta /v+2\pi /N$
\footnote{Pseudogoldstone bosons, due to their very light mass and negligible interactions,
may very naturally give rise to late phase transitions
\cite{late}.}.
How can this degeneracy be lifted?
So far we have discussed
domain walls which are expected to arise
from the potential of a {\em single} scalar field $\Phi$.
However additional scalar fields are also present.
Then, if, as is likely, the interactions between the fields
cause
more than one field transforming non-trivially under the
discrete symmetry group to acquire a vev,
it is possible to generate a situation in which the vacuum
degeneracy is apparently lifted:
If one of these fields acquires its vev
before or during inflation the observable universe will have a
unique value for its vev. After inflation the effective potential
describing the remaining fields may still have an {\it
approximate} discrete symmetry but the vacua will not be exactly
degenerate.
To illustrate this, for instance in non-supersymmetric models,
consider adding to the above theories a second
field $\Phi^{\prime}$. If $\Phi$ transforms as $\alpha^{m}$
and $\Phi'$ as $\alpha^{n}$ under the symmetry
group and assuming that $n \geq m$ and
$N/n$ is integer
in order to simplify the analysis,
the potential is \cite{LR}
\begin{equation}
V(\theta) =\sum_{r=0}^{N/n} \left[2
\frac{\upsilon^{(N-nr)/m}
\upsilon^{\prime r}}{M^{(N-(n-m)r)/m-4}}
\cos \left(\frac{r
\theta^{\prime}}{\upsilon^{\prime}}+
\frac{(N-nr)}{m}\alpha+r\beta\right)
\right].
\label{eq:asy2}
\end{equation}
Clearly, there will be a dominant term; however, the subdominant ones
will {\em slightly} split the degeneracy of the potential.
Having summarised the model building aspects, the next question
is whether the domain walls to be expected in these theories can
be of any relevance for structure formation.
In the case of non-degenerate minima, we expect that there
is a critical scale at which the
loss in surface energy from collapsing horizon-size bubbles
becomes similar to the gain in volume energy
from passing to the true minimum.
At some stage, the true minimum will be favoured at all horizons,
and walls will disappear.
In this case, for subhorizon fluctuations,
we will have a maximum scale, corresponding to the
size of the horizon $R$ at the time that the walls disappear
(which today would correspond to
$R^{\prime}=R_{H_0}/\sqrt{1+z_{a}} $).
These could in principle generate structure for
the smaller angular scales of WMAP.
The question of course is what is the situation
regarding superhorizon fluctuations, and whether
these can give any structure at the largest scales
(COBE, and the largest scales of WMAP).
The time when walls disappear is specified by a redshift
$z_a$, which is a calculable quantity, even before passing to
any numerical simulation.
For non-degenerate vacua there is always
a critical bubble radius above which it is energetically
favourable for the bubble of true vacua to expand gaining more
volume energy than is lost in surface energy. Once the horizon
exceeds this critical radius bubbles of the true vacuum will
expand everywhere at the speed of light to fill the whole
universe and this occurs at the same time in all horizon
volumes \cite{coleman}.
The non-degeneracy of the potential
is measured by the energy density difference $\delta \rho$ between the vacua;
the critical radius is reached when $\delta \rho \approx
\frac{\sigma}{R}$.
For instance, if walls decay during matter dominance, this
determines the redshift $z_a$ to be
\begin{equation}
z_{a} = \left(\frac{\delta \rho R_{H_{o}}}
{\sigma}\right)^{2/3} - 1 \, .
\label{eq:zz}
\end{equation}
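For orientation, equation (\ref{eq:zz}) translates directly into a small routine (argument names are ours):
\begin{verbatim}
#include <cmath>

// Redshift z_a at which walls of tension sigma disappear during
// matter dominance: z_a = (drho * R_H0 / sigma)^(2/3) - 1.
double z_annihilation(double drho, double R_H0, double sigma) {
    return std::pow(drho * R_H0 / sigma, 2.0 / 3.0) - 1.0;
}
\end{verbatim}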
In what follows, we combine biasing with non-degeneracy of minima,
a situation that, according to the above, can arise for effective
potentials generated by several weakly interacting scalar fields.
Then, if the minimum with the highest probability
has a higher energy than the second one, we have two
competing effects, which can allow modifications from
previous results in the literature.
This is illustrated in Figure 3.
\begin{figure}
\begin{center}
\includegraphics*[height=3cm]{pic2.eps}
\end{center}
\caption{\it Schematic illustration of biased, out-of-equilibrium
phase transition, in a potential with non-degenerate minima.
The field is shifted towards the false vacuum.}
\end{figure}
To understand these effects, we perform a numerical simulation.
A simple and generic parametrisation of the non-degeneracy of the potential
is obtained by adding to $V(\phi)$ a term linear
in $\phi$, namely
$$V(\phi) \rightarrow V(\phi) - \epsilon \, V_0 \, \phi \; .$$
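In a simulation code, this tilted double well can be represented, for instance, by a small structure holding $V_0$, $\phi_0$ and $\epsilon$ (a sketch; names are ours):
\begin{verbatim}
#include <cmath>

// Tilted double well: V(phi) = V0 ((phi/phi0)^2 - 1)^2 - eps V0 phi,
// where eps parametrises the non-degeneracy of the minima.
struct TiltedDoubleWell {
    double V0, phi0, eps;
    double operator()(double phi) const {
        double x = phi / phi0, d = x * x - 1.0;
        return V0 * d * d - eps * V0 * phi;
    }
    // dV/dphi: the only quantity the field "feels" in the evolution.
    double deriv(double phi) const {
        double x = phi / phi0;
        return 4.0 * V0 * x * (x * x - 1.0) / phi0 - eps * V0;
    }
};
\end{verbatim}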
The monitoring of the extrema of the potential is shown
in Appendix IV. During the evolution,
the field ``feels'' only the derivative
of the potential and the gradient of the field, thus
it has the tendency to evolve towards the place
where the magnitude of the derivative is larger. This may in principle compensate
the biasing of the initial distribution; in fact,
the two effects can be combined in such a way, that one produces a
quasi-stable network, which is shown in Figure 4.
\begin{figure}
\hspace*{2.0cm}
\vspace*{0.5 cm}
\includegraphics*[height=6cm]{surf047.eps} \\
\hspace*{2.0cm}
\vspace*{0.5 cm}
\includegraphics*[height=6cm]{surfm012.eps} \\
\hspace*{2.0cm}
\vspace*{0.3 cm}
\includegraphics*[height=6cm]{meanm012.eps}
\caption{\it Domain wall evolution in potentials with
non-degenerate minima and a
bias in the initial mean field distribution.}
\label{pro:fig}
\end{figure}
In the upper panel of Figure 4, we present the
evolution of the surface energy of
the walls, as a function of the conformal time
$\eta$, for three different cases with the bias in the initial distribution corresponding to
$p_{-}=0.47$: \\
(i) The upper curve, (a), corresponds to the case where the effect
of the non-degeneracy of the minima,
parametrised by $\epsilon=-0.012$, is partially cancelled by
the bias in the initial field distribution. \\
(ii) The middle curve, (b), stands for the case of
degenerate minima. \\
(iii) The lower curve, (c), denotes the case with a
higher non-degeneracy of the minima, parametrised by
$\epsilon = -0.02$. As expected, this
choice leads to a rapid disappearance of the walls.
The middle picture shows the behaviour of the
surface energy of the network
for different initial distributions and a fixed value of the
non-degeneracy of the minima ($\epsilon = -0.012$). \\
(i) Curve (a) is the same as above, with a bias given by $p_{-}= 0.47$. \\
(ii) Curve (d) corresponds to a bias $p_{-} = 0.39$, that is
a probability to occupy the left (lower) vacuum equal to $0.39$.\\
(iii) In curves (b) and (c) the initial bias of the
distribution is given by $p_{-}=0.46$ and $p_{-}=0.44$ respectively.
In these cases the offset has been tuned in such a way that it compensates the effect of non-degeneracy.
One should note that the necessary tuning is of the order of a few percent,
enhancing the a priori probability of occupying the right vacuum by about 5\%.
The lower panel shows the evolution of the mean value of the
scalar field during the decay of the wall network, for the same choice of parameters
as in the middle panel. Curves (a), (b), (c), (d) correspond to
the respective ones in the middle panel.
The outcome is that we were able to realise a quasi-stable network
by combining two competing effects.
\section{Runaway potentials}
\subsection{Theoretical Motivation and Description of the Potential}
Runaway potentials arise commonly in theories of dynamical
supersymmetry breaking, hence their collective
dynamical properties deserve a careful study. For the
purpose of this paper we assume a potential of
the form
$$ V(s) = \frac{1}{2s} {\left( A(2s+N_1) e^{-\frac{s}{N_1}} -
B(2s+N_2) e^{-\frac{s}{N_2}}\right)}^2.$$
Its shape and first derivative are plotted in Figure
\ref{etykieta4b}.
\begin{figure}[!h]
\begin{center}
\epsfbox{roll_gr1.eps}
\end{center}
\caption{\it Shape and first derivative for the runaway potential.}
\label{etykieta4b}
\end{figure}
If we would like to identify the field $s$ with a stringy modulus, say a dilaton,
then we would have to define $ s = e^{\sqrt{2} \phi} $, since $\phi$ defined in such a way is
canonically normalised. This makes the potential above a doubly-exponential function of $\phi$.
However, in what follows we shall simply assume that the K\"ahler potential for $s$ is canonical;
this simplification does not introduce qualitatively new features in the simulations.
In general, one can see that the doubly-exponential steepness of the potential makes the evolution more
sensitive to the changes of the offset and the width of the distribution.
The degeneracy of this potential can be lifted by adding
a term of the form $\frac{\epsilon}{s^2}$.
The position of the extrema of the potential, and the way they are monitored
in our simulations are discussed in Appendix IV.
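As an illustration, a minimal sketch of this potential in C++ (structure and member names are ours), including the degeneracy-lifting term, is:
\begin{verbatim}
#include <cmath>

// Runaway potential of the gaugino-condensation type,
// V(s) = (1/2s) ( A (2s+N1) exp(-s/N1) - B (2s+N2) exp(-s/N2) )^2,
// plus an optional eps/s^2 term lifting the degeneracy of the minima.
struct RunawayPotential {
    double A, B, N1, N2, eps;
    double operator()(double s) const {
        double f = A * (2.0 * s + N1) * std::exp(-s / N1)
                 - B * (2.0 * s + N2) * std::exp(-s / N2);
        return f * f / (2.0 * s) + eps / (s * s);
    }
};
// Parameter set of Table 1: A = 1.0330, B = 1.1950,
// N1 = 10.0, N2 = 9.0 (and eps = 0 for the degenerate case).
\end{verbatim}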
We have studied runaway potentials for the parameter set of Table
\ref{ta1},
which corresponds to a weakly coupled vacuum. The expectation value of $s$
($\langle s \rangle \sim 10$) corresponds to the inverse square of a gauge
coupling in a supersymmetric Yang-Mills theory.
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
A & B & $N_1$ & $N_2$ & min & max & $w_0$ \\
\hline
1.0330 & 1.1950 & 10.0 & 9.0 & 10.075 & 21.729 & 3.567 \\
\hline
\end{tabular}
\caption{\it Runaway potential parameter set used in our simulations.}
\label{ta1}
\end{table}
In Table 1, min and max denote the minimum and maximum of the potential,
and $w_0$ gives the width of the (approximate) domain wall.
The initial conditions are controlled by the position of the
mean value of the initial distribution,
$\langle \phi \rangle = \phi_{max} + w_0\, \gamma$,
and by the initial width of distribution, $\sigma_\phi = w_0 \gamma'$,
where $\gamma, \, \gamma'$ are real numbers to be discussed later on.
\subsection{Numerical simulations for Runaway Potentials}
Runaway potentials are in principle more complicated to study than
the double well ones,
since they have more intrinsic instabilities.
This implies that several of the assumptions made for the simplest
potentials, have to be re-considered. This by itself is an interesting
problem and will allow us to understand the level of validity of the
results in potentials that are to be expected in theoretically motivated
models.
\underline{Modification of equation of motion}
The first step is to analyse the effect of the modification
of the equations of motion (by taking
$\alpha =3$ and $\beta=0$) in order to maintain a constant
comoving thickness for the walls, while
preserving the condition for conservation of the wall momentum.
The condition for momentum conservation is
$\beta = 6 - 2\alpha$.
To do so, we perform simulations for intermediate
values of $\alpha$ and $\beta$ to see how this
modification changes the evolution of the wall network.
The results are shown in Figure \ref{albe}.
\begin{figure}[!h]
\begin{center}
\includegraphics*[height=7cm]{alpha.eps}
\end{center}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& (a) & (b) & (c) & (d) \\
\hline \hline
$\alpha$ & 1.8 & 2.0 & 2.4 & 3.0 \\
\hline
$\beta$ & 2.4 & 2.0 & 1.2 & 0.0 \\
\hline
\end{tabular}
\caption{\it $\log_{10}(\rm{Vol(L)})$
as a function of $\eta - \eta_{start}$,
for four different combinations of $\alpha$ and $\beta$.}
\label{albe}
\end{figure}
In general, the larger $\beta$ is,
the slower is the rate of disappearance of the
false vacuum. When $\beta>1.2$, the bubbles of the vacua become stable.
This is because the effective potential barrier grows with time with respect to the
gradients, so that they cannot overcome the potential. When
$\beta < 1.2$, the pressure of the dominant vacuum takes over.
\underline{Study of the equation of state}
As discussed in previous sections,
we assume that the expansion of the universe is dominated by some smooth component,
filling the universe. Then, if we go smoothly between dust
($\alpha=0$) and radiation
($\alpha=1/3$), the parameter $\omega$ changes from 2 to 1.
We made simulations for various values of $\omega$, to see whether
this influences the evolution. The results are in Figure \ref{omega}.
\begin{figure}[!h]
\hspace*{-1.5cm}
\includegraphics*[height=5.5cm]{Ekin.eps}
\includegraphics*[height=5.5cm]{Ekinw.eps} \\
\hspace*{-1.0cm}
\includegraphics*[height=5.5cm]{surf.eps}
\includegraphics*[height=5.5cm]{vol.eps}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& (a) & (b) & (c) & (d) \\
\hline
$\omega$ & 0.8 & 1.0 & 1.6 & 2.0 \\
\hline
\end{tabular}
\caption{ \it
$\log_{10}(\rm{Vol(L)})$ versus $\eta - \eta_{start}$
for different values of $\omega$.}
\label{omega}
\end{figure}
All the measured observables change smoothly with $\omega$ and are not influenced
very much. In general, the faster the scale factor $a(\eta)$
grows, the slower the walls disappear.
\underline{Role of the horizon at the time of network creation}
If the scale factor behaves like
$a(\eta) \sim \eta^\omega$, the horizon
(inverse Hubble constant) grows with time as
$H^{-1} = \frac{\eta^{\omega+1}}{\omega} $.
Hence, different values of $\eta_{start}$ give different values of
the horizon at the point of the phase transition.
The results should be sensitive to that; to test how the evolution
of the field changes with the initial horizon, we performed
several simulations with $\eta_{start}$ varying between $10^{-4}$ and 10.
In almost all published simulations this parameter was taken
to be 1. The results are given in Figure \ref{etastart}.
\begin{figure}[!h]
\begin{center}
\includegraphics*[height=7cm]{eta.eps}
\end{center}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& (a) & (b) & (c) & (d) \\
\hline
$\eta_{start}$ & 10.0
& 5.0 & 0.8
& $10^{-4}$ \\
\hline
\end{tabular}
\caption{\it $\log_{10}(\rm{Vol(L)})$ versus $\eta-\eta_{start}$ for different
values of $\eta_{start}$.}
\label{etastart}
\end{figure}
Figure \ref{etastart} indicates the existence
of two competing effects:
(i) For very small values of $\eta_{start}$ the horizon is much
smaller than the wall width. Then, the friction term in the
equation of motion is large, and temporarily freezes the evolution
of the network. One can see that for $\eta_{start}= 0.8$ the
network is less stable than for
$\eta_{start}= 10^{-4}$ - in the first case the
wall width--to--horizon ratio is smaller than in the second and
walls evolve faster under the influence of the potential;
consequently the surface of walls decays faster.
(ii) However, for large values of $\eta_{start}$,
corresponding to curves (a) and (b),
many walls fit within the horizon
and interactions between walls, of the joining and
splitting type, become important and they tend to stabilise the network.
This is illustrated by the fact that the network
corresponding to $\eta_{start}=10$ is more stable than that corresponding to
$\eta_{start}= 5$, as there are more walls inside the horizon in the first case.
One should note that, at the initial stage of the
evolution, in the cases where the horizon
is large, there is a period when the domain wall surface grows with time.
\subsection{Nearly-scaling solutions for runaway potentials}
The dynamics of the walls is determined by several parameters:
the distance to the horizon at the time the evolution starts,
$H^{-1} = \eta^{\omega + 1} / \omega$; the width of the wall, $\Delta$;
the width of the initial distribution, $\sigma$; and the offset of the
initial distribution with respect to the maximum of the potential,
$\bar{\phi} = \phi_{mean} - \phi_{max}$.
Independently of the absolute positions of the extrema and of the absolute
height of the maximum of the potential, the relations between these parameters
determine the behaviour of the system.
\underline{Width of the walls}
The domain wall width is defined as
\[
w_0 = \frac{\rm width ~of ~barrier}{\sqrt{\rm height ~of ~barrier}}.
\]
In our simulations, the width of the barrier is numerically constant, and the width of
the walls is changed by changing the height of the barrier.
We have performed simulations, to check how the width of the walls influences their
evolution, for a range of $w_0$ between 0.05 and 20.0
lattice sites.
These indicate that, if the walls are thin,
their evolution is dominated by the potential
energy, and field gradients are almost unimportant (in each site of the
lattice the field evolves independently of the other sites,
so, effectively, the walls become frozen in). In this
case the overall behaviour of the field, as measured
by the time dependence of
the mean value, is rather classical.
The wider the walls, the milder the influence of the
potential; gradients contribute significantly to the dynamics, and
the evolution
of the mean value of the field becomes non-classical.
If the walls are wide, between 2 and 20 lattice sites,
the evolution is insensitive to the domain wall width.
\underline{Initial width of the distribution}
In the runs presented in this paper
the field has been initialised randomly, according to a gaussian distribution.
If the width of the distribution is large, then the
probability to create many walls is also larger.
Hence, if we initialise according to a wider
distribution, the network lasts longer; this means that, for a
wider distribution, it matters less where the
center of the distribution is (the biasing becomes less
important, since the field can climb over the barrier with a higher probability).
This is particularly significant for the run-away potential, which is asymmetric,
since the potential force (derivative) to the left of the barrier
is larger than the one to the right. Hence, the result is that, at
some stage, the false vacuum (the finite one) starts growing, because
the force towards the left vacuum
is somewhat larger.
\underline{Initial Mean Value of the Field}
We have performed simulations for various positions
of the center of the distribution for the runaway
potential. If the field starts to the left of the maximum,
even high above the barrier, then, very often,
the whole space finishes in the false vacuum. The
reason is the asymmetry of the force in this potential,
together with the damping due to the Hubble
expansion (and the fact that the friction
term is proportional to the time derivative
of the field, which may be large in such situations).
The most interesting effects are seen when the initial value of the
field is close to the maximum - then we get plenty
of walls which disappear rather slowly.
If we want to obtain a stable network, then we have
to start slightly to the right of the maximum,
again because of the asymmetry of the force. In these
cases the networks exhibit nearly-scaling behaviour.
\underline{Examples of nearly-scaling networks}
This behaviour is illustrated in Figure \ref{nstrun}.
No splitting term has been switched on in this case.
\begin{figure}[!h]
\hspace*{2.0cm}
\vspace*{0.5 cm}
\includegraphics*[height=6cm]{nearly_scal_surf.eps} \\
\hspace*{2.0cm}
\includegraphics*[height=6cm]{nearly_scal_mean.eps}
\caption{\it Wall network evolution in runaway potentials}
\label{nstrun}
\end{figure}
(i) Here, the curves (a) and (b) correspond to initial
distributions positioned at the top of the barrier
and different widths (1.5 domain wall
width for curve (a) and a single domain wall width for (b)).
In both cases the field evolves towards the left vacuum,
with the difference in the widths playing a minor role. \\
(ii) Curves (c) and (d) correspond to the initial mean value of
the distribution, shifted by one twentieth
of the wall width to the right of
the top of the potential and width
of the distribution equal
to 1.5 domain wall width for (c), and to
the domain wall width for (d).
In both cases one observes a scaling behaviour of the
network surface; however, in the case of a wider initial distribution,
the mean field behaves nonclassically and returns to
the left vacuum, so eventually the left vacuum prevails
over the run-away behaviour.
To summarize,
the asymmetry of the potential and its derivatives
with respect to its maximum makes the evolution of broad and biased
distributions non-classical.
In the limit of vanishing width of the distribution
the classical behaviour is recovered, which is a good check of the numerical routine.
A wide class of initial distributions leads to a
relatively short period of existence of domain-wall
networks, which rather quickly disappear,
leaving behind a system rolling towards infinity. This is a
version of the Steinhardt--Brustein effect.
However, a larger width of the distributions slows down the decay of
the islands of `finite' vacua. As in the case of the symmetric
double well potential, the less favoured vacuum assumes
the topology of compact clusters submerged
in the sea of the run-away vacuum. The formation
of a pseudo-infinite cluster requires a higher degree of
fine-tuning than in the case of the double well potential.
An important factor is the ratio of the distance to
the horizon and the domain wall width at the time when
the initial conditions are imposed; if this ratio is
truly small, the disappearance of the walls becomes slower.
This is more or less expected, as in this case the cosmic friction term
is able to compete efficiently with the potential force.
In the simulations shown in the figures we were assuming
the initial horizon to be $H^{-1}_i = 10^{-4}$, which
corresponds to walls much wider than the initial horizon.
The small ratio discussed above can be obtained by making
the phase transition occur shortly before the end of inflation,
as discussed earlier in this
paper.
\section{Conclusions}
The formation of large scale structure in the universe is at present one of the most
important areas where particle physics meets cosmology.
In particular, important contributions to structure formation
may come from phase transitions,
especially those where the mass of the order parameter is so small
that the characteristic scale
$1/m$ corresponds to the range of scales relevant for cosmological observations.
It is also possible that the mass of the order parameter lies in the electroweak range,
but the phase transition could be seen via its
indirect effects. This is the case for the transition associated to supersymmetry
breaking. In the present work, using numerical and analytical
methods, we studied the physics of the domain walls that
appear during such phase transitions.
In particular, we have investigated domain wall networks and their evolution, for two
types of potential, and two ways of modeling the initial conditions after the
phase transition.
One of the potentials is the well-known
double well potential, and the second one is the
exponential potential, characteristic of supersymmetry breaking
via gaugino condensation.
In both cases, we checked the evolution of domain walls as a
function of the parameters of the
potential, in particular (i) as a function of the non-degeneracy
of the available vacua, and also,
(ii) as a function of a difference of probabilities of filling these vacua. To this end,
we have constructed a C++ computing code, which we used to confirm earlier results
and to extend our considerations to the new cases.
The program has been optimised, and we have found a
theoretical estimate for the accuracy of the
integration procedure. The latter is constantly
monitored in order to enable the
use of an adaptive time step that greatly increases the speed of the
code while maintaining high sensitivity.
Moreover, we investigated the role of the modifications of the equation of
motion used to model the evolution on the grid.
The simulations show compensation effects between the non-degeneracy of
the vacua and the asymmetry of the
probability distribution: these competing effects
may cancel each other, resulting in the creation of slowly disappearing
metastable domain wall networks, in very general and physically interesting situations.
Extensions to other types of potentials, as
well as a detailed study of structure formation
within this framework shall be addressed in a separate publication.
\vspace*{0.4 cm}
{\bf Appendix I: Discretisation of equation of motion}
In order to treat the equations of motion of the scalar field and the
domain wall network numerically, we divide the universe into balls of radius
$\ell$ much larger than $H^{-1}$,
thus covering many Hubble horizons. We then simulate
the evolution of the network
in a single ball of radius $L$. A lattice site at the beginning of the simulation
corresponds to a single Hubble horizon. Moreover, we introduce
multi-torus topology of the grid (namely periodic boundary conditions).
To discretise the relevant equations, we
use the ``staggered leapfrog'' method
for the second order time derivatives, and the
Crank--Nicolson scheme for space-like derivatives.
This means that we have second order accuracy in differentials
with respect to time and space.
The discretised equations are as follows:
$$
\delta = \frac{1}{2} \alpha \frac{\Delta\eta}{\eta}
\frac{d\ln{a}}{d\ln{\eta}},
$$
$$
(\nabla^2 \phi)_{i,j,k} \equiv \phi_{i-1,j,k} + \phi_{i+1,j,k}
+ \phi_{i,j-1,k} + \phi_{i,j+1,k} + \phi_{i,j,k-1} +
\phi_{i,j,k+1} - 6\phi_{i,j,k},
$$
\begin{eqnarray}
\label{r_roznicowe}
\dot{\phi}^{n+\frac{1}{2}}_{i,j,k} =
\frac{(1-\delta)\dot{\phi}^{n-\frac{1}{2}}_{i,j,k}
+ \Delta\eta \left( \nabla^2 \phi^{n}_{i,j,k} -
a^{\beta}\frac{\partial V}{\partial \phi^{n}_{i,j,k}}
\right)}{1+\delta},
\end{eqnarray}
$$
\phi^{n+1}_{i,j,k} = \phi^{n}_{i,j,k}
+ \Delta \eta \dot{\phi}^{n+\frac{1}{2}}_{i,j,k}
$$
where $\eta = \eta_0 + n\Delta \eta$ (in the above,
upper indices denote time steps and lower ones
coordinates x,y and z).
For clarity
\begin{eqnarray} \nonumber
\phi^n_{i,j,k} \equiv \phi \left(\eta', x',y', z'\right), \\ \nonumber
\dot{\phi}^{n+\frac{1}{2}}_{i,j,k} \equiv
\frac{\partial \phi}{\partial \eta}\left(\eta'', x', y', z' \right), \\ \nonumber
\frac{\partial V}{\partial \phi^n_{i,j,k}} \equiv
\frac{\partial V}{\partial \phi} \left(\phi^n_{i,j,k}\right),
\end{eqnarray}
\begin{eqnarray} \nonumber
x' = x_0 + i,\\ \nonumber
y' = y_0 + j,\\ \nonumber
z' = z_0 + k,\\ \nonumber
\eta' = \eta_0 + n\Delta\eta,\\ \nonumber
\eta'' = \eta_0 + (n+\frac{1}{2})\Delta\eta. \nonumber
\end{eqnarray}
Here, $\dot{\phi} \equiv \frac{\partial
\phi}{\partial \eta}$, and, $x_0 = y_0 = z_0 = 0$.
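For illustration, a minimal C++ sketch of a single step of this scheme (all names are ours; this is a simplified sketch rather than an excerpt of our production code) is the following:
\begin{verbatim}
#include <cstddef>
#include <vector>

// One staggered-leapfrog step of the discretised equation above, on
// an Nx*Ny*Nz grid with periodic boundaries (multi-torus topology).
// dVdphi is the derivative of the scalar potential; a_beta = a^beta;
// dlna_dlneta = d ln a / d ln eta.
void leapfrog_step(std::vector<double> &phi, std::vector<double> &phidot,
                   int Nx, int Ny, int Nz, double eta, double dEta,
                   double alpha, double dlna_dlneta, double a_beta,
                   double (*dVdphi)(double)) {
    auto idx = [&](int i, int j, int k) {
        i = (i + Nx) % Nx; j = (j + Ny) % Ny; k = (k + Nz) % Nz;
        return (std::size_t(i) * Ny + j) * Nz + k;
    };
    const double delta = 0.5 * alpha * (dEta / eta) * dlna_dlneta;
    std::vector<double> lap(phi.size());
    for (int i = 0; i < Nx; ++i)      // 7-point lattice laplacian
      for (int j = 0; j < Ny; ++j)
        for (int k = 0; k < Nz; ++k)
          lap[idx(i,j,k)] = phi[idx(i-1,j,k)] + phi[idx(i+1,j,k)]
                          + phi[idx(i,j-1,k)] + phi[idx(i,j+1,k)]
                          + phi[idx(i,j,k-1)] + phi[idx(i,j,k+1)]
                          - 6.0 * phi[idx(i,j,k)];
    for (std::size_t n = 0; n < phi.size(); ++n) {
        // phidot^{n+1/2} from phidot^{n-1/2}, then phi^{n+1} from phi^n
        phidot[n] = ((1.0 - delta) * phidot[n]
                     + dEta * (lap[n] - a_beta * dVdphi(phi[n])))
                    / (1.0 + delta);
        phi[n] += dEta * phidot[n];
    }
}
\end{verbatim}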
For the mean value and the dispersion of the field, we have the following equations:
${VOL}= {N_x \cdot N_y\cdot N_z}$,
$$ \langle \phi \rangle=\sum_{i,j,k} \frac{\phi(i,j,k)}{N_x N_y N_z}
= \frac{\sum \phi}{{VOL}}.
$$
$$
\sigma_{\phi}^{2} = \langle (\phi - \langle\phi\rangle)^{2}\rangle =
\langle\phi^2\rangle - \langle\phi\rangle^2.
$$
$$
\frac{\sigma_{\phi}}{\langle\phi\rangle} = \frac{\sqrt{\frac{\sum
\phi^2}{VOL}-\langle\phi\rangle^2}}{{\langle\phi\rangle}}, ~~
\langle\phi^2\rangle = \frac{\sum \phi(i,j,k)^2}{VOL}.$$
The kinetic energy is given by
$$
E_{kin} = \frac{\sum \dot{\phi}^2}{VOL}.
$$
In all simulations we assume that the field was initially
at rest ($\dot{\phi} =0$), while, for the surface energy we have:
\begin{eqnarray}
A= \int\vec{n} \cdot \vec{dA} = \Delta A \sum_{links}
\frac{\delta^{\pm}}{|\cos{\theta_x}| + |\cos{\theta_y}| + |\cos{\theta_z}|} = \\
= \Delta A \sum_{links} \frac{\delta^{\pm}}{|n_x| + |n_y| + |n_z|} = \\
= \Delta A \sum_{links} \frac{|\nabla\phi|}
{|\frac{\partial \phi}{\partial x}| + |\frac{\partial \phi}{\partial y}|
+ |\frac{\partial \phi}{\partial z}|}\; ,
\end{eqnarray}
\begin{equation}
\delta^{\pm} = \left\{
\begin{array}{ll}
1, & \hbox{if a link crosses a wall}, \\
0, & \hbox{if it does not .} \\
\end{array} \right.
\end{equation}
$ E_{\rm{surf}} = \sigma A $, where $\sigma$ is the tension and $A$ the surface.
In what follows, we take $\sigma=1$ and
$$ E_{\rm{surf}} = \frac{A}{VOL}. $$
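A sketch of the corresponding surface estimator is given below; for simplicity it evaluates the lattice gradient by forward differences at each site, which is one of several possible choices:
\begin{verbatim}
#include <cmath>
#include <cstddef>
#include <vector>

// Wall surface estimator: every link joining different vacua (a sign
// change of phi, for the double well) contributes
// |grad phi| / (|dphi/dx| + |dphi/dy| + |dphi/dz|) to the area,
// in units of the elementary cell area (Delta A = 1).
double wall_area(const std::vector<double> &phi, int Nx, int Ny, int Nz) {
    auto idx = [&](int i, int j, int k) {
        i = (i + Nx) % Nx; j = (j + Ny) % Ny; k = (k + Nz) % Nz;
        return (std::size_t(i) * Ny + j) * Nz + k;
    };
    double area = 0.0;
    for (int i = 0; i < Nx; ++i)
      for (int j = 0; j < Ny; ++j)
        for (int k = 0; k < Nz; ++k) {
          double gx = phi[idx(i+1,j,k)] - phi[idx(i,j,k)];
          double gy = phi[idx(i,j+1,k)] - phi[idx(i,j,k)];
          double gz = phi[idx(i,j,k+1)] - phi[idx(i,j,k)];
          double denom = std::fabs(gx) + std::fabs(gy) + std::fabs(gz);
          if (denom == 0.0) continue;
          double w = std::sqrt(gx*gx + gy*gy + gz*gz) / denom;
          // delta^{+-} = 1 for each link that crosses a wall
          if (phi[idx(i,j,k)] * phi[idx(i+1,j,k)] < 0.0) area += w;
          if (phi[idx(i,j,k)] * phi[idx(i,j+1,k)] < 0.0) area += w;
          if (phi[idx(i,j,k)] * phi[idx(i,j,k+1)] < 0.0) area += w;
        }
    return area;
}
\end{verbatim}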
The kinetic energy of the walls is given by
$$
E_{\rm{kw}} = \sum_{links} \frac{1}{2} \left[
\dot{\phi}({\overrightarrow{x}})^2 + \dot{\phi}({\overrightarrow{x}
+ \overrightarrow{n}})^2 \right].
$$
(with $\dot{\phi}$ computed at the position of the wall).
To calculate the volume of each vacuum,
we normalise to the total volume, namely we take the
number of left-vacuum and right-vacuum sites over the total
number of sites.
In our code, instead of looking for the
extrema of the potential analytically,
we use numerical methods which are simpler (particularly for
runaway potentials):
$$ \frac{dV}{d\phi} \left(\phi \right) = \frac{V(\phi + \epsilon) -
V(\phi - \epsilon)}{2 \epsilon},
$$ and we take $\epsilon = 10^{-4} $. The accuracy of this central difference is
of order $\epsilon^2 \frac{d^{3}V}{d\phi^{3}}$.
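A minimal sketch of this monitoring step, combining the central difference with a simple bisection on $dV/d\phi$ (names are ours), is:
\begin{verbatim}
// Central-difference derivative used to monitor the extrema of the
// potential, with eps = 1e-4 as in the text.
template <class Potential>
double dV(const Potential &V, double phi, double eps = 1e-4) {
    return (V(phi + eps) - V(phi - eps)) / (2.0 * eps);
}

// Locate an extremum (a zero of dV/dphi) by bisection on [a, b],
// assuming dV/dphi changes sign across the bracket.
template <class Potential>
double extremum(const Potential &V, double a, double b) {
    for (int i = 0; i < 100; ++i) {
        double m = 0.5 * (a + b);
        if (dV(V, a) * dV(V, m) <= 0.0) b = m; else a = m;
    }
    return 0.5 * (a + b);
}
\end{verbatim}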
\vspace*{0.4 cm}
{\bf Appendix II:
Technical discussion about optimisation of size of time step}
The basic parameter that determines the accuracy of the simulation is the
time step, which must be small, so that the discretised equation represents well
the continuous one. However, the time step cannot be too small either, because there are many
steps in the integration and numerical errors accumulate (each step introduces an
error).
We have seen that the field changes rapidly at the beginning of the simulation,
which is due to the random, non-equilibrium initialisation and, somewhat later, to
the fact that domain walls get created rapidly and then interact very often (since there
are many of them). After some time, the field changes at a slower rate, and its configuration
becomes more regular and more stable. Consequently, we should
change at some point the time step of the simulation
(smaller one at the beginning, when the evolution is rapid,
and larger, at later times).
The change of the field depends on the time step, and on the value of the time
derivative in the next integration step
$$
\Delta \phi = \Delta \eta \dot{\phi}.
$$
Now, let us look at the evolution of the time derivative.
The change of the time derivative over the time step $\Delta \eta$ is
$$
\Delta \dot{\phi} \equiv \dot{\phi} -
\frac{(1-\delta)\dot{\phi} + \Delta\eta \left( \nabla^2 \phi -
a^\beta \frac{\partial V}{\partial \phi} \right)}{1+\delta}, \\
$$
which gives
$$
\Delta \dot{\phi} = \frac{1}{{1+\delta}} \left[
-2\delta \dot{\phi} + \Delta\eta\ \left( \nabla^2 \phi - a^{\beta}V' \right) \right]. \\
$$
The $\delta$ is given by
$$
\delta = \frac{1}{2} \alpha \frac{\Delta\eta}{\eta} \frac{d\ln{a}}{d\ln{\eta}}.\\
$$
Since $\alpha \sim 2$, and
$\frac{d\ln{a}}{d\ln{\eta}}\sim 1,$
$$
\delta \sim \frac{\Delta\eta}{\eta},
$$
so
$$
\frac{1}{1+\delta} \sim 1 - \frac{\Delta\eta}{\eta}.
$$
Substituting, we get
$$
\Delta \dot{\phi} \sim \Delta \eta \left[
-2\frac{\dot{\phi}}{\eta} + \nabla^2 \phi - a^{\beta} V'\right]
\left( 1- \frac{\Delta\eta}{\eta} \right).
$$
The expression in the square bracket can be estimated from above, by its largest
value, taken anywhere in the lattice.
\begin{equation}
|\Delta \dot{\phi}| \le \Delta \eta \left(
\frac{2|\dot{\phi}|_{max}}{\eta} + |\nabla^2 \phi|_{max} + |a^{\beta} V'|_{max}
\right).
\end{equation}
Now, we can follow several strategies;
(i) demand that the change to the field is as small as possible
$$
\frac{\Delta \phi}{\phi} \ll 1,
$$
(ii) demand that the average change of the field
on the whole lattice should be smaller than 1
$$
\left< \frac{\Delta \phi}{\phi} \right> \ll 1,
$$
(iii) request that the maximal change of the field with
respect to the wall width is much smaller than 1
$$
\frac{\Delta \phi}{w_0} \ll 1.
$$
In our simulation, we have followed the last path, thus,
$$
\frac{\Delta \phi}{w_0} \le \kappa
$$
where $\kappa$
is the requested accuracy. From this inequality, it turns out that
$$
\Delta \eta = \sqrt{ \frac{\kappa
w_0}{\frac{2|\dot{\phi}|_{max}}{\eta} + |\nabla^2 \phi|_{max} +
|a^{\beta} V'|_{max}}}.
$$
This is the estimated optimal time step.
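In code, the adaptive step then reduces to a single expression, with the three lattice maxima recomputed at every step (a sketch with our own argument names):
\begin{verbatim}
#include <cmath>

// Adaptive time step: chosen so that the maximal change of the field
// per step is at most kappa * w0. The last three arguments are maxima
// taken over the whole lattice at the current time.
double adaptive_step(double kappa, double w0, double eta,
                     double max_phidot, double max_lap,
                     double max_force) {
    double bound = 2.0 * max_phidot / eta + max_lap + max_force;
    return std::sqrt(kappa * w0 / bound);
}
\end{verbatim}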
To fix the optimal accuracy, we have performed
several simulations, looking for the change of the
results with respect to changes of $\kappa$.
We used a 2d lattice with a size of 248 $\times$ 248, for
$w_0 = 356$. We also made a simulation on a larger lattice,
3072 $\times$ 3072, to understand the dependence
on the lattice size. The results appear in Table 2 below.
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$\kappa$ & $10^{-1}$ & $10^{-2}$ & $10^{-3}$ & $10^{-4}$ & $10^{-5}$& $10^{-6}$ \\
\hline
$({\Delta \eta})_{min}$ & $1.85\cdot 10^{-1}$ & $5.89\cdot 10^{-2}$ & $1.79\cdot 10^{-2}$ & $5.72\cdot 10^{-3}$ & $1.85\cdot 10^{-3}$ & $5.87\cdot 10^{-4}$ \\
$({\Delta \eta})_{max}$ & $6.16\cdot 10^{-1}$& $2.03\cdot 10^{-1}$ & $6.41\cdot 10^{-2}$ & $2.02\cdot 10^{-2}$ & $6.45\cdot 10^{-3}$ & $2.09\cdot 10^{-3}$ \\
\hline
\end{tabular}
\label{tabb2}
\caption{\it
$({\Delta \eta})_{min}$ and $({\Delta \eta})_{max}$ versus $\kappa$.}
\end{table}
It turns out that the kinetic energy,
the mean value and the dispersion
of the field are not sensitive to the time-step; however, the surface energy,
the surface of the walls and the volume of the different vacua are much more dependent on it
(in fact, the most sensitive statistical observable is the ratio of the surface
of the walls over the volume of the subdominant vacuum - namely the
rate of change of the volume of the walls).
In the simulations described so far in the literature,
there is no analysis of the accuracy of the
results, and the time step is usually not given. Instead,
large series of simulations are performed and the results are
averaged. However, it may be that the numerical noise is being
averaged as well. The new element arising from our simulation is
that, with an appropriately chosen time step, the plots of the parameters
are similar for different runs, which makes them more reliable.
In our procedure, we
first choose the optimal time step for a given accuracy $\kappa$,
and then make a small series of simulations (which, among other
benefits, requires less computing power).
The size of the lattice we use is large, about 4 million horizons; thus,
with the optimal time step, the observables are computed very precisely.
The point in time when the graphs stop being smooth (develop discontinuities)
is interpreted as the point where the accuracy is lost.
From then onwards, the predictions for the
observables can be treated only qualitatively.
After careful analysis, we have fixed $\kappa$ to be $10^{-4}$,
which gives a time step in the range 0.005 to 0.02.
\vspace*{0.4 cm}
{\bf Appendix III: Size of the Lattice}
Representing the field on a large lattice requires a lot of memory.
A single simulation in 2D on a lattice of 2048 $\times$
2048 with $\kappa = 10^{-4}$
requires more than 10 hours of computing time, to reach $\eta_{stop} \sim 100$.
The resolution $\delta$ (precision) that we previously discussed
is the smallest visible relative change of the field
statistics; we estimated it by
comparing the plots of the surface energy of
the walls to those of the simulation performed
on the largest available lattice.
The results are given in Table 3, for 2d and 3d cases.
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n_x$ & 786 & 1024 & 1536 & 2048 & 3072 & 4096 \\
\hline
$\delta$ & $10^{-3}$ & $2\cdot 10^{-4}$ & $10^{-4}$ & $5\cdot 10^{-5}$ & $1\cdot 10^{-5}$& ${2\cdot 10^{-6}}$ \\
\hline
\end{tabular}
\vspace*{0.5 cm}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n_x$ & 80 & 100
& 128 & 160 & 200 & 256 \\
\hline
$\delta$ & $7\cdot 10^{-5}$ & $4\cdot 10^{-5}$
& $10^{-5}$ & $5\cdot 10^{-5}$ & $2.5\cdot 10^{-6}$& ${6.2\cdot 10^{-7}}$ \\
\hline
\end{tabular}
\caption{\it Resolution versus lattice size in 2d (upper panel) and 3d (lower panel) simulations.}
\label{tabb3}
\end{table}
We have found that the logarithm of the resolution $\delta$
decreases steadily with the
lattice size, and the best fit for 2d is given by
$$
\log_{10}{\delta} = -3.237 - \frac{{n_x}^{3/2}}{1526},
~~\delta \sim 10^{-3.237} \cdot 10^{-\frac{{n_x}^{3/2}}{1526}}.
$$
For 3d the logarithm of the resolution
is linear in $n_x$:
$$
\log_{10}{\delta} = -3 - \frac{n_x}{1526},
~~
\delta \sim 10^{-3} \cdot 10^{-\frac{n_x}{1526}}.
$$
In our simulation, we need these formulas to judge when the number
of domain walls is too small to trust the numerical results.
Technically, because
we are using periodic boundary conditions, after a finite time
(of the order of the lattice size divided by the velocity of the wall),
a wall which leaves the
horizon could return, since it re-enters the lattice
from the other side.
This sets the
limit of the simulations. Looking at the simulations, we see that the average
velocity of the walls is 0.5, which means that the return time is
approximately equal to twice
the lattice size. This is the absolute upper limit on the useful range of
$\eta$ for a large lattice with periodic boundary conditions.
We have verified that for the lattices we have used, the role of the periodic
boundary conditions is negligible.
\vspace*{0.4 cm}
{\bf Appendix IV: Monitoring of the position of the Minima of different
Potentials}
The position of the extrema of the potential is monitored both analytically
and numerically.
\underline{Double Well Potential}
Consider the tilted potential
$$V(\phi) \rightarrow V(\phi) - \epsilon V_0 \phi$$
with
$$ V(\phi) = V_0 \left( \left(\frac{\phi}{\phi_0}\right)^2 -1\right)^2. $$
Expanded in powers of $\phi$, the (untilted) potential is exactly
$$
V(\phi) = V_0 - \frac{2V_0}{{\phi_0}^{2}}\,\phi^2 +
\frac{V_0}{{\phi_0}^{4}}\,\phi^4 ,
$$
and, for a small non-degeneracy parameter $\epsilon$ we can easily find
the position for the extrema of the potential:
(maximum)
$$ \phi \ = \ - \frac{1}{4{\phi_0}^2}\ \epsilon - \frac{1}{64{\phi_0}^8}\ \epsilon^3
\ - \frac{3}{1024{\phi_0}^{14}}\ \epsilon^5 \ - \ \frac{3}{4096{\phi_0}^{20}}\ \epsilon^7 \ldots $$
(minimum)
\begin{eqnarray}
\nonumber \phi = -\phi_0 + \frac{1}{8{\phi_0}^2}\ \epsilon -
\frac{1}{128{\phi_0}^5}\ \epsilon^2
+ \frac{1}{128{\phi_0}^8}\ \epsilon^3 - \frac{105}{32768{\phi_0}^{11}}\ \epsilon^4 + \\ \nonumber
+ \frac{3}{2048{\phi_0}^{14}}\ \epsilon^5 - \frac{3003}{4194304{\phi_0}^{17}}\ \epsilon^6
+ \frac{3}{8192{\phi_0}^{20}}\ \epsilon^7 \ldots, \\ \nonumber\\
\nonumber \phi = \phi_0 + \frac{1}{8{\phi_0}^2}\ \epsilon +
\frac{1}{128{\phi_0}^5}\ \epsilon^2
+ \frac{1}{128{\phi_0}^8}\ \epsilon^3 + \frac{105}{32768{\phi_0}^{11}}\ \epsilon^4 + \\ \nonumber
+ \frac{3}{2048{\phi_0}^{14}}\ \epsilon^5 + \frac{3003}{4194304{\phi_0}^{17}}\ \epsilon^6
+ \frac{3}{8192{\phi_0}^{20}}\ \epsilon^7 \ldots.
\end{eqnarray}
The important quantity is the position of the maximum, because we use it as
the border between the left and right vacua. One can always find the position
of the extrema numerically via the
Newton--Raphson method, $x_{i+1} = x_i
-\frac{f(x_i)}{f'(x_i)}$, applied to $f = V'$.
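For instance, a minimal Python sketch of this iteration for the tilted double
well (in units where $V_0$ drops out of the ratio $V'/V''$) could read as
follows; the perturbative series above serve as a cross-check.
\begin{verbatim}
def find_extremum(x0, eps, phi0=1.0, tol=1e-12, max_iter=50):
    """Newton-Raphson root of V'(phi) for
    V = V0*((phi/phi0)**2 - 1)**2 - eps*V0*phi."""
    x = x0
    for _ in range(max_iter):
        f  = 4.0 * x * (x**2 / phi0**2 - 1.0) / phi0**2 - eps  # V'/V0
        fp = 4.0 * (3.0 * x**2 / phi0**2 - 1.0) / phi0**2      # V''/V0
        step = f / fp
        x -= step
        if abs(step) < tol:
            break
    return x

# shifted maximum near phi = 0, minima near phi = -phi0 and +phi0:
# find_extremum(0.0, 1e-3), find_extremum(-1.0, 1e-3), find_extremum(1.0, 1e-3)
\end{verbatim}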
\underline{Runaway Potential}
For
$$ V(s) = \frac{1}{2s} {\left( A(2s+N_1) e^{-\frac{s}{N_1}} - B(2s+N_2) e^{-\frac{s}{N_2}}\right)}^2,$$
(where $s$
is canonically normalised), we find the extrema as follows:
\begin{eqnarray}
\nonumber
\frac{dV}{ds} =
- \frac{1}{2 N_1 N_2 s^2} e^{-2 \frac{s(N_1 + N_2)}{N_1 N_2}}
\left[ A(2s+N_1) e^{\frac{s}{N_2}} - B(2s+N_2) e^{\frac{s}{N_1}} \right] \cdot \\ \nonumber
\left[ A N_2 (N_1^2 + 4s^2) e^{\frac{s}{N_2}} - B N_1 (N_2^2 + 4s^2) e^{\frac{s}{N_1}}
\right].
\end{eqnarray}
Here, we have two brackets, and either one or the other vanishes. This gives
the following two conditions:
$$ e^{s \frac{N_1 - N_2}{N_1 N_2}} = \frac{B(N_2 + 2s)}{A(N_1 + 2s)}, $$
$$ e^{s \frac{N_1 - N_2}{N_1 N_2}} = \frac{B N_1 (N_2^2 + 4s^2)}{A N_2 (N_1^2 + 4s^2)}.$$
These two conditions are non-linear and cannot be solved algebraically, but
they can be solved iteratively, step by step:
$$ s_{(i+1)} = \frac{N_1 N_2}{ N_1 - N_2} \ln
\left[ \frac{B(N_2 + 2s_{(i)})}{A(N_1 + 2s_{(i)})} \right],$$
$$ s_{(i+1)} = \frac{N_1 N_2}{ N_1 - N_2}
\ln \left[ \frac{B N_1 (N_2^2 + 4s^2_{(i)})}{A N_2 (N_1^2 + 4s^2_{(i)})} \right],$$
$$s_{(0)} = \frac{N_1 N_2}{ N_1 - N_2}.$$
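A hypothetical Python sketch of the first of these iterations (the second is
analogous) might look as follows; convergence assumes the map is a contraction
for the chosen $A$, $B$, $N_1$, $N_2$, which are illustrative inputs here.
\begin{verbatim}
import numpy as np

def extremum_runaway(A, B, N1, N2, tol=1e-12, max_iter=200):
    """Fixed-point iteration s -> N1*N2/(N1-N2) *
    ln[B*(N2+2s)/(A*(N1+2s))], started at s0 = N1*N2/(N1-N2)."""
    s = N1 * N2 / (N1 - N2)
    for _ in range(max_iter):
        s_new = N1 * N2 / (N1 - N2) * np.log(B * (N2 + 2*s) / (A * (N1 + 2*s)))
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s
\end{verbatim}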
To remove the degeneracy of the vacua, we add a term of the form
$\sim \frac{\epsilon}{s^2}$.
\vspace*{0.4 cm}
{\bf Acknowledgements}
This work was partially supported by the EC 6th Framework
Programmes MRTN-CT-2006-035863 and MRTN-CT-2004-503369,
the grant MNiSW N202 176 31/3844 and the TOK Project
MTKD-CT-2005-029466. The research of S. Lola is co-funded by
the FP6 Marie Curie Excellence Grant MEXT-CT-2004-014297.
Z. Lalak and S. Lola would like to thank
the Theory Division of CERN for kind hospitality during a
large part of this work.
\section{Introduction}
Water megamasers are extremely luminous sources of 22 GHz radiation generated by the amplification of microwave signals through stimulated emission \citep[for a review, see][]{lo2005mega}. They can be found within a few parsecs of active galactic nuclei (AGN) and may be used to probe the kinematics of this inner region \citep[e.g.][]{greenhill1996vlbi, trotter1998water, peck2003flaring}. Some special disk systems, such as NGC 4258, have masers which trace the ridge-line of an edge-on disk, allowing for precise measurements of the disk dynamics \citep{herrnstein1999geometric}. Measurements of the acceleration of systemic features provide an independent evaluation of H$_0$ \citep[e.g.][]{kuo2013megamaser, kuo2015megamaser, reid2013megamaser, gao2016megamaser, pesce2020megamaser}. The rotation axis of the maser disk is also observed to align with the jet axis when jets are detected, suggesting the maser disk can be used to understand the geometry of the accretion disk \citep[e.g.][]{greene2013using, kamali2019accretion}.
In addition, the Keplerian rotation of the sub-parsec scale accretion disk traces the black hole (BH) mass with high precision and accuracy \citep[e.g.][]{miyoshi1995evidence}. Currently, there are only $\sim100$ BHs with masses determined by dynamical tracers such as masers, stars, or gas including both active and non-active galaxies \citep[e.g.][]{kormendy2013coevolution, mcconnell2013revisiting, saglia2016sinfoni}, as this method requires that the BH sphere of influence be spatially resolved. Therefore, we are limited to objects within $\sim 100$ Mpc for dynamical estimates, with current adaptive optics, except for the most massive BHs. No other BH has a mass measured with the same precision as that in the Galactic Center, but the masers offer the most precise measurement after that \citep[e.g.][]{maoz1998dynamical, kuo2010megamaser}. Beyond the available dynamical masses, all other BH masses have been estimated using only indirect tracers, often involving accretion \citep[e.g.][]{shen2013mass}. We will use the high accuracy and precision of the maser masses to test the fidelity of other methods of BH mass measurements, specifically single-epoch scaling relations in AGN.
In the absence of available maser measurements, the most accurate method for determining BH masses using emission from AGN is reverberation mapping (RM). This approach uses broad line region (BLR) gas that is not spatially resolved as a dynamical tracer to measure velocity from line widths. While it is not possible to resolve the gravitational sphere of influence in most of these AGN, it is possible to determine a size scale for the broad line region by measuring the time lag between variations in the AGN continuum and those in the emission lines of the BLR \citep[e.g.][]{blandford1982reverberation, peterson1993reverberation}. This delay provides an estimate of the light-travel time across the BLR, from which the average radius of the BLR can be determined. The BLR radius (R), in combination with the gas velocity measured from the width of the broad emission lines (W) and the gravitational constant (G), can then be used to calculate a virial product (Equation \ref{eq:vp}).
\begin{equation}
\label{eq:vp}
\textrm{virial product} = \frac{W^2 R}{G}
\end{equation}
Under the assumption that the BH dominates the gravity in this region, the virial product is expected to be correlated with the mass of the BH, but there is a virial pre-factor, $f$, required to account for the geometry and dynamics of the disk. The pre-factor is defined such that $f$ multiplied by the virial product gives the virial BH mass (Equation \ref{eq:mass}).
\begin{equation}
\label{eq:mass}
M_{\textrm{BH}} = f \frac{W^2 R}{G}
\end{equation}
In general, we do not know the shape or kinematics of the BLR, whether it is flattened or round, or whether the gas is inflowing, outflowing or neither. In a growing number of cases, this has been modeled \citep[e.g.][see more discussion below]{pancoast2014modelling, grier2017structure, williams2018lick, williams2020space, bentz2021detailed, Villafa_a_2022} but in general, an average value of $f$ has been used.
As well as providing an estimate of the BH mass, RM has been used to calibrate a relationship between the luminosity of the AGN and the BLR size \citep[e.g.][]{bentz2013low}. With this relationship, the BH masses of distant AGN can be estimated through the virial product with only luminosity and a measured line width \citep{vestergaard2002determining}, known as the single-epoch method. Most inferences about the cosmic evolution of BH mass density, and potential evolution in BH-galaxy scaling relations, have relied on these single-epoch virial masses \citep[e.g.][]{laor1998quasar, wandel1999central, mclure2002measuring, vestergaard2006determining, kelly2013demographics, volonteri2016inferences, pensabene2018alma}. Therefore, it is crucial to determine whether the virial product provides an accurate value of BH mass.
The virial estimate relies on multiple assumptions about the structure of the BLR. Uncertainties in calibrations alone lead to estimates of 0.3-0.5 dex scatter \citep{vestergaard2006determining, shen2013mass}. There are also as-yet unquantified systematic errors that are still a large cause for concern. Primarily, the virial estimate assumes that broad emission lines are virialized \citep[e.g.][]{peterson1999keplerian, peterson2000evidence, onken2002mass, kollatschny2003accretion, bentz2010lick}. Additionally, the gas velocity of the BLR is assumed to be at the radius measured by reverberation mapping, but this is not necessarily the case \citep{krolik2001systematic}. Although gravity is assumed to dominate in the BLR, there is an unknown contribution from radiation pressure \citep[e.g.][]{krolik2001systematic, marconi2008effect, marconi2009observed, netzer2010effect}. This could introduce additional scatter, as well as a luminosity dependence \citep{shen2013mass}, which may also be generated from BLR breathing modes \citep{wang2020sloan}.
The determination of an accurate black hole mass also depends on the choice of $f$, which in turn must be calculated through an independent method. Historically, $f$ has been calibrated by aligning the M-$\sigma_*$ relation of reverberation mapped AGN with that of quiescent galaxies \citep[e.g.][]{onken2004supermassive, collin2006systematic, woo2010lick, graham2011expanded, grier2013stellar, batiste2017recalibration}. Although $f$ is often taken to be a constant, it has been found to vary significantly between objects \citep{yu2019calibration}. An independent method of determining $f$ by modeling the BLR has also found variation in the virial pre-factor \citep[e.g.][]{pancoast2014modelling, grier2017structure, williams2018lick}. This modeling is intensive, and requires densely sampled light curves, so it has only been done for 27 objects so far \citep{Villafa_a_2022}. More comparisons are needed.
Evaluating the accuracy of the virial product as a probe of BH mass requires a comparison to a known, dynamical mass. In general, this is very challenging since luminous AGN are needed for RM, but severely complicate stellar or gas dynamical measurements, as their light swamps that of the stars. Although there are a few objects with both measurements \citep[e.g.][]{davies2006star, onken2007black,den2015measuring}, building up this sample will be slow. We instead use the larger sample of objects with maser dynamical masses. However, the accretion disk is at very high inclination to enable masing, so the BLRs of maser galaxies are all obscured. Therefore, we must use spectropolarimetric measurements to probe the BLR and measure the hidden broad lines so that we may calculate a virial product. Similar work has been done previously for smaller samples of megamaser galaxies \citep{kuo2010megamaser, du2017hidden}. By expanding this sample through additional spectropolarimetric measurements of maser galaxies, we sought to test the hypothesis that a direct correlation exists between virial product and BH mass.
In Section \ref{sec:methods} we describe our spectropolarimetric observations of megamaser galaxies and subsequent data reduction. We present our measurements of broad line widths, along with additional values from the literature, in Section \ref{sec:results}. BH masses determined through the virial product or through RM modeling are given in Section \ref{sec:BHmass}. Section \ref{sec:disc} includes discussion of the virial pre-factor, and implications for the virial mass and BLR structure. We summarize our results in Section \ref{sec:summary} and discuss possible future work.
\section{Methods}
\label{sec:methods}
\subsection{Sample}
We began with the sample of megamaser galaxies with known, dynamical BH masses listed in \cite{kuo2020megamaser}. The dynamical masses were determined through the work of the Megamaser Cosmology Project \citep[MCP;][]{ reid2009megamaser, braatz2010megamaser}. We selected any megamaser disk with a published BH mass, even if double-peaked rather than triple-peaked with more complex kinematics. Uncertainties in the BH mass are adopted from \cite{kuo2020megamaser} or \cite{greene2016megamaser} based on the dynamical modeling papers referenced therein.
Of the 22 megamasers included in the parent sample, we observed nine as described in Section \ref{sec:data}. Among these nine objects, we find evidence of a polarized broad line in three (Section \ref{sec:fit}). In addition to the nine we observed, we include six additional galaxies with measured polarized broad lines in the literature. Our complete sample of objects with broad line widths is described in Section \ref{sec:results}.
\subsection{Data}
\label{sec:data}
Linear spectropolarimetry of nine megamaser galaxies with known disk dynamical BH masses was obtained with the Robert Stobie Spectrograph (RSS) on the South African Large Telescope (SALT). See Table \ref{tab:sample} for dates of observation and exposure times. Each of the nine objects was observed on one night, except for NGC 1194 which was observed on two. Exposure time was divided evenly between four waveplate angles, with three observations at each angle. The seeing was approximately 0.6$\arcsec$. Resolution was $R \approx 1065$, and the pixel scale was 0.1267$\arcsec$ per pixel. The spectra were taken in a wavelength range of 4200\AA - 7270\AA.
\begin{deluxetable*}{lcclc}
\tablecaption{\label{tab:sample} Observed Megamaser Sample}
\tablewidth{0pt}
\tablehead{
\colhead{Galaxy} & \colhead{Distance (Mpc)} & \colhead{Ref.} & \colhead{Date Observed} & \colhead{Exposure Time (s)}
}
\decimalcolnumbers
\startdata
IC 2560 & 41.8 & 1 & 2017 May 16 & 1440 \\
Mrk 1029 & 124.0 & 1 & 2017 Oct 13 & 2280\\
NGC 1068 & 15.9 & 1 & 2017 Aug 26 &1200\\
NGC 1194 & 53.2 & 2 & 2017 Oct 11, 2017 Oct 17 & 1440, 1440\\
NGC 1320 & 34.2 & 2 & 2017 Oct 11 & 1440 \\
NGC 2960 & 67.1 & 1 & 2017 May 14 & 1440\\
NGC 3393 & 49.2 & 1 & 2017 May 22 & 1200\\
NGC 5495 & 95.7 & 2 & 2017 May 20 &1440 \\
NGC 5765b & 126.3 & 2 & 2017 May 22 &2400\\
\enddata
\tablecomments{Observed galaxy sample. Columns 2-3 give the distance to the megamaser and references. Columns 4-5 provide the date of observation and exposure time. References: (1) \cite{greene2016megamaser}, (2) \cite{kuo2020megamaser}}
\end{deluxetable*}
\vfill\null
\subsection{Reduction}
\label{sec:reduction}
The data were reduced with the polsalt\footnote{https://github.com/saltastro/pysalt} extension to the pysalt\footnote{http://pysalt.salt.ac.za/} \citep{crawford2010pysalt} reduction pipeline with a few minor modifications. Basic reduction steps include overscan subtraction; corrections for gain, crosstalk, and distortion; and cosmic ray cleaning. We modified the wavelength calibration method slightly to ensure that the wavelength was fit over the full pixel domain. After wavelength calibration, individual spectra were extracted by manually selecting the center and width. The O and E spectra for each observation were interpolated to use the same wavelength solution, then combined to calculate the Stokes parameters.
At this step, the reduction pipeline was modified to account for masked pixels. In the original software, if any of the three observations at a given waveplate angle had a masked pixel at a certain wavelength, the corresponding pixel in the combination would be masked. We altered the reduction such that if only one pixel out of three were to be masked, that pixel would be replaced with the average value of the remaining two pixels.
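Schematically, the modified combination step can be expressed with NumPy
masked arrays as follows (a sketch; the averaging of the three frames stands
in for the pipeline's actual combination):
\begin{verbatim}
import numpy as np

def combine_waveplate_frames(frames, masks):
    """frames, masks: shape (3, n_wave); masks is True where a pixel is bad.
    A pixel masked in exactly one frame is replaced by the mean of the
    other two; a pixel masked in two or more frames stays masked."""
    data = np.ma.masked_array(frames, mask=masks)
    combined = data.mean(axis=0)            # ignores masked entries
    combined.mask = masks.sum(axis=0) >= 2  # keep >=2 bad pixels masked
    return combined
\end{verbatim}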
The Stokes parameters were combined to generate the total intensity, polarization fraction (P), and polarization angle ($\theta$) for each object. The resulting spectra are missing $\sim$50 \AA\ of data between $\sim$5220-5270 \AA\ and $\sim$6260-6310 \AA\ due to the location of the chip gaps. These gaps do not affect the analysis of the broad H$\alpha$ region.
Before fitting the broad H$\alpha$ line (\S \ref{sec:fit}), we performed additional continuum subtraction from the Stokes parameters following the method described in \cite{capetti2021spectropolarimetry}. We estimated the continuum polarization for each object by taking regions of 30-80 \AA\ on either side of the H$\alpha$ line, then performing a constant fit to the values of I, Q, and U between these regions. The continuum fit was then subtracted before the Stokes parameters were combined to find P and $\theta$. The regions used for the background fit were chosen in each object to avoid emission and absorption lines, as well as the chip gaps. This subtraction improved the detection of the polarized broad lines and removed interstellar polarization in the region of interest.
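A minimal sketch of this step, using the standard relations
$P=\sqrt{Q^2+U^2}/I$ and $\theta=\frac{1}{2}\arctan(U/Q)$ and placeholder
window edges, is:
\begin{verbatim}
import numpy as np

def continuum_subtracted_pol(wave, I, Q, U, windows):
    """Constant continuum fit over line-free windows on either side of
    H-alpha, subtracted from the Stokes parameters before combining."""
    in_cont = np.zeros_like(wave, dtype=bool)
    for lo, hi in windows:              # e.g. [(6420., 6480.), (6650., 6710.)]
        in_cont |= (wave > lo) & (wave < hi)
    Is = I - I[in_cont].mean()
    Qs = Q - Q[in_cont].mean()
    Us = U - U[in_cont].mean()
    pol_flux = np.hypot(Qs, Us)                      # polarized intensity
    theta = 0.5 * np.degrees(np.arctan2(Us, Qs))     # polarization angle
    return Is, pol_flux, theta
\end{verbatim}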
\subsection{Standard Star}
To ensure the observations of the nine sample objects are properly calibrated, we observed a standard polarized star, BD-12 5133. This star has a known polarization fraction of 4.27 $\pm$ 0.02 \% in the $V$-band, and a polarization angle of 145.88 $\pm$ 0.09$^{\circ}$ \citep{CikotaStandard}. We apply the polsalt data reduction to the standard star and measure a polarization fraction of 4.6 $\pm$ 0.2 \% and a polarization angle of 144 $\pm$ 1$^{\circ}$ averaged over the V band (5070 - 5950 \AA). The values of polarization fraction and angle are consistent within 2$\sigma$ so we are assured the reduction is well calibrated, although we discuss a possible calibration issue in Section \ref{sec:spectra}.
\section{Results}
\label{sec:results}
\subsection{Spectra}
\label{sec:spectra}
For each object, we show P and $\theta$ in 25 \AA\ bins as well as the total and polarized intensity for the full observed wavelength spectrum. We also show total and polarized intensity in the H$\alpha$ region after performing continuum subtraction. One example is shown for IC 2560 in Figure \ref{fig:ic2560_spectra}, and the rest in Appendix A.
\begin{figure*}[htb!]
\plotone{figures/ic2560_spectra_binned}
\caption{Spectrum of IC 2560. Top row: polarization fraction and polarization angle in 25 \AA\ bins. Middle row: total and polarized intensity in arbitrary units for the full wavelength range. Bottom row: total and polarized intensity in the H$\alpha$ region after continuum subtraction.}
\label{fig:ic2560_spectra}
\end{figure*}
We find the average $\theta$ and P in the V-band continuum (5800 - 6300 \AA) and across H$\alpha$ (6500 - 6625 \AA) in the rest frame wavelength for each object following \cite{ramos2016upholding}. We also find the signal-to-noise ratio of P in the same regions. These values are given in Table \ref{tab:pol}. We find that $\theta$ is not well determined due to large scatter over short wavelength ranges. Our values have fractional errors of $\sim30\%$. A precise value of $\theta$, however, is not important for our analysis of the virial product.
We find the results of P and $\theta$ for IC 2560 and NGC 3393 to be consistent within errors when compared to the observations in \cite{ramos2016upholding}. Our measurements of NGC 1068 agree with the results presented in \cite{inglis1994spatially} and \cite{young1995near} for observations of the nucleus.
The value of P increases towards the red end of the spectrum for both IC 2560 and NGC 5765b (Figures \ref{fig:ic2560_spectra}, \ref{fig:ngc5765b_spectra}). This is most likely an issue with calibration rather than S/N, as we do not see the same feature in the other objects in the sample, all of which have similar values of S/N. This might be indicative of a red galaxy continuum present in our polarized spectra. However, as we subtract the total continuum before fitting the broad feature (\S \ref{sec:reduction}), this issue should not affect our measured values.
\begin{deluxetable*}{lcccccc}
\tablecaption{\label{tab:pol} Polarization Angle and Fraction}
\tablewidth{0pt}
\tablehead{
\colhead{Galaxy} & \colhead{$\theta_V$ ($^{\circ}$)} & \colhead{$\theta_{H\alpha}$ ($^{\circ}$)} & \colhead{P$_V$(\%)} & \colhead{P$_{H\alpha}$(\%)} & \colhead{SNR$_V$} & \colhead{SNR$_{H\alpha}$}
}
\decimalcolnumbers
\startdata
IC 2560 & 100 $\pm$ 40 & 120 $\pm$ 20 & 1.6 $\pm$ 0.9 & 1.3 $\pm$ 0.6 & 4.5 & 6.0 \\
Mrk 1029 & 0 $\pm$ 40 & 0 $\pm$ 30 & 2 $\pm$ 1 & 1.0 $\pm$ 0.6 & 4.6 & 3.6 \\
NGC 1068 & 88 $\pm$ 6 & 90 $\pm$ 10 & 2.6 $\pm$ 0.6 & 1 $\pm$ 2 & 70.3 & 364.3 \\
NGC 1194 & 150 $\pm$ 30 & 150 $\pm$ 10 & 2 $\pm$ 1 & 2.2 $\pm$ 0.7 & 6.4 & 10.4 \\
NGC 1320 & 110 $\pm$ 30 & 110 $\pm$ 30 & 3 $\pm$ 2 & 2 $\pm$ 1 & 5.1 & 4.2 \\
NGC 2960 & 140 $\pm$ 40 & 140 $\pm$ 40 & 0.9 $\pm$ 0.5 & 0.7 $\pm$ 0.3 & 3.4 & 3.1 \\
NGC 3393 & 160 $\pm$ 50 & 150 $\pm$ 40 & 1.2 $\pm$ 0.6 & 0.6 $\pm$ 0.4 & 3.1 & 2.4 \\
NGC 5495 & 120 $\pm$ 40 & 120 $\pm$ 40 & 7 $\pm$ 4 & 2 $\pm$ 2 & 4.4 & 3.3 \\
NGC 5765b & 120 $\pm$ 50 & 120 $\pm$ 30 & 2 $\pm$ 1 & 2 $\pm$ 1 & 2.8 & 5.5 \\
\enddata
\tablecomments{Average values of polarization angle and fraction for our observed objects in both the $V$-band continuum (5800-6300\AA) and around H$\alpha$ (6500-6625\AA) \citep{ramos2016upholding} in the rest frame. The signal to noise ratio is quoted for the polarization fraction in the same ranges. The values of the polarization fraction should be considered lower limits because of galaxy continuum dilution in the intensity spectrum.}
\end{deluxetable*}
\subsection{Fitting the Spectra}
\label{sec:fit}
For each of the nine observed objects, we fitted the H$\alpha$-[NII] lines in both total and polarized intensity using the astropy LevMarLSQFitter function. In the total intensity, we used three Gaussian components for the narrow lines and a constant background continuum. The narrow lines were fixed to have the same width in velocity space. The relative amplitudes of the [NII] lines were fixed in a 1:3 ratio, and the relative positions were set by the known wavelength difference. In the polarized intensity, we fitted for the same three lines, although they are not always present in the polarized light, with the same constraints and a constant background. We also included a broad component represented by an additional Gaussian peak. The broad peak was initialized at the same wavelength as H$\alpha$, but its position was allowed to vary. In the case of NGC 1068, there are so many velocity components within our aperture that we could not find a model including narrow lines to fit the polarized spectrum well. Therefore, we only included the broad feature and constant background.
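Schematically, such a constrained fit can be set up with astropy as follows;
the initial guesses and the exact tying conventions shown here are
illustrative, not the values used in our fits.
\begin{verbatim}
from astropy.modeling import models, fitting

# Narrow [NII]6548, H-alpha, [NII]6583 with a common velocity width and
# a fixed 1:3 [NII] amplitude ratio, plus a broad component and a
# constant background (rest-frame wavelengths in Angstrom).
narrow = (models.Gaussian1D(1.0, 6548.0, 3.0) +
          models.Gaussian1D(1.0, 6562.8, 3.0) +
          models.Gaussian1D(1.0, 6583.5, 3.0))
model = narrow + models.Gaussian1D(0.5, 6562.8, 40.0) + models.Const1D(0.0)

model.mean_0.tied = lambda m: m.mean_1 - 14.8        # fixed line separations
model.mean_2.tied = lambda m: m.mean_1 + 20.7
model.stddev_0.tied = lambda m: m.stddev_1 * 6548.0 / 6562.8  # same width
model.stddev_2.tied = lambda m: m.stddev_1 * 6583.5 / 6562.8  # in velocity
model.amplitude_2.tied = lambda m: 3.0 * m.amplitude_0        # [NII] 1:3

fitter = fitting.LevMarLSQFitter()
# best = fitter(model, wavelength, polarized_flux)
# FWHM (km/s) of the broad line: 2.355 * best.stddev_3 / best.mean_3 * 2.998e5
\end{verbatim}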
We are primarily interested in polarized intensity, and do not need a precise polarized fraction. Therefore, we do not perform starlight subtraction. We do note, however, that because we do not subtract the continuum, our polarization fractions should be considered lower limits.
Of the nine objects, three show evidence of a broad H$\alpha$ feature: IC 2560, NGC 1068, and NGC 5765b. We find evidence of a broad feature in the weaker H$\beta$ line only for NGC 1068 and measure a FWHM of 2900 $\pm$ 100 km s$^{-1}$, which is slightly narrower than our measurement of the broad H$\alpha$ line (3170 $\pm$ 50 km s$^{-1}$). The total and polarized fits for these objects along with residuals are shown in Figures \ref{fig:ic2560_fit} - \ref{fig:ngc5765b_fit}.
\begin{figure*}[htb!]
\plotone{figures/ic2560_fit}
\caption{Spectra of IC 2560. The left panel shows the total intensity in arbitrary units along with the best fit. The narrow lines are represented in blue, the constant background with the yellow dashed line, and the total fit in red. The polarized intensity is given on the right. The fit includes an additional broad feature represented in green. The polarized spectrum is best fit with three narrow lines of width $\sim$ 260 km s$^{-1}$ and a broad component of width $\sim$ 1300 km s$^{-1}$.}
\label{fig:ic2560_fit}
\end{figure*}
\begin{figure*}[htb!]
\plotone{figures/ngc1068_fit}
\caption{Same as Figure \ref{fig:ic2560_fit} but for NGC 1068. In polarized intensity, NGC 1068 only requires a broad component of width 3170 km s$^{-1}$ on top of a constant background.}
\label{fig:ngc1068_fit}
\end{figure*}
\begin{figure*}[htb!]
\plotone{figures/ngc5765b_fit}
\caption{Same as Figure \ref{fig:ic2560_fit} but for NGC 5765b. The polarized intensity of NGC 5765b is best fit with narrow lines of width $\sim$ 220 km s$^{-1}$ and a broad component of width $\sim$ 3300 km s$^{-1}$.}
\label{fig:ngc5765b_fit}
\end{figure*}
To confirm the presence of a broad line in these three galaxies, we fitted the spectra with and without a broad component and calculated $\Delta \chi^2$ between the two models. For the broad component to be considered significant, $\Delta \chi^2$ must be greater than 2.7 (90\% confidence) times the number of additional parameters (three). We found $\Delta \chi^2$ to be greater than 8.1 in all cases.
To estimate the error in the broad line width, we took 1000 random samples from the spectrum assuming a normal distribution of polarized flux at each wavelength. The mean and standard error were assigned to be the observed values output by the reduction pipeline (\S 2.4). We then refit both the narrow and broad components in the artificial observations. Some samples did not require a broad line component by our $\Delta \chi^2$ test. We did not include these cases in calculating the distribution of broad-line properties. Any broad line with lower width than the narrow features was also excluded. Fewer than 15\% of samples were excluded by these conditions for each object. For each accepted sample, we determined the broad line parameters. From the distribution of broad-line widths, we found the difference from the mode to the 1$\sigma$ level of significance (the 83$^{\rm{rd}}$ and 17$^{\rm{th}}$ percentiles, respectively). This provided the upper and lower errors on the FWHM measured from the original spectrum.
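A sketch of this resampling loop is given below; \texttt{fit\_broad\_fwhm} is
a hypothetical wrapper around the fit of Section \ref{sec:fit} that returns
the broad FWHM, or \texttt{None} when the sample fails the $\Delta\chi^2$ test
or the broad component comes out narrower than the narrow lines.
\begin{verbatim}
import numpy as np

def fwhm_bounds(pol_flux, pol_err, fit_broad_fwhm, n_samples=1000, seed=0):
    """Monte Carlo ~1-sigma bounds on the broad-line FWHM."""
    rng = np.random.default_rng(seed)
    widths = []
    for _ in range(n_samples):
        sample = pol_flux + rng.normal(0.0, pol_err)  # perturb each pixel
        w = fit_broad_fwhm(sample)                    # hypothetical wrapper
        if w is not None:                             # keep accepted samples
            widths.append(w)
    return np.percentile(widths, [17, 83])            # lower/upper bounds
\end{verbatim}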
In addition, for each sample we calculated the percentage of total polarized flux contained by the broad line by integrating both the broad feature and the total polarized flux. The flux contained in the broad feature was not consistent with zero in any of the three objects, providing additional evidence that the broad feature is a significant component.
Observations of the remaining six objects were insufficient to confirm the presence of a broad feature. Although we do not find a polarized broad line in NGC 3393 with our observation, a broad line was found in this object by \cite{ramos2016upholding}.
The line width we find for the polarized broad line in NGC 1068 is narrower than previous results \citep[e.g.][]{antonucci1985spectropolarimetry, young1995near, inglis1994spatially}. We find a FWHM of 3170 $\pm$ 50 km s$^{-1}$ compared to 3750 $\pm$ 400 km s$^{-1}$ \citep{young1995near} and 4377 $\pm$ 300 km s$^{-1}$ \citep{inglis1994spatially}. This difference may be due to the variation of FWHM measured from different regions in the object, which may suggest a contribution from thermal broadening (see Section \ref{sec:linewidth}). For example, \cite{inglis1994spatially} measure a FWHM of 4377 $\pm$ 300 km s$^{-1}$ in the nucleus, but find a value of 3247 $\pm$ 400 km s$^{-1}$ at a location 2.5\arcsec\ NE. Additionally, while \cite{inglis1994spatially} find the FWHM of H$\beta$ to be 4290 $\pm$ 400 km s$^{-1}$ in the nucleus, \cite{miller1991multidirectional} measure 2900 $\pm$ 200 km s$^{-1}$ in a different region. The latter measurement is consistent with our observation of NGC 1068. We note that our value of FWHM has a smaller error than the other observations. We are considering only the statistical error we estimated above rather than any systematic error from, for example, variations in FWHM between different regions.
\subsection{Broad Line Widths}
\label{sec:blwidths}
We combined our observed broad-line widths with additional objects from the literature. Broad-line width is defined using FWHM rather than line dispersion ($\sigma_{\rm{line}}$), which produces a different standard value of $f$ \citep[see e.g.][]{wang2019sloan}. We took all spectropolarimetric measurements with broad H$\alpha$ features in megamaser galaxies with known dynamical masses. The broad line widths are either provided directly from these sources, or in the case of NGC 2273 estimated by \cite{kuo2010megamaser} from the spectrum provided by \cite{moran2000frequency}.
We found additional values of FWHM for eight objects. For two of these objects, IC 2560 and NGC 1068, we also observed polarized broad features and took the mean of our measurements with those from the literature. Although we did not confirm the presence of polarized broad lines through our observations of NGC 3393, \cite{ramos2016upholding} do, so this object is included in our final sample. \cite{ramos2016upholding} perform a very similar fitting method to that described in Section \ref{sec:fit} on data from VLT/FORS2, and find broad features in four megamaser disk galaxies.
All measurements in our final sample are of polarized broad features, with the exception of NGC 4258. A broad feature in total intensity is observed by \cite{ho1997search} for this object. \cite{barth1999polarized} observe a polarized spectrum of NGC 4258, and fit the broad line fixing the width to the \cite{ho1997search} value. Although \cite{barth1999polarized} determine this is not a significant detection of a broad feature, we include this measurement because NGC 4258 is the archetypal megamaser galaxy and this is the only example of a broad line width for this object. Because \cite{barth1999polarized} fix the broad line width, we had to assign an uncertainty to the value. We chose a similar fractional error to the highest uncertainty measurements in our sample.
In the case of NGC 4388, we have both a direct-light spectrum from \cite{ho1997search} and a polarized spectrum \citep{ramos2016upholding}. \cite{ho1997search} observe a broad line with a FWHM of 3900 km s$^{-1}$ whereas \cite{ramos2016upholding} observe a polarized broad line with a FWHM of 4500$\pm$1400 km s$^{-1}$. Nominally these are consistent within the uncertainties for NGC 4388, but the difference is large, so we should view the direct-light width adopted for NGC 4258 with some skepticism.
All broad line widths including our observations and literature values are given in Table \ref{tab:fwhm}.
\section{Black Hole Masses}
\label{sec:BHmass}
Our main goal in this paper is to use the secure BH masses derived from the maser dynamics to test the fidelity of single-epoch BH masses. We thus review the strengths and limitations of each method briefly before turning to our comparison.
\subsection{Maser Dynamical Masses}
BH masses can be measured through the observation of dynamical tracers such as stars, gas, and masers \citep[e.g.][]{kormendy2013coevolution, mcconnell2013revisiting, saglia2016sinfoni}. Recently, CO emission has been used to measure BH mass \citep{davis2013black}, and with the use of ALMA these samples continue to grow \citep{boizelle2019precision}. Of these dynamical methods, the most precise extragalactic BH masses are determined with maser dynamics. There are significant limitations to this method, however. Megamaser disks must be edge-on to be detected and within $\sim100$ Mpc to be spatially resolved \citep{kuo2010megamaser}. To accurately determine the BH mass, the galaxies must also have a Keplerian rotation curve \citep{kuo2010megamaser}.
\subsection{Reverberation Mapping}
\label{sec:RM}
While we cannot spatially resolve the broad line region, we can use temporal variability to determine a characteristic size. All AGN show variability in their disk emission on timescales of days to months, and because the BLR is photoionized by the UV photons from the accretion disk, this leads to variability in the broad line emission. There is a lag, however, because the BLR sits light-days from the BH. This time separation can be measured, and provides an estimate of the size scale of the BLR \citep[e.g.][]{blandford1982reverberation, peterson1993reverberation}. With the known speed of light, the average radius of the BLR can be determined. This can be used in combination with a velocity estimate and the virial parameter $f$ to calculate a virial mass (Equation \ref{eq:mass}).
\cite{onken2004supermassive} find $f$ to be a constant value of 1.4 when using FWHM as a velocity estimate. The value of $f$ is generally calibrated using the relationship between BH mass and galaxy stellar velocity dispersion ($\sigma_*$) by assuming that the scaling relations for quiescent galaxies are the same as those of AGN \citep[e.g.][]{onken2004supermassive, collin2006systematic, woo2010lick, graham2011expanded, grier2013stellar, batiste2017recalibration}. It is typically taken to be a constant of order unity, although it has been shown to vary between objects \citep{yu2019calibration} and there is an uncertainty of $\sim$0.4 dex on the value from calibration alone \citep{shen2013mass}. Any dependence of the scale factor $f$ on AGN properties (e.g. luminosity) constitutes a major systematic uncertainty in BH mass determination (\S \ref{sec:disc}).
High cadence, high signal-to-noise RM data allow for modeling of the BLR, as it becomes possible to measure the lag between continuum and line emission as a function of velocity. These data can constrain idealized models of the BLR that include flattening, inclination, and kinematic structure as free parameters \citep[e.g.][]{pancoast2014modelling, grier2017structure, williams2018lick, williams2020space, bentz2021detailed, Villafa_a_2022}. In practice, simulated line profiles are generated from BLR models and then compared to the observed reverberation mapping data to produce constraints on the model parameters. Using this technique, the mass of the BH is determined independently of the virial product, and therefore does not require a choice of $f$. Instead, the $f$-factor can be extracted as a model parameter along with the BH mass and other values of interest.
While the modeling is independent of $f$, it is limited by the models of the BLR that go into the fitting. If the models do not span the space of real BLRs in dynamics or structure, then it is possible to induce systematic errors in the BH masses. One interesting test is to attempt to recover BH masses from a simulated BLR. In an analysis of multiple RM modeling methods, \cite{mangham2019reverberation} use a rotating, biconical disk wind BLR model and generate mock data with simulations of ionization and radiative transfer. These data are passed through the RM modeling programs to evaluate how well they extract the BLR kinematics and parameters. Interestingly, while the CARAMEL model \citep{pancoast2011geometric, pancoast2014modellinga, pancoast2014modelling} fails to recover the proper kinematics, it does accurately recover the time delay, inclination of the BLR, and the BH mass.
We include RM modeling results for 16 Seyfert 1 sources from \cite{pancoast2014modelling}, \cite{grier2017structure}, and \cite{williams2018lick} as a comparison sample to our single-epoch maser measurements. We do not include additional RM modeled objects from \cite{williams2020space}, \cite{bentz2021detailed}, or \cite{Villafa_a_2022} as they do not calculate the virial parameter $f$.
\subsection{Single-Epoch Masses}
\label{sec:masscalc}
It is possible to use the radius-luminosity relation calibrated from RM to estimate a BH mass from a single spectrum, the so-called ``single-epoch" BH mass \citep{vestergaard2002determining}. Again, assuming that the BLR is virialized, one takes the luminosity and infers the size scale of the BLR using the radius-luminosity relation. As in RM, the velocity of the BLR gas comes from the line-width, and the virial factor $f$ is usually derived as a constant scaling that makes the ensemble of RM masses obey the M-$\sigma_*$ relation. Single-epoch masses typically have an uncertainty of $\sim$ 0.5 dex \citep{shen2013mass}. Here, our goal is to test these single-epoch masses against the well-known maser mass using our polarized broad line measurements.
We estimate the virial products of our observed sample with Equation \ref{eq:vp}. We measure the velocity scale, represented by W in Equation \ref{eq:vp}, with the FWHM of the polarized broad lines, see Table \ref{tab:fwhm}. The size of the BLR is estimated through the radius-luminosity relation determined from RM, and is given by Equation \ref{eq:r_l} \citep{bentz2013low}.
\begin{eqnarray}
\label{eq:r_l}
\rm{log}_{10}(R_{BLR}/1 \textrm{ lt-day}) = 1.527^{+0.031}_{-0.031} + \nonumber \\ 0.533^{+0.035}_{-0.033} \rm{log}_{10}(\lambda L_\lambda / 10^{44} \textrm{ erg/s})
\end{eqnarray}
This relation gives the radius of the broad line region as a function of its 5100 \AA\ luminosity. Although recent results point to possible variation in the slope of the R-L relation \citep{alvarez2020sloan}, this is unlikely to be a significant concern due to the limited range of luminosity spanned by our sample. The optical luminosity of the BLR cannot be measured directly in obscured AGN, so instead we choose to use high energy ($E >10 \, \rm{keV}$) X-rays to provide a proxy for the bolometric luminosity. Hard X-rays are highly penetrating even in the most Compton-thick AGN, which several of the masers in the sample are known to be \citep[e.g.][]{masini2016nustar}.
The most sensitive, and currently only, focusing hard X-ray telescope is the Nuclear Spectroscopic Telescope Array (NuSTAR), which provides high-quality X-ray spectroscopy in the 3--79 keV band. All nine of the masers considered here have previous observations with NuSTAR \citep{arevalo20142, balokovic2014nustar, bauer2015nustar, masini2016nustar, masini2019measuring}. These studies self-consistently fit each individual NuSTAR spectrum with a well-motivated transmission and reflection model to fully account for even the heaviest obscuration along the line of sight, providing the most direct measure of the intrinsic X-ray luminosity in the 2--10 keV~band derivable directly from the high-energy emission. We convert the intrinsic 2--10~keV luminosities provided by these studies to the luminosity distances adopted throughout (see Table \ref{tab:fwhm}). To estimate the bolometric luminosity for each maser, we use these hard X-ray derived 2--10 keV luminosities and adopt the luminosity-dependent bolometric correction of \cite{duras2020universal}, Equation \ref{eq:bol1}.
\begin{equation}
\label{eq:bol1}
K_X(L_X) = 15.33 \Bigg[1 + \bigg(\frac{\rm{log}(L_x/L_\odot)}{11.48}\bigg)^{16.20}\Bigg]
\end{equation}
Here, $L_X$ is the 2-10 keV intrinsic X-ray luminosity. The error on $K_X$ is dominated by the intrinsic scatter of 0.37 dex. \cite{duras2020universal} provides an additional bolometric correction for 4400 \AA\ luminosity, Equation \ref{eq:bol2}.
\begin{equation}
\label{eq:bol2}
K_O(L_{\rm{BOL}}) = 5.13
\end{equation}
Again, the error is dominated by intrinsic scatter with a value of 0.26 dex. To convert from 4400 \AA\ to 5100 \AA\ luminosity, we use the power law fit to the composite spectrum in \cite{berk2001composite}, Equation \ref{eq:lum}.
\begin{equation}
\label{eq:lum}
f_\lambda \propto \lambda^{-1.56}
\end{equation}
This gives the flux density at 5100 \AA\ to be approximately 80\% of that at 4400 \AA\ (since $(5100/4400)^{-1.56} \approx 0.80$). The final luminosity values can then be used in Equation \ref{eq:r_l} to calculate the radius of the broad line region.
With these values, the virial products of each object can be estimated, and are given in Table \ref{tab:fwhm}. The virial products can then be compared to dynamical masses from maser disks \citep{greene2016megamaser, kuo2020megamaser}.
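For reference, the full chain from intrinsic X-ray luminosity to virial
product can be written compactly. The following Python sketch chains
Equations \ref{eq:bol1}--\ref{eq:lum} and \ref{eq:r_l} (constants in cgs;
the $\sim$80\% flux conversion above is adopted as-is):
\begin{verbatim}
import numpy as np

L_SUN, M_SUN = 3.846e33, 1.989e33          # erg/s, g
G, LT_DAY = 6.674e-8, 2.59e15              # cgs, cm per light-day

def virial_product(log_L2_10, fwhm_kms):
    """Single-epoch virial product (M_sun) from the intrinsic 2-10 keV
    luminosity (log erg/s) and the broad H-alpha FWHM (km/s)."""
    Lx = 10.0**log_L2_10
    k_x = 15.33 * (1.0 + (np.log10(Lx / L_SUN) / 11.48)**16.20)
    L_bol = k_x * Lx                        # X-ray bolometric correction
    L_5100 = 0.80 * L_bol / 5.13            # K_O = 5.13, then 4400 -> 5100 A
    log_R = 1.527 + 0.533 * np.log10(L_5100 / 1e44)   # R-L, in lt-days
    W = fwhm_kms * 1e5                      # cm/s
    return W**2 * (10.0**log_R * LT_DAY) / G / M_SUN
\end{verbatim}
For IC 2560 ($\log L_{2-10} = 43.4$, FWHM $= 1700$ km s$^{-1}$) this returns
$\log_{10}(\textrm{virial product}) \approx 7.2$, matching Table
\ref{tab:fwhm}.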
\begin{deluxetable*}{lcccccccccc}
\tablecaption{\label{tab:fwhm} Megamaser Sample}
\setlength\tabcolsep{1.6pt}
\renewcommand{\arraystretch}{1.1}
\tablehead{
\colhead{Galaxy} & \colhead{H$\alpha$ FWHM} & \colhead{Ref.} & \colhead{Avg H$\alpha$ FWHM} & \colhead{D} & \colhead{log M$_{\rm{BH}}$} & \colhead{Ref.} & \colhead{log L$_{2-10}$} & \colhead{$\lambda_{edd}$} & \colhead{log Virial Product} & \colhead{log(f)}\\ & \colhead{(km s$^{-1}$)} & & \colhead{(km s$^{-1}$)} & \colhead{(Mpc)} & \colhead{(M$_\odot$)} & & \colhead{(erg/s)} & & \colhead{(M$_\odot$)}
}
\decimalcolnumbers
\startdata
Circinus & 2300 $\pm$ 500 & 1 & 2300 $\pm$ 500 & 2.8 & 6.06 $\pm$ 0.1 & 9 & 42.2 & -0.8 & 6.8 $\pm$ 0.3& -0.7 $\pm$ 0.3\\
IC 2560 & 1300$_{-200}^{+100}$ & 2 & 1700 $\pm$ 200 & 41.8 & 6.64 $\pm$ 0.06 & 9 & 43.4 & -0.1 & 7.2 $\pm$ 0.3& -0.5 $\pm$ 0.3\\
& 2100 $\pm$ 300 & 1\\
Mrk 1210 & 2380 $\pm$ 120 & 3 & 2380 $\pm$ 120 & 56.7 & 7.152 $\pm$ 0.006 & 10 & 43.3 & -0.8 & 7.4 $\pm$ 0.3& -0.2 $\pm$ 0.3\\
NGC 1068 & 3170 $\pm$ 50 & 2 & 3800 $\pm$ 200 & 15.9 & 6.92 $\pm$ 0.25 & 9 & 43.4 & -0.4 & 7.9 $\pm$ 0.3& -1.0 $\pm$ 0.4\\
& 3750 $\pm$ 400 & 4\\
& 4377 $\pm$ 300 & 5\\
NGC 2273 & 2900 $\pm$ 200 & 6,7 & 2900 $\pm$ 200 & 25.7 & 6.88 $\pm$ 0.02 & 10 & 43.0 & -0.8 & 7.4 $\pm$ 0.3& -0.5 $\pm$ 0.3\\
NGC 3393 & 5000 $\pm$ 600 & 1 & 5000 $\pm$ 600 & 49.2 & 7.2 $\pm$ 0.33 & 9 & 43.3 & -0.8 & 8.0 $\pm$ 0.3& -0.8 $\pm$ 0.4\\
NGC 4258 & 1700 $\pm$ 500 & 8 & 1700 $\pm$ 500 & 7.3 & 7.58 $\pm$ 0.03 & 9 & 41.2 & -3.3 & 6.0 $\pm$ 0.4& 1.6 $\pm$ 0.4\\
NGC 4388 & 4500 $\pm$ 1400 & 1 & 4500 $\pm$ 1400 & 19.0 & 6.92 $\pm$ 0.01 & 10 & 42.6 & -1.2 & 7.6 $\pm$ 0.4& -0.7 $\pm$ 0.4\\
NGC 5765b & 3300$_{-300}^{+500}$ & 2 & 3300$_{-300}^{+500}$ & 126.3 & 7.66 $\pm$ 0.04 & 10 & 43.0 & -1.5 & 7.6 $\pm$ 0.3& 0.1 $\pm$ 0.3\\
\enddata
\tablecomments{Column 1: Object name. Columns 2-3: FWHM of the H$\alpha$ broad line and reference. Column 4: Averaged FWHM of H$\alpha$ broad line. Columns 5-7: Distance and dynamical black hole mass with references. Values are taken from the sources listed in each reference. Column 8: Intrinsic 2-10 keV X-ray luminosity. All values have estimated error of 0.1 dex except for NGC 1068 and NGC 1194, which have uncertainties of 0.3 dex. Column 9: Eddington ratio estimated from bolometric luminosity (see Equation \ref{eq:bol1}) and dynamical BH mass. Column 10: Virial product calculated with Equation \ref{eq:vp}. Column 11: $f$-value estimated by comparing dynamical mass and virial product. References: (1) \cite{ramos2016upholding}, (2) Our Work, (3) \cite{tran1995naturea,tran1995natureb}, (4) \cite{young1995near}, (5) \cite{inglis1994spatially}, (6) \cite{moran2000frequency} (spectrum), (7) \cite{kuo2010megamaser} (value), (8) \cite{ho1997search}, (9) \cite{greene2016megamaser}, (10) \cite{kuo2020megamaser}}
\end{deluxetable*}
In Figure \ref{fig:masscomp}, we compare the virial product of each maser to its known dynamical mass. These products do not include the $f$-factor. Rather, different values of $f$ are represented by the dashed lines in the figure. If all objects were to fall on the center, bold line, the BH mass would be equal to the virial product with no additional geometric factor. Each additional line has an $f$ value that is five times higher than the line above it. Therefore, within this maser sample $f$ ranges from approximately 0.1 to 40 (or 0.1 to 1.3 if NGC 4258 is excluded).
\begin{figure*}[htb!]
\plotone{edd_flines}
\caption{Comparison between known dynamical mass and the virial product without including an $f$-factor. Each object is colored by its Eddington ratio. The diagonal dashed lines represent the predicted BH masses from virial product using different values of $f$. The center, bold line has $f$ = 1, and each line below (above) increases (decreases) $f$ by a factor of 5. The red dashed line represents a standard value of $f$ = 1.4 calibrated with FWHM \citep{onken2004supermassive}.}
\label{fig:masscomp}
\end{figure*}
For a measured value of the virial product, the true dynamical mass of the object can vary greatly. Sources with a virial product of approximately $10^7$ M$_\odot$, for example, can have a dynamical mass from $10^6$ - $10^8$ M$_\odot$ within only this sample of nine objects. Therefore, $f$ must be well calibrated to determine the true dynamical mass from the virial product. We will consider relationships between $f$ and other observable parameters in Section \ref{sec:corr}. We will also explore whether we should expect to find a well-defined $f$ factor given the uncertainties in our virial product.
\section{Discussion}
\label{sec:disc}
We first discuss the theory of the BLR, and the importance of different measured parameters. Then we turn to our final sample which includes 9 masers with broad polarized lines and 16 RM+dynamical modeling objects from the literature. We consider whether the single-epoch masses are correlated with the dynamical mass measurement, first with the maser sample alone, where we understand well the dynamical masses and uncertainties, then including the full sample. We conclude with a discussion of the caveats to these results.
\subsection{Theoretical expectations for BLR structure}
We have focused on using the BLR as a tracer of the BH mass. The BLR, however, is also one of our primary tools for understanding accretion onto supermassive BHs, as the ionization structure gives us clues about the SED of the accretion disk, and the dynamics are tied to the emission from the disk. Over many decades, several different models have been proposed for the BLR, including orbiting clouds, inflowing and outflowing gas, and rotating disk winds \citep[see reviews in][]{mathews1985structure, sulentic2000phenomenology, Czerny2019}, each of which show some success, particularly in explaining the photoionization of the BLR gas \citep[e.g.][]{kwan1981formation, korista2004optical}. RM is one of the best ways to probe this region, and velocity resolved RM is now within reach for large samples.
One of the models that has been explored most is that of the disk wind \citep[e.g.][]{shields1977origin, emmering1992magnetic, chiang1997disk} where the gas is accelerated by line driving. Although X-ray emission may reduce the efficiency of line-driven winds \citep{waters2016reverberation}, shielding near the central BH ultimately enables line-driving to occur \citep{proga2000dynamics}. These models naturally explain observations like high velocity absorption lines observed in quasars, the observed BLR line ratios \citep{chiang1997disk}, and even the echo images resulting from RM campaigns \citep{waters2016reverberation}. The disk wind models are also consistent with measurements of line kinematics and line strengths from the BLR \citep[e.g.][]{proga2004dynamics}.
In the disk wind model, there are several concrete predictions for how $f$ might depend secondarily on other parameters. For instance, we expect the geometry of the system, including inclination and relative positions of the BLR and polar scatterers, to affect observed line width, an effect which must be accounted for in $f$ \citep[e.g.][]{chiang1995reverberation, smith2002spectropolarimetric, proga2004dynamics, waters2016reverberation}. \cite{proga2004dynamics} find the disk wind to be sensitive to Eddington ratio. In general, the disk structure and temperature depend on luminosity, and therefore too the BLR, so there is a strong motivation to explore this question empirically.
\subsection{Correlations between observable and derived values}
\label{sec:corr}
For the maser sample as a whole, we do not find clear evidence of a correlation between virial product and dynamical BH mass. We will quantify the lack of correlation below. However, in this section we also try to determine whether there is a secondary parameter driving the relation between $f$ and dynamical BH mass, which might help us understand the virial BH masses.
We evaluate the possible trend between the virial product and dynamical mass, along with other correlations, using the Pearson correlation test. Because the values we will compare have individual, possibly correlated, errors we use a Monte Carlo (MC) method to explore the correlations rather than taking the Pearson correlation as measured. To perform the MC correlation test, 10,000 samples of BLR radius (or luminosity), FWHM, and dynamical mass are taken assuming a normal distribution for each with the mean and standard error set by values in Table \ref{tab:fwhm} and Equations \ref{eq:r_l}-\ref{eq:lum}. These values are then used to recalculate the virial product, and we calculate $f$ by dividing the dynamical mass by the virial product. We measure the $r$ and $p$ values from the Pearson test on each sample and look at the distributions of $r$ and $p$ over all samples to evaluate the correlation between different values.
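A simplified sketch of this procedure, which perturbs the two compared
quantities directly rather than re-deriving the virial product from resampled
luminosity and FWHM each time, is:
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr

def mc_pearson(x, x_err, y, y_err, n_samples=10000, seed=0):
    """Distributions of Pearson r and p under Gaussian measurement errors."""
    rng = np.random.default_rng(seed)
    r_vals, p_vals = [], []
    for _ in range(n_samples):
        r, p = pearsonr(rng.normal(x, x_err), rng.normal(y, y_err))
        r_vals.append(r)
        p_vals.append(p)
    return np.median(r_vals), np.median(p_vals)
\end{verbatim}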
The distributions of these values for the relationship between virial product and the mass of the BH are given Table \ref{tab:corr}. Additional correlations are given in Table \ref{tab:app_corr} in Appendix B. If the virial product is a good estimator for the mass of the BH, there should be a high correlation between the two values. In the full sample of masers, however, we see no correlation between the mass of the BH and the virial product. We additionally see no correlation between the mass of the BH and the individual components of the virial product: luminosity or BLR radius and FWHM. Even if NGC 4258 is removed, there is no significant correlation. This implies that the $f$-factor is unlikely to be a constant value, and that the BH mass depends sensitively on the per-object value of $f$. Given that $f$ is not independent of dynamical mass, we do recover an expected trend between implied per-object $f$-value and M$_{\textrm{dyn}}$ for the maser sample.
\begin{deluxetable*}{llcc}
\tablecaption{\label{tab:corr} Correlation Test Results}
\tablewidth{0pt}
\renewcommand{\arraystretch}{1.3}
\tablehead{
\colhead{Objects} & \colhead{Comparison} & \colhead{$r$} & \colhead{$p$}
}
\decimalcolnumbers
\startdata
All Masers & $\rm{M_{BH}}$ - Virial Product & $-0.06_{-0.20, -0.34}^{+0.25, +0.49}$ & $0.68_{-0.27, -0.47}^{+0.22, +0.31}$\\
\hline
No NGC 4258 & $\rm{M_{BH}}$ - Virial Product & $0.15_{-0.25, -0.42}^{+0.29, +0.59}$ & $0.64_{-0.37, -0.60}^{+0.25, +0.34}$\\
\hline
Masers and RM Modeling & $\rm{M_{BH}}$ - log$_{10}$f & $0.46_{-0.11, -0.22}^{+0.10, +0.18}$ & $0.02_{-0.02, -0.02}^{+0.06, +0.23}$\\
& log$_{10}$f - L & $0.07_{-0.07, -0.14}^{+0.08, +0.15}$ & $0.72_{-0.25, -0.43}^{+0.19, +0.27}$\\
& log$_{10}$f - FWHM & $-0.36_{-0.07, -0.14}^{+0.08, +0.16}$ & $0.08_{-0.05, -0.07}^{+0.10, +0.26}$\\
\enddata
\tablecomments{Selected results of Pearson's $r$ test. The $r$ and $p$ values shown are the median along with bounds containing 68 and 95\% of the random samples. The full correlation test results are given in Appendix B.}
\end{deluxetable*}
If a relationship existed between the observed parameters, i.e. luminosity and FWHM, with the $f$-factor, these values could be used to calibrate $f$ and find an accurate mass from the virial product. Additionally, from our understanding of the structure of the BLR, we may expect a dependence on luminosity. Therefore, we search for a correlation between $f$ and luminosity or FWHM, but do not find a strong correlation with either. We similarly would expect a relationship between Eddington ratio and $f$-factor, and we observe a possible correlation in Figure \ref{fig:masscomp}. When the Pearson test is performed, however, we do not see a strong relationship between these parameters.
After exploring the correlations between parameters using only the masers, we combine the maser sample with the RM sample described in Section \ref{sec:RM}. Figures \ref{fig:threepanel} and \ref{fig:twopanel} show comparisons between $f$ and observable and derived values, respectively, including all objects. The two panels of Figure \ref{fig:threepanel} show $f$ compared to luminosity and FWHM respectively. We find no evidence of a correlation between these values.
\begin{figure*}[htb!]
\plotone{threepanel_4258}
\caption{Values of the $f$-factor for both the megamaser and modeling samples as compared to BLR luminosity and width of the polarized broad line. NGC 4258 is represented by an open circle due to the issues with its measurement. Values of maser luminosity have uncertainties of $\sim$0.5 dex. FWHM values for the objects in \cite{pancoast2014modelling} and \cite{williams2018lick} are taken from \cite{park2012lick} and \cite{barth2015lick} respectively and correspond to the H$\beta$ line.}
\label{fig:threepanel}
\end{figure*}
The $f$-factor does appear to have a relationship with the Eddington ratio and BH mass when including the maser and RM samples, as seen in Figure \ref{fig:twopanel}. Considering only the maser sample, we would expect $f$ to be related to Eddington ratio and dynamical mass because the mass of the BH is included in all three values; $f$ is directly proportional to the dynamical mass and Eddington ratio is inversely related. We note that we still see this possible relationship in the combined sample, where $f$ is not derived directly from M$_{\rm{BH}}$ as in the masers. The Pearson test, however, does not provide evidence for a strong relationship between these parameters. Additionally, even if there was a relationship between $f$, Eddington ratio, and dynamical mass, these values are not directly measurable and therefore would not be useful for determining the value of $f$ for a given object.
\begin{figure*}[htb!]
\plotone{twopanel_4258}
\caption{Values of the $f$-factor for both the megamaser and modeling samples as compared to the Eddington ratio and dynamical mass of the objects.}
\label{fig:twopanel}
\end{figure*}
In the past, one confirmation that single-epoch measurements are accurate tracers of BH mass has been the measured correlation between $\sigma_*$ and virial product. Indeed, $f$ has been calibrated by solving for the value bringing the virial products in line with the M-$\sigma_*$ relation. Therefore, we also examine the relationship between BH mass, $\sigma_*$, and $f$. Because the M-$\sigma_*$ relationship is used to calibrate $f$, we should see a correlation between $\sigma_*$ and BH mass or $f$. We compile values of $\sigma_*$ for the majority of the objects in our sample (see Table \ref{tab:app_big} in Appendix C). When performing the correlation test, however, we do not find evidence for a relationship between $\sigma_*$ and BH mass or $f$ in either the sample of megamasers taken alone or when RM modeling objects are included.
Given the large uncertainties on individual virial products, it is important to ask whether we could measure a single $f$ value from our sample even if virial product and dynamical mass were perfectly correlated. We must test whether our sample is constraining enough to rule out such a relationship between virial product and dynamical mass, and we do so as follows.
First, we generate artificial sources by selecting dynamical masses from a uniform distribution spanning the range of our maser sample. Assuming a perfect correlation between dynamical mass and virial product, we calculate virial products by choosing a constant value of $f$. We assign errors to both quantities by taking the average error associated with the virial products and dynamical masses in our maser sample. This allows us to simulate an observation of each object by sampling a Gaussian centered on the generated virial product or dynamical mass, with the average error as the standard deviation. We repeat the process to generate a random sample with $10^4$ simulated maser objects, each with an observed dynamical mass and virial product. These values are combined to generate an observed $f$ value. The Kolmogorov–Smirnov (K-S) test is used to compare the generated sample to our true sample of nine maser objects. This is done for different values of $f$, both for the masers alone and for the combined sample of masers and RM modeling objects. Results are shown as the solid lines in Figure \ref{fig:ks}.
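A compact sketch of this simulation is given below. The mass range and error values are placeholders rather than the values used in the analysis, and the convention $M_{\rm BH} = f \times {\rm VP}$ is assumed for illustration.
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

def ks_pvalue(f_true, f_observed, logM_range, sig_dyn, sig_vp, n=10**4):
    # Draw true dynamical masses (log10 M_sun) and impose a perfect
    # correlation with virial product: M_BH = f_true * VP.
    logM_dyn = rng.uniform(*logM_range, size=n)
    logVP = logM_dyn - np.log10(f_true)
    # One noisy "observation" of each quantity per fake source.
    obs_dyn = rng.normal(logM_dyn, sig_dyn)
    obs_vp = rng.normal(logVP, sig_vp)
    f_sim = 10**(obs_dyn - obs_vp)   # observed f of each fake source
    # Compare the simulated f distribution to the real sample.
    return ks_2samp(f_sim, f_observed).pvalue

# e.g.: ks_pvalue(1.4, f_masers, logM_range=(6.5, 8.0),
#                 sig_dyn=0.05, sig_vp=0.5)
\end{verbatim}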
\begin{figure*}[htb!]
\plotone{ks_broad}
\caption{K-S test p-value generated by comparing the observed maser or maser and RM modeling sample to a randomly generated, perfectly correlated sample. The blue shaded region represents the values of $f$ included in the 1$\sigma$ range of \cite{woo2015black}, $f = 1.12 \pm 0.3$. The vertical blue line gives $f$ = 1.4 as predicted by \cite{onken2004supermassive}. The horizontal line represents a $p$ value of 0.05. If a value of $f$ falls below the line, we can reject a perfect correlation between dynamical mass and virial product with that single $f$ value. The test is performed for virial products calculated with the values of FWHM given in Table \ref{tab:fwhm} (solid lines), and the values of FWHM after 2000 km s$^{-1}$ broadening is subtracted in quadrature (dashed lines). We find it to be unlikely that our unchanged sample could be drawn from a perfect correlation between dynamical mass and virial product with $f$ from either \cite{woo2015black} or \cite{onken2004supermassive}. It would be more difficult, however, to reject these $f$ values if thermal broadening were present.}
\label{fig:ks}
\end{figure*}
From the results of the K-S test, we can rule out a correlation between dynamical mass and virial product with a single value of $f$ = 1.4 \citep{onken2004supermassive}. We can also almost entirely reject the 1$\sigma$ range of $f = 1.12 \pm 0.3$ given by \cite{woo2015black} with a $p$-value below 0.05. However, we cannot rule out a perfect correlation for all values of $f$. For example, when including the maser sample alone, values of $f$ between $\sim$0.1--0.6 are allowed. As seen in Figure \ref{fig:masscomp}, the majority of maser objects fall in this range of $f$ values. To rule out a correlation between virial product and dynamical mass for any value of $f$, we would need to observe additional maser objects or reduce the uncertainties on existing objects.
The major sources of error in our virial product estimates are our understanding of the polarized broad line width and measurements of the intrinsic luminosity. These will be discussed in the following sections. Reducing these uncertainties will be difficult, so we can instead estimate how many additional maser objects would be required to rule out a single value of $f$. Starting with the observed sample of masers, we assume that there is no relationship between virial product and dynamical mass. We generate a random value of each from a uniform distribution over our sample range. Using these two values, we calculate $f$. We perform the K-S test again with this additional object included in the maser sample, and continue adding objects until $p < 0.05$. We repeat the test 1000 times and fit the resulting distribution with a Poisson function to statistically determine how many additional objects are required. For a fixed $f$ of 0.3, we find that approximately six additional maser objects would be required to show that there is no correlation.
\subsection{Caveats from intrinsic luminosity}
The requirement of converting a hard X-ray luminosity in a heavily obscured AGN to an optical continuum measurement introduces several sources of error. As discussed in Section \ref{sec:masscalc}, there is significant uncertainty in the bolometric corrections. The X-ray correction has an intrinsic scatter of 0.37 dex, while the optical has a scatter of 0.26 dex \citep{duras2020universal}. Future surveys that attempt to measure virial products across all types of AGN could potentially reduce the combined uncertainty on the luminosity through consistent calibration to a wavelength region that is relatively insensitive to dust and gas obscuration, such as using high spatial resolution imaging in the mid-infrared.
\subsection{Caveats in using polarized line widths}
\label{sec:linewidth}
We have presented comparisons between dynamical mass and virial products using broad-line widths measured from polarized light. We must address whether the polarized line widths may either under- or overestimate the true velocity distribution.
In megamaser galaxies, the disk must be nearly edge-on to observe masing. When jets are seen, they align with the angular momentum of the disk, suggesting the masers and inner disk are aligned \citep{kamali2019accretion}. With this geometry we would expect polar scattering, where scatterers are located above and below the disk, to dominate. If this were the case, the polarization angle is predicted to be perpendicular to the radio jet \citep[e.g.][]{smith2002spectropolarimetric}. For the objects in our sample which are observed to have a jet, we do find these two angles to be approximately perpendicular. Polar scattering would produce narrower polarized lines compared to the full distribution of velocities in the BLR \citep[e.g.][]{smith2002spectropolarimetric, smith2004seyferts}. Therefore, the geometry of the object may lead to narrowing of the polarized broad lines. We expect the magnitude of this effect to be comparable to the inclination effects for the BLR sample.
In addition to line narrowing due to the scattering geometry in these objects, we also expect some thermal broadening due to the nature of the scatterers themselves. Studies of NGC 1068 have shown evidence for scattering by both dust and electrons with the dominant scatterer varying by region \citep[e.g.][]{miller1991multidirectional}. By comparing the observed polarized line width between dust scattering regions, which are not expected to cause thermal broadening, and electron scattering areas, which do produce broadening, the effect of thermal broadening can be estimated. NGC 1068 was found to have thermal broadening of approximately 3360 km s$^{-1}$ with a corresponding electron temperature of $\sim10^5$ K \citep{miller1991multidirectional}.
The thermal broadening in NGC 1068 likely represents an extreme case. First, the broadening of $\sim3400$ km s$^{-1}$ is larger than the total line width of the majority of objects in our sample. Additionally, regions dominated by dust or electron scattering could be observed independently in NGC 1068, while we most likely see a mix of both scatterers when observing the objects in our sample. We can consider NGC 4388 for which we have both direct-light and polarized broad line measurements with FWHMs of 3900 km s$^{-1}$ \citep{ho1997search} and 4500$\pm$1400 km s$^{-1}$ \citep{ramos2016upholding} respectively. If we take the difference between these values to be solely due to thermal broadening, we find a broadening of $\sim 2200$ km s$^{-1}$ with a corresponding electron temperature of $3 \times 10^4$ K. This is significantly lower than the broadening in NGC 1068.
Although we do not expect all of our objects to have as much thermal broadening as NGC 1068, we may consider what effect more moderate broadening would have on our results. In general, our virial masses overestimate the dynamical masses of the BHs if a typical value of $f$ = 1.4 is assumed. It is possible that thermal broadening is responsible for an increase in the observed FWHM leading to this overestimation. Therefore, we determine the value of thermal broadening that must be subtracted in quadrature from the FWHM of each object such that the virial and dynamical masses agree for $f$ = 1.4. For this test, we exclude NGC 4258 which would require the FWHM to be narrower to agree. We find our sample to require an average thermal broadening of $\Delta v \approx 3000 \pm 1000$ km s$^{-1}$ to match the dynamical masses using the accepted value of the virial parameter. The temperature of the scattering electrons that would produce this broadening can be estimated with $T = m_e \Delta v^2 / \left[16 k_B \ln(2)\right]$ \citep{miller1991multidirectional}. We find the corresponding electron temperatures to be between $\sim10^4-10^5$ K.
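For reference, a short numerical check of this temperature estimate (a sketch in Python; the 3000 km s$^{-1}$ input is the average broadening quoted above):
\begin{verbatim}
import numpy as np
from scipy.constants import m_e, k   # electron mass [kg], k_B [J/K]

def electron_temperature(delta_v_kms):
    # T = m_e * dv^2 / (16 k_B ln 2), with dv the FWHM
    # thermal broadening.
    dv = delta_v_kms * 1e3           # km/s -> m/s
    return m_e * dv**2 / (16 * k * np.log(2))

print(electron_temperature(3000.0))  # ~5e4 K, within 10^4-10^5 K
\end{verbatim}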
It is possible that electron scattering could cause thermal broadening of this level in almost all objects in our sample. However, if we remove this estimated broadening from our values of FWHM we find the resulting widths to be systematically smaller when compared to the broad line widths in the RM sample, which are not affected by thermal broadening. It is unlikely for the maser sample to have inherently smaller values of FWHM compared to the RM objects since they live in similar host galaxies. Additionally, any geometrical effects leading to line narrowing should have similar magnitudes across both samples. Therefore, although there may be some thermal broadening in our sample, it is likely not to the level of complete agreement with $f$ = 1.4.
We consider the effects more moderate thermal broadening would have on our conclusions about the likely value of $f$. To do so, we recreate the results described in Section \ref{sec:corr} after subtracting 2000 km s$^{-1}$ of thermal broadening from the FWHM of each maser object. Although this represents less thermal broadening than in NGC 1068, or the value required to match $f$ = 1.4, it allows for objects with a value of FWHM less than 2000 km s$^{-1}$ to be included in our test. Additionally, this amount of thermal broadening does not cause the maser FWHM values to be systematically smaller than the RM sample values. After subtracting this thermal broadening, we repeat the K-S test to determine whether our objects could be consistent with a correlation between virial product and dynamical mass. These results are shown as the dashed lines in Figure \ref{fig:ks}. If 2000 km s$^{-1}$ thermal broadening were present in each maser object, we can no longer completely reject the range of $f$ values given by \cite{woo2015black}, although we still find the value of $f = 1.4$ given by \cite{onken2004supermassive} to be unlikely. Therefore, it is possible that thermal broadening is the source of disagreement between the dynamical masses and our observed virial products. However, the true value of thermal broadening in each individual object is not known and would require more detailed observations to determine.
\subsection{Implications for the structure of the broad line region and virial masses}
\label{sec:disc_2}
There are many reasons why the single-epoch mass estimate may break down, which have been discussed thoroughly in \cite{shen2013mass}. First, this method relies upon the assumption that the BLR is virialized. In a number of AGN, measurements of different line widths and time lags using RM data show the expected virial scaling \citep[e.g.][]{peterson1999keplerian, peterson2000evidence, onken2002mass, kollatschny2003accretion}. This is not sufficient to confirm the region is virialized, however. For example, radiation pressure would lead to a similar scaling \citep{krolik2001systematic}. Even if the BLR were not virialized, the measured line widths would not be expected to deviate significantly from the expected virial value \citep{shen2013mass}.
Another issue may be the difference in measurement between the radius and width used in the virial product. \cite{krolik2001systematic} considers the possibility that the weighting over radial distribution used to determine line width could be different than that for radius of the BLR. Therefore, the product of these values would not be an accurate estimate of the enclosed mass.
The value used for width is another area of concern, as either the FWHM or $\sigma_{\rm{line}}$ could be used. FWHM is used more commonly because it can be measured more easily than $\sigma_{\rm{line}}$, which often requires modeling \citep{dalla2020sloan}. Additionally, measurements of $\sigma_{\rm{line}}$ can vary depending on the choice of method \citep[e.g.][]{denney2009systematic, rafiee2011biases, rafiee2011supermassive, assef2011black}, leading to different values of BH mass \citep[e.g.][]{shen2013mass}. However, use of FWHM may introduce bias into the mass measurement \citep[e.g.][]{rafiee2011biases, dalla2020sloan}.
The assumption of a single $f$ value is not necessarily valid either. The standard $f$ value is calculated with the M-$\sigma_*$ relation, which is assumed to hold between AGN and quiescent galaxies in the calibration of $f$. However, within the dynamical sample there is some evidence for different scaling relations for different galaxy morphologies \citep[e.g.][]{hu2008black, greene2008black, graham2008populating, graham2009m, hu2009black, gultekin2009m, greene2010precise, mcconnell2013revisiting}. At this point, it is unclear whether the maser galaxies behave differently from the quiescent spirals \citep{greene2016megamaser}. Therefore, the choice of objects can produce different values of $f$ \citep[e.g.][]{shen2013mass}.
We find $f$ to vary between our observed galaxies, but must consider the limitations of our method for determining $f$ for megamaser galaxies. Due to the orientation of these objects, we can only measure the broad lines in polarized light. As discussed in Section \ref{sec:linewidth}, this could introduce unaccounted-for bias into our calculation. However, when we add the RM sample we see very similar results, suggesting that the maser measurements alone are not biased.
\section{Summary and Future Work}
\label{sec:summary}
We have used spectropolarimetric measurements of megamaser galaxies with known dynamical BH masses to determine the accuracy of the single-epoch method. We do not find strong evidence for a correlation between the virial product and dynamical mass. Additionally, $f$ was found to vary significantly between objects and was not found to correlate with any observable parameters. Although we cannot rule out a correlation between virial product and dynamical mass, we show that this correlation is unlikely for specific values of $f$ previously proposed in the literature. We supplemented our sample with RM-modeled objects, and found consistent results.
Further observations would be necessary to calibrate the virial product and reach a better determination of the value of $f$. Additional spectropolarimetric measurements of megamaser galaxies may also provide stronger evidence for a lack of correlation between virial product and dynamical mass. Multi-object RM, ongoing with SDSS and planned to continue with SDSS-V, will yield new information about the BLR \citep{shen2014sloan, homayouni2020sloan}. Additionally, Las Cumbres \citep{brown2013cumbres}, and in the future Rubin Observatory \citep{bianco2021optimization, abell2009lsst}, will provide high-cadence monitoring of AGN that will hopefully produce new insight into the BLR structure. ELTs will in principle measure dynamical BH masses to $z>1$ \citep{gultekin2019astro2020}, further expanding the possible comparison sample.
Our understanding of the cosmic evolution of BH mass density and BH-galaxy scaling relations often relies on single-epoch virial masses \citep[e.g.][]{laor1998quasar, wandel1999central, mclure2002measuring, vestergaard2006determining, kelly2013demographics, volonteri2016inferences, pensabene2018alma}. If geometry and other unknown factors significantly affect $f$ or the broad line width, then the weak relation between black hole mass and virial product may introduce large unquantified uncertainties in the inference of a mass. Such uncertainties make individual measurements of BHs very challenging, and these measurements should be approached cautiously, particularly in small samples of objects.
\section*{Acknowledgments}
We thank Michael DiPompeo for providing the observations included in this paper and Yue Shen for useful discussions. We also thank the referee for a timely and constructive report. All of the observations reported in this paper were obtained with the Southern African Large Telescope (SALT) under program 2017-1-SCI-002 (PI: DiPompeo). J.E.G. and A.D.G. acknowledge support from the National Science Foundation grant AAG/1007094. R.C.H. acknowledges support from the National Science Foundation through CAREER award number 1554584.
\software{PySALT \citep{crawford2010pysalt}, Astropy \citep{astropy:2013, astropy:2018}, WebPlotDigitizer \citep{Rohatgi2020}}
\section{Deep Reinforcement Learning Approach} \label{sec:approach}
In this section, we introduce key components of our DRL-based planning method.
We follow the paradigm of centralized training and decentralized execution (CTDE)~\cite{foerster2018counterfactual}, which has been widely used in MARL for Dec-POMDP modeled robot learning problems~\cite{fan2020distributed, tallamraju2020aircaprl}.
The training and planning phases are elaborated in Sec.~\ref{subsec:training_planning}.
In Fig.~\ref{fig:sys_arch}, we summarize our end-to-end planning pipeline.
For an ego agent (\textit{worker} or \textit{station}), the perception-range and communication-range observations described in Sec.~\ref{subsec:observation} are encoded into feature vectors by a convolutional neural network.
Then, they are stacked with the zero-range observation, constituting the final observation vector $\hat{o}_t^i$ in latent space.
The policy networks $\pi_{\theta}$ for \textit{workers} and $\pi_{\phi}$ for \textit{stations} are both Multi-layer Perceptron (MLP) modules, each of which takes $\hat{o}_t^i$ for the $i$-th agent at time $t$ and outputs its action $a_t^i$.
The action $a_t^i$ is then converted to velocity commands as mentioned in Sec.~\ref{subsec:action}, which solves both rendezvous planning for \textit{station} and coverage planning for \textit{workers}.
\subsection{DRL Training and Planning Phases}\label{subsec:training_planning}
We follow the paradigm of CTDE to train the policy network of each agent in Eq.~\ref{eq:marl_mcpp}, then deploy the corresponding policy networks on robots for planning.
During training, the state of the whole system and the observations of all agents are used for better training performance.
During planning, each agent receives only its zero-range, perception-range, and communication-range observations as described in Sec.~\ref{subsec:observation}, then outputs the best action according to the corresponding trained policy network.
We unfold the details in CTDE in the following two parts.
\subsubsection{Centralized training phase}
for the training algorithm, we use the multi-agent actor-critic algorithm MA-POCA~\cite{cohen2021use} to train the policy networks for \textit{workers} and \textit{stations}.
During training, a centralized critic network is trained to estimate the value of the current system state, including the state of the whole system and the observations of all agents.
Note that states and observations of the whole system are only needed during training and can be easily accessed in simulations.
According to \cite{lowe2017multi}, such a centralized critic network would greatly help the training of the actor network (i.e., policy network).
\begin{figure}[ht]
\vspace{-2.5mm}
\centering
\includegraphics[width=0.8\linewidth]{figs/curri_design.pdf}
\caption{Two-stage curriculum learning for mCPP problem of \textit{worker-station} MRS: (a) Stage-I: one \textit{station} with single \textit{worker}; (b) Stage-II: multiple \textit{stations} with multiple \textit{workers}.}
\label{fig:curri_design}
\vspace{-2mm}
\end{figure}
For better policy exploration of the coordination behaviors towards the coverage task during training, we adopt the Intrinsic Curiosity Module (ICM)~\cite{pathak2017curiosity}.
In short, the ICM trains a self-supervised inverse dynamic model that predicts the consequences of an agent's actions, and uses that prediction error as an intrinsic reward to guide the agent's exploration during training.
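To make the ICM idea concrete, the sketch below implements its core computation in PyTorch: a forward model predicts the next latent feature, and its prediction error serves as the intrinsic reward. The layer sizes are placeholders, and this is a minimal illustration rather than the ML-Agents implementation used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICM(nn.Module):
    # Minimal Intrinsic Curiosity Module (Pathak et al., 2017).
    def __init__(self, obs_dim, act_dim, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + act_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim))
        self.inverse_model = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim))

    def forward(self, obs, next_obs, action):
        phi, phi_next = self.encoder(obs), self.encoder(next_obs)
        phi_pred = self.forward_model(torch.cat([phi, action], dim=-1))
        act_pred = self.inverse_model(torch.cat([phi, phi_next], dim=-1))
        # Forward prediction error in feature space = intrinsic reward.
        r_int = 0.5 * (phi_pred - phi_next.detach()).pow(2).sum(-1)
        loss = r_int.mean() + F.mse_loss(act_pred, action)
        return r_int.detach(), loss
\end{verbatim}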
In consideration of training performance, we design a two-stage curriculum learning scheme~\cite{bengio2009curriculum} evolving from a single \textit{worker} to multiple \textit{workers}, which guides \textit{workers} and \textit{stations} toward better policies during training.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.90\linewidth]{figs/sys_arch.pdf}
\caption{Our DRL-based mCPP pipeline for the \textit{worker-station} MRS: during planning, each agent receives its own zero-range, perception-range and communication-range observations, and outputs the best action at each time step according to its trained policy network.}\label{fig:sys_arch}
\vspace{-4.5mm}
\end{figure*}
As shown in Fig.~\ref{fig:curri_design}-(a), stage-I is designed to make it easier for both \textit{worker} and \textit{station} to focus on learning basic behaviors, such as collision avoidance with static obstacles and dynamic interferers.
For the \textit{worker}, the ``\textit{cover and replenish}'' behavior is learned when its remaining energy is low.
For the \textit{station}, the behavior of finding and following exhausted \textit{workers} is learned.
Once stage-I training has converged, we extend to multiple \textit{workers} and \textit{stations}, and adapt the pre-trained policy networks to train the final policies (see Fig.~\ref{fig:curri_design}-(b)).
\subsubsection{Decentralized execution phase}
unlike the centralized training phase, each agent needs only its own observation during the decentralized planning phase.
Specifically, each agent takes only its own observation as introduced in Sec.~\ref{subsec:observation}, and outputs the optimal action through the corresponding policy network ($\pi_\theta$ for \textit{workers}, $\pi_\phi$ for \textit{stations}).
\subsection{Observation Space}\label{subsec:observation}
For both \textit{worker} and \textit{station}, the observation $\hat{o}_t^i$ of the $i$-th ego agent at time $t$ consists of the following three types:
1) zero-range observation $(^z o)_t^i$ contains its own basic information; 2) perception-range observation $(^p o)_t^i$ contains precise local information within its perception range; 3) communication-range observation $(^c o)_t^i$ contains rough global information within its communication range.
A demonstration of observation is shown in the ego agent observation block in Fig.~\ref{fig:sys_arch}.
\begin{table}[ht]
\vspace{-5mm}
\begin{center}\begin{tabular}{||c|c c|c c||}
\hline
& \multicolumn{2}{c|}{\textbf{perception-range}} & \multicolumn{2}{|c||}{\textbf{communication-range}} \\
\hline\hline
\multirow{2}{*}{\textit{\textbf{worker}}} & \textit{worker} & \textit{station} & \textit{worker} & \textit{station} \\
& obstacle & uncovered area & \multicolumn{2}{c||}{uncovered area} \\
\hline
\multirow{2}{*}{\textit{\textbf{station}}} &
obstacle & {\textit{worker}(normal)} & \textit{station} & {\textit{worker}(normal)}\\
& \multicolumn{2}{c|}{\textit{worker}(exhausted)} & \multicolumn{2}{c||}{\textit{worker}(exhausted)} \\
\hline
\end{tabular}
\end{center}
\caption{Encoded objects in perception-range and communication-range observations for ego agents in the \textit{worker-station} MRS.}
\label{tab:observations}
\vspace{-3mm}
\end{table}
Here we elaborate on each type of observation for an ego agent.
For both \textit{workers} and \textit{stations}, $(^z o)_t^i$ includes the global position and local velocity, which are stacked vertically as a 1-D zero-range observation vector.
Note that when the $i$-th agent is \textit{worker}, the percentage of remaining energy $p_t^i$ is also included in $(^z o)_t^i$.
Both $(^p o)_t^i$ and $(^c o)_t^i$ are encoded as images of object positions (see Fig.~\ref{fig:sys_arch}), translated and rotated into the frame of the $i$-th ego agent.
The encoded objects in $(^p o)_t^i$ and $(^c o)_t^i$ are listed in Tab.~\ref{tab:observations}.
For $(^p o)_t^i$, it is a $20\times20$ image with $n_p$ channels (i.e., the number of encoded objects) and $m_\text{perc}$ grid resolution (i.e., length per pixel).
For $(^c o)_t^i$, it is a $30\times30$ image with $n_c$ channels and $m_\text{comm}$ grid resolution.
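The sketch below illustrates how such ego-centric observation images can be rasterized. It is a minimal Python example; the function name, the binary-occupancy encoding, and the frame conventions are our assumptions for illustration.
\begin{verbatim}
import numpy as np

def rasterize(ego_xy, ego_yaw, objects_by_channel, size, resolution):
    # One channel per object type; positions are translated to the
    # ego agent and rotated by its heading before gridding.
    c, s = np.cos(-ego_yaw), np.sin(-ego_yaw)
    R = np.array([[c, -s], [s, c]])
    img = np.zeros((len(objects_by_channel), size, size), np.float32)
    for ch, pts in enumerate(objects_by_channel):  # (N_k, 2) arrays
        local = (np.asarray(pts) - ego_xy) @ R.T   # ego frame
        pix = np.floor(local / resolution).astype(int) + size // 2
        ok = (pix >= 0).all(axis=1) & (pix < size).all(axis=1)
        img[ch, pix[ok, 1], pix[ok, 0]] = 1.0
    return img  # e.g. 20x20 perception or 30x30 communication image
\end{verbatim}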
\subsection{Action Space}\label{subsec:action}
We define the action space as a 2-D continuous vector space consisting of linear and angular velocities.
Given the max linear velocity $v^i_\text{max}$ and max angular velocity $\omega^i_\text{max}$ of the $i$-th robot, the sampled action $a_t^i$ is scaled by $v^i_\text{max}$ or $\omega^i_\text{max}$ to give the desired velocity commands.
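As a one-line sketch of this scaling (assuming, as an illustration, that the policy outputs are normalized to $[-1, 1]$):
\begin{verbatim}
def to_velocity_command(action, v_max, w_max):
    # Map a normalized policy output in [-1, 1]^2 to
    # (linear, angular) velocity commands.
    return float(action[0]) * v_max, float(action[1]) * w_max
\end{verbatim}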
\subsection{Reward Design}
As mentioned in Eq.~\ref{eq:worker_reward}, the shared reward $r$ for all agents consists of four components.
Here we only elaborate on the first two terms since the last two are simply penalty constants as introduced previously.
Recall that the first component $\reward{c}$ is the covering reward for the $i$-th \textit{worker} at each time $t$, where a positive reward is given when a new area is covered.
Once the coverage work is completed, the training episode will terminate with a completion reward $r_{\text{finish}}$:
\begin{equation}
\reward{c} = \begin{cases}
r_{\text{finish}},\qquad\qquad\quad \Omega=\bigcup_{t'=0}^t \bigcup_{i=1}^m \mathcal{C}_{t'}^i\\[1pt]
r_{\text{cover}}\times\left(\lvert\mathcal{C}_{t}^i\rvert - \lvert\mathcal{C}_{t-1}^i\rvert\right),\quad\;\;\; \text{otherwise}
\end{cases}
\end{equation}
The second component $\reward{e}$ is a soft approximation of the hard constraint on the \textit{worker}'s energy capacity.
For \textit{worker} $\worker{i}$, it allows $p_t^i$ to be less than zero during training, which lets $\worker{i}$ still move when $p_t^i\leq0$ (i.e., no energy left).
More specifically, such a soft approximation uses a truncated exponential function for $\reward{e}$ as below:
\begin{equation}
\reward{e} = \begin{cases}
-1\times \text{min}\{1, \exp\,(p_t^i-p^e)\},& p_t^i < p^e\\[1pt]
0,& \text{otherwise}
\end{cases}
\end{equation}
where $p^e$ is the threshold indicating whether the \textit{worker} is exhausted.
Such a design results from practical considerations:
1) directly modeling this hard constraint during training makes the \textit{worker} struggle to learn the ``\textit{cover and replenish}'' behavior when its energy is exhausted;
2) the truncated exponential penalty, with a relatively large derivative around $p^e$, makes the \textit{worker} aware of its exhausted status as $p_t^i$ approaches $p^e$.
Note that the energy capacity hard constraint of \textit{worker} is only modeled as a soft constraint during training; it is still a hard constraint (i.e., \textit{workers} cannot move once $p_t^i \leq 0$) during planning.
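A compact sketch of these two reward components (in Python; the reward constants and the set-based coverage representation are placeholders consistent with the formulation above):
\begin{verbatim}
import numpy as np

def coverage_reward(covered_now, covered_prev, omega_cells,
                    r_cover=0.1, r_finish=10.0):
    # r_c: reward newly covered cells; completion bonus once the
    # union of covered cells contains the whole target set Omega.
    if covered_now >= omega_cells:          # set containment
        return r_finish
    return r_cover * (len(covered_now) - len(covered_prev))

def energy_penalty(p, p_e=0.2):
    # r_e: truncated exponential penalty below the exhaustion
    # threshold p_e; zero otherwise.
    return -min(1.0, float(np.exp(p - p_e))) if p < p_e else 0.0
\end{verbatim}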
\section{Conclusions \& Future Work}
In this paper, we introduce the \textit{worker-station} Multi-robot System (MRS) to solve the Multi-robot Coverage Path Planning (mCPP) problem, which can be generalized to various applications in the real world.
We provide a fully cooperative multi-agent reinforcement learning formulation of the above problem, and propose an end-to-end decentralized online planning method based on Deep Reinforcement Learning.
Our method simultaneously plans for \textit{workers} and \textit{station} to work together and utilize the mobility of each robot toward the coverage task.
We conduct an ablation study, simulation experiments, and real-robot experiments and demonstrations.
The experimental results show that our method is more efficient in planning for \textit{workers} and \textit{station}, and can better utilize the mobility of each robot compared with the mobile-station baseline method.
For future work, there are two directions to further improve the coverage task performance based on our method.
First, regulating the trajectories with a graph-based mCPP method would leave fewer uncovered gaps after the \textit{workers} sweep an area.
However, it might potentially increase the time spent on avoiding random dynamic interferers.
Second, by explicitly pre-allocating or negotiating which exhausted \textit{workers} each \textit{station} is responsible for, the \textit{stations} could replenish specific \textit{workers} more efficiently.
\section{Environment Modeling}\label{sec:env_modeling}
In order to solve the planning problem formulated in Eq.~\ref{eq:obj_func} using Deep Reinforcement Learning (DRL) based approaches, we first elaborate on how the environment is modeled, including the coverage task, the observation space, the action space, and the reward design.
\section{Introduction} \label{sec:intro}
For massive large-scale tasks in hazardous environments, a Multi-Robot System (MRS) greatly helps to reduce human exposure to potential dangers and effectively improves efficiency.
Various applications of MRS have come to reality, including search and rescue~\cite{liu2016multirobot}, persistent surveillance~\cite{palacios2016distributed}, and planetary exploration~\cite{schuster2020arches}.
Typically, a robot has only limited working resources, including energy and consumables.
For example, most robots are driven by electrical or thermal energy stored in batteries or fuels in advance.
While some robots can obtain ambient energy from the environment (e.g., solar energy), the energy transfer is highly dependent on environmental conditions, and the recharging process can be slow; relying on it can thus be inefficient for massive long-term tasks.
In scenarios like cleaning or agriculture on large-scale fields, the robot has limited consumables like water or chemicals.
Therefore, robots performing large-scale tasks must constantly travel between supply stations and working areas to replenish and work, which is very inefficient.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figs/problem_demo.pdf}
\caption{Given an arbitrary target coverage area, a mCPP problem for the \textit{worker-station} MRS on planar areas can be decomposed into: 1) coverage planning for \textit{workers} (blue robots) and 2) rendezvous planning for \textit{station} (yellow robot). There are random dynamic interferers (red robots) in the environment.}
\label{fig:problem_demo}
\vspace{-5mm}
\end{figure}
As discussed by Vaughan et al. in \cite{couture2009adaptive}, the placement of the supply station significantly influences work efficiency for an MRS in the above scenarios.
To further improve the efficiency of the MRS, one might consider making the supply station a mobile robot platform.
Similar to the \textit{Frugal Feeding Problem} in \cite{litus2007frugal}, the station moves around to serve the working robots.
For consistency in this paper, we name such an MRS the ``\textit{worker-station}'' MRS, which is composed of a mobile supply station robot and several working robots.
We consider the Multi-robot Coverage Path Planning (mCPP) problem on planar areas for the aforementioned \textit{worker-station} MRS.
As shown in Fig.~\ref{fig:problem_demo}, the \textit{workers} are equipped with a range device for general area coverage work, and the \textit{station} is loaded with sufficient resources to provide supplies for \textit{workers}.
The joint objective is to cover a given target area as soon as possible.
Solving such a planning problem for the \textit{worker-station} MRS can be decomposed as below:
\begin{enumerate}
\item Coverage planning for each \textit{worker} to finish general planar coverage work of a given area;
\item Rendezvous planning for the \textit{station} to service \textit{workers} in need of replenishment;
\end{enumerate}
In this paper, we mainly focus on solving the mCPP problem for the \textit{worker-station} MRS on planar areas.
There are several challenges to the above problem.
First, the joint problem space composed of the above two planning problems is too large to solve directly and simultaneously.
A practical solution is to discretize state and action spaces in mCPP problems \cite{almadhoun2019survey} and rendezvous planning problems \cite{litus2008distributed}, then solve by discrete combinatorial optimization methods separately.
However, the system dynamics are hard to model and identify, since each robot has different capabilities and functionality.
Thus, such methods can still be infeasible for such a complex MRS, even after reducing the problem size.
Secondly, planning with dynamic collision avoidance is another challenge for most offline planning methods.
One general solution is to combine offline planning with local collision avoidance controllers.
Nevertheless, such a hierarchical planning scheme would alter the optimal policy, which is planned offline without accounting for the interference of dynamic obstacles.
For complex scheduling tasks like the mCPP problem for \textit{worker-station} MRS, it will cause conflicts and even deadlocks between robots or planners, and the planning efficiency will get worse as the number of robots grows~\cite{wang2017safety}.
To tackle the above challenges, we adopt Deep Reinforcement Learning (DRL) to solve the mCPP problem for \textit{worker-station} MRS.
However, the coordination behaviors of different agents in the \textit{worker-station} MRS are nontrivial to learn together, and agents often struggle between exploration and exploitation of the coverage task during training.
We summarize the main contributions of this paper below:
\begin{enumerate}
\item
We propose an end-to-end decentralized online planning method to solve the mCPP problem for the \textit{worker-station} MRS.
Our method manages to reduce the influence of random dynamic interferers on planning, while the robots can avoid collisions with them.
\item
We design a two-stage curriculum learning scheme with an intrinsic curiosity module and a soft approximation of the \textit{workers}' energy constraints, which successfully guides the training for large-scale coverage tasks.
\item
We provide ablation study, simulation, and real robot experimental results.
The results show that our method outperforms decomposition-based and graph-based baseline methods in coverage finish time metrics.
\end{enumerate}
\section{Problem Formulation}\label{sec:problem}
In this section, we provide our Multi-agent Reinforcement Learning (MARL) problem formulation of the Multi-robot Coverage Path Planning (mCPP) problem on planar areas, for the \textit{worker-station} Multi-robot System (MRS) (see Fig.~\ref{fig:problem_demo}).
Given target area $\Omega$, the \textit{worker-station} MRS consists of $m$ \textit{workers} $\worker{i}$ and $n$ \textit{stations} $\station{j}$, where $i=1,2,...,m$ and $j=1,2,...,n$.
The goal is to find the optimal policy for each robot in the \textit{worker-station} MRS, to minimize the coverage task finish time while avoiding collisions with dynamic interferers in the environment.
\subsection{Preliminaries}
In this subsection, we first introduce several preliminary concepts and assumptions in the rest of the paper.
\subsubsection{\textit{Worker} robot and \textit{station} robot}
we consider that both \textit{workers} and \textit{stations} have limited ranges of perception and communication:
within the perception range, each robot can detect collisions and objects precisely; within the communication range, each robot can receive information from other robots (e.g., the rough global positions of other robots).
As mentioned previously for the \textit{worker-station} MRS, \textit{workers} also have limited energy, while \textit{stations} have unlimited energy to replenish \textit{workers}.
Note that the coverage work range of \textit{worker} does not necessarily equal its perception range.
\subsubsection{Energy Capacity and Rendezvous Recharge}
denote the energy capacity for \textit{worker} $\worker{i}$ as a constant $\capacity{i}$.
Suppose the current energy left for $\worker{i}$ at time $t$ is $e_t^i$; then the current percentage of remaining energy $p_t^i$ is defined as $p_t^i=\frac{e_t^i}{\capacity{i}}\in[0, 1]$.
A \textit{worker} $\worker{i}$ is said to be ``\textit{exhausted}'' if $p_t^i$ is lower than a threshold $p^e$, otherwise it is said to be ``\textit{normal}''.
Also, since we mainly focus on the planning problem at a higher level in this paper, the local rendezvous of \textit{workers} and \textit{station} is simplified by comparing with a position threshold $\varepsilon$.
We assume a \textit{worker} can be replenished by any \textit{station}; the discharge and recharge for each \textit{worker} $\worker{i}$ is then determined by comparing $\varepsilon$ with the Euclidean distance between the global positions $\boldsymbol{x}_t^{\worker{i}}$ and $\boldsymbol{x}_t^{\station{j}}$ of \textit{worker} $\worker{i}$ and \textit{station} $\station{j}$:
\begin{equation}\label{eq:energy_model}
e_t^i = \begin{cases} \text{max}\{0,\;e_{t-1}^i - e_{\text{discharge}}\},& \Vert\boldsymbol{x}_t^{\worker{i}} - \boldsymbol{x}_t^{\station{j}}\Vert > \varepsilon \\
\text{min}\{\capacity{i},\; e_{t-1}^i + e_{\text{charge}}\},& \Vert\boldsymbol{x}_t^{\worker{i}} - \boldsymbol{x}_t^{\station{j}}\Vert \leq \varepsilon
\end{cases}
\end{equation}
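A direct transcription of Eq.~\ref{eq:energy_model} as a minimal Python sketch (the argument names are ours):
\begin{verbatim}
import numpy as np

def update_energy(e_prev, capacity, x_worker, x_station,
                  eps, e_discharge, e_charge):
    # Discharge when away from the station; recharge (up to
    # capacity) when within the rendezvous threshold eps.
    dist = np.linalg.norm(np.asarray(x_worker) - np.asarray(x_station))
    if dist > eps:
        return max(0.0, e_prev - e_discharge)
    return min(capacity, e_prev + e_charge)
\end{verbatim}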
\subsubsection{Coverage Task and Synchronized Coverage Area}
coverage by \textit{workers} is only carried out when a \textit{worker} has energy left.
Once a \textit{worker} starts to recharge, it is released from the \textit{station} if and only if it is fully recharged (i.e., $p_t^i=1$).
Denote the coverage area of each \textit{worker} $\worker{i}$ at time $t$ as $\mathcal{C}_t^i$.
Here we assume the overall covered area $\mathcal{C}_t$ at time $t$ can be updated and synchronized among all agents during the task.
The update and synchronization of $\mathcal{C}_t$ can be implemented by mutual information exchange among robots within the communication range: $\mathcal{C}_t = \bigcup_{t'=0}^{t} \bigcup_{i=1}^{m} \mathcal{C}_{t'}^{i}$.
\begin{figure}[!htp]
\vspace{-1mm}
\centering
\includegraphics[width=0.7\linewidth]{figs/CovTaskModeling/coverage_task_modeling.pdf}
\caption{Modeling the coverage task by uniform sampling.}
\label{fig:coverage_task_modeling}
\vspace{-1.5mm}
\end{figure}
For a released \textit{worker} $\worker{i}$ with energy left (i.e., $p_t^i>0$), the coverage area $\mathcal{C}_t^i$ at time $t$ is determined by uniform sampling as depicted in Fig.~\ref{fig:coverage_task_modeling}:
given the sampling resolution $m_\text{rasterization}$, the coverage area $\mathcal{C}_t^i$ is approximated by uniformly sampling the coordinates within the shape boundaries of the target coverage area $\Omega$ (i.e., rasterization).
Then, the coverage area is represented by a set of coordinates in planar space.
Thus, the union operations on coverage areas $\mathcal{C}_t$ turn into set union operations, which is computationally tractable using a hash-set compared with area union operations.
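A sketch of this rasterized coverage bookkeeping in Python (the disk-shaped cover footprint and the function name are our assumptions):
\begin{verbatim}
def covered_cells(worker_xy, cover_radius, resolution):
    # Rasterize one worker's circular coverage footprint into grid
    # cells so that area unions reduce to cheap hash-set unions.
    cells = set()
    r = int(cover_radius / resolution) + 1
    cx = int(round(worker_xy[0] / resolution))
    cy = int(round(worker_xy[1] / resolution))
    for i in range(cx - r, cx + r + 1):
        for j in range(cy - r, cy + r + 1):
            if ((i - cx) ** 2 + (j - cy) ** 2) * resolution ** 2 \
                    <= cover_radius ** 2:
                cells.add((i, j))
    return cells

# Per step: C_t = C_prev | covered_cells(xy, radius, resolution)
\end{verbatim}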
\subsection{Multi-agent Reinforcement Learning}
We first introduce the Decentralized Partially Observable Markov Decision Process (Dec-POMDP)~\cite{oliehoek2016concise} denoted by $\left(\mathcal{R}, \mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{O}, r, b \right)$, where $\mathcal{R}$ is the set of agents, $\mathcal{S}$ is the joint state space, $\mathcal{A}$ is the joint action space, $\mathcal{P}$ is the state-transition model, $\mathcal{O}$ is joint observation space, $r$ is the shared reward function and $b$ is the initial state distribution.
With a shared reward function $r$, we can formulate the MARL problem by single-agent reinforcement learning objective~\cite{zhang2021multi}:
\begin{equation}
\begin{split}\label{eq:marl}
\max_\pi \;\mathbb{E}_{s_0 \sim b}\left[\sum_{t\leq T} \gamma^{t} r\left(s_{t}, a_{t}, s_{t+1}\right)\mid a^i_t\sim\pi(\cdot\mid o^i_t)\right]
\end{split}
\end{equation}
where $\pi$ is the policy and $\gamma$ is reward discount factor, $s_t\in \mathcal{S}$ and $a_t\in\mathcal{A}$ are the joint state and action of agents at time $t$ respectively.
Given the initial state of agents $s_0$ and the observation $o^i_t$ at time $t$, the action $a^i_t$ is sampled from the policy $\pi$.
The goal is to maximize the expected discounted reward within a time horizon of $T$.
Note that Eq.~\ref{eq:marl} can be adopted to agents with different observations or functionalities (each corresponds to a different policy), as long as they share a common reward function.
\subsection{\textit{Worker-Station} MRS Coverage Task Formulation}
As introduced previously, the \textit{worker-station} MRS consists of two types of agents with different functionality: the \textit{workers} are responsible for coverage work with limited energy, whereas the \textit{station} is responsible for replenishing \textit{workers} with unlimited energy.
Thus following Eq.~\ref{eq:marl}, we define the agents $\mathcal{R}=\{W^i\}_{i=1}^m \bigcup \{S^j\}_{j=1}^n$ as the set of \textit{stations} and \textit{workers}, and $\mathcal{A}=\{a^i\}_{i=1}^{m+n}$ are the actions of \textit{workers} and \textit{stations} sequentially.
We define policy $\pi_\phi$ and policy $\pi_\theta$ for \textit{stations} and \textit{workers} respectively, and a shared reward function $r$ for both agents.
Then, we can formalize the mCPP problem for \textit{worker-station} MRS as a fully cooperative MARL problem~\cite{busoniu2008comprehensive}:
\begin{equation}\label{eq:marl_mcpp}
\pi^*_\theta\;, \pi^*_\phi = \argmax_{\pi_\theta, \pi_\phi}\;\mathbb{E}\left[\sum_{t\leq \text{min}\{T, T_{\text{finish}}\}} \gamma^{t} r\left(s_{t}, a_{t}, s_{t+1}\right)\right]
\end{equation}
where $a^i_t\sim\pi_\theta(\cdot\mid o^i_t), i=1,2,...,m$ are the actions of \textit{workers}, and $a^j_t\sim\pi_\phi(\cdot\mid o^j_t), j=m+1,m+2,...,m+n$ are the actions of \textit{stations}.
Note that the planning horizon $T$ is replaced by the minimum of the original time horizon $T$ and the coverage task finish time $T_\text{finish}$.
In order to finish the cooperative coverage task as soon as possible, with collision avoidance and rendezvous to recharge, we define the shared reward function $r$ as below:
\begin{equation}\label{eq:worker_reward}
r = \sum_{i=1}^{m}\reward{c} + \sum_{i=1}^{m}\reward{e} + r_{\text{collision}} + r_{\text{time}}
\end{equation}
The first component $\reward{c}$ is the covering reward for $i$-th \textit{worker}, which is the only positive term to guide the coverage planning of \textit{workers}.
The second component $\reward{e}$ is a penalty term added when a \textit{worker}'s remaining energy is nearly exhausted, which guides the rendezvous planning of the \textit{station}.
The details of the first two reward components are elaborated in Sec.~\ref{sec:approach}.
The third component $r_{\text{collision}}$ is a constant collision penalty whenever a collision occurs, which guides the agents for dynamic collision avoidance.
The last component $r_{\text{time}}$ is a constant time penalty in each time step whenever the coverage task has not finished, which helps to find more time-efficient planning policies.
Since the total coverage reward over the target area $\Omega$ is a constant (i.e., only newly covered area provides rewards), and the other three penalty terms stop accumulating once the coverage task is finished, only a time-optimal coverage and rendezvous planning policy with collision avoidance can reach the optimal task performance.
Therefore, by designing and selecting appropriate rewards for the above components in Eq.~\ref{eq:marl_mcpp}, we can apply MARL algorithms to train the agents of the \textit{worker-station} MRS to coordinate with each other toward theoretically optimal performance on the coverage task.
\section{Related Work} \label{sec:related-work}
\subsection{Multi-robot Coverage Path Planning}
The mCPP problem evolved from the classical Coverage Path Planning (CPP) problem by introducing multiple robots to solve the coverage problem.
Most approaches are based on graph structures, and the problem is proven to be NP-hard \cite{MFC-IROS'05}.
Zheng \textit{et al.} designed a constant-factor approximation algorithm in polynomial time \cite{MFC-2010}.
Kapoutsis \textit{et al.} use an area division algorithm to allocate tasks for multiple robots \cite{kapoutsis2017darp}.
Apart from graph-based methods, decomposition-based methods also take large parts in the literature~\cite{rekleitis2008efficient, collins2021scalable}, which first partition the target area into obstacle-free convex sub-regions for different robots and then apply single robot coverage planning for each robot separately.
Most graph-based or decomposition-based mCPP methods do offline planning, and some also require the coverage area to satisfy specific assumptions (e.g., convex-shaped area).
In addition, classic offline mCPP methods can be undermined by random dynamic interferers in the environment.
On the other hand, some recent works have been extending the mCPP problem to various applications with specific constraints, such as geophysical surveys~\cite{azpurua2018multi} and fault-tolerant planning in large-scale outdoor environments~\cite{sun2021ft}.
However, to the best of our knowledge, there are few works on the mCPP problem for the aforementioned \textit{worker-station} MRS.
\subsection{\textit{Worker-Station} Multi-robot System}
Similar to the \textit{worker-station} MRS, related works on mobilizing the supply station into an autonomous robot mainly focus on rendezvous planning for \textit{station} to efficiently recharge the \textit{workers} in need.
For example, Couture et al. \cite{couture2009adaptive} plan only for the \textit{station}, while the \textit{workers} are dedicated to delivering goods between fixed sources and destinations.
Similarly, in \cite{mathew2015multirobot}, only rendezvous planning of \textit{stations} is considered, whereas the \textit{workers} are programmed to monitor the environment by predefined trajectories persistently.
Most of these works consider the \textit{workers} to be stationary in terms of their motion patterns and state transitions, which reduces the complexity to a solvable level for optimization.
More recently, Yu et al. \cite{yu2018algorithms} tried to solve both planning problems for one \textit{worker} and one \textit{station}, but it is restricted to node coverage for a given graph.
Similarly in Sun et al. \cite{sun2020unified}, the \textit{worker} is planned to travel between waypoints, while the \textit{station} is planned to rendezvous to charge the \textit{worker}.
Seyedi et al. \cite{seyedi2019persistent} planned trajectories for multiple \textit{workers} and one \textit{station} with scheduled charging order, which also requires a prior of the environment.
Although the above works managed to plan for both \textit{workers} and \textit{station}, they are only applicable to convex target areas with static obstacles, and thus infeasible for an arbitrary target area with dynamic interferers.
\section{Experiments \& Results} \label{sec:exp}
\subsection{Implementation Details}
Since we mainly focus on strategy-level planning problems in this paper, we use Unity and ML-Agents toolkit~\cite{juliani2020unity} to build the environment and system.
The dynamic interferers are modeled in a loop: each first moves at a constant speed in a random direction for a given period, and then rotates by a random angle.
This loop repeats until the coverage task finishes.
Also, a \textit{worker} can only be replenished when it is exhausted (i.e., $p_t^i<p^e$) and near a \textit{station}.
\subsection{Simulation Results}
We modeled three simulation scenes in Unity to conduct simulation experiments, including an ablation study and a coverage task performance comparison.
\begin{figure}[h!t]
\vspace{-3mm}
\centering
\includegraphics[width=0.9\linewidth]{figs/sim_scene.pdf}
\caption{Modeled simulation scenes in Unity. The target coverage areas are bounded within the grey obstacle areas.}
\label{fig:sim_scene}
\vspace{-5.5mm}
\end{figure}
\begin{table}[ht]
\begin{center}\begin{tabular}{||c||c|c|c|c||}
\hline
\textbf{test-case name} & \textbf{\textit{star}} & \textbf{\textit{corridor}} & \textbf{\textit{cuhksz-1}} & \textbf{\textit{cuhksz-2}} \\
\hline
\textbf{target area size} & 30$\times$30 & 120$\times$50 & 180$\times$60 & 180$\times$60\\
\hline
\textbf{\textit{worker} cover radius} & 4 & 4 & 2 & 2\\
\hline
\textbf{\# of \textit{workers}} & 2 & 3 & 3 & 6\\
\hline
\textbf{\# of \textit{stations}} & 1 & 1 & 1 & 2\\
\hline
\textbf{\# of \textit{interferers}} & 1 & 6 & 6 & 6 \\
\hline
\end{tabular}
\end{center}
\caption{Design details of simulation test-cases.}
\label{tab:design_sim_scene}
\vspace{-3mm}
\end{table}
As in Fig.~\ref{fig:sim_scene}-(a) and (b), two irregularly shaped scenes are used in the simulation experiments, where the robots of the \textit{worker-station} MRS start at their initial positions.
In addition, we modeled the CUHKSZ campus for coverage work in Fig.~\ref{fig:sim_scene}-(c), where the buildings are considered static obstacles.
Fig.~\ref{fig:sim_traj} shows the motion trajectories of the \textit{worker-station} using our planning method in simulation test-cases.
\begin{figure}[h!t]
\vspace{-3mm}
\centering
\includegraphics[width=\linewidth]{figs/sim_traj.pdf}
\caption{Motion trajectories of the \textit{worker-station} MRS using our method.}
\label{fig:sim_traj}
\vspace{-3mm}
\end{figure}
Based on the three modeled scenes, we designed four test-cases described in Tab.~\ref{tab:design_sim_scene}.
Note that in \textit{cuhksz-2} test-case, only the left-bottom group of robots is included for the coverage work.
\subsubsection{Ablation Study}
to validate the effects on training performance brought by our curriculum learning design and ICM, we conducted an ablation study in the \textit{corridor} test-case.
We also trained a centralized policy with PPO~\cite{schulman2017proximal} to validate the benefits of the CTDE decentralization paradigm.
In short, the PPO agent takes as observation images encoding object positions, which must cover the whole coverage area at a resolution as high as that of the perception-range observation.
Therefore, it is much larger than the image observations for each ego agent in CTDE.
With the same visual encoder in Fig.~\ref{fig:sys_arch}, the feature vectors are fed into MLP of the same size to output the joint actions that are distributed to each robot.
It is evident in Fig.~\ref{fig:ablation_study} that a centralized policy using PPO failed in our problem.
Such a failure is largely due to: 1) the difficulty of training a much larger network (about $1.5\times10^6$ parameters for centralized PPO versus $0.1\times10^6$ parameters for each of the \textit{worker} and \textit{station} policies with CTDE); and 2) the lack of cooperation between agents with centralized PPO.
We now compare the results within the CTDE paradigm.
For the two-stage curriculum learning, as shown in Fig.~\ref{fig:ablation_study}, when training from scratch without the curriculum, agents in the \textit{worker-station} MRS get stuck at a locally optimal policy and cannot finish the coverage task.
Initializing from stage-I provides basic policy networks for both \textit{worker} and \textit{station}, which vastly improves the sample efficiency and guides the training procedure.
As for ICM, we first initialize policy networks from a pre-trained Stage-I curriculum learning.
As shown in Fig.~\ref{fig:ablation_study}-(b), the task finish time $T_\text{finish}$ shows that agents trained with ICM outperform agents trained without it, which is reflected in the reward gap between the two training curves (green and black) in Fig.~\ref{fig:ablation_study}-(a).
Such a reward gap results from the earlier finish of the coverage task, which eliminates more of the accumulating time penalty $r_\text{time}$.
\begin{figure}[h!t]
\vspace{-3mm}
\centering
\includegraphics[width=0.97\linewidth]{figs/Ablation/ablation.pdf}
\caption{Ablation study of two-stage curriculum learning, Intrinsic Curiosity Module (ICM) and centralized PPO in \textit{corridor} test-case.}
\label{fig:ablation_study}
\vspace{-2.5mm}
\end{figure}
\subsubsection{Decomposition-based and Graph-based Baselines}
to evaluate the coverage task performance, we adapted graph-based and decomposition-based mCPP methods into several heuristic baseline methods on a discretized state space.
In addition, since these offline centralized mCPP baseline methods have no ability to avoid collisions with dynamic interferers, we adopt a \textit{wait-and-move} policy for all baseline methods.
\textbf{Mobile-BCD:} for the decomposition-based baseline method, we follow the Boustrophedon Cellular Decomposition (BCD) algorithm~\cite{choset2000coverage} and adapt it into the so-called \textit{mobile-BCD} for our problem.
We briefly describe the procedure: 1) the map is initially decomposed into cells via BCD; 2) in each cell, back-and-forth trajectories on the uncovered area are generated and evenly distributed to \textit{workers}; 3) the \textit{stations} always move to the nearest exhausted \textit{worker} to replenish it.
\textbf{Static-MSTC*:} for graph-based mCPP baseline methods, we first modify the state-of-the-art mCPP algorithm MSTC*~\cite{tang2021mstc} into a static \textit{stations} version, namely \textit{static-MSTC*}.
In order to account for the continuous energy capacity of \textit{workers} in our problem setting, the key modification in static-MSTC* is the constraint approximation from the node-based energy capacity constraint in the original MSTC* to the travel time-based constraint as in Eq.~\ref{eq:energy_model}.
\textbf{Mobile-MSTC*:} based on the static-MSTC* method, we further mobilize the \textit{stations} and design the \textit{mobile-MSTC*} baseline as follows:
1) the target area is first decomposed into sub-regions via \textit{k-means clustering};
2) depth-first-search is applied to plan for the \textit{stations} loaded with \textit{workers}, to travel to the center of next uncovered sub-region;
3) at each sub-region, the \textit{workers} cover the area via the static-MSTC* baseline method.
Note that the $k$ value in the \textit{k-means clustering} algorithm is chosen according to the capacity $\capacity{i}$ to make partitions suitable for efficient planning.
\subsubsection{Coverage Task Performance}
we compared the coverage task finish time $T_\text{finish}$ among our method and the above baseline methods on all the test-cases in Tab.~\ref{tab:design_sim_scene}.
The smaller $T_\text{finish}$ is, the better the planning strategy for the mCPP problem is.
Note that to adopt the mobile-MSTC* baseline method, we decompose the CUHKSZ map into two equal-sized areas, and each area runs the mobile-MSTC* baseline separately to finish the coverage work.
\textbf{Test-cases \textit{star}, \textit{corridor}, \textit{cuhksz-2}:}
we first compare static-MSTC* with mobile-MSTC* and mobile-BCD.
In test-cases \textit{star} and \textit{corridor}, the performance of mobile-MSTC* and mobile-BCD is nearly the same as that of static-MSTC*.
In test-case \textit{cuhksz-2}, mobile-MSTC* and mobile-BCD manage to improve task performance by mobilizing the \textit{station} and planning for the \textit{station} and \textit{workers} separately.
However, there remains substantial room for improvement: the mobile-MSTC* and mobile-BCD baselines, which plan separately for \textit{stations} and \textit{workers} under dynamic interferers, still perform inefficiently.
We now compare our method with baseline methods.
In general, the comparison results on the three test-cases show that our planning method generates well-coordinated coverage and rendezvous behaviors for \textit{workers} and \textit{station}, which leads to better performance in terms of $T_\text{finish}$ after around $0.5\times10^{7}$ to $1.5\times10^{7}$ training steps.
Compared with the mobile-MSTC* baseline in test-cases \textit{corridor} and \textit{cuhksz-2} with larger target areas, our method unlocks more benefits by mobilizing the \textit{station} and utilizing the mobility of each robot in the MRS.
Interestingly, when an exhausted \textit{worker} leaves the perception or communication range of the \textit{station}, the rendezvous for recharge is still possible, as the \textit{stations} explore to search for exhausted \textit{workers}.
\begin{figure}[h!tp]
\vspace{-3mm}
\centering
\includegraphics[width=0.97\linewidth]{figs/CovTaskPerformance/performance.pdf}
\caption{Comparisons of the coverage task finish time.}
\label{fig:cov_task_performance}
\vspace{-3mm}
\end{figure}
\textbf{Test-case \textit{cuhksz-1}:}
compared with test-case \textit{cuhksz-2}, the MRS of test-case \textit{cuhksz-1} works in the same CUHKSZ scene but with the numbers of \textit{workers} and \textit{stations} reduced by half.
Our method performs nearly on par with mobile-MSTC*, the best baseline here, mainly for the following reason:
with fewer \textit{workers} and a comparably smaller cover radius, \textit{workers} with a continuous action space leave uncovered gaps during work, especially when trying to avoid static obstacles or dynamic interferers.
These uncovered gaps force the \textit{workers} to revisit some regions, so our method does not perform as well in this test-case as in the other three.
\subsection{Real Robot Performance}
As a complement to the simulated environments, we also conduct hardware experiments of our method on real robots.
As shown in Fig.~\ref{fig:realexp}, we tested our method in the \textit{star} scene, of which we made a replica in the real world.
The \textit{worker-station} MRS consists of two \textit{workers} (black differential-drive wheeled robots) and one \textit{station} (yellow skid-steer wheeled robot); the dynamic interferer is a quadruped robot.
A \textit{worker} is considered replenished once its distance to the \textit{station} is smaller than a threshold.
We use PID controllers for the velocity commands of all robots, with a motion capture system providing their global position information.
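For reference, a textbook single-axis PID step is sketched below; the class layout is generic and the gains are not the values used on our robots.
\begin{verbatim}
# Generic PID step used to track velocity commands (illustrative sketch).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, target, measured):
        err = target - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
\end{verbatim}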
\begin{figure}[htp]
\vspace{-2.5mm}
\centering
\includegraphics[width=0.9\linewidth]{figs/real_robot_exp.pdf}
\caption{Real robot demonstration of our planning method.}
\label{fig:realexp}
\vspace{-2mm}
\end{figure}
Fig.~\ref{fig:realexp}-(a) depicts the \textit{worker-station} MRS with our method, where the \textit{station} is moving towards \textit{worker} \#2 to replenish it and \textit{worker} \#1 is executing coverage work.
Fig.~\ref{fig:realexp}-(b) compares the coverage area between simulation and the real robots: the blue area is covered by the \textit{workers} and the green area is the motion range of the \textit{station}. The whole coverage task in the real world took $140$ seconds.
\subsection{Discussions}
To the best of our knowledge, there is no existing method for online and simultaneous planning for the \textit{worker-station} MRS.
Therefore, we chose naive heuristics-based planning approaches as baselines, which would themselves require extensive research and hyperparameter tuning to reach optimal performance.
There are several limitations of our DRL-based planning method.
First, when there are comparably few \textit{workers} with a small cover radius, the unregulated trajectories of \textit{workers} with a continuous action space leave more uncovered gaps that require revisiting.
Second, since our method mainly focuses on strategy-level planning for \textit{workers} and \textit{station} towards the coverage task, we use a relatively simple controller to execute the generated velocity actions on real robots, which can cause a performance gap between simulation and reality.
\section{Introduction}
Test case generation can be modelled as an optimisation problem, and so different kinds of search algorithms can be used to address it~\cite{harman2012search}.
There can be different objectives to optimise, like for example branch coverage or the detection of mutants in the system under test (SUT).
When aiming at maximising these metrics, often the sought solutions are not single test cases, as a single test cannot cover all the objectives in the SUT.
Often, the final solutions are sets of test cases, usually referred to as \emph{test suites}.
There are many different kinds of search algorithms that can be used for generating test suites.
The most famous is perhaps the Genetic Algorithm (GA), which is often the first choice when addressing a new software engineering problem for the first time.
But it can well happen that on specific problems other search algorithms could be better.
Therefore, when investigating a new problem, it is not uncommon to evaluate and compare different algorithms.
On average, no search algorithm can be best on all possible problems~\cite{WoM97}.
It is not uncommon that, even on non-trivial tasks, simpler algorithms like (1+1)~Evolutionary Algorithm (EA) or Hill Climbing (HC) can give better results than GA (e.g., as in~\cite{ali2013generating}).
A major factor affecting the performance of a search algorithm is the so called \emph{search budget}, i.e., for how long the search can be run, usually the longer the better.
But the search budget is also strongly related to the tradeoff between the \emph{exploitation} and \emph{exploration} of the search landscape.
If the budget is low, then a population-based algorithm like GA (which puts more emphasis on the exploration) is likely to perform worse than a single, more focused individual-based algorithm like HC or (1+1)~EA.
On the other hand, if the search budget is large enough, the exploration made by the GA can help it escape the so-called local optima in which HC and (1+1)~EA can easily get stuck.
To obtain even better results, then one has to design specialised search algorithms that try to exploit the specific properties of the addressed problem domain.
In the case of test suite generation, there are at least the following peculiarities:
\begin{itemize}
\item testing targets can be sought \emph{independently}. Given an existing test suite, to cover the remaining testing targets (e.g., lines and branches), you can create and add new tests without the need to modify the existing ones in the suite.
At the end of the search, one wants a minimised test suite, but that is a secondary objective compared to code coverage.
\item testing targets can be strongly related (e.g., two nested branches), as well as being completely independent (e.g., code in two different top-level functions with no shared state).
\item some testing targets can be \emph{infeasible}, i.e., impossible to cover. There can be different reasons for it, e.g., dead code, defensive programming or the testing tool not handling all kinds of SUT inputs (e.g., files or network connections). Detecting whether a target is feasible or not is an undecidable problem.
\item for non-trivial software, there can be a very large number of objectives. This is especially true not only for system-level testing, but also for unit testing when mutation score is one of the coverage criteria~\cite{mutation2015emse}.
Traditional multi-objective algorithms are ill suited to tackle large numbers of objectives~\cite{li2015many}.
\end{itemize}
In this paper, we propose a novel search algorithm that exploits such characteristics and, as such, it is specialised for test suite generation (and any problem sharing those properties).
We call it the Many Independent Objective (MIO) algorithm.
We carried out an empirical study to compare the MIO algorithm with the current state-of-the-art, namely the Whole Test Suite~\cite{GoA_TSE12} approach and the Many-Objective Sorting Algorithm~\cite{dynamosa2017}.
On a set of artificial software with different characteristics (clear gradients, plateaus, deceptive local optima and infeasible targets), in most cases MIO achieves higher coverage.
This was also confirmed with unit test experiments on three numerical functions.
\section{Background}
\subsection{Whole Test Suite (WTS)}
The Whole Test Suite~\cite{GoA_TSE12} approach was introduced as an algorithm to generate whole test suites.
Before that, typical test case generators were targeting only single objectives, like specific lines or branches, using heuristics like the \emph{branch distance} and the \emph{approach level} (as for example done in~\cite{HaM10}).
In the WTS approach, a GA is used, where an individual in the GA population is a set of test cases.
Mutation and crossover operators can modify both the set composition (i.e., remove or add new tests) and its content (e.g., modify the tests).
As fitness function, the sum of all branch distances in the SUT is used.
At the end of the search, the best solution in the population is given as output test suite.
To avoid losing good tests during the search, the WTS can also be extended to use an \emph{archive} of best tests seen so far~\cite{rojas2016detailed}.
\subsection{Many-Objective Sorting Algorithm (MOSA)}
The Many-Objective Sorting Algorithm (MOSA)~\cite{dynamosa2017} was introduced to overcome some of the limitations of WTS.
In MOSA, each testing target (e.g., lines) is an objective to optimize.
MOSA is an extension of NSGA-II~\cite{deb2002fast}, a very popular multi-objective algorithm.
In MOSA, the population is composed of tests, not test suites.
When a new target is covered, the test covering it gets stored in an archive, and such target is not used any more in the fitness function.
A final output test suite is composed by the best tests found during the search and that are stored in the archive.
In NSGA-II, selection is based on ranks (from 1 on, where 1 is the best): an individual that subsumes many other individuals gets a better rank, and so it is more likely to be selected for reproduction.
One of the main differences of MOSA compared to NSGA-II is the use of the \emph{preference sorting criterion}: to avoid losing the best individuals for a given testing target, for each uncovered testing target the best individual gets the best rank (0 in MOSA), regardless of its subsuming relations with the other tests.
\section{The MIO Algorithm}
\subsection{Core Algorithm}
\label{sec:core}
Both WTS and MOSA have been shown to provide good results, at least for unit test generation~\cite{GoA_TSE12,rojas2016detailed,dynamosa2017}.
However, both algorithms have intrinsic limitations, like for example:
\begin{itemize}
\item population-based algorithms like WTS and MOSA do put more emphasis on the exploration of the search landscape, which is not ideal in constrained situations of limited search budgets, like for example in system-level testing where each test case execution can be computationally expensive.
Letting the user tune the population size parameter is not a viable option, unless it is done automatically (but even then, it has side effects, as we will see in the empirical study).
\item although once a target is covered it is not used any more for the fitness function, the individuals optimised for it would still be in the population.
They will die out eventually after a few generations, but, until then, their presence in the population can hamper the search if those covered targets are unrelated to the remaining non-covered targets.
\item in the presence of infeasible targets, some tests can get a good fitness score (e.g., a branch distance close to 0) although they will never cover those infeasible targets. Such useless tests might end up taking over a large part of the population.
\item there can be a very large number of objectives to cover, even in the order of hundreds of thousands (e.g., in the system-level testing of industrial systems).
A fixed-size population would simply not work well: if too small, then there would not be enough diverse genetic material in the first generation;
if too large, not only would convergence be drastically slowed down, but the computational cost could also skyrocket (e.g., NSGA-II has a quadratic complexity in the population size).
\end{itemize}
To avoid these limitations, we have designed a novel evolutionary algorithm that we call the Many Independent Objective (MIO) algorithm.
In a nutshell, MIO combines the simplicity and effectiveness of (1+1) EA with a dynamic population, dynamic exploration/exploitation tradeoff and feedback-directed target selection.
The MIO algorithm maintains an archive of tests.
In the archive, \emph{for each} testing target we keep a different population of tests of size up to $n$ (e.g., $n=10$).
Therefore, given $z$ objectives/targets, there can be up to $n \times z$ tests in the archive at the same time.
At the beginning of the search, the archive will be empty, and so a new test will be randomly generated.
From the second step on, MIO will decide to either sample a new test at random (probability $P_r$), or will choose (details later) one existing test in the archive (probability $1-P_r$), copy it, and mutate it.
Every time a new test is sampled/mutated, its fitness is calculated, and it will be saved in the archive if needed (details later).
At this point, we need to define how tests are saved in the archive, and how MIO samples from the archive.
When a test is evaluated, a copy of it might be saved in 0 or more of the $z$ populations in the archive, based on its fitness value.
For each target, there will be a heuristics score $h$ in $[0, 1]$, where 1 means that the target is covered, whereas 0 is the worst possible heuristics value.
For example, if the heuristics is the branch distance $d$, this can be mapped into $[0, 1]$ by using $h = 1/(1+d)$ (where $h=0$ if a branch was never reached and so the branch distance $d$ was not calculated).
For each target $k$, a test is saved in population $T_k$, with $|T_k| \le n$, according to the following rules (a condensed sketch of this bookkeeping is given at the end of this subsection):
\begin{itemize}
\item if $h_k=0$, the test is not added regardless of the following conditions.
\item if the target is covered, i.e. $h_k=1$, the test is added and that population is shrunk to one single individual, and it will never expand again (i.e., it will always be $|T_k|=1$). A new test can \emph{replace} the one in $T_k$ only if it is \emph{shorter} (what counts as size depends on the problem domain, e.g., the number of function calls in unit testing) or, if it is of the same size, only if it has better coverage on the other targets (i.e., a higher sum of heuristics values over all targets).
\item if the population is not full (i.e., $|T_k| < n$), then the test is added. Otherwise, if full (i.e., $|T_k| = n$), the test might replace the worst in the population, but only if not worse than it (but not necessarily better). This means no worse heuristic value or, if the same, no larger size.
\end{itemize}
The idea is that, for each target, we keep a population of candidate tests for it, for which we have at least some heuristics value.
But once a target $k$ is covered, we just need to store the best test, and discard the rest.
Note: if a discarded test in $T_k$ was good for another target $j$, then it would be still stored in $T_j$ anyway, so it is not lost.
When MIO needs to sample one test from the archive instead of generating one at random, it will do the following:
\begin{itemize}
\item choose one target $k$ at random where $|T_k|>0$ and $k$ is not covered (i.e., no test has $h_k=1$). If all non-empty populations are for covered targets, then just choose $k$ randomly among them.
\item choose one test randomly from $T_k$.
\end{itemize}
By using this approach, we aim at sampling tests that have non-zero heuristics (and so guidance) for targets that are not covered yet.
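To make this bookkeeping concrete, a condensed Python sketch of the archive is given below. It compresses the replacement rules (the size-based tie-breaking is reduced to a single comparison), all names are ours, and when every population is empty the caller falls back to random generation.
\begin{verbatim}
# Condensed sketch of the MIO archive: one bounded population per target,
# plus the sampling step described above.
import random

class Archive:
    def __init__(self, num_targets, n=10):
        self.n = n
        self.pops = [[] for _ in range(num_targets)]  # T_k: list of (h, test)
        self.covered = [False] * num_targets

    def update(self, test, h):               # h[k] in [0, 1] for each target
        for k, hk in enumerate(h):
            if hk == 0.0 or (self.covered[k] and hk < 1.0):
                continue                     # no gradient, or k already done
            if hk == 1.0:                    # covered: keep one best test
                if not self.covered[k] or len(test) < len(self.pops[k][0][1]):
                    self.pops[k] = [(hk, test)]
                self.covered[k] = True
            elif len(self.pops[k]) < self.n:
                self.pops[k].append((hk, test))
            else:                            # replace the worst if no worse
                worst = min(range(self.n), key=lambda i: self.pops[k][i][0])
                if hk >= self.pops[k][worst][0]:
                    self.pops[k][worst] = (hk, test)

    def sample(self):
        open_ks = [k for k, p in enumerate(self.pops)
                   if p and not self.covered[k]]
        ks = open_ks or [k for k, p in enumerate(self.pops) if p]
        return random.choice(self.pops[random.choice(ks)])[1]
\end{verbatim}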
\subsection{Exploration/Exploitation Control}
\label{sec:control}
In the MIO algorithm, the two main parameters for handling the tradeoff between exploration and exploitation of the search landscape are the probability $P_r$ of sampling at random and the population size $n$ per target.
Exploration is good at the beginning of the search, but, at the end, a more focused exploitation can bring better results.
Like in Simulated Annealing, we use an approach in which we gradually reduce the amount of exploration during the search.
We define with $F$ the percentage of time after which a focused search should start.
This means that, for some parameters like $P_r$ and $n$, we define two values: one for the start of the search (e.g., $P_r=0.5$ and $n=10$), and one for when the focused phase begins (i.e., $P_r=0$ and $n=1$).
These values will linearly increase/decrease based on the passing of time.
For example, if $F=0.5$ (i.e., the focused search starts after 50\% of the search budget is used), then after 30\% of the search, the value $P_r$ would decrease from $0.5$ to $0.2$.
Note that, when decreasing $n$ during the search leads to populations with $|T_k|>n$, those populations are shrunk by removing their worst individuals.
Once the focused search begins (i.e., $P_r=0$ and $n=1$), then MIO starts to resemble a parallel (1+1)~EA.
When dealing with many objectives, even if there is a clear gradient to cover them in the fitness landscape, there might be simply not enough time left to cover all of them.
In software testing, the final user is only interested in tests that do cover targets, and not in tests that are heuristically close to cover them (e.g., close to solve complex constraints in some branch predicates, but not there yet).
Therefore, between a test suite $A$ that is close to but does not cover 100 targets, and another one $B$ which does cover 1 target and is very far from covering the remaining 99, the final user would likely prefer $B$ over $A$.
To take this insight into account, MIO tries to focus on just few targets at a time, instead of spreading its resources thin among all the left uncovered targets.
For example, in MIO there is an extra parameter $m$ which controls how many mutations and fitness evaluations should be done on the same individual before sampling a new one.
Like $P_r$ and $n$, $m$ varies over time, like starting from $1$ and then increasing up to $10$ when the focused search begins.
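The linear schedule can be written in a few lines; the helper below is ours, and the assertion reproduces the numerical example given above.
\begin{verbatim}
# Linear parameter schedule: a value moves from `start' to `focused' while
# the used fraction of the budget goes from 0 to F, then stays constant.
def scheduled(start, focused, used_fraction, F=0.5):
    t = min(used_fraction / F, 1.0)
    return start + (focused - start) * t

# With F=0.5, P_r drops from 0.5 to 0.2 after 30% of the budget (see text).
assert abs(scheduled(0.5, 0.0, 0.30) - 0.2) < 1e-12
\end{verbatim}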
\subsection{Feedback-Directed Sampling}
\label{sec:fds}
When dealing with many objectives and limited resources, it might not be possible to cover all of them.
As discussed in Section~\ref{sec:control}, the final user is only interested in the actually covered targets, and not on how close we are to cover them.
Therefore, it makes sense to try to focus on targets that we have higher chances to cover.
This is helpful also when dealing with infeasible targets for which any heuristics will just plateau at a certain point.
To handle these cases, we use a simple but yet very effective technique that we call Feedback-Directed Sampling (FDS).
The sampling algorithm from the archive discussed in Section~\ref{sec:core} is modified as follows.
Instead of choosing the target $k$ randomly among the non-covered/non-empty ones, each of these targets will have a counter $c_k$.
Every time a test is sampled from a population $T_k$, then $c_k$ is increased by 1.
Every time a new \emph{better} individual is added to $T_k$ (or replace one of its existing tests), then the counter $c_k$ is reset to 0.
When we sample among the non-covered/non-empty targets, then, instead of choosing $k$ at random, we choose the $k$ with the lowest $c_k$.
As long as we keep obtaining improvements for a target $k$, the chances of sampling from $T_k$ remain high, as $c_k$ gets reset more often.
On the other hand, for infeasible targets, their $c$ will never be reset once they reach their plateau, and so they will be sampled less often.
Similarly, more complex targets will be sampled less often, and so the search concentrates on the easier targets that are not covered yet.
However, this is not an issue because, once an easy target $k$ is covered, we do not sample from $T_k$ any more (recall Section~\ref{sec:core}), unless also \emph{all} the other targets are either covered or with empty $T$.
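In code, FDS only changes the target-selection step of the sampler; a minimal sketch (names ours) follows.
\begin{verbatim}
# Feedback-directed sampling: pick the target whose population improved
# most recently, i.e. the one with the lowest counter c_k.
def pick_target(counters, candidates):
    k = min(candidates, key=lambda t: counters[t])
    counters[k] += 1          # charge one sampling to target k
    return k

def on_improvement(counters, k):
    counters[k] = 0           # reset when T_k gains a better individual
\end{verbatim}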
\section{Empirical Study}
To evaluate the performance of the MIO algorithm, we compared it with random search, MOSA and WTS.
We used two different case studies:
(1) a set of artificial problems with varying, specific characteristics;
(2) three numerical functions.
In this paper, we aim at answering the following research questions:
\begin{description}
\item[{\bf RQ1}:] On which kinds of problem does MIO perform better than Random, MOSA and WTS?
\item[{\bf RQ2}:] What is the impact of tuning parameters for exploration vs.~exploitation of the search landscape in MIO and MOSA?
\item[{\bf RQ3}:] How do the analysed algorithms fare on actual software?
\end{description}
\subsection{Artificial Software}
In this paper, we designed four different kinds of artificial problems.
In all of them, there are $z$ targets, and the search algorithm can be run for up to $b$ fitness evaluations.
A test is defined by two components: an $id$ (e.g., think about it like the name of a method to call in unit testing) and a numeric integer value $x \in [0, r]$ (e.g., think about it like the input to a method call).
Each target $k$ is independent, and can be covered only by a test with $id=k$.
The artificial problems will differ based on their fitness landscape.
Given $g \in [0, r]$ the single global optimum chosen at random,
and given the normalising function $\rho(d) = 1 / (1 + d)$ for distances,
then we have four different cases for each target:
\begin{description}
\item[Gradient:] $h_k = \rho(|x-g|)$. This represents the simplest case where the search algorithm has a direct gradient from $x$ toward the global optimum $g$.
\item[Plateau:] $h_k = \rho(g-x)$ if $g \ge x$, else $h_k = \rho(0.1 \times r)$.
In this case, we have one side of the search landscape (before the value of the global optimum $g$) with a clear gradient. However, the other side is a plateau with a relatively good fitness value (note that $0 \le |g-x| \le r$).
\item[Deceptive:] $h_k = \rho(g-x)$ if $g \ge x$, else $h_k = \rho(1 + r - x)$.
This is similar to the \emph{Plateau} case, where one side of the search landscape has a clear gradient toward $g$. However, the other side has a deceptive gradient toward leaving $g$ and reach the maximum value $r$.
\item[Infeasible:] like \emph{Gradient}, but with a certain number of the $z$ targets having a constant $h_k = \rho(1)$ and no global optimum.
\end{description}
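The four landscapes translate directly into code; a sketch (the function names are ours) is given below.
\begin{verbatim}
# The four artificial fitness landscapes; rho(d) = 1/(1 + d), and g is the
# hidden optimum in [0, r].
def rho(d):
    return 1.0 / (1.0 + d)

def gradient(x, g):
    return rho(abs(x - g))

def plateau(x, g, r):
    return rho(g - x) if g >= x else rho(0.1 * r)

def deceptive(x, g, r):
    return rho(g - x) if g >= x else rho(1 + r - x)

def infeasible(x):
    return rho(1)             # constant 0.5: there is no optimum to find
\end{verbatim}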
We implemented the four search algorithms in which, when a test is sampled, its $id$ and $x$ values are chosen at random within the given valid ranges.
Mutation of $x$ is done by adding or subtracting $2^i$, with $i$ chosen randomly in $[0, 10]$.
We consider mutating $id$ as a \emph{disruptive} operation, and, as such, we only mutate it with low probability $0.01$.
Mutating $id$ means changing both $id$ and $x$ at random (think about mutating a function call with string inputs into another one that requires integers, where the strings $x$ would have no meaning as integers).
All the analysed search algorithms use the same random sampling, mutation operation and archive to store the best tests found so far.
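A sketch of this shared mutation operator follows; the clamping of $x$ to $[0, r]$ is our assumption.
\begin{verbatim}
# Shared mutation operator: perturb x by +/- 2^i or, with low probability,
# disrupt the test by redrawing both id and x.
import random

def mutate(test, r, z, p_id=0.01):
    tid, x = test
    if random.random() < p_id:             # disruptive: new id, new x
        return random.randrange(z), random.randint(0, r)
    step = 2 ** random.randint(0, 10)
    x = x + step if random.random() < 0.5 else x - step
    return tid, max(0, min(r, x))          # clamp to [0, r] (our assumption)
\end{verbatim}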
For the MIO algorithm, we used $F=0.5$, $P_r=0.5$, $n=10$ and max mutations 10.
For MOSA, we used the same settings as in~\cite{dynamosa2017}, i.e. population size 50
and tournament selection size 10.
WTS uses the same population size as MOSA, with up to 50 test cases in the same test suite (i.e., one individual).
A randomly sampled test suite in WTS will have size randomly chosen between 1 and 50.
WTS also has mutation operators to add a new test (probability 1/3) in a test suite, remove one test at random (probability 1/3), or modify one (probability 1/3) like in MIO and MOSA.
WTS also uses a crossover operator with probability 70\% to combine test suites.
\begin{figure}[!t]
\centering
\includegraphics[height=.40\textheight]{generated_files/gradient.pdf}
\caption{\label{fig:gradient}
Coverage results on the \emph{Gradient} problem type, with varying number of targets $z$.
}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[height=.40\textheight]{generated_files/plateau.pdf}
\caption{\label{fig:plateau}
Coverage results on the \emph{Plateau} problem type, with varying number of targets $z$.
}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[height=.40\textheight]{generated_files/deceptive.pdf}
\caption{\label{fig:deceptive}
Coverage results on the \emph{Deceptive} problem type, with varying number of targets $z$.
}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[height=.40\textheight]{generated_files/infeasible.pdf}
\caption{\label{fig:infeasible}
Coverage results on the \emph{Infeasible} problem type, with varying number of infeasible targets on top of 10 \emph{Gradient} ones.
}
\end{figure}
For each problem type but \emph{Infeasible}, we created problems having a variable number of $z$ targets, in particular $z \in \{1, 2, 3, 4, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100\}$, i.e., 15 different values in total, ranging from 1 to 100.
We used $r=1000$.
We ran each of the four search algorithms $100$ times with budget $b=1000$.
As the optima $g$ are randomised, we make sure that the search algorithms run on the same problem instances.
In the case of the \emph{Infeasible} type, we used 10 \emph{Gradient} targets, on which we added a different number of infeasible targets in $\{0, 1, 2, 3, 4, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100\}$, i.e., 16 values in total, with $z$ ranging from $(10+0)=10$ to $(10+100)=110$.
Figure~\ref{fig:gradient} shows the results for the \emph{Gradient} type,
Figure~\ref{fig:plateau} for \emph{Plateau},
Figure~\ref{fig:deceptive} for \emph{Deceptive},
and Figure~\ref{fig:infeasible} for \emph{Infeasible}.
The \emph{Gradient} case (Figure~\ref{fig:gradient}) is the simplest, where the four search algorithms obtain their highest coverage.
MIO and MOSA have very similar performance, which is higher than the one of Random and WTS.
However, on the more difficult case of \emph{Plateau} (Figure~\ref{fig:plateau}), MIO starts to have a clear advantage over MOSA.
For example, from $z=30$ on, MOSA becomes equivalent to Random and WTS, covering nearly no target.
However, in that particular case, MIO can still achieve around 20\% coverage (i.e., 6 targets).
Even for large numbers of targets (i.e., 100, when taking into account that the search budget $b$ is only 1000), MIO can still cover some targets, whereas the other algorithms do not.
The \emph{Deceptive} case (Figure~\ref{fig:deceptive}) is of particular interest: for low numbers of $z$ targets (i.e., up to 10), both
MIO and MOSA perform worse than Random.
From 10 targets on, MOSA is equivalent to Random and WTS, whereas MIO has better results.
This can be explained by taking into account two contrasting factors:
(1) the more emphasis of MIO and MOSA on exploitation compared to the exploration of the search landscape is not beneficial in deceptive landscape areas, whereas a random search would not be affected by it;
(2) MIO does better handle large numbers of targets (Figure~\ref{fig:gradient}), even when there is no gradient (Figure~\ref{fig:plateau}).
The value $z=10$ seems to be the turning point where (2) starts to have more weight than (1).
The \emph{Infeasible} case (Figure~\ref{fig:infeasible}) is where MIO obtains the best results compared to the other algorithms.
For this case, we also ran a further version of MIO in which we deactivated FDS (recall Section~\ref{sec:fds}), as we wanted to study its impact in the presence of infeasible targets.
From 20 infeasible targets on, MOSA, Random and WTS become equivalent, covering nearly no target.
However, MIO can still cover nearly 80\% of the 10 feasible testing targets.
For very large numbers of infeasible targets, like 100, MIO can still cover nearly 40\% of the feasible ones.
This much better performance is mainly due to the use of FDS (see the gap in Figure~\ref{fig:infeasible} between MIO and MIO-noFDS).
However, even without FDS, MIO still achieves better results than the other algorithms.
\begin{result}
{\bf RQ1}: on all the considered problems, MIO is the algorithm that scaled best.
Coverage improvements were even up to 80\% in some cases.
\end{result}
When using a search algorithm, some parameters need to be set, like the population size or crossover probability in a GA.
Usually, common settings in the literature can already achieve good results on average~\cite{arcuri2013parameter}.
Finding tuned settings that work better on average on a large number of different artefacts is not trivial.
Ideally, a user should just choose for how long a search algorithm should run, and not carry out long tuning phases on their own problem artefacts.
Parameter tuning can also play a role in algorithm comparisons: what if a compared algorithm performed worse just because one of its chosen settings was sub-optimal?
Arguably, among the most important parameters for a search algorithm are the ones that most impact the tradeoff between the exploration and the exploitation of the search landscape.
In the case of MIO, this is clearly controlled by the $F$ parameter (low values put more emphasis on exploitation, whereas for high values a large number of tests are simply sampled at random).
In the case of population-based algorithms, the population size can be considered as a parameter to control such tradeoff.
Small populations would reward exploitation, whereas large populations would reward exploration.
\begin{figure}
\centering
\begin{subfigure}{\linewidth}
\includegraphics[width=.45\linewidth]{generated_files/tuningMIO_gradient.pdf}\hfill
\includegraphics[width=.45\linewidth]{generated_files/tuningMOSA_gradient.pdf}
\caption{\emph{Gradient} problem type.}
\end{subfigure}\par\medskip
\begin{subfigure}{\linewidth}
\includegraphics[width=.45\linewidth]{generated_files/tuningMIO_plateau.pdf}\hfill
\includegraphics[width=.45\linewidth]{generated_files/tuningMOSA_plateau.pdf}
\caption{\emph{Plateau} problem type.}
\end{subfigure}\par\medskip
\begin{subfigure}{\linewidth}
\includegraphics[width=.45\linewidth]{generated_files/tuningMIO_deceptive.pdf}\hfill
\includegraphics[width=.45\linewidth]{generated_files/tuningMOSA_deceptive.pdf}
\caption{\emph{Deceptive} problem type.}
\end{subfigure}
\caption{\label{fig:tuning}
Tuning of $F$ for MIO (left side) and population size for MOSA (right side).}
\end{figure}
To study these effects, we carried out a further series of experiments on the \emph{Gradient}, \emph{Plateau} and \emph{Deceptive} problem types.
For MIO, we studied six different values for $F$, in particular $\{0, 0.2, 0.4, 0.6, 0.8, 1\}$.
For MOSA, we studied six different values for the population size, i.e. $\{4, 8, 16, 32, 64, 128\}$.
Each experiment was repeated 100 times.
Figure~\ref{fig:tuning} shows the results of these experiments.
For MIO, the results in Figure~\ref{fig:tuning} do match expectation:
for problems with clear gradient or with just some plateaus, a more focused search that rewards exploitation is better.
The best setting is a low $F=0.2$, although the lowest $F=0$ is not particularly good.
Some genetic diversity is still needed at the beginning of the search; one cannot rely on just one single individual.
For deceptive landscapes, exploration can be better, especially for a low number of targets.
For example, with $z=1$ then $F=1$ provides the best performance.
However, for larger number of targets, too much exploration would not be so beneficial, as it would not have enough time to converge to cover the targets.
In the case of MOSA, Figure~\ref{fig:tuning} provides some interesting insight.
For simple problems with clear gradient, one would expect that a focused search should provide better results.
However, the small population size of 4 is actually the configuration that gave the worst results.
The reason is that there is only a little genetic material at the beginning of the search, and new material is only generated by the mutation operator.
However, a too large population size would still be detrimental, as it is not focused enough.
In that particular problem type, the best population size seems to range from 16 to 32,
i.e., not too large, but not too small either.
In the case of plateaus, a too small population size (e.g., 4) still gives the worst results, but there is also a need for more exploration of the search landscape; this is confirmed by the fact that the best results are obtained with large population sizes (e.g., 64 and 128).
This effect is much more marked in the case of deceptive landscapes, where large population sizes lead to much better results.
The experiments reported in Figure~\ref{fig:tuning} clearly point to a challenge for population-based algorithms when dealing with many-objective problems.
A too small population size would reduce diversity in the initial genetic material.
But a too large population size would hamper convergence speed.
Finding a fixed, right population size that works on most problem sizes (e.g., $z=10$ vs $z=1m$) might not be feasible.
To overcome this issue, MIO uses a dynamically sized population, whereas the tradeoff between exploration and exploitation is controlled by a dynamically decreasing probability $P_r$ of creating new tests at random (instead of mutating the current ones stored in the archive).
\begin{result}
{\bf RQ2}: On the analysed problems, the population size and the $F$ parameter have clear effects on performance, which strongly depend on whether on the given problem one needs more or less exploitation/exploration.
\end{result}
\subsection{Numerical Functions}
\begin{table}[!t]
\setlength{\tabcolsep}{3pt}
\centering
\caption{\label{tab:unit}
Comparisons of algorithms on three different numerical functions.
Coverage is not a percentage, but rather the average raw sum of statements and branches that are covered.
For each algorithm, we also specify if it is better than any of the others, i.e., $\hat{A}_{12}>0.5$ (in parentheses) and p-value less than $0.05$.
}
\input{generated_files/numericUnitTable.tex}
\end{table}
When designing algorithms to work on a large class of problems, it is common to evaluate them on artificial problems to try to abstract away and analyse in details the characteristics for which such algorithms perform best.
For example, the very popular NSGA-II algorithm (on which MOSA is based) was originally evaluated only on nine numerical functions~\cite{deb2002fast}.
However, using only artificial problems is risky, as those might abstract away some very important factors.
A good example of this issue is Adaptive Random Testing, where artificial problems with artificially high fault rates were masking away its very prohibitive computational cost~\cite{ArB11a}.
To somehow mitigate this issue, as a safety-net we also carried out some experiments on actual software, where we aim at unit testing for line and branch coverage.
We use the branch distance as heuristic for the fitness function.
We considered three numerical functions previously used in the literature (e.g., \cite{ArB11a}):
\emph{Expint} (88 LOC, including everything, also empty lines),
\emph{Gammq} (91 LOC),
and \emph{Triangle} (29 LOC).
Each algorithm was run for up to 5000 fitness evaluations.
Each experiment was repeated 100 times.
Average values are reported in Table~\ref{tab:unit},
where we also report the Vargha-Delaney effect sizes $\hat{A}_{12}$ and the results of
Mann-Whitney-Wilcoxon U-tests at $\alpha=0.05$ level~\cite{Hitchhiker14}.
In all these three numerical functions, the MIO algorithm is the one achieving the highest coverage of the targets.
However, the three chosen numerical functions are not particularly difficult, and, as such, the performance difference between MIO, MOSA and WTS is not so large.
There is one thing to notice in these results:
WTS is much better than Random, whereas in the previous experiments they were very similar.
After an investigation, the reason for this behaviour is rather obvious.
With a population size of 50, and up to 50 tests in the same test suite, on average the first population would have a size of $50 \times 50/2 = 1250$ tests, which is higher than the search budget $b=1000$.
In other words, in those experiments WTS was practically doing just a random search.
However, this is not the case here, as we have $b=5000$.
In retrospect, on one hand those experiments could be considered unfair to WTS.
On the other hand, this issue further stresses the need for a dynamically sized population when dealing with many-objective problems.
\begin{result}
{\bf RQ3}: the experiments on actual software are consistent with the ones on artificial problems: the MIO algorithm still achieves the best results.
\end{result}
\section{Conclusion}
In this paper, we have presented a novel search algorithm that is tailored for the problem of generating test suites.
We call it the Many Independent Objective (MIO) algorithm.
We have carried out an empirical study to compare MIO with the other main algorithms for test suite generation:
the Whole Test Suite (WTS) approach and the Many-Objective Sorting Algorithm (MOSA).
We also used random search as a baseline.
On artificial problems with increasing complexity and on some numerical functions, MIO achieved better results than the other algorithms.
In some cases, coverage improvements were even in the order of $+80\%$.
Future work will focus on implementing the MIO algorithm in different test generation frameworks, especially in system-level testing, and empirically evaluate how it fares in those contexts.
To help researchers integrate MIO in their frameworks, all the code used for the experiments in this paper is available online on a public repository, as part of the {\sc EvoMaster} tool at
\texttt{www.evomaster.org}
\vspace{0.5em}
\noindent {\bf Acknowledgments. }
This work is supported by the National Research Fund, Luxembourg (FNR/P10/03).
\bibliographystyle{splncs03}
\section{INTRODUCTION}
\label{introduction}
The study of the supercooled liquid and glassy states in molecular
systems is, nowadays, one of the most important topics in the
physics of disordered materials. Though the molecular processes
underlying glass formation still constitute an unsettled subject,
some traces of universality in the behavior of highly viscous
liquids near vitrification have been noticed. As general
characteristics, on approaching the glass transition the
structural ($\alpha$) relaxation process shows {\it i}) a
non-Debye behavior of the relaxation function, and {\it ii}) a
dramatic increase of the relaxation time, $\tau$.
Different physical routes can be covered to get vitrification.
Decreasing the temperature, $T$, is the common way to form a
glass. However, varying the pressure, $P$, also represents an
effective means. Indeed, the effects on molecular motions of an
isothermal compression resemble those which are caused by an
isobaric cooling. For practical reasons, cooling is generally
preferred, since high pressures (of the order of MPa) are
necessary to produce dynamical changes similar to those obtained
by changing the temperature within a few tens of degrees. In any case,
the study of the $\alpha$ relaxation pressure dependence can give
an insight into the nature of the liquid-glass transition.
The past few years have actually seen a growing use of
hydrostatic pressure in experimental investigations of glass
formers (see for instance,
\cite{Corezzi99,Paluch98,PaluchPRL2000,Casalini,CasaliniJCP02,
PaluchJCP03a,PaluchJCP03b,PaluchJCP02a,PaluchJCP02b}). Such
experiments provide a more stringent testing-ground for the
numerous models proposed of the structural relaxation time
evolution near vitrification. Among these, two are the most
widely used, which are based on the free volume and
configurational entropy concepts. Free volume approaches consider
the decrease of unoccupied volume as the basic mechanism leading
to structural arrest of a liquid system. The alternative view is
that the progressive slowdown of molecular motions responsible
for the glass transition is due to a reduction of the system's
configurational entropy.
In this paper, we test on salol the ability of free volume and
configurational entropy models to interpret the temperature and
pressure dependence of the structural relaxation time. Salol, is a
good candidate since much of the thermodynamic data is known,
allowing refinement on testing theoretical models. It has
intensively been studied at ambient pressure with several
spectroscopic techniques, like Brillouin scattering
\cite{Dreyfus92}, depolarized light scattering
\cite{Li92,Cummins93,Krakoviack97}, impulsive stimulated light
scattering \cite{NelsonPRL95}, optical Kerr effect spectroscopy
\cite{Hinze00}, neutron scattering \cite{Toulouse93}, x-ray
diffraction \cite{Eckstein00}, and dielectric spectroscopy
\cite{Stickel95}. On the other hand, few experiments have been
carried out by varying both temperature and pressure, namely
dielectric spectroscopy \cite{CasaliniJPC03}, depolarized Raman
scattering \cite{Pratesi00}, and viscosity measurements
\cite{Schug80}. Recently, some of us presented a preliminary
investigation \cite{ComezPRE02} on salol performed in the $T$ and
$P$ domain by using photon correlation spectroscopy. Here, we
extend our analysis through pressure-volume-temperature (PVT)
data taken in both the supercooled and crystalline phases. We
show how an appropriate use of the PVT results provides a
realistic estimate of the configurational contribution to the
excess entropy of salol. Finally, we compare our $\tau(T,P)$ data
with the prediction of the pressure-extended Cohen-Grest model
\cite{CohGre79}, derived in the frame of the free volume theory.
\section{THEORY}
\label{THEORY}
\subsection{THE PRESSURE EXTENDED ADAM-GIBBS (PEAG) MODEL}
\label{PEAG model}
The entropy model by Adam and Gibbs (AG) \cite{AG} is based on the
concept of configurational entropy and the assumption of
cooperatively rearranging regions. Starting from the observation
that the sluggish relaxation behavior governing the glass
transition is a manifestation of a dearth of configurations
accessible to the system, the AG theory states a relationship
between the structural relaxation time, $\tau$, and the
configurational entropy $S_c$:
\begin{equation}
\tau=\tau_0 \exp \left( \frac{C_{AG}}{T S_c} \right),
\label{AG}
\end{equation}
where $C_{AG}$ is nearly constant, and $\tau_0$ is the relaxation
time at very high temperature. $S_c$ measures the entropic
contribution arising from the possibility for a system to
rearrange its structure in different configurations, which is
typical of a liquid. Theoretically, a quantitative evaluation of
$S_c$ can be done in terms of the difference between the entropy
of the liquid phase and the entropy of an ideal amorphous-solid
phase (ideal glass) in which only vibrations (harmonic and
anharmonic) and secondary relaxation processes are active
\cite{GoldsteinJCP76,JohariJCP2000}. This quantity can, in
principle, be determined by computer simulations, but is
inaccessible to experiments in a direct manner. Some efforts have
been made to bypass a direct experimental determination of
configurational entropy in a number of liquids. Unfortunately,
the procedures proposed require an independent estimate of
vibrational contributions to the entropy over a broad range of
temperatures \cite{Phillips89,giapponesi} or an estimate of the
excess vibrational entropy at $T_{g}$ \cite{Johari}, all of which
imply non-trivial approximations. We also remark that all the
previous estimates of $S_c$ are based on temperature dependent
data alone, and are not constrained by pressure dependent data.\\
Furthermore, much literature has documented the extensive use of the
experimentally accessible liquid over crystal (or glass) excess
entropy, $S_{exc}$, in place of $S_{c}$, showing that the AG
expression works well in a number of systems with $S_{c}$
replaced by $S_{exc}$ \cite{GreetTurnbull67,RicAng98}. In this
context, understanding the relationship between $S_{exc}$ and
$S_{c}$ is a challenging issue. A proportionality of these two
quantities at atmospheric pressure has recently been proposed
\cite{Martinez}, but a verification of such hypothesis through a
relaxation experiment performed as a function of temperature
alone cannot be conclusive, as the proportionality constant would
simply renormalize the value of $C_{AG}$ in Eq. (\ref{AG}).
Building on this background, a method based on a pressure extended
Adam-Gibbs (PEAG) equation has recently been proposed by some of
us \cite{Corezzicond-mat} to analyze temperature and pressure
dependent relaxation measurements. The pressure dependence of
$S_{c}$ has been introduced in Eq. (\ref{AG}) writing the
configurational entropy of a system at a given $T$ and $P$ as a
sum of (i) an isobaric contribution at zero pressure,
$S_{c}^{isob}(T,0)$, and (ii) an isothermal contribution at
temperature $T$, $S_c^{isoth}(T,P)$:
\begin{equation}
S_c(T,P)=S_{c}^{isob}(T,0)+S_c^{isoth}(T,P)\\ \label{TECE}
\end{equation}
\noindent (i) Here, the isobaric configurational term, at zero
pressure, is assumed proportional to the excess entropy:
\begin{equation}
S_c^{isob}(T,0)=\Phi S_{exc}^{isob}(T,0).
\label{Scphi}
\end{equation}
The parameter $\Phi$ ($\leq$1) quantifies the fraction of excess
entropy at $P$=0 arising from structural configurations. In
addition, the excess entropy contains any contribution from
secondary relaxation processes and vibrational motions
\cite{GoldsteinJCP76,JohariJCP2000,Fischer02}. It can be evaluated
from the heat capacity of the liquid and the crystal, through the
equation:
\begin{eqnarray}
S_{exc}^{isob}(T,0)&=&S^{liquid}(T)\!-S^{crystal}(T)\nonumber\\
&=&\Delta
S_{f}-\int_{T}^{T_{f}}\!\!\!\left(C_{p}^{liquid}\!\!-\!C_{p}^{crystal}\right)/T'dT'
\label{SmeltScrystal}
\end{eqnarray}
where $\Delta S_{f}\!=\!\Delta H_{f}/T_{f}$ is the entropy of
fusion.
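For reference, Eq.~(\ref{SmeltScrystal}) can be evaluated numerically on tabulated heat capacities; the sketch below (the grid handling and the trapezoidal rule are our own choices) anticipates the values of $T_{f}$ and $\Delta S_{f}$ reported in Sec.~\ref{PVT results}.
\begin{verbatim}
# Numerical evaluation of Eq. (4) on tabulated heat capacities (a sketch).
# Units: C_p in J mol^-1 K^-1, temperatures in K.
import numpy as np

def s_excess(T, T_grid, cp_liquid, cp_crystal, T_f=315.0, dS_f=60.83):
    mask = (T_grid >= T) & (T_grid <= T_f)
    Tg = T_grid[mask]
    integrand = (cp_liquid[mask] - cp_crystal[mask]) / Tg
    return dS_f - np.trapz(integrand, Tg)
\end{verbatim}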
\noindent (ii) According to the Maxwell relationship
$\left(\partial S/\partial P \right)_{T}=-\left(\partial
V/\partial T\right)_{P}$, the isothermal term in Eq.(\ref{TECE})
can be written
\begin{eqnarray}
S_c^{isoth}(T,P)= -\int_0^P [\Delta\left(\partial V/\partial
T\right)_{P}] dP' \label{Scisoth}
\end{eqnarray}
where $\Delta\left(\partial V/\partial T\right)_{P}=\left(\partial
V/\partial T\right)_{P}^{liquid}-\left(\partial V/\partial
T\right)_{P}^{non-struct}$ is the configurational thermal
expansion at temperature $T$ \cite{expansion}. This term can be
evaluated from PVT measurements as follows. The Tait equation
\cite{Tait} is used to describe the volume of the liquid phase as
a function of $T$ and $P$
\begin{equation}
V^{liquid}(T,P)=V^{liquid}(T,0)\left[1-C \ln\left(
1+P/B\right)\right], \label{Tait}
\end{equation}
where $C$ is a dimensionless constant, and $B(T)$ is well
described by $B(T)=b_1 \exp(-b_2T)$, where $b_1$ has the
dimension of pressure and $b_2$ of inverse temperature
\cite{VanKrevelen}. Moreover, it is reasonable to presume that
the pressure dependence of the thermal expansion of the ideal
glass would be much smaller than that of the liquid, and can be
neglected. Accordingly, the non-structural thermal expansion at
$P$ can be replaced by its value at $P$=0, i.e., $\left(\partial
V/\partial T\right)_{P}^{non-struct}\approx \left(\partial
V/\partial T\right)_{0}^{non-struct}$. Under these assumptions,
calculating the integral in Eq. (\ref{Scisoth}) yields
\begin{widetext}
\begin{equation}
S_c^{isoth}(T,P)\approx-\left( \frac{\partial V}{\partial T}
\right)_0^{liquid} \left[ P+ hCP - BC \left(h +\frac{P}{B}
\right) ln \left( 1+\frac{P}{B} \right) \right] + P \left(
\frac{\partial V}{\partial T} \right)_0^{non-struct}
\label{wideeq}
\end{equation}
\end{widetext}
where $h=1-b_2/\alpha$, and $\alpha=1/V( \partial V /\partial T
)_0$ is the thermal expansion coefficient at zero pressure.
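Equation~(\ref{wideeq}) is straightforward to evaluate numerically; the sketch below uses SI units and, for illustration, anticipates the Tait parameters of Table~\ref{tablePVT}.
\begin{verbatim}
# Direct evaluation of Eq. (7) (a sketch; SI units: V in m^3 mol^-1, P and
# B in Pa, T in K, so S_c comes out in J mol^-1 K^-1).
import numpy as np

def s_c_isoth(T, P, dVdT_liq0, dVdT_nonstruct0,
              alpha=7.36e-4, b1=790e6, b2=4.70e-3, C=8.68e-2):
    B = b1 * np.exp(-b2 * T)
    h = 1.0 - b2 / alpha
    lg = np.log(1.0 + P / B)
    liquid = P + h * C * P - B * C * (h + P / B) * lg
    return -dVdT_liq0 * liquid + P * dVdT_nonstruct0
\end{verbatim}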
In conclusion, combining Eqs.~[\ref{AG}-\ref{Scphi}] provides a
formula for the structural relaxation time as a function of
temperature and pressure:
\begin{equation}
\tau(T,P)=\tau_0 \exp \left[ \frac{C_{AG}}{T(\Phi S_{exc}^{isob}+S_c^{isoth})}
\right],
\label{AGphi}
\end{equation}
with $ S_{exc}^{isob}$ and $S_c^{isoth}$ given by Eq.
(\ref{SmeltScrystal}) and (\ref{wideeq}), respectively. It is
important to emphasize that the expression of $S_c^{isoth}$, Eq.
(\ref{wideeq}), prevents the parameter $\Phi$ in
Eq.~(\ref{AGphi}) from playing the role of a simple
renormalization constant.
\subsection{THE PRESSURE EXTENDED COHEN-GREST (CG) MODEL}
\label{CG model}
Within a free volume picture, Cohen and Grest \cite{CohGre79}
derived a model to describe the behavior of dense liquids and
glasses on the basis of a percolative approach. The existence is
assumed of glass-like and liquid-like domains. The fraction, $p$,
of these latter increases with temperature, and a percolative
(infinite) cluster does exist above a critical concentration
$p_c$, at which the transition from the glass to the liquid state
occurs. The model predicts an analytical expression for the free
volume $v_f$ which is valid in a broad range of temperatures:
\begin{equation}
v_f=\frac{k}{2\xi_0}\lbrace{T-T_0+[(T-T_0)^2+4v_a\xi_0T/k]^{1/2}\rbrace}
\label{vf}
\end{equation}
where $T_0$, $\xi_0$, and $v_a$ are constants with the dimension
of temperature, pressure, and volume, respectively. For $p$ near
and greater than $p_c$, a link is established between $v_f$ and
the diffusion coefficient $D$, which recovers the Doolittle
equation \cite{doolittle}, $D=D_0p \exp(-v_m/v_f)$, in the case of
$v_m/v_f\ll\bar{\nu}$. Here, $v_m$ is the molecular volume, $D_0$
is a constant, and $\bar{\nu}$ is the average size of the
liquid-like clusters. A similar result is assumed for the
rotational correlation time, $\tau=\tau_0 \exp(v_m/v_f)$
\cite{CohGre81}, where $\tau_0$ is the value of $\tau$ in the
limit of very high temperature under isobaric conditions. On this
basis, a simple equation for the structural relaxation time in
the supercooled state can be written:
\begin{equation}
\log\tau(T)=A_{CG}+\frac{B_{CG}}{T-T_0+[(T-T_0)^2+C_{CG}T]^{1/2}}\label{tauCG}
\end{equation}
where $A_{CG}$ is related to the pre-exponential factor $\tau_0$,
and the parameters $B_{CG}=2 \xi_0 v_m \log e /k$ and
$C_{CG}=4v_a\xi_0T/k$ have the dimension of temperature, and must
assume positive values to have a physical meaning.
Cohen and Grest incorporate the effect of pressure in their
theory by including an additional term, proportional to pressure,
into their expression for the local free energy. As a
consequence, the pressure dependence of the relaxation time can
be obtained by changing $\xi_0 \longrightarrow \xi_0 +P$. The
temperature parameter $T_0$ is also affected by this change, via
the relationship $kT_0=kT_1+v_a\xi_0$, with $T_1$ a constant,
which yields $T_0(P)=T_0+(v_a/k)P$. The final expression for
$\tau(T,P)$ is:
\begin{equation}
\log\tau(T,P)=A_{CG}+\frac{B_{CG} D_{CG}}{T-T_0^*+[(T-T^*_0)^2+C_{CG} D_{CG} T]^{1/2}}\label{tauCGex}
\end{equation}
with $D_{CG}=1+P/\xi_0$ and $T_0^*=T_0-(C_{CG}/4\xi_0)P$. Note that
this expression contains five parameters, i.e., $A_{CG}$, $B_{CG}$,
$T_0$, $C_{CG}$, and $\xi_0$, of which only the first four appear in the
temperature-dependent expression at $P$=0, i.e., in Eq.
(\ref{tauCG}).
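In code form, Eq.~(\ref{tauCGex}) reads as follows (a sketch; $T_0^*$ is implemented exactly as printed above, and the five parameter values must come from a fit).
\begin{verbatim}
# Pressure-extended Cohen-Grest relaxation time, Eq. (11) (a sketch).
import numpy as np

def log_tau_cg(T, P, A, B, T0, C, xi0):
    D = 1.0 + P / xi0
    T0_star = T0 - (C / (4.0 * xi0)) * P
    dT = T - T0_star
    return A + B * D / (dT + np.sqrt(dT**2 + C * D * T))
\end{verbatim}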
\section{Experiment}
\label{experiment}
\subsection{PVT Measurements}
\label{PVT}
Measurements of specific volume change $\Delta V(T,P)$ of
crystalline and liquid salol were taken in an isothermal mode of
operation by using a confining fluid technique \cite{ZolWal95}.
The PVT data were acquired on a GNOMIX apparatus \cite{Gnomix}
described in Refs. \onlinecite{ZolWal95,ZolBol76}. The sample
(salol) and the confining fluid (mercury) were contained in a
rigid sample cell. A thin nickel foil sample cup surrounding the
sample was used to guarantee hydrostatic pressure during the
experiment. Silicon oil was used as pressurizing fluid. The
temperature was recorded (for operational reasons) close to the
sample, but actually in the pressurizing silicon oil. At a fixed
temperature, starting from the low-temperature end, pressure was
increased to 200 MPa, and data were recorded in pressure
intervals of 10 MPa. On completion of measurements along one
isotherm, the temperature setting was increased 5 K higher, and
the pressure measurements were repeated. $\Delta V(T,P)$
measurements were converted into specific volume $V(T,P)$ data by
using a reference density value, $\rho$=1.1742 g cm$^{-3}$ at
$T$=323.15 K. The whole set of PVT measurements between $T$=290 K
and 380 K over the 0.1-200 MPa range of pressure is reported in
Fig. \ref{volume-SALOL}. The step in the data at a given pressure
marks the fusion/crystallization temperature.
\begin{figure}[t]
\includegraphics[width=9.0cm]{fig1.eps}
\caption{\label{volume-SALOL} Temperature dependence of the
volume of salol in the crystal and liquid state at different
pressures. The pressures are, from top to bottom, from 0.1 MPa to
200 MPa in steps of 10 MPa. In the inset the melting temperature
versus pressure deduced from the PVT measurements here reported.}
\end{figure}
\subsection{Photon correlation measurements}
\label{PCS}
Photon correlation spectroscopy (PCS) measurements under high
hydrostatic pressure, up to 190 MPa, were taken at different
temperatures (namely 267.1, 268.6, 271.0, 274.6, 278.3 and 280.4
K). Depolarized (VH) light scattering spectra were collected in
the 90$^{\circ}$ geometry using an apparatus consisting of an
Ar-ion laser, operating at 514.5 nm, a home made thermostated high
pressure cell (a detailed description of the cell is reported in
refs. \onlinecite{FytPat82,Fytas84}), and an ALV5000E digital
correlator. The scattered light was collected by a single mode
fiber optics and detected by an avalanche diode (Sandercock).
High pressure was generated by pressurizing nitrogen with a Nova
Swiss membrane compressor and introducing the gas through steel
capillaries connected to the high-pressure cell. The pressure
was measured by a Heise gauge with a resolution of 0.3 MPa, and
the temperature by a thermocouple with a typical error of 0.1 K.
Special care was taken in preparing the sample to avoid
crystallization both on lowering the temperature and on increasing
the pressure. To obtain dust-free cells, we rinsed them with
freshly distilled hot acetone.
Salol [2-hydroxy benzoic acid phenyl ester, 2-(HO)
C$_{6}$H$_{4}$CO$_{2}$C$_{6}$H$_{5}$] purchased from Aldrich
company, purity 99 \%, was filtered (0.22$\mu$m Millipore filter)
into the dust-free cylindrical cell of 10 mm o.d. at about
80$^{\circ}$C. The sample was then brought back to room
temperature at a very slow cooling rate. The measurements were
performed following isothermal curves by varying the pressure.
Each isothermal run was usually performed from the higher to the
lower pressure, a procedure that ensures a shorter
equilibration time before starting the measurement. Finally, we
checked that the diffusion time of N$_2$ was long enough to
prevent contamination of the scattering volume during the
experiment. To this end, the forward beam was continuously
monitored on a black screen to directly visualize possible
vertical gradients of the refractive index of the sample.
\begin{figure}[t]
\includegraphics[width=8.2cm]{fig2.eps}
\caption{\label{Fig_spettri} Normalized photon correlation
functions collected at a constant temperature of 267.1 K.
Pressures from left to right are 88, 95, 102.5, 110.5, 119, 125,
132.5, 141, 148.5, 156.5, 163.5, 171, 181, and 189.5 MPa. The
solid lines represent the fits to the data using the KWW
function. The isothermal spectra at 267.1 K taken at different
pressures rescale on a master curve as shown in the inset.}
\end{figure}
\section{RESULTS}
\label{results}
\subsection{Thermodynamic parameters}
\label{PVT results}
The $T$ and $P$ dependence of the volume can be expressed through
the Tait equation, Eq. (\ref{Tait}), found to be valid for a wide
range of materials including liquids and polymers, for changes of
the volume up to 40 $\%$ of the initial value. From the analysis
of the data at atmospheric pressure in the liquid state we
numerically find a constant value of the thermal expansion
coefficient $\alpha=\left(\partial V/\partial
T\right)_{0}^{liquid}\!\!/V^{liquid}(T,0)$, consistent with the
expression $V^{liquid}(T,0)=V_{0}\exp(\alpha T)$ describing the
temperature behavior of the volume of liquid salol at $P=0$
\cite{zeropressure}. The whole set of PVT data in the liquid
state is then fitted by Eq.~(\ref{Tait}). In Fig.~\ref{fig:volume-SALOL} the
experimental data are shown together with the result of the fit
(solid lines). An excellent agreement between experimental points
and fit curves is obtained with the values of the parameters
$V_0$, $\alpha$, $b_1$, $b_2$, and $C$ reported in Tab.
\ref{tablePVT}. It is possible to recognize a certain universality
of the parameters of the Tait equation \cite{VanKrevelen}. Indeed,
the values of $C$ ($\sim 0.09$) and $b_2$ ($\sim 4\times 10^{-3}$
K$^{-1}$) have been found to be almost the same for many
materials, liquids and polymers, including chlorinated biphenyl
(PCB62) \cite{CasaliniJCP02}, diglycidylether of bisphenol A
(DGEBA) \cite{PaluchJCP03a}, bis-phenol-C-dimethylether (BCDE) and
bis-kresol-C-dimethylether (BKDE) \cite{PaluchJCP03b},
phenylphthalein-dimethylether (PDE) \cite{PaluchJCP02b} and
cresolphthalein-dimethylether (KDE) \cite{PaluchJCP02a}.\\
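As an aside, the fitting procedure described above is straightforward to
reproduce; the following Python sketch (assuming NumPy and SciPy, and
assuming the standard Tait form
$V(T,P)=V(T,0)\left[1-C\ln\left(1+P/B(T)\right)\right]$ for
Eq.~(\ref{Tait}); the variable names and synthetic data are ours and are
not part of the original analysis) illustrates it:
\begin{verbatim}
# Minimal sketch: global fit of liquid-state PVT data with the
# Tait equation, assuming V(T,P) = V(T,0)*[1 - C*ln(1 + P/B(T))]
# with V(T,0) = V0*exp(alpha*T) and B(T) = b1*exp(-b2*T).
# Units: T in K, P in MPa, V in m^3 mol^-1.
import numpy as np
from scipy.optimize import curve_fit

def tait(TP, V0, alpha, b1, b2, C):
    T, P = TP
    VT0 = V0 * np.exp(alpha * T)          # zero-pressure isobar
    B = b1 * np.exp(-b2 * T)              # Tait pressure parameter
    return VT0 * (1.0 - C * np.log(1.0 + P / B))

# Synthetic data standing in for the measurements; the "true"
# parameter values are taken from Tab. I.
true = [143.8e-6, 7.36e-4, 790.0, 4.70e-3, 8.68e-2]
T, P = np.meshgrid(np.arange(320.0, 381.0, 10.0),
                   np.arange(0.1, 200.0, 10.0))
T, P = T.ravel(), P.ravel()
V = tait((T, P), *true) * (1 + 1e-4 * np.random.randn(T.size))

popt, pcov = curve_fit(tait, (T, P), V,
                       p0=[1.4e-4, 7e-4, 800.0, 5e-3, 0.09])
perr = np.sqrt(np.diag(pcov))             # 1-sigma uncertainties
\end{verbatim}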
In the crystalline phase, PVT measurements allow us to evaluate
the thermal expansivity at different pressures. In particular, we
find that $\left(\partial V/\partial T \right)_{P}^{crystal}$
ranges from about $4.5\times 10^{-8}$ \mbox{m$^{3}$ mol$^{-1}$
K$^{-1}$} at $P$=0.1 MPa to about $3.5\times 10^{-8}$
\mbox{m$^{3}$ mol$^{-1}$ K$^{-1}$} at $P$=200 MPa, with an
average value $\left(\partial V/\partial T
\right)_{\bar{P}}^{crystal}\sim (4.0\pm 0.5)\times 10^{-8}$
\mbox{m$^{3}$ mol$^{-1}$ K$^{-1}$} over the pressure range
investigated.
\begin{figure}[t]
\includegraphics[width=9.0cm]{fig3.eps}
\caption{\label{fig:volume-SALOL} Temperature dependence of the
volume of salol in the liquid state. The solid lines through
symbols are the best fit with the Tait equation of state,
Eq.~(\ref{Tait}), with $V^{liquid}(T,0)=V_{0}\exp(\alpha T)$ and
$B(T)=b_{1}\exp(-b_{2}T)$.}
\end{figure}
The heat capacity $C_{p}$ of crystalline, glassy, supercooled and
stable liquid salol at atmospheric pressure was measured by
adiabatic calorimetry \cite{Oguni,OguniPrivate}. From these data,
the glass transition temperature $T_{g}$=217$\pm$1 K and the
temperature of fusion $T_{f}$=315.0 K are determined, and the
excess entropy of the liquid over the crystal, $S_{exc}(T)$, is
calculated by using Eq.~(\ref{SmeltScrystal}), with the value
$\Delta S_{f}\!=\!\Delta H_{f}/T_{f}=60.83\pm 0.04$ \mbox{J
mol$^{-1}$ K$^{-1}$} for the entropy of fusion. In
Fig.~\ref{figSexcSALOL} the experimental excess entropy is shown
with circles.
\begin{table}
\caption{\label{tablePVT} Thermodynamic parameters from the
analysis of volumetric measurements.}
\begin{ruledtabular}
\begin{tabular}{lc}
$V_{0}$ [m$^{3}$ mol$^{-1}$]& $(143.8\pm 0.1)\times 10^{-6}$ \\
$\alpha$ [K$^{-1}$]& $(7.36\pm 0.02)\times 10^{-4}$ \\
$b_{1}$ [MPa]& $790\pm20$ \\
$b_{2}$ [K$^{-1}$]& $(4.70\pm0.06)\times 10^{-3}$ \\
$C$& $(8.68\pm0.05)\times 10^{-2}$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[t]
\includegraphics[width=8.5cm]{fig4.eps}
\caption{\label{figSexcSALOL} Temperature dependence of the
excess entropy over crystal,
\mbox{$S_{exc}\!=\!S^{melt}\!\!-S^{crystal}$}, calculated from the
calorimetric data.
The solid
line represents the fit of the experimental data according to
\mbox{$S_{\infty}\!\!-k/T$}.}
\end{figure}
\subsection{Dynamic parameters}
\label{PCS results}
In the PCS experiment the homodyne technique is used, which
measures the normalized time correlation function of the
scattering intensity $g^{(2)}(t)=\langle I(t)I(0)\rangle/\langle
I\rangle^{2}$. For a Gaussian process, the intensity
autocorrelation function $g^{(2)}(t)$ is related to the
autocorrelation function of the scattered field,
$g^{(1)}(t)=\langle E(t)E(0)\rangle/\langle |E(0)|^{2}\rangle$,
through the Siegert equation \cite{Berne}:
\begin{equation}
g^{(2)}(t) = 1+f\,|g^{(1)}(t)|^2,
\label{G2}
\end{equation}
where $f$ is a constant. The relaxation function of a
glass-forming system is generally broader than a single
exponential, and experimental data are typically represented by
the phenomenological Kohlrausch-Williams-Watts (KWW) function
\cite{KWW}:
\begin{equation}
g^{(1)}(t) = g_0 \exp\!\left[-(t/\tau_K)^{\beta_K}\right].
\label{KWW}
\end{equation}
Therefore, PCS spectra are fitted by using Eqs. (\ref{G2}) and
(\ref{KWW}). The results show an excellent agreement between
experimental data and fit curves. Typical normalized homodyne
correlation spectra $|g^{(1)}(t)|^2$ (symbols), taken at 267.1 K
in the 88--189.5 MPa pressure range, are represented in Fig.~\ref{Fig_spettri}
together with their KWW fits (solid lines). The values of the
relaxation time $\tau_K$ and of the stretching parameter
$\beta_K$ have been used to calculate the average relaxation time
$\langle\tau\rangle$, through the formula
\begin{equation} \langle\tau\rangle =
\frac{\tau_K}{\beta_K}\Gamma\left(\frac{1}{\beta_K}\right)
\label{tauav}
\end{equation}
where $\Gamma$ is the Euler $\Gamma$-function. The values of
$\langle\tau\rangle$ as a function of pressure at different
temperatures are shown as symbols in Fig.~\ref{Fig_tau_PEAG}.
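As an illustration of this analysis step, a minimal Python sketch
combining Eqs.~(\ref{G2}), (\ref{KWW}), and (\ref{tauav}) is given below
(NumPy and SciPy assumed; the function names are ours, the data are
synthetic, and the amplitude $g_0$ is absorbed into the constant $f$ as a
simplification):
\begin{verbatim}
# Minimal sketch: KWW fit of a normalized homodyne spectrum and
# computation of the average relaxation time.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def g2_minus_1(t, f, tau_k, beta_k):
    # Siegert relation: g2(t) - 1 = f*|g1(t)|^2, with the KWW g1;
    # the amplitude g0 is absorbed into f here.
    g1 = np.exp(-(t / tau_k) ** beta_k)
    return f * g1 ** 2

# Synthetic spectrum standing in for a measured g2(t) - 1.
t = np.logspace(-6, 2, 200)               # lag times (s)
y = g2_minus_1(t, 0.9, 1e-2, 0.68)
y = y + 1e-3 * np.random.randn(t.size)

popt, _ = curve_fit(g2_minus_1, t, y, p0=[0.8, 1e-3, 0.7])
f, tau_k, beta_k = popt

# Average relaxation time <tau> = (tau_K/beta_K)*Gamma(1/beta_K)
tau_avg = (tau_k / beta_k) * gamma(1.0 / beta_k)
\end{verbatim}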
Following the evolution of the shape of the relaxation function
with $T$ and $P$, we find no appreciable variation of the
stretching parameter on changing $T$ and $P$.
This evidence is further supported when a master plot is drawn,
showing that the spectra taken at different pressures collapse
into a single curve (see inset of Fig.~\ref{Fig_spettri}). Our determination of
the stretching parameter ($\beta_K=0.68\pm0.02$) agrees with
previous results at ambient pressure and low temperatures from PCS
measurements: $\beta_K=0.66\pm0.03$ \cite{Berger96}, and
$\beta_K=0.60\pm0.08$ \cite{SidebottomSorensen}. Different
techniques, such as dielectric spectroscopy
\cite{Stickel95,Berger96} and impulsive stimulated light
scattering \cite{Yang1995}, also found a time-temperature
superposition (TTS) principle to hold in salol at low
temperatures. Remarkably, our results indicate the validity of a
generalized time-temperature-pressure superposition (TTPS)
principle in the slow dynamics regime, and support the recent finding
of only a modest broadening of the dielectric $\alpha$ peak with
increasing pressure up to 0.7 GPa \cite{CasaliniJPC03}.\\
Moreover, Olsen et al. \cite{Olse01} recently reinvestigated TTS
at low temperatures for a large number of systems, concluding that
a high-frequency slope of the $\alpha$ peak close to $-1/2$ is
expected whenever TTS applies. To test this expectation,
we first evaluate, through the relationship \cite{Lind80}
\begin{equation}\label{betaKeCD}
\beta_K=0.970 \beta_{CD}+0.144 \qquad 0.2 \leq \beta_{CD}\leq
0.6,
\end{equation}
the value of a Cole-Davidson shape parameter, $\beta_{CD}$,
corresponding to our value of $\beta_K$ in the time domain. We
find $\beta_{CD}=0.55\pm0.02$, and thus the $\alpha$ peak
actually decays approximately as $\omega^{-1/2}$ at high
frequencies, at any temperature and any pressure considered here.
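For completeness, the quoted value follows from inverting
Eq.~(\ref{betaKeCD}),
$\beta_{CD}=(\beta_K-0.144)/0.970=(0.68-0.144)/0.970\simeq 0.55$, which
indeed falls within the stated validity range
$0.2\leq\beta_{CD}\leq 0.6$.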
\section{DISCUSSION}
\label{discussion}
\subsection{Check of the PEAG model}
\label{CheckPEAG}
Our relaxation data lie well within the range in which strong
intermolecular cooperativity is expected for salol
\cite{Stickel95,RicAng98,CasaliniJCP2003}. To check the
consistency of the PEAG model with our relaxation data, following
Sec. \ref{PEAG model} we need to determine both the isobaric
contribution at zero pressure and the isothermal contribution at
temperature $T$ of the configurational entropy, Eq.~(\ref{TECE}).
The former contribution is related to the excess entropy of the
liquid over its crystalline phase at ambient pressure,
Eq.~(\ref{Scphi}). The latter is given by Eq.~(\ref{wideeq}).
The temperature behavior of the excess entropy is well described,
over the whole range between $T_g$ and $T_f$, by the function
$S_{exc}\!=\!S_{\infty}\!\!-k/T$, as observed in a number of other
glass formers \cite{RicAng98}. The best fit curve corresponds to
the parameters $S_{\infty}=137.5\pm 0.3$ \mbox{J mol$^{-1}$
K$^{-1}$}, $k=(24.05\pm 0.08)\times 10^{3}$ \mbox{J mol$^{-1}$},
(see Fig. \ref{figSexcSALOL}). Hence, Eq.~(\ref{Scphi}) becomes
$S_{c}^{isob}(T,0)\!=\! \Phi \! (S_{\infty}\!\!-k/T)$, where
$S_{\infty}$ and $k$ are known, and $\Phi$ will be free in the
global fit with Eq. (\ref{AGphi}).
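Incidentally, since $S_{exc}(T)=S_{\infty}-k/T$ is linear in the variable
$1/T$, the determination of $S_{\infty}$ and $k$ amounts to an ordinary
least-squares regression, as in the following Python sketch (NumPy
assumed; the data are synthetic stand-ins for the calorimetric values):
\begin{verbatim}
# Minimal sketch: S_exc(T) = S_inf - k/T is linear in x = 1/T,
# so S_inf and k follow from a linear least-squares fit.
import numpy as np

# Synthetic data standing in for the excess entropy between Tg
# and Tf (S in J mol^-1 K^-1, T in K).
T_data = np.linspace(218.0, 315.0, 50)
S_exc = 137.5 - 24.05e3 / T_data

x = 1.0 / T_data
slope, intercept = np.polyfit(x, S_exc, 1)
S_inf = intercept        # recovers 137.5 J mol^-1 K^-1
k = -slope               # recovers 24.05e3 J mol^-1
\end{verbatim}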
Concerning the isothermal term, Eq. (\ref{wideeq}), the
expressions $\left(\partial V/\partial T
\right)_{0}^{liquid}\!\!=\alpha V^{liquid}(T,0)$,
$h=1-b_{2}/\alpha$, and $B=b_{1}\exp(-b_{2}T)$ are known from the
analysis of PVT data. Numerical details are reported in
Tab.~\ref{tabPVTb}. The only parameter that could not be
determined experimentally is the thermal expansivity
$\left(\partial V/\partial T \right)_{0}^{non-struct}$ associated
with non-structural contributions. Although the value of this
parameter will be derived from the fit, we expect that such a
value should compare well with that calculated in the crystal of
salol, as our sample is grown in a polycrystalline form that
should mimic, better than a perfect crystal, the vibrational
properties of an ideal amorphous solid.
\begin{table}
\caption{\label{tabPVTb} Thermodynamic parameters in Eq.
(\ref{wideeq}) calculated from PVT measurements.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
$T$& $P$&$|h|$&$(\partial V/\partial T)^{liquid}_{0}$ &$B$ \\
(K) & (MPa) & &(m$^3$mol$^{-1}$K$^{-1}$) & (MPa) \\
\hline
267.1 & 88.0--189.5 & 3.588 & $1.287\times 10^{-7}$ &225.1\\
268.6 & 110.0--180.0 & 3.588 & $1.289\times 10^{-7}$ &223.5\\
271.0 & 115.5--185.0 & 3.588 & $1.291\times 10^{-7}$ &220.9\\
274.6 & 140.0--185.0 & 3.588 & $1.294\times 10^{-7}$ &217.3\\
278.3 & 155.5--190.0 & 3.588 & $1.298\times 10^{-7}$ &213.6\\
280.4 & 150.0--194.0 & 3.588 & $1.30\times 10^{-7}$ &211.5\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[t]
\includegraphics[width=8.2cm]{fig5.eps}
\caption{\label{Fig_tau_PEAG} Structural relaxation time of salol
from photon-correlation measurements at different temperatures.
Data taken from Comez et al.\cite{ComezPRE02} ($T$=267.1 K
($\circ$), 268.6 K ($\vartriangle$), 271.0 K ($\oplus$), 274.6 K
($\star$), 278.3 K ($\square$), 280.4 K ($\lozenge$)). The
relaxation time is the average time $\langle\tau\rangle$. The
solid lines represent the simultaneous fit with the PEAG equation
--- Eq.~(\ref{AGphi}). As explained in the text, four parameters
are adjusted by the fitting procedure, in particular giving
$\left(\partial V/\partial T\right)_{0}^{non-struct}=(3.8\pm
0.7)\times 10^{-8}$ \mbox{m$^{3}$ mol$^{-1}$ K$^{-1}$} and
$\Phi=0.68\pm 0.08$.}
\end{figure}
\begin{figure}[t]
\includegraphics[width=8.2cm]{fig6.eps}
\caption{\label{Fig_tau_CG0} Structural relaxation time of salol
from depolarized light scattering measurements at atmospheric
pressure. The relaxation time is the average time
$\langle\tau\rangle$. Squares represent depolarized
photon-correlation data from Ref.\onlinecite{Berger96}, circles
are depolarized Brillouin and Raman light scattering from
Ref.\onlinecite{Li92}. The solid line represents the fitting
curve using the CG equation
--- Eq.~(\ref{tauCG}). The four
parameters adjusted by the fitting procedure are $A_{CG}=(10.6\pm
0.1)$, $B_{CG}=(91\pm13)$K, $T_0=(265\pm3)$ K, and
$C_{CG}=(3.4\pm0.4)$ K.}
\end{figure}
\begin{figure}[t]
\includegraphics[width=8.2cm]{fig7.eps}
\caption{\label{Fig_tau_CG} Structural relaxation time of salol
from photon-correlation measurements at different temperatures
\cite{ComezPRE02}. The relaxation time is the average time
$\langle\tau\rangle$. Temperatures are $T$=267.1 K ($\circ$),
268.6 K ($\vartriangle$), 271.0 K ($\oplus$), 274.6 K ($\star$),
278.3 K ($\square$), 280.4 K ($\lozenge$). The solid lines
represent the simultaneous fit with the pressure extended
Cohen-Grest equation --- Eq.~(\ref{tauCGex}). The parameters
$A_{CG}$, $B_{CG}$, $C_{CG}$, and $T_0$ have been taken fixed to
those obtained from the fit of the isobaric data at atmospheric
pressure.}
\end{figure}
Summarizing, in the fit of relaxation time data with
Eq.~(\ref{AGphi}) only four parameters, specifically $\tau_{0}$,
$C_{AG}$, $\Phi$, and $\left(\partial V/\partial T
\right)_{0}^{non-struct}$, remain to be adjusted. The fit is
carried out simultaneously in the $T$-$P$ domains, over the
pressure range $0.1-194$ MPa at six different temperatures
($T$=267.1, 268.6, 271.0, 274.6, 278.3, and 280.4 K). The best
fit curves (solid lines in Fig.~\ref{Fig_tau_PEAG}) correspond to
the values: $\log\tau_{0}[s]=-17.4\pm0.1$,
$C_{AG}=(1.9\pm0.3)\times 10^{5}$ \mbox{J mol$^{-1}$},
$\Phi=0.68\pm0.08$, $\left(\partial V/\partial T
\right)_{0}^{non-struct}=(3.8\pm 0.7)\times 10^{-8}$
\mbox{m$^{3}$ mol$^{-1}$ K$^{-1}$}.
It is important to remark that the value obtained for the
non-structural thermal expansion compares well with the value
calculated for the polycrystal of salol, while it is only in
fair agreement with that estimated by some of us, $(\partial
V/\partial T)_{0}^{non-struct}=(1.09\pm 0.04)\times 10^{-8}$
\mbox{m$^{3}$ mol$^{-1}$ K$^{-1}$}, in a previous determination
using a preset $\Phi=1$ in Eq.~(\ref{AGphi}), i.e. obtained by
replacing the configurational entropy with the excess entropy
\cite{ComezPRE02}. Moreover, we note that the best fit yields a
value for $(\partial V/\partial T)_{0}^{non-struct}$ whose
uncertainty spans the variation with $T$ and $P$ of the crystal
thermal expansion. Thus, it emerges that the approximation
$(\partial V/\partial T)_{P}^{non-struct}\approx(\partial
V/\partial T)_{0}^{non-struct}$ is justified, and it is
unnecessary to consider a $T$ and $P$ dependence of the
non-structural expansion in Eq.~(\ref{Scisoth}).
\subsection{Check of the CG model}
\label{CheckCG}
Various models interpreting the dynamics of supercooled liquids
provide an equation to represent $\tau$ data as a function of
temperature. Among these, the most frequently used is the
Vogel-Fulcher-Tammann (VFT) equation \cite{Vogel}. However, its
adaptability to experimental data has been demonstrated only over
a limited range of temperatures. In fact, Stickel et al.
\cite{Stickel95,Stickel} have shown that two VFT equations are
needed to describe the relaxation data at ambient pressure for
temperatures ranging from just above the glass transition up to
very high temperatures, because of a change of dynamics occurring
in the vicinity of a crossover temperature $T_B\approx1.2T_g$. On
the other hand, the CG expression at ambient pressure, by virtue
of four characteristic parameters, one more than the VFT, succeeds
in describing structural relaxation times in a broad range of
temperatures. Positive tests have been reported on several glass
forming systems \cite{CohGre79,CumLi97,SchLun99}. Recently,
Paluch et al. \cite{Paluch03} have also shown that the
characteristic temperature $T_0$ of the CG model can be
identified with $T_B$ in a number of liquids, suggesting that the
change of dynamics may be related to an onset of percolation of
the free volume. However, estimates of the free volume available
per liquid-like molecule based on such a description conflict with
estimates extracted from dilatometric measurements \cite{Paluch03}.\\
An interesting and not frequently exploited testing-ground for
this model is the comparison with relaxation data obtained by
varying both temperature and pressure. To do this, in the case of
salol, we analyze the temperature dependent relaxation times at
ambient pressure, available in the literature
\cite{Li92,Berger96}, using Eq. (\ref{tauCG}), and compare the
results with those obtained from our data at variable pressure,
using Eq. (\ref{tauCGex}).
Depolarized light scattering measurements on salol performed at
ambient pressure by photon correlation spectroscopy
\cite{Berger96} and Brillouin and Raman spectroscopy \cite{Li92}
are reported in Fig. \ref{Fig_tau_CG0}, spanning a wide
time-temperature range. The fit parameters of Eq. (\ref{tauCG})
are: $A_{CG}=(10.6\pm 0.1)$, $B_{CG}=(91\pm13)$ K,
$C_{CG}=(3.4\pm0.4)$ K, and $T_0=(265\pm3)$ K, confirming that
$T_0$ matches the crossover temperature $T_B\simeq$265 K
\cite{Stickel,Paluch03}.
Then, we test the generalized CG equation on our $\tau(T,P)$
data. In the fit procedure, the parameters $A_{CG}$, $B_{CG}$,
$C_{CG}$, and $T_0$ are taken fixed to those obtained from the fit
of the data at ambient pressure, these four being the same
parameters which also appear in Eq. (\ref{tauCG}), and $\xi_0$ is
the only free parameter. The inability of the CG equation to
represent the variation of $\tau$ with pressure is apparent in
Fig.~\ref{Fig_tau_CG}, where the solid lines are generated by Eq.
(\ref{tauCGex}). On the other hand, when all the parameters are
treated as free, the fitting algorithm does not converge. A similar result
has also been obtained for an epoxy system \cite{CorezziCPL}. The
inapplicability of the generalized CG equation casts doubt on
the robustness of the CG theory. Nevertheless, the free
volume approach remains physically attractive, and we cannot
exclude that the inadequacy of Eq. (\ref{tauCGex}) to describe
the $\tau(T,P)$ data might be ascribed to the number of
simplifications used to derive the equation, which are possibly
no longer valid at high pressures.
\section{Conclusions}
\label{conclusions}
In conclusion, we have studied the slow dynamics of salol under
variable temperature and pressure using PCS in combination with
PVT measurements. Comparing the behavior of the structural
relaxation time with equations derived within the AG entropy
theory and the CG free volume theory, we find that pressure
dependent data are crucial to assess the validity of model
equations of the glass transition. In particular, we confirm
previous work \cite{CorezziCPL} showing that the pressure
dependent expression of $\tau$ predicted by the CG model cannot
reproduce the experimental data, despite the presence of five
adjustable parameters and an ability to parametrize $\tau(T)$
data over a broad temperature range at ambient pressure. Instead,
experimental $\tau(T,P)$ data conform to the entropy-based PEAG
equation. Interestingly, since the parameters which control the
pressure dependence of $\tau$ have separately been determined via
PVT measurements, this equation requires only four adjustable
parameters in the $T$ and $P$ intervals investigated in the
present work. Remarkably, the deduced parameters yield physically
meaningful results. In particular, the fraction of excess entropy which arises
from structural configurations is realistically estimated ($\sim$
70\% at ambient pressure).\\
In an effort to determine the role played by volume and thermal
effects in driving molecular dynamics, Casalini et al.
\cite{CasaliniJPC03} have recognized that neither temperature nor
volume is the dominant variable governing the structural
relaxation of salol near $T_{g}$, consistent with results for a
number of other glass formers \cite{PaluchJCP03b,PaluchPRB2002}.
Conceptually, this result accords with our findings that the
dominant thermodynamic variable is configurational entropy, a
quantity which embodies both temperature and volume effects:
different relative contributions of thermal energy and volume to
$\tau$ reflect a different sensitivity of the number of
configurations to temperature and volume changes.
We believe that the positive test of the PEAG model presented
here should stimulate further work on other glass formers and by
different techniques.
\medskip
We thank Prof. E.W. Fischer and Prof. C.A. Angell for valuable
comments and suggestions.
\section{Introduction}\label{sec:intro}
\subsection{Total Variation Regularization for Periodic Inverse Problems} \label{sec:problem}
The development of optimization-based methods for the reconstruction of functions from finitely many linear measurements has been an important subject of recent investigation. This paper contributes to this effort by considering the special case of periodic functions defined over the $d$-dimensional torus. More specifically, our goal is to recover an unknown periodic function $f_0$ from finitely many observations $y_1, \ldots , y_M \in \mathbb{R}$ of the latter.
The real function $f_{0}$ is defined over the torus $ \mathbb{T}^d = \mathbb{R}^d / 2 \pi \mathbb{Z}^d$ with ambient dimension $d\geq 1$.
The $M$ observations $y_1, \ldots , y_M$ are possibly noise-corrupted versions of noiseless measurements depending linearly on $f_{0}$. This linear relationship can be modelled in terms of $M$ linear measurement functionals $f \mapsto \langle \nu_m ,f \rangle \in \mathbb{R}$, $m=1,\ldots,M$.
Note that, since the unknown function $f_0$ is infinite-dimensional while the data are finite-dimensional, the reconstruction task is severely ill-posed and must therefore be regularized. This can be achieved by considering a convex penalized optimization problem of the form
\begin{equation} \label{eq:optipb}
\tilde{f}\in\arg\min_f E( \bm{y} , \bm{\nu}(f)) + \lambda \lVert {\rm L} f \rVert_{\mathcal{M}},
\end{equation}
where:
\begin{itemize}
\item $E$ is a suitable convex cost functional encouraging the measurement vector $\bm{\nu}(f) = (\langle \nu_m , f \rangle)_{m=1\ldots M}$ to be close to the observation vector $\bm{y} = (y_1, \ldots , y_M)$.
\item ${\rm L}$ is a suitable regularizing operator acting on periodic functions.
\item $\lVert \cdot \rVert_{\mathcal{M}}$ is the total variation (TV) norm of a periodic Radon measure.
\item $\lambda$ is a positive constant defining the penalty strength.
\end{itemize}
The penalty term $ \lambda \lVert {\rm L} f \rVert_{\mathcal{M}}$ in \eqref{eq:optipb} helps regularize the ill-posed reconstruction problem. Moreover, the total variation norm is known to promote sparse spline-like solutions~\cite{Unser2017splines}, similarly to the $\ell_1$ norm in finite dimensions.
\subsection{Comparison with Previous Works} \label{sec:comparison}
\paragraph{Discrete $\ell_1$ reconstruction methods.}
The problem of reconstructing an unknown physical quantity, or signal, from incomplete measurements has a rich history in applied mathematics, dealing with both discrete and continuous settings. In the former, the signal is modeled as a vector (finite dimensional setting) or a sequence (infinite dimensional discrete setting). The problem is then ill-posed in the sense that we do not have enough measurements to uniquely characterize the unknown vector (underdetermined system of equations). Regularization methods have been introduced in order to make the problem well-posed, starting with the Tikhonov regularization based on the $\ell_2$-norm~\cite{Tikhonov1963solution}, also known as ridge regression in statistics~\cite{Hoerl1962ridge}.
In the 1990s, it was recognized that $\ell_1$ regularization is much better suited to providing sparse and interpretable reconstructions, leading to the LASSO~\cite{tibshirani1996regression} and the basis pursuit~\cite{chen2001atomic}, and later inspiring the field of compressed sensing~\cite{Donoho2006,Candes2006sparse,chandrasekaran2012convex,eldar2012compressed,Foucart2013mathematical}.
These ideas have been initially developed in finite dimension, and have been extended to infinite-dimensional settings by several authors~\cite{Adcock2015generalized,Adcock2017breaking,eldar2008compressed,unser2016representer,Traonmilin2017compressed,Bodmann2018compressed,marz2020sampling}.
\paragraph{Dirac recovery and optimization over measure spaces.}
Many physical quantities are not adequately described using the discrete formalism introduced above. This is typically the case for applications where the physical phenomenon of interest involves point sources lying in a continuum, often modeled as Dirac streams $w_0 = \sum a_k \delta_{\bm{x}_k}$, \emph{i.e.}, weighted sums of Dirac impulses. Such generalized functions are characterized by a finite rate of innovation, which is a continuous-domain generalization of the classically discrete notion of sparsity~\cite{Vetterli2002FRI,Maravic2005sampling}. A key problem is then to reconstruct the unknown Dirac stream from finitely many observations.
In \cite{candes2014towards,Candes2013super}, Candès and Fernandez-Granda considered the super-resolution problem, aiming at recovering $w_0$ from low-pass Fourier measurements. Remarkably, they showed that it is possible to reconstruct \emph{perfectly} the infinite-dimensional measure from finitely many Fourier measurements as soon as the Dirac impulses are sufficiently far apart.
In this framework, the reconstructed Dirac stream is the solution of an optimization task over Radon measures, where the TV norm is used as a regularization, which corresponds to \eqref{eq:optipb} with $\mathrm{L} = \mathrm{Id}$. Optimization problems over measure spaces can be traced back to the middle of the 20th century~\cite{zukhovitskii1956approximation,Fisher1975}, and have been the topic of many works in recent years~\cite{deCastro2012exact,Bredies2013inverse,Duval2015exact,Azais2015Spike,FernandezGranda2016Super,Chambolle2016geometric,duval2017sparseI,duval2017sparseII,Denoyelle2017support,flinth2020linear,chi2020harnessing,garcia2020approximate}.
These include algorithmic schemes adapted to Dirac recovery, with recent applications to super-resolution microscopy~\cite{denoyelle2019sliding} and cloud tracking~\cite{courbot2019sparse}.
\paragraph{TV beyond measures: the splines realm.}
Although relevant in some applications, Dirac stream recovery remains quite specific. In particular, it cannot be applied to physical quantities admitting pointwise evaluations. Generalizations of the framework to the reconstruction of such objects started with~\cite{Unser2017splines}, with some roots already in~\cite{Fisher1975}. By adding a pseudo-differential operator to the regularization term, similarly to \eqref{eq:optipb}, this new reconstruction paradigm was able to reconstruct smooth solutions while maintaining the sparsity-promoting effect of the TV norm. In this setting, the Dirac streams are replaced by splines in the solution set. Splines are piecewise-smooth functions with finite rates of innovation, whose smoothness can be adapted by adequately choosing the differential operator~\cite{Schoenberg1973cardinal,Unser2017splines}.
Since then, algorithmic schemes have been developed~\cite{gupta2018continuous,flinth2019exact,Debarre2019,debarre2020sparsest}, and important extensions have been proposed, generalizing the framework to other Banach spaces such as measure spaces over spherical domains~\cite{simeoni2020functionalpaper,simeoni2020functional}, hybrid spaces~\cite{debarre2019hybrid}, multivariate settings~\cite{aziznejad2018l1}, or more general abstract settings~\cite{bredies2018sparsity,boyer2019representer,unser2019native}. Applications include geophysical and astronomical data reconstruction~\cite{simeoni2020functionalpaper,simeoni2020functional}, neural networks~\cite{unser2019representer,aziznejad2020deep}, and image analysis~\cite{novosadova2018image}.
\paragraph{Optimization in periodic function spaces.}
Several works for Dirac recovery have been developed over the torus, and are therefore tailored for periodic Dirac streams~\cite{Vetterli2002FRI,Fisher1975,deCastro2012exact,candes2014towards,Duval2015exact,Denoyelle2017support,simeoni2020cpgd}. In contrast to the non-periodic setting, the extension to arbitrary periodic functions has received only limited attention so far, mostly in the works of Simeoni~\cite{simeoni2020functionalpaper,simeoni2020functional}. There, the author considers functions over the $d$-dimensional sphere $\mathbb{S}^d$, which coincides with the univariate periodic case for $d=1$. The proposed reconstruction framework is however limited to invertible regularization operators, hence excluding important standard differential operators.
There are strong motivations to develop a periodic framework. Periodic reconstruction methods can be used for the parametric representation of closed curves~\cite{Delgado2012ellipse,Uhlmann2016hermite} or the interpolation of periodic functions~\cite{light1992interpolation,Jacob2002sampling}. They can also be used to model 2D acoustical and electromagnetic wave fields, encountered in the field of array signal processing~\cite{simeoni2019deepwave,pan2017frida,krim1996two}.
Inverse problems for periodic functions have been considered with Tikhonov $L_2$-regularizations, a complete treatment being proposed in~\cite{Badou}. The present paper can be seen as the TV adaptation of this work.
\paragraph{Generalized measurements.}
One specificity of the functional setting in comparison with its discrete counterpart is that the measurement process, modelled via measurement functionals $\nu_m$, deserves special attention.
For infinite-dimensional optimization problems, the space of linear measurement functionals is indeed intimately linked to the search space of the optimization problem.
For instance, the search space for non periodic Dirac recovery is the space of Radon measures, which can only be sensed by continuous functions vanishing at infinity~\cite{Fisher1975,Unser2017splines}, possibly with additional smoothness technical conditions for specific tasks such as support recovery~\cite{Duval2015exact}.
This problem has been addressed in various ways for generalizations of the Dirac recovery problem.
The main motivation is to determine whether some practical measurement procedures are adapted to the considered optimization task. This includes spatial and Fourier sampling, which were considered for instance in~\cite{Debarre2019,gupta2018continuous}, and which we shall also consider.
Most of the time, theoretical works provide sufficient conditions on the measurement functionals so that the optimization problem is well-posed and admits well-characterized solutions via representer theorems~\cite{Unser2017splines}. However, to the best of our knowledge, the only framework providing necessary and sufficient conditions on the measurement functionals to achieve this goal has been proposed in the non-periodic setting~\cite{unser2019native}. As we shall see, our present work provides complete answers to such questions in the periodic setting.
\subsection{Contributions and Outline} \label{sec:contributions}
This paper introduces a very generic total variation-based optimization framework for the reconstruction of \emph{periodic} functions. Our main contributions are the following:
\begin{itemize}
\item We provide a rigorous and exhaustive functional-analytic framework for optimization problems of the form~\eqref{eq:optipb}. This requires (i) identifying the Banach space, called the \emph{native space}, on which the optimization problem is well-posed, and (ii) characterizing the space of linear measurement functionals $\nu_m$, called the \emph{measurement space}, such that the measurement process $f \mapsto \langle \nu_m , f \rangle$ is well-defined and has relevant topological properties. To the best of our knowledge, our work is the first to date to provide a definitive answer to both questions for arbitrary spline-admissible operators ${\rm L}$.
\item We demonstrate a representer theorem for the extreme-point solutions of the optimization problem \eqref{eq:optipb}. The latter are periodic ${\rm L}$-splines whose number of knots is smaller than or equal to the number of measurements $M$. This is the periodic counterpart of recent representer theorems obtained in non-periodic settings \cite{Unser2017splines,boyer2019representer,bredies2018sparsity}. Our representer theorem is moreover not directly deducible from these results.
\item We give necessary and sufficient conditions for the admissibility of several measurement procedures often used in practice to sense the unknown function $f_0$. These conditions only depend on the properties of the regularizing operator ${\rm L}$. We consider notably the case of spatial sampling, Fourier sampling, and square-integrable filtering.
\item We exemplify our general framework on various classes of pseudo-differential operators and their corresponding periodic splines. This includes classical differential operators such as the derivative and the Laplacian, together with polynomial or fractional generalizations of the latter. We moreover introduce other operators whose splines, called Mat\'ern and Wendland splines, are characterized by their excellent localization properties.
All the results, including the sampling-admissibility, are exemplified on these operators.
To the best of our knowledge, this is the first characterization of the classical pseudo-differential operators that are sampling-admissible.
In addition, we provide Python routines for efficiently generating and manipulating the various multivariate periodic ${\rm L}$-splines considered in this paper on the public GitHub repository \cite{periodispline2020}. We give illustrative examples in dimensions $d=1,2,3$, which prevail in practice.
\end{itemize}
The paper is organized as follows.
In Section \ref{sec:opandsplines}, we introduce the class of spline-admissible periodic operators and their corresponding ${\rm L}$-splines.
The native space and the measurement space of the optimization problem~\eqref{eq:optipb} are constructed in Section \ref{sec:constructionNativeSpace}.
The periodic representer theorem associated to the optimization problem \eqref{eq:optipb} is derived in Section \ref{sec:RT}.
Examples of admissible operators and of linear functionals are given in Sections \ref{sec:differentialop} and \ref{sec:criteria} respectively.
Finally, we conclude in Section \ref{sec:discussion}.
\section{Periodic Functions, Operators, and Splines} \label{sec:opandsplines}
This section is dedicated to the introduction of periodic spline-admissible operators. We first provide some definitions and results on functions, operators, and splines in the periodic setting.
\subsection{Periodic Function Spaces and Generalized Fourier Series}\label{sec:vocabulary}
\paragraph{Generalized Periodic Functions.} The \emph{Schwartz space} of \emph{infinitely differentiable} periodic functions is denoted by $ \mathcal{S}( \mathbb{T}^d)$. It is endowed with its usual nuclear Fr\'echet topology~\cite{Treves1967} associated with the family of norms
\begin{equation}
\lVert \varphi \rVert_{N} := \left( \sum_{0 \leq \lvert \bm{n} \rvert \leq N} \lVert \mathrm{D}^{\bm{n}} \{\varphi\} \rVert_{2}^2 \right)^{1/2}, \qquad \forall \varphi \in \mathcal{S}( \mathbb{T}^d),
\end{equation}
where $N \in \mathbb{N}$, $\bm{n} := (n_1,\ldots, n_d) \in \mathbb{N}^d$, $\lvert \bm{n} \rvert := n_1 + \cdots + n_d$, and $\mathrm{D}^{\bm{n}} := \Pi_{i=1}^d \partial_i^{n_i}$.
The topological dual of $ \mathcal{S}( \mathbb{T}^d)$, denoted by $ \mathcal{S}'( \mathbb{T}^d)$, is the space of continuous linear functionals $f: \mathcal{S}( \mathbb{T}^d)\rightarrow \mathbb{R}$, called \emph{generalized periodic functions}. The bilinear map $\langle\cdot,\cdot\rangle: \mathcal{S}( \mathbb{T}^d) \times \mathcal{S}'( \mathbb{T}^d) \rightarrow \mathbb{R}, \,(\varphi,f)\mapsto\langle f,\varphi\rangle:=f(\varphi)$ is called the \emph{Schwartz duality product}. The inner product notation is not fortuitous since when $f \in L_1( \mathbb{T}^d)$, we have $\langle f, \varphi \rangle = \frac{1}{(2\pi)^d}\int_{ \mathbb{T}^d} f(\bm{x}) \varphi(\bm{x}) \mathrm{d}\bm{x}.$
The space $ \mathcal{S}'( \mathbb{T}^d)$ can be endowed with the \emph{weak* topology}, induced by the family of semi-norms $\{\|f\|_\varphi:=|\langle f,\varphi\rangle|, \varphi\in \mathcal{S}( \mathbb{T}^d)\}$. This is the topology of \emph{pointwise convergence}: a sequence of generalized periodic functions $f_n \in \mathcal{S}'( \mathbb{T}^d)$ converges to $f\in \mathcal{S}'( \mathbb{T}^d)$ if $\lim_{n\rightarrow\infty}\langle f_n , \varphi \rangle=\langle f, \varphi \rangle$ for any test function $\varphi \in \mathcal{S}( \mathbb{T}^d)$.
\paragraph{Generalized Fourier Series.} For $\bm{k}\in \mathbb{Z}^d$, we define the sinusoid function $e_{\bm{k}} : \bm{x} \mapsto \mathrm{e}^{ \mathrm{i} \langle \bm{x},\bm{k}\rangle}, $ which is trivially in $ \mathcal{S}( \mathbb{T}^d)$. Any generalized periodic function $f \in \mathcal{S}'( \mathbb{T}^d)$ can then be uniquely decomposed as $f = \sum_{\bm{k}\in \mathbb{Z}^d} \widehat{f}[\bm{k}] e_{\bm{k}}$,
where the convergence of the series holds in $ \mathcal{S}'( \mathbb{T}^d)$.
The sequence $(\widehat{f}[\bm{k}])_{\bm{k}\in \mathbb{Z}^d} \in \mathbb{C}^{ \mathbb{Z}^d}$, called the \emph{Fourier sequence} of $f$, is \emph{slowly growing}~\cite[Chapter VII]{Schwartz1966distributions}---\emph{i.e.}, such that $|\widehat{f} [\bm{k}]|=\mathcal{O}(\|\bm{k}\|^n)$ for some $n\in \mathbb{N}$. The \emph{Fourier coefficients} are given by $\widehat{f} [\bm{k}] := \langle f , e_{\bm{k}} \rangle$ for any $f \in \mathcal{S}'( \mathbb{T}^d)$. A periodic generalized function $\varphi \in \mathcal{S}'( \mathbb{T}^d)$ is in $ \mathcal{S}( \mathbb{T}^d)$ if and only if its Fourier sequence is \emph{rapidly decaying}---\emph{i.e.}, $|\widehat{\varphi} [\bm{k}]|=o(\|\bm{k}\|^{-n}),\, \forall n\in \mathbb{N}$.
\paragraph{Dirac Comb.} The \emph{Dirac comb} $\Sha\in \mathcal{S}'( \mathbb{T}^d)$ is characterized by the relation $\langle \Sha , \varphi \rangle := \varphi(0)$ for any $\varphi \in \mathcal{S}( \mathbb{T}^d)$. The Fourier sequence of $\Sha$ is hence $\widehat{\Sha} [\bm{k}] = \langle \Sha , e_{\bm{k}} \rangle = e_{\bm{k}}(0) = 1$, yielding $\Sha = \sum_{\bm{k}\in \mathbb{Z}^d} e_{\bm{k}}$. Seen as a generalized function over $ \mathbb{R}^d$, we recover the usual definition of the Dirac comb, \emph{i.e.}, $\Sha = \sum_{\bm{n} \in \mathbb{Z}^d} \delta( \cdot - 2\pi\bm{n})$, with $\delta$ the Dirac impulse.
\paragraph{Periodic Sobolev Spaces.} The \emph{periodic Sobolev space of smoothness $\tau \in \mathbb{R}$} is defined by
\begin{equation} \label{eq:sobolevspace}
\mathcal{H}_{2}^{\tau}( \mathbb{T}^d) := \left\{ f \in \mathcal{S}'( \mathbb{T}^d) , \ \lVert f \rVert_{\mathcal{H}_2^\tau}^2 := \sum_{\bm{k}\in \mathbb{Z}^d} ( 1 + \lVert \bm{k} \rVert^2)^{\tau} \lvert \widehat{f} [\bm{k}] \rvert^2 < \infty \right\}.
\end{equation}
It is a Hilbert space for the Hilbertian norm $\lVert \cdot \rVert_{\mathcal{H}_2^\tau}$. Moreover, we have for any $\tau_1, \tau_2 \in \mathbb{R}$ with $\tau_1 \leq \tau_2$ the dense topological embeddings
\begin{equation}
\mathcal{S}( \mathbb{T}^d) = \cap_{\tau \in \mathbb{R}} \mathcal{H}_{2}^{\tau}( \mathbb{T}^d) \subseteq \mathcal{H}_{2}^{\tau_2}( \mathbb{T}^d) \subseteq \mathcal{H}_{2}^{\tau_1}( \mathbb{T}^d) \subseteq \cup_{\tau \in \mathbb{R}} \mathcal{H}_{2}^{\tau}( \mathbb{T}^d) = \mathcal{S}'( \mathbb{T}^d).
\end{equation}
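For instance, the squared Sobolev norm in \eqref{eq:sobolevspace} is
directly computable from the Fourier coefficients, as the following
Python fragment illustrates for $d=1$ (a minimal sketch; the truncation
and names are ours):
\begin{verbatim}
# Minimal sketch (d = 1): Sobolev norm of f = cos(3x), whose only
# nonzero Fourier coefficients are f_hat[3] = f_hat[-3] = 1/2.
import numpy as np

kmax, tau = 64, 1.0
k = np.arange(-kmax, kmax + 1)
f_hat = np.where(np.abs(k) == 3, 0.5, 0.0)
norm = np.sqrt(np.sum((1 + k**2) ** tau * np.abs(f_hat) ** 2))
# here norm**2 = 2 * (1 + 9) * 0.25 = 5
\end{verbatim}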
\subsection{Periodic Spline-Admissible Operators and their Periodic Green's Function}
We denote by $\mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$ the space of \emph{linear}, \emph{shift-invariant} operators that are \emph{continuous} from $ \mathcal{S}'( \mathbb{T}^d)$ to itself.
The following characterization in terms of Fourier sequences is well-known \cite[Section II.A]{Badou}.
\begin{proposition} \label{prop:firsttrivialstuff}
Let ${\rm L} \in \mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$. Then, the $e_{\bm{k}}$ are eigenfunctions of ${\rm L}$ and the sequence of eigenvalues $(\widehat{L}[\bm{k}])_{\bm{k}\in \mathbb{Z}^d}\subset \mathbb{C}$ such that ${\rm L} e_{\bm{k}} = \widehat{L}[\bm{k}] e_{\bm{k}}$ is slowly growing. Moreover, we have that, for any $f \in \mathcal{S}'( \mathbb{T}^d)$,
\begin{equation} \label{eq:Lf}
{\rm L} \{f\} = \sum_{\bm{k}\in \mathbb{Z}^d} \widehat{L}[\bm{k}] \widehat{f}[\bm{k}] e_{\bm{k}},
\end{equation}
where the convergence holds in $ \mathcal{S}'( \mathbb{T}^d)$.
Conversely, any slowly growing sequence $(\widehat{L}[\bm{k}])_{\bm{k}\in \mathbb{Z}^d}$ specifies an operator ${\rm L} \in \mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$ via the relation \eqref{eq:Lf}.
\end{proposition}
\textit{Remark.} The relation \eqref{eq:Lf} indeed specifies an element of $ \mathcal{S} '( \mathbb{T}^d)$ since the sequence $(\widehat{L}[\bm{k}] \widehat{f}[\bm{k}])_{\bm{k}\in \mathbb{Z}^d}$ is slowly growing as the element-wise product between two slowly growing sequences. \\
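To make the correspondence of Proposition \ref{prop:firsttrivialstuff}
concrete, the following Python sketch applies an LSI operator to a
discretized periodic function through its Fourier sequence via
\eqref{eq:Lf}, in dimension $d=1$ (a simplified illustration with our own
names and truncation, not taken from the routines of
\cite{periodispline2020}):
\begin{verbatim}
# Minimal sketch (d = 1): apply an LSI operator to a periodic
# function through its Fourier sequence L_hat[k].  The function
# is sampled on a uniform grid of the torus, so that its Fourier
# coefficients are obtained with the FFT.
import numpy as np

n = 256
x = 2 * np.pi * np.arange(n) / n           # uniform grid on T
k = np.fft.fftfreq(n, d=1.0 / n)           # integer frequencies

f = np.cos(3 * x) + 0.5 * np.sin(7 * x)    # test function
f_hat = np.fft.fft(f) / n                  # Fourier coefficients

L_hat = 1j * k                             # example: L = D
Lf = np.real(n * np.fft.ifft(L_hat * f_hat))  # L{f} on the grid
\end{verbatim}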
\emph{Spline-admissible operators} are operators ${\rm L}\in\mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$ for which the notion of ${\rm L}$-spline is well-defined.
They include classical differential operators, together with their fractional versions~\cite{Unser2014sparse}.
Splines are usually considered over the complete real line~\cite{Unser2017splines}. We adapt here the construction to the periodic case.
\begin{definition}[Pseudoinverse] \label{def:pseudoinverse}
Let ${\rm L} \in \mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$.
We say that ${\rm L^{\dagger}} \in \mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$ is a \emph{pseudoinverse} of ${\rm L}$ if it satisfies the four relations $
{\rm L} {\rm L^{\dagger}} {\rm L} = {\rm L}$, ${\rm L^{\dagger}} {\rm L} {\rm L^{\dagger}} = {\rm L^{\dagger}}$, $({\rm L} {\rm L^{\dagger}})^* = {\rm L} {\rm L^{\dagger}}$, and $({\rm L^{\dagger}} {\rm L})^* = {\rm L^{\dagger}} {\rm L}$.
\end{definition}
\textit{Remark.} The operator ${\rm L^{\dagger}}$ is also called the Moore-Penrose pseudoinverse~\cite{campbell2009generalized}. When it exists, the pseudoinverse is known to be unique~\cite{ben2003generalized}.
In our case, we shall see thereafter that the two relations ${\rm L} {\rm L^{\dagger}} {\rm L} = {\rm L}$ and ${\rm L^{\dagger}} {\rm L} {\rm L^{\dagger}} = {\rm L^{\dagger}}$ are sufficient to characterize the pseudoinverse, and that they imply the self-adjoint relations $({\rm L} {\rm L^{\dagger}})^* = {\rm L} {\rm L^{\dagger}}$ and $({\rm L^{\dagger}} {\rm L})^* = {\rm L^{\dagger}} {\rm L}$.
\begin{definition}[Spline-Admissible Operator] \label{def:splineadmissible}
An operator ${\rm L} \in \mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$ is said to be \emph{spline-admissible} if
\begin{itemize}
\item it has a finite-dimensional null-space and
\item it admits a pseudoinverse operator ${\rm L^{\dagger}} \in \mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$.
\end{itemize}
\end{definition}
Spline-admissible operators, their null space, and their pseudoinverse can be readily characterized by their Fourier sequence as follows.
\begin{proposition} \label{prop:finitedimNL}
Let ${\rm L} \in \mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$ and set $K_{\rm L} := \{\bm{k} \in \mathbb{Z}^d, \ |\widehat{L}[\bm{k}]| \neq 0\}$ and $N_{\rm L} := \{\bm{k} \in \mathbb{Z}^d, \ |\widehat{L}[\bm{k}]| = 0\} = \mathbb{Z}^d \backslash K_{{\rm L}}$.
Then,
\begin{equation} \label{eq:NLKL}
\mathcal{N}_{\Lop} = \{ f \in \mathcal{S}'( \mathbb{T}^d):\, \widehat{f}[\bm{k}] = 0, \,\forall \bm{k} \in K_{{\rm L}} \} = \overline{\mathrm{Span}} \{e_{\bm{k}}, \ \bm{k} \in N_{{\rm L}} \},
\end{equation}
where $ \overline{\mathrm{Span}} \ A$ is the closure of the span of $A$ for the topology of $ \mathcal{S}'( \mathbb{T}^d)$.
Hence, $\mathcal{N}_{\Lop}$ is finite dimensional if and only if finitely many $\widehat{L}[\bm{k}] $ are null.
Moreover, ${\rm L}$ admits a pseudoinverse ${\rm L^{\dagger}}\in\mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$ if and only if the sequence
\begin{equation} \label{eq:defL+seq}
\widehat{L^\dagger}[\bm{k}] =
\begin{cases}
\widehat{L}[\bm{k}]^{-1} &\text{ if } \bm{k} \in K_{\rm L}, \\
0 & \text{ otherwise}
\end{cases}
\end{equation}
is slowly growing. In that case, the Fourier sequence of the pseudoinverse ${\rm L}^\dagger$ is given by $( \widehat{L^\dagger}[\bm{k}] )_{\bm{k} \in \mathbb{Z}^d}$.
\end{proposition}
The first part of Proposition \ref{prop:finitedimNL} on the null space of ${\rm L}$ is well-known in the non-periodic setting with $d=1$, due to the following result: any shift-invariant and finite-dimensional linear subspace of $ \mathcal{S}'( \mathbb{R})$ consists of exponential polynomial functions~\cite{Unser2014sparse}.
This result has been adapted to the periodic setting in~\cite[Proposition 1]{Badou}. We provide here a simple proof for the general case $d\geq 1$.
\begin{proof}[Proof of Proposition \ref{prop:finitedimNL}]
We first observe that, for any ${\rm L} \in \mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$, the null space $\mathcal{N}_{\Lop}$ is the $ \mathcal{S}'( \mathbb{T}^d)$-closure of $\mathrm{Span} \{ e_{\bm{k}}, \ \bm{k} \notin K_{\Lop}\}$. This is simply due to the relation ${\rm L} f = \sum_{\bm{k} \inK_{\Lop}} \widehat{L}[\bm{k}] \widehat{f}[\bm{k}] e_{\bm{k}}$, from which one deduces that $f \in \mathcal{N}_{\Lop}$ if and only if $\widehat{f}[\bm{k}] = 0$ for every $\bm{k} \notin K_{\Lop}$, giving \eqref{eq:NLKL}.
It is then obvious that $\mathrm{dim}\, \mathcal{N}_{\Lop} = \mathrm{Card} ( \mathbb{Z}^d \backslash K_{\Lop})$, so that $\mathcal{N}_{\Lop}$ is finite dimensional if and only if $ \mathbb{Z}^d \backslash K_{\Lop}$ is finite.
If the sequence $ (\widehat{L^\dagger}[\bm{k}])_{\bm{k}\in \mathbb{Z}^d}$ defined in \eqref{eq:defL+seq} is slowly growing, then, denoting by ${\rm L^{\dagger}}$ the corresponding operator, we have, for any $f \in \mathcal{S} '( \mathbb{T}^d)$,
\begin{equation}
{\rm L^{\dagger}} \mathrm{L} {\rm L^{\dagger}} \{f\} = \sum_{\bm{k}\in K_{\mathrm{L}}} \widehat{L}[\bm{k}]^{-1} \widehat{L}[\bm{k}] \widehat{L}[\bm{k}]^{-1} \widehat{f}[\bm{k}] {e}_{\bm{k}}
=
\sum_{\bm{k}\in K_{\mathrm{L}}} \widehat{L}[\bm{k}]^{-1} \widehat{f}[\bm{k}] {e}_{\bm{k}}
= {\rm L^{\dagger}} \{f\},
\end{equation}
which implies that ${\rm L^{\dagger}} \mathrm{L} {\rm L^{\dagger}} = {\rm L^{\dagger}}$. We show identically that ${\rm L} {\rm L^{\dagger}} {\rm L} = {\rm L}$. Finally, we have ${\rm L}^\dagger{\rm L} f={\rm L}\Lop^\dagger f=\sum_{\bm{k}\in K_{\rm L}} \widehat{f}[\bm{k}] e_{\bm{k}}$, yielding trivially $({\rm L}^\dagger{\rm L})^\ast={\rm L}^\dagger{\rm L}$ and $({\rm L}\Lop^\dagger)^\ast={\rm L}\Lop^\dagger$. Therefore, ${\rm L^{\dagger}}$ is indeed the pseudoinverse of ${\rm L}$.
Conversely, if the pseudoinverse exists, we have in particular that
\begin{equation}
\widehat{L}[\bm{k}] {e}_{\bm{k}} = {\rm L} {e}_{\bm{k}} = {\rm L} {\rm L^{\dagger}} {\rm L} {e}_{\bm{k}} = \widehat{L}[\bm{k}]^2 \widehat{L^\dagger}[\bm{k}] {e}_{\bm{k}}.
\end{equation}
Hence, $ \widehat{L^\dagger}[\bm{k}] = \widehat{L}[\bm{k}]^{-1}$ as soon as $\widehat{L}[\bm{k}] \neq 0$. Similarly,
\begin{equation}
\widehat{L^\dagger}[\bm{k}] {e}_{\bm{k}} = {\rm L^{\dagger}} {e}_{\bm{k}} = {\rm L^{\dagger}} {\rm L} {\rm L^{\dagger}} {e}_{\bm{k}} = \widehat{L^\dagger}[\bm{k}]^2 \widehat{L }[\bm{k}] {e}_{\bm{k}},
\end{equation}
implying that $ \widehat{L^\dagger}[\bm{k}] = 0$ if $ \widehat{L}[\bm{k}] = 0$. This shows that the sequence $ \widehat{L^\dagger}$ is given by \eqref{eq:defL+seq}, implying in particular that this sequence, coming from a LSI continuous operator, is slowly growing.
\end{proof}
Thanks to Proposition \ref{prop:finitedimNL}, ${\rm L}$ is spline-admissible if and only if it vanishes on finitely many $e_{\bm{k}}$ and its Fourier sequence does not vanish faster than any rational function. This excludes notably convolution operators ${\rm L} f= h * f$ where the impulse response $h \in \mathcal{S}( \mathbb{T}^d)$ is such that the sequence $(\widehat{h}[\bm{k}])_{\bm{k}\in \mathbb{Z}^d}$ never vanishes and decays exponentially fast. For such operators indeed, the sequence $(\widehat{L}[\bm{k}]^{-1} := \widehat{h}[\bm{k}]^{-1})_{\bm{k}\in \mathbb{Z}^d}$ is not slowly growing and is therefore not the Fourier sequence of a LSI operator from $ \mathcal{S}'( \mathbb{T}^d)$ to itself.
With our notation, the pseudoinverse of a spline-admissible ${\rm L}$ is given by
\begin{equation}
{\rm L^{\dagger}} f = \sum_{\bm{k} \in K_{\Lop}} \frac{\widehat{f}[\bm{k}]}{\widehat{L}[\bm{k}]} e_{\bm{k}}, \qquad \forall f\in\mathcal{S}'(\mathbb{T}^d).
\end{equation}
The orthogonal projector on the null space $\mathcal{N}_{\Lop}$ of a spline-admissible operator satisfies
\begin{align} \label{projdef}
\mathrm{Proj}_{\mathcal{N}_{\Lop}} f &= \sum_{n=1}^{N_0} \widehat{f}[\bm{k}_n] e_{\bm{k}_n}, \qquad \forall f\in\mathcal{S}'(\mathbb{T}^d),
\end{align}
where $\{\bm{k}_1 ,\ldots, \bm{k}_{N_0}\} = N_{\mathrm{L}} = \mathbb{Z}^d \backslash K_{\mathrm{L}}$.
From the definition of the pseudoinverse, it is moreover easy to obtain
\begin{equation} \label{eq:LopKop}
\mathrm{Proj}_{\mathcal{N}_{\Lop}}= \mathrm{Id} -{\rm L} {\rm L^{\dagger}} = \mathrm{Id} -{\rm L^{\dagger}} {\rm L}. \end{equation}
We recover from \eqref{eq:LopKop} that a spline-admissible operator ${\rm L}$ is invertible, with inverse ${\rm L}^{-1} = {\rm L^{\dagger}}$, if and only if its null space is trivial.
\textit{Remark.} Spline-admissible operators are sometimes referred to as \emph{Fredholm operators}, which are operators between Banach spaces whose kernel (null space) and co-kernel (quotient of the output space by the range of the operator) are finite-dimensional. This concept can be extended to operators from $ \mathcal{S}'( \mathbb{T}^d)$ to itself, and a spline-admissible operator is therefore a Fredholm operator.
\begin{definition}[Spectral Growth]\label{def:growth}
Let ${\rm L}$ be a spline-admissible operator and $\widehat{L}$ its Fourier sequence. If there exist a parameter $\gamma \geq 0$, some constants $0<A \leq B < \infty$, and $k_0 \geq 0$ such that
\begin{equation} \label{eq:AGforglop}
A \lVert \bm{k} \rVert^{\gamma} \leq |\widehat{L} [ \bm{k} ]| \leq B \lVert \bm{k} \rVert^{\gamma}, \qquad \forall \bm{k} \in \mathbb{Z}^d \text{ such that } \lVert \bm{k} \rVert \geq k_0,
\end{equation}
then, we call $\gamma$ the \emph{spectral growth} of ${\rm L}$. For brevity, we write \eqref{eq:AGforglop} as $|\widehat{L} [ \bm{k} ]|=\Theta( \lVert \bm{k} \rVert^{\gamma}).$
\end{definition}
If it exists, the spectral growth of a spline-admissible operator is unique. It measures the impact of ${\rm L}$ on the smoothness of the input functions. For instance, for any $\tau \in \mathbb{R}$, $f \in \mathcal{H}_2^\tau( \mathbb{T}^d)$ if and only if ${\rm L} f \in \mathcal{H}_2^{\tau - \gamma} ( \mathbb{T}^d)$, where the periodic Sobolev spaces are defined in \eqref{eq:sobolevspace}. Note that the existence of $\gamma$ is not guaranteed.
This is typically the case if the Fourier sequence of ${\rm L}$ does not behave purely polynomially (\emph{e.g.}, when $d = 1$ and $\widehat{{\rm L}}[{k}] = \lvert {k} \rvert \log ( 1 + \lvert {k} \rvert )$) or alternates between two polynomial behaviors (\emph{e.g.}, $\widehat{{\rm L}}[2k] = k$ and $\widehat{{\rm L}}[2k+1] = k^2$). However, most of the usual pseudo-differential operators admit a spectral growth, as will be exemplified in Section \ref{sec:differentialop}.
As we shall see, an important ingredient for the definition of periodic splines is the notion of Green's function of an operator ${\rm L}$. It is defined as the ${\rm L}$-primitive of the Dirac comb $\Sha$.
\begin{definition}[Green's Function] \label{def:green}
Let ${\rm L}$ be a spline-admissible operator with pseudoinverse ${\rm L^{\dagger}}$. The generalized function $g_{{\rm L}} := {\rm L^{\dagger}} \Sha \in \mathcal{S}'( \mathbb{T}^d)$ is called the \emph{Green's function} of ${\rm L}$.
\end{definition}
The Fourier sequence $(\widehat{g}_{\rm L} [\bm{k}])_{\bm{k}\in \mathbb{Z}^d}$ of the Green's function of ${\rm L}$ coincides with the Fourier sequence of ${\rm L^{\dagger}}$. The proposed notion of Green's function differs from the usual convention for pseudo-differential operators acting on functions over $ \mathbb{R}^d$, where Green's functions are defined as fundamental solutions of ${\rm L} g_{{\rm L}}= \delta$. This convention is however not adapted to the torus (for which $\delta$ is replaced by the Dirac comb $\Sha$). Indeed, there exists no periodic generalized function $g_{\rm L}$ satisfying ${\rm L} g_{{\rm L}}= \Sha$ as soon as the null space of ${\rm L}$ is non trivial, because the generalized function ${\rm L} g_{{\rm L}}$ has vanishing Fourier coefficients at the null space frequencies, contrary to $\Sha$. Definition \ref{def:green} is the periodic adaptation of usual spherical Green's function constructions, such as the ones used in \cite[Chapter 4]{freeden2008spherical} and \cite[Definition 5]{simeoni2020functionalpaper}. Note that the two notions coincide when the operator ${\rm L}$ is invertible \cite[Proposition 4.3]{simeoni2020functional}. \\
\textbf{Running example ${\rm L} = {\rm D}^N$, $N\geq 1$.}
The Fourier sequence of ${\rm L} = {\rm D}^N$ is $\widehat{D^N} [k] = (\mathrm{i} k)^N$.
Then, the null space of ${\rm D}^N$ consists of the constant functions and is of dimension $N_0 = 1$ with $N_{\rm L} = \{0\}$.
The pseudoinverse of ${\rm D}^N$ is the operator with Fourier sequence $\widehat{(D^N)^{\dagger}} [{k}] = \frac{1-\delta[{k}]}{(\mathrm{i} k)^N}$ with $\delta[\cdot]$ the Kronecker delta, which is therefore the Fourier sequence of $g_{{\rm D}^N}$.
Having a finite-dimensional null space and a pseudoinverse, the operator ${\rm D}^N$ is spline-admissible in the sense of Definition \ref{def:splineadmissible}. Moreover, it admits a spectral growth $\gamma = N$.
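For illustration, the Green's function $g_{{\rm D}^N}$ can be evaluated by
truncated Fourier synthesis, in the spirit of the routines of
\cite{periodispline2020} (the Python sketch below is ours; the names and
the truncation level are arbitrary):
\begin{verbatim}
# Minimal sketch: truncated Fourier synthesis of g_{D^N}, whose
# Fourier sequence is (1 - delta[k]) / (i k)^N.  The +k and -k
# modes are complex conjugates, so their sum is real-valued.
import numpy as np

def green_DN(x, N, kmax=200):
    """Evaluate g_{D^N}(x) from its Fourier modes |k| <= kmax."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for kk in range(1, kmax + 1):
        ghat = 1.0 / (1j * kk) ** N
        g = g + 2.0 * np.real(ghat * np.exp(1j * kk * x))
    return g

x = np.linspace(0, 2 * np.pi, 512)
g = green_DN(x, N=2)   # piecewise quadratic on (0, 2*pi)
\end{verbatim}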
\subsection{Periodic ${\rm L}$-splines}
The class of spline-admissible operators acting on generalized periodic functions allows us to adapt the classical notion of splines~\cite{Unser1999splines} to the periodic setting in full generality.
\begin{definition}[Periodic ${\rm L}$-spline] \label{def:periodicsplines}
Let ${\rm L}$ be a spline-admissible operator.
A \emph{periodic ${\rm L}$-spline} is a function $f \in \mathcal{S}'( \mathbb{T}^d)$ such that
\begin{equation} \label{eq:Lopfspline}
{\rm L} f = \sum_{k=1}^K a_k \Sha( \cdot - \bm{x}_k),
\end{equation}
where $K \geq 0$ is the number of knots ($K=0$ corresponds to the case where ${\rm L} f = 0$, \emph{i.e.}, $f$ is in the null space of ${\rm L}$), $\bm{x}_k \in \mathbb{T}^d$ are the distinct \emph{knots}, and $a_k \in \mathbb{R} \backslash\{0\}$ are the weights of $f$. The pairs $\{(a_k,\bm{x}_k), \, k=1,\ldots,K \}$ are called the \emph{innovations} of the periodic ${\rm L}$-spline.
\end{definition}
In words, a periodic ${\rm L}$-spline is a generalized periodic function whose ${\rm L}$-derivative is a Dirac stream with finite rate of innovation \cite{Vetterli2002FRI}---\emph{i.e.}, it has a finite number of innovations per period.
It is worth noting that the weights of a periodic ${\rm L}$-spline fulfill a linear system as soon as the null space of ${\rm L}$ is non trivial. For instance, if $ \widehat{L}[\bm{0}] = {\rm L} e_{\bm{0}} = 0$, as is the case for ${\rm L} = \mathrm{D}$ in dimension $d=1$, then, any ${\rm L}$-spline $f$ with weights $a_k$ satisfies $ \sum_{k=1}^K a_k = \widehat{\mathrm{L} f }[\bm{0}] = \widehat{L}[\bm{0}] \widehat{f}[\bm{0}] = 0$.
We generalize this idea to any spline-admissible operator.
\begin{proposition} \label{prop:Mnkmatrix}
Let ${\rm L} \in \mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$ be a spline-admissible operator with Green's function $g_{{\rm L}}\in\mathcal{S}'(\mathbb{T}^d)$ and null space $\mathcal{N}_{\Lop}$ with null space frequencies $N_{{\rm L}}:=\{\bm{k}_1,\ldots,\bm{k}_{N_0}\}\subset\mathbb{Z}^{d}$.
There exists an ${\rm L}$-spline of the form \eqref{eq:Lopfspline} if and only if the weight vector $\bm{a}:=(a_1,\ldots,a_K)\in \mathbb{R}^K$ satisfies $\bm{\mathrm{M}} \bm{a} = \bm{0}$ with $\bm{\mathrm{M}} \in \mathbb{R}^{N_0 \times K }$ the matrix whose entries are
\begin{equation} \label{eq:matrixM}
M[n,k] = \mathrm{e}^{- \mathrm{i} \langle \bm{k}_n, \bm{x}_k \rangle } , \qquad \forall n=1,\ldots , N_0, \forall k =1, \ldots , K.
\end{equation}
In that case, the generic form of a periodic ${\rm L}$-spline $f$ satisfying \eqref{eq:Lopfspline} is
\begin{equation} \label{eq:splineform}
f = \sum_{k=1}^K a_k g_{{\rm L}}( \cdot -\bm{x}_k) + p
\end{equation}
with $\bm{\mathrm{M}} \bm{a} = \bm{0}$ and $p \in \mathcal{N}_{\Lop}$.
\end{proposition}
\begin{proof}
Assume first that $f$ satisfies \eqref{eq:Lopfspline}. Then, we have that, for any $1 \leq n \leq N_0$,
\begin{equation}
0 = \widehat{L}[\bm{k}_n ] \widehat{f}[\bm{k}_n] = \langle {\rm L} f , e_{\bm{k}_n} \rangle = \sum_{k=1}^K a_k \mathrm{e}^{- \mathrm{i} \langle \bm{x}_k , \bm{k}_n\rangle}= (\bm{\mathrm{M}} \bm{a})_n,
\end{equation}
or equivalently, $\bm{\mathrm{M}} \bm{a} = \bm{0} \in \mathbb{R}^{N_0}$.
Assume now that $\bm{a} \in \mathbb{R}^K$ satisfies $\bm{\mathrm{M}} \bm{a} = \bm{0}$. We set $w = \sum_{k=1}^K a_k \Sha( \cdot - \bm{x}_k)$.
Then,
\begin{equation}
\mathrm{Proj}_{\mathcal{N}_{\Lop}} \{w\} = \sum_{n=1}^{N_0} \widehat{w} [\bm{k}_n] e_{\bm{k}_n} = \sum_{n=1}^{N_0} \left( \sum_{k=1}^K a_k \mathrm{e}^{- \mathrm{i} \langle \bm{x}_k , \bm{k}_n\rangle} \right) e_{\bm{k}_n} = \sum_{n=1}^{N_0} (\bm{\mathrm{M}} \bm{a})_n \, e_{\bm{k}_n} = 0.
\end{equation}
This implies, using \eqref{eq:LopKop}, that
\begin{equation}
{\rm L} {\rm L^{\dagger}} w = w - \mathrm{Proj}_{\mathcal{N}_{\Lop}} w = w.
\end{equation}
Then, $f = {\rm L^{\dagger}} w=\sum_{k=1}^K a_k g_{{\rm L}}( \cdot -\bm{x}_k)$ satisfies ${\rm L} f = {\rm L} {\rm L^{\dagger}} w = w = \sum_{k=1}^K a_k \Sha( \cdot - \bm{x}_k)$, and is therefore a periodic ${\rm L}$-spline.
Moreover, two periodic ${\rm L}$-spline solutions of \eqref{eq:Lopfspline} only differ by a null space component $p \in \mathcal{N}_{\Lop}$, implying \eqref{eq:splineform}.
Finally, when $K \leq N_0$ and the matrix $\bm{\mathrm{M}}$ has full column rank $K$, the condition $\bm{\mathrm{M}} \bm{a} = \bm{0}$ forces $\bm{a}=\bm{0}$, in which case $f = p \in \mathcal{N}_{\Lop}$ is a null space element.
\end{proof}
Proposition \ref{prop:Mnkmatrix} essentially tells us that periodic ${\rm L}$-splines can be written as the sum of a linear combination of shifted Green's functions and a trigonometric polynomial in the null space of ${\rm L}$. The shifts are moreover given by the spline knots, and the weights are constrained to satisfy a certain annihilation equation. In particular, an important consequence of Proposition \ref{prop:Mnkmatrix} is that the Green's function $g_{\rm L} = {\rm L^{\dagger}} \Sha$ is \emph{not} a periodic ${\rm L}$-spline when the null space of ${\rm L}$ is nontrivial. Indeed, we have that ${\rm L} {\rm L^{\dagger}} \Sha = \Sha - \mathrm{Proj}_{\mathcal{N}_{\Lop}} \Sha$, which is not of the form \eqref{eq:Lopfspline} due to the trigonometric polynomial $\mathrm{Proj}_{\mathcal{N}_{\Lop}} \Sha = \sum_{n=1}^{N_0} e_{\bm{k}_n}$. \\
\textbf{Running example ${\rm L} = {\rm D}^N$, $N\geq 1$.}
Let $f$ be a periodic $({\rm D}^N)$-spline with knots $x_1, \ldots , x_K$. Then, $f$ is a piecewise-polynomial. More precisely, $f$ is a polynomial of degree at most $(N-1)$ on each interval $[x_k , x_{k+1}]$, $k = 1, \ldots, K$ (with the convention that $x_{K+1} = x_1$). Moreover, for $N \geq 2$, $f$ has continuous derivatives up to order $(N-2)$.
In particular, a periodic ${\rm D}$-spline is piecewise constant and a periodic $({\rm D}^2)$-spline is piecewise linear and continuous.
A non constant periodic $({\rm D}^N)$-spline $f$ has at least $K = 2$ knots; in particular, there is no periodic $({\rm D}^N)$-spline with a single knot. Indeed, such a spline would satisfy $\mathrm{D}^N f = a_1 \Sha(\cdot - x_1)$ and hence, according to Proposition \ref{prop:Mnkmatrix}, $a_1 = 0$ (the matrix $\bm{\mathrm{M}}$ reduces to the single entry $\mathrm{e}^{- \mathrm{i} 0 \cdot x_1} = 1$), so that $f$ would be constant, which we excluded.
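To make Proposition \ref{prop:Mnkmatrix} concrete, the following Python snippet (a minimal sketch; the function name and conventions are ours and do not come from any cited implementation) assembles the matrix $\bm{\mathrm{M}}$ in dimension $d=1$ and tests the annihilation condition $\bm{\mathrm{M}} \bm{a} = \bm{0}$.
\begin{verbatim}
import numpy as np

# Sketch: assemble M[n, k] = exp(-i <k_n, x_k>) in dimension d = 1 and test
# the annihilation condition M a = 0 of Proposition "prop:Mnkmatrix".
def annihilation_matrix(null_freqs, knots):
    return np.exp(-1j * np.outer(null_freqs, knots))

# L = D^N has the single null space frequency k = 0.
M = annihilation_matrix(null_freqs=[0], knots=[0.0, np.pi])
a = np.array([1.0, -1.0])
print(np.allclose(M @ a, 0))  # True: (1, -1) is an admissible weight vector
# With a single knot (K = 1), M reduces to [1] and forces a = 0, in
# accordance with the running example above.
\end{verbatim}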
\section{Periodic Native Spaces and Measurement Spaces} \label{sec:constructionNativeSpace}
The goal of this section is to construct the Banach functions spaces associated to the optimization problem \eqref{eq:optipb}. More precisely, we shall introduce:
\begin{itemize}
\item The native space $\mathcal{M}_{\Lop}( \mathbb{T}^d)$: the generalized functions $f$ for which the regularization term $\lVert {\rm L} f \rVert_{\mathcal{M}}$ is finite.
\item The measurement space $\mathcal{C}_{\Lop}( \mathbb{T}^d)$: the generalized functions $\nu$ that can be used as linear functionals over the native space $\mathcal{M}_{\Lop}( \mathbb{T}^d)$.
\end{itemize}
The measurement space and the native space form a dual pair, in the same way the space of periodic continuous functions $\mathcal{C}( \mathbb{T}^d)$ and the space of periodic Radon measures $\mathcal{M}( \mathbb{T}^d)$ do. We recall this fact and other useful ones in Section \ref{sec:CandM}, before constructing the native and measurement spaces in Sections \ref{sec:nativespace} and \ref{sec:measurementspace}, respectively.
\subsection{The spaces $\mathcal{M}( \mathbb{T}^d)$ and $\mathcal{C}( \mathbb{T}^d)$} \label{sec:CandM}
The space of continuous periodic functions is denoted by $\mathcal{C}( \mathbb{T}^d)$.
It is a Banach space when endowed with the supremum norm $\lVert \varphi \rVert_{\infty} = \sup_{\bm{x}\in \mathbb{T}^d} \lvert \varphi(\bm{x}) \rvert$.
The space of periodic finite Radon measures is denoted by $\mathcal{M}( \mathbb{T}^d)$. According to the Riesz-Markov theorem~\cite{Gray1984shaping}, $\mathcal{M}( \mathbb{T}^d)$ is isometric to the space of continuous linear functionals over $\mathcal{C}( \mathbb{T}^d)$.
As is classical, we therefore make a complete identification between the space of periodic finite Radon measures and the dual of $\mathcal{C}( \mathbb{T}^d)$, \emph{i.e.},
\begin{equation} \label{eq:normM}
\mathcal{M}( \mathbb{T}^d) = (\mathcal{C}( \mathbb{T}^d) , \lVert \cdot \rVert_\infty)'.
\end{equation}
Then, $\mathcal{M}( \mathbb{T}^d)$ is a Banach space for the \emph{total variation (TV)} norm
\begin{equation} \label{eq:Mnorm}
\lVert w \rVert_{\mathcal{M}} = \sup_{\varphi \in \mathcal{C}( \mathbb{T}^d), \ \lVert \varphi \rVert_\infty = 1} \langle w , \varphi \rangle,
\end{equation}
with $ \langle w ,\varphi \rangle = \int_{ \mathbb{T}^d} \varphi (\bm{x}) w(\mathrm{d}\bm{x})$. When $w \in L_1( \mathbb{T}^d)$, we have that $\lVert w \rVert_{\mathcal{M}} = \lVert w \rVert_{1}$.
For $w = \sum_{k=1}^K a_k \Sha(\cdot - \bm{x}_k)$ with distinct $\bm{x}_k \in \mathbb{T}^d$, we have $\lVert w \rVert_{\mathcal{M}} = \sum_{k=1}^K \lvert a_k\rvert = \lVert \bm{a}\rVert_1$.
Since the Schwartz space $ \mathcal{S}( \mathbb{T}^d)$ is dense in $\mathcal{C}( \mathbb{T}^d)$ for the norm $\lVert \cdot \rVert_\infty$ \cite[Theorem 4.25]{rudin2006real}, we can restrict the supremum
over functions in the Schwartz space $ \mathcal{S}( \mathbb{T}^d)$ in \eqref{eq:Mnorm}. This allows us to extend the total variation norm over $ \mathcal{S}'( \mathbb{T}^d)$ and to deduce that
\begin{equation}
\mathcal{M}( \mathbb{T}^d) = \left\{ w \in \mathcal{S}'( \mathbb{T}^d), \ \lVert w \rVert_{\mathcal{M}}= \sup_{\varphi \in \mathcal{S}( \mathbb{T}^d), \ \lVert \varphi \rVert_\infty = 1} \langle w , \varphi \rangle < \infty \right\}.
\end{equation}
Note that we have the continuous embeddings
\begin{equation} \label{eq:easyembed}
\mathcal{S}( \mathbb{T}^d) \subseteq \mathcal{C}( \mathbb{T}^d) \subseteq \mathcal{S}'( \mathbb{T}^d) \quad \text{and} \quad \mathcal{S}( \mathbb{T}^d) \subseteq \mathcal{M}( \mathbb{T}^d) \subseteq \mathcal{S}'( \mathbb{T}^d).
\end{equation}
In what follows, $\mathcal{M}( \mathbb{T}^d)$ will be endowed with the \emph{weak* topology} defined in Section \ref{sec:vocabulary}. The latter is indeed more convenient for our purposes than the Banach topology induced by the TV norm \eqref{eq:Mnorm}.
\subsection{The Native Space of a Spline-admissible Operator} \label{sec:nativespace}
We define the native space on which the optimization problem \eqref{eq:optipb} is well-defined, and identify its structure.
\begin{definition}[Native Space]
The \emph{native space} associated to the spline-admissible operator ${\rm L}$ is defined as
\begin{equation} \label{eq:ML}
\mathcal{M}_{{\rm L}} ( \mathbb{T}^d) := \left\{ f \in \mathcal{S}'( \mathbb{T}^d), \ {\rm L} f \in \mathcal{M}( \mathbb{T}^d) \right\}.
\end{equation}
\end{definition}
\begin{theorem}[Banach structure of the native space] \label{theo:whatisML}
Let ${\rm L}$ be a spline-admissible operator with finite dimensional null space $\mathcal{N}_{\Lop}$ and pseudoinverse ${\rm L^{\dagger}}$.
We also fix $p\in [1,\infty]$.
Then, $\mathcal{M}_{\Lop}( \mathbb{T}^d)$ is the direct sum
\begin{equation}
\mathcal{M}_{\Lop} ( \mathbb{T}^d) = {\rm L^{\dagger}} ( \mathcal{M} ( \mathbb{T}^d) ) \oplus \mathcal{N}_{\Lop},
\end{equation}
where ${\rm L^{\dagger}} (\mathcal{M}( \mathbb{T}^d)) = \{{\rm L^{\dagger}} w , \ w \in \mathcal{M}( \mathbb{T}^d) \}$.
It is a Banach space for the norm
\begin{equation} \label{eq:normML}
\lVert f \rVert_{\mathcal{M}_{\Lop},p} = \left( \lVert {\rm L} f \rVert_{\mathcal{M}}^p + \lVert \mathrm{Proj}_{\mathcal{N}_{\Lop}} f \rVert_{2}^p \right)^{1/p},
\end{equation}
with $p\in [1, +\infty]$ and the usual adaptation for $p = \infty$.
Moreover, we have the topological embeddings
\begin{equation}\label{eq:embeddingML}
\mathcal{S}( \mathbb{T}^d) \subseteq \mathcal{M}_{\Lop}( \mathbb{T}^d) \subseteq \mathcal{S}'( \mathbb{T}^d).
\end{equation}
The operator ${\rm L}$ is continuous from $\mathcal{M}_{\Lop}( \mathbb{T}^d)$ to $\mathcal{M}( \mathbb{T}^d)$. Moreover, any periodic ${\rm L}$-spline is in $\mathcal{M}_{\Lop}( \mathbb{T}^d)$.
Finally, the norms $\lVert \cdot \rVert_{\mathcal{M}_{\Lop},p}$ are all equivalent on $\mathcal{M}_{\Lop}( \mathbb{T}^d)$.
\end{theorem}
\begin{proof}
\textit{Direct sum.}
Let $f \in \mathcal{M}_{\Lop}( \mathbb{T}^d)$. Then, from \eqref{eq:LopKop}, and since ${\rm L} f \in \mathcal{M}( \mathbb{T}^d)$ by assumption,
$$f = {\rm L^{\dagger}} \{ {\rm L} f \} + \mathrm{Proj}_{\mathcal{N}_{\Lop}} f \in {\rm L^{\dagger}} (\mathcal{M}( \mathbb{T}^d)) + \mathcal{N}_{\Lop}.$$
Now, let $f = {\rm L^{\dagger}} w + p \in {\rm L^{\dagger}} (\mathcal{M}( \mathbb{T}^d)) + \mathcal{N}_{\Lop}$. Then, ${\rm L} f = {\rm L} {\rm L^{\dagger}} w + {\rm L} p = w - \mathrm{Proj}_{\mathcal{N}_{\Lop}} w$, where we used \eqref{eq:LopKop} again. Since $\mathcal{N}_{\Lop} \subset \mathcal{S}( \mathbb{T}^d) \subset \mathcal{M}( \mathbb{T}^d)$, we deduce that ${\rm L} f = w - \mathrm{Proj}_{\mathcal{N}_{\Lop}} w \in \mathcal{M}( \mathbb{T}^d)$. This shows that $\mathcal{M}_{\Lop} ( \mathbb{T}^d) = {\rm L^{\dagger}} ( \mathcal{M} ( \mathbb{T}^d) ) + \mathcal{N}_{\Lop}$.
If now $f \in {\rm L^{\dagger}} ( \mathcal{M} ( \mathbb{T}^d) ) \cap \mathcal{N}_{\Lop}$, then $\widehat{f} [\bm{k}] = 0$ for $\bm{k}\in N_{{\rm L}}$ because $f = {\rm L^{\dagger}} w$ for some $w$, and $\widehat{f}[\bm{k}] = 0$ for $\bm{k}\notin N_{{\rm L}}$ because ${\rm L} f = 0$. Hence, $f=0$ and the sum $ {\rm L^{\dagger}} ( \mathcal{M} ( \mathbb{T}^d) ) \oplus \mathcal{N}_{\Lop}$ is direct.
\textit{Banach space structure.}
Clearly, \eqref{eq:normML} is a semi-norm on $\mathcal{M}_{\Lop}( \mathbb{T}^d)$. Moreover, the relation $\lVert f \rVert_{\mathcal{M}_{\Lop},p} = 0$ implies that ${\rm L} f = \mathrm{Proj}_{\mathcal{N}_{\Lop}} f = 0$, which is equivalent to $f=0$.
Finally, ${\rm L^{\dagger}} ( \mathcal{M} ( \mathbb{T}^d) )$ inherits the completeness of $\mathcal{M}( \mathbb{T}^d)$ and the finite-dimensional space $\mathcal{N}_{\Lop}$ is also a Banach space for the $L_2$-norm. Therefore, the direct sum $\mathcal{M}_{\Lop}( \mathbb{T}^d)$ is also a Banach space for the direct sum norm \eqref{eq:normML}.
\textit{Embedding relations.}
We remark that \eqref{eq:LopKop} implies that $\varphi = {\rm L^{\dagger}} \{ {\rm L} \varphi \} + \mathrm{Proj}_{\mathcal{N}_{\Lop}} \varphi$ for any $\varphi \in \mathcal{S}( \mathbb{T}^d)$. Moreover, ${\rm L} \varphi \in \mathcal{S}( \mathbb{T}^d) \subseteq \mathcal{M}( \mathbb{T}^d)$ because ${\rm L}$ is continuous from $ \mathcal{S}( \mathbb{T}^d)$ to itself (as any LSI operator continuous from $ \mathcal{S}'( \mathbb{T}^d)$ to itself) and $\mathrm{Proj}_{\mathcal{N}_{\Lop}} \varphi \in \mathcal{N}_{\Lop}$. This shows that $ \mathcal{S}( \mathbb{T}^d) \subset \mathcal{M}_{\Lop} ( \mathbb{T}^d)$ (set inclusion).
The identity is moreover continuous from $ \mathcal{S}( \mathbb{T}^d)$ to $\mathcal{M}_{\Lop} ( \mathbb{T}^d)$. Indeed, if $\varphi_k$ converges to $\varphi$ in $ \mathcal{S}( \mathbb{T}^d)$, then ${\rm L} \varphi_k$ converges to ${\rm L} \varphi$ in $ \mathcal{S}( \mathbb{T}^d)$, hence in $\mathcal{M}( \mathbb{T}^d)$. Moreover, $\mathrm{Proj}_{\mathcal{N}_{\Lop}} \varphi_k$ also converges to $\mathrm{Proj}_{\mathcal{N}_{\Lop}} \varphi$ in $ \mathcal{S}( \mathbb{T}^d)$, hence in $L_2( \mathbb{T}^d)$. We then have that $$\lVert \varphi_k - \varphi\rVert_{\mathcal{M}_{\Lop},p}^p = \lVert {\rm L} \varphi_k - {\rm L} \varphi\rVert_{\mathcal{M}}^p + \lVert \mathrm{Proj}_{\mathcal{N}_{\Lop}} \varphi_k - \mathrm{Proj}_{\mathcal{N}_{\Lop}} \varphi \rVert_{2}^p \rightarrow 0,$$ as expected. This demonstrates the embedding $ \mathcal{S}( \mathbb{T}^d) \subseteq \mathcal{M}_{\Lop}( \mathbb{T}^d)$.
For the other embedding, we observe that $f = {\rm L^{\dagger}} w + p \in \mathcal{M}_{\Lop}( \mathbb{T}^d) = {\rm L^{\dagger}} ( \mathcal{M} ( \mathbb{T}^d) ) \oplus \mathcal{N}_{\Lop}$ has the Fourier sequence $\widehat{f}[\bm{k}] = \widehat{L^\dagger}[\bm{k}]\widehat{w}[\bm{k}]+ \widehat{p}[\bm{k}]$, which is clearly slowly growing as the sum and product of slowly growing sequences, hence $\mathcal{M}_{\Lop}( \mathbb{T}^d) \subset \mathcal{S}'( \mathbb{T}^d)$ (set inclusion).
Now, there exists some $\tau < 0$ such that $\mathcal{M}( \mathbb{T}^d) \subseteq \mathcal{H}_{2}^{\tau}( \mathbb{T}^d)$, where $\mathcal{H}_{2}^{\tau}( \mathbb{T}^d)$ is the Sobolev space of smoothness $\tau$. Then, ${\rm L^{\dagger}}$ is continuous from $ \mathcal{S}'( \mathbb{T}^d)$ to itself, and hence there exists $\tau'$ such that ${\rm L^{\dagger}} : \mathcal{H}_{2}^\tau( \mathbb{T}^d) \rightarrow \mathcal{H}_{2}^{\tau'} ( \mathbb{T}^d)$ continuously~\cite{Simon2003distributions}. By restriction, ${\rm L^{\dagger}}$ is also continuous from $\mathcal{M}( \mathbb{T}^d)$ to $\mathcal{H}_{2}^{\tau'}( \mathbb{T}^d)$. Then, one has that $\mathcal{M}_{\Lop}( \mathbb{T}^d) \subseteq \mathcal{H}_{2}^{\tau'}( \mathbb{T}^d)$, since $\mathcal{M}_{\Lop}( \mathbb{T}^d) = {\rm L^{\dagger}}(\mathcal{M}( \mathbb{T}^d)) \oplus \mathcal{N}_{\Lop}$ and $\mathcal{N}_{\Lop} \subset \mathcal{S}( \mathbb{T}^d) \subset \mathcal{H}_{2}^{\tau'}( \mathbb{T}^d)$. Finally, since $\mathcal{H}_{2}^{\tau'}( \mathbb{T}^d) \subseteq \mathcal{S}'( \mathbb{T}^d)$, this concludes the proof.
\textit{Continuity of ${\rm L}$.}
We simply remark that $\lVert {\rm L} f \rVert_{\mathcal{M}} \leq \lVert f \rVert_{\mathcal{M}_{\Lop},p}$, implying the continuity of ${\rm L}$ from $\mathcal{M}_{\Lop}( \mathbb{T}^d)$ to $\mathcal{M}( \mathbb{T}^d)$.
\textit{${\rm L}$-splines are in $\mathcal{M}_{\Lop}( \mathbb{T}^d)$.} This simply follows from the fact that an ${\rm L}$-spline $f$ satisfies ${\rm L} f = \sum_{k=1}^K a_k \Sha(\cdot - \bm{x}_k) \in \mathcal{M}( \mathbb{T}^d)$.
\textit{Norm equivalence.} The equivalence of the norms $\lVert \cdot \rVert_{\mathcal{M}_{\Lop},p}$ for $p\in [1,\infty]$ simply follows from the equivalence of the $\ell_p$-norms over $ \mathbb{R}^2$.
\end{proof}
\subsection{The Measurement Space of a Spline-admissible Operator} \label{sec:measurementspace}
A linear functional $\nu$ can be used for the linear measurements in \eqref{eq:optipb} under the condition that $\nu(f)$ is well-defined for any $f \in \mathcal{M}_{\Lop}( \mathbb{T}^d)$. Moreover, as we shall see in Section \ref{sec:RT}, to ensure the existence of extreme point solutions of \eqref{eq:optipb}, $\nu$ should be in the dual of $\mathcal{M}_{\Lop}( \mathbb{T}^d)$ when the latter is equipped with the weak* topology. In this section, we introduce the measurement space $\mathcal{C}_{\Lop}( \mathbb{T}^d)$, specify its Banach space structure, and show its relation with the native space $\mathcal{M}_{\Lop}( \mathbb{T}^d)$.
We recall that the adjoint of ${\rm L} \in \mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d)) $ is the unique operator ${\rm L}^* \in \mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$ such that
\begin{equation}
\langle {\rm L} \varphi_1, \varphi_2 \rangle = \langle \varphi_1 , {\rm L}^* \varphi_2\rangle
\end{equation}
for every $\varphi_1, \varphi_2 \in \mathcal{S} ( \mathbb{T}^d)$. The Fourier sequence of ${\rm L}^*$ is given by $\widehat{L^*}[\bm{k}] = \overline{\widehat{L}[\bm{k}]}$ for $\bm{k}\in \mathbb{Z}^d$, and the operators ${\rm L}$ and ${\rm L}^*$ share the same null space $\mathcal{N}_{\Lop}$.
\begin{definition}[Measurement Space]
We define the \emph{measurement space} associated to the spline-admissible operator ${\rm L}$ as
\begin{equation}\label{eq:CLdef}
\mathcal{C}_{\Lop}( \mathbb{T}^d) = \{ g \in \mathcal{S}'( \mathbb{T}^d) , \ {\rm L^{\dagger}}^* g \in \mathcal{C}( \mathbb{T}^d) \},
\end{equation}
where ${\rm L^{\dagger}}^*$ is the adjoint of ${\rm L^{\dagger}}$.
\end{definition}
\begin{theorem}[Banach structure of the measurement space] \label{theo:whatisCL}
Consider a spline-admissible operator ${\rm L}$ with finite-dimensional null space $\mathcal{N}_{\Lop}$. We also fix $q\in [1,\infty]$.
Then, $\mathcal{C}_{\Lop}( \mathbb{T}^d)$ is the direct sum
\begin{equation}\label{eq:CL}
\mathcal{C}_{\Lop}( \mathbb{T}^d) = {\rm L}^* ( \mathcal{C} ( \mathbb{T}^d) ) \oplus \mathcal{N}_{\Lop}.
\end{equation}
The measurement space $\mathcal{C}_{\Lop}( \mathbb{T}^d)$ is a Banach space for the norm
\begin{equation} \label{eq:normCL}
\lVert g \rVert_{\mathcal{C}_{\Lop},q} = \left( \lVert {\rm L^{\dagger}}^* g \rVert_{\infty}^q + \lVert \mathrm{Proj}_{\mathcal{N}_{\Lop}} g \rVert_{2}^q \right)^{1/q},
\end{equation}
with $q\in [1, +\infty]$ and the usual adaptation when $q=\infty$.
Moreover, we have the topological embeddings
\begin{equation}\label{eq:embeddingCL}
\mathcal{S}( \mathbb{T}^d) \subseteq \mathcal{C}_{\Lop}( \mathbb{T}^d) \subseteq \mathcal{S}'( \mathbb{T}^d),
\end{equation}
and the space $ \mathcal{S}( \mathbb{T}^d)$ is dense in $\mathcal{C}_{\Lop}( \mathbb{T}^d)$.
Finally, the norms $\lVert \cdot \rVert_{\mathcal{C}_{\Lop},q}$ are all equivalent on $\mathcal{C}_{\Lop}( \mathbb{T}^d)$ for $1 \leq q \leq \infty$.
\end{theorem}
\begin{proof}
The direct sum \eqref{eq:CL}, the Banach structure with norm \eqref{eq:normCL} (remembering that $\lVert \cdot \rVert_\infty$ is a Banach norm on $\mathcal{C}( \mathbb{T}^d)$), and the embeddings \eqref{eq:embeddingCL} are obtained with arguments identical to those of Theorem \ref{theo:whatisML}.
The only remaining part is the denseness of $ \mathcal{S}( \mathbb{T}^d)$ in $\mathcal{C}_{\Lop}( \mathbb{T}^d)$. Let $g = {\rm L}^* h + p \in \mathcal{C}_{\Lop}( \mathbb{T}^d)$, with $h \in \mathcal{C}( \mathbb{T}^d)$ and $p \in \mathcal{N}_{\Lop}$. The space $ \mathcal{S}( \mathbb{T}^d)$ is dense in $ \mathcal{C}( \mathbb{T}^d)$, hence there exists a sequence $(\varphi_k)_{k\geq 1}$ of functions in $ \mathcal{S}( \mathbb{T}^d)$ such that $\lVert h - \varphi_k \rVert_\infty \rightarrow 0$ when $k\rightarrow \infty$. We set $\psi_k = {\rm L}^* \varphi_k + p$. Then, $p \in \mathcal{N}_{\Lop} \subset \mathcal{S}( \mathbb{T}^d)$ and ${\rm L}^*$, as any operator in $\mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}^d))$, is continuous from $ \mathcal{S}( \mathbb{T}^d)$ to itself. We therefore have that $\psi_k \in \mathcal{S}( \mathbb{T}^d)$ for any $k \geq 1$.
Moreover, $g - \psi_k = {\rm L}^* \{ h - \varphi_k\} \in \mathcal{C}_{\Lop}( \mathbb{T}^d)$ and we have that
\begin{equation} \label{eq:gminuspsi}
\lVert g - \psi_k \rVert_{\mathcal{C}_{\Lop}, q} = \lVert {\rm L^{\dagger}}^* {\rm L}^* (h-\varphi_k) \rVert_\infty = \lVert (\mathrm{I} - \mathrm{Proj}_{\mathcal{N}_{\Lop}})^* (h-\varphi_k) \rVert_\infty = \lVert (\mathrm{I} - \mathrm{Proj}_{\mathcal{N}_{\Lop}}) (h-\varphi_k) \rVert_\infty ,
\end{equation}
where we used \eqref{eq:LopKop} and $\mathrm{Proj}_{\mathcal{N}_{\Lop}} = \mathrm{Proj}_{\mathcal{N}_{\Lop}}^*$.
If $f \in \mathcal{C}( \mathbb{T}^d)$, then for any $\bm{k}\in \mathbb{Z}^d$, $\lvert \widehat{f} [\bm{k}] \rvert \leq \lVert f \rVert_\infty$. Applied to $f = h - \varphi_k$ and denoting by $\bm{k}_1, \ldots , \bm{k}_{N_0}$ the frequencies of the finite-dimensional null space of ${\rm L}$ (see Proposition~\ref{prop:finitedimNL}), we deduce from \eqref{eq:gminuspsi} that
\begin{equation}
\lVert g - \psi_k \rVert_{\mathcal{C}_{\Lop},q} \leq \lVert h - \varphi_k \rVert_\infty + \sum_{n=1}^{N_0} \lvert \widehat{(h -\varphi_k)}[\bm{k}_n]\rvert \leq (1+N_0) \lVert h - \varphi_k \rVert_\infty.
\end{equation}
Since $ \lVert h - \varphi_k \rVert_\infty \rightarrow 0$, we deduce that $\lVert g - \psi_k \rVert_{\mathcal{C}_{\Lop},q}$ vanishes and the denseness is proved. Finally, the equivalence of the norms $\lVert \cdot \rVert_{\mathcal{C}_{\Lop},q}$ for $q\in [1,\infty]$ simply follows from the equivalence of the $\ell_q$-norms over $ \mathbb{R}^2$.
\end{proof}
\begin{theorem}[Generalized Riesz-Markov Representation Theorem]
\label{theo:RieszMarkovgeneralized}
Let ${\rm L}$ be a spline-admissible operator and $1 \leq p, q \leq \infty$ such that $1/ p + 1/q = 1$.
The topological dual of the Banach space $( \mathcal{C}_{\Lop}( \mathbb{T}^d) , \lVert \cdot \rVert_{\mathcal{C}_{\Lop},q} )$ is isometric to the native space $(\mathcal{M}_{\Lop}( \mathbb{T}^d) ,\lVert \cdot \rVert_{\mathcal{M}_{\Lop},p} )$; that is,
\begin{equation} \label{eq:dualidentification}
(\mathcal{C}_{\Lop}'( \mathbb{T}^d), \lVert \cdot \rVert_{\mathcal{C}_{\Lop}',q}) = (\mathcal{M}_{\Lop}( \mathbb{T}^d), \lVert \cdot \rVert_{\mathcal{M}_{\Lop},p}).
\end{equation}
Moreover, the topological dual of $\mathcal{M}_{\Lop}( \mathbb{T}^d)$ endowed with the weak* topology inherited from $\mathcal{C}_{\Lop}( \mathbb{T}^d)$ is $\mathcal{C}_{\Lop}( \mathbb{T}^d)$ itself. Hence, $(\mathcal{C}_{\Lop}( \mathbb{T}^d),\mathcal{M}_{\Lop}( \mathbb{T}^d))$ is a dual pair.
\end{theorem}
The relation \eqref{eq:dualidentification} is a representation theorem, since it identifies the topological dual of the measurement space as being the native space $\mathcal{M}_{\Lop}( \mathbb{T}^d)$. It is therefore a generalization of the Riesz-Markov representation theorem, stating that $(\mathcal{C}'( \mathbb{T}^d) , \lVert \cdot \rVert_{\mathcal{C}'} ) = (\mathcal{M}( \mathbb{T}^d), \lVert \cdot \rVert_{\mathcal{M}})$ (isometric identification). The proof of Theorem \ref{theo:RieszMarkovgeneralized} is based on the following lemma, which recalls elementary results on Banach spaces.
\begin{lemma} \label{lemma:XYZ}
Let $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$ be three Banach spaces with norm $\lVert \cdot \rVert_{\mathcal{X}}, \lVert \cdot \rVert_{\mathcal{Y}},\lVert \cdot \rVert_{\mathcal{Z}}$, respectively. Their topological duals $\mathcal{X}', \mathcal{Y}', \mathcal{Z}'$ are Banach spaces for their dual norms, denoted respectively by $\lVert \cdot \rVert_{\mathcal{X}'}, \lVert \cdot \rVert_{\mathcal{Y}'},\lVert \cdot \rVert_{\mathcal{Z}'}$.
Then, the following statements hold.
\begin{itemize}
\item For any $p \in [1,\infty]$, the product space $\mathcal{X}\times \mathcal{Y}$ is a Banach space for the norm defined, for any $(x,y) \in \mathcal{X}\times \mathcal{Y}$, by
\begin{equation}
\lVert (x,y) \rVert_{\mathcal{X}\times \mathcal{Y}} = \left( \lVert x \rVert_{\mathcal{X}}^p + \lVert y \rVert_{\mathcal{Y}}^p\right)^{1/p},
\end{equation}
with the usual adaptation for $p = \infty$.
\item The topological dual $(\mathcal{X}\times \mathcal{Y})'$ of $\mathcal{X}\times \mathcal{Y}$ associated with the dual norm is isometric to the Banach space $\mathcal{X}' \times \mathcal{Y}'$ endowed with the norm defined for $(x',y') \in \mathcal{X}' \times \mathcal{Y}'$ by
\begin{equation} \label{eq:dualproduct}
\lVert (x',y') \rVert_{\mathcal{X}'\times \mathcal{Y}'} = \left( \lVert x' \rVert_{\mathcal{X}'}^q + \lVert y' \rVert_{\mathcal{Y}'}^q\right)^{1/q},
\end{equation}
where $q \in [1,\infty]$ is the conjugate of $p$ satisfying $\frac{1}{p} + \frac{1}{q} = 1$, with the usual adaptation for $q=\infty$ (\emph{i.e.}, $p=1$) in \eqref{eq:dualproduct}.
\item Assume that $\Phi : \mathcal{X} \rightarrow \mathcal{Z}$ is an isometry. Then, the adjoint $\Phi^*$ of $\Phi$ is an isometry between $\mathcal{Z}'$ and $\mathcal{X}'$ endowed with their dual norms, and we have
\begin{equation} \label{eq:ZXPhi}
\mathcal{Z}' = (\Phi^*)^{-1} \mathcal{X}'.
\end{equation}
\end{itemize}
\end{lemma}
Lemma \ref{lemma:XYZ} is elementary and left to the reader. Note that the relation \eqref{eq:dualproduct} simply uses that the topological dual of $( \mathbb{R}^2, \lVert \cdot \rVert_p)$ is $( \mathbb{R}^2, \lVert \cdot \rVert_q)$.
\begin{proof}[Proof of Theorem \ref{theo:RieszMarkovgeneralized}]
We recall that the projector $\mathrm{Proj}_{\mathcal{N}_{\Lop}}$ is defined in \eqref{projdef}, and set $\mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} = \mathrm{Id} - \mathrm{Proj}_{\mathcal{N}_{\Lop}}$. Then, the spaces $\mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{C}( \mathbb{T}^d))$ and $\mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{M}( \mathbb{T}^d))$
inherit the Banach space structures of $\mathcal{C}( \mathbb{T}^d)$ and $\mathcal{M}( \mathbb{T}^d)$ for the restriction of the norms $\lVert \cdot \rVert_\infty$ and $\lVert \cdot \rVert_{\mathcal{M}}$, respectively.
According to the Riesz-Markov theorem, the space $\mathcal{M}( \mathbb{T}^d)$ is isometric to the topological dual of $(\mathcal{C}( \mathbb{T}^d), \lVert \cdot \rVert_\infty)$ endowed with the dual norm, and this property is transmitted to the projections of those spaces, which implies the isometric identification
\begin{equation}\label{eq:dualonprojnlprop}
(\mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{C}( \mathbb{T}^d)))' = \mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{M}( \mathbb{T}^d)).
\end{equation}
Due to Theorems \ref{theo:whatisML} and \ref{theo:whatisCL}, we have that
\begin{align}
\mathcal{C}_{\Lop}( \mathbb{T}^d) &= {\rm L}^* ( \mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{C} ( \mathbb{T}^d) ) ) \oplus \mathcal{N}_{\Lop} \quad \text{ and } \quad
\mathcal{M}_{\Lop}( \mathbb{T}^d) = {\rm L}^\dagger ( \mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{M} ( \mathbb{T}^d) ) ) \oplus \mathcal{N}_{\Lop}.
\end{align}
Moreover, the operator ${\rm L}^*$ is an isometry between $\mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{C} ( \mathbb{T}^d) )$ and $\mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{C}_{\Lop}( \mathbb{T}^d) )$, and the adjoint of its inverse is $ {\rm L^{\dagger}}$.
This can easily be seen from the definition of the norm \eqref{eq:normCL} and the fact that $\mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp}$ simply sets to zero the Fourier coefficients associated to the finitely many null space frequencies. Thanks to \eqref{eq:ZXPhi} in Lemma \ref{lemma:XYZ}, we therefore deduce the isometric identification $\left( {\rm L}^* ( \mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{C} ( \mathbb{T}^d) ) ) \right)' = {\rm L}^\dagger ( \mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{M} ( \mathbb{T}^d) ) )$. We observe moreover that $ {\rm L}^* \mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} = {\rm L}^*$ and $ {\rm L}^\dagger \mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} = {\rm L^{\dagger}}$, implying that
\begin{equation}
\left( {\rm L}^* (\mathcal{C} ( \mathbb{T}^d) ) \right)' = {\rm L}^\dagger ( \mathcal{M} ( \mathbb{T}^d) ).
\end{equation}
The finite-dimensional space $\mathcal{N}_{\Lop}$ has an orthonormal basis for the $L_2$ scalar product, given by $\{e_{\bm{k}_n}, \ n=1, \ldots, N_0\}$, with $\bm{k}_1, \ldots, \bm{k}_{N_0}$ the null space frequencies of ${\rm L}$. Hence, $\mathcal{N}_{\Lop}'$ is isometrically identified with $\mathcal{N}_{\Lop}$. Applying \eqref{eq:dualproduct}, we deduce \eqref{eq:dualidentification}. The second statement of Theorem \ref{theo:RieszMarkovgeneralized} then directly follows.
\end{proof}
\textbf{Running example ${\rm L} = {\rm D}^N$, $N\geq 1$.}
The native space of ${\rm D}^N$ is the space of functions $f$ such that ${\rm D}^N f \in \mathcal{M}( \mathbb{T}^d)$. According to Theorem \ref{theo:whatisML}, a function of this space can be written as
$f = ({\rm D}^N)^\dagger w + \alpha $ with $w \in \mathcal{M}( \mathbb{T}^d)$ such that $\widehat{w}[0] = 0$, and $\alpha \in \mathbb{R}$.
The norm of $f$ is then $\lVert f \rVert_{\mathcal{M}_{{\rm D}^N},p} = \left( \lVert w \rVert_{\mathcal{M}}^p + \lvert \alpha \rvert^p \right)^{1/p}$.
A function $g$ of the measurement space $\mathcal{C}_{{\rm D}^N}( \mathbb{T}^d)$ is of the form $g = {\rm D}^N \{ h \} + \beta$ with $h \in \mathcal{C}( \mathbb{T}^d)$ and $\beta \in \mathbb{R}$. Note that the adjoint of ${\rm D}^N$ is $(-1)^N {\rm D}^N$, but the constant $(-1)^N$ can be absorbed in $h$. The norm of $g$ is then $\lVert g \rVert_{\mathcal{C}_{{\rm D}^N},q} = \left( \lVert h \rVert_{\infty}^q + \lvert \beta \rvert^q \right)^{1/q}$.
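As an illustration, consider the two-knot spline $\rho = ({\rm D}^N)^\dagger \{ \Sha - \Sha(\cdot - \pi) \}$, which is a valid periodic $({\rm D}^N)$-spline since its weights $(1,-1)$ sum to zero. The pseudoinverse cancels the zeroth Fourier coefficient, so that $\alpha = 0$ and ${\rm D}^N \rho = \Sha - \Sha( \cdot - \pi)$. Hence,
\begin{equation*}
\lVert \rho \rVert_{\mathcal{M}_{{\rm D}^N},p} = \left( \lVert \Sha - \Sha(\cdot - \pi) \rVert_{\mathcal{M}}^p + 0 \right)^{1/p} = \lvert 1 \rvert + \lvert -1 \rvert = 2 \qquad \text{for every } p \in [1,\infty].
\end{equation*}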
\section{Periodic Representer Theorem} \label{sec:RT}
Assume that we want to reconstruct an unknown periodic function $f_0$ from its possibly noisy linear measurements $\bm{y} \approx \bm{\nu}(f_0)\in \mathbb{R}^M$. Typically, $\bm{y}$ is a random perturbation of $\bm{\nu}(f_0)$ such that $\mathbb{E} [\bm{y} ] = \bm{\nu} ( f_0 )$. This mild assumption allows us to consider additive noise models, but also more general ones. We shall not discuss the data-acquisition model further in this paper and refer to \cite[Chapter 7.5]{simeoni2020functional} for more details on this topic.
To achieve our reconstruction goal, we consider the penalized optimization problem:
\begin{equation} \label{eq:theoptibeforeRT}
\tilde{f}\in{\arg \min_f} \quad E(\bm{y}, \bm{\nu} (f ) ) + \lambda \lVert {\rm L} f \rVert_{\mathcal{M}} ,
\end{equation}
where $E(\bm{y}, \bm{\nu} (f ) )$ is a data-fidelity term that enforces the fidelity of the predicted measurements $\bm{\nu} (f )$ to the observed ones $\bm{y}$. A typical choice for $E$ is the quadratic cost function $E(\bm{y}, \bm{\nu} (f ) ) = \lVert \bm{y} - \bm{\nu}(f) \rVert_2^2$.
The regularization $ \lVert {\rm L} f \rVert_{\mathcal{M}}$ promotes functions with certain smoothness properties.
The spline-admissible operator ${\rm L}$ typically characterizes the smoothness of the reconstruction from \eqref{eq:theoptibeforeRT}, while the choice of the $\mathcal{M}$-norm is known to promote sparse reconstructions~\cite{Unser2017splines,gupta2018continuous}.
An optimizer $\tilde{f}$ of \eqref{eq:theoptibeforeRT} is expected to adequately approximate the function $f_0$.
In Section \ref{sec:constructionNativeSpace}, we have introduced the function spaces required to give a clear meaning to \eqref{eq:theoptibeforeRT}. The functions $f$ should typically live in the native space $\mathcal{M}_{\Lop}( \mathbb{T}^d)$, for which $\lVert {\rm L} f \rVert_{\mathcal{M}} < \infty$, while the measurement functionals $\nu_m$ are taken in the measurement space $\mathcal{C}_{\Lop}( \mathbb{T}^d)$.
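Before stating the representer theorem, let us mention how \eqref{eq:theoptibeforeRT} is typically approached numerically. The sketch below (our own illustrative discretization, not an algorithm taken from the cited works) restricts the knots to a finite grid: with quadratic data fidelity, the problem then becomes a finite-dimensional LASSO over the weights, which can be solved, for instance, by the iterative soft-thresholding algorithm (ISTA).
\begin{verbatim}
import numpy as np

# Sketch: grid-restricted surrogate of the continuous problem.  The columns
# of A contain the measurements of the Green's functions shifted to the
# candidate (gridded) knots, so that nu(f) = A a for a weight vector a.
# ISTA then solves  min_a  0.5 * ||y - A a||^2 + lam * ||a||_1.
def ista(A, y, lam, n_iter=500):
    a = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant
    for _ in range(n_iter):
        z = a - step * A.T @ (A @ a - y)      # gradient step on the quadratic
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return a

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 100))            # illustrative measurement matrix
y = A[:, [7, 42]] @ np.array([1.0, -0.5])     # data from two active knots
a_hat = ista(A, y, lam=0.1)                   # sparse weight estimate
\end{verbatim}
The sparsity of the recovered weights mirrors the finitely many knots of the extreme point solutions exhibited below.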
\begin{theorem}[Periodic Representer Theorem] \label{theo:RT}
Consider the following assumptions:
\begin{itemize}
\item a spline-admissible operator ${\rm L}$ with null space $\mathcal{N}_{\Lop}$ of finite dimension $N_0 \geq 0$ and pseudoinverse ${\rm L^{\dagger}}$;
\item a linearly independent family of $M \geq N_0$ linear functionals $\nu_m \in \mathcal{C}_{\Lop}( \mathbb{T}^d)$ such that $\bm{\nu}= (\nu_1,\ldots , \nu_M)$ is injective on the null space of ${\rm L}$; that is, such that the condition $\bm{\nu}(p) = \bm{0}$ for $p\in \mathcal{N}_{\Lop}$ implies that $p = 0$;
\item a cost function $E(\cdot, \cdot) : \mathbb{R}^M\times \mathbb{R}^M \rightarrow \mathbb{R}^+ \cup \{\infty\}$ such that $E(\bm{z}, \cdot)$ is a lower semi-continuous, convex, coercive, and proper function for any fixed $\bm{z} \in \mathbb{R}^M$;
\item a measurement vector $\bm{y}\in \mathbb{R}^M$; and
\item a tuning parameter $\lambda > 0$.
\end{itemize}
Then, the set of minimizers
\begin{equation}\label{eq:optiwellstated}
\mathcal{V} := \underset{f \in \mathcal{M}_{\Lop}( \mathbb{T}^d) }{\arg \min} E(\bm{y}, \bm{\nu} (f ) ) + \lambda \lVert {\rm L} f \rVert_{\mathcal{M}}
\end{equation}
is non empty, convex, and compact with respect to the weak* topology on $\mathcal{M}_{\Lop}( \mathbb{T}^d)$.
Moreover, the extreme points of \eqref{eq:optiwellstated} are periodic ${\rm L}$-splines of the form
\begin{equation} \label{eq:LopfRT}
f_{\mathrm{ext}} = \sum_{k=1}^K a_k {\rm L^{\dagger}} \{ \Sha\} ( \cdot -\bm{x}_k) + p = \sum_{k=1}^K a_k g_{{\rm L}} ( \cdot -\bm{x}_k) + p
\end{equation}
for some $\bm{x}_k \in \mathbb{T}^d$, $a_k \in \mathbb{R}$ with $\bm{\mathrm{M}} \bm{a} = \bm{0}$ where $\bm{\mathrm{M}}$ is given by \eqref{eq:matrixM}, $K \leq M$ knots, and $p \in \mathcal{N}_{\Lop}$.
\end{theorem}
The periodic representer theorem reveals the form of the solutions of the optimization problem \eqref{eq:optiwellstated} in the following sense: (i) The extreme point solutions are periodic ${\rm L}$-splines with at most $M$ knots, with $M$ the number of measurements. (ii) The other spline solutions are finite convex combinations of the extreme points. (iii) Any solution is the (weak*) limit of spline solutions.
Note that an extreme point solution is such that ${\rm L} f_{\mathrm{ext}} = \sum_{k=1}^K a_k \Sha( \cdot - \bm{x}_k) $
is a finite sum of Dirac combs. \\
\textbf{Running example ${\rm L} = {\rm D}^N$, $N\geq 1$.} Theorem \ref{theo:RT} can be applied to the periodic spline-admissible operator ${\rm D}^N$. The second condition is then equivalent to the existence of $1 \leq m \leq M$ such that $\langle \nu_m , 1 \rangle = \widehat{\nu}_m [0] \neq 0$, or equivalently, the existence of a linear functional with nonzero mean, which is a mild requirement.
Then, under the conditions of Theorem \ref{theo:RT}, the extreme points of the solution set $\mathcal{V}$ are periodic ${\rm D}^N$-splines.
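Numerically, this nonzero-mean condition is immediate to test from samples of the measurement functionals. A small sketch follows (the functionals are illustrative choices of ours).
\begin{verbatim}
import numpy as np

# Sketch: for L = D^N, the null space consists of the constants, so the
# injectivity of nu on it amounts to some functional having nonzero mean.
x = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
nus = [np.cos(x), np.sin(x), np.ones_like(x)]  # illustrative functionals
means = [nu.mean() for nu in nus]              # <nu_m, 1> up to a 2*pi factor
print(any(abs(m) > 1e-12 for m in means))      # True: the condition holds
\end{verbatim}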
\\
The seminal work of Fisher and Jerome~\cite{Fisher1975} was developed over compact domains, and we will see that Theorem \ref{theo:RT} can be deduced from their main result, which we now recall with our notation, in a slightly adapted form \cite[Theorem 1]{Fisher1975}.
We fix a set of distinct Fourier frequencies $\{\bm{k}_n, \,n=1,\ldots , N_0\}$ and introduce $\mathcal{N} = \mathrm{Span}\{e_{\bm{k}_n}, n=1,\ldots , N_0\}$ which is a space of dimension $N_0$. We define the orthogonal projector $\mathrm{Proj}_{\mathcal{N}}$ over $\mathcal{N}$ as in \eqref{projdef}, and set $\mathrm{Proj}_{\mathcal{N}^\perp} = \mathrm{Id} - \mathrm{Proj}_{\mathcal{N}}$. Then, the spaces $\mathrm{Proj}_{\mathcal{N}^\perp} (\mathcal{C}( \mathbb{T}^d))$ and $\mathrm{Proj}_{\mathcal{N}^\perp} (\mathcal{M}( \mathbb{T}^d))$
inherit the Banach space structures of $\mathcal{C}( \mathbb{T}^d)$ and $\mathcal{M}( \mathbb{T}^d)$ for the restriction of the norms $\lVert \cdot \rVert_\infty$ and $\lVert \cdot \rVert_{\mathcal{M}}$, respectively. Moreover, as we have seen in the proof of Theorem \ref{theo:RieszMarkovgeneralized}, $(\mathrm{Proj}_{\mathcal{N}^\perp} (\mathcal{C}( \mathbb{T}^d)))' = \mathrm{Proj}_{\mathcal{N}^\perp} (\mathcal{M}( \mathbb{T}^d))$.
\begin{lemma}[Fisher-Jerome Theorem] \label{theo:FJ}
Let $\mathcal{N} = \mathrm{Span}\{e_{\bm{k}_n}, n=1,\ldots , N_0\}$ be as above and $M \geq N_0$.
Let $(f_m,q_m) \in \mathrm{Proj}_{\mathcal{N}^\perp} (\mathcal{C}( \mathbb{T}^d)) \times \mathcal{N}$, $m=1,\ldots , M$, be a set of linearly independent couples of functions. We assume moreover that $\bm{q}(p) = (\langle q_1, p \rangle , \ldots , \langle q_M, p \rangle ) = \bm{0}$ if and only if $p=0$ for $p\in \mathcal{N}$.
Let $\bm{z}_0 \in \mathbb{R}^M$ be such that there exists $(w, p) \in \mathrm{Proj}_{\mathcal{N}^\perp} (\mathcal{M}( \mathbb{T}^d))\times \mathcal{N}$ with
\begin{equation} \label{eq:conditionwp}
\bm{\mu}(w,p) := (\langle f_1 , w \rangle + \langle q_1, p \rangle , \ldots , \langle f_M , w \rangle + \langle q_M, p \rangle ) = \bm{z}_0.
\end{equation}
Then, the set of minimizers
\begin{equation} \label{eq:theFJadapted}
\underset{\bm{\mu}(w,p) = \bm{z}_0}{\arg \min} \lVert w \rVert_{\mathcal{M}}
\end{equation}
is non empty, convex, weak* compact in $\mathrm{Proj}_{\mathcal{N}^\perp} (\mathcal{M}( \mathbb{T}^d)) \times \mathcal{N}$, and its extreme points are of the form
\begin{equation} \label{eq:extremewp}
(w_{\mathrm{ext}}, p_{\mathrm{ext}}) = \left( \sum_{k=1}^K a_k \Sha( \cdot - \bm{x}_k) , p_{\mathrm{ext}}\right),
\end{equation}
where $p_{\mathrm{ext}} \in \mathcal{N}$, $a_k \in \mathbb{R}$ with $\bm{\mathrm{M}}\bm{a} = \bm{0}$ where $\bm{\mathrm{M}}$ is defined in \eqref{eq:matrixM}, $\bm{x}_k \in \mathbb{T}^d$, and $K \leq M$.
\end{lemma}
We call condition \eqref{eq:conditionwp} the \emph{feasibility assumption}: it means that the measurements $\bm{z}_0$ can be achieved, and it is obviously needed for \eqref{eq:theFJadapted} to admit a solution. Lemma \ref{theo:FJ} is an adaptation of \cite[Theorem 1]{Fisher1975}, where the authors work with a compact metric space $X$, which we specialize to $X = \mathbb{T}^d$, and where we simply work with $\mathrm{Proj}_{\mathcal{N}^\perp} (\mathcal{C}( \mathbb{T}^d))$ instead of $\mathcal{C}(X)$.
The proof is identical. For a more recent treatment, we refer the reader to \cite[Theorem 7]{Unser2017splines}, which considers the domain $ \mathbb{R}^d$.
Note that the relation $\bm{\mathrm{M}}\bm{a} = \bm{0}$ comes from the fact that $w_{\mathrm{ext}} \in \mathrm{Proj}_{\mathcal{N}^\perp} (\mathcal{M}( \mathbb{T}^d))$ (see Proposition \ref{prop:Mnkmatrix}).
\begin{proof}[Proof of Theorem \ref{theo:RT}]
The proof has two parts. We first prove that the solution set $\mathcal{V}$ in \eqref{eq:optiwellstated} is non empty, convex, and weak* compact, and then obtain the form of the extreme points using Lemma \ref{theo:FJ}. \\
\textit{Properties of $\mathcal{V}$.}
The first part of the proof is classical; we briefly mention the key steps, since it follows exactly the lines of the proof of \cite[Theorem 4]{gupta2018continuous}.
For $f \in \mathcal{M}_{\Lop}( \mathbb{T}^d)$, we define $J(f) := E(\bm{y}, \bm{\nu} (f ) ) + \lambda \lVert {\rm L} f \rVert_{\mathcal{M}}$. The functional $J : \mathcal{M}_{\Lop}( \mathbb{T}^d) \rightarrow \mathbb{R}^+ \cup \{\infty\}$ is proper, convex, coercive, and weak*-lower semi-continuous (see \cite[Appendix B]{gupta2018continuous} for a full proof).
Then, we are exactly in the conditions of \cite[Proposition 8]{gupta2018continuous}, revealing that $\mathcal{V} = \arg\min J$ is effectively non empty, convex, and weak* compact.
As such, from the Krein-Milman theorem \cite[p. 75]{RudinFA}, the set $\mathcal{V}$ admits extreme points and is the weak* closure of the convex hull of its extreme points. \\
\textit{Form of the extreme points.}
We fix an extreme point solution $f^* \in \mathcal{V}$, set $\bm{z}_0 := \bm{\nu}(f^*) \in \mathbb{R}^M$, and consider the optimization problem
\begin{equation} \label{eq:newpb}
\tilde{\mathcal{V}} := \underset{f \in \mathcal{M}_{\Lop}( \mathbb{T}^d), \ \bm{\nu}(f) = \bm{z}_0 }{\arg \min} \lVert {\rm L} f \rVert_{\mathcal{M}}.
\end{equation}
Clearly, we have that $\tilde{\mathcal{V}} \subset \mathcal{V}$, since $\lVert {\rm L} g^* \rVert_{\mathcal{M}} = \lVert {\rm L} f^* \rVert_{\mathcal{M}}$ and $\bm{\nu}(g^*) = \bm{\nu}(f^*)$ for any $g^* \in \tilde{\mathcal{V}}$. Moreover, $f^*$ is an extreme point of $\tilde{\mathcal{V}}$ (being an extreme point of the bigger set $\mathcal{V}$). We can therefore focus on the optimization problem \eqref{eq:newpb} and show that its extreme points have the expected representation.
With Theorem \ref{theo:whatisML}, we know that any $f \in \mathcal{M}_{\Lop}( \mathbb{T}^d)$ has a unique representation as $f = {\rm L^{\dagger}} w + p$ with $w \in \mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{M}( \mathbb{T}^d))$ and $p \in \mathcal{N}_{\Lop}$. In particular, we have the equivalence
\begin{equation} \label{eq:equivalentproblems}
f^* \in \tilde{\mathcal{V}} \Longleftrightarrow (w^* , p^*) \in \mathcal{W} := \underset{(w,p) \in\mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{M}( \mathbb{T}^d))\times\mathcal{N}_{\Lop}, \ \bm{\nu}({\rm L^{\dagger}} w + p) = \bm{z}_0}{\arg \min} \lVert w \rVert_{\mathcal{M}},
\end{equation}
where $f^* = {\rm L^{\dagger}} w^* + p^*$, $w^* \in \mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{M}( \mathbb{T}^d))$ and $p^* \in \mathcal{N}_{\Lop}$. The equivalence also holds for the extreme points of both problems: $f_{\mathrm{ext}}$ is an extreme point of $\tilde{\mathcal{V}}$ if and only if $(w_{\mathrm{ext}}, p_{\mathrm{ext}})$ is an extreme point of $\mathcal{W}$, with $f_{\mathrm{ext}} = {\rm L^{\dagger}} w_{\mathrm{ext}} + p_{\mathrm{ext}}$.
We observe that, for any $f = {\rm L^{\dagger}} w + p$ with $(w,p) \in \mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp} (\mathcal{M}( \mathbb{T}^d)) \times \mathcal{N}_{\Lop}$, we have
\begin{equation}
\nu_m(f) = \langle \nu_m , {\rm L^{\dagger}} w \rangle + \langle \nu_m , p \rangle
= \langle ({\rm L^{\dagger}})^* \nu_m , w \rangle + \langle \mathrm{Proj}_{\mathcal{N}_{\Lop}} \nu_m , p \rangle.
\end{equation}
Then, $f_m := ({\rm L^{\dagger}})^* \nu_m \in \mathrm{Proj}_{\mathcal{N}_{\Lop}^\perp}(\mathcal{C}( \mathbb{T}^d))$ and $q_m := \mathrm{Proj}_{\mathcal{N}_{\Lop}} \nu_m \in \mathcal{N}_{\Lop}$. We define $\bm{\mu}$ as in \eqref{eq:conditionwp}.
We are then in the conditions of Lemma \ref{theo:FJ} with $\mathcal{N} = \mathcal{N}_{\Lop}$ for those functionals.
Indeed, the condition $\bm{q} (p) = \bm{0}$ if and only if $p=0$ for $p\in \mathcal{N}_{\Lop}$ comes from the second assumption in Theorem \ref{theo:RT}.
Note that the feasibility condition is satisfied because $\tilde{\mathcal{V}}$, and hence $\mathcal{W}$, is non empty.
We deduce that $\tilde{\mathcal{V}}$ inherits the properties of $\mathcal{W}$, and is therefore convex and weak* compact in $\mathcal{M}_{\Lop}( \mathbb{T}^d)$.
Moreover, the extreme points are such that
$f_{\mathrm{ext}} = {\rm L^{\dagger}} w_{\mathrm{ext}} + p_{\mathrm{ext}}$, where $(w_{\mathrm{ext}}, p_{\mathrm{ext}})$ are given by \eqref{eq:extremewp}.
This shows that $f_{\mathrm{ext}}$ has the expected form, the relation $\bm{\mathrm{M}} \bm{a} = \bm{0}$ coming from the condition on $\bm{a}$ in \eqref{eq:extremewp}. Theorem \ref{theo:RT} is therefore proved.
\end{proof}
\section{Pseudo-Differential Periodic Spline-Admissible Operators} \label{sec:differentialop}
The goal of this section is to provide examples of periodic spline-admissible operators and to highlight their main properties.
The considered operators are pseudo-differential: they have a \emph{roughening} behaviour---\emph{i.e.}, they reduce the smoothness of the input function. This roughening behaviour is quantified by the spectral growth $\gamma > 0$ (see Definition \ref{def:growth}).
We include both univariate (ambient dimension $d=1$, Section \ref{sec:univariate}) and multivariate ($d \geq 1$, Section \ref{sec:multivariate}) spline-admissible operators. Python routines for efficiently generating and manipulating the various multivariate periodic ${\rm L}$-splines considered in this section are available on the public GitHub repository \cite{periodispline2020}.
The code is compatible with any ambient dimension $d \geq 1$, and only requires specifying the knots and the amplitudes of the splines. We only use it to represent periodic splines with a minimal number of knots (one or two, depending on the operator), but it can be used for any number of knots.
Our periodic spline generator leverages truncated Fourier series expansions, implemented efficiently via FFTs using the routines from the GitHub repository \cite{pyffs}. We moreover make use of Fej\'er kernels for a fast convergence of the truncated Fourier series to the spline function.
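To fix ideas, we also include a self-contained sketch of the Fourier-domain synthesis (a minimal reimplementation of ours with illustrative normalizations, independent of the cited repositories), here for ${\rm L} = {\rm D}^2$ in dimension $d=1$.
\begin{verbatim}
import numpy as np

# Sketch: evaluate a periodic L-spline from its innovations by truncating
# its Fourier series, here for L = D^2 on [0, 2*pi), whose pseudoinverse
# has Fourier sequence 1/(ik)^2 for k != 0 and 0 for k = 0.  An overall
# constant normalization is omitted (the figures below are normalized).
def periodic_spline(knots, weights, n_coeffs=512, n_grid=2048, fejer=True):
    ks = np.arange(-n_coeffs, n_coeffs + 1)
    # Coefficients of sum_k a_k Sha(. - x_k): sum_k a_k exp(-i k x_k)
    w_hat = sum(a * np.exp(-1j * ks * xk) for a, xk in zip(weights, knots))
    pinv_hat = np.zeros(ks.size, dtype=complex)
    pinv_hat[ks != 0] = 1.0 / (1j * ks[ks != 0]) ** 2
    f_hat = pinv_hat * w_hat
    if fejer:                                 # Fejer weights tame the truncation
        f_hat *= 1.0 - np.abs(ks) / (n_coeffs + 1)
    x = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
    return x, np.real(np.exp(1j * np.outer(x, ks)) @ f_hat)

# Two knots with weights (1, -1): M a = 0 holds since the weights sum to 0.
x, f = periodic_spline(knots=[0.0, np.pi], weights=[1.0, -1.0])
\end{verbatim}
Changing the pseudoinverse sequence adapts this sketch to the other operators of this section.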
\subsection{Univariate Splines-admissible Operators} \label{sec:univariate}
In ambient dimension $d=1$, we consider classical differential operators and their fractional versions.
Table \ref{table:operators1d} provides the list of the considered univariate spline-admissible operators, together with their Fourier sequences and their null spaces, described via the set $N_{\rm L}$ of null space frequencies. We recall that $\mathcal{N}_{\Lop} = \mathrm{Span} \{ e_k, \ k \in N_{\rm L} \}$.
\begin{table*}[h!]
\centering
\caption{Families of univariate spline-admissible operators}
\begin{tabular}{cccccc}
\hline
\hline\\
Spline's type & Operator & Parameter & $\widehat{L}[k]$ & Spectral growth & $N_{\rm L}$
\\
\hline\\[-1ex]
Polynomial splines & ${\rm D}^N$ & $N\in \mathbb{N}$ & $ ( \mathrm{i} k)^N $ & $N$ & $0$ \\
Exponential splines & ${\rm D} + \alpha \mathrm{Id}$ & $\alpha \notin \mathrm{i} \mathbb{Z}$ & $ \mathrm{i} k + \alpha$ & $1$ & $\emptyset$ \\
& ${\rm D} - \mathrm{i} k_0 \mathrm{Id}$ & $k_0 \in \mathbb{Z}$ & $ \mathrm{i} (k - k_0)$ & $1$ & $k_0$ \\
& ${\rm D}^2 + k_0^2 \mathrm{Id}$ & $k_0 \in \mathbb{Z}$ & $k_0^2 - k^2$ & $2$ & $k_0, -k_0$ \\
Fractional splines & ${\rm D}^\gamma$ & $\gamma > 0$ & $( \mathrm{i} k)^\gamma$ & $\gamma$ & $0$ \\
Fractional \\
exponential splines & $({\rm D} + \alpha \mathrm{Id})^\gamma$ & $\gamma > 0$, $\alpha \in \mathbb{R}$ & $( \mathrm{i} k + \alpha)^\gamma$ & $\gamma$ & $\emptyset$ \\
Fractional \\
polyharmonic splines & $(-\Delta)^{\gamma/2}$ & $\gamma > 0$ & $\lvert k \rvert^\gamma$ & $\gamma$ & $0$ \\
Sobolev splines & $ (\alpha^2 \mathrm{Id} - \Delta)^{\gamma/2}$ & $\alpha \neq 0$, $\gamma \in \mathbb{R}$ & $(\alpha^2 + k^2)^{\gamma/2}$ & $\gamma$ & $\emptyset$ \\
Mat\'ern splines & $\mathrm{M}_\epsilon^\beta$ & $\epsilon > 0$ & see Definition \ref{def:matern} & $2(\beta - 1/2) $ & $\emptyset$ \\
& & $\beta \in \mathbb{N}_{\geq 1} + 1/2$ & & & \\
Wendland splines & $\mathrm{W}_{\epsilon,\nu}^\beta$ & $\epsilon > 0$, $\nu \in \mathbb{N}$ & see Definition \ref{def:wendland} & $2(\beta - 1/2)$ & $\emptyset$ \\
& & $\beta \in \mathbb{N}_{\geq 2} $ & & & \\
\hline
\hline
\end{tabular} \label{table:operators1d}
\end{table*}
\paragraph{Periodic Polynomial Splines.}
We already considered the derivative operator ${\rm L} = {\rm D}^N$ of order $N\geq 1$, for which $({\rm D}^N)$-splines are periodic piecewise-polynomial functions of degree at most $(N-1)$ that, for $N\geq 2$, are $(N-2)$ times continuously differentiable at their junctions.
We have seen that a non constant $({\rm D}^N)$-spline has at least two knots, and that the Green's function of ${\rm D}^N$ is not a periodic $({\rm D}^N)$-spline. However, the function
\begin{equation}
\rho_{{\rm D}^N} (x) =(\mathrm{D}^N)^\dagger \{\Sha\}(x) - (\mathrm{D}^N)^\dagger \{\Sha\}(x-\pi) = (\mathrm{D}^N)^{\dagger} \{ \Sha - \Sha(\cdot - \pi) \} (x)
\end{equation}
is a $({\rm D}^N)$-spline, its weights $(a_1, a_2) = (1, -1)$ satisfying the system of equations $\bm{\mathrm{M}} \bm{a} = \bm{0}$ of Proposition \ref{prop:Mnkmatrix}.
We represent $\rho_{{\rm D}^N}$ over two periods in Figure \ref{fig:Dgamma} for integer values of the parameter $N = \gamma$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{fractional_derivative.pdf}
\caption{Periodic ${\rm L}$-splines $\rho_{{\rm D}^\gamma}$ defined in \eqref{eq:rhoforDgamma} associated to ${\rm L} = \mathrm{D}^\gamma$ over two periods for various values of $\gamma > 0$.
The splines are normalized so that the maximum value is $1$.}
\label{fig:Dgamma}
\end{figure}
\paragraph{Periodic Exponential Splines.}
We fix $\alpha \in \mathbb{C}$ and consider the operator ${\rm L} = ({\rm D} + \alpha \mathrm{Id})^N$.
We distinguish two cases, depending on whether $\alpha$ is in $\mathrm{i} \mathbb{Z}$ or not.
Assume first that $\alpha \notin \mathrm{i} \mathbb{Z}$. Then, $({\rm D} + \alpha \mathrm{Id})^N$ has a trivial and therefore finite dimensional null space ($N_0=0$) and is invertible, with Fourier sequence $(\mathrm{i} k + \alpha)^{N}$, which corresponds to a spectral growth of $\gamma = N$. It is therefore spline-admissible.
A periodic $({\rm D} + \alpha \mathrm{Id})^N$-spline $f$ with knots $x_1, \ldots , x_K$ is a piecewise-exponential-polynomial. More precisely, $f$ is an exponential-polynomial of the form $x \mapsto P(x) \exp(- \alpha x)$, with $P$ a polynomial of degree at most $(N-1)$, on each interval $[x_k , x_{k+1}]$, $k = 1, \ldots, K$ (with the convention that $x_{K+1} = x_1 + 2\pi$). For $N \geq 2$, $f$ moreover has continuous derivatives up to order $(N-2)$.
Moreover, the Green's function of $({\rm D} + \alpha \mathrm{Id})$ satisfies, for $x \in \mathbb{T}=[0,2\pi)$, $g_{({\rm D} + \alpha \mathrm{Id})} (x) = ({\rm D} + \alpha \mathrm{Id})^{-1} \{ \Sha \} (x) = \frac{\mathrm{e}^{- \alpha x}}{1 - \mathrm{e}^{-2\pi\alpha}}$, or equivalently, for $x\in \mathbb{R}$,
\begin{equation} \label{eq:formulaforDalphaIdGreen}
g_{({\rm D} + \alpha \mathrm{Id})} (x) = \frac{\mathrm{e}^{- \alpha (x - 2\pi \lfloor x/(2\pi) \rfloor )}}{1 - \mathrm{e}^{-2\pi\alpha}},
\end{equation}
where $\lfloor t \rfloor \in \mathbb{Z}$ denotes the largest integer smaller than or equal to $t$.
To see this, it suffices to apply $({\rm D} + \alpha \mathrm{Id})$ to both sides of \eqref{eq:formulaforDalphaIdGreen}. Similar formulas can be obtained for any $N \geq 1$.
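The closed form \eqref{eq:formulaforDalphaIdGreen} can also be checked numerically against a truncated Fourier series. A minimal sketch follows; the $1/(2\pi(\alpha + \mathrm{i}k))$ coefficients correspond to the standard Fourier coefficients of the right-hand side of \eqref{eq:formulaforDalphaIdGreen} and are an assumption of our normalization.
\begin{verbatim}
import numpy as np

# Sketch: compare the closed form of the Green's function of (D + alpha*Id)
# with its truncated Fourier series, whose standard Fourier coefficients
# are assumed to be 1 / (2*pi*(alpha + i*k)).
alpha, K = 3.0, 4000
ks = np.arange(-K, K + 1)
x = np.linspace(0.1, 2 * np.pi - 0.1, 256)   # stay away from the knot at 0
closed = np.exp(-alpha * x) / (1 - np.exp(-2 * np.pi * alpha))
series = np.real(np.exp(1j * np.outer(x, ks)) @ (1.0 / (alpha + 1j * ks)))
series /= 2 * np.pi
print(np.max(np.abs(closed - series)))       # small away from the knot
\end{verbatim}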
We represent the periodic exponential splines $({\rm D} + \alpha \mathrm{Id})^{-\gamma} \{ \Sha \}$ for $\alpha \in \{1, 3\}$ and various values of $\gamma > 0$ in Figure \ref{fig:expgamma}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{exponential_gamma.pdf}
\caption{Periodic Green's functions $g_{\rm L}$ associated to ${\rm L} = (\mathrm{D} + \alpha \mathrm{Id})^\gamma$ given by \eqref{eq:formulaforDalphaIdGreen} over two periods for various values of $\gamma > 0$ and $\alpha = 1$ (top) or $\alpha = 3$ (bottom). The splines are normalized so that the maximum value is $1$.}
\label{fig:expgamma}
\end{figure}
Assume now that $\alpha = \mathrm{i} k_0 \in \mathrm{i} \mathbb{Z}$. In order to define a real operator, we consider instead ${\rm L} = ({\rm D}^2 + k_0^2 \mathrm{Id})^N = ({\rm D} - \mathrm{i} k_0 \mathrm{Id})^N ({\rm D} + \mathrm{i} k_0 \mathrm{Id})^N$. This defines a spline-admissible operator whose null space of dimension $N_0 = 2$ is generated by $\cos(k_0 \cdot)$ and $\sin(k_0 \cdot)$, and whose pseudo-inverse has Fourier sequence $\One_{|k| \neq |k_0|} \cdot ( k_0^2 - k^2)^{-N}$.
More generally, we can consider ${\rm L} = P({\rm D})$ with $P = X^N + a_{N-1} X^{N-1} + \cdots + a_0$ a polynomial function. By decomposing $P({\rm D}) = \prod_{n=1}^{N} ({\rm D} - \alpha_n \mathrm{Id})$ with $N$ the degree and $\alpha_n$ the complex roots of $P$,
the general case reduces to the convolution between the periodic polynomial and/or periodic exponential splines considered above. The corresponding spectral growth is again $\gamma = N$.
\paragraph{Periodic Fractional Splines.}
There are several ways of considering fractional versions of integro-differential operators~\cite{samko1993fractional}.
In our case, we follow the traditional approach for the periodic setting and consider Weyl's fractional derivative~\cite[Chapter XII, Section 8]{zygmund2002trigonometric}.
For $\gamma > 0$, we define the operator ${\rm D}^{\gamma}$ from its Fourier sequence given by
\begin{equation}
\widehat{D^\gamma}[k] = ( \mathrm{i} k)^\gamma = \lvert k \rvert^\gamma \exp\left( \frac{ \mathrm{i} \pi \gamma \, \mathrm{sign}(k)}{2} \right).
\end{equation}
When $\gamma = N \in \mathbb{N}$, we recover the classical $N$th order derivative.
The fractional derivative operator ${\rm D}^\gamma$ is spline-admissible.
Its null space is made of the constant functions. The pseudoinverse $({\rm D}^{\gamma})^\dagger$ of ${\rm D}^{\gamma}$ has Fourier sequence $\One_{k\neq 0} / (\mathrm{i} k)^\gamma$. Clearly, this corresponds to a spectral growth of $\gamma$.
As for the $N$th order derivative, a non constant periodic spline has at least two knots. In Figure \ref{fig:Dgamma}, we represent the function
\begin{equation} \label{eq:rhoforDgamma}
\rho_{{\rm D}^\gamma} (x) =(\mathrm{D}^\gamma)^\dagger \{\Sha\}(x) - (\mathrm{D}^\gamma)^\dagger \{\Sha\}(x-\pi) = (\mathrm{D}^\gamma)^{\dagger} \{ \Sha - \Sha(\cdot - \pi) \} (x)
\end{equation}
for various values of $\gamma > 0$.
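In the synthesis sketch given at the beginning of this section, the fractional case only requires changing the pseudoinverse sequence; note that NumPy's principal branch of the complex power matches Weyl's definition above.
\begin{verbatim}
import numpy as np

# Fractional variant of the earlier sketch: pseudoinverse sequence of
# D^gamma.  The principal branch of the complex power agrees with
# |k|^gamma * exp(i * pi * gamma * sign(k) / 2).
gamma, n_coeffs = 1.5, 512
ks = np.arange(-n_coeffs, n_coeffs + 1)
pinv_hat = np.zeros(ks.size, dtype=complex)
pinv_hat[ks != 0] = 1.0 / (1j * ks[ks != 0]) ** gamma
\end{verbatim}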
We also consider the fractional versions of the exponential splines, corresponding to the operator ${\rm L} = (\mathrm{D}+ \alpha \mathrm{Id})^\gamma$ with $\alpha \in \mathbb{R}$ and $\gamma >0$. In Figure \ref{fig:expgamma}, we represent such functions for various values of $\gamma$ and for $\alpha = 1$ and $\alpha = 3$.
\paragraph{Periodic Polyharmonic Fractional Splines.}
The periodic fractional operator $(-\Delta)^{\gamma/2}$ is defined for $\gamma > 0$ via its Fourier sequence $\widehat{(-\Delta)^{\gamma/2}} [k] = \lvert k \rvert^\gamma$.
Its impact on the smoothness of the input function is identical to that of the fractional derivative ${\rm D}^\gamma$ (for even integers $\gamma = 2n \geq 2$, one actually has $(-\Delta)^{n} = (-1)^n {\rm D}^{2n}$), but it leads to different notions of periodic splines and admits, contrary to the fractional derivative, non-separable multivariate extensions (see Section \ref{sec:multivariate}).
The operator $(-\Delta)^{\gamma/2}$ is spline-admissible, with a $1$-dimensional null space made of constant functions. In Figure \ref{fig:Lapgamma}, we plot the function
\begin{equation} \label{eq:splinepolyharm}
\rho_{(-\Delta)^{\gamma/2}} (x) =((-\Delta)^{\gamma/2})^\dagger \{\Sha\}(x) - ((-\Delta)^{\gamma/2})^\dagger \{\Sha\}(x-\pi) = ((-\Delta)^{\gamma/2})^{\dagger} \{ \Sha - \Sha(\cdot - \pi) \} (x)
\end{equation}
for different values of $\gamma>0$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{fractional_laplace.pdf}
\caption{Periodic ${\rm L}$-splines \eqref{eq:splinepolyharm} associated to ${\rm L} = (-\Delta)^{\gamma/2}$ over two periods for various values of $\gamma > 0$. The splines are normalized so that the maximum value is $1$.}
\label{fig:Lapgamma}
\end{figure}
\paragraph{Periodic Sobolev Splines.}
Fix $\alpha > 0$ and $\gamma > 0$.
We consider the periodic Sobolev operator ${\rm L}_{\gamma,\alpha} = (\alpha^2 \mathrm{Id} - \Delta)^{\gamma/2}$, whose Fourier sequence is given by $\widehat{L}_{\gamma,\alpha} [{k}] = (\alpha^2 +k^2 )^{\gamma/2}$.
Then, ${\rm L}_{\gamma,\alpha}$ has a trivial null space and admits a periodic inverse operator: it is therefore spline-admissible. Moreover, it has a spectral growth of $\gamma$.
We represent the periodic Green's function $g_{{\rm L}_{\gamma,\alpha}} = {\rm L}^{-1}_{\gamma,\alpha} \Sha$ in Figure \ref{fig:sobolev1d}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{sobolev.pdf}
\caption{Periodic Green's functions associated to ${\rm L} = ( \alpha^2 \mathrm{Id} -\Delta)^{\gamma/2}$ over two periods for various values of $\gamma > 0$ and $\alpha\in \mathbb{R}$. The splines are normalized so that the maximum value is $1$.}
\label{fig:sobolev1d}
\end{figure}
\paragraph{Spline-admissible Operators via their Green's Functions.}
The previous spline-admissible operators have been defined via their Fourier sequences. This makes it convenient to readily identify their null space and their spectral growth (see Definition \ref{def:growth}), but it also has some limitations.
We now propose an alternative construction, which has been described more extensively in \cite[Chapter 8]{simeoni2020functional}, for which the operator is specified from the construction of its Green's function. In particular, the operators we will consider have the following desirable properties.
\begin{itemize}
\item They are invertible, self-adjoint, and symmetric in the sense that $g_{{\rm L}} (x) = g_{{\rm L}} (-x)$ for any $x \in \mathbb{T}$.
\item They are well-localized: $g_{\rm L}( \cdot - x_0)$ is concentrated around $x_0$. This is especially appealing for applications: it can be leveraged in practice to design well-conditioned and parsimonious discretization schemes for the gTV-penalized optimization problem \eqref{eq:optiwellstated} (see \cite[Chapter 8]{simeoni2020functional} for more details).
\item Their Green's function $g_{\rm L}$ admits a closed form expression in spatial domain.
\item They have a well-identified spectral growth.
\end{itemize}
The Sobolev operators typically share these properties, with the exception of the third one. Moreover, the localization, which is visible in Figure \ref{fig:sobolev1d} for large values of $\alpha>0$, can be further improved.
The general principle for constructing the Green's functions and their corresponding periodic operator is as follows.
Consider a function $G^\beta : \mathbb{R}^+ \rightarrow \mathbb{R}$, where $\beta > 1$ will play the role of a smoothness parameter. For $\epsilon > 0$, we set
\begin{equation}
g^\beta_\epsilon(x)=G^\beta\left(\epsilon^{-1}\sqrt{2-2\cos x }\right),\qquad \forall x\in \mathbb{T}.
\label{restrictions_rbf}
\end{equation}
Note that, upon identification of $\mathbb{T}=[0,2\pi)$ with the circle $\mathbb{S}^1=\{\bm{x}\in \mathbb{R}^2: \|\bm{x}\|=1\}\subset \mathbb{R}^2,$ it is possible to interpret \eqref{restrictions_rbf} as the restriction on $\mathbb{S}^1$ of the radial function $\bm{x} \mapsto G^\beta( \epsilon^{-1} \lVert \bm{x}\rVert)$.
The following lemma is a reformulation of \cite[Lemma 2.1]{gia2012multiscale}, where the authors consider the general case of the $d$-dimensional sphere, which we particularize to $d=1$ with our notation. Consider a function $G^\beta : \mathbb{R}^+ \rightarrow \mathbb{R}$ defining a radial function $G^\beta(\|\cdot\|) : \mathbb{R}^2 \rightarrow \mathbb{R}$.
Then, the Fourier transform of $G^\beta(\|\cdot\|)$ is itself radial, and given by $\widehat{G}^\beta( \lVert \cdot\rVert): \mathbb{R}^2\rightarrow \mathbb{R}$, where $\widehat{G}^\beta: \mathbb{R}_+\rightarrow \mathbb{R}$ is the \emph{Hankel transform of order zero} of ${G}^\beta$: $\widehat{G}^\beta(p):=\int_0^{+\infty} r {G}^\beta(r) J_0(rp) \, \mathrm{d}r, \ p\in \mathbb{R}_+.$
Then, we say that the radial function $G^\beta(\|\cdot\|)$ \emph{reproduces} the Sobolev space
\begin{equation}\label{eq:soboR2}
\mathcal{H}_2^{\beta}( \mathbb{R}^2)=\left\{f\in\mathcal{S}'( \mathbb{R}^2) , \quad (\mbox{Id}-\Delta_{ \mathbb{R}^2})^{\beta/2}f\in\mathcal{L}_2( \mathbb{R}^2)\right\},
\end{equation}
if $\widehat{G}^\beta$ is strictly positive and the bilinear map $$(f, g) \mapsto \int_{ \mathbb{R}^2} \frac{ \widehat{f}(\bm{\omega} )\overline{\widehat{g}(\bm{\omega})}}{\widehat{G}^\beta( \lVert \bm{\omega}\rVert)} \mathrm{d} \bm{\omega},$$ is a well-defined inner product on $\mathcal{H}_2^{\beta}( \mathbb{R}^2)$, whose induced norm is equivalent to the canonical Hilbertian norm $\|(\mbox{Id}-\Delta_{ \mathbb{R}^2})^{\beta/2}f\|_2$ on $\mathcal{H}_2^{\beta}( \mathbb{R}^2)$. If $G^\beta(\|\cdot\|)$ does \emph{reproduce} the Sobolev space $\mathcal{H}_2^{\beta}( \mathbb{R}^2)$, we have in particular the set equality:
\begin{equation*}
\mathcal{H}_2^{\beta}( \mathbb{R}^2) =\left\{ f \in \mathcal{S}'( \mathbb{R}^2), \ \int_{ \mathbb{R}^2} \frac{\lvert \widehat{f}(\bm{\omega} ) \rvert^2}{\widehat{G}^\beta( \lVert \bm{\omega}\rVert)} \mathrm{d} \bm{\omega} < \infty \right\}.
\end{equation*}
\begin{lemma} \label{lemma:gia}
Let $\beta > 1$ and $G^\beta(\|\cdot\|): \mathbb{R}^2 \rightarrow \mathbb{R}$ be a radial function reproducing $\mathcal{H}_2^\beta( \mathbb{R}^2)$. Then, for each $\epsilon > 0$, there exist constants $0< c_1 \leq c_2 < \infty$ such that
\begin{equation}
c_1(1+\epsilon | k |)^{-2(\beta-1/2)}\leq \widehat{g}_\epsilon^\beta[k] \leq c_2(1+\epsilon | k |)^{-2(\beta-1/2)}, \quad \forall k\in \mathbb{Z},
\label{tight_bound}
\end{equation}
where the $ \widehat{g}_\epsilon^\beta[k]$ are the Fourier coefficients of the periodic function $g_\epsilon^\beta$ in \eqref{restrictions_rbf}.
\end{lemma}
One can use the function $g_\epsilon^\beta$ to specify a spline-admissible operator, as detailed in the next elementary proposition.
\begin{proposition} \label{prop:usefulcriterion}
Let $g_\epsilon^\beta$ be as above.
Then, the Fourier sequence $(1/ \widehat{g}_\epsilon^\beta[k])_{k\in \mathbb{Z}}$ defines an operator in $\mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}))$ via the relation
\begin{equation} \label{eq:Gepsibetadef}
\mathrm{G}_\epsilon^\beta \{f\} = \sum_{k\in \mathbb{Z}} \frac{\widehat{f}[k]}{\widehat{g}_\epsilon^\beta[k]} e_k, \qquad \forall f \in \mathcal{S}'( \mathbb{T}).
\end{equation}
Moreover, $\mathrm{G}_\epsilon^\beta$ is spline-admissible with trivial null space. Its Green's function is $g_\epsilon^\beta$ and its spectral growth is $\gamma = 2(\beta - 1/2)$.
\end{proposition}
\begin{proof}
The sequence $(1/ \widehat{g}_\epsilon^\beta[k])_{k\in \mathbb{Z}}$ is clearly slowly growing due to the left inequality in \eqref{tight_bound}, hence the operator $\mathrm{G}_\epsilon^\beta$ defined by \eqref{eq:Gepsibetadef} is in $\mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}))$ (see Proposition \ref{prop:firsttrivialstuff}).
The Fourier sequence of $\mathrm{G}_\epsilon^\beta$ is non-vanishing, which implies, due to Proposition \ref{prop:finitedimNL}, that the null space of $\mathrm{G}_\epsilon^\beta$ is trivial. Moreover, the spectral growth is directly deduced from \eqref{tight_bound} in Lemma \ref{lemma:gia}. Finally, we readily see that, by construction, $\mathrm{G}_\epsilon^\beta g_\epsilon^\beta = \Sha$, so that $g_\epsilon^\beta$ is indeed the Green's function of $\mathrm{G}^\beta_\epsilon$.
\end{proof}
\paragraph{Periodic Mat\'ern Splines.}
For $\beta > 1$, we define $G^\beta$ as the \emph{Mat\'ern function of order $(\beta - 1)$}, which is given by~\cite[Eq. (4.14)]{rasmussen2003gaussian}
\begin{equation} \label{eq:kernelmatern}
G^\beta(r)= S_{\beta-1}(r) = \frac{2^{2-\beta}}{\Gamma(\beta - 1)}\left(\sqrt{2(\beta - 1)} r\right)^{\beta - 1} K_{\beta - 1}\left(\sqrt{2(\beta - 1)}r\right), \qquad \forall r \geq 0,
\end{equation}
where $K_\nu$ denotes the \emph{modified Bessel function of the second kind of parameter $\nu> 0$} \cite[Section 9.6]{abramowitz1948handbook}.
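Numerically, \eqref{eq:kernelmatern} is directly available from standard special-function libraries. The following Python sketch (ours, for illustration) implements it with SciPy and checks it against the closed form $S_{1/2}(r) = \mathrm{e}^{-r}$ used below.
\begin{verbatim}
import numpy as np
from scipy.special import gamma, kv

def matern(r, nu):
    # S_nu(r) = 2^(1 - nu) / Gamma(nu) * (sqrt(2 nu) r)^nu * K_nu(sqrt(2 nu) r),
    # extended by continuity with S_nu(0) = 1.
    r = np.atleast_1d(np.asarray(r, dtype=float))
    s = np.sqrt(2.0 * nu) * r
    out = np.ones_like(s)
    pos = s > 0
    out[pos] = 2.0 ** (1.0 - nu) / gamma(nu) * s[pos] ** nu * kv(nu, s[pos])
    return out

# Sanity check against the nu = 1/2 closed form S_{1/2}(r) = exp(-r):
r = np.linspace(0.01, 3.0, 10)
assert np.allclose(matern(r, 0.5), np.exp(-r))
\end{verbatim}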
\begin{definition} \label{def:matern}
Let $\beta > 1$ and $\epsilon > 0$. The Mat\'ern operator $\mathrm{M}_{\epsilon}^{\beta}$ is the operator whose Green's function $g_\epsilon^\beta$ is given by \eqref{restrictions_rbf}, where the radial function satisfies \eqref{eq:kernelmatern}.
\end{definition}
The next proposition, mostly based on known results, characterizes the properties of the Mat\'ern operators $\mathrm{M}_{\epsilon}^{\beta}$.
\begin{proposition} \label{prop:maternop}
For each $\beta > 1$ and $\epsilon > 0$, $\mathrm{M}_{\epsilon}^{\beta}$ is a spline-admissible operator with trivial null space, spectral growth $\gamma = 2(\beta - 1/2)$, and whose Fourier sequence $\widehat{g}_{\epsilon}^\beta$ satisfies \eqref{tight_bound}.
Moreover, when $\beta \in \mathbb{N}_{\geq 1}+ 1/2$, the Green's function is the product of an exponential and a polynomial function due to the relation, for the Mat\'ern function,
\begin{equation}
\label{matern_function}
S_{k+1/2}(r)=\exp\left(-\sqrt{2k+1} r\right)\frac{k!}{(2k)!}\sum_{i=0}^k \frac{(k+i)!}{i!(k-i)!}\left(\sqrt{8k+4} r\right)^{k-i}, \qquad \forall r \geq 0,
\end{equation}
with $k \in \mathbb{N}$.
\end{proposition}
\begin{proof}
The kernel $G^\beta$ defined by \eqref{eq:kernelmatern} reproduces the Sobolev space $\mathcal{H}_2^{\beta}( \mathbb{R}^2)$ according to \cite[Theorem 6.13]{wendland2004scattered}. We can therefore apply Lemma \ref{lemma:gia} to deduce \eqref{tight_bound}.
Moreover, the relation \eqref{matern_function} has been derived in \cite[Eq. (4.16)]{rasmussen2003gaussian} and \cite[Eq. (10.2.15)]{abramowitz1948handbook}.
The rest of the Proposition is then a direct application of Proposition \ref{prop:usefulcriterion} to the case of the Mat\'ern function.
\end{proof}
Proposition \ref{prop:maternop} implies that the Green's function of the Mat\'ern operator admits a closed form expression when $\beta - 1/2 \in \mathbb{N}_{\geq 1}$. For instance, for $\beta = 3/2$, we get that $g_\epsilon^{3/2}(x) = S_{1/2}(\epsilon^{-1} \sqrt{2- 2\cos x})$ for every $x \in \mathbb{T}$,
where $S_{1/2}(r)=\exp(-r)$ for all $r \geq 0$.
The Mat\'ern function converges to the Gaussian function as $\beta \rightarrow \infty$~\cite[Chapter 4, p. 84]{rasmussen2003gaussian} and is practically indistinguishable from it when $\beta \geq 9/2$.
Therefore, the periodic Mat\'ern Green's function $g_\epsilon^\beta$ in \eqref{restrictions_rbf} should resemble a bump function, sharply decaying away from zero. Using the Gaussian approximation above, it is moreover possible to approximate its effective support as $2\arcsin(5\epsilon/2)$, highlighting the role of the parameter $\epsilon$.
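As a quick numerical illustration, $\epsilon = 3/10$ gives an effective support of length $2 \arcsin(3/4) \approx 1.70$, \emph{i.e.}, roughly $27\%$ of the period $2\pi$.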
On Figure \ref{fig:matern}, we plot the periodic Mat\'ern Green's function $g_\epsilon^\beta$ for
$\beta\in\{3/2,5/2,7/2,9/2\}$ and $\epsilon\in\{1,3/10\}$.
As we have seen, due to \eqref{tight_bound}, this corresponds to a spectral growth $\gamma = 2(\beta - 1/2) \in \{2, 4, 6, 8\}$.
We use the parameter $\gamma$ in Figure \ref{fig:matern} to be consistent with the other families of operators.
We observe moreover that the function $g_\epsilon^\beta$ is more localized for smaller scale parameters $\epsilon$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\textwidth]{matern.pdf}
\caption{Periodic ${\rm L}$-splines associated to the Mat\'ern operators over two periods for various values of $\gamma > 0$ and $\epsilon = 1$ (top) or $\epsilon = 0.3$ (bottom). The splines are normalized so that the maximum value is $1$. }
\label{fig:matern}
\end{figure}
\paragraph{Periodic Wendland Splines.}
While exhibiting good spatial localizations, the Green's functions of the Mat\'ern operators are nevertheless theoretically supported over the entire period. By considering compactly supported radial functions $G^\beta$ in \eqref{restrictions_rbf}, it is possible to construct a class of spline-admissible operators, namely the \emph{Wendland operators}, whose Green's functions are supported on a subregion of $\mathbb{T}$.
\begin{definition}\label{def:wendland}
For $\mu \in \mathbb{N}$ and $\beta \in \mathbb{N}_{\geq 2}$, the \emph{missing Wendland function} is defined by
\begin{equation}\label{eq:Gmubeta}
W_{\mu}^{\beta}(r):=P_{\mu,\beta - 3/2}(r^2)\log\left(\frac{r}{1+\sqrt{1-r^2}}\right)+Q_{\mu,\beta - 3/2}(r^2)\sqrt{1-r^2},\qquad \forall r \geq 0,
\end{equation}
where $P_{\mu,\beta - 3/2}, Q_{\mu,\beta - 3/2}$ are polynomial functions defined in \cite[Eq. 3.12]{zhu2012compactly} and \cite[Corollary 4.6]{hubbert2012closed}, respectively.
The periodic Wendland operator $\mathrm{W}_{\epsilon,\mu}^\beta$ is then the operator whose Green's function $g_{\epsilon,\mu}^\beta$ is given by \eqref{restrictions_rbf} with $G^\beta = W_{\mu}^{\beta}$.
\end{definition}
The next proposition characterizes the properties of the Wendland operators $\mathrm{W}_{\epsilon,\mu}^{\beta}$.
\begin{proposition} \label{prop:wendlanderie}
Let $\mu \in \mathbb{N}$ and $\beta \in \mathbb{N}_{\geq 2}$. Then, the periodic Wendland operator $\mathrm{W}_{\epsilon,\mu}^\beta$ is in $\mathcal{L}_{\mathrm{SI}} ( \mathcal{S}'( \mathbb{T}))$, is spline-admissible, and has a trivial null space. Moreover, the Fourier sequence of its Green's function satisfies \eqref{tight_bound} and its support is $[-2\arcsin(\epsilon/2),2\arcsin(\epsilon/2)]\subset [-\pi,\pi]$ for $0<\epsilon\leq 2$. Finally, the spectral growth of $\mathrm{W}_{\epsilon,\mu}^\beta$ is $2(\beta - 1/2)$.
\end{proposition}
\begin{proof}
According to \cite[Proposition 3.5]{zhu2012compactly}, the missing Wendland function $W_{\mu}^{\beta}$ reproduces the Sobolev space $\mathcal{H}^\beta_2(\mathbb{R}^2)$. Lemma \ref{lemma:gia} therefore applies and \eqref{tight_bound} is shown. As for the Mat\'ern case, this means that the corresponding operator with Green's function $g_{\epsilon,\mu}^\beta$ is well-defined and spline-admissible with a trivial null space.
Moreover, the missing Wendland functions are compactly supported on $[0,1]$ \cite[Theorem 2.2]{hubbert2012closed}. Since $\sqrt{2 - 2\cos x} = 2 \lvert \sin(x/2) \rvert \leq \epsilon$ if and only if $\lvert x \rvert \leq 2\arcsin(\epsilon/2)$, this implies that the support of $g_{\epsilon,\mu}^\beta$ is $[-2\arcsin(\epsilon/2),2\arcsin(\epsilon/2)]\subset [-\pi,\pi]$ for $0 < \epsilon \leq 2$.
\end{proof}
Closed form expressions for the missing Wendland functions with parameters $\mu = \beta \in \{ 2, 3, 4, 5\}$ are listed in \cite[Table 4.2]{zhu2012compactly}.
In Figure \ref{fig:wendland}, we represent the periodic Wendland Green's functions for $\mu = \beta \in \{2,3,4\}$. We again use the spectral growth $\gamma = 2(\beta - 1/2) \in \{3,5,7\}$. \\
\textit{Remark.} It is worth noting that the Mat\'ern splines admitting a closed form expression are associated with an even spectral growth $\gamma \in 2 \mathbb{N}_{\geq 1}$. In comparison, the Wendland operators have an odd spectral growth $\gamma \in 2\mathbb{N}_{\geq 1}+ 1$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\textwidth]{wendland.pdf}
\caption{Periodic ${\rm L}$-splines associated to the Wendland operator over two periods for various values of $\gamma > 0$. The splines are normalized so that the maximum value is $1$.}
\label{fig:wendland}
\end{figure}
\subsection{Multivariate Splines-admissible Operators} \label{sec:multivariate}
In any ambient dimension $d \geq 1$, we consider two types of spline-admissible operators.
First, we consider separable ones, based on univariate operators for each of the $d$ variables. Second, we introduce operators with isotropic Fourier sequences, which are nonseparable and have isotropic Green's functions.
We plot families of multivariate periodic splines for the ambient dimensions $d = 2$ and $d=3$.
Table \ref{table:operatorsmultid} provides the list of some of the multivariate spline-admissible operators introduced thereafter, together with their Fourier sequence and their null space via the set of null space frequencies $N_{{\rm L}}$.
\begin{table*}[h!]
\centering
\caption{Families of multivariate spline-admissible operators}
\begin{tabular}{ccccc}
\hline
\hline
Spline's type & Operator & Parameter & $\widehat{L}[\bm{k}]$ & $N_{\rm L}$
\\
\hline\\[-1ex]
Separable splines & $\prod_{i=1}^d ({\rm D}_i - \alpha_i \mathrm{Id})^{\gamma_i}$ & $\gamma_i >0, \ \alpha_i \in \mathbb{R} \setminus \{0\}$ & $\prod_{i=1}^d ( \mathrm{i} k_i - \alpha_i)^{\gamma_i}$ & $\emptyset$ \\
Polyharmonic spline & $\Delta^N$ & $N\in \mathbb{N}$ & $ (-1)^N \lVert \bm{k}\rVert^{2N} $ & $\{\bm{0}\}$ \\
& $\Delta + \lVert \bm{k}_0 \rVert^2 \mathrm{Id}$ & $\bm{k}_0 \in \mathbb{Z}^d$ & $ \lVert \bm{k}_0\rVert^2 - \lVert \bm{k}\rVert^2$ & $\{ \bm{k}, \ \lVert \bm{k}\rVert^2 = \lVert \bm{k}_0\rVert^2 \} $ \\
Fractional polyharmonic splines & $(- \Delta)^{\gamma/2}$ & $\gamma > 0$ & $\lVert \bm{k}\rVert^{\gamma}$ & $\{\bm{0}\}$ \\
Sobolev splines & $(\alpha^2 \mathrm{Id} - \Delta)^{\gamma/2}$ & $\alpha > 0$, $\gamma > 0$ & $( \alpha^2 + \lVert \bm{k}\rVert^2)^{\gamma/2}$ & $\emptyset$ \\
\hline
\hline
\end{tabular} \label{table:operatorsmultid}
\end{table*}
\paragraph{Periodic Separable Splines.}
Let ${\rm L}_i$ be univariate spline-admissible operators for $i=1,\ldots , d$. We assume moreover that each ${\rm L}_i$ has a trivial null space, and therefore admits an inverse operator ${\rm L}_i^{-1}$. Then, the operator ${\rm L}$ with Fourier sequence
\begin{equation} \label{eq:Lopseparable}
\widehat{L}[\bm{k}] = \prod_{i=1}^d \widehat{L}_i[k_i]
\end{equation}
for any $\bm{k} = (k_1, \ldots , k_d) \in \mathbb{Z}^d$ is spline-admissible with trivial null space and inverse ${\rm L}^{-1}$ with Fourier sequence $(1/\widehat{L}[\bm{k}])_{\bm{k}\in \mathbb{Z}^d}$.
We denote by ${\rm D}_i$ the derivative with respect to the $i$th coordinate. Applying the previous principle, we easily see that the operator
${\rm L} = \prod_{i=1}^d ({\rm D}_i - \alpha_i \mathrm{Id})^{\gamma_i}$ with $\gamma_i > 0$, $\alpha_i \in \mathbb{R} \setminus \{0\}$ is a periodic separable spline-admissible operator, using ${\rm L}_i = ( {\rm D} - \alpha_i \mathrm{Id})^{\gamma_i}$ for any $i=1, \ldots , d$ in \eqref{eq:Lopseparable}.
We represent the corresponding separable splines with a unique knot at $\bm{x} = \bm{0}$ for the ambient dimension $d=2$ in Figure \ref{fig:exposeparable}. \\
\begin{figure}[t!]
\centering
\includegraphics[width=0.70\textwidth]{exp2_alpha=1,3_exp=2,2.pdf}
\caption{Periodic ${\rm L}$-spline with one knot $\bm{x}_1 = \bm{0}$ (Green's function) associated to ${\rm L} = (\mathrm{D}_1 - \mathrm{Id})^2 (\mathrm{D}_2 - 3 \mathrm{Id})^2$ over $3 \times 3$ periods (left) or in the geometrical $2$d torus (right).}
\label{fig:exposeparable}
\end{figure}
\textit{Remark.} If one of the ${\rm L}_i$ has a nontrivial null space, then the null space of ${\rm L}$ defined by \eqref{eq:Lopseparable} is infinite-dimensional, and the operator ${\rm L}$ is therefore not spline-admissible. Indeed, the null space contains any generalized function of the form $\bm{x} = (x_1, \ldots, x_d) \mapsto p(x_i) f(x_1, \ldots , x_{i-1}, x_{i+1} , \ldots x_d)$ for any $f \in \mathcal{S}'( \mathbb{T}^{d-1})$ and $p \in \mathcal{N}_{{\rm L}_i}$. As a typical example, the operator ${\rm D}_1 \ldots {\rm D}_d$ is not spline-admissible.
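The separable construction is straightforward to implement in the Fourier domain; the sketch below (with our own illustrative names and truncation) builds the tensor-product sequence \eqref{eq:Lopseparable} and checks that it does not vanish, in line with the remark above.
\begin{verbatim}
import numpy as np

def separable_symbol(symbols, K=32):
    # Tensor-product Fourier sequence L_hat[k] = prod_i L_hat_i[k_i] on [-K, K]^d
    grids = np.meshgrid(*[np.arange(-K, K + 1)] * len(symbols), indexing="ij")
    L = np.ones_like(grids[0], dtype=complex)
    for Li, ki in zip(symbols, grids):
        L = L * Li(ki)
    return L

# d = 2 example: L = (D1 - Id)^2 (D2 - 3 Id)^2. The sequence never vanishes,
# so the null space is trivial; with alpha_i = 0, some entries would vanish
# and the operator would not be spline-admissible.
L = separable_symbol([lambda k: (1j * k - 1) ** 2, lambda k: (1j * k - 3) ** 2])
assert np.all(np.abs(L) > 0)
\end{verbatim}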
\paragraph{Multivariate Polyharmonic Splines.}
The fractional Laplacian operator $(-\Delta)^{\gamma/2}$ introduced in the univariate setting has a multivariate counterpart. It corresponds to the Fourier sequence $(\lVert \bm{k}\rVert^\gamma)_{\bm{k}\in \mathbb{Z}^d}$. Its null space is made of constant functions, and its pseudoinverse has Fourier sequence $\One_{\bm{k}\neq 0} / \lVert \bm{k} \rVert^\gamma$. It is therefore spline-admissible. As we have seen for the derivative ${\rm D}$, a non-constant periodic $(-\Delta)^{\gamma/2}$-spline has at least two knots. We represent such periodic splines with knots at $\bm{0}$ and $(0, \pi)$ in Figure \ref{fig:bilaplacien2d}.
They correspond to the periodic counterpart of the polyharmonic splines considered for instance in \cite{Madych1990polyharmonic,rabut1992elementarym,VDV2005polyharmonic}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\textwidth]{squared_bidelta.png}
\caption{Periodic ${\rm L}$-spline with two knots at $\bm{x}_1 = (0,0)$ and $\bm{x}_2 = (0, \pi)$ associated to ${\rm L} = (-\Delta)^2$ over $3 \times 3$ periods (left) or in the geometrical $2$d torus (right).}
\label{fig:bilaplacien2d}
\end{figure}
\paragraph{Multivariate Sobolev Splines.}
As for the fractional Laplacian, the Sobolev operator admits a multivariate generalization: the operator $(\alpha^2 \mathrm{Id} - \Delta)^{\gamma/2}$ with Fourier sequence $( \alpha^2 + \lVert \bm{k}\rVert^2)^{\gamma/2}$. It is an invertible operator with trivial null space, and is therefore spline-admissible. The corresponding splines with a unique knot at $\bm{x}=\bm{0}$ are represented in Figure \ref{fig:bisobolev2d}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\textwidth]{bisobolev_alpha=2_exp=2.pdf}
\caption{Periodic ${\rm L}$-spline with one knot $\bm{x}_1 = \bm{0}$ (Green's function) associated to ${\rm L} = (4 \mathrm{Id} -\Delta)^2$ over $3 \times 3$ periods (left) or in the geometrical $2$d torus (right).}
\label{fig:bisobolev2d}
\end{figure}
\paragraph{Periodic splines in dimension $d=3$.}
To illustrate the versatility of our framework, we also represent periodic splines in ambient dimension $d=3$ for separable exponential operators, Sobolev operators and separable Mat\'ern operators in Figure \ref{fig:exponential3d}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.3\textwidth]{exp3_alpha=05,1,15_exp=2,2,2.pdf}
\includegraphics[width=0.3\textwidth]{sobolev_alpha=2_exp=2.png}
\includegraphics[width=0.3\textwidth]{matern3_nu=2_5,2_5,2_5_scale=1,1,1.png}
\caption{Periodic ${\rm L}$-spline with one knot $\bm{x}_1 = \bm{0}$ (Green's function) associated to ${\rm L} = (0.5 \mathrm{Id} - \mathrm{D}_1)^2 (\mathrm{Id} - \mathrm{D}_2)^2 (1.5 \mathrm{Id} - \mathrm{D}_3)^2$ (left), ${\rm L} = (4 \mathrm{Id} - \Delta)^2$ (center) and ${\rm L} =(\mathrm{M}_{1}^{3.5})^3$ (right) over $2 \times 2 \times 2$ periods.}
\label{fig:exponential3d}
\end{figure}
\section{Measurement Space and Admissible Measurements} \label{sec:criteria}
We have seen in Theorem \ref{theo:RT} that the measurement space $\mathcal{C}_{\Lop}( \mathbb{T}^d)$ of a spline-admissible operator ${\rm L}$ is the exact function space from which the measurements can be taken such that the optimization problem is well-posed and the extreme points are characterizable as periodic ${\rm L}$-splines.
This is of practical significance: it delineates which measurements can be applied in order to keep the conclusions of Theorem \ref{theo:RT}.
In this section, we provide conditions to identify if classical measurement procedures are applicable for a given spline-admissible operator ${\rm L}$.
A special focus is given to Fourier sampling (Section \ref{sec:Fouriersampling}) and space sampling measurements (Section \ref{sec:spatialsampling}).
\subsection{Fourier Sampling} \label{sec:Fouriersampling}
The ${e}_{\bm{k}}$ are infinitely differentiable, and therefore belong to $ \mathcal{S}( \mathbb{T}^d) \subseteq \mathcal{C}_{\Lop}( \mathbb{T}^d)$ for any spline-admissible operator ${\rm L}$ (the embedding is proven in Theorem \ref{theo:whatisCL}).
One can therefore consider the measurement functional $\bm{\nu} = (e_{\bm{k}^1} , \ldots , e_{\bm{k}^M}) \in (\mathcal{C}_{\Lop}( \mathbb{T}^d))^M$ as linear measurements with distinct frequencies $\bm{k}^m$.
In order to apply Theorem \ref{theo:RT}, the only restriction is that $\bm{\nu} : \mathcal{N}_{\Lop} \rightarrow \mathbb{R}^M$ should be injective over the finite dimensional null space of ${\rm L}$.
Equivalently, we require that the frequencies $\bm{k}^m$ used for the Fourier sampling should include the null space frequencies $\bm{k}_1,\ldots,\bm{k}_{N_0}$ of ${\rm L}$.
For instance, with the $N$th order derivative operator ${\rm D}^N$ in dimension $d=1$, one should include $\nu_m = e_{0} = 1$ as a measurement functional.
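In practice, such Fourier sampling measurements can be approximated from uniform samples of $f$ with an FFT; a minimal sketch (ours, for illustration) reads as follows.
\begin{verbatim}
import numpy as np

def fourier_sampling(f_grid, freqs):
    # nu_m(f) = f_hat[k_m], approximated from n uniform samples over [0, 2 pi)
    n = len(f_grid)
    fhat = np.fft.fft(f_grid) / n
    return fhat[np.mod(freqs, n)]

# Include the null space frequency k = 0 of D^N among the measurements:
freqs = np.array([0, 1, -1, 2, -2])
\end{verbatim}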
\subsection{Spatial Sampling} \label{sec:spatialsampling}
In view of Theorem \ref{theo:RT}, classical sampling is an admissible measurement procedure if and only if
\begin{equation} \label{eq:RKBS}
\Sha(\cdot - \bm{x}_0) \in \mathcal{C}_{\Lop}( \mathbb{T}^d), \qquad \forall \bm{x}_0 \in \mathbb{T}^d.
\end{equation}
We recover the classical notion of Reproducing Kernel Hilbert Spaces (RKHS), here in the context of (non-reflexive) Banach spaces.
Under the assumption \eqref{eq:RKBS} and selecting $\nu_m = \Sha(\cdot - \bm{x}_m)$, for any $f \in \mathcal{M}_{\Lop}( \mathbb{T}^d)$, we have $\bm{\nu}(f) = (f(\bm{x}_1),\ldots , f(\bm{x}_M) )$.
\begin{definition}[Sampling-Admissible Operator]
A spline-admissible operator ${\rm L}$ is said to be \emph{sampling-admissible} if \eqref{eq:RKBS} holds.
\end{definition}
\begin{proposition} \label{prop:conditionsampling}
Let ${\rm L}$ be a spline-admissible operator.
Then, we have the equivalence
\begin{equation} \label{eq:equivalenceCLC}
{\rm L} \text{ is sampling-admissible} \quad \Longleftrightarrow \quad {\rm L^{\dagger}} \Sha \in \mathcal{C}( \mathbb{T}^d).
\end{equation}
\end{proposition}
\begin{proof}
First of all, $\mathcal{C}_{\Lop}( \mathbb{T}^d)$ is shift-invariant because ${\rm L}$ is. Hence, $\Sha (\cdot - \bm{x}_0) \in \mathcal{C}_{\Lop}( \mathbb{T}^d)$ for every $\bm{x}_0$ if and only if $\Sha\in \mathcal{C}_{\Lop}( \mathbb{T}^d)$.
Assume that ${\rm L^{\dagger}} \Sha \in \mathcal{C}( \mathbb{T}^d)$.
Then, $\Sha = {\rm L} \{ {\rm L^{\dagger}} \Sha\} + \mathrm{Proj}_{\mathcal{N}_{\Lop}} \Sha \in {\rm L} (\mathcal{C}( \mathbb{T}^d)) + \mathcal{N}_{\Lop} = \mathcal{C}_{\Lop}( \mathbb{T}^d)$, as expected.
If now $\Sha \in \mathcal{C}_{\Lop}( \mathbb{T}^d)$, then $\Sha = {\rm L} f + p $ with $f \in \mathcal{C}( \mathbb{T}^d)$ and $p\in \mathcal{N}_{\Lop}$.
Therefore, we have that
\begin{equation}
{\rm L^{\dagger}} \Sha = {\rm L^{\dagger}} {\rm L} f + {\rm L^{\dagger}} p = f - \mathrm{Proj}_{\mathcal{N}_{\Lop}} f + {\rm L^{\dagger}} p
\end{equation}
where we used that ${\rm L^{\dagger}} {\rm L} f = f - \mathrm{Proj}_{\mathcal{N}_{\Lop}} f$ according to \eqref{eq:LopKop}.
Moreover, ${\rm L^{\dagger}} p \in \mathcal{S}( \mathbb{T}^d) \subset \mathcal{C}( \mathbb{T}^d)$ because $p \in \mathcal{S}( \mathbb{T}^d)$, $\mathrm{Proj}_{\mathcal{N}_{\Lop}} f \in \mathcal{N}_{\Lop} \subset \mathcal{S}( \mathbb{T}^d) \subset \mathcal{C}( \mathbb{T}^d)$, and $f \in \mathcal{C}( \mathbb{T}^d)$ by definition. Hence, ${\rm L^{\dagger}} \Sha \in \mathcal{C}( \mathbb{T}^d)$. The equivalence \eqref{eq:equivalenceCLC} is established.
\end{proof}
Proposition~\ref{prop:conditionsampling} characterizes the validity of sampling measurements from the smoothness properties of the function ${\rm L^{\dagger}} \Sha$, which plays a similar role to the one of the Green's function for differential operators in a non periodic setting.
We now present some criteria based on the Fourier sequence of the pseudoinverse operator ${\rm L^{\dagger}}$.
\begin{proposition} \label{prop:conditionsamplingbis}
Let ${\rm L}$ be a spline-admissible operator with pseudoinverse ${\rm L^{\dagger}}$.
\begin{itemize}
\item If $\sum_{\bm{k} \in \mathbb{Z}^d} \lvert \widehat{L^\dagger}[\bm{k}] \rvert = \sum_{\bm{k}\in K_{\Lop}} \frac{1}{\lvert \widehat{L}[\bm{k}] \rvert} < \infty$, then ${\rm L}$ is sampling-admissible.
\item If $\sum_{\bm{k} \in \mathbb{Z}^d} \lvert \widehat{L^\dagger}[\bm{k}] \rvert^2 = \sum_{\bm{k}\in K_{\Lop}} \frac{1}{\lvert \widehat{L}[\bm{k}] \rvert^2} = \infty$, then ${\rm L}$ is not sampling-admissible.
\end{itemize}
In particular, a spline-admissible operator admitting a spectral growth $\gamma > d$ is sampling-admissible. If $\gamma \leq d/2$, then the operator is not sampling-admissible.
\end{proposition}
\begin{proof}
Recall that $( \widehat{L^\dagger}[\bm{k}])_{\bm{k}\in \mathbb{Z}^d}$ is the Fourier sequence of ${\rm L^{\dagger}} \Sha$.
The first condition means that Fourier sequence of ${\rm L^{\dagger}} \Sha$ is in $\ell_1( \mathbb{Z}^d)$, from which we deduce the continuity of ${\rm L^{\dagger}} \Sha$, and therefore \eqref{eq:RKBS}. The second condition means that $( \widehat{L^\dagger}[\bm{k}])_{\bm{k}\in \mathbb{Z}^d} \notin \ell_2( \mathbb{Z}^d)$, which is equivalent to ${\rm L^{\dagger}} \Sha \notin \mathcal{L}_2( \mathbb{T}^d)$. Hence, ${\rm L^{\dagger}} \Sha \notin \mathcal{C}( \mathbb{T}^d)$ and ${\rm L}$ is not sampling-admissible.
When the operator admits a growth order $\gamma$, \eqref{eq:AGforglop} reveals the asymptotic behavior of $ \lvert \widehat{L^\dagger}[\bm{k}] \rvert = \widehat{g}_{\rm L}[\bm{k}]$ and the last two results follow.
\end{proof}
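The $\ell_1$ criterion above is easy to probe numerically. The sketch below (illustrative only: partial sums cannot prove divergence) computes partial sums of $\sum_{k \neq 0} \lvert \widehat{D^\gamma}[k]\rvert^{-1} = 2\sum_{k\geq 1} k^{-\gamma}$ in dimension $d=1$.
\begin{verbatim}
import numpy as np

def partial_l1(gam, K):
    # Partial sums of sum_{k != 0} 1 / |L_hat[k]| for L = D^gam (d = 1)
    k = np.arange(1, K + 1)
    return 2.0 * np.sum(k ** (-float(gam)))

# gam = 2: the sums stabilize near pi^2 / 3 (sampling-admissible).
# gam = 1: they grow like 2 log(K); the l1 test is only sufficient, and D
# is indeed shown below not to be sampling-admissible.
for K in (10**2, 10**4, 10**6):
    print(K, partial_l1(2, K), partial_l1(1, K))
\end{verbatim}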
\textit{Remarks.} If we only know that $( \widehat{L^\dagger}[\bm{k}])_{\bm{k}\in \mathbb{Z}^d} \in \ell_2( \mathbb{Z}^d)$ and $( \widehat{L^\dagger}[\bm{k}])_{\bm{k}\in \mathbb{Z}^d} \notin \ell_1( \mathbb{Z}^d)$, we cannot say anything in general. Indeed, we shall see in Proposition \ref{prop:1dwhoissamplingOK} that the fractional derivative $\mathrm{D}^\gamma$, which is typically in this regime for $\gamma \in (1/2,1]$, is not sampling-admissible. However, there exist sequences $(c_{\bm{k}})_{\bm{k}\in \mathbb{Z}^d}$ such that $\lvert c_{\bm{k}} \rvert \sim_{\infty} \lVert \bm{k} \rVert^{-\gamma}$, and such that $f = \sum_{\bm{k} \in \mathbb{Z}^d} c_{\bm{k}} e_{\bm{k}} \in \mathcal{C}( \mathbb{T}^d)$. An example for $d=1$ is given by the Hardy-Littlewood series, defined for $\gamma \in (1/2,1]$ by
\begin{equation}
c_0 = 0 \text{ and } \forall k \neq 0, \ c_k = \frac{\mathrm{e}^{\mathrm{i} |k| \log |k|}}{|k|^\gamma}.
\end{equation}
This Fourier series is known to converge uniformly to a continuous function~\cite[Section V-4]{zygmund2002trigonometric}.
In that case, if ${\rm L}$ is defined by its Fourier sequence with $\widehat{L}[k] = c_{k}$, $k \in \mathbb{Z}$, we have that $\lvert \widehat{L}[k] \rvert = \lvert \widehat{D^\gamma}[k] \rvert$, while ${\rm L}$ is sampling-admissible. Note that this behavior is based on strong phase oscillations of the coefficients.
Moreover, we can easily generalize Propositions \ref{prop:conditionsampling} and \ref{prop:conditionsamplingbis} to other sampling measurements. For instance, sampling measurements on the derivative of the unknown function are allowed if and only if the derivative of the Dirac comb $\Sha' = {\rm D} \Sha$ is in the measurement space, with potential applications to spline-based reconstruction with tangent control~\cite{Uhlmann2016hermite}.
We now visit the sampling-admissibility of the classes of operators introduced in Section \ref{sec:differentialop}. \\
\subsubsection{Sampling-Admissibility of Univariate Operators}
The ambiant dimension is $d=1$. We investigate the sampling-admissibility of classical differential operators, and their fractional counterparts.
\begin{proposition} \label{prop:1dwhoissamplingOK}
Let $\gamma \geq 0$, $\alpha \in \mathbb{R}$. We have the equivalences:
\begin{align} \label{eq:1}
\mathrm{D}^\gamma \text{ is sampling-admissible } & \Longleftrightarrow (\mathrm{D} + \alpha \mathrm{Id})^\gamma \text{ is sampling-admissible }
\Longleftrightarrow (-\Delta)^{\gamma/2} \text{ is sampling-admissible } \\ \label{eq:2}
& \Longleftrightarrow (\alpha \mathrm{Id} - \Delta)^{\gamma/2} \text{ is sampling-admissible } \Longleftrightarrow \gamma > 1.
\end{align}
Moreover, the Mat\'ern and Wendland operators $\mathrm{M}_\epsilon^\beta$ and $\mathrm{W}_{\epsilon,\mu}^\beta$ are sampling-admissible for any $\epsilon > 0$, $\beta$, and $\mu$.
\end{proposition}
\begin{proof}
All the considered operators have a spectral growth $\gamma$. Moreover, a spline-admissible operator with spectral growth $\gamma > 1$ is sampling-admissible according to Proposition \ref{prop:conditionsamplingbis}. Hence, the condition $\gamma > 1$ implies the sampling-admissibility of all the considered operators in \eqref{eq:1} and \eqref{eq:2}. Similarly, the condition $\gamma \leq 1/2$ implies that the Green's function is not square-integrable, and therefore not continuous, and the operators are not sampling-admissible in this case.
For $\gamma = 1$, the function $\mathrm{D}^{\dagger} \Sha$ is given by
$\mathrm{D}^{\dagger} \Sha(x) = \pi - x$ for $x \in (0, 2\pi)$, and then periodized. Since $\mathrm{D}^{\dagger} \Sha(0^+) = \pi \neq \mathrm{D}^\dagger \Sha(2\pi ^-) = -\pi$, the function is discontinuous. Hence, $\mathrm{D}$ is not sampling-admissible.
For the case $1/2 < \gamma < 1$, we refer to \cite[Eq. (8.10), Section XII-8]{zygmund2002trigonometric}, where the function $(\mathrm{D}^\gamma)^{\dagger} \Sha$---denoted by $\Psi_{\gamma}$ up to a rescaling---is shown to be such that
\begin{equation}
\forall x \in (-2\pi,2\pi), \quad (\mathrm{D}^\gamma)^{\dagger} \Sha (x) = \frac{1}{\Gamma(\gamma)} (x)_+^{\gamma -1} + r_\gamma(x),
\end{equation}
with $\Gamma$ the Gamma function and $r_\gamma$ a function that is infinitely differentiable on $(-2\pi,2\pi)$.
The function $x \mapsto (x)_+^{\gamma -1}$ being discontinuous at the origin for $\gamma \in (1/2, 1)$, we deduce that $(\mathrm{D}^\gamma)^{\dagger} \Sha$ is also discontinuous, and $\mathrm{D}^\gamma$ is not sampling-admissible.
For ${\rm L} = (\mathrm{D} + \alpha \mathrm{Id})^\gamma$, we simply observe that
\begin{equation} \label{eq:argufracandexpo}
\widehat{L^\dagger}[k] - \widehat{(D^\gamma)^{\dagger}}[k] = \frac{1}{(\mathrm{i} k)^\gamma} \left( \frac{1}{(1 + \frac{\alpha}{ \mathrm{i} k})^\gamma} - 1 \right) = O\left( \frac{1}{k^{\gamma +1}}\right).
\end{equation}
In particular, we deduce that ${\rm L^{\dagger}}\Sha - \mathrm{D}^\gamma \Sha \in \mathcal{C}( \mathbb{T})$, because its Fourier sequence is in $\ell_1( \mathbb{Z})$. This means in particular that ${\rm L^{\dagger}}\Sha \in \mathcal{C}( \mathbb{T})$ if and only if $\mathrm{D}^\gamma \Sha \in \mathcal{C}( \mathbb{T})$, and the result follows.
We show similarly that $ (-\Delta)^{\gamma/2}$ is sampling-admissible if and only if $(\alpha \mathrm{Id} - \Delta)^{\gamma/2} $ is.
For $\gamma = 1$, we actually have that
\begin{equation}
((-\Delta)^{1/2})^{\dagger} \Sha (x) = 2 \sum_{k\geq 1} \frac{\cos (k x)}{k} = -\log (1 - \cos x) - \log 2, \qquad \forall x \in \mathbb{T},
\end{equation}
which is clearly discontinuous (and unbounded) around $0$. Fix $\gamma \in (1/2,1)$. Assume that the periodic function $x \mapsto f (x) = \sum_{k\geq 1} \widehat{f}[k] \cos( k x) \in \mathcal{C}( \mathbb{T})$ has positive Fourier coefficients. Then, for any $\alpha \in (0,1)$, we have that $\sum_{k \geq 1} k^{\alpha -1} \widehat{f}[k] < \infty$~\cite[Theorem 1]{boas1966fourier}.
Consider the function
\begin{equation}
f(x) = ((-\Delta)^{\gamma/2})^{\dagger} \Sha (x) = 2 \sum_{k\geq 1}\frac{\cos(k x)}{k^\gamma}, \qquad \forall x \in \mathbb{T}.
\end{equation}
Then, $\sum_{k \geq 1} k^{\alpha -1} \widehat{f}[k] = 2 \sum_{k \geq 1} k^{- (\gamma + 1- \alpha)}$, which is infinite as soon as $\gamma \leq \alpha$. Applying the contraposition of \cite[Theorem 1]{boas1966fourier}, we therefore deduce that $f$ is discontinuous, hence $ (-\Delta)^{\gamma/2}$ is not sampling-admissible.
For the Mat\'ern and Wendland operators, we remark that their spectral growth $\gamma = 2(\beta - 1/2)$ is strictly larger than $1$ according to Propositions \ref{prop:maternop} and \ref{prop:wendlanderie}, implying the sampling-admissibility.
\end{proof}
\subsubsection{Sampling-Admissibility of Multivariate Operators}
The ambient dimension is $d\geq 1$. We evaluate the sampling-admissibility of the separable and isotropic operators introduced above.
\begin{proposition}\phantom{test}
\begin{itemize}
\item Let ${\rm L}_i$ be $d$ univariate spline-admissible operators with trivial null space. Then, the separable multivariate spline-admissible operator ${\rm L} = \prod_{i=1}^d {\rm L}_i$ is sampling-admissible if and only if each ${\rm L}_i$ is. In particular, $\prod_{i=1}^d ({\rm D}_i - \alpha_i \mathrm{Id})^{\gamma_i}$ is sampling-admissible if and only if $\gamma_i > 1$ for any $i= 1 , \ldots , d$.
\item Let $\gamma \geq 0$. Then, we have the relations:
\begin{equation}\label{eq:twoequivalences}
\gamma > d \Longrightarrow (-\Delta)^{\gamma/2} \text{ is sampling-admissible } \Longleftrightarrow (\alpha^2 \mathrm{Id} -\Delta)^{\gamma/2} \text{ is sampling-admissible }.
\end{equation}
\end{itemize}
\end{proposition}
\begin{proof}
The Green's function of the separable operator ${\rm L}$ is $g_{{\rm L}}(\bm{x}) = g_{{\rm L}_1} (x_1) \ldots g_{{\rm L}_d}(x_d)$ for every $\bm{x} = (x_1, \ldots, x_d) \in \mathbb{T}^d$. Then, $g_{{\rm L}} \in \mathcal{C}( \mathbb{T}^d)$ if and only if each $g_{{\rm L}_i} \in \mathcal{C}( \mathbb{T})$ for $i=1, \ldots , d$. Applying this principle to $L_i = ({\rm D} - \alpha_i \mathrm{Id})^{\gamma_i}$, which is sampling-admissible if and only if $\gamma_i > 1$ according to Proposition \ref{prop:1dwhoissamplingOK}, gives the second result.
Let $\gamma > d$. Then, $(-\Delta)^{\gamma/2}$ has a growth order $\gamma > d$ and is therefore sampling-admissible using Proposition \ref{prop:conditionsamplingbis}.
The equivalence in \eqref{eq:twoequivalences} follows from an argument identical to \eqref{eq:argufracandexpo}: we readily show that the difference of the two Green's functions has a summable Fourier series and is therefore continuous.
\end{proof}
\textit{Remark.}
The case of multidimensional Fourier series is more involved than the univariate case~\cite[Chapter XVII]{zygmund2002trigonometric} and the literature on this subject is much less developed (see \cite{shapiro2011fourier} for an extensive discussion on these topics). In particular, the arguments we used in Proposition \ref{prop:1dwhoissamplingOK} for the equivalence between the sampling-admissibility of $(-\Delta)^{\gamma/2}$ and $\gamma > 1$ are not directly applicable. We conjecture however that the $d$-dimensional generalization of this result is true; that is, $\gamma > d$ if and only if $(-\Delta)^{\gamma/2}$ is sampling-admissible.
\subsection{Square-Integrable Measurement Functions} \label{sec:L2measurements}
We provide a simple characterization of the spline-admissible operators for which the space of square-integrable measurement functions is included in the measurement space in Proposition \ref{prop:squaremeasurement}.
\begin{proposition}
\label{prop:squaremeasurement}
Let ${\rm L}$ be a spline-admissible operator with pseudoinverse ${\rm L^{\dagger}}$. Then, the following equivalences hold:
\begin{equation}
\mathcal{L}_2( \mathbb{T}^d) \subseteq \mathcal{C}_{\Lop}( \mathbb{T}^d) \quad \Longleftrightarrow \quad {\rm L^{\dagger}} \Sha \in \mathcal{L}_2( \mathbb{T}^d) \quad \Longleftrightarrow \quad (\widehat{L^{\dagger}}[\bm{k}])_{\bm{k}\in \mathbb{Z}^d} \in \ell_2( \mathbb{Z}^d).
\end{equation}
More generally, we have the following equivalences:
\begin{equation}
\mathcal{H}_{2}^{-\tau}( \mathbb{T}^d) \subseteq \mathcal{C}_{\Lop}( \mathbb{T}^d) \quad \Longleftrightarrow \quad {\rm L^{\dagger}} \Sha \in \mathcal{H}_{2}^{\tau}( \mathbb{T}^d)
\quad \Longleftrightarrow \quad ( \lVert \bm{k} \rVert^\tau \widehat{L^{\dagger}}[\bm{k}])_{\bm{k}\in \mathbb{Z}^d} \in \ell_2( \mathbb{Z}^d)
\end{equation}
for any $\tau \in \mathbb{R}$, with $ \mathcal{H}_{2}^{\tau}( \mathbb{T}^d) $ the periodic Sobolev space of smoothness $\tau$ defined in \eqref{eq:sobolevspace}.
\end{proposition}
\begin{proof}
The proof for periodic Sobolev spaces is identical, so we only prove the first part of Proposition \ref{prop:squaremeasurement}.
The second equivalence is simply due to the Parseval relation. We therefore focus on the first one. \\
$\Longleftarrow$ Assume first that ${\rm L^{\dagger}} \Sha \in \mathcal{L}_2( \mathbb{T}^d)$.
Then, we have that $\lVert \widehat{L^\dagger} \rVert_{\ell_2}^2 = \sum_{\bm{k}\in \mathbb{Z}^d} | \widehat{L^{\dagger}}[\bm{k}]|^2 < \infty$. Let $f \in \mathcal{L}_2( \mathbb{T}^d)$. Then,
\begin{align} \label{eq:equationforcontinuity1}
\lVert f \rVert_{\mathcal{C}_{\Lop}}^2 &= \lVert {\rm L^{\dagger}}^* f \rVert_\infty^2 + \lVert \mathrm{Proj}_{\mathcal{N}_{\Lop}} f \rVert_2^2 \in [0,\infty].
\end{align}
Since $\mathrm{Proj}_{\mathcal{N}_{\Lop}}$ is an orthogonal projector, we have $\lVert \mathrm{Proj}_{\mathcal{N}_{\Lop}} f \rVert_2^2 \leq \lVert f \rVert_2^2$. Moreover, we have
\begin{align} \label{eq:lasteqpkopf}
\lVert {\rm L^{\dagger}}^* f \rVert_\infty &= \sup_{\bm{x}\in \mathbb{T}^d} \left\lvert \sum_{\bm{k}\in \mathbb{Z}^d} \overline{\widehat{L^\dagger} [\bm{k}] } \widehat{f}[\bm{k}] e_{\bm{k}} (\bm{x}) \right\rvert \leq \sum_{\bm{k}\in \mathbb{Z}^d} \lvert \widehat{L^\dagger} [\bm{k}] \rvert \lvert \widehat{f}[\bm{k}] \rvert \leq \lVert \widehat{L^{\dagger}} \rVert_{\ell_2} \lVert \widehat{f} \rVert_{\ell_2} = \lVert {\rm L^{\dagger}} \Sha \rVert_2 \lVert f \rVert_2 ,
\end{align}
where we used Cauchy-Schwarz in the last inequality and the Parseval relation in the last equality in \eqref{eq:lasteqpkopf}.
This shows that ${\rm L^{\dagger}}^* f \in \mathcal{L}_{\infty}( \mathbb{T}^d)$. Let us show moreover that ${\rm L^{\dagger}}^* f \in \mathcal{C}( \mathbb{T}^d)$.
We define $g_K = \sum_{\lVert \bm{k} \rVert \leq K} \widehat{L^{\dagger}}[\bm{k}] \widehat{f} [\bm{k}] e_{\bm{k}}$, which is the truncated Fourier series of ${\rm L^{\dagger}}^* f$. The functions $g_K$ are continuous (and even infinitely differentiable). Moreover, we have that, for any $\bm{x} \in \mathbb{T}^d$,
\begin{equation} \label{eq:partialsumforcontinuity}
\lvert {\rm L^{\dagger}}^* f (\bm{x}) - g_K(\bm{x}) \rvert
=
\left\lvert \sum_{\lVert \bm{k} \rVert > K} \overline{\widehat{L^{\dagger}}[\bm{k}] } \widehat{f} [\bm{k}] e_{\bm{k}} (\bm{x}) \right\rvert
\leq
\sum_{\lVert \bm{k} \rVert > K} \lvert \widehat{L^{\dagger}}[\bm{k}] \rvert \lvert \widehat{f} [\bm{k}] \rvert
\leq
\left(\sum_{\lVert \bm{k} \rVert > K} \lvert \widehat{L^{\dagger}}[\bm{k}] \rvert^2 \right)^{1/2} \left(\sum_{\lVert \bm{k} \rVert > K} \lvert \widehat{f} [\bm{k}] \rvert^2\right)^{1/2}
\end{equation}
where we used that $\lvert e_{\bm{k}}(\bm{x}) \rvert \leq 1$ and the Cauchy-Schwarz inequality. Both $\sum_{\lVert \bm{k} \rVert > K} \lvert \widehat{L^{\dagger}}[\bm{k}] \rvert^2$ and $\sum_{\lVert \bm{k} \rVert > K} \lvert \widehat{f} [\bm{k}] \rvert^2$ vanish when $K \rightarrow \infty$, and \eqref{eq:partialsumforcontinuity} holds for any $\bm{x}\in \mathbb{T}^d$, hence $\lVert {\rm L^{\dagger}}^* f - g_K\rVert_\infty \rightarrow 0$ when $K\rightarrow \infty$. Then, $ {\rm L^{\dagger}}^* f $ is the uniform limit of the continuous functions $g_K$, and is therefore continuous. In particular, $f \in \mathcal{C}_{\Lop}( \mathbb{T}^d)$, showing the set inclusion $\mathcal{L}_2( \mathbb{T}^d) \subset \mathcal{C}_{\Lop}( \mathbb{T}^d)$.
Using \eqref{eq:equationforcontinuity1} and \eqref{eq:lasteqpkopf}, we moreover deduce that
\begin{equation}
\lVert f \rVert_{\mathcal{C}_{\Lop}} \leq (1 + \lVert {\rm L^{\dagger}} \Sha \rVert_2^2)^{1/2} \lVert f \rVert_2,
\end{equation}
which, together with the set inclusion $\mathcal{L}_2( \mathbb{T}^d) \subset \mathcal{C}_{\Lop}( \mathbb{T}^d)$, implies the topological embedding $\mathcal{L}_2( \mathbb{T}^d) \subseteq \mathcal{C}_{\Lop}( \mathbb{T}^d)$. \\
$\Longrightarrow$ Assume now that $\mathcal{L}_2( \mathbb{T}^d) \subseteq \mathcal{C}_{\Lop}( \mathbb{T}^d)$. Moreover, we know with Theorem \ref{theo:whatisCL} that $ \mathcal{S}( \mathbb{T}^d)$ is dense in $ \mathcal{C}_{\Lop}( \mathbb{T}^d)$. This implies that the embedding $\mathcal{L}_2( \mathbb{T}^d) \subseteq \mathcal{C}_{\Lop}( \mathbb{T}^d)$ is also dense, from which we deduce the topological embedding $\mathcal{M}_{\Lop}( \mathbb{T}^d) \subseteq \mathcal{L}_2( \mathbb{T}^d)$, using that $(\mathcal{L}_2( \mathbb{T}^d))' = \mathcal{L}_2( \mathbb{T}^d)$ and $(\mathcal{C}_{\Lop}( \mathbb{T}^d))' = \mathcal{M}_{\Lop}( \mathbb{T}^d)$ due to Theorem \ref{theo:RieszMarkovgeneralized}. Finally, since ${\rm L^{\dagger}} \Sha \in \mathcal{M}_{\Lop}( \mathbb{T}^d)$ (because ${\rm L} {\rm L^{\dagger}} \Sha = \Sha - \mathrm{Proj}_{\mathcal{N}_{\Lop}} \Sha \in \mathcal{M}( \mathbb{T}^d)$), we conclude that ${\rm L^{\dagger}} \Sha \in \mathcal{L}_2( \mathbb{T}^d)$ as expected.
\end{proof}
We say that a spline-admissible operator is \emph{$\mathcal{L}_2$-admissible} if its measurement space contains the square-integrable functions.
In particular, an {$\mathcal{L}_2$-admissible} operator admits indicator functions as valid measurement functions.
From the previous results, we deduce that a sampling-admissible operator is necessarily $\mathcal{L}_2$-admissible. Indeed, the sampling-admissibility implies that $\widehat{L}^\dagger \in \ell_2( \mathbb{Z}^d)$ (second part of Proposition \ref{prop:conditionsamplingbis}), which is equivalent to the $\mathcal{L}_2$-admissibility with Proposition \ref{prop:squaremeasurement}.
The next corollary reveals which pseudo-differential operators introduced in Section \ref{sec:differentialop} are {$\mathcal{L}_2$-admissible}.
\begin{corollary} \label{coro:L2admonpseudodiff}
Let $\gamma \geq 0$, $\alpha \in \mathbb{C}$.
Then, the univariate spline-admissible operators $\mathrm{D}^\gamma$ and $(\mathrm{D} + \alpha \mathrm{Id})^\gamma$ are $\mathcal{L}_2$-admissible if and only if $\gamma > 1/2$.
In any ambient dimension $d\geq 1$, the multivariate spline-admissible operators $(-\Delta)^{\gamma/2}$ and $(\alpha^2 \mathrm{Id} -\Delta)^{\gamma/2}$ are $\mathcal{L}_2$-admissible if and only if $\gamma > d/2$.
Finally, Mat\'ern and Wendland operators are $\mathcal{L}_2$-admissible.
\end{corollary}
\begin{proof}
The proof is very simple using Proposition \ref{prop:squaremeasurement}, which implies that ${\rm L}$ is $\mathcal{L}_2$-admissible if and only if $\sum_{\bm{k}\in \mathbb{Z}^d} | \widehat{L^\dagger}[\bm{k}] |^2 < \infty$. For instance, the Sobolev operator ${\rm L}_{\gamma,\alpha} = (\alpha^2 \mathrm{Id} - \Delta)^{\gamma/2}$ is such that
\begin{equation}
\sum_{\bm{k}\in \mathbb{Z}^d} | \widehat{L_{\gamma,\alpha}^\dagger}[\bm{k}] |^2 =
\sum_{\bm{k}\in \mathbb{Z}^d} \frac{1}{(\alpha^2 + \lVert \bm{k}\rVert^2)^{\gamma}},
\end{equation}
which is finite if and only if $2 \gamma > d$, as expected. Finally, the Mat\'ern and Wendland operators are $\mathcal{L}_2$-admissible, like any sampling-admissible operator.
\end{proof}
The results of Proposition \ref{prop:squaremeasurement} and Corollary \ref{coro:L2admonpseudodiff} are consistent with \cite[Proposition 8]{simeoni2020functionalpaper}, which obtains similar but partial results over the $d$-dimensional sphere $\mathbb{S}^d$. The two main differences are that \cite{simeoni2020functionalpaper} only provides a \emph{sufficient} condition for the set inclusion $\mathcal{L}_2(\mathbb{S}^d) \subset \mathcal{C}_{\Lop}(\mathbb{S}^d)$, and for a specific class of spline-admissible operators.
\section{Discussion and Conclusion} \label{sec:discussion}
For the sake of simplicity, we restrict our attention in this section to the case where the regularizing spline-admissible operator ${\rm L}$ has a trivial null space, that is, $\mathcal{N}_{\Lop} = \{0\}$. The pseudoinverse is then an inverse.
\subsection{Practical Discretization Schemes} \label{sec:algo}
Theorem \ref{theo:RT} can be used to derive canonical discretization schemes for the optimization problem \eqref{eq:optiwellstated}. Indeed, the representer theorem tells us that the extreme point solutions of \eqref{eq:optiwellstated} take the form of periodic ${\rm L}$-splines with sparse innovations---\emph{i.e.}, fewer innovations than available data. One idea for solving \eqref{eq:optiwellstated} in practice then consists in discretizing it by replacing the function $f : \mathbb{T}^d \rightarrow \mathbb{R}$ by a non-uniform ${\rm L}$-spline with unknown knots and weights. The spline innovations must then be estimated from the data. While the spline weights can be recovered from the measurements using a convex optimization problem, the same is not true for the knots, which are consequently much harder to estimate.
To circumvent this issue, one strategy consists in considering overparametrised uniform splines with knots chosen over a very fine uniform grid to approximate the non-uniform splines with unknown knots. The weights are then recovered by solving a discrete penalized basis pursuit problem using state-of-the-art proximal algorithms such as the ones discussed in \cite[Section 5.1]{simeoni2020functionalpaper}. Such discretization schemes were investigated and analyzed in \cite[Section V.B]{gupta2018continuous} and \cite[Section 5.1]{simeoni2020functionalpaper} in the Euclidean and spherical settings, respectively. Extensions to B-splines and multiresolution grids were also considered in \cite{Debarre2019}.
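For concreteness, here is a minimal sketch of the grid-based scheme under a quadratic data fidelity (the grid, names, and solver are ours, for illustration). With a trivial null space and $f = \sum_n a_n \, g_{\rm L}(\cdot - t_n)$ on a fine grid $(t_n)$, the regularization reduces to the $\ell_1$ norm of the weights, and the discretized problem is a LASSO that can be solved, \emph{e.g.}, by proximal gradient descent (ISTA).
\begin{verbatim}
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def grid_basis_pursuit(A, y, lam, n_iter=2000):
    # ISTA for min_a 0.5 ||A a - y||^2 + lam ||a||_1, where
    # A[m, n] = nu_m(g_L(. - t_n)) measures the n-th grid atom.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    a = np.zeros(A.shape[1])
    for _ in range(n_iter):
        a = soft(a - step * (A.T @ (A @ a - y)), step * lam)
    return a
\end{verbatim}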
While conceptually simple, this approach is computationally wasteful since the approximating uniform spline typically has many more innovations than the number of measurements.
As a potential cure to this issue, one could consider meshfree algorithms capable of directly recovering the non-uniform knots in the continuum. Candidate reconstruction algorithms include the Cadzow Plug-and-Play Gradient Descent (CPGD) algorithm \cite{simeoni2020cpgd} as well as the Frank-Wolfe algorithm and its variants \cite{denoyelle2019sliding,flinth2020linear}. Both algorithms have been successfully used for the reconstruction of periodic Dirac streams. To the best of our knowledge however, they have not yet been tested for the purpose of reconstructing spline knots, and would therefore need to be adapted for this specific purpose.
\subsection{Comparison with Generalized Periodic Tikhonov Regularization}
We compare here the solutions of the periodic TV-regularized problem \eqref{eq:optiwellstated} to its analog with quadratic Tikhonov regularization considered in~\cite{Badou}. The latter takes the form
\begin{equation} \label{eq:L2optipb}
\min_{f} \ E(\bm{y}, \bm{\nu} (f ) ) + \lambda \lVert {\rm L} f \rVert^2_{\mathcal{L}_2},
\end{equation}
where $E$ is a cost function sharing the same properties as in Theorem \ref{theo:RT} and ${\rm L}$ is a spline-admissible operator.
According to \cite[Theorem 1]{Badou}, the solution of \eqref{eq:L2optipb} is unique and of the form
\begin{equation} \label{eq:uniquedudeL2}
f_{\mathrm{opt}} (\bm{x}) = \sum_{m=1}^M a_m ( h_{{\rm L}} * \nu_m) (\bm{x}) ,
\end{equation}
where $h_{{\rm L}} =({\rm L}^* {\rm L})^{-1}\{\Sha\}= \sum_{\bm{k}\in \mathbb{Z}^d} \frac{e_{\bm{k}}}{\lvert \widehat{L}[\bm{k}]\rvert^2}$.
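For instance, with spatial sampling measurements $\nu_m = \Sha(\cdot - x_m)$ and a quadratic cost, \eqref{eq:uniquedudeL2} reduces to kernel ridge regression with kernel $h_{\rm L}$, and the weights solve the linear system $(\bm{G} + \lambda \bm{\mathrm{I}}) \bm{a} = \bm{y}$ with $\bm{G}_{mn} = h_{\rm L}(x_m - x_n)$. A minimal univariate Python sketch (ours; $h_{\rm L}$ is evaluated by a truncated Fourier series) reads as follows.
\begin{verbatim}
import numpy as np

def h_L(x, L_hat, K=200):
    # Truncated Fourier series of h_L = (L* L)^(-1) Sha at the points x
    k = np.arange(-K, K + 1)
    w = 1.0 / np.abs(L_hat(k)) ** 2  # Fourier coefficients of h_L
    return np.real(np.exp(1j * np.multiply.outer(x, k)) @ w)

def tikhonov_solve(x_s, y, L_hat, lam):
    G = h_L(x_s[:, None] - x_s[None, :], L_hat)  # Gram matrix h_L(x_m - x_n)
    a = np.linalg.solve(G + lam * np.eye(len(y)), y)
    return lambda x: h_L(x[:, None] - x_s[None, :], L_hat) @ a

# Sobolev operator (4 Id - Delta) in d = 1, i.e. L_hat(k) = 4 + k^2:
f_opt = tikhonov_solve(np.array([0.5, 2.0, 4.0]), np.array([1.0, -1.0, 0.5]),
                       lambda k: 4.0 + k ** 2, lam=0.1)
\end{verbatim}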
The main differences between the two settings are then as follows:
\begin{itemize}
\item For Tikhonov regularization the solution is unique, which is not the case in general for \eqref{eq:optiwellstated}, as revealed by Theorem \ref{theo:RT}. Tikhonov regularization is hence a more effective regularization strategy when it comes to enforcing uniqueness of the solution.
\item The unique solution of \eqref{eq:L2optipb} lives in a finite dimensional space of dimension $M$ generated by the functions $\{h_{{\rm L}} * \nu_m, \,1 \leq m \leq M\}$. This is reminiscent of kernel methods: the function to reconstruct lies in a possibly infinite dimensional Hilbert space, but the regularization procedure enforces the solution to lie in a finite-dimensional space determined by the choice of the kernel and the measurement functionals.
In contrast, TV regularization benefits from an infinite union of finite-dimensional subsets, given by periodic ${\rm L}$-splines with less than $M$ knots at any possible locations.
This is known to improve the adaptiveness of the method and yield higher accuracy estimates with sharp variations~\cite{lu2008theory,eldar2009robust}.
\item Finally, for the Tikhonov optimization problem \eqref{eq:L2optipb}, the measurement functionals $\nu_m$ directly impact the form of the estimate \eqref{eq:uniquedudeL2}. For instance, with Fourier measurements, this leads to the well-known Gibbs phenomenon and the presence of oscillations in the reconstruction (see \cite[Figure 4(a)]{gupta2018continuous}). In contrast, the form of the solutions of \eqref{eq:optiwellstated} is agnostic to the measurement process and depends only on the chosen regularizing operator ${\rm L}$. Solutions to the TV regularized optimisation problem \eqref{eq:optiwellstated} are hence less sensitive to Gibbs-like phenomena.
\end{itemize}
\subsection{Comparison with TV Regularization over $ \mathbb{R}^d$}
As we have seen in Section \ref{sec:intro}, many recent works have investigated the reconstruction of continuous-domain functions $f : \mathbb{R}^d \rightarrow \mathbb{R}$ from finitely many noisy measurements.
Our paper contributes to this global effort by considering the use of TV-based regularization norms for the reconstruction of periodic functions.
We believe that the periodic setting has several remarkable advantages that greatly facilitate the construction of the framework.
First, Schwartz functions over $ \mathbb{R}^d$ mix smoothness and decay properties, while periodic Schwartz functions must only be smooth. This significantly simplifies the construction of the native space and the measurement space, as can be appreciated when comparing to the derivations in \cite{unser2019native}. In the periodic setting, we are moreover able to provide complete characterizations of spline-admissible operators from their Fourier symbol and of sampling-admissible operators with concrete criteria applicable to classical families of (possibly fractional) pseudo-differential operators. In both cases and to the best of our knowledge, similar results are only partially known in the non periodic setting.
Second, even if splines play a central role for both the periodic and non periodic settings, the construction of the splines differs. Consequently, the form of the extreme point solutions differs. Consider for instance the univariate operator ${\rm L} = {\rm D}^N$. In the non periodic setting, an extreme-point solution has at most $(M-N_0)$ knots~\cite[Theorem 2]{Unser2017splines}. For the periodic case, extreme-point solutions have at most $M$ knots whose weights satisfy the linear condition $\bm{\mathrm{M}} \bm{a} = \bm{0}$ (see Theorem \ref{theo:RT}).
Finally, it is worth noting that the dimension $N_0$ of the null space of ${\rm L}$ depends on the chosen setting: for the periodic case, $N_0=1$ when ${\rm L}={\rm D}^{N}$, while $N_0 = N$ over the real line.
\subsection{Conclusions}
We presented a general framework for the reconstruction of sparse and periodic functions defined over a continuum from finitely many noisy linear measurements.
This is achieved by using total variation based regularizations in addition to a data fidelity term.
The main novelty of our work was to address the problem in full generality for periodic functions in a self-contained manner.
In particular, we characterized the complete class of periodic operators and periodic measurement functionals for which a periodic representer theorem for the solution of \eqref{eq:optipb} can be obtained.
In a future work, we plan to work on practical aspects of the proposed periodic framework, including discretization procedures, reconstructions algorithms, and practical applications to signal processing tasks.
\section*{Acknowledgments}
The authors are grateful to Michael Unser, Thomas Debarre, and Quentin Denoyelle for interesting discussions at the early stage of this research project.
Julien Fageot is supported by the Swiss National Science Foundation (SNSF) under Grant P2ELP2\_181759. For this work, Matthieu Simeoni was in part supported by the Swiss National Science Foundation grant number 200021 181978/1, “SESAM - Sensing and Sampling: Theory and Algorithms”.
\small
\bibliographystyle{IEEEtran}
\label{sec:altern-form-order}
In order to show that the proposed alternative form of the order
parameter for the Vicsek model is identical to the original one, we
start from the expression
\begin{equation}
\label{eq:1}
\phi_\eta(t) = \frac{1}{N} \sum_{i=1}^N \cos \left[ \theta_i(t) -
\bar{\theta}(t)\right].
\end{equation}
Using the cosine angle difference identity, we can write
\begin{equation}
\label{eq:8}
\phi_\eta(t) = \frac{1}{N} \sum_{i=1}^N \left[ \cos \theta_i(t) \cos
\bar{\theta}(t) + \sin \theta_i(t) \sin
\bar{\theta}(t)\right].
\end{equation}
Now, using
\begin{eqnarray}
\label{eq:9}
\cos \bar{\theta}(t) &=& \frac{\sum_i \cos \theta_i(t)}{\sqrt{
\left(\sum_i \cos \theta_i(t)\right)^2 + \left(\sum_i \sin
\theta_i(t)\right)^2}}, \\
\sin \bar{\theta}(t) &=& \frac{\sum_i \sin \theta_i(t)}{\sqrt{
\left(\sum_i \cos \theta_i(t)\right)^2 + \left(\sum_i \sin
\theta_i(t)\right)^2}},
\end{eqnarray}
and substituting into Eq.~(\ref{eq:8}), we obtain
\begin{equation}
\label{eq:10}
\phi_\eta(t) = \frac{1}{N} \sqrt{
\left(\sum_i \cos \theta_i(t)\right)^2 + \left(\sum_i \sin
\theta_i(t)\right)^2},
\end{equation}
which is exactly the form of the temporal order parameter in its
classical definition, Eq.~(2) in the main paper.
\section{Analytical solution for fully connected networks}
\label{sec:analyt-solut-fully}
In the case of fully connected networks, in which every node is
connected to every other one, we have
$\theta_i(t) = \bar{\theta}(t) + \eta \xi_i(t)$, and therefore
\begin{equation}
\label{eq:11}
\phi_\eta(t) = \frac{1}{N} \sum_{i=1}^N \cos \left[ \eta
\xi_i(t)\right].
\end{equation}
As $\xi_i(t)$ is uncorrelated in both the particle index and time, we can
write the average value, in the thermodynamic limit,
\begin{equation}
\label{eq:12}
\av{\phi_\eta} = \frac{1}{2 \pi} \int_{-\pi}^{\pi} \cos \left[ \eta
\xi\right] \; d \xi = \frac{\sin(\eta \pi)}{\eta \pi}.
\end{equation}
This value of the average order parameter is different from zero for any
$\eta < 1$, indicating $\eta_c = 1$. In the vicinity of this critical
point, a Taylor expansion of Eq.~(\ref{eq:12}) leads to $\av{\phi_\eta}
\sim 1-\eta$, defining the critical exponent $\beta = 1$.
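These predictions are straightforward to check by direct sampling of the noise; a short Monte Carlo sketch (ours, with arbitrary sample sizes) reads as follows.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def phi_fc(eta, N=10**4, T=100):
    # <phi_eta> on the fully connected network: phi = mean_i cos(eta * xi_i)
    xi = rng.uniform(-np.pi, np.pi, size=(T, N))
    return np.cos(eta * xi).mean()

for eta in (0.2, 0.5, 0.8):
    print(eta, phi_fc(eta), np.sin(eta * np.pi) / (eta * np.pi))
\end{verbatim}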
To compute the susceptibility, $\chi_\eta = N [\av{\phi_\eta^2} -
\av{\phi_\eta}^2]$, we start from the variance of $\phi_\eta$
\begin{eqnarray}
\label{eq:var}
\mathrm{Var}(\phi_\eta) &=&
\left< \frac{1}{N^2} \sum_{i=1}^N
\cos(\eta \xi_i) \sum_{j=1}^N \cos(\eta \xi_j) \right> \nonumber
-\left( \frac{\sin(\eta \pi)}{\eta \pi}\right)^2 \\
&=& \left< \frac{1}{N^2} \sum_{i=1}^N
\cos^2(\eta \xi_i) + \frac{1}{N^2} \sum_{i \neq j}^N \cos(\eta
\xi_i) \cos(\eta \xi_j)\right> \nonumber \\
&-& \left( \frac{\sin(\eta \pi)}{\eta \pi}\right)^2 \nonumber\\
&=& \frac{1}{2\pi N} \int_{-\pi}^\pi\cos^2(\eta \xi) d\xi
+ \frac{N-1}{N} \left[ \frac{1}{2\pi}
\int_{-\pi}^\pi\cos(\eta \xi)
d\xi\right]^2 \nonumber \\
&-& \left( \frac{\sin(\eta \pi)}{\eta \pi}\right)^2 \nonumber\\
&=& \frac{1}{4N} \left(2+ \frac{\sin(2\eta \pi)}{\eta \pi} \right) -
\frac{1}{N} \left( \frac{\sin(\eta \pi)}{\eta \pi}\right)^2.
\end{eqnarray}
From here, we obtain the susceptibility
\begin{equation}
\label{eq:2}
\chi_\eta = \frac{1}{2} \left(1+ \frac{\sin(2\eta \pi)}{2 \eta \pi} \right) -
\left( \frac{\sin(\eta \pi)}{\eta \pi}\right)^2.
\end{equation}
An expansion of Eq.~(\ref{eq:2}) around the critical point $\eta_c = 1$
leads to $\chi_\eta \sim \frac{1}{2} - \frac{1}{2}(1-\eta)$, yielding the critical
exponent $\gamma = 0$. Therefore, for the dynamic susceptibility
$\chi_\infty(\eta)$, Eq.~(5) in the main paper, we have in the
thermodynamic limit
\begin{equation}
\label{eq:3}
\chi_\infty(\eta) \sim \frac{1}{2} (1-\eta)^{-1},
\end{equation}
compatible with $\beta + \gamma = 1$.
\section{Majority vote model on fully connected networks}
\label{sec:majority-vote-model}
In order to explore the behavior of the majority vote model on fully
connected networks we have performed numerical simulations of the
dynamics in which each node takes the majority state of the other $N-1$
nodes with probability $1-f$, and the opposite state with probability
$f$. The order parameter $\av{\phi_f}$ is defined as the average
absolute value of the magnetization in the steady state. The dynamic
susceptibility is, in its turn, defined as
\begin{equation}
\label{eq:4}
\chi_N(f) = N \frac{\av{\phi_f^2} - \av{\phi_f}^2}{\av{\phi_f}}.
\end{equation}
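For reference, a minimal asynchronous Monte Carlo sketch of this dynamics is given below (our own implementation choices, such as the single-node update schedule and the random tie break, are for illustration and may differ from the simulations reported here).
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

def majority_vote_fc(N, f, sweeps=5000, burn=1000):
    s = rng.choice([-1, 1], size=N)
    total = s.sum()
    phi = []
    for t in range(sweeps * N):
        i = rng.integers(N)
        maj = np.sign(total - s[i])   # majority of the other N - 1 nodes
        if maj == 0:                  # tie (possible when N is odd)
            maj = rng.choice([-1, 1])
        new = -maj if rng.random() < f else maj
        total += new - s[i]
        s[i] = new
        if t >= burn * N and t % N == 0:
            phi.append(abs(total) / N)
    phi = np.array(phi)
    return phi.mean(), N * phi.var() / phi.mean()
\end{verbatim}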
In Fig.~\ref{fig:angularhistogram} we plot the results of numerical
simulations, averaging over $50000$ Monte Carlo time steps, after
letting the system relax for $10000$ steps.
\begin{figure}[t]
\includegraphics{SupplementaryFigure.pdf}
\caption{Average order parameter (a) and dynamic susceptibility (b) of
the majority vote model on fully connected networks of different
size. Inset of (b): Scaling of the dynamical susceptibility as a
function of $(0.5 - f)^{-1}$.}
\label{fig:angularhistogram}
\end{figure}
In Fig.~\ref{fig:angularhistogram}a) we plot the average order parameter
as a function of $f$. The clear linear behavior indicates a value
$f_c = 1/2$ with critical exponent $\beta = 1$. In
Fig.~\ref{fig:angularhistogram}b) we depict instead the dynamic
susceptibility $\chi_N(f)$ as a function of the probability $f$. As we
can see, it tends to diverge close to $f_c = 1/2$, with rounding effects
for small system sizes. In the inset in
Fig.~\ref{fig:angularhistogram}b) we plot $\chi_N(f)$ as a function of
$1/(\frac{1}{2} - f)$. The linear behavior for small values of this
quantity leads to $\chi_N(f) \sim \left( \frac{1}{2} - f\right)^{-1}$,
which combined with $\chi_N(f) \sim (f_c - f)^{-\gamma -\beta}$ yields
the critical exponent $\gamma = 0$, in agreement with the observations
on the Vicsek model on fully connected graphs.
\begin{figure}[t]
\includegraphics{suscepc_majority.pdf}
\caption{Dynamical susceptibility as a function of $f$ in the majority
vote model in UCM networks of different size. The groups of
functions for different $N$ correspond to (from left to right):
$\gamma_d = 2.75$, $2.40$, $2.25$, and $2.10$.}
\label{fig:majoritychi}
\end{figure}
\section{Majority vote model on scale-free networks}
\label{sec:majority-vote-model-1}
We consider the majority vote model on scale-free networks of degree
distribution $P(k) \sim k^{-\gamma_d}$, generated using the UCM network
model. In this case, each node takes the majority state of its nearest
neighbors with probability $1-f$, and the opposite state with
probability $f$. In Fig.~\ref{fig:majoritychi} we plot the dynamic
susceptibility $\chi_N(f)$, Eq.~(\ref{eq:4}) as a function of the
probability $f$ in networks of different size $N$ and degree exponent
$\gamma_d$. This function is evaluated averaging over $50000$ Monte
Carlo time steps, after letting the system relax to its steady state. As
we can see, the effective critical point for a given network size,
$f_c(N)$, defined as the position of the peak of the dynamic
susceptibility, appears to tend to a constant for large $\gamma_d$,
while it approaches the limit of large noise $f = 0.5$ in the case of
small $\gamma_d < 5/2$ and large size. Indeed, the value of the
critical point $f_c$ in the thermodynamic limit, extrapolated from the
finite-size scaling ansatz
\begin{equation}
\label{eq:5}
f_c(N) \sim f_c + b N^{-1/\nu},
\end{equation}
leads to a result in agreement with the theoretical prediction in
Ref.~\cite{Chen2015}, namely a value approaching $f_c \simeq 0.5$ for
$\gamma_d < 5/2$, and a value $f_c < 0.5$ for $\gamma_d > 5/2$.
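As an illustration of this extrapolation, the ansatz of
Eq.~(\ref{eq:5}) can be fitted with a standard least-squares routine.
The data points below are placeholders, not the measured peak
positions.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Placeholder values for the effective critical points f_c(N).
N  = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
fc = np.array([0.41, 0.43, 0.45, 0.46, 0.47])

def ansatz(N, fc_inf, b, inv_nu):
    return fc_inf + b * N**(-inv_nu)       # Eq. (5)

popt, _ = curve_fit(ansatz, N, fc, p0=[0.5, -1.0, 0.3])
print("extrapolated f_c =", popt[0])
\end{verbatim}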
\begin{figure}[t]
\includegraphics{peaks_majority.pdf}
\caption{Scaling of the peak of susceptibility with network size for
different values of $\gamma_d$.}
\label{fig:majoritydelta}
\end{figure}
In Fig.~\ref{fig:majoritydelta} we plot the height of the peak of the
dynamic susceptibility, which should scale with network size as
\begin{equation}
\label{eq:6}
\chi_N^\mathrm{peak} \sim N^{\delta},
\end{equation}
with the exponent $\delta = (\beta + \gamma)/\nu$. By means of a linear
regression in double logarithmic scale, we estimate the values of
$\delta$ for the different degree exponents considered:
$\gamma_d = 2.10$, $\delta = 0.57(1)$; $\gamma_d = 2.25$,
$\delta = 0.61(1)$; $\gamma_d = 2.40$, $\delta = 0.67(2)$;
$\gamma_d = 2.75$, $\delta = 0.78(2)$. These exponents are in excellent
agreement with the ones obtained for the Vicsek model in scale-free
networks (see Table~I in the main paper), and confirm the equivalence of
behavior of the Vicsek model, with a continuous symmetry, and the
majority vote model, with a discrete symmetry, on complex networks with
a heterogeneous degree distribution.
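For completeness, the linear regression in double logarithmic scale
used to extract $\delta$ from Eq.~(\ref{eq:6}) amounts to the
following Python sketch; the numbers are placeholders, not the
measured peak heights.
\begin{verbatim}
import numpy as np

# Placeholder peak heights chi_N^peak at sizes N for one gamma_d.
N    = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
peak = np.array([3.1, 6.0, 12.1, 24.3, 49.0])

delta, log_amplitude = np.polyfit(np.log(N), np.log(peak), 1)
print("delta =", delta)                    # slope in Eq. (6)
\end{verbatim}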
\chapter{The Structure of the Standard Model}
\label{cha:app_structure}
The fundamental constituents of matter in Nature are
fermions: leptons and quarks, with interactions specified by
the gauge symmetries
SU(3)$_C$$\times$SU(2)$_L$$\times$U(1)$_Y$
in the framework of the Standard Model (SM).
Why are the fermions in an electroweak doublet?
Why are left-handed fermions in a doublet, and
right-handed fermions in a singlet?
What tells us that quarks have color degrees-of-freedom?
Why must quark doublets be paired with lepton doublets?
Here we come to a brief review of how the structure of
the SM emerged. A good introduction can be found in
Ref.~\cite{Field:1994}.
The relationships among the fermions are inferred from
the interactions they experience, namely from the cross sections and
decay widths that are measured and calculated in an
interplay of experimental inputs and theoretical constraints.
The objective is to unify the different interactions.
\vspace{0.2in}
\noindent{\bf Charged Current}
\vspace{0.1in}
Let us recall Fermi's theory~\cite{Fermi:1934sk}
of charged current (CC)
weak interaction for four fermions, e.g. the
crossed $\beta$-decay, $e p \to n\nu_e$,
\begin{center}
\parbox{2.0in}{\epsfxsize=\hsize\epsffile{2.1/Evolution/Fermi.eps}}
\end{center}
The amplitude (matrix element) of this process can be written as
\begin{equation}
\mathcal{M} = G_F J^{(e)\mu} J^{(p)}_{\mu}
\end{equation}
where $G_F$ is Fermi's constant and the charged
currents for the fermion fields are
\begin{equation}
J^{(e)\mu} = \bar{u}_e \gamma^{\mu} u_{\nu_e}, \;\;\;\;\;\;
J^{(p)}_{\mu} = \bar{u}_p \gamma_{\mu} u_n
\end{equation}
The next advance came after the discovery that
CC violates parity maximally~\cite{Lee:1956qn}
and the V-A theory of the weak interaction~\cite{Feynman:1958ty}
was proposed. Only
left-handed fermions, which are projected out by the
chiral operator $\frac{1}{2}(1 - \gamma_5)$,
appear in CC.
\begin{center}
\parbox{2.0in}{\epsfxsize=\hsize\epsffile{2.1/Evolution/V-A.eps}}
\end{center}
\begin{equation}
\mathcal{M} = \frac{4G_F}{\sqrt{2}} J_{\mu}^{\dagger} J^{\mu}
\end{equation}
\begin{equation}
J^{\mu} = \bar{u}_e \gamma^{\mu} \frac{1}{2}(1-\gamma_5) u_{\nu_e}
+\bar{u}_p \gamma^{\mu} \frac{1}{2}(1-\gamma_5) u_n
\end{equation}
After the introduction of quarks~\cite{Gell-Mann:1964nj}
for understanding the classification
of the hadrons, it was natural to re-write the hadronic part
of CC in terms of quark fields. The transition
$u \to d$ occurs via CC, with the other two quarks
in the nucleon being spectators.
\begin{center}
\parbox{2.0in}{\epsfxsize=\hsize\epsffile{2.1/Evolution/Quark_Cabibbo.eps}}
\end{center}
\begin{equation}
J^{\mu} = \bar{e} \gamma^{\mu} \frac{1}{2}(1-\gamma_5) \nu_e
+\bar{u} \gamma^{\mu} \frac{1}{2}(1-\gamma_5) d'
\end{equation}
An inconsistency was found
in the value of the Fermi constant $G_F$ as determined
from $\beta$-decay and from the purely leptonic muon decay.
This led Cabibbo to the hypothesis that the quark states
in CC are not the physical states (eigenstates of mass),
but rather a quantum superposition of the physical
states.
\begin{equation}
\left( \begin{array}{c}
d \\
s
\end{array}
\right)_{\mbox{weak}}
=
\left( \begin{array}{cc}
\cos\theta_C & \sin\theta_C \\
-\sin\theta_C & \cos\theta_C
\end{array}
\right)
\left( \begin{array}{c}
d \\
s
\end{array}
\right)_{\mbox{mass}}
\end{equation}
where $\theta_C$ is the Cabibbo angle, thus the Fermi constant
is replaced by $G_F\cos\theta_C$. This idea was generalized
to the case of three quark generations in terms of
the CKM (Cabibbo-Kobayashi-Maskawa) matrix~\cite{Cabibbo:1963yz},
\begin{equation}
\left( \begin{array}{c}
d \\
s \\
b
\end{array}
\right)_{\mbox{weak}}
=
\left( \begin{array}{ccc}
V_{ud} & V_{us} & V_{ub} \\
V_{cd} & V_{cs} & V_{cb} \\
V_{td} & V_{ts} & V_{tb}
\end{array}
\right)
\left( \begin{array}{c}
d \\
s \\
b
\end{array}
\right)_{\mbox{mass}}
\end{equation}
Glashow proposed the intermediate vector
boson model (IVB) in 1961~\cite{Glashow:1961tr}
and the form has
been incorporated in the SM. The
basic idea is to replace the four fermion
interaction by the exchange of a massive
charged boson $W^{\pm}$, e.g.
$\nu_{\mu} e^- \to \nu_e \mu^-$,
(a) four fermion interaction,
(b) the IVB model:
\begin{center}
\parbox{4.0in}{\epsfxsize=\hsize\epsffile{2.1/Evolution/IVB_CC.eps}}
\end{center}
The matrix element can be written as
\begin{equation}
\mathcal{M}_{Fermi}^{CC} = \frac{4G_F}{\sqrt{2}}
J_{\mu}^{CC}
\left(J^{CC}\right)^{\mu}
\label{eq:CC_Fermi}
\end{equation}
\begin{equation}
\mathcal{M}_{IVB}^{CC} \approx \frac{g}{\sqrt{2}}
J_{\mu}^{CC}
\left(\frac{1}{m_W^2}\right)
\frac{g}{\sqrt{2}}
\left(J^{CC}\right)^{\mu}
\label{eq:CC_IVB}
\end{equation}
Comparing Eq.~(\ref{eq:CC_Fermi}) with Eq.~(\ref{eq:CC_IVB}),
substituting $g = e/\sin\theta_W$
and $\alpha = e^2/(4\pi)$,
and using experimental values:
$\alpha = 1/137$,
$G_F = 1.166\times10^{-5}$ GeV$^{-2}$,
$\sin^2\theta_W = 0.22$
($\sin^2\theta_W$ was first determined
from the NC/CC cross section ratio in
neutrino scattering where NC is the neutral
current interaction explained below),
this leads to the prediction for the W mass:
\begin{equation}
m_W = \left(\frac{\sqrt{2}g^2}{8G_F}\right)^{1/2}
= \frac{37.3}{\sin\theta_W}
= 79.5 \; \mbox{GeV}/c^2
\label{eq:mW_g2_GF}
\end{equation}
This may be compared with the experimental
value~\cite{Eidelman:2004wy}:
\begin{equation}
m_W = 80.425 \pm 0.038 \; \mbox{GeV}/c^2
\end{equation}
The intermediate $W^{\pm}$ bosons, along with the $Z^0$ bosons
explained below, were discovered
at CERN in 1983~\cite{Arnison:1983rp}.
\vspace{0.2in}
\noindent{\bf A Doublet in Weak Isospin Space}
\vspace{0.1in}
We write the left-handed leptons in a weak isospin SU(2)$_L$ doublet
and the right-handed leptons in a singlet, for example,
\begin{equation}
L =
\left(
\begin{array}{c}
\nu_e \\
e_L^-
\end{array}
\right),
\;\;\;
e_R
\end{equation}
The generators of the SU(2)$_L$ transformations are
$T_L^i = \frac{1}{2}\tau^i$,
where $\tau^i$ are Pauli matrices. The charge
raising operator $\tau^+$, the charge lowering operator
$\tau^-$, and the diagonal $\tau^3$ are
\begin{eqnarray}
\tau^+ & = & \frac{1}{2}(\tau_1 + i\tau_2)
= \left( \begin{array}{cc}
0 & 1 \\
0 & 0
\end{array}
\right) \\
\tau^- & = & \frac{1}{2}(\tau_1 - i\tau_2)
= \left( \begin{array}{cc}
0 & 0 \\
1 & 0
\end{array}
\right) \\
\tau^3 & = & \left( \begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right)
\end{eqnarray}
The currents can be written as
\begin{eqnarray}
J_{\mu}^+ & = & \bar{\nu}_e
\gamma_{\mu}
\frac{1}{2}(1 - \gamma_5)
e
\equiv \bar{\nu}_e
\gamma_{\mu}
e_L
= \bar{L}
\gamma_{\mu}
\tau^+
L \\
J_{\mu}^- & = & \bar{e}
\gamma_{\mu}
\frac{1}{2}(1 - \gamma_5)
\nu_e
\equiv \bar{e}_L
\gamma_{\mu}
\nu_e
= \bar{L}
\gamma_{\mu}
\tau^-
L \\
J_{\mu}^3 & = & \frac{1}{2}
\left[ \bar{\nu}_L
\gamma_{\mu}
\nu_L
-\bar{e}_L
\gamma_{\mu}
e_L
\right]
= \bar{L}
\gamma_{\mu}
\frac{1}{2}\tau^3
L
\end{eqnarray}
These can be combined into an isospin triplet of
currents
\begin{equation}
\mathbf{J}_{\mu} = \bar{L}
\gamma_{\mu}
\mathbf{T}
L
\end{equation}
The weak isospin invariance implies that the SU(2)$_L$
invariant Lagrangian to describe the interaction
between the $W$ bosons and the current $\mathbf{J}$
with a coupling $g$ is of the form
\begin{equation}
\mathcal{L} = g \mathbf{J}^{\mu} \cdot \mathbf{W}_{\mu}
\end{equation}
Hence a neutral IVB W$^3$ should exist, coupling
to $J^3$.
The electromagnetic current $J_{\mu}^{em}$
is parity conserving,
\begin{equation}
J_{\mu}^{em} = e( \bar{e}_R
\gamma_{\mu}
e_R
+\bar{e}_L
\gamma_{\mu}
e_L
)
\end{equation}
whereas $J_{\mu}^3$ has a V-A structure. Hence
$J_{\mu}^3$ cannot be directly identified with
the electromagnetic current, nor $W^3$ with the
photon.
\vspace{0.2in}
\noindent{\bf Neutral Current}
\vspace{0.1in}
Next came the inputs from the neutral current
(NC) interactions. NC were discovered by the
Gargamelle Collaboration at CERN in 1973~\cite{Hasert:1973ff},
$\nu_{\mu} q \to \nu_{\mu} q$.
\begin{center}
\parbox{4.0in}{\epsfxsize=\hsize\epsffile{2.1/Evolution/CC_NC.eps}}
\end{center}
The matrix element can be written as
\begin{equation}
\mathcal{M} = \frac{8G_F\rho}{\sqrt{2}}
\left(J^{NC}\right)^{\mu}
J^{NC}_{\mu}
\end{equation}
with NC in the form
\begin{equation}
\left(J^{NC}\right)^{\mu}
= \sum_l\left[ \bar{\nu}_l
\gamma^{\mu}
\frac{1}{2}(1 - \gamma_5)
\nu_l
\right]
+\sum_f\left[ \bar{f}
\gamma^{\mu}
\frac{1}{2}(C_V^f - C_A^f\gamma_5)
f
\right]
\end{equation}
\[
l = e, \mu, \tau;
\;\;\;
f = l, q;
\;\;\;
q = u, d, s, c, b, t
\]
The neutrino part has a V-A structure.
The lepton/quark part has
parity violation ($C_A^f \neq 0$),
but not maximally ($C_A^f \neq C_V^f$).
Universality of NC and
CC requires $\rho = 1$,
later predicted in the SM.
We can write the NC interactions in terms of IVB, e.g.
$\nu_{\mu} e^- \to \nu_{\mu} e^-$,
(a) four fermion interaction,
(b) the IVB model:
\begin{center}
\parbox{4.0in}{\epsfxsize=\hsize\epsffile{2.1/Evolution/IVB_NC.eps}}
\end{center}
The matrix element can be written as
\begin{equation}
\mathcal{M}_{Fermi}^{NC} = \frac{8 \rho G_F}{\sqrt{2}}
J_{\mu}^{NC}
\left(J^{NC}\right)^{\mu}
\label{eq:NC_Fermi}
\end{equation}
\begin{equation}
\mathcal{M}_{IVB}^{NC} \approx \frac{g}{\cos\theta_W}
J_{\mu}^{NC}
\left(\frac{1}{m_Z^2}\right)
\frac{g}{\cos\theta_W}
\left(J^{NC}\right)^{\mu}
\label{eq:NC_IVB}
\end{equation}
Comparing Eq.~(\ref{eq:NC_Fermi}) with Eq.~(\ref{eq:NC_IVB}),
and assuming universality of the charged and
neutral currents ($\rho = 1$), this gives the
prediction for Z mass:
\begin{equation}
m_Z = \left(\frac{\sqrt{2}g^2}{8G_F}\right)^{1/2}
\frac{1}{\sqrt{\rho}\cos\theta_W}
= \frac{m_W}{\sqrt{\rho}\cos\theta_W}
= \frac{79.5}{\cos\theta_W}
= 90 \; \mbox{GeV}/c^2
\end{equation}
This may be compared with the experimental
value~\cite{Eidelman:2004wy}:
\begin{equation}
m_Z = 91.1876 \pm 0.0021 \; \mbox{GeV}/c^2
\end{equation}
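Both tree-level estimates above follow from elementary arithmetic;
the short Python sketch below reproduces them from the rounded inputs
quoted in the text.
\begin{verbatim}
import math

alpha = 1/137.0                      # fine structure constant
G_F   = 1.166e-5                     # Fermi constant, GeV^-2
sin2w = 0.22                         # sin^2(theta_W)

# m_W^2 = sqrt(2) g^2 / (8 G_F) with g^2 = 4 pi alpha / sin^2(theta_W)
m_W = math.sqrt(math.pi*alpha/(math.sqrt(2)*G_F)) / math.sqrt(sin2w)
m_Z = m_W / math.sqrt(1 - sin2w)     # assuming rho = 1
print(m_W, m_Z)                      # ~79.5 and ~90 GeV/c^2
\end{verbatim}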
\vspace{0.2in}
\noindent{\bf Flavor Changing Neutral Current}
\vspace{0.1in}
The flavor changing neutral current (FCNC)
interaction is strongly suppressed~\cite{Eidelman:2004wy},
(a) CC, (b) FCNC:
\begin{center}
\parbox{4.0in}{\epsfxsize=\hsize\epsffile{2.1/Evolution/SCNC_GIM.eps}}
\end{center}
\begin{equation}
J^{NC} = \bar{u}u + \bar{d}'d'
= \bar{u}u + \bar{d}d\cos^2\theta_C + \bar{s}s\sin^2\theta_C
+ \underbrace{
(\bar{s}d + \bar{d}s)
}_{\mbox{FCNC}}
\sin\theta_C\cos\theta_C
\end{equation}
\begin{eqnarray}
\mbox{BR}(K^+ \to \pi^+ \pi^0) & = & (21.13 \pm 0.14)\% \\
\mbox{BR}(K^+ \to \pi^+ e^+ e^-) & = & (2.88 \pm 0.13)\times10^{-7}
\end{eqnarray}
The GIM (Glashow-Iliopoulos-Maiani) mechanism~\cite{Glashow:1970gm}
proposed that
quarks must be paired in doublets. This naturally removed
the tree-level FCNC. In addition, the $c$ quark was predicted and later
discovered~\cite{Aubert:1974js}.
\begin{equation}
\left( \begin{array}{c}
u \\
d'
\end{array}
\right)
\;\;\;
\left( \begin{array}{c}
c \\
s'
\end{array}
\right)
\end{equation}
\begin{equation}
J^{NC} = \bar{u}u + \bar{c}c + \bar{d}'d' + \bar{s}'s'
= \bar{u}u + \bar{c}c + \bar{d}d + \bar{s}s
\end{equation}
\vspace{0.2in}
\noindent{\bf A Triplet in Quark Color Space}
\vspace{0.1in}
The quarks in the spin-$\frac{3}{2}$ baryons
are in a symmetrical state of space, spin
and flavor degrees of freedom, e.g.
\begin{equation}
\Delta^{++} = uuu,
\;\;\;\;\;\;
\Omega^- = sss
\end{equation}
However the requirements of Fermi-Dirac
statistics imply the total antisymmetry of
the wave function. The solution was the
introduction of the color degree of freedom,
with indices as red ($r$), green ($g$), and
blue ($b$).
\begin{equation}
q = \left( \begin{array}{c}
q_r \\
q_g \\
q_b
\end{array}
\right)
\end{equation}
One of the tests of the number of charged
fundamental constituents is provided by
\begin{equation}
R = \frac{\sigma(e^+e^- \to \mbox{hadrons})}
{\sigma(e^+e^- \to \mu^+\mu^-)}
\end{equation}
The virtual photon emitted by the $e^+e^-$
annihilation will excite all kinematically
accessible $q\bar{q}$ pairs from the vacuum.
\begin{equation}
R = \sum_q e_q^2
\end{equation}
At low energy where only the $u$, $d$ and $s$
quarks are available, in the absence of the color
degree of freedom, we expect
\begin{equation}
R = e_u^2 + e_d^2 + e_s^2
= \left(\frac{2}{3}\right)^2
+\left(-\frac{1}{3}\right)^2
+\left(-\frac{1}{3}\right)^2
= \frac{2}{3}
\end{equation}
If quarks have three colors,
\begin{equation}
R = 3(e_u^2 + e_d^2 + e_s^2)
= 2
\end{equation}
For energies above 10 GeV, $c$ and $b$ quarks
are available,
\begin{equation}
R = 3(e_u^2 + e_d^2 + e_s^2 + e_c^2 + e_b^2)
= \frac{11}{3}
\end{equation}
The color triplet model is excellently supported
by data, see the ``$\sigma$ and $R$ in $e^+e^-$ Collisions'' plots
in the Section ``Plots of cross sections and related quantities (Rev.)''
in PDG~\cite{Eidelman:2004wy}.
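The arithmetic behind these predictions is easily scripted; the
following Python sketch (illustrative) evaluates $R$ with and without
the color factor.
\begin{verbatim}
charges = {'u': 2/3, 'd': -1/3, 's': -1/3, 'c': 2/3, 'b': -1/3}

def R(active_quarks, n_colors=3):
    # R = N_c * sum of squared charges of the accessible quarks
    return n_colors * sum(charges[q]**2 for q in active_quarks)

print(R(['u', 'd', 's'], n_colors=1))    # 2/3, no color
print(R(['u', 'd', 's']))                # 2
print(R(['u', 'd', 's', 'c', 'b']))      # 11/3
\end{verbatim}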
\vspace{0.2in}
\noindent{\bf Pair Quarks with Leptons}
\vspace{0.1in}
Some classical symmetries, known as anomalous
symmetries~\cite{Adler:1969gk}
are broken by quantum effects.
The requirement for an anomaly-free theory~\cite{Adler:1969er}
is that:
\begin{equation}
\sum Q_f = 0
\end{equation}
where the sum is over all quarks and leptons. For
example consider the two doublets,
\begin{eqnarray}
\left(\begin{array}{c}
\nu_e \\
e
\end{array}
\right)
\nonumber \\
\left(\begin{array}{c}
u \\
d
\end{array}
\right)
\nonumber
\end{eqnarray}
\begin{equation}
\sum Q_f = (0 - 1)
+3\times(\frac{2}{3} - \frac{1}{3})
= 0
\end{equation}
Cancellation of anomalies requires that quark
doublets must be paired with lepton doublets.
The SM identifies a generation in a natural
way by identifying the doublet containing the
heaviest charged lepton with the doublet
containing the heaviest quarks (and so on),
but one could in principle associate any quark
doublet with any lepton doublet and call that
a generation, because there are no interactions
between quarks and leptons in the SM. What
needs to be guaranteed is that the number of
quark and lepton generations must be equal.
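The anomaly-free condition for one generation is a one-line check;
the following Python fragment (illustrative, using exact fractions)
verifies the cancellation.
\begin{verbatim}
from fractions import Fraction as F

# Electric charges of one generation: the (nu, e) lepton doublet and
# the (u, d) quark doublet, the latter weighted by the 3 colors.
lepton_doublet = [F(0), F(-1)]
quark_doublet  = [F(2, 3), F(-1, 3)]
n_colors = 3

print(sum(lepton_doublet) + n_colors*sum(quark_doublet))   # 0
\end{verbatim}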
\chapter{Gauge Symmetry \& Spontaneous Symmetry Breaking}
\label{cha:app_gs_ssb}
The interactions between the fermions and the vector bosons
in the Standard Model (SM) are uniquely specified by requiring
the theory, i.e. the SM Lagrangian, to be invariant under
local gauge transformations, i.e. transformations
varying from point to point. Some of the standard texts
are listed in Ref.~\cite{Quigg:1983}.
A symmetry indicates a deeper relationship
among the elementary particles with a further
unification of the interactions and makes the form
of a Lagrangian more compact. Symmetry dictates
design and plays the central role in the direction
to \emph{find the simplest model}.
\vspace{0.2in}
\noindent{\bf Gauge Symmetry}
\vspace{0.1in}
Let us take electromagnetism as an example and consider
the Lagrangian for a free fermion field $\Psi(x)$.
\begin{equation}
\mathcal{L}_0 = \bar{\Psi}(x)
(i\gamma^{\mu}\partial_{\mu} - m)
\Psi(x)
\label{eq:FreeFermion_a}
\end{equation}
This is invariant under a global
U(1) phase transformation which is space-time
independent and is illustrated in
the left plot in Fig.~\ref{fig:Global_Local},
\begin{equation}
\Psi(x) \to \Psi'(x) = e^{-iQ\theta}\Psi(x)
\end{equation}
where $Q$ is the charge or the U(1) quantum number
of the fermion. For example, the charge assignment
for $u$ quark, $d$ quark, $\nu_e$, and $e$ are
+2/3, -1/3, 0, and -1, respectively.
We are going to construct an invariant Lagrangian
under a local, i.e., gauge, U(1) phase transformation
which is space-time dependent and is illustrated in
the right plot in Fig.~\ref{fig:Global_Local}.
\begin{equation}
\Psi(x) \to \Psi'(x) = e^{-iQ\theta(x)}\Psi(x)
\end{equation}
The partial derivative $\partial_{\mu}$
in Eq.~(\ref{eq:FreeFermion_a}) spoils the invariance.
We need to form a gauge-covariant derivative
$D_{\mu}$ which will have the simple
transformation property,
\begin{equation}
D_{\mu}\Psi(x) \to e^{-iQ\theta(x)}D_{\mu}\Psi(x)
\label{eq:CovariantDerivative_a1}
\end{equation}
so that the combination $\bar{\Psi}D_{\mu}\Psi$ is
gauge invariant. To achieve this, we enlarge the
Lagrangian with a new vector gauge field $A_{\mu}(x)$
and form the covariant form as
\begin{equation}
D_{\mu}\Psi = (\partial_{\mu} + ieQA_{\mu})\Psi
\label{eq:CovariantDerivative_a2}
\end{equation}
where $e$ is a free parameter which eventually
will be identified as the coupling of the gauge
field to the fermion field. The transformation
property in Eq.~(\ref{eq:CovariantDerivative_a1}) will
be satisfied if the gauge field $A_{\mu}(x)$ has
the transformation property
\begin{equation}
A_{\mu}(x) \to A'_{\mu}(x)
= A_{\mu}(x)
+\frac{1}{e}\partial_{\mu}\theta(x)
\label{eq:GaugeFieldTransformation_a}
\end{equation}
Note that the coupling of the gauge field (photon)
to any fermion field is determined by its
transformation property under the symmetry group.
This is usually referred to as \emph{universality}.
Also note that photon is massless because an
$A_{\mu}A^{\mu}$ term is not gauge invariant under
this transformation.
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{2.1/GaugeSymmetry/Global_Local.eps}}
\caption[Global and local transformations]
{Global and local transformations.}
\label{fig:Global_Local}
\end{center}
\end{figure}
To make the photon field a
truly dynamical variable we need to add a kinetic
term to the Lagrangian involving its derivatives.
The simplest gauge-invariant term with a
conventional normalization is
\begin{equation}
\mathcal{L}_A = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
\label{eq:GaugeFieldKineticTerm_a}
\end{equation}
where
\begin{equation}
F_{\mu\nu} = \partial_{\mu}A_{\nu}
-\partial_{\nu}A_{\mu}
\label{eq:GaugeFieldDerivative_a}
\end{equation}
Terms with higher powers are omitted in order that
the theory be renormalizable.
We notice that the photon does not have self-coupling
because it does not carry a charge.
Now we arrive at the
gauge-invariant QED Lagrangian
\begin{equation}
\mathcal{L}_{QED} = \bar{\Psi}
i\gamma^{\mu}(\partial_{\mu}+ieQA_{\mu})
\Psi
-m\bar{\Psi}\Psi
-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
\label{eq:L_QED}
\end{equation}
Most remarkably, if one demands that the symmetry
be local, one is forced to include the
electromagnetic field, and hence, light. Recall
that classical electromagnetism requires four Maxwell
equations, while here we just require
``gauge symmetry'' and electromagnetism
is determined. This
illustrates how
physics becomes simpler.
\vspace{0.2in}
\noindent{\bf Non-Abelian Gauge Symmetry}
\vspace{0.1in}
Yang and Mills extended the gauge principle to
non-Abelian symmetry~\cite{Yang:1954ek}.
Consider the simplest case
isospin SU(2). Let the fermion field be an isospin
doublet,
\begin{equation}
\Psi = \left(\begin{array}{c}
\Psi_1 \\
\Psi_2
\end{array}
\right)
\end{equation}
The free Lagrangian
\begin{equation}
\mathcal{L}_0 = \bar{\Psi}(x)
(i\gamma^{\mu}\partial_{\mu} - m)
\Psi(x)
\label{eq:FreeFermion_b}
\end{equation}
is invariant under the global SU(2) transformation
\begin{equation}
\Psi(x) \to \Psi'(x)
= e^{-i\mathbf{T} \cdot \mathbf{\theta}}
\Psi(x)
\end{equation}
where $\mathbf{\theta} = (\theta_1, \theta_2, \theta_3)$
are the SU(2) transformation parameters and
$\mathbf{T} = \frac{\mathbf{\tau}}{2}$
are the SU(2) generators
with $\mathbf{\tau} = (\tau_1, \tau_2, \tau_3)$
the Pauli matrices satisfying
\begin{equation}
[T_i, T_j] = i\epsilon_{ijk}T_k
\;\;\;\;\;\;
i,j,k = 1,2,3
\end{equation}
with $\epsilon_{ijk}$ the structure constants for SU(2).
It is easy to check that two successive SU(2)
transformations do not commute because the generators
do not commute and this is why SU(2) is called
a non-Abelian symmetry, in contrast to an Abelian
symmetry such as U(1) where two successive U(1)
transformations commute.
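These commutation relations can be verified explicitly from the Pauli
matrices; the following numerical check (Python, illustrative)
confirms $[T_i, T_j] = i\epsilon_{ijk}T_k$.
\begin{verbatim}
import numpy as np

tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
T = [t/2 for t in tau]                       # SU(2) generators

def eps(i, j, k):                            # Levi-Civita symbol
    return (i - j)*(j - k)*(k - i)/2

for i in range(3):
    for j in range(3):
        comm = T[i] @ T[j] - T[j] @ T[i]
        rhs = sum(1j*eps(i, j, k)*T[k] for k in range(3))
        assert np.allclose(comm, rhs)        # [T_i, T_j] = i eps T_k
\end{verbatim}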
Under the local symmetry
transformation
\begin{equation}
\Psi(x) \to \Psi'(x)
= e^{-i\mathbf{T} \cdot \mathbf{\theta}(x)}
\Psi(x)
\end{equation}
the partial derivative $\partial_{\mu}$ in Eq.~(\ref{eq:FreeFermion_b})
spoils the invariance. To construct a
gauge-invariant Lagrangian we follow a procedure
similar to that of the Abelian case:
\begin{itemize}
\item We form a gauge-covariant derivative
\begin{equation}
D_{\mu}\Psi(x) \to e^{-i\mathbf{T} \cdot \mathbf{\theta}(x)}
D_{\mu}\Psi(x)
\label{eq:CovariantDerivative_b1}
\end{equation}
by introducing vector gauge fields
$A_{\mu}^i$, $i = 1, 2, 3$ (one
for each group generator) and
a coupling $g$
\begin{equation}
D_{\mu}\Psi = (\partial_{\mu} + ig\mathbf{T} \cdot \mathbf{A}_{\mu})
\Psi
\label{eq:CovariantDerivative_b2}
\end{equation}
and defining the transformation property
for the vector gauge fields as,
\begin{equation}
A_{\mu}^i \to A_{\mu}^{i'}
= A_{\mu}^i
-\epsilon^{ijk}\theta^jA_{\mu}^k
+\frac{1}{g}\partial_{\mu}\theta^i
\label{eq:GaugeFieldTransformation_b}
\end{equation}
The gauge fields are massless
because an $A_{\mu}^iA^{i\mu}$ term is not
gauge invariant, similar to an Abelian field.
But, the second term is clearly the transformation
for a triplet representation under SU(2),
thus the $A_{\mu}^i$ fields carry charges.
\item Then we add a gauge invariant kinetic term
for the gauge fields
\begin{equation}
\mathcal{L}_A = -\frac{1}{4}F_{\mu\nu}^iF^{i\mu\nu}
\label{eq:GaugeFieldKineticTerm_b}
\end{equation}
where
\begin{equation}
F_{\mu\nu}^i = \partial_{\mu}A_{\nu}^i
-\partial_{\nu}A_{\mu}^i
-g\epsilon^{ijk}A_{\mu}^jA_{\nu}^k
\label{eq:GaugeFieldDerivative_b}
\end{equation}
The third term shows that the gauge fields
have self-coupling because they carry
charge, in contrast to an Abelian
field.
\end{itemize}
We arrive at the complete gauge-invariant Lagrangian
which describes the interaction between the gauge
fields $A_{\mu}^i$ and the SU(2) doublet fields,
\begin{equation}
\mathcal{L} = \bar{\Psi}
i\gamma^{\mu}
(\partial_{\mu} + ig\mathbf{T} \cdot \mathbf{A}_{\mu})
\Psi
-m\bar{\Psi}\Psi
-\frac{1}{4}F_{\mu\nu}^iF^{i\mu\nu}
\end{equation}
Generalization of the Yang-Mills theory to a higher
group SU(N) with $N\geq3$
is straightforward.
\vspace{0.2in}
\noindent{\bf SU(3)$_C$$\times$SU(2)$_L$$\times$U(1)$_Y$}
\vspace{0.1in}
The structure of the gauge symmetries in the SM is
SU(3)$_C$$\times$SU(2)$_L$$\times$U(1)$_Y$.
For a particular fermion $\Psi$, its quantum field
is a product of factors,
\begin{equation}
\Psi = \left(\begin{array}{c}
\mbox{space-time} \\
\mbox{factor}
\end{array}
\right)
\times
\left(\begin{array}{c}
\mbox{spin} \\
\mbox{factor}
\end{array}
\right)
\times
\left(\begin{array}{c}
\mbox{U(1)}_Y \\
\mbox{factor}
\end{array}
\right)
\times
\left(\begin{array}{c}
\mbox{SU(2)}_L \\
\mbox{factor}
\end{array}
\right)
\times
\left(\begin{array}{c}
\mbox{SU(3)}_C \\
\mbox{factor}
\end{array}
\right)
\end{equation}
Each factor has some labels, coordinates, or indices.
The orthonormality of the quantum field holds separately
for each factor.
Since the gauge bosons of one
of the symmetry groups do not transform under the other
gauge symmetries in the product of groups, the gauge
invariant Lagrangian may be simply written as a sum of
the terms of individual groups.
The gauge
symmetric Lagrangian in the framework of Yang-Mills
theory is
\begin{eqnarray}
\mathcal{L}_{symmetric}
& = & \bar{\Psi}
i\gamma^{\mu}
\left(\partial_{\mu}
+ i g_1 \frac{Y}{2}B_{\mu}
+ i g_2 T^j W_{\mu}^j
+ i g_3 \lambda^a G_{\mu}^a
\right)\Psi
\label{eq:L_symmetric}
\\
& & - \frac{1}{4}B_{\mu\nu}B^{\mu\nu}
- \frac{1}{4}W_{\mu\nu}^iW^{i\mu\nu}
- \frac{1}{4}G_{\mu\nu}^aG^{a\mu\nu}
\nonumber
\end{eqnarray}
where
the eight $G_{\mu}^a$ and $\lambda^a$,
the three $W_{\mu}^i$ and $T^i$,
the one $B_{\mu}$ and $Y$
are the gauge bosons and generators corresponding to
the SU(3)$_C$ color,
the SU(2)$_L$ weak isospin, and
the U(1)$_Y$ hypercharge gauge symmetries,
respectively;
$g_i$ are the gauge couplings;
and
\begin{eqnarray}
B_{\mu\nu} & = & \partial_{\mu}B_{\nu}
- \partial_{\nu}B_{\mu}
\\
W_{\mu\nu}^i & = & \partial_{\mu}W_{\nu}^i
- \partial_{\nu}W_{\mu}^i
+ g_2\epsilon^{ijk}W_{\mu}^jW_{\nu}^k
\\
G_{\mu\nu}^a & = & \partial_{\mu}G_{\nu}^a
- \partial_{\nu}G_{\mu}^a
+ g_3f^{abc}G_{\mu}^bG_{\nu}^c
\end{eqnarray}
with $\epsilon^{ijk}$ and $f^{abc}$ the structure
constants for SU(2) and SU(3).
At this stage, all of the gauge bosons and
fermions are massless. The explicit mass terms
break gauge invariance.
For gauge bosons, the
expected mass terms
\begin{equation}
m_W^2W_{\mu}W^{\mu}
\end{equation}
plus similar terms for the others, are clearly
not invariant under gauge transformations
$W_{\mu}^i \to W_{\mu}^{i'}
= W_{\mu}^i
-\epsilon^{ijk}\theta^jW_{\mu}^k
+\frac{1}{g_2}\partial_{\mu}\theta^i$.
This is true for any gauge theory.
For fermions, using the left- and right-handed
projection operator $P_L$ and $P_R$, the mass
term can be written as
\begin{eqnarray}
m\bar{\Psi}\Psi
& = & m\bar{\Psi}(P_L + P_R)\Psi \nonumber \\
& = & m\bar{\Psi}P_LP_L\Psi +
m\bar{\Psi}P_RP_R\Psi \nonumber \\
& = & m(\bar{\Psi}_R\Psi_L +
\bar{\Psi}_L\Psi_R)
\end{eqnarray}
In the SM, left-handed fermions are in SU(2)
doublets and the right-handed fermions are in SU(2)
singlets, thus they transform differently. The
$\bar{\Psi}_R\Psi_L$ and $\bar{\Psi}_L\Psi_R$
terms are not SU(2) singlets and would not give
an SU(2) invariant Lagrangian.
However the description that all of the gauge
bosons and fermions are massless is not true in
Nature. We need to
\begin{itemize}
\item[(a)] generate the masses of the leptons
and quarks;
\item[(b)] generate the masses of the $W^+$,
$W^-$, and $Z^0$ weak vector bosons;
\item[(c)] but also keep the photon and gluon
massless.
\end{itemize}
In other words, the SU(3)$_C$
will be kept precise, and the gluon will remain massless.
We need to break SU(2)$_L$$\times$U(1)$_Y$ down to
U(1)$_{EM}$, resulting in mixing between the
$B_{\mu}$ and $W_{\mu}^3$ fields, and non-zero masses
for three of the gauge bosons ($W^{\pm}$ and $Z^0$).
The photon ($A$) remains massless, due to a residual
U(1)$_{EM}$ gauge symmetry that remains
unbroken.
\vspace{0.2in}
\noindent{\bf Spontaneous Symmetry Breaking}
\vspace{0.1in}
The solution in the SM is to add a spontaneous
symmetry breaking (SSB) term into the symmetric
Lagrangian ``by hand''. The Lagrangian will
remain symmetric but the physical vacuum does not
respect the symmetry. In this case, the symmetry
of the Lagrangian is said to be spontaneously
broken.
\begin{equation}
\mathcal{L} = \mathcal{L}_{symmetric}
+\mathcal{L}_{SSB}
\label{eq:L}
\end{equation}
The assumption to construct $\mathcal{L}_{SSB}$
is that the universe is filled with a scalar
field, called the Higgs field. One real scalar field
could solve (a). One complex field could solve (a)
and create one massive vector boson. To achieve (a),
(b) and (c), the minimum requirement is that the Higgs
field consist of two complex fields arranged in a doublet
in the SU(2) space, carrying U(1) hypercharge +1
(the electric charge $Q = T_L^3 + \frac{Y}{2}$ is +1
and 0 for the upper and lower component, respectively),
and being a singlet in color space.
\begin{equation}
\phi = \left(\begin{array}{c}
\phi^+ \\
\phi^0
\end{array}
\right)
= \frac{1}{\sqrt{2}}
\left(\begin{array}{c}
\phi_1 + i\phi_2 \\
\phi_3 + i\phi_4
\end{array}
\right)
\end{equation}
Under a SU(2)$_L$$\times$U(1)$_Y$ gauge
transformation, the doublet transforms as
\begin{equation}
\phi \to e^{-i\frac{1}{2}\alpha(x)}
e^{-iT^i\beta^i(x)}
\phi
\label{eq:HiggsTransformation}
\end{equation}
The scalar field can be given gauge invariant
terms in $\mathcal{L}_{SSB}$:
the kinetic term required by gauge invariance,
the Higgs potential including a mass-like term
and a self-interaction term, and
the Yukawa coupling between the doublet and
a particular fermion $\Psi$.
\begin{equation}
\mathcal{L}_{SSB}
= (D_{\mu}\phi)^{\dagger}(D^{\mu}\phi)
- V(\phi)
- \mathcal{L}_{Yukawa}
\label{eq:L_SSB}
\end{equation}
with
\begin{eqnarray}
D_{\mu}
& = & \partial_{\mu}
+ig_1\frac{1}{2}B_{\mu}
+ig_2T^jW_{\mu}^j
\\
V(\phi)
& = & \mu^2 \phi^{\dagger}\phi
+\lambda \left(\phi^{\dagger}\phi\right)^2
\\
\mathcal{L}_{Yukawa}
& = & g_f\bar{\Psi}\phi\Psi
\label{eq:YukawaCoupling}
\end{eqnarray}
Spontaneous symmetry breaking of the Higgs potential~\cite{Higgs:1964ia}
is possible by assuming $\mu^2<0$ (and a positive
$\lambda$ so that the vacuum is stable).
This is shown in Fig.~\ref{fig:SSB}.
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{2.1/Standard_Model_3.eps}}
\caption[Spontaneous symmetry breaking of Higgs potential]
{Spontaneous symmetry breaking of Higgs potential.}
\label{fig:SSB}
\end{center}
\end{figure}
The minimum of the Higgs potential shifts
(in field space) from $\phi = 0$ to
\begin{equation}
\phi^{\dagger}\phi
= \frac{1}{2}
\left(
\phi_1^2 + \phi_2^2 + \phi_3^2 + \phi_4^2
\right)
= \frac{-\mu^2}{\lambda}
= v^2
\end{equation}
The field thus acquires a non-zero vacuum expectation
value (VEV). Choosing $\langle\phi_3\rangle = v$, we
expand about $v$,
\begin{equation}
\phi = \frac{1}{\sqrt{2}}
\left(\begin{array}{c}
\phi_1 + i\phi_2 \\
v + H + i\phi_4
\end{array}
\right)
\end{equation}
with $\phi_3 = v + H$. Any SU(2) doublet can be
written as
\begin{equation}
\phi = \left(e^{-iT^i\theta^i(x)}\right)^{\dagger}
\left(\begin{array}{c}
0 \\
\sigma(x)
\end{array}
\right)
\end{equation}
By applying the gauge
symmetry of
$\mathcal{L}_{SSB}$ under the transformation
of the Higgs doublet in Eq.~(\ref{eq:HiggsTransformation}),
the algebra can be simplified by ``gauging away''
three of the four real degrees of freedom of
the Higgs doublet with $\phi_1 = \phi_2 = \phi_4 = 0$,
\begin{equation}
\phi
= \frac{1}{\sqrt{2}}
\left(\begin{array}{c}
0 \\
v + H(x)
\end{array}
\right)
\label{eq:UnitaryGauge}
\end{equation}
This is called the unitary gauge. On the other
hand, the physical quantities are independent of
the choice of gauge. This indicates these degrees
of freedom are unphysical.
\vspace{0.2in}
\noindent{\bf Gauge Boson Mass}
\vspace{0.1in}
The generators of the
SU(2)$_L$ transformations are
$T_L^i = \frac{1}{2}\tau^i$, where $\tau^i$ are
Pauli matrices.
\begin{eqnarray}
T^1 & = & \frac{1}{2}\tau^1
= \frac{1}{2}\left( \begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right) \\
T^2 & = & \frac{1}{2}\tau^2
= \frac{1}{2}\left( \begin{array}{cc}
0 & -i \\
i & 0
\end{array}
\right) \\
T^3 & = & \frac{1}{2}\tau^3
= \frac{1}{2}\left( \begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right)
\end{eqnarray}
We write explicitly
\begin{equation}
g_1\frac{1}{2}B_{\mu} +
g_2T^jW_{\mu}^j
=
g_1\frac{1}{2}B_{\mu} +
g_2\frac{1}{2}
\left( \begin{array}{cc}
W_{\mu}^3 &
W_{\mu}^1 - iW_{\mu}^2 \\
W_{\mu}^1 + iW_{\mu}^2 &
- W_{\mu}^3
\end{array}
\right)
\label{eq:g1B_g2W}
\end{equation}
We then substitute Eq.~(\ref{eq:UnitaryGauge}) and
Eq.~(\ref{eq:g1B_g2W}) into the kinetic term and the
Higgs potential of $\mathcal{L}_{SSB}$ in
Eq.~(\ref{eq:L_SSB}). After some algebra the
tree-level mass terms for the $H$ field and the gauge
bosons are present. The unphysical scalars reappear
as the longitudinal polarizations of the weak bosons.
\begin{eqnarray}
(D_{\mu}\phi)^{\dagger}(D^{\mu}\phi)
- V(\phi)
& = & - \frac{1}{2}\left( 2\lambda v^2 \right)
H^2
\\
& & + \left( \frac{g_2 v}{2} \right)^2
W^{+\mu}W^-_{\mu}
\\
& & + \left( \frac{v}{2} \right)^2
\left( g_2W^3_{\mu} - g_1B_{\mu} \right)
\left( g_2W^{3\mu} - g_1B^{\mu} \right)
\\
& & + \cdots
\nonumber
\end{eqnarray}
and many other interaction terms. The fields
$W_{\mu}^{\pm}$ are defined as the electric charge
eigenstates. The SSB has mixed the $B_{\mu}$ and
$W_{\mu}^3$ gauge bosons with the weak mixing angle
$\theta_W$.
\begin{equation}
W_{\mu}^{\pm} = \frac{1}{\sqrt{2}}
\left(W_{\mu}^1 \mp W_{\mu}^2\right)
\label{eq:Wpm}
\end{equation}
\begin{equation}
\left( \begin{array}{c}
Z_{\mu} \\
A_{\mu}
\end{array}
\right)
=
\left( \begin{array}{cc}
\cos\theta_W & -\sin\theta_W \\
\sin\theta_W & \cos\theta_W
\end{array}
\right)
\left( \begin{array}{c}
W_{\mu}^3 \\
B_{\mu}
\end{array}
\right)
\label{eq:ZA}
\end{equation}
\begin{equation}
\tan\theta_W = \frac{g_1}{g_2}
\end{equation}
Now we can read out the tree-level masses,
\begin{eqnarray}
m_H & = & v \sqrt{2\lambda} \\
m_W & = & v \frac{g_2}{2} \\
m_Z & = & v \frac{g_2}{2\cos\theta_W} \\
m_{\gamma} & = & 0
\end{eqnarray}
Using
$m_W = (\frac{\sqrt{2}g_2^2}{8G_F})^{1/2}$
in Eq.~(\ref{eq:mW_g2_GF})
with Fermi's constant $G_F = 1.166\times10^{-5}$ GeV$^{-2}$,
we can estimate the VEV of the
Higgs field:
\begin{equation}
v = \frac{2m_W}{g_2}
= (\sqrt{2}G_F)^{-1/2}
\approx 246 \; \mbox{GeV}
\end{equation}
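Numerically (Python sketch, using the rounded $G_F$ from above and an
indicative measured $m_W$; the extracted $g_2$ is illustrative):
\begin{verbatim}
import math

G_F = 1.166e-5                       # Fermi constant, GeV^-2
v = (math.sqrt(2)*G_F)**(-0.5)       # Higgs VEV
print(v)                             # ~246 GeV

m_W = 80.4                           # GeV/c^2, measured
g_2 = 2*m_W/v                        # from m_W = v g_2 / 2
print(g_2)                           # ~0.65
\end{verbatim}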
The quantity
\begin{equation}
\rho = \frac{m_W^2}{m_Z^2\cos^2\theta_W}
\end{equation}
is the universality parameter of the neutral current
interactions and the charged current interactions. It
is predicted to be one at tree level in the SM,
thus provides a test of the SM realization of SSB
compared to other models. Any deviation from
$\rho = 1$ would be an important signal of new
physics.
\vspace{0.2in}
\noindent{\bf Electroweak Unification}
\vspace{0.1in}
Substituting the physical state of
$W_{\mu}^{\pm}$ in Eq.~(\ref{eq:Wpm}) and
$Z_{\mu}$, $A_{\mu}$ in Eq.~(\ref{eq:ZA})
into the electroweak interaction in the covariant
derivative term in Eq.~(\ref{eq:L_symmetric}),
and using $Y = 2(Q - T^3)$,
we can identify the weak CC, weak NC, and
electromagnetic interactions.
\begin{eqnarray}
& & \bar{\Psi}
i\gamma^{\mu}
\left(
i g_1 \frac{Y}{2}B_{\mu} +
i g_2 T^j W_{\mu}^j
\right)
\Psi
\nonumber \\
& = & - \bar{\Psi}\gamma^{\mu}
\left[
g_1 \left(Q - T^3\right) B_{\mu} +
g_2 \left(T^1W_{\mu}^1 + T^2W_{\mu}^2 + T^3W_{\mu}^3\right)
\right]
\Psi
\nonumber \\
& = & \begin{array}[t]{ll}
- \bar{\Psi}\gamma^{\mu}
\left[
\frac{g_2}{\sqrt{2}}\left(T^-W_{\mu}^+ + T^+W_{\mu}^-\right)
\right]
\Psi
& \mbox{(weak CC)}
\\
- \bar{\Psi}\gamma^{\mu}
\left[
\frac{g_2}{\cos\theta_W}\left(T^3-\sin^2\theta_WQ\right)Z_{\mu}
\right]
\Psi
& \mbox{(weak NC)}
\\
- \bar{\Psi}\gamma^{\mu}
g_2\sin\theta_WQA_{\mu}
\Psi
& \mbox{(electromagnetic)}
\end{array}
\label{eq:WZA}
\end{eqnarray}
Comparing the electromagnetic part with
the $-\bar{\Psi}\gamma^{\mu}eQA_{\mu}\Psi$
term of $\mathcal{L}_{QED}$ in Eq.~(\ref{eq:L_QED}),
this implies the unification relation:
\begin{equation}
e = g_2\sin\theta_W = g_1\cos\theta_W
\label{eq:EWKunification}
\end{equation}
\vspace{0.2in}
\noindent{\bf Yukawa Coupling}
\vspace{0.1in}
Now we check the fermion masses. The structure of
the lepton fields, for example, of the first generation is
\begin{equation}
\begin{array}{cc}
L = \left(\begin{array}{c} \nu_e \\ e_L \end{array}\right),
& e_R
\end{array}
\end{equation}
The Higgs field is an SU(2) doublet. This makes
it possible to write an SU(2)-invariant interaction
of the fermions with the Higgs field, i.e., the Yukawa
coupling term in Eq.~(\ref{eq:YukawaCoupling}), which
can be written as
\begin{eqnarray}
\mathcal{L}_{Yukawa}
& = & g_e \bar{L} \phi e_R +
g_e \bar{e}_R \phi^{\dagger} L
\label{eq:YukawaCoupling_LR_1}\\
& = & g_e\left(\begin{array}{cc}
\times & \times
\end{array}
\right)
\left(\begin{array}{c}
\times \\
\times
\end{array}
\right)
\left(\begin{array}{c}
\times
\end{array}
\right)
+
g_e\left(\begin{array}{c}
\times
\end{array}
\right)
\left(\begin{array}{cc}
\times & \times
\end{array}
\right)
\left(\begin{array}{c}
\times \\
\times
\end{array}
\right) \nonumber
\end{eqnarray}
Here $\bar{L}\phi$ is an SU(2) invariant.
Multiplying by $e_R$ does not change the
SU(2) invariance. The second term is the Hermitian
conjugate of the first. The coupling $g_e$ is
arbitrary because it is not specified by the gauge
symmetry principle of the theory. After SSB by
substituting $\phi$ with Eq.~(\ref{eq:UnitaryGauge}),
and using $\bar{e}_Le_R + \bar{e}_Re_L = \bar{e}{e}$,
we get
\begin{eqnarray}
\mathcal{L}_{Yukawa}
& = & \frac{g_ev}{\sqrt{2}}
\left(\bar{e}_Le_R + \bar{e}_Re_L\right)
+\frac{g_e}{\sqrt{2}}
\left(\bar{e}_Le_R + \bar{e}_Re_L\right)
H
\nonumber \\
& = & m_e\bar{e}e
+\frac{m_e}{v}\bar{e}eH
\label{eq:YukawaCoupling_SSB_1}
\end{eqnarray}
We have identified the fermion mass as
$m_e = \frac{g_ev}{\sqrt{2}}$. Thus the theory can
now accommodate a non-zero fermion mass. The second
term says that there is a lepton-Higgs coupling
$\frac{m_e}{v}$.
We notice that no mass term occurs for neutrinos,
$m_{\nu} = 0$. By assumption the theory contains no
right-handed neutrino state $\nu_R$, therefore
a term analogous to Eq.~(\ref{eq:YukawaCoupling_LR_1})
that would lead to a mass term
$\bar{\nu}_R\nu_L$ cannot be written. This also implies neutrinos do
not interact with $H$.
The structure of the quark fields, for example, of
the first generation is
\begin{equation}
\begin{array}{ccc}
Q_L = \left(\begin{array}{c} u_L \\ d_L \end{array}\right),
& u_R,
& d_R
\end{array}
\end{equation}
Since the structure of the right-handed quark is
different from the lepton case, there is a subtlety
in writing down the Yukawa coupling term. We know
$\phi$ is an SU(2) doublet, then so is
\begin{equation}
\tilde{\phi}
= i\tau^2\phi^*
= \left(
\begin{array}{c}
\phi^{0*} \\
-\phi^-
\end{array}
\right)
\end{equation}
This is true for any SU(2) doublet. Since $\phi$ has
hypercharge $Y = +1$, $\tilde{\phi}$ has $Y = -1$,
and for each state, $Q = T^3 + Y/2$ is still satisfied.
After SSB, $\tilde{\phi}$ becomes
\begin{equation}
\tilde{\phi}
\to \frac{1}{\sqrt{2}}
\left(
\begin{array}{c}
v + H \\
0
\end{array}
\right)
\label{eq:UnitaryGauge_tilde}
\end{equation}
The SU(2)-invariant Yukawa coupling for the quarks can be
written as
\begin{equation}
\mathcal{L}_{Yukawa}
= g_d \bar{Q}_L \phi d_R +
g_u \bar{Q}_L \tilde{\phi} u_R +
\mbox{h.c.}
\label{eq:YukawaCoupling_LR_2}
\end{equation}
After SSB by substituting $\phi$ with Eq.~(\ref{eq:UnitaryGauge}),
$\tilde{\phi}$ with Eq.~(\ref{eq:UnitaryGauge_tilde}), and using
$\bar{q}_Lq_R + \bar{q}_Rq_L = \bar{q}{q}$, we get
\begin{eqnarray}
\mathcal{L}_{Yukawa}
& = &
\frac{g_uv}{\sqrt{2}}\bar{u}u
+\frac{g_dv}{\sqrt{2}}\bar{d}d
+\frac{g_u}{\sqrt{2}}\bar{u}uH
+\frac{g_d}{\sqrt{2}}\bar{d}dH
\nonumber \\
& = &
m_u\bar{u}u
+m_d\bar{d}d
+\frac{m_u}{v}\bar{u}uH
+\frac{m_d}{v}\bar{d}dH
\label{eq:YukawaCoupling_SSB_2}
\end{eqnarray}
Again the quark masses can be accommodated, but are
arbitrary parameters. They have to be provided by
experiment. The last two terms describe the
interaction of $u$ and $d$ quarks with $H$.
The procedure can be copied for the second and third
generations with $e \to \mu, \tau$ and with $u \to c, t$
and $d \to s, b$. Since $H$ interacts with a coupling
proportional to $m_f$, it couples most strongly to
the heaviest generation.
\vspace{0.2in}
\noindent{\bf CKM Matrix}
\vspace{0.1in}
The spaces we have been working on are an internal quantum phase
space called gauge space of fermions, and an internal field space
of the Higgs potential. The logical thread is gauge symmetry $+$ SSB.
Let us write down the SM Lagrangian~(\ref{eq:L}) explicitly by
combining Eq.~(\ref{eq:L_symmetric}) and Eq.~(\ref{eq:L_SSB}). This time
we do not restrict ourselves to a particular fermion $\Psi_f$.
There are three generations of fermions
in the SM. We will sum up all of them. Once we do that, there is
a new internal space: generation space. The eigenstates of the
fermions in gauge space need \emph{not} be the eigenstates of the
fermions in generation space, which are the physical mass eigenstates
we observe in experiment.
\begin{eqnarray}
\mathcal{L}
& = & \mathcal{L}_{symmetric}
+\mathcal{L}_{SSB}
\\
& = & \begin{array}[t]{ll}
\sum_f
\bar{\Psi}_f
i\gamma^{\mu}
\left(\partial_{\mu}
+ i g_1 \frac{Y}{2}B_{\mu}
+ i g_2 T^j W_{\mu}^j
+ i g_3 \lambda^a G_{\mu}^a
\right)\Psi_f
& (L_{symm, \; covariant})
\\
- \frac{1}{4}B_{\mu\nu}B^{\mu\nu}
- \frac{1}{4}W_{\mu\nu}^iW^{i\mu\nu}
- \frac{1}{4}G_{\mu\nu}^aG^{a\mu\nu}
& (L_{symm, \; GK})
\\
+
\left|
\left(
\partial_{\mu}
+ig_1\frac{1}{2}B_{\mu}
+ig_2T^jW_{\mu}^j
\right)
\phi
\right|^2
& (L_{SSB, \;\;\; kinetic})
\\
-
\left[
\mu^2 \phi^{\dagger}\phi
+\lambda \left(\phi^{\dagger}\phi\right)^2
\right]
& (L_{SSB, \;\;\; V(\phi)})
\\
-
\sum_f
g_f\bar{\Psi}_f\phi\Psi_f
& (L_{SSB, \;\;\; Yukawa})
\end{array}
\nonumber
\end{eqnarray}
We collect all of the terms for the fermions after SSB:
the kinetic and QCD terms in Eq.~(\ref{eq:L_symmetric}),
the mass and Higgs coupling terms in Eq.~(\ref{eq:YukawaCoupling_SSB_1})
for the leptons and in Eq.~(\ref{eq:YukawaCoupling_SSB_2}) for the quarks,
and the weak CC, weak NC and electromagnetic terms in Eq.~(\ref{eq:WZA}).
We simplify the notation for a fermion field $\Psi_f$ as $f$.
The part of the SM Lagrangian for fermions is given by
\begin{equation}
\mathcal{L}_F = \begin{array}[t]{ll}
\sum_f
\bar{f}
\left( i /\!\!\!\partial
- m_f
- \frac{m_f}{v}H
\right)
f
& (\mbox{Higgs})
\\
- \;
\frac{g_3}{2}
\sum_q
\bar{q}_{\alpha}
\gamma^{\mu}
\lambda^a_{\alpha\beta}
q_{\beta}
G_{\mu}^a
& (\mbox{QCD})
\\
- \;
e
\sum_f
Q_f
\bar{f}
\gamma^{\mu}
f
A_{\mu}
& (\mbox{QED})
\\
- \;
\frac{g_2}{\cos\theta_w}
\sum_f
\bar{f}
\gamma^{\mu}
\left(T^3-\sin^2\theta_WQ\right)
f
Z_{\mu}
& (\mbox{weak NC})
\\
- \;
\frac{g_2}{\sqrt{2}}
\sum_f
\bar{f}
\gamma^{\mu}
(T^+W^+_{\mu} + T^-W^-_{\mu})
f
& (\mbox{weak CC})
\end{array}
\label{eq:L_F}
\end{equation}
We denote the gauge eigenstate triplets in the generation space as
\begin{eqnarray}
\begin{array}{cc}
\mathbf{e}_L = \left(\begin{array}{ccc}
e_L \\
\mu_L \\
\tau_L
\end{array}
\right),
&
\mathbf{e}_R = \left(\begin{array}{ccc}
e_R \\
\mu_R \\
\tau_R
\end{array}
\right)
\end{array}
\nonumber \\
\begin{array}{cc}
\mathbf{u}_L = \left(\begin{array}{ccc}
u_L \\
c_L \\
t_L
\end{array}
\right),
&
\mathbf{u}_R = \left(\begin{array}{ccc}
u_R \\
c_R \\
t_R
\end{array}
\right)
\end{array}
\\
\begin{array}{cc}
\mathbf{d}_L = \left(\begin{array}{ccc}
d_L \\
s_L \\
b_L
\end{array}
\right),
&
\mathbf{d}_R = \left(\begin{array}{ccc}
d_R \\
s_R \\
b_R
\end{array}
\right)
\end{array}
\nonumber
\end{eqnarray}
and denote the rotations from the gauge eigenstates to
the mass eigenstates as unitary matrices
$L_e$, $R_e$, $L_u$, $R_u$, $L_d$, and $R_d$
such that
\begin{eqnarray}
\begin{array}{cc}
\mathbf{e}_L \to L_e\mathbf{e}_L,
&
\mathbf{e}_R \to R_e\mathbf{e}_R
\end{array}
\nonumber \\
\begin{array}{cc}
\mathbf{u}_L \to L_u\mathbf{u}_L,
&
\mathbf{u}_R \to R_u\mathbf{u}_R
\end{array}
\\
\begin{array}{cc}
\mathbf{d}_L \to L_d\mathbf{d}_L,
&
\mathbf{d}_R \to R_d\mathbf{d}_R
\end{array}
\nonumber
\end{eqnarray}
Because all of the neutrinos in the SM are massless,
their mass eigenstates are degenerate, namely we
cannot distinguish among the mass eigenstates.
We set the rotation for neutrinos as a unit matrix
denoted as $I_{\nu}$.
First we check the QED part
in Eq.~(\ref{eq:L_F}) to see if there is any change
under the rotations,
\begin{eqnarray}
\mathcal{L}_{F}^{QED}
& = &
- \;
e
\sum_f
Q_f
\bar{f}
\gamma^{\mu}
f
A_{\mu}
\nonumber \\
& = &
- \;
e
Q_{\mathbf{f}}
\left(
\bar{\mathbf{f}}_L
\gamma^{\mu}
\mathbf{f}_L
+
\bar{\mathbf{f}}_R
\gamma^{\mu}
\mathbf{f}_R
\right)
A_{\mu},
\;\;\;\;\;\;
\mbox{ with }
\mathbf{f} = \mathbf{e}, \mathbf{u}, \mathbf{d}
\nonumber \\
& \to &
- \;
e
Q_{\mathbf{f}}
\left(
\bar{\mathbf{f}}_L
\gamma^{\mu}
L_{\mathbf{f}}^{\dagger}L_{\mathbf{f}}
\mathbf{f}_L
+
\bar{\mathbf{f}}_R
\gamma^{\mu}
R_{\mathbf{f}}^{\dagger}R_{\mathbf{f}}
\mathbf{f}_R
\right)
A_{\mu}
\label{eq:L_F_QED_rotated}
\\
& = &
- \;
e
Q_{\mathbf{f}}
\left(
\bar{\mathbf{f}}_L
\gamma^{\mu}
\mathbf{f}_L
+
\bar{\mathbf{f}}_R
\gamma^{\mu}
\mathbf{f}_R
\right)
A_{\mu}
\nonumber
\end{eqnarray}
where we have let $L_f^{\dagger}$ ($R_f^{\dagger}$)
pass $\gamma^{\mu}$ forward in Eq.~(\ref{eq:L_F_QED_rotated})
because the former rotates in the generation space and
the latter is in the spinor space. Since the unitary
rotation matrices give $L_f^{\dagger}L_f = I$
and $R_f^{\dagger}R_f = I$, the electromagnetic
interaction is diagonal in both the gauge eigenstates
and the mass eigenstates.
The same result holds for the Higgs, QCD, and weak NC
parts in Eq.~(\ref{eq:L_F}) for the same reason. For
the weak NC, this is called the GIM mechanism~\cite{Glashow:1970gm}.
The flavor changing neutral currents (FCNC), e.g.
$s \to d$ decay ``off-diagonal'' in the generation
space, are strongly suppressed. On the other hand,
the FCNC rare decays are very interesting because
they are possible probes for new interactions.
Now we check the weak CC in Eq.~(\ref{eq:L_F}).
For leptons, we have
\begin{eqnarray}
\mathcal{L}_l^{CC}
& = &
- \;
\frac{g_2}{\sqrt{2}}
\sum_l
\bar{l}
\gamma^{\mu}
(T^+W^+_{\mu} + T^-W^-_{\mu})
l
\nonumber \\
& = &
- \;
\frac{g_2}{\sqrt{2}}
\left(
\bar{\mathbf{\nu}}
\gamma^{\mu}
\mathbf{e}_L
W^-_{\mu}
+ h.c.
\right)
\nonumber \\
& \to &
- \;
\frac{g_2}{\sqrt{2}}
\left(
\bar{\mathbf{\nu}}
I_{\nu}^{\dagger}L_e
\gamma^{\mu}
\mathbf{e}_L
W^-_{\mu}
+ h.c.
\right)
\label{eq:L_l_CC_rotated}
\\
& = &
- \;
\frac{g_2}{\sqrt{2}}
\left(
\bar{\mathbf{\nu}}
\gamma^{\mu}
\mathbf{e}_L
W^-_{\mu}
+ h.c.
\right)
\nonumber
\end{eqnarray}
where we have let $L_e$ pass $\gamma^{\mu}$ backward
in~(\ref{eq:L_l_CC_rotated}). With $I_{\nu}^{\dagger}L_e$
acting backward on the vector of the degenerate
neutrino mass eigenstates, we just go back to the
original form, and the leptonic weak CC interactions
are diagonal in both kinds of eigenstates.
So far, the distinction between the gauge eigenstates
and the mass eigenstates has been seen to have no apparent
effect. However, mixing between generations does
manifest itself in the weak CC sector for quarks.
By convention, the quark mixing is assigned to the
down-type quarks,
\begin{eqnarray}
\mathcal{L}_q^{CC}
& = &
- \;
\frac{g_2}{\sqrt{2}}
\sum_q
\bar{q}
\gamma^{\mu}
(T^+W^+_{\mu} + T^-W^-_{\mu})
q
\nonumber \\
& = &
- \;
\frac{g_2}{\sqrt{2}}
\left(
\bar{\mathbf{u}}_L
\gamma^{\mu}
\mathbf{d}_L
W^-_{\mu}
+ h.c.
\right)
\nonumber \\
& \to &
- \;
\frac{g_2}{\sqrt{2}}
\left(
\bar{\mathbf{u}}_L
\gamma^{\mu}
L_u^{\dagger}L_d
\mathbf{d}_L
W^-_{\mu}
+ h.c.
\right)
\label{eq:L_q_CC_rotated}
\\
& = &
- \;
\frac{g_2}{\sqrt{2}}
\left(
\bar{\mathbf{u}}_L
\gamma^{\mu}
V
\mathbf{d}_L
W^-_{\mu}
+ h.c.
\right)
\nonumber
\end{eqnarray}
where
\begin{equation}
V = L_u^{\dagger}L_d
\end{equation}
Thus the down-type quark gauge states participating in
the transitions of the weak CC are linear combinations of
their mass eigenstates. For three generations, it is called
the CKM (Cabibbo-Kobayashi-Maskawa) matrix~\cite{Cabibbo:1963yz}.
The SM does
not predict the content of $V$. Rather its matrix
elements must be extracted from experiment.
\begin{equation}
\left( \begin{array}{c}
d \\
s \\
b
\end{array}
\right)_{\mbox{weak}}
=
\left( \begin{array}{ccc}
V_{ud} & V_{us} & V_{ub} \\
V_{cd} & V_{cs} & V_{cb} \\
V_{td} & V_{ts} & V_{tb}
\end{array}
\right)
\left( \begin{array}{c}
d \\
s \\
b
\end{array}
\right)_{\mbox{mass}}
\label{eq:CKM_Matrix}
\end{equation}
Any $3\times3$ complex matrix has 18 parameters.
The quark mixing matrix $V$, being the product of two
unitary matrices, is itself unitary, $V^{\dagger}V = 1$,
and this eliminates 9 parameters. The remaining 9 parameters
can be identified with 3 rotation angles and 6
phase angles, 5 of which are eliminated by rephasing the
relative quark phase angles in Eq.~(\ref{eq:CKM_Matrix}),
leaving 1 global phase angle. So the actual total number
of free parameters is $18 - 9 - 5 = 4$, which includes 3
rotation angles and 1 phase angle.
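The same bookkeeping generalizes to $n$ generations; the small
counting function below (Python, illustrative) reproduces the result
for two and three generations.
\begin{verbatim}
def ckm_parameters(n):
    # Physical parameters of an n x n unitary quark mixing matrix.
    total    = 2*n*n          # real parameters of a complex matrix
    unitary  = n*n            # constraints from V^dagger V = 1
    rephase  = 2*n - 1        # removable relative quark phases
    physical = total - unitary - rephase
    angles   = n*(n - 1)//2   # real rotation angles
    return physical, angles, physical - angles

print(ckm_parameters(2))      # (1, 1, 0): only the Cabibbo angle
print(ckm_parameters(3))      # (4, 3, 1): 3 angles and 1 CP phase
\end{verbatim}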
The ``standard''
parametrization of the CKM matrix advocated in
PDG~\cite{Eidelman:2004wy} is
\begin{equation}
V =
\left(
\begin{array}{ccc}
c_{12}c_{13} &
s_{12}c_{13} &
s_{13}e^{-i\delta_{13}} \\
-s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta_{13}} &
c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta_{13}} &
s_{23}c_{13} \\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta_{13}} &
-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta_{13}} &
c_{23}c_{13}
\end{array}
\right)
\label{eq:CKM_StandardParametrization}
\end{equation}
In this equation, $c_{ij} = \cos\theta_{ij}$ and
$s_{ij} = \sin\theta_{ij}$, with $i$ and $j$ labeling
the generations. The interpretation is that if $\theta_{ij}$
vanishes, so does the mixing between those two generations.
For example, in the limit $\theta_{23} = \theta_{13} = 0$,
the third generation decouples and it reduces to two generations
with $\theta_{12}$ identified as the Cabibbo angle.
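As a sanity check, this parametrization is exactly unitary for any
choice of angles; the following numerical verification (Python, with
illustrative angle values) confirms it.
\begin{verbatim}
import numpy as np

def ckm(th12, th23, th13, delta):
    # Standard parametrization of the CKM matrix
    c12, s12 = np.cos(th12), np.sin(th12)
    c23, s23 = np.cos(th23), np.sin(th23)
    c13, s13 = np.cos(th13), np.sin(th13)
    e = np.exp(1j*delta)
    return np.array([
        [c12*c13, s12*c13, s13/e],
        [-s12*c23 - c12*s23*s13*e, c12*c23 - s12*s23*s13*e, s23*c13],
        [s12*s23 - c12*c23*s13*e, -c12*s23 - s12*c23*s13*e, c23*c13]])

V = ckm(0.227, 0.042, 0.004, 1.2)               # illustrative angles
print(np.allclose(V.conj().T @ V, np.eye(3)))   # True: V is unitary
\end{verbatim}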
The complex phase enters the weak
charged-current interaction terms
$\bar{\mathbf{u}}\gamma^{\mu}P_LV\mathbf{d}W_{\mu}$,
and from quantum theory we know that the Hamiltonian will not
be invariant under time reversal, or equivalently, CP.
So this induces CP violation.
The presently measured magnitudes of the
CKM matrix elements are
\begin{equation}
\left(
\begin{array}{ccc}
0.9739-0.9751 & 0.221-0.227 & 0.0029-0.0045 \\
0.221-0.227 & 0.9730-0.9744 & 0.039-0.044 \\
0.0048-0.0140 & 0.037-0.043 & 0.9990-0.9992
\end{array}
\right)
\label{eq:CKM_Measurement}
\end{equation}
Here we discuss some of the immediate consequences.
For the top quark, with $V_{tb}\approx0.999$, we have
\begin{equation}
\mbox{BR}(t \to W b) \approx 100\%
\end{equation}
For the bottom quark, with $V_{cb}\approx0.04$ ten times
larger than $V_{ub}\approx0.004$, it mostly decays by
$b \to W c$. Then $W$ can decay to $e\nu$, $\mu\nu$,
$\tau\nu$, $u\bar{d}$, and $c\bar{s}$ with a color factor
3 for each quark decaying mode. The width of $b$ decays is
$\Gamma_b\approx(9V_{cb}^2G_F^2m_b^5)/(192\pi^3)$.
This gives
\begin{equation}
\frac{\Gamma_b}{\Gamma_{\tau}}
\approx
\frac{9}{5}
V_{cb}^2
\left(
\frac{m_b}{m_{\tau}}
\right)^5
\approx
0.4
\end{equation}
So the lifetime of $b$ is about two and a half times
longer than the lifetime of $\tau$ because its
decay can only happen by the rotation from the
mass eigenstates to the weak eigenstates and
the magnitudes of the matrix elements for this
rotation are small.
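Numerically (Python sketch with indicative values for $V_{cb}$,
$m_b$, and $m_{\tau}$):
\begin{verbatim}
V_cb, m_b, m_tau = 0.04, 4.8, 1.777        # GeV; indicative values

ratio = (9/5) * V_cb**2 * (m_b/m_tau)**5   # Gamma_b / Gamma_tau
print(ratio)                               # ~0.4 -> tau_b ~ 2.5 tau_tau
\end{verbatim}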
\vspace{0.2in}
\noindent{\bf Couplings to Fermions}
\vspace{0.1in}
For convenience, we repeat Eq.~(\ref{eq:L_F}) here.
\begin{equation}
\mathcal{L}_F = \begin{array}[t]{ll}
\sum_f
\bar{f}
\left( i /\!\!\!\partial
- m_f
- \frac{m_f}{v}H
\right)
f
& (\mbox{Higgs})
\\
- \;
\frac{g_3}{2}
\sum_q
\bar{q}_{\alpha}
\gamma^{\mu}
\lambda^a_{\alpha\beta}
q_{\beta}
G_{\mu}^a
& (\mbox{QCD})
\\
- \;
e
\sum_f
Q_f
\bar{f}
\gamma^{\mu}
f
A_{\mu}
& (\mbox{QED})
\\
- \;
\frac{g_2}{\cos\theta_w}
\sum_f
\bar{f}
\gamma^{\mu}
\left(T^3-\sin^2\theta_WQ\right)
f
Z_{\mu}
& (\mbox{weak NC})
\\
- \;
\frac{g_2}{\sqrt{2}}
\sum_f
\bar{f}
\gamma^{\mu}
(T^+W^+_{\mu} + T^-W^-_{\mu})
f
& (\mbox{weak CC})
\end{array}
\label{eq:repeat_L_F}
\end{equation}
We can read out the couplings to the fermions in the SM
as follows:
\begin{itemize}
\item The Higgs coupling for $H \to f\bar{f}$
is $\frac{m_f}{v}$.
\item The QCD coupling for $g \to q\bar{q}$ is
$\frac{g_3}{2}\lambda^a$.
(For the electroweak interactions of the quarks
$\gamma/Z/W/H \to \bar{q}_cq_c$,
the effect of the color charge is that
the probabilities, i.e., the decay widths
are multiplied by a constant color factor
$N_c = 3$, rather than that the couplings
appearing in the amplitudes are multiplied
by the color generator $\lambda^a$.
This is because $\gamma/Z/W/H$ are
colorless and the number of color
combinations of $\bar{q}_cq_c$ is fixed
to be three.)
\item The electromagnetic coupling for
$\gamma \to f\bar{f}$ is $eQ_f$.
\item The neutral weak coupling for
$Z^0 \to f\bar{f}$ is
$\frac{g_2}{\cos\theta_W}(T^3 - \sin^2\theta_W Q_f)$
for left-handed fermions and
$\left(-\frac{g_2Q_f\sin^2\theta_W}{\cos\theta_W}\right)$
for right-handed fermions.
\item The charged weak coupling is
$\frac{g_2}{\sqrt{2}}$,
and this only applies to left-handed fermions.
We notice that the coupling for
$W^{\pm} \to l \nu_l$ is $\frac{g_2}{\sqrt{2}}$,
while the coupling for $W^{\pm} \to qq'$
should be multiplied by a quark mixing element
in the CKM matrix and it becomes
$V_{qq'}\frac{g_2}{\sqrt{2}}$.
\end{itemize}
These results are summarized in
Table~\ref{tab:CouplingsToFermions}
in Section~\ref{sec:theory_SM}.
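For concreteness, the neutral weak couplings can be tabulated directly
from $T^3$, $Q$, and $\sin^2\theta_W$. The sketch below (Python, using
the rounded values from earlier) is illustrative, not the entries of
Table~\ref{tab:CouplingsToFermions} themselves.
\begin{verbatim}
import math

sin2w = 0.22                                 # sin^2(theta_W)
g_2   = 0.65                                 # weak coupling
cosw  = math.sqrt(1 - sin2w)

def z_couplings(T3, Q):
    # Left- and right-handed couplings of the Z to a fermion.
    gL = g_2/cosw * (T3 - sin2w*Q)
    gR = -g_2/cosw * sin2w*Q
    return gL, gR

print(z_couplings(+0.5, 0))                  # neutrinos
print(z_couplings(-0.5, -1))                 # charged leptons
print(z_couplings(+0.5, 2/3))                # up-type quarks
print(z_couplings(-0.5, -1/3))               # down-type quarks
\end{verbatim}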
\chapter{How to Calculate Cross Section}
\label{cha:HowToCalculateXSec}
We are concerned with the resonance production of tau pairs in the
SM, i.e. $p\bar{p}\to\gamma^*/Z\to\tau\tau$.
This is a good example to see how an event
generator~\cite{Barger:1997}~\cite{Richardson:2003}
works using Monte Carlo simulation.
At a $p\bar{p}$ collider, the production of any process starts
from parton interaction. A proton is made of quarks and gluons
and can be written as
\begin{equation}
\mbox{proton} =
\underbrace{uud}_{valence} + \;\;
\underbrace{u\bar{u}+d\bar{d}+\cdots}_{sea} \;\; + \;\;
\underbrace{g+g+\cdots}_{gluons}
\end{equation}
The probability density for a given parton $i$ in a proton
carrying momentum fraction $x$ and being ``seen'' in an interaction
by an intermediate boson with energy scale $Q$ is characterized
by a function $f_i(x, Q)$, called the Parton Density Function
(PDF)~\cite{Lai:1999wy}.
The momentum density of a parton is its PDF multiplied
by its momentum fraction and is expressed as $x f_i(x, Q)$.
An example of parametrization is shown in Fig.~\ref{fig:PDF}.
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{2.3/PDF.eps}}
\caption[Proton's parton density functions]
{Proton's parton density functions.}
\label{fig:PDF}
\end{center}
\end{figure}
The differential cross section for $12\to34$ can be written as
\begin{equation}
d\sigma =
\frac{\left(2\pi\right)^4}{2\hat{s}}
\frac{d^3p_3}{\left(2\pi\right)^32E_3}
\frac{d^3p_4}{\left(2\pi\right)^32E_4}
\delta^4(p_1+p_2-p_3-p_4)
dx_1dx_2f_1(x_1)f_2(x_2)
\sum_{spins}\left|\mathcal{M}\right|^2_{12\to34}
\end{equation}
where $\hat{s}$ is the parton center-of-mass energy squared,
$p_i$ ($E_i$) is the momentum (energy) of the $i$th particle,
$x_{1,2}$ are the fractions of the momenta of the incoming
beam particles carried by the incoming partons, $f_i(x_i)$
are the PDF's with an implicit dependence on the energy scale of the
interaction, and $\sum_{spins}\left|\mathcal{M}\right|^2_{12\to34}$
is the matrix element squared for the process averaged over
the spins and colors of the incoming particles and summed
over the spins and colors of the outgoing particles.
First we consider the phase space. We perform the integral over
the three-momentum of $p_4$, and reexpress the integral over the
momentum of $p_3$ in terms of the magnitude of the
three-momentum $p$ in the parton center-of-mass frame and the
angle with respect to the beam $\theta$ and the azimuthal angle
$\phi$. Then we make a transformation using $\hat{s}=x_1x_2s$
with $s$ the $p\bar{p}$ center-of-mass energy squared and we
get $dx_2=d\hat{s}/(sx_1)$. After some algebra, the differential
cross section becomes
\begin{equation}
d\sigma =
\frac{p}{32\pi^2\hat{s}^{5/2}}
d\cos\theta
d\phi
d\hat{s}
\frac{dx_1}{x_1}
x_1f_1(x_1)x_2f_2(x_2)
\sum_{spins}\left|\mathcal{M}\right|^2_{12\to34}
\end{equation}
The angular part can be generated uniformly with $0<\phi<2\pi$
and $-1<\cos\theta<1$. The momentum fraction part $dx_1/x_1$
can be transformed to $d\ln x_1$ and then generated uniformly.
For the distribution over $\hat{s}$, we impose a minimum value
$\hat{s}_{min}$. There are two types
of distributions to be smoothed in order to make
the Monte Carlo simulation converge faster. One type is a power-law
distribution $1/\hat{s}^{\alpha}$ with $\alpha>1$, which is the rise
in the cross section due to the photon exchange at small
center-of-mass energies. The other type is the Breit-Wigner
resonance due to the $Z$ boson exchange with a mass $m$ and
a width~$\Gamma$,
\begin{equation}
\begin{array}[c]{lllll}
\int_{\hat{s}/s}^1\frac{dx_1}{x_1}
& \rho\equiv\ln x_1
& \to
& \int d\rho
& \mbox{uniformly} \\
\int_{\hat{s}_{min}}^s\frac{d\hat{s}}{\hat{s}^{\alpha}}
& \rho\equiv\hat{s}^{(1-\alpha)}
& \to
& \int d\rho
& \mbox{uniformly} \\
\int_{\hat{s}_{min}}^s\frac{d\hat{s}}{\left(\hat{s}-m^2\right)^2 + m^2\Gamma^2}
& \rho\equiv\tan^{-1}\left(\frac{\hat{s}-m^2}{m\Gamma}\right)
& \to
& \int d\rho
& \mbox{uniformly}
\end{array}
\end{equation}
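As an illustration, the Breit-Wigner mapping in the last row can be
implemented by inverse transform sampling. The following Python sketch
is ours; the function name and interface are illustrative assumptions,
not generator code.
\begin{verbatim}
import math, random

def sample_breit_wigner(s_min, s_max, m, gamma):
    """Sample s_hat between s_min and s_max so that
    rho = atan((s_hat - m^2)/(m*Gamma)) is uniform, and return
    (s_hat, weight), where weight = (rho_max - rho_min)*ds_hat/d_rho
    restores the original flat measure in s_hat."""
    rho_min = math.atan((s_min - m**2) / (m * gamma))
    rho_max = math.atan((s_max - m**2) / (m * gamma))
    rho = random.uniform(rho_min, rho_max)
    s_hat = m**2 + m * gamma * math.tan(rho)
    # Jacobian ds_hat/d_rho = ((s_hat - m^2)^2 + m^2 Gamma^2)/(m Gamma):
    # it exactly cancels the Breit-Wigner denominator of the integrand.
    jac = ((s_hat - m**2)**2 + m**2 * gamma**2) / (m * gamma)
    return s_hat, (rho_max - rho_min) * jac
\end{verbatim}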
Second, we consider the matrix element, which is the interesting part.
With a non-constant matrix element, the distribution is expected
to deviate from the pure phase-space distribution. Further,
compared with the distributions described by the SM, there may be
deviations in the distributions in real data due to
unknown matrix elements of new physics.
The effects shown in the cross section could be an enhancement or
a suppression, a new resonance, changes in the angular distributions,
a divergence or a cancellation by interference, etc. A good deal of
particle physics consists of the measurement and interpretation
of such effects in cross sections.
For the SM process $q\bar{q}\to\gamma^*/Z\to\tau\tau$, we have
\begin{equation}
\sum_{spins}\left|\mathcal{M}\right|^2_{12\to34} =
\begin{array}[t]{l}
(\hat{t}-m_3^2)
(\hat{t}-m_4^2)
(|g^{RL}|^2+|g^{LR}|^2)
\\
+
(\hat{u}-m_3^2)
(\hat{u}-m_4^2)
(|g^{LL}|^2+|g^{RR}|^2)
\\
+
2m_3m_4
\mathcal{R}e
\{g^{RL}g^{RR*}+g^{LR}g^{LL*}\}
\end{array}
\end{equation}
where
$\hat{t} = (p_1-p_3)^2$,
$\hat{u} = (p_1-p_4)^2$,
and $m_{3,4}$ are the masses of the outgoing tau particles.
In the center-of-mass frame using
$p_{cm}^2 = \frac{1}{4\hat{s}}[\hat{s}-(m_3+m_4)^2][\hat{s}-(m_3-m_4)^2]$,
the value of $\hat{t}$ can be expressed as
$\hat{t} = m_3^2-\hat{s}^{1/2}(E_3-p_{cm}\cos\theta)$,
and the value of $\hat{u}$ can be expressed as
$\hat{u} = m_4^2-\hat{s}^{1/2}(E_4-p_{cm}\cos\theta)$.
The couplings are defined to be
\begin{equation}
g^{ab}
= \sum_{i=\gamma^*/Z}
\frac{g_{in}^ag_{out}^b}
{(\hat{s}-m_i^2)^2+m_i^2\Gamma_i^2}
\end{equation}
where the sum runs over the intermediate gauge
bosons $\gamma^*/Z$, with masses $0/m_Z$ and widths $0/\Gamma_Z$,
$g_{in}^{L,R}$ is the coupling of the gauge boson to the incoming
partons, and $g_{out}^{L,R}$ is the coupling of the gauge boson to
the outgoing tau particles. The couplings to the fermions in the
SM are listed in Table~\ref{tab:CouplingsToFermions}.
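As a minimal sketch of how this combination might be coded (the
dictionary layout and names are illustrative choices of ours, not
generator code):
\begin{verbatim}
def chiral_coupling(a, b, s_hat, g_in, g_out, bosons):
    """g^{ab} as defined above: a sum over the intermediate bosons i
    of g_in^a * g_out^b / ((s_hat - m_i^2)^2 + m_i^2 Gamma_i^2).
    a, b are 'L' or 'R'; g_in/g_out map boson -> {'L': .., 'R': ..};
    bosons maps boson -> (mass, width)."""
    total = 0.0
    for name, (m, width) in bosons.items():
        denom = (s_hat - m**2)**2 + (m * width)**2
        total += g_in[name][a] * g_out[name][b] / denom
    return total

# the photon enters with zero mass and width:
bosons = {"gamma": (0.0, 0.0), "Z": (91.19, 2.50)}
\end{verbatim}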
To summarize, the major parts for generating an event include:
(a) generating randomly the incoming partons and incorporating the PDF's,
(b) generating randomly the kinematic variables which describe the event
in the phase space of the final particles, and
(c) calculating the matrix element.
Now we can put the parts together and get the weight for an event by
multiplying all of the factors. The weight is in GeV$^{-2}$ and we
need to convert to picobarn with a conversion constant $3.89379\times10^8$
GeV$^2$ pb.
After generating a large sample of events, we can fill the weights of
the events into a histogram, for example, a one-dimensional histogram
of $\sqrt{\hat{s}}$, which is the invariant mass of the tau pairs. The
differential cross section versus the invariant mass of the tau pairs
can be obtained by dividing the histogram by the number of events
generated and the size of the bins. The result is shown in
Fig.~\ref{fig:TauTau_3} in Section~\ref{sec:theory_tt}.
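In code, this bookkeeping might look like the following minimal sketch;
the naive histogram filler and the use of the conversion constant are
illustrative, and real generators use proper histogramming tools.
\begin{verbatim}
GEV2_TO_PB = 3.89379e8   # conversion constant, GeV^2 pb

def fill(hist, edges, mass, weight):
    """Add an event weight to the bin of `hist` containing `mass`."""
    for i in range(len(edges) - 1):
        if edges[i] <= mass < edges[i + 1]:
            hist[i] += weight
            break

# After generating n_events, the differential cross section per bin is
#   dsigma/dm = GEV2_TO_PB * (sum of weights in bin)
#               / (n_events * bin width)
\end{verbatim}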
\chapter{Separation Angle under Boost}
\label{cha:Alpha_Gamma}
The calculable case is to boost the simplest
phase space, i.e. a two-body decay, from the
rest frame to the lab frame, as shown in
Fig.~\ref{fig:Boost_1}. The two final
particles are back-to-back in the rest frame.
The separation angle $\alpha$ of the two
final particles in the lab frame can be
parametrized as a function of the boost~$\gamma$
and of $\theta^*$, the polar angle in the rest frame,
which has an equal probability of taking any value
between $0^o$ and $90^o$.
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{3.2.1/Boost_1.eps}}
\caption[Boost two-body decay from rest frame to lab frame]
{Boost two-body decay from rest frame to lab frame.}
\label{fig:Boost_1}
\end{center}
\end{figure}
Let us consider two massless final particles, e.g.
two photons from the decay of a $\pi^0$ with mass $m$,
energy $E$, and boost $\gamma = E/m$. We boost
the four-momentum of $p_1$ from the rest frame
to the lab frame,
\begin{equation}
\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \gamma & \sqrt{\gamma^2-1} \\
0 & 0 & \sqrt{\gamma^2-1} & \gamma
\end{array}
\right)
\left(
\begin{array}{c}
0 \\
\frac{m}{2}\sin\theta^* \\
\frac{m}{2}\cos\theta^* \\
\frac{m}{2}
\end{array}
\right)
=
\left(
\begin{array}{c}
0 \\
\frac{m}{2}\sin\theta^* \\
\frac{m}{2}(\gamma\cos\theta^*+\sqrt{\gamma^2-1}) \\
\frac{m}{2}(\sqrt{\gamma^2-1}\cos\theta^*+\gamma) \\
\end{array}
\right)
\end{equation}
We denote the angle between $p_1$ in the lab frame
and the direction of the boost as $\theta_1$.
We have
\begin{equation}
\sin\theta_1 =
\frac{\sin\theta^*}
{\sqrt{\sin^2\theta^*+(\gamma\cos\theta^*+\sqrt{\gamma^2-1})^2}}
\end{equation}
We denote the angle between $p_2$ in the lab frame
and the direction of the boost as $\theta_2$.
By substituting $\theta^*$ with $\theta^*+\pi$,
we have
\begin{equation}
\sin\theta_2 =
\frac{-\sin\theta^*}
{\sqrt{\sin^2\theta^*+(-\gamma\cos\theta^*+\sqrt{\gamma^2-1})^2}}
\end{equation}
Now we can calculate the separation angle $\alpha$,
\begin{equation}
\sin\alpha =
\sin[\theta_1 + (2\pi - \theta_2)] =
\frac{2\sin\theta^*\sqrt{\gamma^2-1}}
{\sin^2\theta^*(\gamma^2-1)+1}
\end{equation}
For $\theta^*$ not too small and $\gamma\gg1$,
we get an approximation for small $\alpha$,
\begin{equation}
\alpha \approx
\frac{1}{\sin\theta^*}
\times
\frac{2}{\gamma}
\label{eq:Alpha_Gamma}
\end{equation}
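This approximation is easy to check numerically. The following minimal
Python sketch, with a function name and sampled values of our own,
compares the exact opening angle with Eq.~(\ref{eq:Alpha_Gamma}):
\begin{verbatim}
import math, random

def separation_angle(theta_star, gamma):
    """Exact lab-frame opening angle between the two massless
    daughters, for rest-frame polar angle theta_star (radians)
    and boost gamma, following the boosted four-vectors above."""
    gb = math.sqrt(gamma**2 - 1.0)          # gamma * beta
    t1 = math.atan2(math.sin(theta_star),
                    gamma * math.cos(theta_star) + gb)
    t2 = math.atan2(math.sin(theta_star),
                    -gamma * math.cos(theta_star) + gb)
    return t1 + t2

gamma = 50.0
theta_star = math.radians(random.uniform(1.0, 90.0))
exact = separation_angle(theta_star, gamma)
approx = 2.0 / (gamma * math.sin(theta_star))
print(exact, approx)  # agree well for gamma >> 1, theta* not too small
\end{verbatim}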
For fixed $\theta^*$ (not too small) values,
the functions are shown in Fig.~\ref{fig:Boost_2}.
For large boosts, the smearing by $\theta^*$ is
small, thus the correlation between the separation
angle and the boost (energy) is very strong.
Since $\theta^*$ has an equal probability of taking
any value between $0^o$ and $90^o$,
the probability that the separation angle stays
between the curve for $\theta^* = 30^o$ and the
curve for $\theta^* = 90^o$ is three times
larger than the probability that the separation
angle stays between the curve for
$\theta^* = 10^o$ and the curve for
$\theta^* = 30^o$. The effect is very obvious.
We use Monte Carlo simulation to check the same
plot, as shown in Fig.~\ref{fig:Boost_3}.
It confirms that the simplest case of two-body
decay is indeed calculable and the correlation
between the separation angle and the boost
(energy) is very strong.
\begin{figure}
\begin{center}
\parbox{5.3in}{\epsfxsize=\hsize\epsffile{3.2.1/Boost_2.eps}}
\caption[Separation angle vs. boost, calculated in $\theta^*$ slices]
{Separation angle vs. boost, calculated in $\theta^*$ slices.}
\label{fig:Boost_2}
\vspace{0.5in}
\parbox{5.3in}{\epsfxsize=\hsize\epsffile{3.2.1/Boost_3.eps}}
\caption[Separation angle vs. boost, Monte Carlo distribution]
{Separation angle vs. boost, Monte Carlo distribution.}
\label{fig:Boost_3}
\end{center}
\end{figure}
For the more complicated phase spaces such as
those of tau's hadronic decays, the calculation
is very hard. But Eq.~(\ref{eq:Alpha_Gamma}) is
still a good hint. We need to use
Monte Carlo simulation to get the distribution,
which is shown in Fig.~\ref{fig:TauId_shr1} in
Section~\ref{subsec:TauId_shrinking}.
\chapter{Conclusions}
\label{cha:conclusions}
We have performed a blind search for high mass
tau pairs using data corresponding to 195
pb$^{-1}$ of integrated luminosity from Run II
of the Tevatron, using the CDF detector. In
the high-mass region with $m_{vis}>120$ GeV/$c^2$,
we expect $2.8\pm0.5$ events from known background
sources, and observe $4$ events in the data
sample. Thus no significant excess is observed,
and we use the result to set upper limits on
the cross section times branching ratio to tau
pairs of scalar and vector particles as a
function of mass,
shown in
Table~\ref{tab:results_2}
and plotted in
Fig.~\ref{fig:results_6}.
\chapter{Introduction}
\label{cha:Intro}
The Standard Model (SM) combines the electroweak
theory with Quantum Chromodynamics (QCD),
the theory of the strong interactions, and shows good agreement with
collider experiments. However, the SM
does not include gravity and
is expected to be an effective
low-energy theory.
The Fermilab Tevatron is currently the high energy
frontier of particle physics and delivers
proton-antiproton collisions at high luminosity.
The Run II of the Collider Detector at Fermilab (CDF)
continues the precision measurements
of hadron collider physics and the search for new
physics at and above the electroweak scale.
With this precision capability at the energy frontier, we
can attack the open questions of high energy physics
from many complementary directions, including: the properties
of the top quark, precision electroweak measurements,
e.g. the mass of the $W$ boson,
direct searches for new phenomena, tests of
perturbative QCD at next-to-leading order and large
$Q^2$, and the constraining of the CKM matrix with
high-statistics $B$ decays.
This thesis is about a direct
search for new particles decaying to tau pairs.
The evidence for such new particles would be
events with tau pairs at accessible energies that
deviate clearly and significantly from the SM
prediction.
In Run I CDF recorded an unusual event in which
there were two very high energy $\tau\to h\nu$
candidates nearly back-to-back in direction.
Figure~\ref{fig:Intro} shows a display of the event.
This event was recorded in the data sample from the
missing transverse energy trigger, and was
noticed in the context of the Run I charged Higgs
search~\cite{CDFnote:3546}. In Run I,
{\em a posteriori}, it was not possible to estimate
a probability for observing such an event, though
less than about 0.1 such events were expected from
backgrounds, including $Z/\gamma^*\to\tau\tau$
Drell-Yan ($q\bar{q}\to Z/\gamma^*\to l^+l^-$).
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{1/introduction_run1_tt.eps}}
\end{center}
\caption[Run I high-mass di-tau candidate event]
{Run I high-mass di-tau candidate event.
The left plot shows energy measurement in calorimeters
and the event is very clean. The right plot shows
the display in the transverse plane. The three-prong
identified tau object has energy 160 GeV. The one-prong
identified tau object has energy at least 135 GeV.
There is also a significant missing transverse energy
indicating significant neutrinos.
The scale of the invariant mass of the two tau objects
and the neutrinos is above 300 GeV/$c^2$.}
\label{fig:Intro}
\end{figure}
Various new physics processes can lead to very
high-mass tau pairs, for example, the new vector
boson $Z^\prime\to\tau\tau$ predicted in the
extension of the Standard Model obtained by adding a
new U(1) gauge symmetry, and the pseudoscalar Higgs
boson $A\to\tau\tau$ predicted in the minimal
supersymmetric extension of the Standard Model (MSSM).
The known backgrounds are from
the high-mass tail of Drell-Yan processes
(mainly $Z/\gamma^*\to\tau\tau$) as well as jet$\to\tau$
fakes from $W$+jets, QCD di-jet, and multi-jet
events.
In this analysis we search for such signal
processes by performing a counting experiment.
We select events with $e+\tau_h$, $\mu+\tau_h$,
and $\tau_h+\tau_h$ (here, ``$\tau_h$'' means a
$\tau$ hadronic decay). We construct an
invariant mass which we call $m_{vis}$ using the
four-vector sum of the lepton, the tau, and the missing transverse energy
vector (ignoring in the latter the $z$ component).
The region which has $m_{vis}>120$ GeV/$c^2$ is
defined as the signal region, while the region
which has $m_{vis}<120$ GeV/$c^2$ is retained as
a control region. We
perform a blind analysis in the signal region, i.e.,
we do not look at the data in the signal region until we have
precisely estimated the backgrounds.
If there is a significant excess
over the known backgrounds, we have
discovered new physics; otherwise, we set
limits on the possible signal rates.
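For concreteness, the $m_{vis}$ construction just described can be
sketched in a few lines; the four-vector layout is an illustrative
convention of ours, not the analysis code.
\begin{verbatim}
import math

def m_vis(lep, tau, met):
    """Visible mass from the four-vector sum of the lepton, the tau,
    and the missing transverse energy vector, the latter taken
    massless with p_z = 0.  lep and tau are (E, px, py, pz) tuples;
    met is (mex, mey)."""
    mex, mey = met
    e  = lep[0] + tau[0] + math.hypot(mex, mey)
    px = lep[1] + tau[1] + mex
    py = lep[2] + tau[2] + mey
    pz = lep[3] + tau[3]
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))
\end{verbatim}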
The thesis is organized as follows:
theoretical models including the SM,
extensions of the SM, and high-mass
tau pair phenomenology are described
in Chapter~\ref{cha:theory}.
The experimental apparatus, including the Fermilab
accelerator and the CDF detector, is introduced
in Chapter~\ref{cha:apparatus}.
We discuss the logic behind the analysis
in Chapter~\ref{cha:Search_Strategy}.
Particle identification for taus, electrons and
muons, and the study of missing transverse energy,
are discussed in detail
in Chapter~\ref{cha:PID}.
The data samples and event selection are discussed
in Chapter~\ref{cha:event}.
The low-mass control region background estimate,
uncertainties, and the observed events are discussed
in Chapter~\ref{cha:control}.
The high-mass signal region, signal acceptance,
background estimate, and uncertainties are
discussed in Chapter~\ref{cha:signal}.
The results of the observed events after opening
the box, and the method used to extract limits, are
discussed
in Chapter~\ref{cha:results}.
Finally, the conclusion is presented
in Chapter~\ref{cha:conclusions}.
\chapter{Theoretical Model}
\label{cha:theory}
The goal of elementary particle physics is to
answer the following fundamental questions:
\begin{itemize}
\item What is the world made of?
\item How do the parts interact?
\end{itemize}
The Standard Model (SM)~\cite{Weinberg:1967tq}
of particle physics
is a beautiful theory which attempts to
\textit{find the simplest model}
that quantitatively answers these questions.
The thousands of cross sections and decay
widths listed in the Particle Data Group (PDG)~\cite{Eidelman:2004wy},
and all of the data from collider
experiments, are calculable and explained in
the framework of the SM, which is the bedrock
of our understanding of Nature.
Building on the success of the SM, ambitious attempts
have been made to extend it.
This thesis is concerned with a direct search for
new particles decaying to two taus. The phenomenology
of tau pairs, namely the production rates of intermediate
bosons and the branching ratio of their decays to tau
pairs, in the framework of the SM and some of the
extensions will be presented in this chapter.
\section{The Standard Model}
\label{sec:theory_SM}
The SM elementary particles include the fermion
matter particles and the force carriers.
There are three generations of fermion matter
particles: leptons and quarks. The second and
third generations have the same quantum numbers
as the first generation, but with heavier masses.
The masses of the leptons and quarks are listed in
Table~\ref{tab:LeptonsQuarks}.
The force carriers include the gluon for the strong
interaction, and the photon, the W and Z vector bosons
for the electroweak interaction. The masses of the
force carriers are listed in Table~\ref{tab:ForceCarriers}.
The Higgs boson predicted in the SM is a fundamental
scalar particle whose interaction strength is
proportional to the mass of the elementary particles.
Since it has not yet been discovered, it is not
listed in Table~\ref{tab:ForceCarriers}.
\begin{table}
\begin{center}
\begin{tabular}{|c|lc|c|} \hline
Generation & Particle & & Mass [GeV/$c^2$] \\ \hline \hline
I & electron neutrino & $\nu_e$ & 0 \\
& electron & $e$ & 0.00051 \\
& up quark & $u$ & 0.002 to 0.004 \\
& down quark & $d$ & 0.004 to 0.008 \\ \hline
II & muon neutrino & $\nu_{\mu}$ & 0 \\
& muon & $\mu$ & 0.106 \\
& charm quark & $c$ & 1.15 to 1.35 \\
& strange quark & $s$ & 0.08 to 0.13 \\ \hline
III & tau neutrino & $\nu_{\tau}$ & 0 \\
& tau & $\tau$ & 1.777 \\
& top quark & $t$ & 174.3 $\pm$ 5.1 \\
& bottom quark & $b$ & 4.1 to 4.4 \\ \hline
\end{tabular}
\caption[Leptons and quarks in the SM]
{Three generations of leptons and quarks in the Standard
Model and their masses.}
\label{tab:LeptonsQuarks}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|l|lc|c|} \hline
Force & Carrier & & Mass [GeV/$c^2$] \\ \hline \hline
electromagnetic & photon & $\gamma$ & 0 \\
charged weak & W boson & $W^{\pm}$ & 80.425 $\pm$ 0.038 \\
neutral weak & Z boson & $Z^0$ & 91.1876 $\pm$ 0.0021 \\
strong & gluon & $g$ & 0 \\ \hline
\end{tabular}
\caption[Force carriers in the SM]
{Force carriers in the Standard Model and their masses.}
\label{tab:ForceCarriers}
\end{center}
\end{table}
The SU(3)$_C$$\times$SU(2)$_L$$\times$U(1)$_Y$
structure of the leptons and quarks is shown in
Fig.~\ref{fig:GaugeSymmetries}.
The quarks are arranged in triplets with respect to the
color gauge group SU(3)$_C$, with indices as red ($r$),
green ($g$), and blue ($b$).
\begin{equation}
q = \left( \begin{array}{c}
q_r \\
q_g \\
q_b
\end{array}
\right)
\end{equation}
The left- and right-handed fermions have different
transformation properties under the weak isospin
group SU(2)$_L$. The left-handed fermions
are arranged in doublets, and the right-handed
fermions are arranged in singlets. There is no
right-handed neutrino in the SM.
\begin{equation}
\begin{array}{rccccccccc}
\mbox{Leptons:}
& \left(\begin{array}{c} \nu_e \\ e \end{array}\right)_L
& \left(\begin{array}{c} \nu_{\mu} \\ \mu \end{array}\right)_L
& \left(\begin{array}{c} \nu_{\tau} \\ \tau \end{array}\right)_L
&
& e_R
&
& \mu_R
&
& \tau_R \\
\mbox{Quarks:}
& \left(\begin{array}{c} u \\ d \end{array}\right)_L
& \left(\begin{array}{c} c \\ s \end{array}\right)_L
& \left(\begin{array}{c} t \\ b \end{array}\right)_L
& u_R
& d_R
& c_R
& s_R
& t_R
& b_R
\end{array}
\end{equation}
Table~\ref{tab:QuantumNumbers}
lists the transformation properties, i.e., the quantum
numbers, of the fermions of the first generation under
the gauge groups. The hypercharge of U(1)$_Y$ is
related to the electric charge by $Q = T_L^3 + \frac{Y}{2}$.
The assignments of the quantum numbers to the second and third
generations are the same.
A brief review about how this structure emerges is given in
Appendix~\ref{cha:app_structure}.
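As a quick consistency check, the relation $Q = T_L^3 + \frac{Y}{2}$
can be verified mechanically against the entries of
Table~\ref{tab:QuantumNumbers}; here is a small Python sketch using
exact fractions:
\begin{verbatim}
from fractions import Fraction as F

# (Q, T3, Y) for the first-generation fermions of the table
fermions = {
    "nu_e": (F(0),     F(1, 2),  F(-1)),
    "e_L":  (F(-1),    F(-1, 2), F(-1)),
    "e_R":  (F(-1),    F(0),     F(-2)),
    "u_L":  (F(2, 3),  F(1, 2),  F(1, 3)),
    "d_L":  (F(-1, 3), F(-1, 2), F(1, 3)),
    "u_R":  (F(2, 3),  F(0),     F(4, 3)),
    "d_R":  (F(-1, 3), F(0),     F(-2, 3)),
}
for name, (Q, T3, Y) in fermions.items():
    assert Q == T3 + Y / 2, name   # Q = T_L^3 + Y/2 holds for all
\end{verbatim}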
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{2.1/Standard_Model_2.eps}}
\caption[SU(3)$_C$$\times$SU(2)$_L$$\times$U(1)$_Y$ gauge symmetries]
{SU(3)$_C$$\times$SU(2)$_L$$\times$U(1)$_Y$
gauge symmetries of fermions in the Standard Model.
Quarks have three color degrees-of-freedom, while
leptons are colorless. Left-handed fermions
are arranged in SU(2) weak isospin doublets and
right-handed fermions are arranged in SU(2)
singlets. Each fermion also has
U(1) weak hyper-charge. The interactions
are uniquely specified by the gauge symmetries.}
\label{fig:GaugeSymmetries}
\end{center}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
& $Q$ & $T_L^3$ & $Y$ & $C$ \\ \hline \hline
$\nu_e$ & 0 & 1/2 & -1 & 0 \\
$e_L$ & -1 & -1/2 & -1 & 0 \\ \hline
$e_R$ & -1 & 0 & -2 & 0 \\ \hline
$u_L$ & 2/3 & 1/2 & 1/3 & $r,g,b$ \\
$d_L$ & -1/3 & -1/2 & 1/3 & $r,g,b$ \\ \hline
$u_R$ & 2/3 & 0 & 4/3 & $r,g,b$ \\ \hline
$d_R$ & -1/3 & 0 & -2/3 & $r,g,b$ \\ \hline
\end{tabular}
\caption[Quantum numbers of the fermions]
{Quantum numbers of the fermions.}
\label{tab:QuantumNumbers}
\end{center}
\end{table}
The interactions are uniquely specified by the
SU(3)$_C$$\times$SU(2)$_L$$\times$U(1)$_Y$
gauge symmetries. All of the gauge bosons
and fermions acquire mass through the
Higgs mechanism~\cite{Higgs:1964ia},
which introduces an extra Higgs boson; the physical
vacuum spontaneously breaks the symmetry in the field space
of the Higgs potential.
The quark states in charged weak interactions mediated by
$W^{\pm}$ bosons are not the physical states, but rather
quantum superpositions of the physical states, described
by the CKM (Cabibbo-Kobayashi-Maskawa)
matrix~\cite{Cabibbo:1963yz}.
\begin{equation}
\left( \begin{array}{c}
d \\
s \\
b
\end{array}
\right)_{\mbox{weak}}
=
\left( \begin{array}{ccc}
V_{ud} & V_{us} & V_{ub} \\
V_{cd} & V_{cs} & V_{cb} \\
V_{td} & V_{ts} & V_{tb}
\end{array}
\right)
\left( \begin{array}{c}
d \\
s \\
b
\end{array}
\right)_{\mbox{mass}}
\end{equation}
The topic of this thesis is mostly related
to the fermion couplings. The couplings to
fermions in the SM are listed in
Table~\ref{tab:CouplingsToFermions}.
A very detailed review with explicit derivations
on these topics starting from the gauge symmetry
to the couplings to the fermions in the SM
is given in Appendix~\ref{cha:app_gs_ssb}.
\begin{table}
\begin{center}
\begin{tabular}{|ll|c|c|} \hline
&
& Left Coupling
& Right Coupling \\ \hline \hline
Higgs &
$H \to f\bar{f}$ & $\frac{m_f}{v}$
& $\frac{m_f}{v}$ \\ \hline
Strong &
$g \to q\bar{q}$ & $\frac{g_3}{2}\lambda^a$
& $\frac{g_3}{2}\lambda^a$ \\ \hline
EM &
$\gamma \to f\bar{f}$ & $eQ_f$
& $eQ_f$ \\ \hline
Weak &
$Z^0 \to f\bar{f}$ &
$\frac{g_2}{\cos\theta_W}(T^3_f - \sin^2\theta_W Q_f)$
&
$-\frac{g_2}{\cos\theta_W}\sin^2\theta_WQ_f$ \\
&
$W^{\pm}\to l\nu_l$ & $\frac{g_2}{\sqrt{2}}$
& 0 \\
&
$W^{\pm}\to qq'$ & $V_{qq'}\frac{g_2}{\sqrt{2}}$
& 0 \\ \hline
\end{tabular}
\caption[Couplings to fermions in the SM]
{Couplings to fermions in the Standard Model.}
\label{tab:CouplingsToFermions}
\end{center}
\end{table}
In spite of its tremendous success in explaining
collider results, there are still many
unexplained aspects of the SM. The set of group
representations and hypercharges it requires is quite
bizarre, and there are 18 free parameters which must be
input from experiment: 3 gauge couplings (usually taken
as $e$, $\sin^2\theta_W$ and $g_3$), 2 Higgs potential
couplings (usually taken as $m_Z$ and $m_H$), 9 fermion
masses, and 4 CKM mixing parameters. Do particle masses
really originate from a Higgs field? Can all the particle
interactions be unified in a simple gauge group? What is
the origin of the CKM matrix? The ultimate ``theory of
everything'' should explain all of these parameters. The
dream, for example, would be to express
everything in terms of the Planck constant $\hbar$, the
speed of light $c$, and the mathematical constant $\pi$,
without any free parameters. That would be an amazing
accomplishment. There
are still many things to do in particle physics in the
direction of \emph{finding the simplest model}, and many
exciting challenges are ahead!
\section{Extensions to the Standard Model}
\label{sec:theory_BSM}
One interesting extension of the SM is to add a new U(1) gauge
group. This predicts a new
$Z'$ gauge boson~\cite{Carena:2004xs}
at a high energy scale.
We will use the $Z'$ as our model to calculate the signal
acceptance for any kind of new vector boson.
Another interesting extension is
supersymmetry~\cite{Wess:1974tw},
which is
motivated by the desire to unify fermions and bosons,
shown in Fig.~\ref{fig:Supersymmetry}.
\begin{figure}
\begin{center}
\parbox{3.7in}{\epsfxsize=\hsize\epsffile{2.2/Supersymmetry.eps}}
\caption[Particles in the Supersymmetry Theory]
{Particles in the Supersymmetry Theory.}
\label{fig:Supersymmetry}
\end{center}
\end{figure}
For each fermion
(lepton and quark) it predicts a bosonic superpartner
(slepton and squark), and for each gauge boson it predicts
a fermionic superpartner (gaugino).
There is a divergence
from the scalar contributions to the radiative corrections
to the Higgs mass in the SM, while the new fermion loops
appearing in supersymmetry have a negative sign
relative to the scalar contributions and thus cancel the
divergence.
We will use the pseudoscalar Higgs particle $A$, one of the
Higgs particles predicted in
the minimal supersymmetric extension of the Standard Model
(MSSM)~\cite{Nilles:1983ge},
as our model to
calculate the signal acceptance for any kind of new scalar
boson.
\section{High Mass Tau Pairs}
\label{sec:theory_tt}
At the Tevatron,
tau pair production in the SM proceeds through the Drell-Yan process,
$p\bar{p}\to\gamma^*/Z\to\tau\tau$,
as shown in Fig.~\ref{fig:TauTau_1}.
The center-of-mass energy of
$p\bar{p}$ collisions at the Tevatron is 1.96 TeV. At the parton
level, an incoming quark from the proton and an anti-quark from
the anti-proton collide via an intermediate boson which decays to two outgoing
taus. The details of how to calculate cross sections are given in
Appendix~\ref{cha:HowToCalculateXSec}, and the mass spectrum of the
final two taus is shown in Fig.~\ref{fig:TauTau_3}. We perform
a direct search for a new hypothetical particle in the high-mass region via
its decay to two taus, $X\to\tau\tau$. The low-mass region of the SM
process $\gamma^*/Z\to\tau\tau$ is the control region,
and its high-mass Drell-Yan tail is the major background for
this search.
\begin{figure}
\begin{center}
\parbox{4.5in}{\epsfxsize=\hsize\epsffile{2.3/TauTau_1.eps}}
\caption[Tau pair production
$p\bar{p}\to\gamma^*/Z\to\tau\tau$
in the SM]
{Tau pair production
$p\bar{p}\to\gamma^*/Z\to\tau\tau$
in the Standard Model.}
\label{fig:TauTau_1}
\vspace{0.5in}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{2.3/TauTau_3.eps}}
\caption[High mass tau pair search]
{High mass tau pair search.
Low-mass region including the $Z$ peak is the control
region. High-mass region is the signal region.
The high-mass tail of the Drell-Yan process is
the main background of this search. The signature
of new particles is a significant deviation from
the known backgrounds, such as the $X$ shown in this plot.}
\label{fig:TauTau_3}
\end{center}
\end{figure}
The two extensions described above are
shown in Fig.~\ref{fig:TauTau_2}. For the U(1) extension, we consider
the simplest model, with the same interactions as the $Z$ boson in the
SM, called the sequential $Z'$; the only unknown parameter is
the mass of the new gauge boson. The MSSM requires two Higgs doublets,
and the ratio of the two Higgs vacuum expectation values is defined as
$\tan\beta$, which is undetermined and must be treated as a free
parameter. Thus the $A$ boson is governed by one more free parameter
in addition to its mass.
The couplings to fermions in the SM are listed in
Table~\ref{tab:CouplingsToFermions}. For each mass point of
the sequential $Z'$, we can use the same couplings to fermions
as the $Z$ boson in the SM and repeat the procedure to calculate
the cross section.
The leading-order cross section $\sigma_0$ is subject to a
correction, the $K$ factor~\cite{Barger:1997},
such that the corrected cross section is
$\sigma
= (1 + \mbox{correction}) \times \sigma_0
= K \times \sigma_0$.
Including the $K$ factor, the predicted cross section
versus mass for the sequential $Z'$ is shown in
Fig.~\ref{fig:TauTau_4}.
The SM requires one Higgs doublet, with the coupling of the SM
Higgs boson to fermions being $m_f/v$, where $m_f$ is the fermion
mass and $v$ is the vacuum expectation value of the SM Higgs
field, about 246 GeV. Therefore the Higgs boson prefers to couple to the
fermions of the heaviest generation. In the MSSM, at large
$\tan\beta$, the couplings of $A\to\tau\tau$ and
$A\to b\bar{b}$ are enhanced to $m_f\tan\beta/v$, whereas
the coupling of $A\to t\bar{t}$ is suppressed to
$m_t\cot\beta/v$ when the top quark is kinematically accessible,
i.e. $m_A > 2 m_t \approx 350$ GeV/$c^2$. We use the programs
{\textsc HIGLU}~\cite{Spira:1995mt} and
{\textsc HDECAY}~\cite{Djouadi:1997yw}
to calculate the next-to-leading-order cross
section of $gg\to A\to\tau\tau$. They are also shown in
Fig.~\ref{fig:TauTau_4}.
\begin{figure}
\begin{center}
\parbox{3.0in}{\epsfxsize=\hsize\epsffile{2.3/TauTau_2.eps}}
\caption[Tree-level Feynman diagrams for
$Z'\to\tau\tau$ and $A\to\tau\tau$]
{Tree-level Feynman diagrams for the production
at a $p\bar{p}$ collider and decays of the $Z'$ predicted
in the U(1) extension and the pseudoscalar $A$ predicted in
the minimal supersymmetric extension of the Standard
Model.}
\label{fig:TauTau_2}
\vspace{0.5in}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{2.3/TauTau_4.eps}}
\caption[Theoretical signal
$\sigma(p\bar{p}\to X)\cdot\mbox{B}(X\to\tau\tau)$]
{Theoretical signal
$\sigma(p\bar{p}\to X)\cdot\mbox{B}(X\to\tau\tau)$.}
\label{fig:TauTau_4}
\end{center}
\end{figure}
\chapter{The Tevatron Accelerator and the CDF Detector}
\label{cha:apparatus}
Fermilab is the home of the highest energy
particle accelerator in the world, the Tevatron.
The center-of-mass energy of proton-antiproton
($p\bar{p}$) collision is $\sqrt{s}=1.96$ TeV.
We shall describe the Tevatron accelerator and
the Collider Detector at Fermilab (CDF) in this
chapter.
\section{Fermilab's Accelerator Chain}
\label{sec:accelerator}
Protons and antiprotons have equal and opposite electric
charge. The advantage of a $p\bar{p}$ collider is that the $p$ and
$\bar{p}$ travel in opposite
directions through the magnets, so a $p\bar{p}$ collider
can be built with one ring of magnets instead of two.
The disadvantage is that it is difficult
to produce and accumulate $\bar{p}$ with high efficiency.
An aerial view of Fermilab is shown in
Fig.~\ref{fig:Ferimlab}. Fermilab's accelerator
chain is shown in Fig.~\ref{fig:accelerator}. It
consists of the Proton/Antiproton Sources (8 GeV), the Main
Injector (150 GeV), the Recycler, and the Tevatron (980 GeV).
\begin{figure}
\begin{center}
\parbox{4.0in}{\epsfxsize=\hsize\epsffile{3.1/aerial_oblique.eps}}
\caption[Aerial view of Fermilab]
{Aerial view of Fermilab showing
the Main Injector in the foreground,
the Tevatron collider ring and the
fixed target facilities in the background.}
\label{fig:Ferimlab}
\vspace{0.5in}
\parbox{4.4in}{\epsfxsize=\hsize\epsffile{3.1/accelerator.eps}}
\caption[Fermilab's accelerator chain]
{Fermilab's accelerator chain consists of
the 8 GeV proton source, the 8 GeV anti-proton
source, the Main Injector, the Recycler for
recycling the precious anti-protons,
and the Tevatron. The Main Injector accelerates
protons and anti-protons to 150 GeV. The
Tevatron ramps up their energies to 980 GeV.
The center-of-mass energy of $p\bar{p}$
collision is thus 1.96 TeV. The linear
accelerators for the fixed target experiments
are also shown.}
\label{fig:accelerator}
\end{center}
\end{figure}
The Proton Source includes the Cockcroft-Walton, the Linear
Accelerator (Linac), and the Booster. The Cockcroft-Walton
uses DC power to accelerate H$^-$ ions to 750 keV.
The Linac uses Radio Frequency (RF) power to
accelerate H$^-$ ions to 400 MeV. The electrons are
stripped off and the bare protons are injected into
the Booster. The Booster uses RF cavities to accelerate
protons to 8 GeV.
The Anti-proton Source includes the Target Station,
the Debuncher and the Accumulator. A bunched beam of 120 GeV
protons from the Main Injector hits a Nickel Target to
make anti-protons and other particles as well. The
particles are focused with a lithium lens and filtered
through a pulsed magnet acting as a charge-mass
spectrometer to select anti-protons. The antiproton
beam is bunched, since the beam from the Main Injector
is bunched, and the antiprotons have a wide range of
energies, positions and angles. The transverse spread of the beam out of
the Target Station is ``hot'', in a sense analogous to
temperature. Both RF and stochastic cooling systems
are used in the momentum stacking process.
The Debuncher exchanges the large energy spread and narrow
time spread into a narrow energy spread and large time
spread. The Accumulator stacks successive pulses of
antiprotons from the Debuncher over several hours or
days. For every million protons that hit the target,
only about twenty 8 GeV anti-protons finally get stacked
into the Accumulator.
Protons at 8 GeV from the Booster are injected into
the Main Injector. They are accelerated to 120 GeV
for fixed target experiments or 150 GeV for injection
into the Tevatron. Antiprotons at 8 GeV from either
the Accumulator or the Recycler are accelerated to
150 GeV in the Main Injector and then injected into
the Tevatron.
The Recycler is placed directly above the Main
Injector beamline, near the ceiling. One role of
the Recycler is a post-Accumulator ring. Another
role, and by far the leading factor in the luminosity
increase, is to act as a recycler for the precious
antiprotons left over at the end of Tevatron stores.
It is a ring of
steel cases
holding bricks of ``refrigerator'' magnets (the
same permanent magnet used in home refrigerators).
Permanent magnets do not need power supplies,
cooling water systems, or electrical safety systems.
The Recycler is a highly reliable storage ring
for antiprotons.
The Tevatron was the world's first superconducting
synchrotron. A magnet with superconducting coils
has no electrical resistance and consumes minimal electrical
power, except that needed to keep the magnets cold.
The particles of a beam are guided around the closed
path by a dipole magnetic field. The radius of the
circle is 1000 meters. As the beam energy is ramped
up by RF cavities from 150 GeV to 980 GeV, the bending
magnetic field and the RF frequency must be
synchronized to keep the particles in the ring;
this enables stable longitudinal motion.
The stability of the transverse motion is achieved with
a series of quadrupole magnets with alternating gradient.
Luminosity is a measure of the chance that a proton
will collide with an antiproton. To achieve high
luminosity we place as many particles as possible
into as small a collision region as possible. At the
interaction point, the two beams of $p$ and
$\bar{p}$ are brought together
by special quadrupole magnets called Low Beta
magnets, shown in Fig.~\ref{fig:ppbar}.
\begin{figure}
\begin{center}
\parbox{1.5in}{\epsfxsize=\hsize\epsffile{3.1/ppbar.eps}}
\caption[$p\bar{p}$ collision]
{$p\bar{p}$ collision.}
\label{fig:ppbar}
\end{center}
\end{figure}
The current status (at the time of writing of this thesis)
of the luminosity is shown in Fig.~\ref{fig:store_lum},
and the integrated luminosity delivered and to tape is
shown in Fig.~\ref{fig:store_tot}.
\begin{figure}
\begin{center}
\parbox{5.0in}{\epsfxsize=\hsize\epsffile{3.1/store_lum.eps}}
\caption[Run II instantaneous initial luminosity]
{Run II instantaneous initial luminosity.}
\label{fig:store_lum}
\vspace{0.5in}
\parbox{5.0in}{\epsfxsize=\hsize\epsffile{3.1/store_tot.eps}}
\caption[Run II integrated luminosity]
{Run II integrated luminosity.}
\label{fig:store_tot}
\end{center}
\end{figure}
The design value for the peak instantaneous luminosity
during Run II is $2\times10^{32}$ cm$^{-2}$s$^{-1}$.
Typically a year allows 10$^7$ seconds of running at
the peak instantaneous luminosity. This is about one third
of the actual number of seconds in a year, which accounts
both for the drop in luminosity and for a normal amount of
down-time. Using the conversion constant
$1 \mbox{ fb} = 10^{-39} \mbox{ cm}^2$, the design value
corresponds to an integrated luminosity about 2 fb$^{-1}$
per year.
Ultimately it is hoped that an integrated luminosity of
8$-$10 fb$^{-1}$ can be attained in Run II.
The total number of events $N$ in a scattering process
is proportional to the integrated luminosity $L$ and the cross section
$\sigma$ of the process,
\begin{equation}
N = L\sigma
\end{equation}
We can get a rough sense of the reach for new physics and
the challenge of enhancing signal and suppressing
background by considering the following examples.
At a center-of-mass energy of 1.96 TeV, we have
\begin{eqnarray}
\sigma(p\bar{p}\to \mbox{anything}) & \approx & 75 \mbox{ mb} \\
\sigma(p\bar{p}\to t\bar{t}+\mbox{anything}) & \approx & 6 \mbox{ pb} \\
\sigma(p\bar{p}\to hZ +\mbox{anything}) & \approx & 75 \mbox{ fb}
\end{eqnarray}
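To put these numbers in perspective: an integrated luminosity of
2 fb$^{-1}$ = $2\times10^3$ pb$^{-1}$, roughly one year of running at
the design luminosity, corresponds to about $1.5\times10^{14}$
inelastic events, about $1.2\times10^4$ $t\bar{t}$ events, and only
about 150 $hZ$ events; enhancing such rare signals while suppressing
the enormous backgrounds is the central challenge.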
\newpage
\section{The CDF Detector}
\label{sec:detector}
The CDF detector~\cite{Acosta:2004yw} is
cylindrically symmetric around the beamline.
A solid cutaway view is shown in Fig.~\ref{fig:cdfiso},
and an elevation view is shown in Fig.~\ref{fig:cdfelev}.
It is a general-purpose solenoidal detector
with a tracking system, calorimetry and muon detection.
The tracking system is contained in a superconducting
solenoid, 1.5~m in radius and 4.8~m in length.
The magnetic field is 1.4~T, parallel to
the beamline. The calorimetry and muon systems are
outside the solenoid. These sub-systems are
described in more detail below.
\begin{figure}
\begin{center}
\parbox{5.2in}{\epsfxsize=\hsize\epsffile{3.2/cdfiso.eps}}
\caption[Solid cutaway view of CDF II detector]
{Solid cutaway view of CDF II detector.}
\label{fig:cdfiso}
\vspace{0.5in}
\parbox{4.5in}{\epsfxsize=\hsize\epsffile{3.2/cdfelev.ps}}
\caption[Elevation view of CDF II detector]
{Elevation view of CDF II detector.}
\label{fig:cdfelev}
\end{center}
\end{figure}
\subsection{CDF Coordinate System}
\label{subsec:detector_xyz}
The origin of the CDF detector is its geometric
center. The luminous region of the beam at the
interaction point has Gaussian profiles with
$(\sigma_x, \; \sigma_y, \; \sigma_z)_{beam}
\approx
(0.003, \; 0.003, \; 30)$ cm.
The $p\bar{p}$ collision point is not
necessarily at the origin.
The CDF detector uses a right-handed coordinate
system. The horizontal direction pointing out of
the ring of the Tevatron is the positive $x$-axis.
The vertical direction pointing upwards is the
positive $y$-axis. The proton beam direction
pointing to the east is the positive $z$-axis.
A spherical coordinate system is also used.
The radius $r$ is measured from the center of
the beamline. The polar angle $\theta$ is
taken from the positive $z$-axis. The
azimuthal angle $\phi$ is taken anti-clockwise
from the positive $x$-axis.
At a $p\bar{p}$ collider, the production of any
process starts from a parton-parton interaction
which has an unknown boost along the $z$-axis,
but no significant momentum in the plane
perpendicular to the $z$-axis, i.e. the
transverse plane. This makes the transverse
plane
an important plane in $p\bar{p}$ collision.
Momentum conservation requires the vector sum of
the transverse energy and momentum of all of the
final particles to be zero. The transverse
energy $E_T$ and transverse momentum $p_T$ are
defined by
\begin{eqnarray}
E_T & = & E\sin\theta \\
p_T & = & p\sin\theta
\end{eqnarray}
Hard $p\bar{p}$ head-on collisions produce
significant momentum in the transverse plane.
The CDF detector has been optimized to measure
these events. On the other hand, the soft
collisions such as elastic or diffractive
interactions or minimum-bias events, and
by-products from the spectator quarks from hard
collisions, have most of their energy directed
along the beampipe, and will not be measured by
the detector.
Pseudorapidity $\eta$ is used by
high energy physicists and is defined as
\begin{equation}
\eta = -\ln\tan\frac{\theta}{2}
\end{equation}
Consider the occupancy in a large sample
of $p\bar{p}$ collision events.
Typically, particles in a $p\bar{p}$
collision event tend to populate the forward
and backward regions more than the central
region, because there is usually a boost
along the $z$-axis; this shows up
in the $\theta$ occupancy of the particles
of the events in the sample. Now we
transform $\theta$ to $\eta$. The derivative
of $\eta$ is
\begin{equation}
d\eta = -\frac{d\theta}{\sin\theta}
\label{eq:eta_derivative}
\end{equation}
A slice of constant width in $\eta$ corresponds to
a $\theta$ slice of varying width, which is smaller
in the forward and backward regions
than in the central region. This
makes the $\eta$ occupancy more uniform than
the $\theta$ occupancy. For this reason,
calorimeters are constructed in $\eta$
slices instead of $\theta$ slices.
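A short numerical sketch, with illustrative function names of our own,
makes the effect of the transformation concrete:
\begin{verbatim}
import math

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)), theta being the polar angle in rad."""
    return -math.log(math.tan(theta / 2.0))

def theta_from_eta(eta):
    """Invert eta back to the polar angle."""
    return 2.0 * math.atan(math.exp(-eta))

# A slice of constant width in eta covers a narrower range of theta
# in the forward region than in the central region:
for eta in (0.0, 1.0, 2.0, 3.0):
    d_theta = theta_from_eta(eta) - theta_from_eta(eta + 0.1)
    print(f"eta = {eta:.1f}: a 0.1 eta slice spans "
          f"{math.degrees(d_theta):.2f} deg")
\end{verbatim}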
\subsection{Tracking}
\label{subsec:detector_trk}
The tracking volume is surrounded by the solenoid
magnet and the endplug calorimeters as shown in
Fig.~\ref{fig:cdfii_tracker_quad}. The tracking
system records the paths of charged particles
produced in the $p\bar{p}$ collisions. It consists of a
silicon microstrip system~\cite{Sill:2000zz} with
radius from $r=1.5$ to 28 cm and $|\eta|<2$, and
an open-cell wire drift chamber called central
outer tracker (COT)~\cite{Affolder:2003ep} with
radius from $r=40$ to 137 cm and $|\eta|<1$.
\begin{figure}
\begin{center}
\parbox{4.7in}{\epsfxsize=\hsize\epsffile{3.2.2/cdfii_tracker_quad.eps}}
\caption[CDF II tracking volume]
{CDF II tracking volume.}
\label{fig:cdfii_tracker_quad}
\end{center}
\end{figure}
The silicon microstrip sensors are made from silicon with a p-n junction. When
p-type semiconductors and n-type semiconductors are
brought together to form a p-n junction, the migration
of holes and electrons leaves a region of net charge
of opposite sign on each side, called the depletion
region (depleted of free charge carriers). The
p-n junction can be made at the surface of a
silicon wafer with the bulk being n-type (or the
other way around). By applying a reverse-bias voltage
we can extend the depletion region to the full volume of the device.
A charged
particle moving through this depletion region
creates electron-hole pairs, which drift and are
collected at the surfaces.
This induces a signal on metal strips deposited
on the surface, connected to readout amplifiers.
The silicon microstrip detector consists of
three components:
the Layer~00, the Silicon VerteX detector II (SVX~II), and
the Intermediate Silicon Layers (ISL).
An end
view is shown in Fig.~\ref{fig:cdf_silicon_endview}.
Layer~00 is physically mounted on and supported by
the beam pipe.
The sensors are single-sided p-in-n
silicon and have a pitch of 25~$\mu$m.
The next five layers compose the SVX II and are
double-sided detectors. The axial side of each
layer is used for $r$-$\phi$ measurements and the
sensors have a strip pitch of about 60~$\mu$m. The
stereo side of each layer is used for $r$-$z$
measurements. Both 90$^{\circ}$ and small-angle
stereo sensors are used in the pattern
(90, 90, $-$1.2, 90, +1.2) degrees and have a
strip pitch of (141, 125.5, 60, 141, 60)~$\mu$m from
the innermost to outermost layers.
The two outer layers compose the ISL and are
double-sided detectors with a strip pitch of
112~$\mu$m on both the axial and the
1.2$^{\circ}$ stereo sides.
This entire system allows charged particle
track reconstruction
in three dimensions. The impact parameter
resolution of SVX~II~+~ISL is 40~$\mu$m
including 30~$\mu$m contribution from the
beamline. The~$z_0$ resolution of SVX~II~+~ISL
is 70~$\mu$m.
The COT is arranged in 8 superlayers shown in
Fig.~\ref{fig:cot_plate}. The superlayers are
alternately axial and $\pm$2$^{\circ}$ stereo,
four axial layers for $r$-$\phi$ measurement and
four stereo layers for $r$-$z$ measurement. Within
each superlayer are cells which are tilted about
30$^{\circ}$ to the radial direction to compensate
for the Lorentz angle of the drifting charged
particles due to the solenoid magnet field. Each
cell consists of 12 layers of sense wires, thus
total 8$\times$12 = 96 measurements per track.
\begin{figure}
\begin{center}
\begin{minipage}[b]{2.5in}
\begin{center}
\parbox{2.5in}{\epsfxsize=\hsize\epsffile{3.2.2/cdf_silicon_endview.eps}}
\caption[Silicon system]
{Silicon system.}
\label{fig:cdf_silicon_endview}
\end{center}
\end{minipage}
\hspace{0.4in}
\begin{minipage}[b]{2.5in}
\begin{center}
\parbox{2.5in}{\epsfxsize=\hsize\epsffile{3.2.2/cot_plate.eps}}
\caption[COT superlayers]
{COT superlayers.}
\label{fig:cot_plate}
\end{center}
\end{minipage}
\end{center}
\end{figure}
The COT is filled with a mixture of
argon:ethane~=~50:50, which determines the drift
velocity $v$. A charged particle enters the
gas and ionizes it, producing electrons.
There is an electric field around each sense wire.
In the low
electric field region, the ionization electrons
drift toward the sense wire. In the high electric field
region within a few radii of the sense wire, there
is an avalanche multiplication of charges by
electron-atom collisions. A signal is
induced via the motion of the electrons. By measuring
the drift time $\Delta t$ (the arrival time of the ``first''
electrons at the sense wire relative to the collision time
$t_0$), we can calculate the distance of the hit,
$D = v \Delta t$.
A track is formed from a series of hits, fit to a helix.
We can measure the curvature of a track, $C = 1/R$, and
then calculate the transverse momentum $p_T = 0.3RB$,
with $p_T$, $R$ and
$B$ in units of GeV/$c$, m, and T, respectively.
The hit position resolution is approximately 140
$\mu$m and the momentum resolution is
$\sigma(p_T)/p_T^2$ = 0.0015~(GeV/$c$)$^{-1}$.
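As a minimal illustration of the momentum formula (the function name
and example values are ours):
\begin{verbatim}
def pt_from_radius(radius_m, b_tesla=1.4):
    """Transverse momentum in GeV/c from the helix radius in metres,
    using p_T = 0.3*R*B with the CDF solenoid field B = 1.4 T."""
    return 0.3 * radius_m * b_tesla

# e.g. a track curving with radius R = 2 m carries
# p_T = 0.3 * 2 * 1.4 = 0.84 GeV/c
print(pt_from_radius(2.0))
\end{verbatim}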
\subsection{Calorimetry}
\label{subsec:detector_calo}
The CDF electromagnetic and hadronic sampling calorimeters
surround the tracking system and measure the energy flow
of interacting particles up to $|\eta|<3.64$. They are
segmented in $\eta$ and $\phi$ with a projective ``tower''
geometry, shown in Fig.~\ref{fig:calor_tower_segementation}.
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{3.2.3/calor_tower_segementation.eps}}
\caption[Calorimeter tower segmentation in $\eta-\phi$ space]
{Calorimeter tower segmentation in $\eta-\phi$ space.}
\label{fig:calor_tower_segementation}
\end{center}
\end{figure}
The energy measurement is done by sampling
calorimeters,
which are sandwiches of absorber and
sampling scintillator with phototube readout.
When interacting with the absorber, electrons lose energy by
ionization and bremsstrahlung, and photons lose energy
by the photoelectric effect, Compton scattering and pair
production. Both electrons and photons develop
electromagnetic shower cascades. The size of the
longitudinal shower cascade grows only logarithmically
with energy. A very useful cascade parameter is
the radiation length $X_0$ which is the mean distance
for the $e^{\pm}$ to lose all but 1/e of its energy.
For example, for a 10 GeV electron in
lead glass, the maximum electromagnetic shower is at
about 6$X_0$ and the 95\% containment depth is at about 16$X_0$.
Hadrons lose energy by nuclear interaction cascades which
can have charged pions, protons, kaons, neutrons, neutral
pions, neutrinos, soft photons, muons, etc. It is much more
complicated than an electromagnetic cascade and thus results in a large
fluctuation in energy measurement. In analogy to $X_0$,
a hadronic interaction length $\lambda$ can be defined.
Hadronic showers are much longer than the electromagnetic
ones.
The central calorimeters consist of the central
electromagnetic calorimeter (CEM)~\cite{Balka:1987ty},
the central hadronic calorimeter (CHA)~\cite{Bertolucci:1987zn},
and the end wall hadronic calorimeter (WHA).
At approximately 6$X_0$ in depth in the CEM, at which
electromagnetic showers typically reach the maximum
in their shower profile,
is the central shower maximum detector
(CES). The CEM and CHA are constructed in wedges which
span 15$^{\circ}$ in azimuth and extend about 250~cm in
the positive and negative $z$ direction, shown in
Fig.~\ref{fig:CCAL}. There are thus 24 wedges on both
the $+z$ and $-z$ sides of the detector, for a total of
48. A wedge contains ten towers, each of which covers
a range of 0.11 in pseudorapidity. Thus each tower subtends
$0.11\times15^{\circ}$ in $\eta\times\phi$. CEM covers
$0<|\eta|<1.1$, CHA covers $0<|\eta|<0.9$, and WHA covers
$0.7<|\eta|<1.3$.
The CEM uses lead sheets interspersed with polystyrene
scintillator as the active medium and employs phototube
readout. It is approximately 19$X_0$ in depth, and has an
energy resolution of $13.5\%/\sqrt{E_T}\oplus2\%$, where
$\oplus$ denotes addition in quadrature.
The CES uses proportional strip and wire counters in a
fine-grained array, as shown in Fig.~\ref{fig:CES}, to
provide precise position (about 2~mm resolution) and
shape information for electromagnetic cascades.
The CHA and WHA use steel absorber interspersed with
acrylic scintillator as the active medium. They are approximately
4.5$\lambda$ in depth, and have an energy resolution of
$75\%/\sqrt{E_T}\oplus3\%$.
The plug calorimeters consist of the plug electromagnetic
calorimeter (PEM)~\cite{Albrow:2001jw}, and the plug
hadronic calorimeter (PHA). At approximately 6$X_0$ in
depth in PEM is the plug shower maximum detector (PES).
Fig.~\ref{fig:PCAL} shows the layout of the detector and
coverage in polar angle $36.8^{\circ}>\theta>3^{\circ}$
($1.1<|\eta|<3.64$). Each plug wedge spans 15$^{\circ}$
in azimuth; however, in the range
$36.8^{\circ}>\theta>13.8^{\circ}$ ($1.1<|\eta|<2.11$)
the segmentation in azimuth is doubled and each tower spans
only 7.5$^{\circ}$.
The PEM is a lead-scintillator sampling
calorimeter. It is approximately 21$X_0$ in depth, and has an
energy resolution of $16\%/\sqrt{E}\oplus1\%$.
The PES consists of two layers
of scintillating strips: U and V layers offset from the
radial direction by $+22.5^{\circ}$ and $-22.5^{\circ}$
respectively, as shown in Fig.~\ref{fig:PES}. The position
resolution of the PES is about 1~mm.
The PHA is a steel-scintillator sampling
calorimeter. It is approximately 7$\lambda$ in depth, and
has an energy resolution of $74\%/\sqrt{E}\oplus4\%$.
\begin{figure}
\begin{center}
\begin{minipage}[b]{2.8in}
\begin{center}
\parbox{2.2in}{\epsfxsize=\hsize\epsffile{3.2.3/cdf_pic6_cem_wedge.eps}}
\caption[CEM/CES/CHA wedge]
{CEM/CES/CHA wedge.}
\label{fig:CCAL}
\end{center}
\end{minipage}
\hspace{0.4in}
\begin{minipage}[b]{2.2in}
\begin{center}
\parbox{2.2in}{\epsfxsize=\hsize\epsffile{3.2.3/cdf_pic7_ces_2.eps}}
\caption[CES strip and wire]
{CES strip and wire.}
\label{fig:CES}
\end{center}
\end{minipage}
\\
\vspace{0.3in}
\begin{minipage}[b]{2.8in}
\begin{center}
\parbox{2.8in}{\epsfxsize=\hsize\epsffile{3.2.3/plug_schem.ps}}
\caption[PEM/PES/PHA layout]
{PEM/PES/PHA layout.}
\label{fig:PCAL}
\end{center}
\end{minipage}
\hspace{0.4in}
\begin{minipage}[b]{2.2in}
\begin{center}
\parbox{1.8in}{\epsfxsize=\hsize\epsffile{3.2.3/plug_smd_uv.ps}}
\caption[PES U and V layers]
{PES U and V layers.}
\label{fig:PES}
\end{center}
\end{minipage}
\end{center}
\end{figure}
\subsection{Muon Chambers}
\label{subsec:detector_muon}
The muon chambers are situated outside the calorimeters.
In addition to the calorimeters, the magnet return
yoke and additional steel shielding are used to
stop electrons, photons and hadrons from entering
the muon chambers. The muon is a minimum-ionizing
particle which loses very little energy in the detector
material. The muon's lifetime is long enough to allow
it to pass through all the detector components,
reach the muon chambers, and decay outside.
A muon chamber contains a stacked array of drift
tubes and operates with a gas mixture of
argon:ethane~=~50:50. The basic drift principle
is the same as that of the COT, but the COT is a
multi-wire chamber, while at the center of a muon
drift tube there is only a single sense wire. The
sense wire is connected to a positive high voltage
while the wall of the tube is connected to a
negative high voltage to produce a roughly uniform
time-to-distance relationship throughout the tube.
The drift time of a single hit gives the distance
to the sense wire, and the charge division at each
end of a sense wire can in principle be used to
measure the longitudinal coordinate along the
sense wire. The hits in the muon chamber are linked
together to form a short track segment called a
muon stub.
If a muon stub is
matched to an extrapolated track, a muon is
reconstructed. This is shown in Fig.~\ref{fig:CMU_tower}.
\begin{figure}
\begin{center}
\parbox{5.0in}{\epsfxsize=\hsize\epsffile{3.2.4/cdf_pic10_cmu_tower.ps}}
\caption[Muon stub matching to a track]
{Muon stub matching to a track.}
\label{fig:CMU_tower}
\end{center}
\end{figure}
There are four independent muon detectors: the
central muon detector (CMU)~\cite{Ascoli:1987av},
the central muon upgrade (CMP), the central
muon extension (CMX), and the intermediate muon
detector (IMU). The muon coverage in $\eta-\phi$
space is shown in Fig.~\ref{fig:newetaphimu}.
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{3.2.4/newetaphimu.eps}}
\caption[Muon coverage in $\eta$ and $\phi$]
{Muon coverage in $\eta$ and $\phi$.}
\label{fig:newetaphimu}
\end{center}
\end{figure}
The CMU is behind the central hadronic calorimeter
and has four layers of cylindrical drift chambers.
The CMP is behind an additional 60 cm of shielding
steel outside the magnet return yoke. It consists of
a second set of four layers with a fixed length in
$z$ and forms a box around the central detector.
Its pseudorapidity coverage thus varies with the
azimuth. A layer of scintillation counters (the
CSP) is installed on the outside surface of the
CMP. The CMU and CMP each cover $|\eta|<0.6$.
The maximum drift time of the CMU is longer than
the $p\bar{p}$ bunch crossing separation. This
can cause an ambiguity in the Level 1 trigger
(described in the next section)
about which bunch the muon belongs to. This ambiguity
is resolved by requiring confirmation from the CMP,
using the CSP scintillators.
The CMX has eight layers and covers $0.6<|\eta|<1.0$.
A layer of scintillation counters (the CSX) is
installed on both the inside and the outside
surfaces of the CMX. No additional steel was
added for this detector because the large angle
through the hadron calorimeter, magnet yoke,
and steel of the detector end support structure
provides more absorber material than in the
central muon detectors. The azimuthal coverage
of CMX has a 30$^{\circ}$ gap for the solenoid
refrigerator.
The IMU consists of barrel chambers (the BMU)
and scintillation counters (the BSU), and covers
the region $1.0<|\eta|<1.5$.
\section{Trigger and Data Acquisition System}
\label{sec:trgdaq}
The trigger system has a three-level architecture:
level 1 (L1), level 2 (L2), and level 3 (L3).
The data volume is reduced at each level which
allows more refined filtering at subsequent levels
with minimal deadtime. The trigger needs to be fast
and accurate to record as many interesting events as
possible, while rejecting uninteresting events.
Each sub-detector generates primitives that we can
``cut'' on. The trigger system block diagram is
shown in Fig.~\ref{fig:trigger_system}.
The available trigger primitives at L1 are
\begin{itemize}
\item XFT tracks, with $\phi$ and $p_T$
provided by the eXtreme Fast Tracker using
the hits in the axial layers of the COT,
\item electrons, based on XFT and HAD/EM which is
the ratio of the hadronic energy and the
electromagnetic energy of a calorimeter
tower,
\item photons, based on HAD/EM ratio,
\item jets, based on EM+HAD,
\item muons, based on muon hits and XFT, and
\item missing $E_T$ and sum $E_T$ which are
the negative of the vector sum and the
scalar sum of the energies of all of
the calorimeter towers, respectively.
\end{itemize}
The available trigger primitives at L2 are
\begin{itemize}
\item SVT, the Silicon Vertex Tracker
trigger based on the track impact
parameter of displaced tracks,
\item jet clusters,
\item isolated clusters, and
\item EM ShowerMax which is the strip and
wire clusters in the CES.
\end{itemize}
\begin{figure}
\begin{center}
\parbox{4.0in}{\epsfxsize=\hsize\epsffile{3.2.5/trigger_system.eps}}
\caption[Trigger system block diagram]
{Trigger system block diagram.}
\label{fig:trigger_system}
\end{center}
\end{figure}
There are two important factors for trigger design:
the time between beam crossing and $\bar{N}$, the
average number of overlapping interactions in a
given beam crossing.
We can have many bunches in the Tevatron
to enhance the luminosity. Since the radius of the ring is
1000 m, a proton (or an anti-proton) at a speed
very close to the speed of light circulates the
ring once every 20 $\mu$s.
To accommodate 36 bunches, the maximum bunch
separation allowed is about 600 ns, and the
Run IIa configuration is 396 ns.
The
bunch separation defines an overall time constant
for signal integration, data acquisition and
triggering.
Another key design input is the average number of
overlapping interactions $\bar{N}$, which is
shown as a function of luminosity and the number
of bunches in
Fig.~\ref{fig:nbar_noctc}~\cite{Blair:1996kx}.
For example, with 36 bunches, $\bar{N}$ is about
1 at $3\times10^{31}$ cm$^{-2}$s$^{-1}$ and
about 10 at $4\times10^{32}$ cm$^{-2}$s$^{-1}$. The
trigger with fast axial tracking at L1 can handle
the former environment, but cannot handle the
latter because of the presence of too many fake tracks.
To be able to handle $4\times10^{32}$ cm$^{-2}$s$^{-1}$
we would need
108 bunches, and even that seems not enough; thus
we would also need to upgrade the trigger to include,
for example, stereo tracking at L1 to suppress fake
tracks.
\begin{figure}
\begin{center}
\parbox{5.0in}{\epsfxsize=\hsize\epsffile{3.2.5/nbar_noctc.ps}}
\caption[Average number of interactions per crossing]
{Average number of interactions per crossing
for various bunches, as a function of
instantaneous luminosity.}
\label{fig:nbar_noctc}
\end{center}
\end{figure}
The data flow in the trigger system is constrained
by the processing time, i.e. how fast a decision
can be made to clear events at each level, and by the
tape-writing speed for permanent storage at the
end of the triggering process. The implementation
needs sufficient buffering while filtering, because
any overflow means deadtime. The ``deadtimeless''
design for 132 ns crossing is shown in
Fig.~\ref{fig:cdf_dataflow}.
\begin{figure}
\begin{center}
\parbox{5.0in}{\epsfxsize=\hsize\epsffile{3.2.5/cdf_dataflow.eps}}
\caption[Data flow of ``deadtimeless'' trigger and data acquisition]
{Data flow of ``deadtimeless'' trigger and data acquisition.}
\label{fig:cdf_dataflow}
\end{center}
\end{figure}
The L1 decision occurs at a fixed time
about 5.5 $\mu$s after beam collision.
L1 is a synchronous hardware trigger.
To process one event every 132 ns, each
detector element is pipelined to have
local data buffering for 42 beam crossings.
The L1 accept rate is less than 50 kHz, which
is limited by the L2 processing time.
The L2 decision time is about 20 $\mu$s.
L2 is a combination of hardware and
software triggers and is asynchronous.
If an event is accepted by L1, the
front-end electronics moves the data
to one of the four onboard L2 buffers.
This is sufficient to process the 50 kHz L1
accept rate and to average
out the rate fluctuations.
The L2 accept rate is about 300 Hz which
is limited by the speed of the
event-builder in L3.
L3 is purely a software trigger consisting
of the event builder running on a large PC
farm. The event builder assembles
event fragments from L1 and L2 into
complete events, and then the PC farm
runs a version of the full offline
reconstruction code. This means that
fully reconstructed three-dimensional
tracks are available to the trigger
decision. The L3 accept rate is about 75 Hz
which is limited by tape writing speed
for permanent storage.
Once an event passes L3 it is delivered to the
data-logger sub-system which sends the event out
to permanent storage for offline reprocessing,
and to online monitors which verify the entire
detector and trigger systems are working properly.
The data used in this
analysis were collected from March 2002 to
September 2003, with 396 ns bunch spacing, 36
bunches, and instantaneous luminosity of about
$3\times10^{31}$ cm$^{-2}$s$^{-1}$.
This means that the trigger (designed for 132 ns)
was fully capable of handling the bunch-crossing
timing, with no need to worry about
multiple interactions in this environment.
\chapter{Search Strategy}
\label{cha:Search_Strategy}
This chapter describes the overall logic of the high-mass
tau tau search. There are three steps:
\begin{enumerate}
\item Use $W\to\tau\nu$ events to cross check the $\tau$
identification efficiency.
\item Use $Z\to\tau\tau$ events to study the low-mass
control region with $m_{vis}<$ 120 GeV/$c^2$.
\item Examine the high-mass signal region with
$m_{vis}>$ 120 GeV/$c^2$
for evidence of an excess signalling new physics.
\end{enumerate}
\vspace{0.2in}
\noindent{\bf Tau Hadronic Decays}
\vspace{0.1in}
The dominant decays of $\tau$'s are into leptons or
into either one or three charged hadrons, shown in
Table~\ref{tab:strategy_1}. The following
short-hand notations for $\tau$ and its decays
are used,
\begin{eqnarray}
\tau_e & & \tau\to e\bar{\nu}\nu \\
\tau_{\mu} & & \tau\to\mu\bar{\nu}\nu \\
\tau_h & & \tau\to\mbox{hadrons}~\nu
\end{eqnarray}
The leptonic decays cannot be distinguished from
prompt leptons, so tau identification uses hadronic
tau decays only, whose visible mass must be less than
the tau mass,
\begin{equation}
m(\tau) = 1.777 \mbox{ GeV/}c^2
\end{equation}
The net charge of the charged tracks is $\pm$1, but
we will not cut on charge because very high energy
taus produce very straight tracks whose charge
sign is ambiguous.
\begin{table}
\begin{center}
\begin{tabular}{|l|l|r|} \hline
Decay Mode & Final Particles & BR \\ \hline \hline
Leptonic & $e^- \bar{\nu}_e \nu_{\tau}$ & 17.8\% \\
& $\mu^- \bar{\nu}_{\mu} \nu_{\tau}$ & 17.4\% \\ \hline
Hadronic 1-prong & $\pi^- \nu_{\tau}$ & 11.1\% \\
& $\pi^- \pi^0 \nu_{\tau}$ & 25.4\% \\
& $\pi^- 2\pi^0 \nu_{\tau}$ & 9.2\% \\
& $\pi^- 3\pi^0 \nu_{\tau}$ & 1.1\% \\
& $K^- \nu_{\tau}$ & 0.7\% \\
& $K^- \pi^0 \nu_{\tau}$ & 0.5\% \\ \hline
Hadronic 3-prong & $2\pi^- \pi^+ \nu_{\tau}$ & 9.5\% \\
& $2\pi^- \pi^+ \pi^0 \nu_{\tau}$ & 4.4\% \\ \hline
\end{tabular}
\end{center}
\caption[Tau dominant decay modes]
{Tau dominant decay modes and branching ratios.}
\label{tab:strategy_1}
\end{table}
The characteristic signature of hadronically decaying
taus is the track multiplicity distribution
with an excess in the 1- and 3-track bins. The excess,
about 2:1 in these bins, is related to the tau hadronic
branching ratios to one or three charged pions. Quark
or gluon jets from QCD processes tend not to have such
low charged track multiplicity, but have a broader
distribution peaking at higher multiplicities (3-5
charged tracks). Other final particles, namely photons,
electrons, and muons, have mainly 0, 1, and 1 tracks, respectively,
which also differs from tau hadronic decays.
Seeing the tau's characteristic track multiplicity signature is
a very important
indication that backgrounds are under control.
Since $\sigma\cdot B(W\to\tau\nu)$ is about ten times
larger than $\sigma\cdot B(Z\to\tau\tau)$~\cite{Acosta:2004uq}
we will use $W\to\tau\nu$ events to cross check the
tau identification efficiency.
\vspace{0.2in}
\noindent{\bf Di-Tau Visible Mass}
\vspace{0.1in}
There are six final states for tau pairs, shown
in Table~\ref{tab:strategy_2}. $\tau_e\tau_e$ and
$\tau_{\mu}\tau_{\mu}$ modes cannot be distinguished
from the prompt $ee$ or the prompt $\mu\mu$,
respectively. $\tau_e\tau_{\mu}$ mode has a special
signature, but its branching ratio is small and its
final particles tend to have low energy.
For this analysis, we will look for three golden final
states with at least one hadronic decay.
\begin{table}
\begin{center}
\begin{tabular}{|c|r|} \hline
Final States & BR \\ \hline \hline
$\tau_e\tau_h$ & 22\% \\
$\tau_{\mu}\tau_h$ & 22\% \\
$\tau_h\tau_h$ & 41\% \\
$\tau_e\tau_{\mu}$ & 3\% \\
$\tau_e\tau_e$ & 6\% \\
$\tau_{\mu}\tau_{\mu}$ & 6\% \\ \hline
\end{tabular}
\end{center}
\caption[Tau pair final states]
{Tau pair final states and their branching ratios.}
\label{tab:strategy_2}
\end{table}
The high-mass tau pair search will be based on just
counting the number of events with some specified set
of cuts. It is desirable to measure, for some variable,
a distribution which agrees with the Standard Model in
some range but deviates from it in another, thus giving a
more convincing signal while also providing an estimate
of the new particle's mass scale.
There are at least two missing neutrinos in the golden
final states, and therefore six unknown momentum components.
With only two constraints from the two components of
the missing transverse energy
and two constraints from the two tau masses, the system
is under-constrained by two degrees of freedom. It is
therefore not possible to reconstruct the tau pair
invariant mass in general.
The mass of the sum of the two tau's visible momentum and
the missing transverse energy $\,/\!\!\!\!E_{T}$ with its $z$-component
set to zero is called the visible mass,
\begin{equation}
m_{vis} = m(\tau^1_{vis} + \tau^2_{vis} + \,/\!\!\!\!E_{T})
\end{equation}
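As an illustration only (not the analysis code), a minimal Python
sketch of this quantity is given below; the function name and the
input conventions, visible four-vectors as $(E, p_x, p_y, p_z)$ and
the $\,/\!\!\!\!E_{T}$ as its two transverse components, are our own
assumptions.
\begin{verbatim}
import math

def visible_mass(tau1, tau2, met):
    # tau1, tau2: visible four-vectors (E, px, py, pz);
    # met: (metx, mety), treated as massless with pz set to zero.
    metx, mety = met
    met4 = (math.hypot(metx, mety), metx, mety, 0.0)
    E  = tau1[0] + tau2[0] + met4[0]
    px = tau1[1] + tau2[1] + met4[1]
    py = tau1[2] + tau2[2] + met4[2]
    pz = tau1[3] + tau2[3] + met4[3]
    return math.sqrt(max(E*E - px*px - py*py - pz*pz, 0.0))
\end{verbatim}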
The invariant mass of the irreducible $Z\to\tau\tau$
background peaks at $m(Z)\approx$ 91 GeV/$c^2$. The
visible mass distribution will be broadened and will peak
somewhere below 91 GeV/$c^2$. We will study the
sample with
$m_{vis} < 120$ GeV/$c^2$
as a $Z\to\tau\tau$ cross check.
After all of the cuts, we want the control sample to
be dominated by $Z\to\tau\tau$ background, with jet
background under control and other backgrounds
negligible. A successful cross check between data
and MC in the low-mass region will give us confidence
to go further to the high-mass region.
\newpage
\vspace{0.2in}
\noindent{\bf Blind Analysis}
\vspace{0.1in}
If a new particle with high mass exists and the
statistics are sufficient, it will show up in the
high-mass signal region. The strategy we choose is
a blind analysis. The data sample with
$m_{vis} > 120$ GeV/$c^2$
will be put aside until all selection criteria are
fixed and all backgrounds are determined. The principle
of a blind analysis is to avoid human bias. If
the selection cuts were tuned on the high-mass
distributions in the real data sample, the result
would be strongly biased
and the calculated probabilities
would be meaningless.
Given good
understanding of backgrounds,
there will be two possibilities after examining the
data in the signal region. Either one will observe
a number of events statistically consistent with the
expected background rate, or there will be an excess
signalling new physics.
\chapter{Particle Identification and Missing Transverse Energy}
\label{cha:PID}
High energy $p\bar{p}$ collisions can produce a large number of
particles. As illustrated in Fig.~\ref{fig:PID}, the CDF detector with
its tracking system, calorimeter and muon chambers can identify the
following particles by the following patterns:
\begin{itemize}
\item photon: cascade showering in electromagnetic calorimeter,
but no associated charged tracks;
\item electron: a track, and cascade showering in
electromagnetic calorimeter;
\item muon: a track, minimum ionization energy
deposit in calorimeter, and hits in muon chambers;
\item jet: an object which cannot be identified as an
      isolated photon, electron, or muon;
\item missing transverse energy ($\,/\!\!\!\!E_{T}$): an imbalance of transverse
energy in the whole calorimeter.
\end{itemize}
The final particles and the $\,/\!\!\!\!E_{T}$ are
reconstructed by CDF II offline programs.
\begin{figure}
\begin{center}
\parbox{4.3in}{\epsfxsize=\hsize\epsffile{5/PID.eps}}
\caption[Particle identification and missing transverse energy]
{Patterns for identifying photon,
electron, muon, charged hadron, and jet.
Neutrino induces missing transverse energy.}
\label{fig:PID}
\end{center}
\end{figure}
\section{Monte Carlo Simulation}
\label{sec:MC}
Often we need to predict the detector output,
including the final reconstructed particles and the $\,/\!\!\!\!E_{T}$, of
a particular process of interest and compare it
with data. Usually the phase space of a
$p\bar{p}$ collision event is too
complicated to be calculated analytically.
In this case Monte Carlo (MC) simulation is used.
It has become a powerful tool used in
many research areas including high energy
physics.
A well-known MC example is Buffon's Needle.
It involves dropping a needle on a lined sheet
of paper and determining the probability of the
needle crossing one of the lines on the page.
The remarkable result is that the probability
is directly related to the mathematical
constant $\pi$. Suppose the length of the
needle is one unit and the distance between
the lines is also one unit. There are two
variables, the angle $\theta$ at which the
needle falls and the distance $D$ from the
center of the needle to the closest line.
$\theta$ can vary from 0$^{\circ}$ to 180$^{\circ}$
and is measured against a line parallel to the
lines on the paper. $D$ can never be more than
half the distance between the lines. The needle
will hit the line if $D\le\frac{1}{2}\sin\theta$.
How often does this occur? The probability
$\mathcal{P}$ is $2/\pi$ by integrating over
$\theta$. With a computer, we can
generate a large sample of random needle
drops. The probability $\mathcal{P}$ can
be simply taken as the number of hits divided
by the number of drops, yielding
$\pi=2/\mathcal{P}$.
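As an aside, a minimal Python sketch of this estimate (illustrative
only; unit needle length and unit line spacing as above) is:
\begin{verbatim}
import math, random

def buffon_pi(n):
    # Estimate pi with Buffon's needle: unit needle, unit line spacing.
    hits = 0
    for _ in range(n):
        theta = random.uniform(0.0, math.pi)  # needle angle
        d = random.uniform(0.0, 0.5)          # center-to-nearest-line distance
        if d <= 0.5 * math.sin(theta):        # needle crosses a line
            hits += 1
    return 2.0 * n / hits                     # P = hits/n estimates 2/pi

print(buffon_pi(1000000))                     # converges to ~3.1416
\end{verbatim}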
Here we discuss the basic techniques of MC simulation.
For a one-dimensional integral, we can choose $n$ numbers
$x_i$ randomly with probability density uniform on the
interval from $a$ to $b$, and for each $x_i$ evaluate
the function $f(x_i)$. The sum of these function values,
divided by $n$, will converge to the expectation of the
function $f$.
\begin{equation}
\int_a^bf(x)dx
= (b-a)\langle f(x)\rangle
\approx (b-a)\frac{1}{n}\sum_{i=1}^nf(x_i)
= (b-a)\overline{f_n}
\end{equation}
The central limit theorem tells us that the sum
of a large number of independent random variables
approaches a normal (i.e. Gaussian)
distribution, no matter how the individual
random variables are distributed. To see
this, we can test with uniformly distributed
random variables $x_1$, $x_2$, $x_3$, $x_4$:
(a) $x_1$ has a uniform distribution;
(b) $x_1+x_2$ has a triangular distribution;
(c) $x_1+x_2+x_3$ is already close to a Gaussian
distribution;
(d) $x_1+x_2+x_3+x_4$ is almost exactly a
Gaussian distribution.
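A minimal Python sketch of this test (illustrative; the function name
and sample size are arbitrary choices of ours) is:
\begin{verbatim}
import random

def sum_of_uniforms(k, n=100000):
    # Generate n sums of k independent U(0,1) variables; histogram
    # the output to see the shapes (a)-(d) described above.
    return [sum(random.random() for _ in range(k)) for _ in range(n)]

samples = {k: sum_of_uniforms(k) for k in (1, 2, 3, 4)}
\end{verbatim}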
Applying this theorem, we know the MC
method is particularly useful as we can also
calculate an error on the estimate by computing
the standard deviation,
\begin{equation}
\langle f(x)\rangle
= \overline{f_n} \pm \frac{\sigma_n}{\sqrt{n}}
\end{equation}
where
$\sigma_n = (\overline{f_n^2} - \overline{f_n}^2)^{1/2}$
and
$\overline{f_n^2} = \frac{1}{n}\sum_{i=1}^nf^2(x_i)$.
The numerical evaluation of the
integral converges as $1/\sqrt{n}$ with the number of
function evaluations, $n$.
Obviously, if the distribution $f(x)$ is flatter, then
$\sigma_n$ is smaller for the same number of events in a
generated sample. If there is a peak in the distribution,
such as the distribution of a resonance production, it is
better to transform that variable to
some other variable with a flatter distribution in order
to converge faster.
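A minimal Python sketch of this one-dimensional estimate with its
error (an illustration under the uniform-sampling assumption above,
not the analysis code) is:
\begin{verbatim}
import math, random

def mc_integrate(f, a, b, n):
    # Uniform-sampling MC integral with its 1/sqrt(n) error estimate.
    vals  = [f(random.uniform(a, b)) for _ in range(n)]
    mean  = sum(vals) / n
    mean2 = sum(v * v for v in vals) / n
    sigma = math.sqrt(max(mean2 - mean * mean, 0.0))
    return (b - a) * mean, (b - a) * sigma / math.sqrt(n)

est, err = mc_integrate(math.sin, 0.0, math.pi, 100000)  # exact value: 2
\end{verbatim}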
The generalisation to multi-dimensional integrals
$\int\!f(x,y,z,...)dxdydz...$ is straightforward.
We can choose $n$ numbers of grid $(x,y,z,...)$
randomly with probability density uniform on the
multi-dimensional phase space, and for each grid
evaluate the function $f(x,y,z,...)$. The sum of
these function values, divided by $n$, will
converge to the expectation of the function $f$.
A nice feature is that it will always converge as
$1/\sqrt{n}$, even for very high dimensional
integrals. This can make the performance of the
MC method on multi-dimensional integrals
very efficient.
In high energy physics, an event occurs with a
probability in the phase space of the kinematic
variables. A MC simulation generates a large
number of random events according to the probability
described by a model.
With a large sample, we can get the predictions
of the model by looking at the distributions of the
kinematic variables and the derived variables, and
the correlations among the variables. By confronting
the predictions with real data, it is possible to tell
if a model describes Nature correctly.
For this analysis,
we use PYTHIA 6.215 program~\cite{Sjostrand:2000wi}
with CTEQ5L parton density functions (PDF's)~\cite{Lai:1999wy}
to generate the large samples of the processes of
$p\bar{p}$ collision, such as
$p\bar{p}\to\gamma^*/Z\to\tau\tau$,
$p\bar{p}\to Z'\to\tau\tau$,
$p\bar{p}\to A\to\tau\tau$,
and use TAUOLA 2.6~\cite{Was:2000st} to simulate tau decays.
We use GEANT 3~\cite{GEANT3:1993} to simulate the
response to the final particles in the CDF II detector.
\section{Tau Identification}
\label{sec:TauId}
Tau leptons decay predominantly into charged and neutral pions
and suffer from large backgrounds from jet production. Hadronic
tau decays appear in the detector as narrow isolated jets. The
most powerful cut to suppress the jet background is in fact isolation,
requiring no other tracks or $\pi^0$s near the tau cone.
To do this we define a signal cone and an isolation cone around
the direction of the seed track (the track with the highest $p_T$)
and then require that there is no track
or $\pi^0$ between the signal cone and the isolation cone. This is
shown in Fig.~\ref{fig:TauId_iso}.
\begin{figure}
\begin{center}
\parbox{3.2in}{\epsfxsize=\hsize\epsffile{5.1/tauId_isolation.eps}}
\caption[Illustration of tau isolation cone definitions]
{Illustration of tau isolation cone definitions.}
\label{fig:TauId_iso}
\end{center}
\end{figure}
\subsection{Cone Size Definition}
\label{subsec:TauId_cone}
There are two useful cone size definitions.
One is to construct a cone in $\Delta R$ defined
below which has relativity invariance under a
boost along the $z$-axis. The other
is to construct a cone in three-dimensional separation
angle, $\alpha$, which has geometry invariance.
Below we discuss
why $\Delta R$ is chosen as the cone size definition
for jet identification and why $\alpha$ is chosen
as the cone size definition for hadronic tau
identification.
We start with the discussion of relativity invariance.
For a particle under a boost $\beta = v/c$ along
the $z$-axis, with $\gamma = (1-\beta^2)^{-1/2}$,
its four-momentum $(p_x, \; p_y, \; p_z, \; E)$
is transformed to
\begin{equation}
\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \gamma & \beta\gamma \\
0 & 0 & \beta\gamma & \gamma
\end{array}
\right)
\left(
\begin{array}{c}
p_x \\
p_y \\
p_z \\
E
\end{array}
\right)
=
\left(
\begin{array}{c}
p_x \\
p_y \\
\gamma(p_z+\beta E) \\
\gamma(\beta p_z+E)
\end{array}
\right)
\end{equation}
The $p_x$ and $p_y$ components in the transverse
plane are not changed, while the $p_z$ component
and the energy are changed.
Rapidity is defined by
\begin{equation}
y = \frac{1}{2}\ln\frac{E+p_z}{E-p_z}
\end{equation}
Using
$\tanh^{-1}\beta
= \frac{1}{2}
\ln
\frac{1+\beta}{1-\beta}$,
it is easy to check that rapidity has a nice
additive property under the boost along
the $z$-axis,
\begin{equation}
y \to y + \tanh^{-1}\beta
\end{equation}
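As a brief check of this property (an added verification step): under
the boost, $E'+p'_z=\gamma(1+\beta)(E+p_z)$ and
$E'-p'_z=\gamma(1-\beta)(E-p_z)$, so
\begin{equation}
y' = \frac{1}{2}\ln\frac{E'+p'_z}{E'-p'_z}
   = \frac{1}{2}\ln\frac{1+\beta}{1-\beta}
   + \frac{1}{2}\ln\frac{E+p_z}{E-p_z}
   = \tanh^{-1}\beta + y
\end{equation}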
For an ultra-relativistic particle with
$p\gg m$, we have
$p_z/E
\approx p_z/p
= \cos\theta$.
Using
$\cos\theta
= (1-\tan^2\frac{\theta}{2})/(1+\tan^2\frac{\theta}{2})$,
the rapidity is well approximated by
pseudorapidity~$\eta$,
\begin{equation}
\eta = -\ln\tan\frac{\theta}{2}
\end{equation}
Particles in a jet deposit energy in the calorimeter towers.
For the traditional cone jet
algorithm, we call a tower with
$E_T$ above a seed threshold the seed
(abbreviated $s$), and the other
towers with $E_T$ above a shoulder
threshold shoulders (abbreviated
$h$). To identify a jet, we can put the
seed at the center and make a cone
starting at a reconstructed interaction
vertex point and around the seed to
include the shoulders. Since the
transverse components of a particle's
four-momentum are not changed under the
unknown boost $\beta$ of the parton-parton
system along the $z$-axis, $\phi$ is not
changed. For an ultra-relativistic
particle, $\eta$ is a good approximation
of its rapidity. We have
\begin{equation}
\begin{array}[c]{llllllll}
\phi_{s} & \to & \phi_{s}, & & &
\phi_{h} & \to & \phi_{h}
\\
\eta_{s} & \to & \eta_{s} + \tanh^{-1}\beta, & & &
\eta_{h} & \to & \eta_{h} + \tanh^{-1}\beta
\end{array}
\end{equation}
The separations in $\phi$ and $\eta$ are
not changed under the unknown boost along
the $z$-axis,
\begin{equation}
\begin{array}[c]{ccc}
\Delta\phi = \phi_{h} - \phi_{s} & \to & \Delta\phi \\
\Delta\eta = \eta_{h} - \eta_{s} & \to & \Delta\eta
\end{array}
\end{equation}
Therefore the separation in $\Delta R$ which
is constructed in the combination of
$\Delta\phi$ and $\Delta\eta$ is not
changed under the unknown boost along
the $z$-axis,
\begin{equation}
\begin{array}[c]{ccc}
\Delta R = \sqrt{(\Delta\eta)^2+(\Delta\phi)^2}& \to & \Delta R
\end{array}
\end{equation}
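As an aside, a minimal Python sketch of this boost-invariant
separation (illustrative; the convention of wrapping $\Delta\phi$
into $[-\pi,\pi]$ is an implementation detail of ours) is:
\begin{verbatim}
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Boost-invariant separation Delta R; wrap Delta phi into [-pi, pi].
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    deta = eta1 - eta2
    return math.hypot(deta, dphi)
\end{verbatim}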
Given the $E_T$ and the configuration
(shape) of a jet, whatever the magnitude of
the boost of the parton-parton system along the
$z$-axis is, or, equivalently, whatever the
direction of the seed of the jet is, we can
use the same cone to include a tower in the jet
or exclude it by calculating
its separation in $\Delta R$ from the seed.
Thus $\Delta R$ is a very useful shape
variable for jet identification.
It also makes sense that there is a strong
correlation between the two variables $E_T$
and $\Delta R$: a jet with higher $E_T$ should need
a smaller cone in $\Delta R$ to include all
of its final particles. It
is very common that there are hundreds of
final particles after a $p\bar{p}$ collision.
The problem is that the energy of a jet in
real data cannot be measured before a cone
is actually constructed, otherwise there is
no constraint to tell which
tower
should be included or excluded.
Jet identification usually starts with a large
and constant cone around a seed. The towers
with significant energy in the cluster may
or may not be contiguous. The energy of the
jet is determined afterwards by summing up
the energies of all of the towers in the
cluster.
Now consider hadronic tau identification
with a narrow cone and
small number of final particles.
The situation is quite different from
jet identification. Since there
are only a small number of final particles,
each final particle has significant energy.
And since all of the final particles are in
a narrow cone, they make a narrow and
contiguous cluster with significant energy
in each tower. This constraint of a narrow
and contiguous cluster with significant
energy in each tower tells us that we can
determine energy first, and then construct
a narrow cone to include or exclude charged
particles reconstructed in the tracking
system and/or neutral $\pi^0$s reconstructed
in the shower maximum detector which is
inside the electromagnetic calorimeter.
The question now is: is $\Delta R$ a good
choice of cone size definition for hadronic
tau identification?
A $\Delta R$ cone has a relativity invariance
under a boost along the $z$-axis.
However a $\Delta R$ cone
does not have geometry
invariance.
What does a constant $\Delta R$ imply in geometry? The top
plot of Fig.~\ref{fig:TauId_cone} shows three constant isolation
annuli at different $\eta$ in a uniform $\eta$-$\phi$ space;
the bottom plot shows the same three isolation annuli in a
uniform $\theta$-$\phi$ space after using the function
$\eta = -\ln\tan\frac{\theta}{2}$ to map $\eta$ slices to $\theta$
slices. In the central region, the isolation annuli are almost
unchanged; outside the central region, they are severely squeezed,
thus $\Delta\eta$ does not have geometry invariance.
$\Delta\phi$ does not have geometry invariance either.
Think of one step at the Equator of the Earth and another
step at the North Pole: the former is a tiny
step in $\Delta\phi$ while the latter is a giant one in
$\Delta\phi$.
A constant $\Delta R$ cone
with relativity invariance is not expected
to be a constant cone with geometry
invariance.
\begin{figure}
\begin{center}
\parbox{4.5in}{\epsfxsize=\hsize\epsffile{5.1.1/tauId_coneSize.eps}}
\caption[Lack of geometry invariance in $\Delta R$ cone]
{Lack of geometry invariance in $\Delta R$ cone.}
\label{fig:TauId_cone}
\end{center}
\end{figure}
Instead of $E_T$ and $\Delta R$, we can use energy
$E$ and three-dimensional separation angle $\alpha$
to construct a cone for hadronic tau identification.
There are two reasons.
First, consider a rotation of a solid cone;
the geometry invariance of a three-dimensional
separation angle $\alpha$ is easy to
visualize.
The unknown
boost of the parton-parton system along the
$z$-axis
doesn't affect the
energy measurement of the hadronic tau
identification at all. Under the known high
energy boost, the final particles are flying
together in a narrow cone. In one case the
boost is to the central region, and in another
case the boost is to somewhere forward or
backward. Are these two cones geometrically
invariant? The answer is yes.
Second, the correlation of $E$ and $\alpha$
is very strong. The case with the simplest
phase space of final particles is calculable,
see Appendix~\ref{cha:Alpha_Gamma}.
Compared with a constant cone, a variable
cone determined by this correlation gives
extra power to suppress the jet
background for hadronic tau identification.
This is realized by the ``shrinking'' cone algorithm
described below.
\subsection{The ``Shrinking'' Cone}
\label{subsec:TauId_shrinking}
As shown in Fig.~\ref{fig:TauId_iso},
the tau isolation cone, i.e., the outer cone, is a
constant 30$^{\circ}$ (0.525~rad) cone.
For a particle with definite mass like the
tau, the higher the energy, the smaller the
separation angle of its decay daughters,
hence a smaller signal cone, which is the
inner cone in Fig.~\ref{fig:TauId_iso}.
The tau reconstruction algorithm~\cite{CDFnote:6252}
starts with a seed tower with
$E_T>6$~GeV. It adds all of the adjacent shoulder
towers with $E_T>1$~GeV to make a calorimeter cluster.
The cluster is required to be narrow, i.e., the number
of towers~$\leq6$. The visible energy, denoted as
$E_{vis}$, of the final particles of tau hadronic decays
is measured by the energy of the calorimeter cluster,
denoted as $E^{\tau\;obj}_{cluster}$. Then the
algorithm requires a seed track with $p_T>4.5$~GeV/$c$
matched to the cluster. The matched seed track is the
track with the highest $p_T$ in the neighborhood of the
calorimeter cluster. The tau signal cone is
constructed around the direction of the seed track.
The other tracks with $p_T>1$~GeV/$c$, and the $\pi^0$s
with $p_T>1$~GeV/$c$ which are reconstructed by the
strip and wire clusters in the CES detector, are
included in the tau candidate if they are inside the
tau signal cone. The size of the tau signal cone is
determined by $E_{vis}$.
The phase space of tau hadronic decays is
very complicated and the energy dependence
of the signal cone cannot easily be calculated
analytically. We use a large MC sample
of $p\bar{p}\to Z\to\tau\tau$ to get this
correlation.
The concept of the tau shrinking signal cone at generator
level (without underlying tracks or $\pi^0$s) is shown in
Fig.~\ref{fig:TauId_shr1}. The cone half-angle starts out
at a constant 10$^{\circ}$; when the
quantity $(5~\mbox{rad})/(E_{vis}/\mbox{GeV})$ falls below
10$^{\circ}$, we use this angle instead, down to a minimum of 50~mrad.
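A minimal Python sketch of this rule (illustrative only; the
100~mrad floor used for $\pi^0$s, discussed below, is exposed as an
optional argument) is:
\begin{verbatim}
import math

def signal_cone_half_angle(e_vis, floor=0.05):
    # Half-angle in radians of the ``shrinking'' tau signal cone
    # vs. visible energy e_vis in GeV: a constant 10 degrees at low
    # energy, then 5/e_vis, floored at 50 mrad (100 mrad for pi0s).
    ten_deg = math.radians(10.0)
    return max(min(ten_deg, 5.0 / e_vis), floor)
\end{verbatim}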
\begin{figure}
\begin{center}
\parbox{4.9in}{\epsfxsize=\hsize\epsffile{5.1.2/tauId_shrinkingCone_1.eps}}
\caption[Tau ``shrinking'' signal cone as a function of energy]
{Distribution of maximum angle between
tau decay products and tau seed track as a function of
tau visible decay product energy. The red line indicates
the half-width of the "shrinking" tau signal cone as a
function of energy.}
\label{fig:TauId_shr1}
\end{center}
\end{figure}
For reconstructed tracks a cone defined as that
shown in Fig.~\ref{fig:TauId_shr1}
is efficient and selective against jet backgrounds.
However, for $\pi^0$s, the
reconstructed angle can, at large visible energies,
be larger than 50~mrad. Thus we relax the minimum
to 100~mrad. With underlying tracks or $\pi^0$s, the
shrinking cone is shown in the left two plots of
Fig.~\ref{fig:TauId_shr2}. Inside the tau isolation
cone (the outer 0.525~rad cone), the separation angle
between the farthest track/$\pi^0$ and the seed track
is plotted. A tau object with a track or $\pi^0$ between the tau isolation
cone and the shrinking signal cone is non-isolated
and will be removed by the isolation cut.
The right two plots of Fig.~\ref{fig:TauId_shr2}
show how the shrinking cone looks when applied
to jets reconstructed as tau objects. Compared with
a constant signal cone, the shrinking signal cone, a
natural consequence of the tau's relativistic boost,
dramatically helps to reduce the jet background in the
high-mass search.
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{5.1.2/tauId_shrinkingCone_2.eps}}
\caption[Tau track/$\pi^0$ ``shrinking'' signal cone]
{Due to different reconstruction
resolutions, the minimum cone sizes
of the ``shrinking'' cone for
track and $\pi^0$ are 0.05 and
0.1 radian, respectively,
shown in the left two plots for
tau. The right two plots show
how the ``shrinking'' cone looks when
applied to jets reconstructed as
tau objects.}
\label{fig:TauId_shr2}
\end{center}
\end{figure}
\subsection{Tau Identification Cuts}
\label{subsec:TauId_cuts}
Now we can put the seed track in the center of the cone and
include in the tau candidate all tracks and $\pi^0$s
whose direction is within the ``shrinking'' signal cone.
Table~\ref{tab:TauId_cuts}
shows the list of tau identification cuts,
using information about the calorimeter cluster,
the seed track, and the shoulder tracks/$\pi^0$s of the tau
candidate.
The $p_T$(tracks + $\pi^0$s)
threshold is not listed because it is not an
identification cut and it should be chosen by
looking at the trigger cuts applied and by comparing
tau identification efficiency with the jet$\to\tau$
misidentification rate.
We do not cut on charge
because there is an ambiguity in the charge for high
$p_T$ tracks; we do not cut on
track multiplicity either because we will check
track multiplicity to see the hadronic tau signature.
\begin{table}
\begin{center}
\begin{tabular}{|l|l|l|l|} \hline
Variable & Cut & Note & Denominator \\ \hline \hline
$|\eta_{det}|$ & $<$1 & central calorimeter & \\
$|z_{loc}|$ & 9$<|z_{loc}|<$230 cm & fiducial ShowerMax & \\
$\xi$ & $>$0.2 & electron removal & $D_{\xi}$ \\
$p_T^{seed}$ & $>$6 GeV/$c$ & seed track $p_T$ & \\
10$^\circ$ track isolation & constant cone & weaker than shrinking & $D_{trkIso10Deg}$ \\
m(tracks) & $<$1.8 GeV/$c^2$ & weaker than vis. mass & $D_{trkMass}$ \\
$|z_0|$ & $<$60 cm & vertex $z$ & \\
$|d_0|$ & $<$0.2 cm & impact parameter & \\
seed track ax. seg. & $\ge$3$\times$7 & COT axial segments & \\
seed track st. seg. & $\ge$3$\times$7 & COT stereo segments & \\
track isolation & shrinking track cone & shoulder tracks & \\
$\pi^0$ isolation & shrinking $\pi^0$ cone & shoulder $\pi^0$s & \\
$E^{em}_{iso}$ & $<$2 GeV & EM cal. isolation & \\
m(tracks + $\pi^0$s) & $<$1.8 GeV/$c^2$ & visible mass & Numerator \\ \hline
\end{tabular}
\caption[Tau identification cuts]
{Tau identification cuts.}
\label{tab:TauId_cuts}
\end{center}
\end{table}
\vspace{0.2in}
\noindent{\bf Electron Removal}
\vspace{0.1in}
Using the requirements discussed above, electrons can be
reconstructed as hadronic tau objects if they have a narrow
calorimeter cluster and a high $p_T$ seed track.
To remove electrons we demand that the tau be consistent with
having only pions in the final state.
We define the variable $\xi$ as
\begin{equation}
\xi \equiv \frac{E}{p}(1 - \frac{E_{em}}{E}) = \frac{E_{had}}{p}
\end{equation}
Fig.~\ref{fig:TauId_cuts_1}
shows the tau object EM fraction ($E_{em}/E$)
versus E/p. The top plot is for hadronic taus reconstructed
as tau objects, and the bottom plot is for electrons
reconstructed as tau objects. For an ideal hadronic tau
and a perfect calorimeter, $\xi$ = 1. For an ideal electron,
$\xi$ = 0. However, the calorimeter is not perfect and there
can be a large background from $Z\to ee$ events. To remove
this background we use a very tight cut, $\xi>$ 0.2. The
remaining background is discussed below.
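A minimal Python sketch of this variable and cut (illustrative;
variable names are our own) is:
\begin{verbatim}
def xi(E, E_em, p):
    # Electron-removal variable: xi = (E/p)*(1 - E_em/E) = E_had/p.
    return (E - E_em) / p

def passes_electron_removal(E, E_em, p):
    # Ideal hadronic tau: xi near 1; ideal electron: xi near 0.
    return xi(E, E_em, p) > 0.2
\end{verbatim}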
\begin{figure}
\begin{center}
\parbox{3.4in}{\epsfxsize=\hsize\epsffile{5.1.3/tauId_cuts_1.eps}}
\caption[Electron removal in tau identification]
{Distributions of EM fraction ($E_{em}/E$) vs.
$E/p$ for hadronic tau and electron.
$\xi>0.2$ is used to remove electron.}
\label{fig:TauId_cuts_1}
\end{center}
\end{figure}
\vspace{0.2in}
\noindent{\bf EM Calorimeter Isolation}
\vspace{0.1in}
The motivation for the EM calorimeter isolation cut is
$\pi^0$ reconstruction inefficiency:
for example, some CES clusters are not reconstructed as
$\pi^0$s if a track is nearby. This affects the power
of the $\pi^0$ isolation requirement. We add an EM
calorimeter isolation cut to deal with the remaining jet
background. We calculate the EM energy in a
$\Delta R = 0.4$ cone around the seed track, summing over
all EM towers which are not members of the tau cluster.
Here $\Delta R$ is used to calculate the distance between
the centroid of a calorimeter tower and the seed track
because the calorimeter tower segmentation is fixed
in $\eta\times\phi$ space, namely $0.11\times15^{\circ}$
around the central region.
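A minimal Python sketch of this sum (illustrative; the tower inputs
and names are our own assumptions) is:
\begin{verbatim}
import math

def em_isolation(seed_eta, seed_phi, em_towers, cluster_ids):
    # em_towers: iterable of (tower_id, E_em, eta, phi).  Sum the EM
    # energy in a Delta R = 0.4 cone around the seed track direction,
    # excluding towers that belong to the tau cluster itself.
    total = 0.0
    for tid, e_em, eta, phi in em_towers:
        if tid in cluster_ids:
            continue
        dphi = (phi - seed_phi + math.pi) % (2.0 * math.pi) - math.pi
        if math.hypot(eta - seed_eta, dphi) < 0.4:
            total += e_em
    return total  # the cut keeps candidates with total < 2 GeV
\end{verbatim}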
Since the EM calorimeter isolation cut is strongly correlated with
other isolation cuts, its marginal distribution is shown
in Fig.~\ref{fig:TauId_cuts_2}. The EM cal. isolation
energy versus cluster energy plots show that we do not
need to use a relative (fractional) cut, which would be
necessary only if, for high energy tau objects, there were
significant energy leakage outside the tau cluster. We
instead choose an absolute cut, $E^{em}_{iso}<2$ GeV.
\begin{figure}
\begin{center}
\parbox{5.2in}{\epsfxsize=\hsize\epsffile{5.1.3/tauId_cuts_2.eps}}
\caption[EM calorimeter isolation in tau identification]
{Distributions of EM calorimeter isolation
for tau and jet, and distributions of
EM calorimeter isolation vs. energy
of reconstructed tau object for tau
and jet.}
\label{fig:TauId_cuts_2}
\end{center}
\end{figure}
\vspace{0.2in}
\noindent{\bf Object Uniqueness}
\vspace{0.1in}
Though not listed in the summary table of tau
identification cuts, we note that all reconstructed
objects in the event are required to be unique.
Thus we only apply the tau identification cuts to
objects not already reconstructed as a photon,
electron, or muon. In practice, we require that a tau
object be 30$^{\circ}$ away from any identified
photon, electron, or muon.
\vspace{0.2in}
\noindent{\bf Denominators}
\vspace{0.1in}
For various subsequent studies presented here we
will use specific subsets of the tau identification
cuts listed in the summary table. The cuts are in
cumulative order which is important for
calculating rates and efficiencies. There are three
different denominators in Table~\ref{tab:TauId_cuts},
corresponding to three different relative rates,
which will later be applied to different data samples with
consistent denominators.
\subsection{Tau Identification Efficiency}
\label{subsec:TauId_efficiency}
Table~\ref{tab:TauId_efficiency}
shows the procedure to measure the tau
identification efficiency, using different samples.
From all of the generated taus, we pick those decaying
hadronically, and consider the central ones in the
pseudorapidity range $|\eta|<1$ which can be
reconstructed as tau objects, called CdfTau in the table.
We require
the seed track of the generated tau to match with the seed track of a
reconstructed tau object
within 0.2 radian.
Then we apply the tau identification cuts on the reconstructed tau objects
and calculate tau identification efficiency.
Fig.~\ref{fig:TauId_efficiency} shows the absolute
tau identification efficiency, which includes the effects of
both reconstruction and identification, vs. tau visible energy,
using the $Z'$ sample, which contains many high energy taus.
\subsection{Jet$\to\tau$ Misidentification Rate}
\label{subsec:TauId_jet}
Table~\ref{tab:TauId_jet}
shows the procedure to measure the jet$\to\tau$
misidentification rate, using four different jet
samples, called JET20, JET50, JET70, and JET100,
collected with different trigger thresholds.
The L1 tower $E_T$, L2 cluster $E_T$ and L3 jet
$E_T$ trigger thresholds in units of GeV for
a triggered jet in each jet sample are
\begin{itemize}
\item JET20: 5, 15, 20
\item JET50: 5, 40, 50
\item JET70: 10, 60, 70
\item JET100: 10, 90, 100
\end{itemize}
We use the central jets with $|\eta|<1$ which may be
reconstructed as tau objects, called CdfTau in the table.
We require the central jet to match with a reconstructed tau
object by requiring that they share the seed tower of
the reconstructed tau object.
Then we apply the tau identification cuts on the reconstructed tau objects
and calculate jet$\to\tau$ misidentification rate.
Fig.~\ref{fig:TauId_jet_1} shows the absolute
jet$\to\tau$ misidentification rate, which includes the effects of
both reconstruction and identification, vs. jet cluster
energy, using the JET50 sample.
\begin{table}
\begin{center}
\begin{tabular}{|l|r|r|r|l|} \hline
Procedure & $W\to\tau\nu$ & $Z\to\tau\tau$ & $Z'\to\tau\tau$ & Denominator \\ \hline \hline
event & 491513 & 492000 & 1200000 & \\ \hline
tau hadronic & 319357 & 637889 & 1554159 & \\
tau central & 150984 & 275330 & 898102 & $D_{absolute}$ \\
tau match CdfTau & 86325 & 165495 & 800262 & $D_{CdfTau}$ \\ \hline
$|\eta_{det}|<1$ & 85899 & 164722 & 797705 & \\
$9<|z_{loc}|<230$ cm & 82240 & 157748 & 758403 & \\
$\xi>0.2$ & 65854 & 127403 & 663845 & $D_{\xi}$ \\
$p_T^{seed}>6$ GeV/$c$ & 60960 & 119451 & 651328 & \\
10$^\circ$ track isolation & 50309 & 98717 & 540485 & $D_{trkIso10Deg}$ \\
m(tracks) $<1.8$ GeV/$c^2$ & 50141 & 98355 & 532190 & $D_{trkMass}$ \\
$|z_0|<60$ cm & 48659 & 95333 & 515239 & \\
$|d_0|<0.2$ cm & 47975 & 93969 & 506453 & \\
seed track ax. seg. $\ge3\times7$ & 47822 & 93657 & 501965 & \\
seed track st. seg. $\ge3\times7$ & 47312 & 92666 & 494069 & \\
track isolation (shrinking) & 47112 & 92042 & 475017 & \\
$\pi^0$ isolation (shrinking) & 45687 & 89148 & 451129 & \\
$E^{em}_{iso}<2$ GeV & 43981 & 85910 & 428641 & \\
m(tracks + $\pi^0$s) $<1.8$ GeV/$c^2$ & 43155 & 84218 & 404105 & Numerator \\ \hline
\end{tabular}
\caption[Tau identification efficiency measurement]
{Number of events for tau identification efficiency measurement.}
\label{tab:TauId_efficiency}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\parbox{5.4in}{\epsfxsize=\hsize\epsffile{5.1.4/tauId_efficiency.eps}}
\caption[Tau identification efficiency vs. energy]
{Tau identification efficiency vs. tau visible energy.}
\label{fig:TauId_efficiency}
\end{center}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|l|r|r|r|r|l|} \hline
Procedure & JET20 & JET50 & JET70 & JET100 & Denominator \\ \hline \hline
event & 7696880 & 1951396 & 910618 & 1137840 & \\
event goodrun & 4309784 & 1213104 & 556961 & 697231 & \\ \hline
jet non-triggered & 21957203 & 6071557 & 2641643 & 2935801 & \\
jet central & 8214991 & 2480232 & 1127376 & 1321840 & $D_{absolute}$ \\
jet match CdfTau & 653680 & 425086 & 189148 & 201530 & $D_{CdfTau}$ \\ \hline
$|\eta_{det}|<1$ & 643190 & 416560 & 184996 & 196651 & \\
$9<|z_{loc}|<230$ cm & 611401 & 393222 & 174474 & 184980 & \\
$\xi>0.2$ & 521326 & 354504 & 159320 & 169441 & $D_{\xi}$ \\
$p_T^{seed}>6$ GeV/$c$ & 414966 & 315384 & 145124 & 156391 & \\
10$^\circ$ track isolation & 105846 & 74425 & 36231 & 42727 & $D_{trkIso10Deg}$ \\
m(tracks) $<1.8$ GeV/$c^2$ & 92475 & 63616 & 31865 & 37709 & $D_{trkMass}$ \\
$|z_0|<60$ cm & 85754 & 56951 & 28146 & 32747 & \\
$|d_0|<0.2$ cm & 79889 & 51829 & 25391 & 28994 & \\
seed track ax. seg. $\ge3\times7$ & 78500 & 50043 & 24293 & 27474 & \\
seed track st. seg. $\ge3\times7$ & 71926 & 42754 & 20058 & 21828 & \\
track isolation (shrinking) & 64489 & 20679 & 7475 & 7293 & \\
$\pi^0$ isolation (shrinking) & 50886 & 13910 & 5025 & 4965 & \\
$E^{em}_{iso}<2$ GeV & 41749 & 11132 & 4073 & 3969 & \\
m(tracks + $\pi^0$s) $<1.8$ GeV/$c^2$
& 35314 & 7965 & 2879 & 2792 & Numerator \\ \hline
\end{tabular}
\caption[Jet$\to\tau$ misidentification rate measurement]
{Number of events for jet$\to\tau$ misidentification rate
measurement.}
\label{tab:TauId_jet}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\parbox{5.4in}{\epsfxsize=\hsize\epsffile{5.1.5/tauId_jetMisidRate_1.eps}}
\caption[Jet$\to\tau$ misidentification rate vs. energy]
{Jet$\to\tau$ misidentification rate vs. energy,
using JET50 sample.}
\label{fig:TauId_jet_1}
\end{center}
\end{figure}
\vspace{0.2in}
\noindent{\bf Discrepancies}
\vspace{0.1in}
To try to minimize trigger bias, we use
non-triggered jets only. Based on the L1 tower $E_T$,
L2 cluster $E_T$ and L3 jet $E_T$ trigger thresholds
in each sample, we find all of the
jets which can satisfy the trigger requirements.
The choice of the triggered jet in an event, for the
cases of zero, one, or more than one jet
satisfying the trigger requirements, is
\begin{itemize}
\item If zero, throw away the event
\item If only one, choose that jet
\item If more than one, do not choose any as triggered
\end{itemize}
Non-triggered jets are just the jets not chosen as
the triggered jet. Even after trying to minimize
trigger bias by using non-triggered jets only,
there are still discrepancies among the jet$\to\tau$
misidentification rates obtained from different
jet samples, shown in Fig.~\ref{fig:TauId_jet_2}.
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{5.1.5/tauId_jetMisidRate_2.eps}}
\caption[Discrepancies of jet$\to\tau$ misidentification rates]
{Discrepancies of jet$\to\tau$ misidentification rates
in JET samples.}
\label{fig:TauId_jet_2}
\end{center}
\end{figure}
\vspace{0.2in}
\noindent{\bf Two-Dimensional Parametrization}
\vspace{0.1in}
There is no doubt that the jet$\to\tau$
misidentification rate has a very strong dependence
on energy because the tau isolation annulus is a function
of energy. To resolve the discrepancies among the jet$\to\tau$ rates,
we add another parameter to make a two-dimensional parametrization.
The second parameter should not be correlated strongly with energy,
otherwise adding another parameter is meaningless.
Given the final particles, the transverse size of a jet depends
on its boost: jets with a bigger boost have a smaller size, and
smaller jets have a higher probability of
surviving tau identification. The relativistic boost
$\gamma$ is
\begin{eqnarray}
\gamma = \frac{E}{m}
\end{eqnarray}
where $E$ is the energy of the jet, which can be
measured by its cluster energy in the calorimeter, and $m$
is the invariant mass of its final particles. The mass $m$ is
not easy to measure because some of the final particles
can be neutral and leave no track in the tracking system.
We use cluster mass, which treats each tower in the
cluster as a massless photon and sums up the photons,
as an approximation of $m$. The cluster mass has
a strong correlation with energy, while the cluster
boost does not. This is shown in
Fig.~\ref{fig:TauId_jet_3}. We choose cluster boost as
the second parameter.
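A minimal Python sketch of this approximation (illustrative; the
tower inputs are our own convention) is:
\begin{verbatim}
import math

def cluster_mass_and_boost(towers):
    # towers: list of (E, eta, phi); each tower is treated as a
    # massless photon and the photons are summed into a four-vector.
    E = px = py = pz = 0.0
    for e, eta, phi in towers:
        theta = 2.0 * math.atan(math.exp(-eta))  # eta -> polar angle
        E  += e
        px += e * math.sin(theta) * math.cos(phi)
        py += e * math.sin(theta) * math.sin(phi)
        pz += e * math.cos(theta)
    m = math.sqrt(max(E*E - px*px - py*py - pz*pz, 0.0))
    return m, (E / m if m > 0.0 else float("inf"))  # gamma = E/m
\end{verbatim}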
In the one-dimensional jet$\to\tau$ misidentification
rate what we see is the average over all of the bins
of cluster boost. Given the energy of a jet, the
average cluster boost differs among the JET samples,
as shown in Fig.~\ref{fig:TauId_jet_4}.
\begin{figure}
\begin{center}
\parbox{5.2in}{\epsfxsize=\hsize\epsffile{5.1.5/tauId_jetMisidRate_3.eps}}
\caption[Jet cluster: mass vs. energy, and boost vs. energy]
{Distributions of jet cluster: mass vs. energy,
and boost vs. energy, in JET50 sample.}
\label{fig:TauId_jet_3}
\vspace{0.5in}
\parbox{5.2in}{\epsfxsize=\hsize\epsffile{5.1.5/tauId_jetMisidRate_4.eps}}
\caption[Jet cluster: boost vs. energy in various samples]
{Profiles of jet cluster boost vs.
cluster energy in JET samples.}
\label{fig:TauId_jet_4}
\end{center}
\end{figure}
Now we plot the jet$\to\tau$ misidentification rate
vs. energy, in each boost slice, shown in
Fig.~\ref{fig:TauId_jet_5}. With the new
two-dimensional parametrization, the overall
discrepancy drops to about 20\%. Since the
discrepancies are not totally resolved, other
unknown effects must remain.
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{5.1.5/tauId_jetMisidRate_5.eps}}
\caption[Jet$\to\tau$ misidentification rate vs. energy, in boost slices]
{Jet$\to\tau$ misidentification rate vs. energy
in JET samples and in jet cluster
boost slices.}
\label{fig:TauId_jet_5}
\end{center}
\end{figure}
\subsection{Jet$\to\tau$ Background Estimate}
\label{subsec:TauId_jetBg}
After applying the full set of tau identification cuts,
there will be some jet background left because of the
huge production rate of jets in $p\bar{p}$
collisions. The jet$\to\tau$ misidentification rate
and tau identification efficiency are very useful for
estimating jet background.
To estimate the jet background, the starting point is not jets or
raw tau candidates, but tau candidates with at least electron
removal, i.e., the very tight $\xi>$ 0.2 cut, applied. Muons
usually do not deposit enough energy to make a tau cluster
in the calorimeter. We have two general equations,
\begin{eqnarray}
\mbox{Before full tau ID:} & & \tilde{N} = \tilde{N}^{\tau} + \tilde{N}^{jet} \\
\mbox{After full tau ID:} & & N = N^{\tau} + N^{jet}
= e\tilde{N}^{\tau} + f\tilde{N}^{jet}
\end{eqnarray}
where $f$ is the jet$\to\tau$ misidentification rate and
$e$ is the tau identification efficiency. Both are
relative in the sense that they are measured with respect
to the starting point chosen
as ``Before full tau ID''. The solution is
\begin{equation}
N^{jet} = \frac{f}{e-f}(e\tilde{N} - N).
\end{equation}
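(As an added verification step: substituting
$\tilde{N}^{\tau} = \tilde{N} - \tilde{N}^{jet}$ into the second
equation gives $N = e\tilde{N} + (f-e)\tilde{N}^{jet}$, hence
$\tilde{N}^{jet} = (e\tilde{N}-N)/(e-f)$, and
$N^{jet} = f\tilde{N}^{jet}$ reproduces the formula above.)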
Fig.~\ref{fig:TauId_jetBg}
is a demonstration of picking one bin and using the
formula to estimate the jet background. This is only
an example, because here the parametrization
of the relative rates is a one-dimensional function of
energy; for the jet$\to\tau$ misidentification rate there is a
better parametrization, namely the two-dimensional function of
energy and boost.
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{5.1.6/tauId_jetBgEstDemo.eps}}
\caption[Demonstration of estimating jet$\to\tau$ misidentification]
{Demonstration of estimating jet$\to\tau$ misidentification.}
\label{fig:TauId_jetBg}
\end{center}
\end{figure}
\vspace{0.2in}
\noindent{\bf Implementation}
\vspace{0.1in}
The actual implementation is done on an event-by-event basis. For a tau
object in an event under consideration, the knowns are:
$\tilde{N}$ = 1, $e$, $f$ and whether this tau object passes the
full set of the tau identification cuts. If it does,
$N$ = 1; otherwise, $N$ = 0. For the two cases, the weight
to be a jet is estimated as
\begin{eqnarray}
\mbox{If not passing the full tau ID cuts:} & & \omega^{jet} = \frac{f}{e - f}(e - 0)
\label{eq:weight_jet_notpassing} \\
\mbox{If passing the full tau ID cuts:} & & \omega^{jet} = \frac{f}{e - f}(e - 1)
\label{eq:weight_jet_passing}
\end{eqnarray}
In terms of coding, this means the remaining tau
identification cuts are replaced by the weight
$\omega^{jet}$. We sum up the weights of all the events
in the sample, and get the jet background estimate
$N^{jet}$,
\begin{equation}
N^{jet} = \sum \omega^{jet}
\label{eq:sum_weights_jet}
\end{equation}
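A minimal Python sketch of this event-by-event implementation
(illustrative; obtaining the relative rates $e$ and $f$ for each
candidate is assumed to happen elsewhere) is:
\begin{verbatim}
def jet_weight(e, f, passes_full_id):
    # Per-candidate jet weight: N = 1 if the tau object passes the
    # full tau ID cuts, else N = 0; omega = f/(e-f) * (e - N).
    N = 1.0 if passes_full_id else 0.0
    return f / (e - f) * (e - N)

# Jet background estimate: sum the weights over all candidates,
# N_jet = sum(jet_weight(e_i, f_i, passed_i)) over candidates i.
\end{verbatim}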
\newpage
\vspace{0.2in}
\noindent{\bf Special Case}
\vspace{0.1in}
This method actually needs both the jet$\to\tau$
misidentification rate $f$ and the tau identification
efficiency $e$. The main idea is to remove the
contribution of any real tau signal from the jet
background estimate.
The special case is this: if we start with
a jet-dominated sample and $f$ is much smaller than $e$,
we can suppress the signal by replacing the tau identification
cuts with the jet$\to\tau$ misidentification rate,
\begin{equation}
N^{jet} = f\tilde{N}^{jet} \approx f\tilde{N} \;\;\;\; (f \ll e)
\end{equation}
\section{Tau Scale Factor Using $W\to\tau\nu$}
\label{sec:TauSF}
In this section, we apply tau
identification cuts to select hadronic taus
in $W\to\tau\nu$ events, estimate the jet$\to\tau$
misidentification background, study the tau identification scale
factor, and compare tau distributions in data
and MC simulation.
\subsection{Data/MC Scale Factor}
\label{subsec:TauSF_chart}
The scale factor for a set of cuts
quantifies and corrects for
the difference between data and MC simulation.
The MC efficiency is multiplied by it to obtain a
scaled efficiency consistent with the
efficiency in data.
Fig.~\ref{fig:TauSF_chart}
shows lepton flow in data and in MC, and
lepton data/MC scale factors.
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{5.2.1/tauSF_chart.eps}}
\caption[Lepton in data and in MC, and lepton data/MC scale factors]
{Lepton in data and in MC, and lepton data/MC scale factors.}
\label{fig:TauSF_chart}
\end{center}
\end{figure}
\vspace{0.2in}
\noindent{\bf Ratio of Efficiencies}
\vspace{0.1in}
A data/MC scale factor is defined as the ratio
of efficiencies,
\begin{equation}
f_{data/MC} = \frac{\epsilon_{data}}
{\epsilon_{MC}}
\end{equation}
where $\epsilon_{MC}$ is the efficiency in MC
which is straightforward to obtain because the MC
simulation has the true information of particle identity,
and $\epsilon_{data}$ is the efficiency in data,
which can be a challenge to measure.
In the electron or muon case, we can use
electron or muon pairs from the Z boson
peak, which gives us a pure sample with negligible
background in real data. This is so reliable that we can use
it as a ``standard candle'' to calibrate the detector
and even measure the luminosity. We select
one leg to satisfy the trigger requirements in
data, and ask whether the second leg passes the set of
cuts, and thereby get the efficiency in data.
\vspace{0.2in}
\noindent{\bf Ratio of Numbers}
\vspace{0.1in}
Due to the missing energy from the neutrino in
tau decays, the tau pair mass at the Z boson peak is
severely broadened. Instead, we will use
$W\to\tau\nu$ to select a relatively clean tau
sample. There is no second leg with which to measure
the efficiency ratio, so we instead use the ratio of
absolute numbers,
\begin{equation}
f_{data/MC} = \frac{n_{data}}
{n_{MC}}
\end{equation}
where $n_{MC}$ is the absolute number of
$W\to\tau\nu$ events in MC normalized to the
luminosity of data, and $n_{data}$ is the number
of $W\to\tau\nu$ events observed in the data after
subtracting backgrounds.
\subsection{$W\to\tau\nu$ Selection}
\label{subsec:TauSF_WTauNu}
We select $W\to\tau\nu$ events by using a data sample
from the TAU\_MET trigger which requires:
\begin{itemize}
\item level 1 trigger (L1) $\,/\!\!\!\!E_{T}>$ 25 GeV
\item level 3 trigger (L3) tau $E_T>$ 20 GeV
\end{itemize}
where
(a) L1 $\,/\!\!\!\!E_{T}$ is based on
a tower threshold of 1 GeV for a fast calculation;
(b) for L3 tau, the cuts $|\eta_{det}|<$ 1,
10$^{\circ}$ track isolation and m(tracks)
$<$ 2 GeV/$c^2$ are applied in the trigger.
The top plot of
Fig.~\ref{fig:TauSF_WTauNu_1} shows that the
integrated luminosity of the good runs is
72 $\pm$ 4 pb$^{-1}$, and the bottom plot
shows that the L3 cross section is reasonably
flat (no sudden drop to zero), thus all
of the good runs are present in the data
file.
\begin{figure}
\begin{center}
\parbox{3.3in}{\epsfxsize=\hsize\epsffile{5.2.2/tauSF_WTauNu_1.eps}}
\caption[Offline luminosity and L3 cross section of the TAU\_MET trigger]
{Distributions of offline luminosity vs.
good run sequence and L3 cross
section vs. good run sequence,
in the data sample from TAU\_MET trigger.}
\label{fig:TauSF_WTauNu_1}
\end{center}
\end{figure}
The offline selection cuts are:
\begin{itemize}
\item Monojet
\item $\,/\!\!\!\!E_{T}>$ 30 GeV
\item Tau $p_T$(tracks + $\pi^0$s) $>$ 25 GeV/$c$
\end{itemize}
where
(a) monojet selection requires
one central cone 0.7 jet
with $|\eta_{det}|<$ 1
and $E_T>$ 25 GeV,
no other jets
with $E_T>$ 5 GeV
anywhere;
(b) offline $\,/\!\!\!\!E_{T}$ is obtained from the vector
sum of $E_T$ for towers with $E_T>0.1$ GeV;
(c) in addition to the tau $p_T$ threshold, the
whole set of tau identification cuts under
study will be applied to the offline tau
candidates.
The monojet cut dramatically helps clean up
the data sample. But, to get the estimated
$n_{MC}$ of $W\to\tau\nu$ events, we need to
study the monojet cut and the L1 $\,/\!\!\!\!E_{T}>$ 25 GeV
trigger efficiency for monojet-type events.
\vspace{0.2in}
\noindent{\bf Monojet Selection}
\vspace{0.1in}
The monojet selection essentially requires that there is no other
underlying jet with $E_T>$ 5 GeV. We select
$Z\to\mu\mu$ events and count the number of
cone 0.7 jets with $E_T>$ 5 GeV,
with no $\eta$ cut, separated by $\Delta R > 0.7$
from the muons.
The $Z\to\mu\mu$ selection cuts are:
(a) cosmic veto~\cite{CDFnote:6089},
(b) one tight muon and one track with $p_T>$ 20 GeV/$c$,
(c) opposite charges,
(d) track $|z_0(1)-z_0(2)|<$ 4 cm, and
(e) $80<m_{\mu\mu}<100$ GeV/$c^2$.
We require one tight muon and one track, instead
of two tight muons, to get higher statistics.
The track is required to be of minimum ionisation
particle (MIP) type.
Both the tight muon and the track require tau-like
track isolation, which mimics the isolated tau
in $W\to\tau\nu$ events.
We use a data sample from a trigger
designed to select ``muon plus track'' events
which have $\mu$ with $p_T>8$ GeV/$c$ plus another
charged track with $p_T>5$ GeV/$c$.
We select 5799 events with negligible background, as
confirmed by the negligible number of same-charge
muon pair events.
There are 2152 events in the zero jet bin. The fraction
of zero jet events in the data is
2152/5799 = 0.371.
We use about 500K MC events, of which
46297 survive the same selection cuts as in data.
There are 20149 events in the zero jet bin. The
fraction of zero jet events in the MC is
20149/46297 = 0.435.
The number-of-jets distributions in data and in MC
are shown in Fig.~\ref{fig:TauSF_WTauNu_2}.
So the $W\to\tau\nu$ monojet data/MC scale factor is
\begin{equation}
f^{monojet}_{data/MC} = \frac{2152 / 5799}{20149 / 46297}
= \frac{0.371}{0.435}
= 0.85 \pm 0.02
\end{equation}
The uncertainty is statistical only.
\vspace{0.2in}
\noindent{\bf L1 $\,/\!\!\!\!E_{T}>$ 25 GeV}
\vspace{0.1in}
The TAU\_MET trigger triggers directly on tau objects, and so there
is no marginal trigger efficiency from the TAU side.
But there is a marginal trigger efficiency from the
MET side: L1 $\,/\!\!\!\!E_{T}$ uses a 1 GeV tower
threshold, and offline $\,/\!\!\!\!E_{T}$ uses a 0.1
GeV tower threshold.
We use JET20 data to study this trigger
efficiency. The event topology is required to be
monojet-like, since that is the topology of
interest here.
The L1 $\,/\!\!\!\!E_{T}>$ 25 GeV trigger efficiency vs.
offline $\,/\!\!\!\!E_{T}$ for monojet events is shown in
Fig.~\ref{fig:TauSF_WTauNu_3}.
It is a slow turn-on due to a large tower
threshold. An offline $\,/\!\!\!\!E_{T}>$ 30 GeV
cut is not fully efficient.
\begin{figure}
\begin{center}
\parbox{5.3in}{\epsfxsize=\hsize\epsffile{5.2.2/tauSF_WTauNu_2.eps}}
\caption[Number of jets in $Z\to\mu\mu$ data and MC]
{Distributions of the number of jets
in $Z\to\mu\mu$ data and MC.}
\label{fig:TauSF_WTauNu_2}
\vspace{0.5in}
\parbox{5.3in}{\epsfxsize=\hsize\epsffile{5.2.2/tauSF_WTauNu_3.eps}}
\caption[L1 $\,/\!\!\!\!E_{T}>$ 25 GeV trigger efficiency vs. offline $\,/\!\!\!\!E_{T}$ for monojet event]
{L1 $\,/\!\!\!\!E_{T}>$ 25 GeV trigger efficiency vs. offline $\,/\!\!\!\!E_{T}$ for monojet event.}
\label{fig:TauSF_WTauNu_3}
\end{center}
\end{figure}
\subsection{Tau Scale Factor}
\label{subsec:TauSF_sf}
After all of the above, we count
the absolute number of $W\to\tau\nu$ events
$n_{data}$ and $n_{MC}$ for total integrated
luminosity 72 pb$^{-1}$. Their ratio will be
the tau scale factor.
\begin{itemize}
\item To get $n_{data}$, we will use the data sample
from the TAU\_MET trigger. We apply the
offline cuts to get the observed number
of $W\to\tau\nu$ candidates, and subtract
various backgrounds.
\item To get $n_{MC}$, we will use
$W\to\tau\nu$ MC simulation. We apply the
offline cuts, multiply the number of accepted events
by the monojet scale factor and
the trigger efficiency, and
normalize to 72~pb$^{-1}$.
\end{itemize}
The main sources of background are $W\to e\nu$,
$W\to\mu\nu$, $Z/\gamma^*\to\tau\tau$, and jet
background.
We will use MC simulation to get
$W\to e\nu$, $W\to\mu\nu$, and
$Z/\gamma^*\to\tau\tau$ backgrounds.
We apply the offline cuts, multiply the number of
accepted events by the monojet
scale factor and the trigger
efficiency, and normalize to 72~pb$^{-1}$.
For the normalization in MC,
$\sigma\cdot B(W\to l\nu)$ is 2700~pb~\cite{Acosta:2004uq},
and $\sigma\cdot B(Z/\gamma^*\to l l)$ is
326~pb with $m_{Z/\gamma^*}>30$~GeV/$c^2$,
which is obtained from the measured value of 250~pb~\cite{Acosta:2004uq}
at the Z boson mass peak with $66<m_{Z/\gamma^*}<116$~GeV/$c^2$,
extrapolated using the $Z/\gamma^*\to l l$ mass spectrum generated by PYTHIA.
The jet background will be estimated directly
from the data by applying the relative
jet$\to\tau$ misidentification rate and
the relative tau identification efficiency.
Since the cuts $|\eta_{det}|<1$,
10$^{\circ}$ track isolation and m(tracks)~$<2$~GeV/$c^2$
are applied in the trigger,
we use the relative rates up to the
denominator $D_{trkMass}$. Then we just follow
the implementation described in
section~\ref{subsec:TauId_jetBg}.
Table~\ref{tab:TauSF_sf_1} shows the procedure
to estimate the contributions of the signal and the
backgrounds from MC, the jet
background from data, and the observed
number of events in data.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c||c|c|} \hline
& $W\to\tau\nu$
& $W\to e\nu$
& $W\to\mu\nu$
& $Z/\gamma^*\to\tau\tau$
& \multicolumn{2}{c|}{etau08} \\ \cline{6-7}
& \multicolumn{1}{c|}{signal}
& \multicolumn{1}{c|}{bkgd}
& \multicolumn{1}{c|}{bkgd}
& \multicolumn{1}{c||}{bkgd}
& \multicolumn{1}{c|}{jet bkgd}
& \multicolumn{1}{c|}{observed} \\ \hline
event & 491513
& 1480550
& 760457
& 492000
& \multicolumn{2}{c|}{3747680} \\ \hline
hadronic tau & 319517
& N/A
& N/A
& N/A
& \multicolumn{2}{c|}{TAU\_MET 342164} \\
monojet & 11368
& 192806
& 7557
& 7311
& \multicolumn{2}{c|}{23818} \\
$\,/\!\!\!\!E_{T}>$ 30 GeV & 4874
& 154256
& 4535
& 3149
& \multicolumn{2}{c|}{17490} \\
tau ID & 1982
& 319
& 130
& 1230
& \multicolumn{2}{c|}{$D_{trkMass}$ 1519} \\ \hline
monojet SF & 1684.7
& 271.2
& 110.5
& 1045.5
& \multicolumn{1}{c|}{$\sum\omega^{jet}$}
& \multicolumn{1}{c|}{tau ID} \\ \cline{6-7}
trigger eff. & 1622.1
& 267.0
& 107.6
& 1012.3
& 81.8
& 814 \\
normalized & 638.8
& 34.9
& 27.4
& 48.3
& 81.8
& 814 \\ \hline
\end{tabular}
\caption[Estimate $W\to\tau\nu$ events]
{Expected number of events for the signal,
backgrounds and observed number of $W\to\tau\nu$ events.}
\label{tab:TauSF_sf_1}
\end{center}
\end{table}
The uncertainties include
\begin{itemize}
\item statistical uncertainty,
\item monojet scale factor: 2\%,
\item luminosity: 6\%~\cite{Klimenko:2003if}, and
\item $\sigma\cdot B(W\to l\nu)$ and
$\sigma\cdot B(Z/\gamma^*\to l l)$,
2\%, aside from luminosity uncertainty~\cite{Acosta:2004uq}.
\end{itemize}
Since there are discrepancies among the
jet$\to\tau$ misidentification rates
obtained from different jet samples, we
use the average jet$\to\tau$
misidentification rate to get a central
value of 81.8 events. The estimates using the
individual jet$\to\tau$
misidentification rate from JET20,
JET50, JET70, and JET100 samples are
90.3, 67.1, 72.8, and 66.3, respectively.
We take the biggest difference as the
uncertainty for the jet background:
$|$(66.3$-$81.8)/81.8$|$ = 18.9\%.
The numbers and the uncertainties of each
channel are summarized in
Table~\ref{tab:TauSF_sf_2}.
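As a simple cross check of Table~\ref{tab:TauSF_sf_2},
the total expectation and its uncertainty follow from summing
the channels and combining their uncertainties in quadrature
(treating them here as uncorrelated):
\begin{equation}
638.8+34.9+27.4+48.3+81.8 = 831.2, \qquad
\sqrt{42.7^2+2.9^2+3.0^2+3.3^2+15.5^2} \approx 45.7,
\end{equation}
consistent with the 814 events observed.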
We now arrive at the tau scale factor as follows:
\begin{eqnarray}
f^{\tau}_{data/MC} & = & \frac{n_{data}}
{n_{MC}} \nonumber \\
& = & \frac{n_{obs.} - n_{WZ\;bgs} - n_{jet\;bg}}
{n_{sig.}} \nonumber \\
& = & \frac{814 - (34.9 + 27.4 + 48.3) - 81.8}
{638.8} \nonumber \\
& = & 0.97\pm0.10
\end{eqnarray}
with statistical uncertainty and all of the
systematic uncertainties.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|} \hline
$W\to\tau\nu$ & $638.8\pm42.7$ \\
$W\to e\nu$ & $34.9\pm2.9$ \\
$W\to\mu\nu$ & $27.4\pm3.0$ \\
$Z/\gamma^*\to\tau\tau$ & $48.3\pm3.3$ \\
Jet$\to\tau$ & $81.8\pm15.5$ \\ \hline
Expected & $831.2\pm45.7$ \\ \hline
Observed & 814 \\ \hline
\end{tabular}
\caption[Expected and observed $W\to\tau\nu$ events]
{Cross check for the numbers of $W\to\tau\nu$ events.}
\label{tab:TauSF_sf_2}
\end{center}
\end{table}
Lastly we put signal and background together, and show the $W\to\tau\nu$
kinematic distributions in data and MC in Fig.~\ref{fig:TauSF_TAUMET}.
The agreement between data and MC is very good.
\begin{figure}
\begin{center}
\parbox{5.7in}{\epsfxsize=\hsize\epsffile{5.2.4/tauSF_TAUMET_thesis.eps}}
\caption[Distributions of tau variables using $W\to\tau\nu$ events]
{Distributions of hadronic tau identification using
$W\to\tau\nu$ events for data (points) and predicted
backgrounds (histograms).}
\label{fig:TauSF_TAUMET}
\end{center}
\end{figure}
\section{Electron Identification}
\label{sec:EleId}
Identification of electrons is based on the energy
they deposit in the calorimeter, their tracks in the COT,
and their positions in the CES.
The central electron reconstruction algorithm~\cite{CDFnote:5456}
starts with clusters in the CEM detector. The electromagnetic
towers are ordered in $E_T$ and the highest $E_T$ tower
that has not yet been clustered is taken as a seed. The
available shoulder towers are added to the cluster if they
are adjacent in $\eta$ to the seed, and the clusters are
restricted to two towers. The default thresholds for seed
towers and shoulder towers are 3.0 and 0.1 GeV, respectively.
For the leading electrons used in our analysis with the
$e + \tau_h$ channel, the thresholds for seed towers and
shoulder towers are set to 8 and 7.5 GeV, respectively.
Then we associate tracks with the candidate cluster. Of
all the associated tracks, the one with the highest $p_T$
is chosen as the match. The CES strip and wire clusters are
associated with the CEM cluster if they are reconstructed
in the same wedge. The ``best-matching'' CES cluster is
the one seeded by the matched track.
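The clustering step just described can be summarized by the
following minimal sketch (purely illustrative, not CDF
reconstruction code; the function name and data layout are
ours, with the default thresholds taken from the text above):
\begin{verbatim}
def cluster_em_towers(towers, seed_thr=3.0, shoulder_thr=0.1):
    # towers: list of (eta_index, Et) in one wedge; Et in GeV.
    towers = sorted(towers, key=lambda t: t[1], reverse=True)
    used, clusters = set(), []
    for i, (ieta, et) in enumerate(towers):
        if i in used or et < seed_thr:
            continue  # a seed must be unclustered and above threshold
        cluster = [(ieta, et)]
        used.add(i)
        # add at most one shoulder tower: unclustered, adjacent in
        # eta, above the shoulder threshold (clusters are limited
        # to two towers)
        for j, (jeta, jet_et) in enumerate(towers):
            if j not in used and jet_et >= shoulder_thr \
                    and abs(jeta - ieta) == 1:
                cluster.append((jeta, jet_et))
                used.add(j)
                break
        clusters.append(cluster)
    return clusters
\end{verbatim}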
\subsection{Electron Identification Cuts}
\label{subsec:EleId_cuts}
The electron identification~\cite{CDFnote:6580} cuts, and
the conversion veto~\cite{CDFnote:6250} cuts to remove electrons
from photon conversion, are listed in Table~\ref{tab:EleId_cuts}.
The $E_T$ and $p_T$ thresholds are not listed because
they depend on the process and trigger sample. The probe
electron must be a fiducial CEM electron and pass the
vertex $z$ cut.
\begin{table}
\begin{center}
\begin{tabular}{|l|l|l|l|} \hline
Variable & Cut & Note & Denominator \\ \hline \hline
region & ==0 & CEM & \\
fiducial & ==1 & fiducial $X_{CES}$,
$Z_{CES}$ & \\
$|z_0|$ & $<$60 cm & vertex $z$ & Probe \\
track ax. seg. & $\ge$3$\times$7 & COT axial segments & \\
track st. seg. & $\ge$3$\times$7 & COT stereo segments & \\
cal. isolation & $<$0.1 & cone 0.4 & \\
$E_{had}/E_{em}$ & $<$0.055+0.00045$\times$$E$ & had./em. & \\
$E/p$ & $<$4 (for $E_T<$100 GeV) & cal./track with brem. & \\
$L_{shr}$ & $<$0.2 & lateral shower profile & \\
$|\Delta X|$ & $<$3 cm & $X_{track} - X_{CES}$ & \\
$|\Delta Z|$ & $<$5 cm & $Z_{track} - Z_{CES}$ & \\
conversion veto & $|\Delta XY|$$<$0.2 cm, and & separation, and & \\
& $|\Delta\cot\theta|$$<$0.04 & parallel & Numerator \\ \hline
\end{tabular}
\caption[Electron identification cuts]
{Electron identification cuts.}
\label{tab:EleId_cuts}
\end{center}
\end{table}
\subsection{Electron Scale Factor}
\label{subsec:EleSF}
The electron identification scale factor is the ratio
of the efficiency in data/MC. The data sample is
from the TAU\_ELE trigger which requires an electron
with $E_T>$ 8 GeV, $p_T>$ 8 GeV/$c$ and an isolated track
with $p_T>$ 5 GeV/$c$. We study the electron scale factor
versus $E_T$.
\begin{itemize}
\item For medium-$E_T$ (between 5 GeV and 20 GeV) electrons,
the MC uses electrons
from $Z\to\tau\tau\to eX$, and in the real data we use
the second leg after selecting
$\Upsilon\to ee$.
We require the probe electrons have
$E_T>$ 5 GeV and $p_T>$ 5 GeV/$c$ in both the real data and the MC.
\item For high-$E_T$ (above 20 GeV) electrons, the MC uses electrons
from $Z\to ee$, and in the real data we use the second leg
after selecting $Z\to ee$.
We require the probe electrons have
$E_T>$ 20 GeV and $p_T>$ 10 GeV/$c$ in both the real data and the MC.
\end{itemize}
The procedure to select $\Upsilon\to ee$ events is:
\begin{itemize}
\item Require a tight electron with $E_T>$ 8
GeV, $p_T>$ 8 GeV/$c$ which are the trigger
requirements and the electron
identification cuts.
\item Require a probe electron with $E_T>$ 5
GeV, $p_T>$ 5 GeV/$c$.
\item The same-sign pairs will be used later for fitting
the slope of the background, and the opposite-sign
pairs will be used later for fitting signal plus
background.
\item Require the invariant mass of the $ee$
pair to lie in the range (0, 20) GeV/$c^2$.
\end{itemize}
The procedure to select $Z\to ee$ events is:
\begin{itemize}
\item Require a tight electron with $E_T>$ 20
GeV, $p_T>$ 10 GeV/$c$ and the electron
identification cuts.
\item Require a probe electron with $E_T>$ 20
GeV, $p_T>$ 10 GeV/$c$.
\item Require opposite sign.
\item Require the invariant mass of the $ee$
pair to lie in the range (75, 105) GeV/$c^2$.
\end{itemize}
The procedure to select the second leg is:
\begin{itemize}
\item Require exactly one $\Upsilon$ or Z boson.
\item If there is one tight electron, the probe electron
is the second leg.
\item If there are two tight electrons, both are used as
second legs.
\end{itemize}
Then we apply the set of electron identification
cuts under study on the second leg electrons in
data, and on the probe electrons in the MC.
The result of the procedure is shown in
Table~\ref{tab:EleSF_1}.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c||l|c|c|l|} \hline
& \multicolumn{2}{c||}{data}
&
& \multicolumn{2}{c|}{MC}
&
\\ \cline{2-3} \cline{5-6}
& $\Upsilon\to ee$
& \multicolumn{1}{c||}{$Z\to ee$}
&
& $Z\to\tau_e\tau_x$
& $Z\to ee$
&
\\ \cline{1-6}
event & \multicolumn{2}{c||}{11922805}
& event
& 492000
& 398665
&
\\ \cline{1-6}
good run & \multicolumn{2}{c||}{9103020}
&
&
&
&
\\
triggered & \multicolumn{2}{c||}{5575584}
&
&
&
&
\\
unique & \multicolumn{2}{c||}{5310963}
&
&
&
&
\\ \cline{1-6}
process & 10373 & 4534 & electron & 175515 & 797330 & \\ \cline{1-6}
second leg & 10770 & 7973 & match & 36733 & 204680 & Probe \\ \hline \hline
track ax. seg. $\ge3\times7$ & 10687 & 7946 & same & 36665 & 204266 & \\
track st. seg. $\ge3\times7$ & 10165 & 7721 & same & 36448 & 202907 & \\
cal. isolation $<0.1$ & 2797 & 7484 & same & 32094 & 197065 & \\
$E_{had}/E_{em}<0.055$+0.00045$E$ & 2553 & 7427 & same & 31290 & 194746 & \\
$E/p<4$ (for $E_T<100$ GeV) & 2551 & 7379 & same & 31187 & 194133 & \\
$L_{shr}<0.2$ & 2331 & 7318 & same & 30240 & 188653 & \\
$|\Delta X|<3$ cm & 2304 & 7198 & same & 30028 & 186360 & \\
$|\Delta Z|<5$ cm & 2292 & 7189 & same & 29976 & 186222 & \\
conversion veto & 2249 & 6878 & same & 29714 & 181449 & Id \\ \hline
\end{tabular}
\caption[Electron identification efficiency measurement]
{Number of events for electron identification efficiency
measurement.}
\label{tab:EleSF_1}
\end{center}
\end{table}
For the $Z\to ee$ selection in the real data,
the backgrounds in the sample with a tight electron plus
a probe electron and in the sample with two tight electrons
are both negligible, as confirmed by the negligible
number of same-sign events in these two samples.
For the $\Upsilon\to ee$ selection in the real data,
the backgrounds in the sample with a tight electron plus
a probe electron and in the sample with two tight electrons
are both significant.
The same-sign samples provide the shapes of the invariant
mass distribution of the backgrounds,
which are taken as the slopes of linear backgrounds.
Then in the opposite sign samples we fit the invariant
mass distributions by the ``Crystal Ball'' function~\cite{Skwarnicki:1986}
plus a linear background. The $\Upsilon\to ee$ invariant
mass distribution has a Bremsstrahlung tail on the lower-mass
side, where at least one of the electrons radiates.
The ``Crystal Ball'' line shape serves to model this:
a Gaussian core with a power-law tail.
The signal yield is obtained from the entries in the histogram
minus the integral of the linear background.
Up to this point, all of the $\Upsilon\to ee$ candidates
in the mass window (0,~20)~GeV/$c^2$ are accepted.
We then subtract background, as shown in
Fig.~\ref{fig:EleSF_1}.
The plot only shows the mass window (4,~15.5)~GeV/$c^2$.
The fit is performed in the mass window (5,~12.2)~GeV/$c^2$.
The fitting result is N(e + probe) = 818.0,
N(e + Id) = 644.4, efficiency = 78.8\%.
Now we put everything together to get the electron
scale factor vs. $E_T$ and perform a fit in $E_T$.
This is shown in Fig.~\ref{fig:EleSF_2}.
In data, the medium-$E_T$ electrons (5, 20) GeV are
from the second leg of $\Upsilon\to ee$,
and the high-$E_T$ electrons (30, 100) GeV are from
the second leg of $Z\to ee$; the
gap (20, 30) GeV has very low
statistics and is not used.
In MC, the medium-$E_T$ electrons (5, 20) GeV are
from the probe electrons of
$Z\to\tau\tau\to eX$,
and the high-$E_T$ electrons
(30, 100) GeV are from the probe
electrons of $Z\to ee$.
In each $E_T$ bin, the efficiency in data
divided by the efficiency in MC gives
the scale factor in that bin.
For all of the $E_T$ bins, the scale
factor is flat. A fit by a polynomial
of degree 0, which is exactly the
weighted average, gives a
scale factor of $0.974\pm0.004$.
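For reference, the degree-0 fit is the standard inverse-variance
weighted average of the per-bin scale factors $f_i \pm \sigma_i$:
\begin{equation}
f = \frac{\sum_i f_i/\sigma_i^2}{\sum_i 1/\sigma_i^2},
\qquad
\sigma_f = \left( \sum_i \frac{1}{\sigma_i^2} \right)^{-1/2}.
\end{equation}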
There are two bins, (45, 50) GeV and
(50, 100) GeV, with efficiency close to 100\%
in both data and MC.
The binomial uncertainty in this case is
always close to zero and therefore underestimated.
This propagates to the scale factors in
those two $E_T$ bins, and finally
propagates to the weighted average.
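For an efficiency $\epsilon = k/N$ measured from $k$ successes
out of $N$ trials, the binomial uncertainty is
\begin{equation}
\sigma_{\epsilon} = \sqrt{\frac{\epsilon(1-\epsilon)}{N}},
\end{equation}
which vanishes as $\epsilon \to 1$; this is why the uncertainties
in the nearly fully efficient bins are underestimated.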
There is also uncertainty in the (5, 20)
GeV bin due to $\Upsilon\to ee$ background
subtraction. This uncertainty is not
estimated.
We assign a conservative 4\% uncertainty for
electron scale factor~\cite{CDFnote:7288}:
\begin{equation}
f^{e}_{data/MC} = 0.97 \pm 0.04
\end{equation}
\begin{figure}
\begin{center}
\parbox{3.9in}{\epsfxsize=\hsize\epsffile{5.3.2/eleSF_1.eps}}
\caption[$\Upsilon\to ee$ in data]
{Distributions of the invariant mass
of $\Upsilon\to ee$ for medium $E_T$
electron identification
efficiency measurement in data.
Same-sign samples provide
the slopes of the linear backgrounds.
The numbers of $\Upsilon\to ee$
signal events are obtained
from fitting the histograms with
the ``Crystal Ball''
function plus a linear background
in the opposite-sign samples.}
\label{fig:EleSF_1}
\vspace{0.5in}
\parbox{3.9in}{\epsfxsize=\hsize\epsffile{5.3.2/eleSF_2.eps}}
\caption[Electron scale factor vs. $E_T$]
{Electron scale factor vs. $E_T$. This is
obtained from dividing the efficiency in
data by the efficiency in MC.}
\label{fig:EleSF_2}
\end{center}
\end{figure}
\section{Muon Identification}
\label{sec:MuId}
Muon reconstruction~\cite{CDFnote:5870} uses information from tracking,
the calorimeters and the muon chambers. The momentum is
measured from the curvature of the muon trajectory
bent by the magnetic field in the tracking system.
Muons behave as minimum ionizing particles and
are the only charged particles that can
travel through the large amount of material in
the calorimeters with a very small
energy loss. Muons are not stable, but they are
so long lived that they can reach the muon
chamber, leave hits there, and continue to travel and
decay outside the detector. These features
allow a rather simple and clean muon
identification.
\subsection{Muon Identification Cuts}
\label{subsec:MuId_cuts}
The muon identification cuts~\cite{CDFnote:6825} are listed in
Table~\ref{tab:MuId_cuts}.
We use COT-only tracks and add the beam constraint
to the track. The $E_T$ and $p_T$ thresholds are not
listed because they depend on the process and
trigger.
For data/MC scale factor studies, we require the
track to be fiducial which means that the track is
headed in a direction that will lead it to hit
enough chambers for a stub to be reconstructed.
All three subdetectors used in this analysis, CMU,
CMP, and CMX, require 3 hits in 3
different layers for a stub to be reconstructed.
We will study two kinds of data/MC scale
factors:
\begin{itemize}
\item Muon identification scale factor.
A fiducial stub muon and vertex $z$ cut
are required for the probe muon for this study,
called ``Probe (Id)'' in Table~\ref{tab:MuId_cuts}.
\item Marginal muon reconstruction scale
factor. For the probe muon in this study, we require
a fiducial track and a stubless muon, which also
carries the information on the energy loss in the
calorimeter, called ``Probe (Rec)'' in Table~\ref{tab:MuId_cuts}.
It is not necessary
to have hits in the muon chambers. The vertex $z$ cut,
calorimeter isolation cut, EM energy cut, and
hadronic energy cut are required. Then
we check if this track has a muon stub.
The default track $p_T$ threshold to
make a stubless muon is 10 GeV/$c$;
we lower it to 5 GeV/$c$ to allow more medium-$p_T$
stubless muons.
\end{itemize}
\begin{table}
\begin{center}
\begin{tabular}{|l|l|l|l|} \hline
Variable & Cut & Note & Probe \\ \hline \hline
$|z_0|$ & $<$60 cm & vertex $z$ & Probe (Id) \\
cal. isolation & $<$0.1 & cone 0.4 & \\
$E_{em}$ & $<$2+max(0,($p$$-$100)$\times$0.0115) GeV & EM energy & \\
$E_{had}$ & $<$6+max(0,($p$$-$100)$\times$0.028) GeV & had. energy & Probe (rec.) \\
$|d_0|$ & $<$0.2 cm & impact parameter & \\
track ax. seg. & $\ge$3$\times$7 & COT ax. seg. & \\
track st. seg. & $\ge$3$\times$7 & COT st. seg. & \\
$|\Delta x_{\mbox{CMU}}|$ & $<$3 cm (for CMUP) & $x_{\mbox{track}} - x_{\mbox{CMU}}$ & \\
$|\Delta x_{\mbox{CMP}}|$ & $<$5 cm (for CMUP) & $x_{\mbox{track}} - x_{\mbox{CMP}}$ & \\
$|\Delta x_{\mbox{CMX}}|$ & $<$6 cm (for CMX) & $x_{\mbox{track}} - x_{\mbox{CMX}}$ & \\
$\rho_{\mbox{COT}}$ & $>$140 cm (for CMX) & COT exit radius & \\ \hline
\end{tabular}
\caption[Muon identification cuts]
{Muon identification cuts.}
\label{tab:MuId_cuts}
\end{center}
\end{table}
\subsection{Muon Scale Factor}
\label{subsec:MuSF}
The muon identification scale factor is the ratio
of the identification efficiency in real data to that in MC.
The muon marginal reconstruction scale factor is
the ratio of the marginal reconstruction
efficiency in data to that in MC.
The data sample is from the TAU\_CMU trigger
which requires a CMUP muon with
$p_T>$ 8 GeV/$c$ and an isolated track with $p_T>$ 5
GeV/$c$. (A CMUP muon is required to have stubs in
both CMU and CMP). We study the muon scale factors versus muon $p_T$.
\begin{itemize}
\item For medium $p_T$ (between 5 and 20 GeV/$c$) muons, the MC uses muons from
$Z\to\tau\tau\to\mu X$, and in the data we use
the second leg after selecting
$\Upsilon\to\mu\mu$.
We require the probe muons have
$p_T>$ 5 GeV/$c$ in both the real data and the MC.
\item For high $p_T$ (above 20 GeV/$c$) muons, the MC uses muons from
$Z\to\mu\mu$, and in the data we use
the second leg after selecting
$Z\to\mu\mu$.
We require the probe muons have
$p_T>$ 20 GeV/$c$ in both the real data and the MC.
\end{itemize}
The procedure to select $\Upsilon\to\mu\mu$ events is:
\begin{itemize}
\item Cosmic veto~\cite{CDFnote:6089}.
\item Require a tight CMUP muon with $p_T>$
8 GeV/$c$ which are trigger requirements
and the CMUP muon identification cuts.
\item Require a probe muon with $p_T>$ 5
GeV/$c$.
\item Require $|z_0(1)-z_0(2)|<$ 4 cm.
\item Require opposite sign.
\item Require the invariant mass of the $\mu\mu$ pair
to lie in the window (7, 13) GeV/$c^2$. We
will use side bands for background
subtraction.
\end{itemize}
The procedure to select $Z\to\mu\mu$ events is:
\begin{itemize}
\item Cosmic veto.
\item Require a tight CMUP muon with
$p_T>$ 20 GeV/$c$ and the CMUP
muon identification cuts.
\item Require a probe muon with
$p_T>$ 20 GeV/$c$.
\item Require $|z_0(1)-z_0(2)|<$ 4 cm.
\item Require opposite sign. The
negligible number of same sign
events confirms that background
is negligible.
\item Require the invariant mass of the $\mu\mu$ pair
to lie in the window (80, 100) GeV/$c^2$.
\end{itemize}
The procedure to select the second leg is:
\begin{itemize}
\item Require exactly one $\Upsilon$ or
Z boson.
\item If there is one tight muon, the probe
muon is the second leg.
\item If there are two tight muons, both are used
as second legs.
\end{itemize}
\vspace{0.2in}
\noindent{\bf Muon Identification Scale Factor}
\vspace{0.1in}
In the muon identification scale factor study,
we apply the set of muon identification
cuts under study on the second leg muons in
data, and on the probe muons in the MC.
Table~\ref{tab:MuSF_1} shows the
procedure in data and Table~\ref{tab:MuSF_2}
shows the procedure in MC.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|l|} \hline
& \multicolumn{2}{c|}{$\Upsilon\to\mu\mu$}
& \multicolumn{2}{c|}{$Z\to\mu\mu$}
&
\\ \cline{1-5}
event & \multicolumn{4}{c|}{11922805}
&
\\ \cline{1-5}
good run & \multicolumn{4}{c|}{9103020}
&
\\
triggered & \multicolumn{4}{c|}{1881529}
&
\\
unique & \multicolumn{4}{c|}{1800059}
&
\\ \cline{1-5}
process & \multicolumn{2}{c|}{971}
& \multicolumn{2}{c|}{2025}
&
\\ \cline{1-5}
second leg & \multicolumn{2}{c|}{1047}
& \multicolumn{2}{c|}{2805}
&
\\ \cline{2-5}
& CMUP & CMX & CMUP & CMX & \\ \cline{2-5}
& 762 & 285 & 1820 & 985 & Probe \\ \hline \hline
$|d_0|<0.2$ cm & 758 & 283 & 1816 & 982 & \\
track ax. seg. $\ge3$$\times$7 & 756 & 283 & 1814 & 978 & \\
track st. seg. $\ge3$$\times$7 & 739 & 280 & 1777 & 955 & \\
cal. isolation $<0.1$ & 527 & 194 & 1750 & 947 & \\
$E_{em}<$ 2+max(0,($p$$-$100)$\times$0.0115) GeV & 525 & 191 & 1700 & 929 & \\
$E_{had}<$ 6+max(0,($p$$-$100)$\times$0.028) GeV & 524 & 191 & 1670 & 907 & \\
$|\Delta x_{\mbox{CMU}}|<3$ cm
($|\Delta x_{\mbox{CMX}}|<5$ cm) & 427 & 129 & 1590 & 877 & \\
$|\Delta x_{\mbox{CMP}}|<6$ cm
($\rho_{\mbox{COT}}>140$ cm) & 287 & 113 & 1560 & 753 & Id \\ \hline
\end{tabular}
\caption[Muon identification efficiency measurement in data]
{Number of events for muon identification efficiency
measurement in data.}
\label{tab:MuSF_1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|l|} \hline
& \multicolumn{2}{c|}{$Z\to\tau_{\mu}\tau_x$}
& \multicolumn{2}{c|}{$Z\to\mu\mu$}
&
\\ \cline{1-5}
event & \multicolumn{2}{c|}{492000}
& \multicolumn{2}{c|}{405291}
&
\\ \cline{1-5}
muon & \multicolumn{2}{c|}{170596}
& \multicolumn{2}{c|}{810582}
&
\\ \cline{1-5}
match & \multicolumn{2}{c|}{32388}
& \multicolumn{2}{c|}{170382}
&
\\ \cline{2-5}
& CMUP & CMX & CMUP & CMX & \\ \cline{2-5}
& 20516 & 11872 & 107481 & 62901 & Probe \\ \hline \hline
$|d_0|<0.2$ cm & 20499 & 11827 & 107410 & 62692 & \\
track ax. seg. $\ge3$$\times$7 & 20490 & 11732 & 107348 & 62184 & \\
track st. seg. $\ge3$$\times$7 & 20417 & 11601 & 106985 & 61463 & \\
cal. isolation $<0.1$ & 18361 & 10448 & 104247 & 59929 & \\
$E_{em}<$ 2+max(0,($p$$-$100)$\times$0.0115) GeV & 18055 & 10261 & 100115 & 57680 & \\
$E_{had}<$ 6+max(0,($p$$-$100)$\times$0.028) GeV & 17871 & 10080 & 97906 & 55799 & \\
$|\Delta x_{\mbox{CMU}}|<3$ cm
($|\Delta x_{\mbox{CMX}}|<5$ cm) & 16799 & 8994 & 97707 & 55617 & \\
$|\Delta x_{\mbox{CMP}}|<6$ cm
($\rho_{\mbox{COT}}>140$ cm) & 14198 & 7419 & 96788 & 46059 & Id \\ \hline
\end{tabular}
\caption[Muon identification efficiency measurement in MC]
{Number of events for muon identification efficiency
measurement in MC.}
\label{tab:MuSF_2}
\end{center}
\end{table}
Up to this point, all of the $\Upsilon\to\mu\mu$ candidates
in the mass window (7, 13) GeV/$c^2$ are accepted.
Now we break the probe into two $p_T$
bins: 5 $<p_T<$ 10 GeV/$c$ and 10 $<p_T<$ 20 GeV/$c$.
Fig.~\ref{fig:MuSF_1} shows the distributions
of the pair mass of the first leg and the second leg
in each $p_T$ bin of the second leg, for CMUP probe.
We see three clear peaks at about 9.5, 10 and 10.3
GeV/$c^2$. This is the signature of $\Upsilon\to\mu\mu$.
Now we subtract the linear background. The signal mass window is
defined as (9.2, 10.6) GeV/$c^2$. We use a side-band
method: the yield is the number of entries in the
(9.2, 10.6) GeV/$c^2$ signal window minus the entries
in the (7.8, 8.5) and (11.3, 12.0) GeV/$c^2$
side bands.
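Written out, with $N_{(a,b)}$ denoting the number of entries in
the mass window $(a, b)$ GeV/$c^2$,
\begin{equation}
N_{signal} = N_{(9.2,\,10.6)} - N_{(7.8,\,8.5)} - N_{(11.3,\,12.0)}.
\end{equation}
This direct subtraction is exact for a linear background because
the combined side-band width, $0.7+0.7=1.4$ GeV/$c^2$, equals the
signal window width, and the side bands are placed symmetrically
about the center of the signal window.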
In the $5<p_T<10$ ($10<p_T<20$) GeV/$c$ bin,
N(muon + probe) = 410 (85) and
N(muon + Id) = 168 (56),
giving an efficiency of 41.0\% (65.9\%).
We put everything together to get the CMUP muon
identification scale factor vs. $p_T$ and perform
a fit in $p_T$. This is shown in Fig.~\ref{fig:MuSF_3}.
Fig.~\ref{fig:MuSF_2} shows the
mass distribution of muon pair in each $p_T$ bin
of the second leg, for CMX probe.
In the $5<p_T<10$ ($10<p_T<20$) GeV/$c$ bin,
N(muon + probe) = 126 (32) and
N(muon + Id) = 57 (19),
giving an efficiency of 45.2\% (59.4\%).
Fig.~\ref{fig:MuSF_4} shows
the procedure to get the CMX muon identification
scale factor.
Analogous to the
electron scale factor study, we assign
a conservative uncertainty of 4\%. The resulting
identification scale factors are
$0.93\pm0.04$ for CMUP muon, and
$1.03\pm0.04$ for CMX muon.
\begin{figure}
\begin{center}
\parbox{3.9in}{\epsfxsize=\hsize\epsffile{5.4.2/muSF_1_CMUP.eps}}
\caption[$\Upsilon\to\mu\mu$ for CMUP muon identification efficiency measurement]
{Distributions of the invariant mass
of $\Upsilon\to\mu\mu$ for medium $p_T$
CMUP muon identification
efficiency measurement in data.
The three peaks are signature
of $\Upsilon\to\mu\mu$.
The left two plots are for CMUP
muons with $5<p_T<10$ GeV/$c$.
The right two plots are for CMUP
muons with $10<p_T<20$ GeV/$c$.
Side-band method is used for
background subtractions.}
\label{fig:MuSF_1}
\vspace{0.5in}
\parbox{3.9in}{\epsfxsize=\hsize\epsffile{5.4.2/muSF_2_CMUP.eps}}
\caption[CMUP muon identification scale factor vs. $p_T$]
{CMUP muon identification scale factor vs. $p_T$.
This is obtained from dividing the efficiency
in data by the efficiency in MC.}
\label{fig:MuSF_3}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\parbox{3.9in}{\epsfxsize=\hsize\epsffile{5.4.2/muSF_1_CMX.eps}}
\caption[$\Upsilon\to\mu\mu$ for CMX muon identification efficiency measurement]
{Distributions of the invariant mass
of $\Upsilon\to\mu\mu$ for medium $p_T$
CMX muon identification
efficiency measurement in data.
The three peaks are signature
of $\Upsilon\to\mu\mu$.
The left two plots are for CMX
muons with $5<p_T<10$ GeV/$c$.
The right two plots are for CMX
muons with $10<p_T<20$ GeV/$c$.
Side-band method is used for
background subtractions.}
\label{fig:MuSF_2}
\vspace{0.5in}
\parbox{3.9in}{\epsfxsize=\hsize\epsffile{5.4.2/muSF_2_CMX.eps}}
\caption[CMX muon identification scale factor vs. $p_T$]
{CMX muon identification scale factor vs. $p_T$.
This is obtained from dividing the efficiency
in data by the efficiency in MC.}
\label{fig:MuSF_4}
\end{center}
\end{figure}
\vspace{0.2in}
\noindent{\bf Muon Reconstruction Scale Factor}
\vspace{0.1in}
In the reconstruction scale factor study, the probe
is reconstructed as a stubless muon object, i.e., a fiducial
track with the associated calorimeter information, which may
or may not in fact have a stub in the muon chambers.
It must pass the vertex $z$ cut, the calorimeter
isolation cut, the EM energy cut, and the hadronic energy cut.
Table~\ref{tab:MuSF_3} shows the
procedure in data and Table~\ref{tab:MuSF_4}
shows the procedure in MC.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|l|} \hline
& \multicolumn{2}{c|}{$\Upsilon\to\mu\mu$}
& \multicolumn{2}{c|}{$Z\to\mu\mu$}
&
\\ \cline{1-5}
event & \multicolumn{4}{c|}{11922805}
&
\\ \cline{1-5}
good run & \multicolumn{4}{c|}{9103020}
&
\\
triggered & \multicolumn{4}{c|}{1881529}
&
\\
unique & \multicolumn{4}{c|}{1800059}
&
\\ \cline{1-5}
process & \multicolumn{2}{c|}{691}
& \multicolumn{2}{c|}{1861}
&
\\ \cline{1-5}
second leg & \multicolumn{2}{c|}{760}
& \multicolumn{2}{c|}{2583}
&
\\ \cline{2-5}
& CMUP & CMX & CMUP & CMX & \\ \cline{2-5}
& 570 & 190 & 1718 & 865 & Probe \\ \hline \hline
has stub & 474 & 170 & 1569 & 838 & Rec. \\ \hline
\end{tabular}
\end{center}
\caption[Muon reconstruction efficiency measurement in data]
{Number of events for muon reconstruction efficiency
measurement in data.}
\label{tab:MuSF_3}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|l|} \hline
& \multicolumn{2}{c|}{$Z\to\tau_{\mu}\tau_x$}
& \multicolumn{2}{c|}{$Z\to\mu\mu$}
&
\\ \cline{1-5}
event & \multicolumn{2}{c|}{492000}
& \multicolumn{2}{c|}{405291}
&
\\ \cline{1-5}
muon & \multicolumn{2}{c|}{170596}
& \multicolumn{2}{c|}{810582}
&
\\ \cline{1-5}
match & \multicolumn{2}{c|}{27109}
& \multicolumn{2}{c|}{149411}
&
\\ \cline{2-5}
& CMUP & CMX & CMUP & CMX & \\ \cline{2-5}
& 17334 & 9775 & 95503 & 53908 & Probe \\ \hline \hline
has stub & 16672 & 9709 & 93044 & 53827 & Rec. \\ \hline
\end{tabular}
\end{center}
\caption[Muon reconstruction efficiency measurement in MC]
{Number of events for muon reconstruction efficiency
measurement in MC.}
\label{tab:MuSF_4}
\end{table}
We break the second leg muon into two
$p_T$ bins: 5 $<p_T<$ 10 GeV/$c$ and 10 $<p_T<$ 20 GeV/$c$.
Fig.~\ref{fig:MuSF_5} shows the distributions
of the pair mass of the first leg and the second leg
in each $p_T$ bin of the second leg, for CMUP probe.
We use the side-band method to do background subtraction.
In the $5<p_T<10$ ($10<p_T<20$) GeV/$c$ bin,
N(muon + probe) = 307 (65) and
N(muon + stub) = 272 (59),
giving an efficiency of 88.6\% (90.8\%).
We put everything together to get the muon
reconstruction scale factor vs. $p_T$ and perform
a fit in $p_T$. This is shown in Fig.~\ref{fig:MuSF_7}.
Fig.~\ref{fig:MuSF_6} shows the mass distribution of muon pair
in each $p_T$ bin of the second leg, for CMX probe.
In the $5<p_T<10$ ($10<p_T<20$) GeV/$c$ bin,
N(muon + probe) = 92 (22) and
N(muon + stub) = 85 (21),
giving an efficiency of 92.4\% (95.5\%).
Fig.~\ref{fig:MuSF_8} shows the procedure to get
the CMX muon reconstruction scale factor.
As in the
electron scale factor study, we assign
a conservative systematic uncertainty of 4\%. The results
of the reconstruction scale factors are
$0.94\pm0.04$ for CMUP muon, and
$0.97\pm0.04$ for CMX muon.
\begin{figure}
\begin{center}
\parbox{3.9in}{\epsfxsize=\hsize\epsffile{5.4.2/muSF_3_CMUP.eps}}
\caption[$\Upsilon\to\mu\mu$ for CMUP muon reconstruction efficiency measurement]
{Distributions of the invariant mass
of $\Upsilon\to\mu\mu$ for medium $p_T$
CMUP muon reconstruction
efficiency measurement in data.
The three peaks are signature
of $\Upsilon\to\mu\mu$.
The left two plots are for CMUP
muons with $5<p_T<10$ GeV/$c$.
The right two plots are for CMUP
muons with $10<p_T<20$ GeV/$c$.
Side-band method is used for
background subtractions.}
\label{fig:MuSF_5}
\vspace{0.5in}
\parbox{3.9in}{\epsfxsize=\hsize\epsffile{5.4.2/muSF_4_CMUP.eps}}
\caption[CMUP muon reconstruction scale factor vs. $p_T$]
{CMUP muon reconstruction scale factor vs. $p_T$.
This is obtained from dividing the efficiency
in data by the efficiency in MC.}
\label{fig:MuSF_7}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\parbox{3.9in}{\epsfxsize=\hsize\epsffile{5.4.2/muSF_3_CMX.eps}}
\caption[$\Upsilon\to\mu\mu$ for CMX muon reconstruction efficiency measurement]
{Distributions of the invariant mass
of $\Upsilon\to\mu\mu$ for medium $p_T$
CMX muon reconstruction
efficiency measurement in data.
The three peaks are signature
of $\Upsilon\to\mu\mu$.
The left two plots are for CMX
muons with $5<p_T<10$ GeV/$c$.
The right two plots are for CMX
muons with $10<p_T<20$ GeV/$c$.
Side-band method is used for
background subtractions.}
\label{fig:MuSF_6}
\vspace{0.5in}
\parbox{3.9in}{\epsfxsize=\hsize\epsffile{5.4.2/muSF_4_CMX.eps}}
\caption[CMX muon reconstruction scale factor vs. $p_T$]
{CMX muon reconstruction scale factor vs. $p_T$.
This is obtained from dividing the efficiency
in data by the efficiency in MC.}
\label{fig:MuSF_8}
\end{center}
\end{figure}
We summarize the muon reconstruction and identification
scale factors with uncertainties:
\begin{eqnarray}
f^{CMUP \;\; rec}_{data/MC} & = & 0.94\pm0.04 \\
f^{CMUP \;\; Id}_{data/MC} & = & 0.93\pm0.04 \\
f^{CMX \;\; rec}_{data/MC} & = & 0.97\pm0.04 \\
f^{CMX \;\; Id}_{data/MC} & = & 1.03\pm0.04
\end{eqnarray}
\section{Missing Transverse Energy}
\label{sec:MET}
Weakly interacting particles, such as neutrinos of
the Standard Model and the lightest supersymmetric
particle (LSP) predicted in new physics, deposit
no energy in the calorimeters. Minimum
ionizing particles such as muons leave little
energy in the calorimeters. When present, these
particles cause a significant imbalance in the vector sum
of the transverse energy of all of the detected particles.
This imbalance, i.e., the negative of the vector sum
in the transverse plane, corresponds to the missing
transverse energy ($\,/\!\!\!\!E_{T}$).
Since $\,/\!\!\!\!E_{T}$ measures the vector sum of the
momenta of all particles escaping detection in the calorimeters,
there is no information on the energy and
direction of an individual particle or on how many
particles escaped detection. With many such
particles in an event, there is also a chance
that their transverse momenta cancel each
other.
There is an instrumental source of $\,/\!\!\!\!E_{T}$
because the calorimeters are not perfect. There
are crack regions due to the support structures,
and the transition regions between components,
for example from the central calorimeters to the
plug calorimeters.
The probability that all the energy of a particle
is undetected is rather small. But QCD processes
have a large production rate. Some of the jets
can have a lot of energy undetected and make a
significant $\,/\!\!\!\!E_{T}$.
In our high-mass tau pair analysis, we will use
an $\,/\!\!\!\!E_{T}$ cut and several other kinematic cuts
related to $\,/\!\!\!\!E_{T}$. To evaluate the uncertainty
due to the instrumental $\,/\!\!\!\!E_{T}$, we obtain
the distributions in data and MC and compare
the same variable.
In the real data, the physics processes
$Z\to ee$ and $\gamma$ + jet, which have
zero true missing energy, can be
used to study the effect of the
instrumental $\,/\!\!\!\!E_{T}$. The latter is a
better choice for our purpose because
hadronic taus in the calorimeters are more
like jets than electrons. The inclusive
photon sample is used to select
$\gamma$ + jet events. Jets are required
to be reconstructed as hadronic tau
objects. The true $\,/\!\!\!\!E_{T}$ in this sample
should be zero. The reconstructed
$\,/\!\!\!\!E_{T}$ corresponds to the instrumental
$\,/\!\!\!\!E_{T}$ in data.
The simulation uses the $Z\to\tau_e\tau_h$ process and
requires a tight electron and a hadronic
tau object. The reconstructed $\,/\!\!\!\!E_{T}$ in the
simulation minus the $\,/\!\!\!\!E_{T}$ from neutrinos
corresponds to the instrumental $\,/\!\!\!\!E_{T}$ in MC.
Then the instrumental $\,/\!\!\!\!E_{T}$ is projected onto the
direction of the hadronic tau object, as shown in
Fig.~\ref{fig:MET}. The distributions in data
and MC differ for both the longitudinal
component and the transverse component.
To get the uncertainty due to the instrumental
$\,/\!\!\!\!E_{T}$, we ``smear'' the longitudinal component and
the transverse component of the instrumental
$\,/\!\!\!\!E_{T}$ in MC according to their differences
between data and MC, then add neutrinos back
to get the smeared~$\,/\!\!\!\!E_{T}$.
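Schematically (in our own notation), writing the MC instrumental
$\,/\!\!\!\!E_{T}$ in terms of its longitudinal and transverse
components $L$ and $T$ with respect to the tau direction,
\begin{equation}
\,/\!\!\!\!\vec{E}_{T}^{\;smeared} = \,/\!\!\!\!\vec{E}_{T}^{\;\nu}
+ L^{\prime}\,\hat{u}_{\parallel} + T^{\prime}\,\hat{u}_{\perp},
\end{equation}
where $L^{\prime}$ and $T^{\prime}$ are the components after
smearing to match the data distributions, and
$\hat{u}_{\parallel}$ ($\hat{u}_{\perp}$) is the unit vector
along (perpendicular to) the tau direction.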
Now we can calculate the uncertainty of the cuts
related to $\,/\!\!\!\!E_{T}$ from the change in acceptance
with and without smearing the instrumental $\,/\!\!\!\!E_{T}$.
Table~\ref{tab:MET} shows the effect in
$\tau_e\tau_h$ channel of $Z^\prime(m=300\mbox{ GeV/}c^2)$ sample
and $Z^\prime(m=600\mbox{ GeV/}c^2)$ sample. The uncertainties
are
(1191-1125)/1125 = 5.9\% in $Z^\prime(300)$
sample and
(1875-1814)/1814 = 3.4\% in $Z^\prime(600)$
sample. Taking the larger value, we find that
the uncertainty in acceptance due~to~$\,/\!\!\!\!E_{T} \approx 6\%$.
\begin{figure}
\begin{center}
\parbox{3.0in}{\epsfxsize=\hsize\epsffile{5.5/met.eps}}
\caption[Instrumental $\,/\!\!\!\!E_{T}$ in data and MC]
{Distributions of the instrumental $\,/\!\!\!\!E_{T}$ in data
using $\gamma$+jet sample and MC using
$Z\to\tau_e\tau_h$ sample.
Instrumental $\,/\!\!\!\!E_{T}$ is projected to
the direction of the reconstructed
leading tau object to get the longitudinal
and transverse components.}
\label{fig:MET}
\end{center}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|} \hline
sample & \multicolumn{2}{c|}{$Z^\prime(m=300\mbox{ GeV/}c^2)$}
& \multicolumn{2}{c|}{$Z^\prime(m=600\mbox{ GeV/}c^2)$} \\ \hline
$\tau\tau$ event
& \multicolumn{2}{c|}{100000}
& \multicolumn{2}{c|}{100000} \\
$\tau_e\tau_h$ decay mode
& \multicolumn{2}{c|}{23246}
& \multicolumn{2}{c|}{23250} \\
$E_T^e>$ 10 GeV, $p_T^{\tau}>25$ GeV/$c$
& \multicolumn{2}{c|}{2135}
& \multicolumn{2}{c|}{3044} \\ \hline
smear instrumental $\,/\!\!\!\!E_{T}$
& $\;\;\;\;$ no $\;\;\;\;$
& $\;\;\;\;$ yes $\;\;\;\;$
& $\;\;\;\;$ no $\;\;\;\;$
& $\;\;\;\;$ yes $\;\;\;\;$ \\ \hline
$\,/\!\!\!\!E_{T}>$ 15 GeV & 1720 & 1801 & 2745 & 2829 \\
$\Delta\phi(e - \,/\!\!\!\!E_{T})<30^{\circ}$ & 1231 & 1299 & 1844 & 1907 \\
$m_{vis}>$ 120 GeV/$c^2$ & 1125 & 1191 & 1814 & 1875 \\ \hline
uncertainty
& \multicolumn{2}{c|}{5.9\%}
& \multicolumn{2}{c|}{3.4\%} \\ \hline
\end{tabular}
\caption[The uncertainty in acceptance due to instrumental $\,/\!\!\!\!E_{T}$]
{Number of $Z'\to\tau\tau$ events to study
the uncertainty in acceptance due to
the imperfect modeling of the instrumental
$\,/\!\!\!\!E_{T}$ in MC simulation. The uncertainty
is obtained from the effect of with/without
``smearing'' the instrumental $\,/\!\!\!\!E_{T}$ in
MC to that in data.}
\label{tab:MET}
\end{center}
\end{table}
\chapter{Event Kinematic Selection}
\label{cha:event}
In this chapter we first discuss the trigger paths.
Second, we discuss the good run selections and the
integrated luminosities.
Third, in addition to the particle identification,
we add event kinematic cuts to further suppress
backgrounds. Since the kinematic cuts need to keep
high efficiency for the signals, optimization of
the event kinematic cuts is necessary.
Fourth, there are thresholds in the triggers. The
trigger primitives are not exactly the same as the
offline variables we cut on, and so we need to evaluate
the marginal trigger efficiencies for selected events.
\section{Trigger Path}
\label{sec:event_trigger}
For the $\tau_e\tau_h$ selection, we use
the ``electron plus track'' trigger
called TAU\_ELE. It requires an electron in
the CEM detector with $E_T>8$ GeV, $p_T>8$ GeV/$c$
and an isolated track with $p_T>5$ GeV/$c$.
For the $\tau_{\mu}\tau_h$ selection, there are two
``muon plus track'' triggers called TAU\_CMU (TAU\_CMX)
which require a CMUP (CMX) muon with
$p_T>8$~GeV/$c$ and an isolated track with
$p_T>5$~GeV/$c$.
For the $\tau_h\tau_h$ selection, we use
the ``$\,/\!\!\!\!E_{T}$ plus tau'' trigger
called TAU\_MET. It requires L1 $\,/\!\!\!\!E_{T}>25$
GeV and an L3 tau object with $E_T>20$ GeV,
track isolation and
$m$(tracks)~$<2.0$~GeV/$c^2$.
The TAU\_ELE, TAU\_CMU, and TAU\_CMX triggers
are cleaned up by requiring an isolated track.
The TAU\_MET trigger requires only one
isolated tau object; thus the other tau objects
in this trigger are not necessarily isolated.
The track isolation requirement in these triggers
is that there is no additional track in a
10$^{\circ}$ to 30$^{\circ}$ annulus. This track isolation is
looser than the offline tau track isolation with a
shrinking inner cone.
The detailed descriptions of the tau triggers
can be found in Ref.~\cite{Anastassov:2003vc}.
In addition to selecting the candidate events,
there is also an important issue regarding
the jet$\to\tau$ misidentification background.
This fake background is not negligible because
of the large production rate of jets.
Using MC simulation to model all the processes
of the fake background is not adequate.
We estimate the contribution of these events
directly from real data.
For the purpose of estimating jet$\to\tau$
misidentification background, it is better to
use those triggers without the isolation
requirement in order to have a sample which
has a larger statistics and is dominated by
jet background.
There is an ELECTRON\_CENTRAL\_8 (abbreviated
as CELE8) trigger which has the same
requirement as TAU\_ELE but without the track
isolation requirement. There is also a
MUON\_CMUP8 (abbreviated as CMUP8) trigger
which has the same requirement as TAU\_CMU
but without the track isolation requirement.
The CELE8 and the CMUP8 triggers are
dynamically prescaled. A prescale is imposed
to reduce the rate of a trigger. A fixed
prescale under-utilizes the trigger bandwidth
when the luminosity falls during a run. A
dynamic prescale is based on the availability
of the trigger bandwidth, and automatically
reduces the prescales as the luminosity falls.
There is not a corresponding trigger path
available for the TAU\_CMX trigger. There
is a prescaled trigger available for the
TAU\_MET trigger, but its prescale is 100,
which is too large. Thus their jet$\to\tau$
fake background estimates have to be done
with the trigger itself.
\section{Good Run Selection and Integrated Luminosity}
\label{sec:event_GTU}
We use the data samples collected in CDF from
March 2002 to September 2003 for this analysis.
The Good Run List~\cite{CDFnote:5613} used in
this analysis is in the range of the run number
141544$-$168889.
We use the online initial filtering and the
offline periodic classification to decide
whether a run is good or bad. The former gets
rid of obviously bad runs where there are
problems with the sub-detectors or the triggers.
The latter is based on classification using
a large sample in a run, for example, the
$J/\Psi\to ee, \; \mu\mu$ events, which are
expected to have a very narrow mass peak,
or the photon plus jet events,
which are expected to have very good
energy balancing.
The status of a trigger or a sub-detector is a
single bit 1 or 0, which means good or bad.
The bit 1 or 0 of a trigger is based on whether
the deadtime is less than 5\% and is set by the
online run control shift crew. The bit 1 or 0
of a sub-detector at the online stage is based
on the status of the high voltage, the calibration,
the occupancy, etc. and is set by the monitoring
operator. The bit 1 or 0 of a sub-detector at the
offline stage is based on, for example, the
reconstructed $J/\Psi\to ee, \; \mu\mu$ mass, which
can reveal possible problems in the tracking system,
the calorimeters or the muon chambers, and is set
by the physics groups.
Here are the details of the requirements on a
good run.
There are several run configurations (trigger
tables) when the CDF detector is taking data:
test, calibration, cosmic, and physics. A
good run must be a physics run.
At the online stage the losses of the beam should
be low. The ``on-tape'' luminosity should be
greater than 10 nb$^{-1}$. The bits of
the L1, L2, L3 triggers,
the calorimeters, the CMU detector, the CES
detector should be 1.
At the offline stage the bits of the calorimeters,
the COT detector, the CMU and CMP detectors
should be 1.
Runs after 150145, when the CMX trigger L1 hardware
was updated, are required, in addition to the bits
above, to have the online and
offline bits of the CMX detector set to 1.
The total integrated luminosity in the included
good runs in the run number range 141544$-$168889
is 195~pb$^{-1}$. However, the good run range of the
data sample from the TAU\_CMX trigger starts from
150145 and its integrated luminosity is 179~pb$^{-1}$;
the good run range of the data sample from the TAU\_MET
trigger stops at 156487 and its integrated luminosity
is 72~pb$^{-1}$. The TAU\_MET trigger was changed
after run 156487 to include L2 two-dimensional track
isolation which needs further study. The uncertainty
in the luminosity measurements is about
6\%~\cite{Klimenko:2003if}.
The integrated luminosity in the data sample
from the CELE8 trigger, which is
dynamically prescaled, is 46~pb$^{-1}$. It
is calculated by adding the isolated track
requirement and comparing its survived number
of events with the total number of events in
the data sample from the TAU\_ELE trigger
whose luminosity is known. Analogously, the
integrated luminosity in the data sample from
the CMUP8 trigger is found to be 38~pb$^{-1}$.
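Schematically (in our own notation), the effective integrated
luminosity of a dynamically prescaled sample is
\begin{equation}
{\cal L}\,(\mbox{CELE8}) = {\cal L}\,(\mbox{TAU\_ELE}) \times
\frac{N(\mbox{CELE8 with isolated track})}{N(\mbox{TAU\_ELE})},
\end{equation}
and similarly for CMUP8 with respect to TAU\_CMU.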
Some events were incorrectly processed, later
reprocessed, and left as duplicates in the data
samples; we reprocessed all of the events.
To avoid double counting, we keep only one copy
and require that each event be unique.
\section{Selection Criteria}
\label{sec:event_cuts}
The event kinematic cuts are designed to further
suppress backgrounds while keeping high signal
efficiency.
Table~\ref{tab:event_cuts} shows the list of cuts
for event selection.
We note several features of the requirements:
\begin{itemize}
\item The $p_T^{\tau}$ threshold is 25 GeV/$c$ because
tau identification is fully efficient at about
25 GeV/$c$ and it is a high threshold to reduce
background.
\item The $E_T^e$, $p_T^e$, and $p_T^{\mu}$ thresholds
are 10 GeV, 10 GeV/$c$ and 10 GeV/$c$, respectively.
(The thresholds in the corresponding triggers are
8 GeV, 8 GeV/$c$ and 8 GeV/$c$.) For $\tau_h\tau_h$,
we require the second tau $p_T^{\tau_2}>10$ GeV/$c$.
\item The $\,/\!\!\!\!E_{T}$ cut and the angle cut
$\Delta\phi(l-\,/\!\!\!\!E_{T})<30^{\circ}$ are designed to
remove hadronic jet backgrounds. They are
explained below.
\item We use $m_{vis}>$ 120 GeV/$c^2$ cut to remove
the ``irreducible'' $Z/\gamma^*\to\tau\tau$
background. The low mass region with
$m_{vis}<$ 120 GeV/$c^2$ is our control region.
\item For the $\tau_{\mu}\tau_h$ selection, we have
a cosmic veto~\cite{CDFnote:6089}.
\item For the $\tau_h\tau_h$ selection, we require that
the second tau have exactly one track to further
clean up QCD backgrounds. We will check the tau
signature by the track multiplicity on the leading
tau side.
\end{itemize}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|} \hline
$\tau_e\tau_h$ & $\tau_{\mu}\tau_h$ & $\tau_h\tau_h$ \\ \hline \hline
$p_T^{\tau}>25$ & $p_T^{\tau}>25$ & $p_T^{\tau_1}>25$ \\
$E_T^e>10$, $p_T^e>10$ & $p_T^{\mu}>10$ & $p_T^{\tau_2}>10$ \\
$\,/\!\!\!\!E_{T}>15$ & $\,/\!\!\!\!E_{T}>15$ & $\,/\!\!\!\!E_{T}>25$ \\
$\Delta\phi(e-\,/\!\!\!\!E_{T})<30^{\circ}$ & $\Delta\phi(\mu-\,/\!\!\!\!E_{T})<30^{\circ}$ & $\Delta\phi(\tau_2-\,/\!\!\!\!E_{T})<30^{\circ}$ \\
$m(e+\tau+\,/\!\!\!\!E_{T})>120$ & $m(\mu+\tau+\,/\!\!\!\!E_{T})>120$ & $m(\tau_1+\tau_2+\,/\!\!\!\!E_{T})>120$ \\
& cosmic veto & $\tau_2$ num. track == 1 \\ \hline
\end{tabular}
\caption[Event kinematic cuts]
{Event kinematic cuts. Energies are in GeV, momenta in GeV/$c$,
and masses in GeV/$c^2$.}
\label{tab:event_cuts}
\end{center}
\end{table}
The $\,/\!\!\!\!E_{T}$ measured in the $\tau_{\mu}\tau_h$ channel
needs a muon correction since muons are
minimum ionizing particles and leave most of
their momentum undetected in the calorimeters.
The procedure
of the muon correction is: first, we subtract the
$p_T$ of a tight muon; second, we add the muon energy
deposits in the calorimeters back to avoid counting the
same energy twice.
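Schematically (in our own notation), the correction is applied
vectorially in the transverse plane:
\begin{equation}
\,/\!\!\!\!\vec{E}_{T}^{\;corr} = \,/\!\!\!\!\vec{E}_{T}^{\;raw}
- \vec{p}_{T}^{\;\mu} + \vec{E}_{T}^{\;\mu,\,cal},
\end{equation}
where $\vec{E}_{T}^{\;\mu,\,cal}$ is the transverse energy the
muon deposits in the calorimeters, which is already included in
the raw $\,/\!\!\!\!E_{T}$ sum.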
We require $\,/\!\!\!\!E_{T}>15$ GeV for the $\tau_e\tau_h$
and $\tau_{\mu}\tau_h$ selections. For the
$\tau_h\tau_h$ selection, we use data from the
TAU\_MET trigger and we require $\,/\!\!\!\!E_{T}>25$ GeV to
match the 25 GeV $\,/\!\!\!\!E_{T}$ trigger threshold. We
could suppress more backgrounds by requiring more
significant $\,/\!\!\!\!E_{T}$. However, for the signal
processes, since there is at least one neutrino
on each side, there is a chance that the transverse
momenta of the neutrinos cancel each other, and
hence raising the $\,/\!\!\!\!E_{T}$ thresholds can reduce the signal
efficiency. We found that these $\,/\!\!\!\!E_{T}$ thresholds are
at the optimized points.
The $\Delta\phi<30^{\circ}$ cut requires that the
significant $\,/\!\!\!\!E_{T}$ follow the $e$ ($\mu$)
for the $\tau_e\tau_h$ ($\tau_{\mu}\tau_h$)
channels and follow the lower-$p_T$ tau object
for the $\tau_h\tau_h$ channel. The $\,/\!\!\!\!E_{T}$
measured is the vector sum of the neutrinos from
the decays of the two taus. For example, in the
$\tau_e\tau_h$ channel there is one neutrino
associated with the $\tau_h$ and two neutrinos
associated with the $\tau_e$, so the $\,/\!\!\!\!E_{T}$
tends to follow the electron direction. Thus this event
topology cut is able to keep most of the
signal and to strongly suppress the backgrounds,
especially the jet$\to\tau$ misidentified fake
backgrounds, which mostly have a random
$\Delta\phi$ topology.
\section{Marginal Efficiency Correction}
\label{sec:event_trigEff}
We need to include in our estimates of the signal
and background rates the effect of the triggers.
We are concerned, however, only with the effect
of the triggers on those events passing the offline
cuts: the marginal efficiency.
The TAU\_MET trigger for the $\tau_h\tau_h$
analysis triggers directly on the tau object; thus
there is no marginal trigger efficiency from
the TAU side. But there is a marginal trigger
efficiency from the MET side which is based on a
1 GeV tower threshold for a fast calculation
at L1 while the offline $\,/\!\!\!\!E_{T}$ is based on a
0.1 GeV tower threshold. We use the JET20 data
sample and mimic the $\tau_h\tau_h$ event
topology in the calorimeter by requiring one
central jet with $E_T>25$ GeV and at least
one other central jet with $E_T>10$ GeV.
The L1 $\,/\!\!\!\!E_{T}>$ 25 GeV trigger efficiency
vs. offline $\,/\!\!\!\!E_{T}$ for di-tau events is shown
in Fig.~\ref{fig:event_trigEff}.
The marginal trigger efficiency of the TAU\_MET
trigger for the $\tau_h\tau_h$ analysis is a slow
turn-on due to the large trigger tower threshold.
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{6.3/event_trigEff.eps}}
\caption[L1 $\,/\!\!\!\!E_{T}>$ 25 GeV trigger efficiency vs. offline $\,/\!\!\!\!E_{T}$ for di-tau events]
{L1 $\,/\!\!\!\!E_{T}>$ 25 GeV trigger efficiency vs. offline $\,/\!\!\!\!E_{T}$ for di-tau events.}
\label{fig:event_trigEff}
\end{center}
\end{figure}
The marginal efficiencies of the TAU\_ELE and
TAU\_CMU (TAU\_CMX) triggers for the $\tau_e\tau_h$
and $\tau_{\mu}\tau_h$ analyses are all at plateau,
\begin{eqnarray}
\epsilon\mbox{(TAU\_ELE)} & = & 0.92\pm0.03 \\
\epsilon\mbox{(TAU\_CMU)} & = & 0.85\pm0.03 \\
\epsilon\mbox{(TAU\_CMX)} & = & 0.92\pm0.03
\end{eqnarray}
The trigger efficiencies of the electron part,
the muon part and the isolated track part
are calculated by using conversion electrons
from $\gamma\to ee$, muons from $\Upsilon/Z\to\mu\mu$,
and tracks from jet samples, respectively.
The details can be found in Ref.~\cite{CDFnote:6257}.
The biggest uncertainty is from the track provided
by the XFT trigger, which uses the four axial
$r-\phi$ superlayers (no stereo $r-z$ superlayers)
of the COT detector with at least 10 hits (out of
total 12 hits) in each axial superlayer. In the
event reconstruction, we require at least 3 axial
superlayers with at least 7 hits in each axial
superlayer, and the same configuration for the
stereo superlayers. The marginal XFT track
finding trigger efficiency is found to be a
function of $p_T$, $\eta$, the number of prongs,
and the run range. The overall
uncertainty is about 3\%.
\chapter{Low Mass Control Region}
\label{cha:control}
The low-mass region with $m_{vis}<120$ GeV/$c^2$ is
used as the control region to test the event cuts
and background determination.
If we find that the observed and predicted event
rates agree in the control region, we can proceed
to unblind the signal region.
The main source of events in the control region
is from $Z/\gamma^*\to\tau\tau$.
The other backgrounds include $Z/\gamma^*\to ee$,
$Z/\gamma^*\to\mu\mu$ and jet$\to\tau$ misidentified fake
background. Top background $t\bar{t}$ and
di-boson backgrounds such as $WW$ and $WZ$ are
negligible because their cross sections are two
orders of magnitude smaller than the Drell-Yan
backgrounds and their event topology rarely
satisfies the requirement that a significant
$\,/\!\!\!\!E_{T}$ follow the lepton direction. The
jet$\to\tau$ misidentified fake background is not
negligible because the dijet production cross section
is large.
For the jet$\to\tau$ misidentified fake background,
rather than trying to model all the processes that
could produce fake events, we estimate the
contribution of these events from real data which
includes any process contributing to the fake
background.
\section{Drell-Yan Cross Section}
\label{sec:control_sigmaDY}
The cross section times branching ratio of the
Drell-Yan processes in the mass window $66<m<116$
GeV/$c^2$ at $\sqrt{s}=1.96$ TeV is about 250 pb~\cite{Acosta:2004uq}.
Fig.~\ref{fig:control_sigmaDY} shows the mass
spectrum and event counts in different
mass regions.
The $Z/\gamma^*\to\tau\tau$
sample has 377143 events in the mass window
$66<m<116$~GeV/$c^2$ which corresponds to
a 250~pb production cross section.
The number of events in a mass
window is proportional to the cross
section in that mass window. For example,
the number of events 492000 in the mass window
$m>30$~GeV/$c^2$ gives a cross section
$250\times492000/377143\approx326$~pb.
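In general, for any mass window,
\begin{equation}
\sigma\cdot B\big|_{window} = 250\mbox{ pb} \times
\frac{N^{MC}_{window}}{N^{MC}_{66-116}}.
\end{equation}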
By the same algebra, we get the cross
sections in different mass windows:
\begin{eqnarray}
\sigma\cdot B(Z/\gamma^*\to l^+l^-)_{66-116} & \approx & 250 \mbox{ pb} \\
\sigma\cdot B(Z/\gamma^*\to l^+l^-)_{>30\ \ \ \,} & \approx & 326 \mbox{ pb} \\
\sigma\cdot B(Z/\gamma^*\to l^+l^-)_{30-100} & \approx & 315 \mbox{ pb} \\
\sigma\cdot B(Z/\gamma^*\to l^+l^-)_{>100\ \ \,} & \approx & \ \, 11 \mbox{ pb}
\end{eqnarray}
\begin{figure}
\begin{center}
\parbox{5.7in}{\epsfxsize=\hsize\epsffile{7.1/control_sigmaDY.eps}}
\caption[Drell-Yan mass spectra in different mass regions]
{Drell-Yan mass spectra in different mass regions.}
\label{fig:control_sigmaDY}
\end{center}
\end{figure}
\section{Drell-Yan Background}
\label{sec:control_DY}
The Drell-Yan backgrounds can be estimated from
MC simulation with three pieces:
\begin{equation}
\mbox{Expected MC background}
= \mbox{luminosity}
\times (\sigma\cdot B)
\times \mbox{acceptance}
\end{equation}
We just discussed the production cross section, and
we have discussed the luminosity
for each trigger path in Section~\ref{sec:event_GTU}.
Now we discuss the acceptance and the estimate of
the Drell-Yan backgrounds.
Table~\ref{tab:control_DY} shows the Drell-Yan
background acceptances, the application of
the trigger efficiencies, the application of
the lepton data/MC scale factors, and the
normalization to the integrated luminosities
of the data samples from the triggers.
\begin{table}
\begin{center}
\begin{tabular}{|l|r|r|r|} \hline
source & $Z/\gamma^*\to\tau\tau$
& $Z/\gamma^*\to ee$
& $Z/\gamma^*\to\mu\mu$ \\
mass window & $m>30$ & $m>30$ & $m>30$ \\
$\sigma\cdot B$ (pb) & 326 & 326 & 326 \\
event & 492000 & 398665 & 405291 \\ \hline
$\tau_e\tau_h$ (TAU\_ELE) & & & \\
$\tau(25)+e(10)$ & 1528 & 272 & 1 \\
$\,/\!\!\!\!E_{T}>15$ & 514 & 29 & 1 \\
$\Delta\phi(e-\,/\!\!\!\!E_{T})<30^{\circ}$ & 415 & 2 & 0 \\
$m_{vis}<120$ & 405 & 1 & 0 \\
trigger efficiency & 373.07 & 0.92 & 0.00 \\
lepton scale factors & 351.03 & 0.87 & 0.00 \\
normalized (195 pb$^{-1}$) & 45.36 & 0.14 & 0.00 \\ \hline
$\tau_{\mu}\tau_h$ (TAU\_CMU) & & & \\
$\tau(25)+\mbox{CMUP }\mu(10)$ & 836 & 0 & 415 \\
cosmic veto & 836 & 0 & 415 \\
$\,/\!\!\!\!E_{T}>15$ & 294 & 0 & 351 \\
$\Delta\phi(\mu-\,/\!\!\!\!E_{T})<30^{\circ}$ & 253 & 0 & 7 \\
$m_{vis}<120$ & 248 & 0 & 4 \\
trigger efficiency & 226.06 & 0.00 & 3.65 \\
lepton scale factors & 191.69 & 0.00 & 3.09 \\
normalized (195 pb$^{-1}$) & 24.77 & 0.00 & 0.48 \\ \hline
$\tau_{\mu}\tau_h$ (TAU\_CMX) & & & \\
$\tau(25)+\mbox{CMX }\mu(10)$ & 425 & 0 & 219 \\
cosmic veto & 425 & 0 & 219 \\
$\,/\!\!\!\!E_{T}>15$ & 150 & 0 & 181 \\
$\Delta\phi(\mu-\,/\!\!\!\!E_{T})<30^{\circ}$ & 134 & 0 & 1 \\
$m_{vis}<120$ & 130 & 0 & 0 \\
trigger efficiency & 118.50 & 0.00 & 0.00 \\
lepton scale factors & 114.84 & 0.00 & 0.00 \\
normalized (179 pb$^{-1}$) & 13.62 & 0.00 & 0.00 \\ \hline
$\tau_h\tau_h$ (TAU\_MET) & & & \\
$\tau_1(25)+\tau_2(10)$ & 4264 & 1 & 9 \\
$\,/\!\!\!\!E_{T}>25$ & 295 & 0 & 0 \\
$\Delta\phi(\tau_2-\,/\!\!\!\!E_{T})<30^{\circ}$ & 240 & 0 & 0 \\
$\tau_2$ num. track == 1 & 185 & 0 & 0 \\
$m_{vis}<120$ & 169 & 0 & 0 \\
trigger efficiency & 93.39 & 0.00 & 0.00 \\
lepton scale factors & 87.87 & 0.00 & 0.00 \\
normalized (72 pb$^{-1}$) & 4.19 & 0.00 & 0.00 \\ \hline
\end{tabular}
\caption[Drell-Yan background estimates in the control region]
{Drell-Yan background estimates for each channel in
the low mass control region.}
\label{tab:control_DY}
\end{center}
\end{table}
\section{Fake Background}
\label{sec:control_fake}
In a ``fake'' background event a jet is misidentified
as a tau. This background is not negligible
because the dijet production cross section is large.
The relative jet$\to\tau$ misidentification rate and
the relative tau identification efficiency corresponding
to the chosen denominator are applied to the denominator
tau objects to compute their weights for being a jet.
We sum up the weights of all the events to get
jet$\to\tau$ misidentified fake background
estimate in the sample, as described in
Section~\ref{subsec:TauId_jetBg}.
There is also a probability that, for example,
in the $\tau_e\tau_h$ channel, a jet is misidentified
as an electron. But the jet$\to e$ misidentification
rate is an order of magnitude smaller than the jet$\to\tau$
misidentification rate. Electron identification
requires at most two calorimeter towers with EM
energy fraction greater than 0.95 and other cuts.
Tau identification requires at most six calorimeter
towers with EM energy fraction less than 0.8
corresponding to $\xi$ greater than 0.2 and other
cuts. Naively assuming a flat distribution of the
number of jet towers between 0 and 6, and a flat
distribution of the jet EM energy fraction between
0.0 and 1.0, we have
\begin{equation}
\frac{\mbox{jet}\to\tau}
{\mbox{jet}\to e}
\approx \frac{(6-0)\times(0.8-0.0)}
{(2-0)\times(1.0-0.95)}
= 48
\end{equation}
The electron side is much cleaner than the tau side.
It is a good approximation to estimate fakes
from the tau side. The situation is the same for
the $\tau_{\mu}\tau_h$ channel.
There is a subtlety in the fake estimate for
the $\tau_h\tau_h$ channel.
In the data sample from the TAU\_MET trigger,
we order the tau objects in each event by
their $p_T$. To illustrate the subtlety, here we
temporarily call the leading tau object with
the highest $p_T$ $\tau_1$ in the case that it is
a true tau, or jet$_1$ in the case that it is a true
jet, and the second tau object with a lower
$p_T$ $\tau_2$ or jet$_2$.
The trigger only requires one isolated tau
object. We estimate the fake background from
the second tau object side, which is not
necessarily isolated. This covers the two
cases (a) and (b) of the three possible
fake background sources:
(a)~$\tau_1$~+~jet$_2$, (b)~jet$_1$~+~jet$_2$, and
(c)~jet$_1$~+~$\tau_2$. Jet$_1$ has a lower
misidentification rate than jet$_2$ because of
its higher $p_T$, so we get $\mbox{c}<\mbox{a}$
and $\mbox{a}+\mbox{b}\approx\mbox{a}+\mbox{b}+\mbox{c}$.
The fake estimate from the second tau object
side is therefore a good approximation.
The procedure to estimate the jet fake background
in the various channels is
shown in Table~\ref{tab:control_fake_1}. We need to
define a specific denominator according to the data sample
from the trigger path available, and we need to find out
the normalization factors of the dynamically prescaled
trigger paths.
The denominator $D_{\xi}$, defined up
to the electron removal cut $\xi>0.2$, and the denominator
$D_{trkIso10Deg}$, defined up to the 10$^{\circ}$ track isolation cut, are
explained in Table~\ref{tab:TauId_cuts} in
Section~\ref{subsec:TauId_cuts}. Note that the relative
jet$\to\tau$ misidentification rate and the relative tau
identification efficiency for different denominator
samples are different.
The available dynamically prescaled triggers
are discussed in Section~\ref{sec:event_trigger},
and their integrated luminosities are discussed
in Section~\ref{sec:event_GTU}.
\begin{itemize}
\item The $\tau_e\tau_h$ channel has a dynamically
prescaled data sample from the CELE8
trigger path available. There is no trigger
cut on the tau objects, so it is ideal for the
fake background estimate. We apply the cuts
up to the electron removal cut $\xi>0.2$ listed
in Table~\ref{tab:TauId_cuts} on the tau objects
and use the denominator $D_{\xi}$ to estimate
the fakes. The integrated luminosity
of this trigger path is 46~pb$^{-1}$, thus
the normalization factor to the integrated
luminosity 195~pb$^{-1}$ of the data sample
from the TAU\_ELE trigger is 195/46 = 4.239.
\item The $\tau_{\mu}\tau_h$ with CMUP muon channel
has a dynamically prescaled data sample from
the CMUP8 trigger path available. There is
no trigger cut on the tau objects, and we use
the denominator $D_{\xi}$ to estimate the fakes.
The normalization factor is 195/38 = 5.132.
\item The $\tau_{\mu}\tau_h$ with CMX muon channel
has to use the TAU\_CMX trigger itself for
the fake background estimate. The tau objects
have already been cleaned up by the 10$^{\circ}$
track isolation cut in the trigger. We apply
the cuts up to the 10$^{\circ}$ track isolation
cut listed in Table~\ref{tab:TauId_cuts} on the
tau objects and use the denominator
$D_{trkIso10Deg}$ to estimate the fakes.
\item The $\tau_h\tau_h$ channel has to use the
TAU\_MET trigger itself. The leading tau
object is cleaned up by track isolation,
but the second tau object is not. We estimate
the fake background from the second tau object
side, and use the denominator~$D_{\xi}$.
\end{itemize}
For each event, we substitute the relative tau identification
efficiency and the relative jet$\to\tau$ misidentification
rate corresponding to the defined denominator into
Eq.~(\ref{eq:weight_jet_notpassing}) if the tau object does
not pass the full set of the tau identification cuts, or
into Eq.~(\ref{eq:weight_jet_passing}) if it does, to
calculate the weight to be a jet.
We sum up the weights of all the events
in the sample to estimate the jet background,
using Eq.~(\ref{eq:sum_weights_jet}).
We then apply the event kinematic cuts
and normalize the numbers to the luminosities
of the data samples of the tau trigger paths.
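As a numerical illustration using the entries of
Table~\ref{tab:control_fake_1}: for the $\tau_e\tau_h$
channel the sum of weights after the kinematic cuts is
0.903, so the normalized estimate is
$0.903\times4.239\approx3.83$ events; similarly, for the
CMUP $\tau_{\mu}\tau_h$ channel,
$0.403\times5.132\approx2.07$ events.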
\begin{table}
\begin{center}
\begin{tabular}{|l|r|r|r|r|r|r|r|r|} \hline
channel & \multicolumn{2}{c|}{$\tau_e\tau_h$}
& \multicolumn{2}{c|}{CMUP $\tau_{\mu}\tau_h$}
& \multicolumn{2}{c|}{CMX $\tau_{\mu}\tau_h$}
& \multicolumn{2}{c|}{$\tau_h\tau_h$} \\ \hline
trigger path & \multicolumn{2}{c|}{CELE8}
& \multicolumn{2}{c|}{CMUP8}
& \multicolumn{2}{c|}{TAU\_CMX}
& \multicolumn{2}{c|}{TAU\_MET} \\
denominator & \multicolumn{2}{c|}{$D_{\xi}$}
& \multicolumn{2}{c|}{$D_{\xi}$}
& \multicolumn{2}{c|}{$D_{trkIso10Deg}$}
& \multicolumn{2}{c|}{$D_{\xi}$} \\
norm. factor & \multicolumn{2}{c|}{4.239}
& \multicolumn{2}{c|}{5.132}
& \multicolumn{2}{c|}{1}
& \multicolumn{2}{c|}{1} \\ \hline
& $\sum\omega^{jet}$ & event
& $\sum\omega^{jet}$ & event
& $\sum\omega^{jet}$ & event
& $\sum\omega^{jet}$ & event \\ \cline{2-9}
$\sum\omega^{jet}$ or event & 92.1 & 2292
& 12.4 & 362
& 64.4 & 379
& 106.8 & 2778 \\
kinematic cuts & 0.903 & 56
& 0.403 & 12
& 1.649 & 30
& 3.163 & 43 \\ \hline
normalized & \multicolumn{2}{c|}{$3.83\pm0.51$}
& \multicolumn{2}{c|}{$2.07\pm0.60$}
& \multicolumn{2}{c|}{$1.65\pm0.30$}
& \multicolumn{2}{c|}{$3.16\pm0.48$} \\ \hline
\end{tabular}
\caption[Fake background estimates in the control region]
{Fake background estimates in the low mass control region.
Uncertainties are statistical.}
\label{tab:control_fake_1}
\end{center}
\end{table}
The event entries (integers), which correspond to the
sums of weights (real numbers), are
also shown in Table~\ref{tab:control_fake_1}. The
event entries are used to estimate the statistical
uncertainties.
There is a systematic uncertainty due to the uncertainty
in the jet$\to\tau$ misidentification rate. The
rate used is the average fake rate of the JET samples.
We use the individual fake rate of the JET20, JET50,
JET70, and JET100 samples to estimate this uncertainty,
shown in Table~\ref{tab:control_fake_2}. For example,
for the $\tau_e\tau_h$ channel, using the average fake
rate we get an estimate of 3.83; while using the
individual fake rates from the JET20, JET50, JET70,
and JET100 samples, we get estimates
4.43, 3.25, 3.03, and 2.94, respectively.
We take the biggest difference, i.e., $3.83-2.94=0.89$, as
the systematic uncertainty. The fractional systematic
uncertainty for this channel is $0.89/3.83\approx20\%$.
The fractional systematic uncertainties of the other channels
are also about 20\%.
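This prescription can be written compactly; the snippet
below is a schematic illustration, with the numbers taken
from the $\tau_e\tau_h$ column of
Table~\ref{tab:control_fake_2}:
\begin{verbatim}
# Systematic uncertainty from the spread of the estimates
# obtained with the individual JET-sample fake rates.
estimates = [4.43, 3.25, 3.03, 2.94]  # JET20/50/70/100
average = 3.83                        # average fake rate
syst = max(abs(average - e) for e in estimates)  # -> 0.89
\end{verbatim}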
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|} \hline
channel & $\tau_e\tau_h$
& CMUP $\tau_{\mu}\tau_h$
& CMX $\tau_{\mu}\tau_h$
& $\tau_h\tau_h$ \\ \hline
average & 3.83 & 2.07 & 1.65 & 3.16 \\ \hline
JET20 & 4.43 & 2.23 & 1.83 & 3.26 \\
JET50 & 3.25 & 1.61 & 1.35 & 2.90 \\
JET70 & 3.03 & 1.62 & 1.40 & 3.01 \\
JET100 & 2.94 & 1.77 & 1.32 & 3.20 \\ \hline
syst. err. & 0.89 & 0.46 & 0.33 & 0.26 \\ \hline
\end{tabular}
\caption[Systematic uncertainties on fake backgrounds in the control region]
{Systematic uncertainties of fake background estimates
in the low mass control region.}
\label{tab:control_fake_2}
\end{center}
\end{table}
Combining in quadrature the statistical
uncertainties in
Table~\ref{tab:control_fake_1} and the
systematic uncertainties in
Table~\ref{tab:control_fake_2},
we get
\begin{eqnarray}
\tau_e\tau_h \mbox{ fake} & = & 3.83\pm1.03 \\
\mbox{CMUP } \tau_{\mu}\tau_h \mbox{ fake} & = & 2.07\pm0.76 \\
\mbox{CMX } \tau_{\mu}\tau_h \mbox{ fake} & = & 1.65\pm0.45 \\
\tau_h\tau_h \mbox{ fake} & = & 3.16\pm0.55
\end{eqnarray}
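For example, the $\tau_e\tau_h$ combination is
$\sqrt{0.51^2+0.89^2}\approx1.03$.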
\section{Cross Check Fake Background}
\label{sec:control_crossCheck}
We perform a cross check on the fake background
estimate as follows: relax the tau isolation and
the lepton isolation, and apply all of the other
cuts. The tau isolation and the lepton isolation
are uncorrelated, thus we can extrapolate from the
fake regions into the signal region. For
example, for the $\tau_e\tau_h$ channel, the signal
region A and the background regions B, C, and D
are defined as in
Fig.~\ref{fig:control_crossCheck},
and the extrapolated fake background in region A is
$\mbox{B}\times\mbox{D}/\mbox{C}$.
\begin{figure}
\begin{center}
\parbox{4.4in}{\epsfxsize=\hsize\epsffile{7.4/control_crossCheck.eps}}
\caption[Cross check fake background estimate]
{Using the uncorrelated tau isolation
and electron isolation to estimate fake background
for $\tau_e\tau_h$ channel.}
\label{fig:control_crossCheck}
\end{center}
\end{figure}
Unfortunately, we can only cross check the fake
background for the $\tau_e\tau_h$ channel using the
data sample from the CELE8 trigger path,
and possibly for the $\tau_{\mu}\tau_h$ channel with a CMUP
muon using the data sample from the CMUP8
trigger path. Neither sample has isolation in
the trigger. There is no such sample for
the $\tau_{\mu}\tau_h$ channel with a CMX muon. There
is a sample without isolation for the $\tau_h\tau_h$
channel, but its prescale of 100 is too large
for this exercise.
Due to the limited statistics in regions B, C, and D,
this cross check can only be done for the
$\tau_e\tau_h$ channel in the low mass control
region. The numbers of events in regions B, C, and D are 12,
142, and 13, respectively. When we extrapolate to
region A, we find that
\begin{equation}
\mbox{A}
= \mbox{B}\times\mbox{D}/\mbox{C}
= 12\times13/142
= 1.099
\end{equation}
The normalization factor is 4.239,
thus we get the extrapolated
$\tau_e\tau_h$ fake estimate
= $1.099\times4.239$
= 4.66.
This is in good agreement with $3.83\pm1.03$
obtained by summing up the weights of the tau objects
being jets. This gives us confidence in the
method of the jet$\to\tau$ misidentified fake
background estimate.
\section{Uncertainties in Control Region}
\label{sec:control_err}
The statistical uncertainty and the systematic uncertainty
of the Drell-Yan background estimate include
\begin{itemize}
\item statistical uncertainty,
\item $\sigma\cdot B$ uncertainty, 2\%, aside from luminosity
uncertainty (see Ref.~\cite{Acosta:2004uq}),
\item trigger efficiencies (see Section~\ref{sec:event_trigEff}),
\item lepton scale factors (see Section~\ref{subsec:TauSF_sf} for $\tau$ scale factor,
Section~\ref{subsec:EleSF} for $e$ scale factor,
and Section~\ref{subsec:MuSF} for $\mu$ scale factors),
\item $\,/\!\!\!\!E_{T}$ uncertainty, 6\% (see Section~\ref{sec:MET}), and
\item luminosity, 6\% (see Ref.~\cite{Klimenko:2003if}).
\end{itemize}
The statistical uncertainty and systematic uncertainty
of the jet$\to\tau$ misidentified fake background estimate
are discussed in Section~\ref{sec:control_fake}.
We combine the $\tau_{\mu}\tau_h$ CMUP muon channel
with a luminosity 195 pb$^{-1}$ and the
$\tau_{\mu}\tau_h$ CMX muon channel with a luminosity
179 pb$^{-1}$ into one channel, simply called the
$\tau_{\mu}\tau_h$ channel.
The numbers of observed events in the
$\tau_e\tau_h$,
$\tau_{\mu}\tau_h$, and
$\tau_h\tau_h$ channels are
46,
36, and
8, respectively.
Table~\ref{tab:control_err}
shows the summary of the control sample in the low mass
region for 195 pb$^{-1}$ (72 pb$^{-1}$ for
the $\tau_h\tau_h$ channel).
The total background estimate is $99.27\pm12.55$,
dominated by the source from $Z/\gamma^*\to\tau\tau$
as expected.
The observed number of events, 90, in the control
region is in good agreement with this prediction.
Figs.~\ref{fig:control_ZTauTau_1}$-$\ref{fig:control_ZTauTau_3}
show the distributions of each channel in the low mass
control region. The observed distributions
in the data are in good agreement with the predicted
distributions.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
Source & $\tau_e\tau_h$ & $\tau_{\mu}\tau_h$ & $\tau_h\tau_h$ & Total \\ \hline
$Z/\gamma^*\to\tau\tau$ & $45.36\pm6.84$ & $38.39\pm5.72$ & $4.19\pm0.77$ & $87.94\pm12.38$ \\
$Z/\gamma^*\to ee$ & $0.14\pm0.14$ & 0 & 0 & $0.14\pm0.14$ \\
$Z/\gamma^*\to\mu\mu$ & 0 & $0.48\pm0.25$ & 0 & $0.48\pm0.25$ \\
Jet$\to\tau$ & $3.83\pm1.03$ & $3.72\pm0.88$ & $3.16\pm0.55$ & $10.71\pm1.46$ \\ \hline
Expected & $49.32\pm6.94$ & $42.59\pm5.85$ & $7.35\pm0.95$ & $99.27\pm12.55$ \\ \hline
Observed & 46 & 36 & 8 & 90 \\ \hline
\end{tabular}
\caption[Expected and observed events in the control region]
{Number of expected events for each channel
and each source, compared with the number
observed, in the control region $m_{vis}<120$ GeV/$c^2$.}
\label{tab:control_err}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\parbox{5.4in}{\epsfxsize=\hsize\epsffile{7.6/control_ZTaueTauh_thesis.eps}}
\caption[Distributions of the $\tau_e\tau_h$ channel in the control region]
{Distributions of the $\tau_e\tau_h$ channel in the control region
for data (points) and predicted backgrounds (histograms).}
\label{fig:control_ZTauTau_1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\parbox{5.4in}{\epsfxsize=\hsize\epsffile{7.6/control_ZTaumTauh_thesis.eps}}
\caption[Distributions of the $\tau_{\mu}\tau_h$ channel in the control region]
{Distributions of the $\tau_{\mu}\tau_h$ channel in the control region
for data (points) and predicted backgrounds (histograms).}
\label{fig:control_ZTauTau_2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\parbox{5.4in}{\epsfxsize=\hsize\epsffile{7.6/control_ZTauhTauh_thesis.eps}}
\caption[Distributions of the $\tau_h\tau_h$ channel in the control region]
{Distributions of the $\tau_h\tau_h$ channel in the control region
for data (points) and predicted backgrounds (histograms).}
\label{fig:control_ZTauTau_3}
\end{center}
\end{figure}
\chapter{High Mass Signal Region}
\label{cha:signal}
The high mass region with $m_{vis}>120$ GeV/$c^2$
is the signal region.
First we calculate signal acceptance,
then we estimate the backgrounds. The main
backgrounds are $Z/\gamma^*\to\tau\tau$,
$Z/\gamma^*\to ee$, $Z/\gamma^*\to\mu\mu$ which
can be estimated from MC simulation, and
the jet$\to\tau$ misidentified fake background
which can be estimated from data, as in the
control region.
\section{Signal Acceptance}
\label{sec:signal_acc}
Table~\ref{tab:signal_ZprimeAcc}
shows the procedure to measure the signal acceptances
in each channel for the new vector particle decaying to
two taus, using $Z^\prime\to\tau\tau$ events. For example,
for the $\tau_e\tau_h$ channel, we match the offline
tau object and electron object with the $\tau_h$ and $\tau_e$
by requiring the separation angle be less than 0.2 radian,
apply the event kinematic cuts, multiply the number of accepted
events by the trigger efficiency and the lepton scale factors,
and calculate the overall acceptance. Since the mass of the
$Z^\prime$ is unknown, we calculate the signal acceptance as a
function of its mass. Only five
mass points (120, 180, 300, 450, 600) GeV/$c^2$ out of the total
twelve mass points (120, 140, 160, 180, 200, 250, 300,
350, 400, 450, 500, 600) GeV/$c^2$ are shown in
Table~\ref{tab:signal_ZprimeAcc}.
The signal acceptances of the $\tau_{\mu}\tau_h$ channel with
a CMUP muon and of the $\tau_{\mu}\tau_h$ with a CMX muon are
combined into one signal acceptance for the $\tau_{\mu}\tau_h$
channel. The total acceptance is a combination of the acceptance
of the $\tau_e\tau_h$, the $\tau_{\mu}\tau_h$, and
the $\tau_h\tau_h$ channels.
The signal acceptances are shown
in Fig.~\ref{fig:signal_ZprimeAcc}.
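As a numerical illustration of Table~\ref{tab:signal_ZprimeAcc}:
for the $\tau_e\tau_h$ channel at $m(Z^\prime)=300$ GeV/$c^2$,
1125 of the 100000 generated events survive all cuts;
multiplying by the trigger efficiency
($1036.3/1125\approx0.921$) and by the lepton scale factors
($975.1/1036.3\approx0.941$) gives the acceptance
$975.1/100000\approx0.975\%$.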
Table~\ref{tab:signal_AAcc}
shows the procedure to measure the signal acceptances
in each channel for the new scalar particle decaying to two
taus, using $A\to\tau\tau$. We set $\tan\beta$ = 20 as a
representative value of $\tan\beta$. Similarly,
since the mass of $A$ is unknown, we calculate the signal
acceptances as a function of mass, as shown in
Fig.~\ref{fig:signal_AAcc}.
\begin{table}
\begin{center}
\begin{tabular}{|l|r|r|r|r|r|} \hline
$Z^\prime\to\tau\tau$ & m=120 & m=180 & m=300 & m=450 & m=600 \\
event & 100000 & 100000 & 100000 & 100000 & 100000 \\ \hline
$\tau_e\tau_h$ (TAU\_ELE) & & & & & \\
$\tau_e\tau_h$ decay & 23527 & 23209 & 23246 & 23345 & 23250 \\
match $\tau(25)+e(10)$ & 761 & 1256 & 2135 & 2816 & 3044 \\
$\,/\!\!\!\!E_{T}>15$ & 380 & 797 & 1720 & 2416 & 2745 \\
$\Delta\phi(e-\,/\!\!\!\!E_{T})<30^{\circ}$ & 296 & 583 & 1231 & 1655 & 1844 \\
$m_{vis}>120$ & 14 & 355 & 1125 & 1610 & 1814 \\
trigger efficiency & 12.9 & 327.0 & 1036.3 & 1483.1 & 1671.0 \\
lepton scale factors & 12.1 & 307.7 & 975.1 & 1395.4 & 1572.2 \\
acceptance (\%) & 0.012 & 0.308 & 0.975 & 1.395 & 1.572 \\ \hline
$\tau_{\mu}\tau_h$ (TAU\_CMU) & & & & & \\
$\tau_{\mu}\tau_h$ decay & 22540 & 22500 & 22437 & 22358 & 22463 \\
match $\tau(25)+\mbox{CMUP }\mu(10)$ & 418 & 698 & 1121 & 1492 & 1775 \\
cosmic veto & 418 & 698 & 1121 & 1491 & 1775 \\
$\,/\!\!\!\!E_{T}>15$ & 198 & 460 & 894 & 1313 & 1615 \\
$\Delta\phi(\mu-\,/\!\!\!\!E_{T})<30^{\circ}$ & 169 & 348 & 677 & 919 & 1134 \\
$m_{vis}>120$ & 14 & 208 & 632 & 882 & 1114 \\
trigger efficiency & 12.8 & 189.6 & 576.1 & 804.0 & 1015.4 \\
lepton scale factors & 10.8 & 160.8 & 488.5 & 681.7 & 861.1 \\
acceptance (\%) & 0.011 & 0.161 & 0.489 & 0.682 & 0.861 \\ \hline
$\tau_{\mu}\tau_h$ (TAU\_CMX) & & & & & \\
$\tau_{\mu}\tau_h$ decay & 22540 & 22500 & 22437 & 22358 & 22463 \\
match $\tau(25)+\mbox{CMX }\mu(10)$ & 196 & 322 & 505 & 551 & 605 \\
cosmic veto & 196 & 322 & 505 & 551 & 605 \\
$\,/\!\!\!\!E_{T}>15$ & 99 & 200 & 408 & 473 & 535 \\
$\Delta\phi(\mu-\,/\!\!\!\!E_{T})<30^{\circ}$ & 88 & 140 & 301 & 345 & 379 \\
$m_{vis}>120$ & 2 & 83 & 279 & 336 & 372 \\
trigger efficiency & 1.8 & 75.7 & 254.3 & 306.3 & 339.1 \\
lepton scale factors & 1.8 & 73.3 & 246.4 & 296.8 & 328.6 \\
acceptance (\%) & 0.002 & 0.073 & 0.246 & 0.297 & 0.329 \\ \hline
$\tau_h\tau_h$ (TAU\_MET) & & & & & \\
$\tau_h\tau_h$ decay & 41677 & 41880 & 41934 & 41772 & 42027 \\
match $\tau_1(25)+\tau_2(10)$ & 1662 & 2449 & 3415 & 3932 & 4257 \\
$\,/\!\!\!\!E_{T}>25$ & 277 & 940 & 2037 & 2888 & 3383 \\
$\Delta\phi(\tau_2-\,/\!\!\!\!E_{T})<30^{\circ}$ & 242 & 832 & 1679 & 2244 & 2459 \\
$\tau_2$ num. track == 1 & 185 & 653 & 1335 & 1789 & 2043 \\
$m_{vis}>120$ & 31 & 526 & 1282 & 1768 & 2028 \\
trigger efficiency & 21.2 & 388.3 & 1023.7 & 1469.1 & 1716.9 \\
lepton scale factors & 19.9 & 365.3 & 963.2 & 1382.3 & 1615.4 \\
acceptance (\%) & 0.020 & 0.365 & 0.963 & 1.382 & 1.615 \\ \hline
channels combined & & & & & \\
acceptance (\%) & 0.045 & 0.907 & 2.673 & 3.756 & 4.377 \\ \hline
\end{tabular}
\caption[$Z^\prime\to\tau\tau$ signal acceptance]
{New vector particle $Z^\prime\to\tau\tau$ signal acceptance,
for each channel, as a function of the $Z^\prime$ mass.}
\label{tab:signal_ZprimeAcc}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|l|r|r|r|r|r|} \hline
$A\to\tau\tau$ & m=120 & m=180 & m=300 & m=450 & m=600 \\
event & 100000 & 100000 & 100000 & 100000 & 100000 \\ \hline
$\tau_e\tau_h$ (TAU\_ELE) & & & & & \\
$\tau_e\tau_h$ decay & 23427 & 23391 & 23364 & 23051 & 23242 \\
match $\tau(25)+e(10)$ & 1063 & 1806 & 2556 & 2991 & 3375 \\
$\,/\!\!\!\!E_{T}>15$ & 539 & 1237 & 2098 & 2665 & 3142 \\
$\Delta\phi(e-\,/\!\!\!\!E_{T})<30^{\circ}$ & 396 & 870 & 1445 & 1723 & 2047 \\
$m_{vis}>120$ & 23 & 547 & 1354 & 1684 & 2028 \\
trigger efficiency & 21.2 & 503.9 & 1247.3 & 1551.3 & 1868.1 \\
lepton scale factors & 19.9 & 474.1 & 1173.6 & 1459.6 & 1757.7 \\
acceptance (\%) & 0.020 & 0.474 & 1.174 & 1.460 & 1.758 \\ \hline
$\tau_{\mu}\tau_h$ (TAU\_CMU) & & & & & \\
$\tau_{\mu}\tau_h$ decay & 22649 & 22759 & 22344 & 22472 & 22398 \\
match $\tau(25)+\mbox{CMUP }\mu(10)$ & 650 & 1001 & 1454 & 1832 & 2076 \\
cosmic veto & 650 & 1000 & 1454 & 1832 & 2076 \\
$\,/\!\!\!\!E_{T}>15$ & 353 & 671 & 1198 & 1634 & 1923 \\
$\Delta\phi(\mu-\,/\!\!\!\!E_{T})<30^{\circ}$ & 286 & 492 & 855 & 1088 & 1272 \\
$m_{vis}>120$ & 17 & 329 & 790 & 1063 & 1265 \\
trigger efficiency & 15.5 & 299.9 & 720.1 & 969.0 & 1153.1 \\
lepton scale factors & 13.1 & 254.3 & 610.6 & 821.6 & 977.8 \\
acceptance (\%) & 0.013 & 0.254 & 0.611 & 0.822 & 0.978 \\ \hline
$\tau_{\mu}\tau_h$ (TAU\_CMX) & & & & & \\
$\tau_{\mu}\tau_h$ decay & 22649 & 22759 & 22344 & 22472 & 22398 \\
match $\tau(25)+\mbox{CMX }\mu(10)$ & 239 & 407 & 552 & 601 & 612 \\
cosmic veto & 239 & 406 & 552 & 601 & 612 \\
$\,/\!\!\!\!E_{T}>15$ & 120 & 297 & 449 & 522 & 553 \\
$\Delta\phi(\mu-\,/\!\!\!\!E_{T})<30^{\circ}$ & 88 & 214 & 291 & 363 & 370 \\
$m_{vis}>120$ & 6 & 138 & 266 & 355 & 365 \\
trigger efficiency & 5.5 & 125.8 & 242.5 & 323.6 & 332.7 \\
lepton scale factors & 5.3 & 121.9 & 235.0 & 313.6 & 322.4 \\
acceptance (\%) & 0.005 & 0.122 & 0.235 & 0.314 & 0.322 \\ \hline
$\tau_h\tau_h$ (TAU\_MET) & & & & & \\
$\tau_h\tau_h$ decay & 41813 & 41837 & 42008 & 42104 & 41891 \\
match $\tau_1(25)+\tau_2(10)$ & 2325 & 3117 & 3951 & 4333 & 4348 \\
$\,/\!\!\!\!E_{T}>25$ & 495 & 1322 & 2534 & 3316 & 3653 \\
$\Delta\phi(\tau_2-\,/\!\!\!\!E_{T})<30^{\circ}$ & 400 & 1072 & 1969 & 2467 & 2579 \\
$\tau_2$ num. track == 1 & 293 & 821 & 1531 & 2005 & 2106 \\
$m_{vis}>120$ & 46 & 630 & 1483 & 1985 & 2101 \\
trigger efficiency & 30.8 & 472.8 & 1202.9 & 1672.1 & 1789.5 \\
lepton scale factors & 29.0 & 444.9 & 1131.8 & 1573.3 & 1683.8 \\
acceptance (\%) & 0.029 & 0.445 & 1.132 & 1.573 & 1.684 \\ \hline
channels combined & & & & & \\
acceptance (\%) & 0.067 & 1.295 & 3.151 & 4.168 & 4.742 \\ \hline
\end{tabular}
\caption[$A\to\tau\tau$ signal acceptance]
{New scalar particle $A\to\tau\tau$ signal acceptance,
for each channel, as a function of the $A$ mass.}
\label{tab:signal_AAcc}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\parbox{5.1in}{\epsfxsize=\hsize\epsffile{8.1/zprime_acceptance.eps}}
\caption[Signal acceptance of $Z^\prime\to\tau\tau$]
{Signal acceptance of a new vector particle
$Z^\prime\to\tau\tau$ in each channel,
as a function of the $Z^\prime$ mass.}
\label{fig:signal_ZprimeAcc}
\vspace{0.5in}
\parbox{5.1in}{\epsfxsize=\hsize\epsffile{8.1/a_acceptance.eps}}
\caption[Signal acceptance of $A\to\tau\tau$]
{Signal acceptance of a new scalar particle
$A\to\tau\tau$ in each channel,
as a function of the $A$ mass.}
\label{fig:signal_AAcc}
\end{center}
\end{figure}
\section{Drell-Yan Background}
\label{sec:signal_DY}
The largest portion of the production cross section
for the Drell-Yan backgrounds is at the $Z$ boson
resonance peak, about 91 GeV/$c^2$. However the
events in the high mass signal region are mostly
from the high mass Drell-Yan tail. To model
the high mass tail better, we need more statistics
in MC simulation at that region. To achieve this,
we break the generation-level mass range into two
mutually exclusive regions, $30<m<100$ GeV/$c^2$
and $m>100$ GeV/$c^2$, and simulate them separately.
The production cross sections in these two regions
are about 315 pb and 11 pb, respectively (see
Section~\ref{sec:control_sigmaDY}).
Therefore we have a low-mass sample and a high-mass
sample for each $Z/\gamma^*\to l^+l^-$ source.
Table~\ref{tab:signal_DY}
shows the procedure to estimate Drell-Yan backgrounds.
We apply the event kinematic cuts on the MC samples,
multiply the number of surviving events by the trigger
efficiencies and the lepton scale factors, normalize
to the integrated luminosity 195 pb$^{-1}$ (179 pb$^{-1}$
for the TAU\_CMX trigger, 72 pb$^{-1}$ for the TAU\_MET
trigger), and combine the estimate for the low-mass
Drell-Yan sample and the estimate for the high-mass
Drell-Yan sample.
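As a numerical illustration of Table~\ref{tab:signal_DY}:
for the high-mass $Z/\gamma^*\to\tau\tau$ sample in the
$\tau_e\tau_h$ channel, 48 events survive all cuts, which
becomes 41.603 events after the trigger efficiency and the
lepton scale factors; the normalized estimate is then
$41.603\times(11~\mbox{pb}\times 195~\mbox{pb}^{-1})/160000
\approx0.558$ events.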
\begin{table}
\begin{center}
\begin{tabular}{|l|r|r|r|r|r|r|} \hline
source & \multicolumn{2}{c|}{$Z/\gamma^{*}\to\tau\tau$}
& \multicolumn{2}{c|}{$Z/\gamma^{*}\to ee$}
& \multicolumn{2}{c|}{$Z/\gamma^{*}\to\mu\mu$} \\ \cline{2-7}
mass window &30$-$100 & $>$100 &30$-$100 & $>$100 &30$-$100 & $>$100 \\
$\sigma\cdot B$ (pb) & 315 & 11 & 315 & 11 & 315 & 11 \\
event & 475901 & 160000 & 385686 & 160000 & 392063 & 160000 \\ \hline
$\tau_e\tau_h$ (TAU\_ELE) & & & & & & \\
$\tau(25)+e(10)$ & 1405 & 1062 & 257 & 190 & 1 & 1 \\
$\,/\!\!\!\!E_{T}>15$ & 456 & 472 & 28 & 20 & 1 & 0 \\
$\Delta\phi(e-\,/\!\!\!\!E_{T})<30^{\circ}$ & 381 & 364 & 2 & 2 & 0 & 0 \\
$m_{vis}>120$ & 0 & 48 & 1 & 2 & 0 & 0 \\
trigger efficiency & 0.000 & 44.216 & 0.921 & 1.842 & 0.000 & 0.000 \\
lepton scale factors & 0.000 & 41.603 & 0.867 & 1.733 & 0.000 & 0.000 \\
normalized (195 pb$^{-1}$) & 0.000 & 0.558 & 0.138 & 0.023 & 0.000 & 0.000 \\ \cline{2-7}
combined & \multicolumn{2}{c|}{0.56}
& \multicolumn{2}{c|}{0.16}
& \multicolumn{2}{c|}{0.00} \\ \hline
$\tau_{\mu}\tau_h$ (TAU\_CMU) & & & & & & \\
$\tau(25)+\mbox{CMUP }\mu(10)$ & 783 & 554 & 0 & 0 & 408 & 139 \\
cosmic veto & 783 & 554 & 0 & 0 & 408 & 139 \\
$\,/\!\!\!\!E_{T}>15$ & 272 & 233 & 0 & 0 & 346 & 124 \\
$\Delta\phi(\mu-\,/\!\!\!\!E_{T})<30^{\circ}$ & 238 & 179 & 0 & 0 & 7 & 0 \\
$m_{vis}>120$ & 0 & 24 & 0 & 0 & 3 & 0 \\
trigger efficiency & 0.000 & 21.877 & 0.000 & 0.000 & 2.735 & 0.000 \\
lepton scale factors & 0.000 & 18.551 & 0.000 & 0.000 & 2.319 & 0.000 \\
normalized (195 pb$^{-1}$) & 0.000 & 0.249 & 0.000 & 0.000 & 0.363 & 0.000 \\ \cline{2-7}
combined & \multicolumn{2}{c|}{0.25}
& \multicolumn{2}{c|}{0.00}
& \multicolumn{2}{c|}{0.36} \\ \hline
$\tau_{\mu}\tau_h$ (TAU\_CMX) & & & & & & \\
$\tau(25)+\mbox{CMX }\mu(10)$ & 384 & 284 & 0 & 0 & 212 & 49 \\
cosmic veto & 384 & 284 & 0 & 0 & 212 & 49 \\
$\,/\!\!\!\!E_{T}>15$ & 127 & 129 & 0 & 0 & 174 & 41 \\
$\Delta\phi(\mu-\,/\!\!\!\!E_{T})<30^{\circ}$ & 114 & 107 & 0 & 0 & 1 & 1 \\
$m_{vis}>120$ & 0 & 23 & 0 & 0 & 1 & 1 \\
trigger efficiency & 0.000 & 20.965 & 0.000 & 0.000 & 0.912 & 0.912 \\
lepton scale factors & 0.000 & 20.318 & 0.000 & 0.000 & 0.883 & 0.883 \\
normalized (179 pb$^{-1}$) & 0.000 & 0.250 & 0.000 & 0.000 & 0.127 & 0.011 \\ \cline{2-7}
combined & \multicolumn{2}{c|}{0.25}
& \multicolumn{2}{c|}{0.00}
& \multicolumn{2}{c|}{0.14} \\ \hline
$\tau_h\tau_h$ (TAU\_MET) & & & & & & \\
$\tau_1(25)+\tau_2(10)$ & 4023 & 2524 & 1 & 3 & 8 & 3 \\
$\,/\!\!\!\!E_{T}>25$ & 249 & 428 & 0 & 0 & 0 & 2 \\
$\Delta\phi(\tau_2-\,/\!\!\!\!E_{T})<30^{\circ}$ & 202 & 361 & 0 & 0 & 0 & 0 \\
$\tau_2$ num. track == 1 & 158 & 269 & 0 & 0 & 0 & 0 \\
$m_{vis}>120$ & 2 & 84 & 0 & 0 & 0 & 0 \\
trigger efficiency & 1.547 & 63.373 & 0.000 & 0.000 & 0.000 & 0.000 \\
lepton scale factors & 1.455 & 59.627 & 0.000 & 0.000 & 0.000 & 0.000 \\
normalized (72 pb$^{-1}$) & 0.069 & 0.295 & 0.000 & 0.000 & 0.000 & 0.000 \\ \cline{2-7}
combined & \multicolumn{2}{c|}{0.36}
& \multicolumn{2}{c|}{0.00}
& \multicolumn{2}{c|}{0.00} \\ \hline
\end{tabular}
\caption[Drell-Yan background estimates in the signal region]
{Drell-Yan background estimates for each channel in
the high mass signal region.}
\label{tab:signal_DY}
\end{center}
\end{table}
\section{Fake Background}
\label{sec:signal_fake}
The procedure to estimate the jet$\to\tau$
fake background is similar to what we have done for
the low mass control region in Section~\ref{sec:control_fake}.
The trigger path, the luminosity normalization factor,
the denominator tau object definition, and the sums
of the weights of the tau objects being jets (before the
kinematic cuts) are exactly the same as those in the
low mass control region. The only difference
is the visible mass cut: $m_{vis}<120$ GeV/$c^2$ for the low mass
control region, while $m_{vis}>120$ GeV/$c^2$ for the
high mass signal region.
Now we repeat the same procedure, as shown in
Table~\ref{tab:signal_fake_1}.
The event entries (integers), which correspond to the sums
of weights (real numbers), are also shown.
The event entries are used to estimate the statistical
uncertainties.
\begin{table}
\begin{center}
\begin{tabular}{|l|r|r|r|r|r|r|r|r|} \hline
channel & \multicolumn{2}{c|}{$\tau_e\tau_h$}
& \multicolumn{2}{c|}{CMUP $\tau_{\mu}\tau_h$}
& \multicolumn{2}{c|}{CMX $\tau_{\mu}\tau_h$}
& \multicolumn{2}{c|}{$\tau_h\tau_h$} \\ \hline
trigger path & \multicolumn{2}{c|}{CELE8}
& \multicolumn{2}{c|}{CMUP8}
& \multicolumn{2}{c|}{TAU\_CMX}
& \multicolumn{2}{c|}{TAU\_MET} \\
denominator & \multicolumn{2}{c|}{$D_{\xi}$}
& \multicolumn{2}{c|}{$D_{\xi}$}
& \multicolumn{2}{c|}{$D_{trkIso10Deg}$}
& \multicolumn{2}{c|}{$D_{\xi}$} \\
norm. factor & \multicolumn{2}{c|}{4.239}
& \multicolumn{2}{c|}{5.132}
& \multicolumn{2}{c|}{1}
& \multicolumn{2}{c|}{1} \\ \hline
& $\sum\omega^{jet}$ & event
& $\sum\omega^{jet}$ & event
& $\sum\omega^{jet}$ & event
& $\sum\omega^{jet}$ & event \\ \cline{2-9}
$\sum\omega^{jet}$ or event & 92.1 & 2292
& 12.4 & 362
& 64.4 & 379
& 106.8 & 2778 \\
kinematic cuts & 0.068 & 13
& 0.006 & 1
& 0.152 & 4
& 0.282 & 12 \\ \hline
normalized & \multicolumn{2}{c|}{$0.29\pm0.08$}
& \multicolumn{2}{c|}{$0.03\pm0.03$}
& \multicolumn{2}{c|}{$0.15\pm0.08$}
& \multicolumn{2}{c|}{$0.28\pm0.08$} \\ \hline
\end{tabular}
\caption[Fake background estimates in the signal region]
{Fake background estimates in the signal region.
Uncertainties are statistical.}
\label{tab:signal_fake_1}
\end{center}
\end{table}
There is a systematic uncertainty due to the uncertainty
in the jet$\to\tau$ fake rate. The
rate used is the average fake rate of the JET samples.
We use the individual fake rate of the JET20, JET50,
JET70, and JET100 samples to estimate this uncertainty,
as shown in Table~\ref{tab:signal_fake_2}.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|} \hline
channel & $\tau_e\tau_h$
& CMUP $\tau_{\mu}\tau_h$
& CMX $\tau_{\mu}\tau_h$
& $\tau_h\tau_h$ \\ \hline
average & 0.29 & 0.03 & 0.15 & 0.28 \\ \hline
JET20 & 0.18 & 0.03 & 0.16 & 0.31 \\
JET50 & 0.23 & 0.04 & 0.15 & 0.23 \\
JET70 & 0.31 & 0.03 & 0.14 & 0.25 \\
JET100 & 0.28 & 0.03 & 0.13 & 0.22 \\ \hline
syst. err. & 0.11 & 0.01 & 0.02 & 0.06 \\ \hline
\end{tabular}
\caption[Systematic uncertainties on fake backgrounds in the signal region]
{Systematic uncertainties of fake background estimates
in the signal region.}
\label{tab:signal_fake_2}
\end{center}
\end{table}
Combining in quadrature the statistical
uncertainties in
Table~\ref{tab:signal_fake_1} and the
systematic uncertainties in
Table~\ref{tab:signal_fake_2},
we get
\begin{eqnarray}
\tau_e\tau_h \mbox{ fake} & = & 0.29\pm0.14 \\
\mbox{CMUP } \tau_{\mu}\tau_h \mbox{ fake} & = & 0.03\pm0.03 \\
\mbox{CMX } \tau_{\mu}\tau_h \mbox{ fake} & = & 0.15\pm0.08 \\
\tau_h\tau_h \mbox{ fake} & = & 0.28\pm0.10
\end{eqnarray}
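For example, the $\tau_e\tau_h$ combination is
$\sqrt{0.08^2+0.11^2}\approx0.14$.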
\section{Uncertainties in Signal Region}
\label{sec:signal_err}
We summarize all of the systematic uncertainties in
the high mass signal region in this section. Some of
these arise from statistical uncertainties on the
various backgrounds, due to limited Monte Carlo or
other statistics. Others come from separate
external studies as indicated. In this
section, we combine the $\tau_{\mu}\tau_h$ channel with a
CMUP muon and the $\tau_{\mu}\tau_h$ channel with a
CMX muon into one single $\tau_{\mu}\tau_h$
channel.
The systematic uncertainty in the Drell-Yan and
new particle signal rates due to the imperfect
knowledge of the parton density functions
(PDF's)~\cite{Lai:1999wy} is calculated by
comparing the acceptance change ratio for
various PDF's. CTEQ5L is used in PYTHIA.
We add in quadrature the differences from unity
of the acceptance ratios
MRST72/CTEQ5L,
MRST75/MRST72,
CTEQ6L1/CTEQ6L, and
CTEQ6M/CTEQ5L.
The MRST72 and MRST75 sets compare the effect of varying
$\alpha_s$ on the PDF. The CTEQ5L set is leading
order, and the CTEQ6M set is next-to-leading
order but at the same value of $\alpha_s$. Using
$Z^\prime\to\tau\tau$, this is shown in
Table~\ref{tab:signal_err_1}.
We take 8\% as a conservative number.
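For example, at $m=120$ GeV/$c^2$ the four ratios in
Table~\ref{tab:signal_err_1} deviate from unity by 0.047,
0.049, 0.006, and 0.035, so the combined PDF uncertainty is
$\sqrt{0.047^2+0.049^2+0.006^2+0.035^2}\approx7.7\%$.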
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|} \hline
$Z^\prime\to\tau\tau$ & $m=120$ & $m=180$ & $m=300$ & $m=400$ & $m=600$ \\ \hline
MRST72 / CTEQ5L & 1.047 & 1.029 & 1.021 & 1.006 & 1.002 \\
MRST75 / MRST72 & 0.951 & 0.980 & 0.983 & 0.995 & 0.993 \\
CTEQ6L1 / CTEQ6L & 1.006 & 1.006 & 1.003 & 0.999 & 1.002 \\
CTEQ6M / CTEQ5L & 1.035 & 1.023 & 1.021 & 1.008 & 1.004 \\ \hline
PDF uncertainty & 7.7\% & 4.2\% & 3.4\% & 1.1\% & 0.8\% \\ \hline
\end{tabular}
\caption[PDF uncertainty]
{PDF uncertainty.}
\label{tab:signal_err_1}
\end{center}
\end{table}
We are careful to identify the correlated and
the uncorrelated systematic uncertainties.
The correlated uncertainties include
the uncertainties of the PDF, the integrated
luminosity, the $e$, $\mu$, $\tau$ scale
factors, the $\,/\!\!\!\!E_{T}$, and the jet$\to\tau$ fake rate.
Table~\ref{tab:signal_err_2} lists the
uncertainties, their magnitude, and the affected
channels. (When uncertainties are correlated we
assume a 100\% correlation.)
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|} \hline
Uncertainty & Magnitude (\%) & Affected Channels \\ \hline \hline
PDF & 8 & all \\
integrated luminosity & 6 & all \\
$e$ scale factor & 4 & $\tau_e\tau_h$ \\
$\mu$ scale factor & 5.5 & $\tau_{\mu}\tau_h$ \\
$\tau$ scale factor & 10 & all \\
$\,/\!\!\!\!E_{T}$ & 6 & all \\
jet$\to\tau$ fake rate & 20 & all \\ \hline
\end{tabular}
\caption[Systematic uncertainties and the affected channels]
{Systematic uncertainties, in percent,
and the affected channels.}
\label{tab:signal_err_2}
\end{center}
\end{table}
The $Z^\prime\to\tau\tau$ and $A\to\tau\tau$ signal
acceptances and the systematic uncertainties are listed in
Tables~\ref{tab:signal_err_3}$-$\ref{tab:signal_err_4}.
The acceptance itself reflects the effects of trigger
efficiency and the lepton scale factors.
The uncertainties include the contributions from
\begin{itemize}
\item statistical uncertainty (MC statistics),
\item PDF uncertainty (this Section),
\item trigger efficiencies (see Section~\ref{sec:event_trigEff}),
\item lepton scale factors (see Section~\ref{subsec:TauSF_sf} for $\tau$ scale factor,
Section~\ref{subsec:EleSF} for $e$ scale factor,
and Section~\ref{subsec:MuSF} for $\mu$ scale factors), and
\item $\,/\!\!\!\!E_{T}$ uncertainty (see Section~\ref{sec:MET}).
\end{itemize}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
$m(Z^\prime)$
& $\tau_e\tau_h$ (\%) & $\tau_{\mu}\tau_h$ (\%) & $\tau_h\tau_h$ (\%) & combined (\%) \\ \hline \hline
120 & $0.012\pm0.004$ & $0.013\pm0.004$ & $0.020\pm0.005$ & $0.045\pm0.009$ \\
140 & $0.084\pm0.015$ & $0.088\pm0.015$ & $0.105\pm0.020$ & $0.278\pm0.043$ \\
160 & $0.213\pm0.035$ & $0.151\pm0.025$ & $0.206\pm0.038$ & $0.571\pm0.086$ \\
180 & $0.308\pm0.049$ & $0.234\pm0.037$ & $0.365\pm0.066$ & $0.907\pm0.136$ \\
200 & $0.453\pm0.070$ & $0.351\pm0.054$ & $0.476\pm0.085$ & $1.280\pm0.190$ \\
250 & $0.727\pm0.111$ & $0.548\pm0.083$ & $0.776\pm0.137$ & $2.052\pm0.303$ \\
300 & $0.975\pm0.148$ & $0.735\pm0.110$ & $0.963\pm0.170$ & $2.673\pm0.394$ \\
350 & $1.098\pm0.167$ & $0.826\pm0.124$ & $1.144\pm0.202$ & $3.068\pm0.452$ \\
400 & $1.239\pm0.188$ & $0.966\pm0.144$ & $1.308\pm0.230$ & $3.512\pm0.517$ \\
450 & $1.395\pm0.211$ & $0.979\pm0.146$ & $1.382\pm0.243$ & $3.756\pm0.553$ \\
500 & $1.537\pm0.232$ & $1.148\pm0.172$ & $1.431\pm0.252$ & $4.116\pm0.604$ \\
600 & $1.572\pm0.237$ & $1.190\pm0.178$ & $1.615\pm0.284$ & $4.377\pm0.644$ \\ \hline
\end{tabular}
\caption[Uncertainties of $Z^\prime\to\tau\tau$ signal acceptance]
{Uncertainties of $f\bar{f}\to Z^\prime\to\tau\tau$
signal acceptance (SM coupling).}
\label{tab:signal_err_3}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
$m(A)$
& $\tau_e\tau_h$ (\%) & $\tau_{\mu}\tau_h$ (\%) & $\tau_h\tau_h$ (\%) & combined (\%) \\ \hline \hline
120 & $0.020\pm0.005$ & $0.018\pm0.005$ & $0.029\pm0.007$ & $0.067\pm0.012$ \\
140 & $0.113\pm0.019$ & $0.082\pm0.015$ & $0.126\pm0.024$ & $0.321\pm0.050$ \\
160 & $0.284\pm0.045$ & $0.213\pm0.034$ & $0.324\pm0.058$ & $0.822\pm0.123$ \\
180 & $0.474\pm0.074$ & $0.376\pm0.058$ & $0.445\pm0.079$ & $1.295\pm0.191$ \\
200 & $0.603\pm0.093$ & $0.485\pm0.074$ & $0.660\pm0.117$ & $1.748\pm0.259$ \\
250 & $0.889\pm0.135$ & $0.703\pm0.106$ & $0.972\pm0.172$ & $2.564\pm0.379$ \\
300 & $1.174\pm0.178$ & $0.846\pm0.127$ & $1.132\pm0.199$ & $3.151\pm0.463$ \\
350 & $1.254\pm0.190$ & $1.004\pm0.150$ & $1.356\pm0.239$ & $3.614\pm0.532$ \\
400 & $1.411\pm0.213$ & $1.101\pm0.165$ & $1.485\pm0.261$ & $3.996\pm0.588$ \\
450 & $1.460\pm0.220$ & $1.135\pm0.170$ & $1.573\pm0.277$ & $4.168\pm0.614$ \\
500 & $1.649\pm0.249$ & $1.177\pm0.176$ & $1.561\pm0.275$ & $4.386\pm0.644$ \\
600 & $1.758\pm0.265$ & $1.300\pm0.194$ & $1.684\pm0.296$ & $4.742\pm0.696$ \\ \hline
\end{tabular}
\caption[Uncertainties of $A\to\tau\tau$ signal acceptance]
{Uncertainties of $gg\to A\to\tau\tau$
signal acceptance ($\tan\beta = 20$).}
\label{tab:signal_err_4}
\end{center}
\end{table}
The systematic uncertainties on the Drell-Yan
backgrounds and the jet$\to\tau$ misidentified
fake backgrounds are listed in
Table~\ref{tab:signal_err_5}.
The systematic uncertainties on the Drell-Yan
backgrounds incorporate the effects of
\begin{itemize}
\item statistical uncertainty (MC statistics),
\item PDF uncertainty (this Section),
\item $\sigma\cdot B$ uncertainty, 2\%, aside
from luminosity uncertainty (see Ref.~\cite{Acosta:2004uq}),
\item trigger efficiencies (see Section~\ref{sec:event_trigEff}),
\item lepton scale factors (see Section~\ref{subsec:TauSF_sf} for $\tau$ scale factor,
Section~\ref{subsec:EleSF} for $e$ scale factor,
and Section~\ref{subsec:MuSF} for $\mu$ scale factors),
\item $\,/\!\!\!\!E_{T}$ uncertainty (see Section~\ref{sec:MET}), and
\item luminosity, 6\% (see Ref.~\cite{Klimenko:2003if}).
\end{itemize}
The systematic uncertainties on the jet$\to\tau$
misidentified fake background incorporate the
effects of
\begin{itemize}
\item statistical uncertainty (see Section~\ref{sec:signal_fake}), and
\item systematic uncertainty due to jet$\to\tau$
misidentification rate (see Section~\ref{sec:signal_fake}).
\end{itemize}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
Source & $\tau_e\tau_h$ & $\tau_{\mu}\tau_h$ & $\tau_h\tau_h$ & Total \\ \hline
$Z/\gamma^*\to\tau\tau$ & $0.56\pm0.11$ & $0.50\pm0.10$ & $0.36\pm0.08$ & $1.42\pm0.23$ \\
$Z/\gamma^*\to ee$ & $0.16\pm0.14$ & 0 & 0 & $0.16\pm0.14$ \\
$Z/\gamma^*\to\mu\mu$ & 0 & $0.50\pm0.26$ & 0 & $0.50\pm0.26$ \\
Jet$\to\tau$ & $0.29\pm0.14$ & $0.18\pm0.09$ & $0.28\pm0.10$ & $0.75\pm0.19$ \\ \hline
Expected & $1.01\pm0.24$ & $1.18\pm0.30$ & $0.64\pm0.13$ & $2.83\pm0.46$ \\ \hline
\end{tabular}
\caption[Uncertainties of backgrounds in the signal region]
{Uncertainties of backgrounds in signal region,
195 pb$^{-1}$ (72 pb$^{-1}$ for $\tau_h\tau_h$).}
\label{tab:signal_err_5}
\end{center}
\end{table}
\chapter{Results}
\label{cha:results}
\section{Observed Events}
\label{sec:box}
After unblinding the signal region, we observe four
events in the $\tau_e\tau_h$ channel, zero events in
the $\tau_{\mu}\tau_h$ channel, and zero events in
the $\tau_h\tau_h$ channel. The numbers of background
events estimated and observed are in
Table~\ref{tab:results_1}.
Fig.~\ref{fig:results_1} shows the $m_{vis}$
distribution.
Figs.~\ref{fig:results_2}$-$\ref{fig:results_5}
show the event displays of the four events
observed in the $\tau_e\tau_h$ channel.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
Source & $\tau_e\tau_h$ & $\tau_{\mu}\tau_h$ & $\tau_h\tau_h$ & Total \\ \hline
$Z/\gamma^*\to\tau\tau$ & $0.56\pm0.11$ & $0.50\pm0.10$ & $0.36\pm0.08$ & $1.42\pm0.23$ \\
$Z/\gamma^*\to ee$ & $0.16\pm0.14$ & 0 & 0 & $0.16\pm0.14$ \\
$Z/\gamma^*\to\mu\mu$ & 0 & $0.50\pm0.26$ & 0 & $0.50\pm0.26$ \\
Jet$\to\tau$ & $0.29\pm0.14$ & $0.18\pm0.09$ & $0.28\pm0.10$ & $0.75\pm0.19$ \\ \hline
Expected & $1.01\pm0.24$ & $1.18\pm0.30$ & $0.64\pm0.13$ & $2.83\pm0.46$ \\ \hline
Observed & 4 & 0 & 0 & 4 \\ \hline
\end{tabular}
\caption[Expected and observed events in the signal region]
{Number of expected events for each channel
and each source, and number of
observed events, in the signal region.}
\label{tab:results_1}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\parbox{5.7in}{\epsfxsize=\hsize\epsffile{9.1/m_tau_tau_thesis.eps}}
\caption[Distribution of visible mass in the signal and control regions]
{Distribution of visible mass ($m_{vis}$)
for data (points) and predicted backgrounds (histograms)
in the signal and control regions. The upper plot
is in linear scale. The lower plot is in log scale.}
\label{fig:results_1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\parbox{5.7in}{\epsfxsize=\hsize\epsffile{9.1/cot_1_r152669_e629080.eps}}
\end{center}
\begin{center}
\parbox{5.7in}{\epsfxsize=\hsize\epsffile{9.1/lego1_1_r152669_e629080.eps}}
\end{center}
\begin{center}
\caption[$\tau_e\tau_h$ candidate run=152669 event=629080 $m_{vis}$=148 GeV/$c^2$]
{Event display $\tau_e\tau_h$ candidate
run=152669
event=629080
$m_{vis}$=148 GeV/$c^2$.}
\label{fig:results_2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\parbox{5.7in}{\epsfxsize=\hsize\epsffile{9.1/cot_1_r153693_e815662.eps}}
\end{center}
\begin{center}
\parbox{5.7in}{\epsfxsize=\hsize\epsffile{9.1/lego1_1_r153693_e815662.eps}}
\end{center}
\begin{center}
\caption[$\tau_e\tau_h$ candidate run=153693 event=815662 $m_{vis}$=129 GeV/$c^2$]
{Event display $\tau_e\tau_h$ candidate
run=153693
event=815662
$m_{vis}$=129 GeV/$c^2$.}
\label{fig:results_3}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\parbox{5.7in}{\epsfxsize=\hsize\epsffile{9.1/cot_1_r160591_e207616.eps}}
\end{center}
\begin{center}
\parbox{5.7in}{\epsfxsize=\hsize\epsffile{9.1/lego1_1_r160591_e207616.eps}}
\end{center}
\begin{center}
\caption[$\tau_e\tau_h$ candidate run=160591 event=207616 $m_{vis}$=125 GeV/$c^2$]
{Event display $\tau_e\tau_h$ candidate
run=160591
event=207616
$m_{vis}$=125 GeV/$c^2$.}
\label{fig:results_4}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\parbox{5.7in}{\epsfxsize=\hsize\epsffile{9.1/cot_1_r162252_e612118.eps}}
\end{center}
\begin{center}
\parbox{5.7in}{\epsfxsize=\hsize\epsffile{9.1/lego1_1_r162252_e612118.eps}}
\end{center}
\begin{center}
\caption[$\tau_e\tau_h$ candidate run=162252 event=612118 $m_{vis}$=124 GeV/$c^2$]
{Event display $\tau_e\tau_h$ candidate
run=162252
event=612118
$m_{vis}$=124 GeV/$c^2$.}
\label{fig:results_5}
\end{center}
\end{figure}
\section{Experimental Limits}
\label{sec:limits}
Since we observe no excess, we proceed to calculate
the 95\% confidence level (CL) upper limit on the
cross section times branching ratio for new particle
production using a Bayesian procedure described in
Ref.~\cite{CDFnote:6428}.
We need to combine multiple search channels and
incorporate both uncorrelated and correlated
systematic uncertainties.
For each channel $i$,
the integrated luminosity,
the signal acceptance,
the expected background events, and
the observed events are denoted as
$L_i$,
$\epsilon_i$,
$b_i$, and
$n_i$,
respectively;
the uncorrelated uncertainties of
the signal acceptance and
the expected background events
are denoted as
$f_{\epsilon i}$ and
$f_{bi}$,
respectively.
The correlated uncertainties of
the integrated luminosity,
the signal acceptance, and
the expected background events
are denoted as
$g_L$,
$g_{\epsilon}$, and
$g_b$,
respectively.
(Note that the $f$ factors carry $i$ indices and
the $g$ factors do not.)
With a signal cross section $\sigma_{sig}$, the expected
number of events $\mu_i$ in each channel can be written as
\begin{equation}
\mu_i
= (1+g_L)L_i\sigma_{sig}(1+f_{\epsilon i})(1+g_{\epsilon})\epsilon_i
+(1+f_{bi})(1+g_b)b_i
\end{equation}
where the $f$ and $g$ factors enter in the form
$1+x$ and thus represent \emph{relative} systematic uncertainties.
We define a likelihood which is the product of the
Poisson probabilities of observing $n_i$ events
in each channel,
\begin{equation}
\mathcal{L}(\bar{n}|\sigma_{sig},\bar{b},\bar{\epsilon}) =
\prod_i\mathcal{L}(n_i|\mu_i) =
\prod_i\frac{\mu_i^{n_i}e^{-\mu_i}}{n_i!}
\end{equation}
where the overbars indicate that the variables
are arrays carrying an $i$ index.
We use a Monte Carlo method to
convolute the effects of the systematic
uncertainties using Gaussian prior
probability density functions
for the $f$ and $g$ factors.
For each evaluation point of $\sigma_{sig}$,
we sample the $f$ and $g$ factors
within their Gaussian widths around a central value of
zero, calculate $\mu_i$ and
$\mathcal{L}(n_i|\mu_i)$ for each channel,
and average the resulting likelihood
$\mathcal{L}(\bar{n}|\sigma_{sig},\bar{b},\bar{\epsilon})$.
Using Bayes' Theorem,
we then construct a probability density function
for the signal cross section,
\begin{equation}
\mathcal{P}(\sigma_{sig}|\bar{n},\bar{b},\bar{\epsilon})
= \frac{ \mathcal{L}(\bar{n}|\sigma_{sig},\bar{b},\bar{\epsilon})P(\sigma_{sig}) }
{\int_0^{\infty} \mathcal{L}(\bar{n}|\sigma_{sig}',\bar{b},\bar{\epsilon})P(\sigma_{sig}')\, d\sigma_{sig}'}
\end{equation}
with
a prior
probability density function
$P(\sigma_{sig})$ which expresses the subjective
``degree of belief'' for the value of the
signal cross section.
The 95\% CL upper limit $\sigma_{95}$ is obtained
by solving this integral equation
\begin{equation}
\int_0^{\sigma_{95}}
\mathcal{P}(\sigma_{sig}|\bar{n},\bar{b},\bar{\epsilon})
d\sigma_{sig}
= 0.95
\label{eqn:sigma95}
\end{equation}
We assume a uniform prior in the signal cross section
up to some high cutoff; the value of the cutoff has
no significant influence on the 95\% CL upper limit.
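The numerical procedure can be sketched in Python as follows
(a simplified illustration under the stated assumptions, not
the analysis code; it assumes a uniform $\sigma_{sig}$ grid,
untruncated Gaussian priors, and hypothetical per-channel
input arrays):
\begin{verbatim}
import numpy as np
from math import factorial

def limit95(n, lum, acc, bkg, f_eps, f_b, g, sigma_grid, n_mc=20000):
    # n, lum, acc, bkg, f_eps, f_b: per-channel lists;
    # g = (g_L, g_eps, g_b) are the correlated uncertainties.
    rng = np.random.default_rng(1)
    post = np.zeros(len(sigma_grid))
    for k, sig in enumerate(sigma_grid):
        like = 0.0
        for _ in range(n_mc):
            gL, ge, gb = (rng.normal(0.0, s) for s in g)
            L = 1.0
            for i in range(len(n)):
                fe = rng.normal(0.0, f_eps[i])
                fb = rng.normal(0.0, f_b[i])
                mu = ((1 + gL) * lum[i] * sig * (1 + fe)
                      * (1 + ge) * acc[i]
                      + (1 + fb) * (1 + gb) * bkg[i])
                mu = max(mu, 1e-9)  # guard against negative samples
                L *= mu ** n[i] * np.exp(-mu) / factorial(n[i])
            like += L
        post[k] = like / n_mc  # likelihood averaged over systematics
    post /= np.trapz(post, sigma_grid)  # flat prior up to grid cutoff
    cdf = np.concatenate(([0.0], np.cumsum(
        np.diff(sigma_grid) * 0.5 * (post[1:] + post[:-1]))))
    return np.interp(0.95, cdf, sigma_grid)  # sigma_95
\end{verbatim}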
We thereby extract the experimental 95\% CL
upper limit of $\sigma\cdot B$ for models
using vector boson and scalar boson,
respectively.
The results are listed in
Table~\ref{tab:results_2}
and shown in
Fig.~\ref{fig:results_6}.
These are the generic limits for
$gg\to X_{\mbox{scalar}}\to\tau\tau$
and
$f\bar{f}\to X_{\mbox{vector}}\to\tau\tau$
which can be interpreted in various models.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|} \hline
mass & vector & scalar \\
(GeV/$c^2$) & limit (pb) & limit (pb) \\ \hline
120 & 122.294 & 87.338 \\
140 & 18.884 & 17.899 \\
160 & 9.446 & 6.996 \\
180 & 6.066 & 4.229 \\
200 & 4.185 & 3.187 \\
250 & 2.637 & 2.192 \\
300 & 1.999 & 1.764 \\
350 & 1.757 & 1.540 \\
400 & 1.537 & 1.396 \\
450 & 1.441 & 1.330 \\
500 & 1.296 & 1.290 \\
600 & 1.237 & 1.174 \\ \hline
\end{tabular}
\caption[The 95\% CL upper limits]
{The 95\% CL upper limits on vector
and scalar particle production and
decay to tau pairs.}
\label{tab:results_2}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{9.2/ditau_scalar_vector.eps}}
\caption[Upper limits at 95\% CL for vector and scalar bosons]
{Upper limits at 95\% CL on
$\sigma(p\bar{p}\to X)B(X\to\tau\tau)$
for vector and scalar boson,
as a function of mass.}
\label{fig:results_6}
\end{center}
\end{figure}
\section{Exclusion Regions}
\label{sec:exclusion}
Now we can put the theoretical predictions on high mass
tau pair production discussed in Section~\ref{sec:theory_tt}
and the experimental 95\% CL upper limits together.
We take the region where the theoretical prediction is bigger than
the upper limit to be excluded~at~95\%~CL.
For reference, this analysis would thus exclude at 95\%
CL a $Z^\prime$ with standard model couplings having a
mass of less than 394 GeV/$c^2$, as shown in
Fig.~\ref{fig:results_7}. For the MSSM pseudoscalar
Higgs boson $A$, this analysis is not yet sensitive enough
to exclude a region.
\begin{figure}
\begin{center}
\parbox{5.5in}{\epsfxsize=\hsize\epsffile{9.3/zprime_limit.eps}}
\caption[Exclusion region for $Z^\prime$]
{Upper limits at 95\% CL and theoretical predictions of
$\sigma(p\bar{p}\to Z^\prime)B(Z^\prime\to\tau\tau)$.
The excluded region is the region with $m(Z')<394$ GeV/$c^2$.}
\label{fig:results_7}
\end{center}
\end{figure}
In this paper we treat several questions related to weight complex functors; the latter are defined on triangulated categories endowed with weight structures (as independently defined by the author and D. Pauksztello).
We give an important definition of {\it pure} (co)homological functors.\footnote{The relation of pure functors to Deligne's purity of (singular and \'etale) cohomology is recalled in Remark \ref{rpuresd}(3).}
Functors of this type have already found interesting applications in several papers (note in particular that the results of our \S\ref{sdet} are important for the study of Picard groups of triangulated categories in \cite{bontabu}; other interesting pure functors were crucial for \cite{kellyweighomol}, \cite{bachinv}, \cite{bgn}, and \cite{bsoscwhn}). Pure functors can be defined in two distinct ways: for a weight structure $w$ on a triangulated category $\cu$ one can either demand that a (co)homological functor $H$ from $\cu$ into an abelian category $\au$ kills objects whose "weights" are either (strictly) positive or negative, or "reconstruct" $H$ of this type from its values on the {\it heart} $\hw$ of $w$ using the corresponding weight complex functor (and one obtains a pure functor from any additive functor from $\hw$ into $\au$ using this method).
Now we recall that the original weight complex functor from \cite{gs} has associated certain complexes of Chow motives to varieties over characteristic $0$ fields. We vastly generalize this construction (following \cite{bws})
and obtain a "weakly exact" functor $t:\cu\to \kw(\hw)$ (this is a certain "weak" category of complexes; see Proposition \ref{pwt} and Remark \ref{rwc}(\ref{irwc7}) below)
corresponding to any weight structure $w$.
These general weight complex functors are closely related to {\it weight spectral sequences} (that generalize Deligne's ones and
also "calculate" the values of pure functors; see Proposition \ref{pwss}). Moreover, the conservativity properties of these functors enable us to prove that a {\it weight-exact functor} (i.e., an exact functor that "respects" the corresponding weight structures) whose restriction to the heart is full and conservative is also conservative on weight-bounded objects. Combined with the recent results of J. Ayoub (see Remark \ref{rgap}), this statement
implies the conservativity of the $\ql$-\'etale (and de Rham) realization on the category $\dmgq$ of geometric Voevodsky motives over a characteristic $0$ field (see Remark \ref{rayoub}(3)). We also extend this result to a bigger motivic category.
Furthermore, we apply our general theory to the case of the {\it spherical} weight structure $\wg$ on the equivariant stable homotopy category $\shg$; here $G$ is a compact Lie group. $\wg$ is generated by the (stable) {\it orbit category}; the latter consists of spectra of the form $S_H^0$ (see \S\ref{shg}), where $H$ runs through closed subgroups of $G$.
The corresponding class $\shg_{\wg\ge 0}$ is the class of connective $G$-spectra, the heart $\hw^G$ consists of retracts of coproducts of $S_{H_i}^0$, whereas the weight complex functor calculates the
equivariant ordinary
homology with Burnside ring coefficients $H^G_*$ considered in \cite{lewishur} and \cite{mayeq},
and pure cohomological functors into abelian groups are representable by Eilenberg-MacLane $G$-spectra and equal the Bredon cohomology functors corresponding to Mackey functors. Moreover, in the case $G=\{e\}$ (that is, $\shg=\shtop$) the corresponding {\it $\wsp$-Postnikov towers} are the cellular ones in the sense of \cite[\S6.3]{marg}.
\begin{rrema}\label{rgap}
Unfortunately, the current proof of the main results of Ayoub's \cite{ayoubcon} contains a gap. Hopefully, it will be closed eventually.
Anyway, the fact that Theorem II of ibid. implies the conservativity of realizations conjecture is non-trivial and interesting in itself.
Note also that the latter conservativity assertion has several nice applications; some of them were described in \cite{bcons}.
\end{rrema}
Now we describe the contents of the paper;
some more information of this sort can also be found at the beginnings of sections.
In \S\ref{sold} we recall a significant part of the general theory of weight structures. Moreover, we treat weight complexes more accurately than in
\cite[\S3]{bws} (see also
Appendices \ref{sdwc}--\ref{swhe} for some additional remarks on this matter). Furthermore, we apply
our theory to obtain a new theorem on the conservativity of weight-exact functors; our results generalize certain statements from \cite{wildcons}.
In \S\ref{spuredet} we introduce and study {\it pure} (homological and cohomological) functors. These functors are quite important (cf. \S\ref{stop} for "topological" examples, whereas several motivic examples have proved relevant in several recent papers). We relate them to detecting weights (i.e., we prove for a {\it $w$-bounded below} object $M$ that it belongs to
the $n$th level of the weight filtration whenever $H_i^{\ca}(M)=0$ for all $i< n$, where $H_*^{\ca}$ is a certain homological functor). This matter is important for the categorical Picard calculations of \cite{bontabu}.
We also study ({\it smashing}) weight structures and pure functors that respect coproducts.
In \S\ref{sexamples} we recall some statements that allow one to reconstruct a weight structure
from a subcategory of its heart. These theorems give the existence of so-called Chow weight structures on certain categories of Voevodsky motives; next we discuss the aforementioned motivic conservativity statements. Moreover, we study pure functors and detecting weights for {\it purely compactly generated} (smashing) weight structures. We also study possible "variations" of {\it weight Postnikov towers} and the corresponding weight complexes (for a fixed object $M$ of $\cu$); as a consequence, we obtain a new statement on the existence of weight structures along with some more motivic conservativity results.
In \S\ref{stop} we relate our general theory to the stable homotopy category $\shg$ of $G$-equivariant spectra (for any compact Lie group $G$) along with the "spherical" weight structure $\wg$ (generated by the stable orbit subcategory of equivariant spheres). We prove that the corresponding pure cohomology is Bredon one. In the case $G=\{e\}$
we prove that singular homology detects weights, and that $\wsp$-Postnikov towers are the {\it cellular} ones in the sense of \cite{marg}. We also discuss the relation of our results to ({\it adjacent}) $t$-structures, and to the {\it connective stable homotopy theory} as described in \S7 of \cite{axstab}.
The author is deeply grateful to prof. J.P. May for his very useful answers concerning equivariant homotopy categories, and to the referee for really important comments to the text. He is also
extremely grateful to the Max Planck Institute
in Bonn for the support
and hospitality during the work on this version.
\section{On weight structures: reminder, weight complexes, and conservativity applications}
\label{sold}
In \S\ref{snotata} we introduce some notation and conventions.
In \S\ref{ssws} we recall some basics on weight structures.
\S\ref{sswc} is dedicated to the theory of weight complex functors. Our treatment of this subject (along with weight Postnikov towers) is more accurate than the original one in \cite{bws} (cf. \S\ref{sdwc}--\ref{swhe} below).
In \S\ref{swss} we recall the basics of the theory of weight spectral sequences.
In \S\ref{sweap} we apply our theory to obtain an interesting statement on the conservativity of weight-exact functors.
\subsection{Some (categorical) notation }\label{snotata}
\begin{itemize}
\item All coproducts in this paper will be small.
\item Given a category $C$ and $X,Y\in\obj C$ we write $C(X,Y)$ for the set of morphisms from $X$ to $Y$ in $C$.
\item For categories $C',C$ we write $C'\subset C$ if $C'$ is a full
subcategory of $C$.
\item We say that $D$ is an {\it essentially wide} subcategory of $C$ if $D$ is a full subcategory of $C$ that is equivalent to $C$. Moreover, we will say that $D$ is a {\it skeleton} of $C$ if any two isomorphic objects of $D$ are equal.
\item Given a category $C$ and $X,Y\in\obj C$, we say that $X$ is a {\it
retract} of $Y$ if $\id_X$ can be
factored through $Y$.\footnote{If $C$ is triangulated or abelian,
then $X$ is a retract of $Y$ if and only if $X$ is its direct summand.}\
\item A (not necessarily additive) subcategory $\hu$ of an additive category $C$
is said to be {\it retraction-closed} in $C$ if it contains all retracts of its objects in $C$.
\item For any $(C,\hu)$ as above the full subcategory $\kar_{C}(\hu)$ of
$C$ whose objects are all retracts of (finite) direct sums of objects of $\hu$ in $C$ will be called the {\it retraction-closure} of $\hu$ in $C$; note that this subcategory is obviously additive and retraction-closed in $C$.
\item The {\it Karoubi envelope} $\kar(\bu)$ (no lower index) of an additive category $\bu$ is the category of ``formal images'' of idempotents in $\bu$. Consequently, its objects are the pairs $(A,p)$ for $A\in \obj \bu,\ p\in \bu(A,A),\ p^2=p$, and the morphisms are given by the formula
$$\kar(\bu)((X,p),(X',p'))=\{f\in \bu(X,X'):\ p'\circ f=f \circ p=f \}.$$
The correspondence $A\mapsto (A,\id_A)$ (for $A\in \obj \bu$) fully embeds $\bu$ into $\kar(\bu)$.
Moreover, $\kar(\bu)$ is {\it Karoubian}, i.e., any idempotent morphism yields a direct sum decomposition in
$\kar(\bu)$. Recall also that $\kar(\bu)$ is triangulated if $\bu$ is (see \cite{bashli}).
\item The symbol $\cu$ below will always denote some triangulated category; usually it will
be endowed with a weight structure $w$. The symbols $\cu'$ and $\du$ will also be used for triangulated categories only.
\item For any $A,B,C \in \obj\cu$ we say that $C$ is an {\it extension} of $B$ by $A$ if there exists a distinguished triangle $A \to C \to B \to A[1]$.
\item A class $D\subset \obj \cu$ is said to be {\it extension-closed} if it is closed with respect to extensions and contains $0$. We call the smallest extension-closed subclass of objects of $\cu$ that contains a given class $B\subset \obj\cu$ the {\it extension-closure} of $B$.
\item Given a class $D$ of objects of $\cu$ we will write $\lan D\ra$ for the smallest full retraction-closed triangulated subcategory of $\cu$ containing $D$. We call $\lan D\ra$ the triangulated category {\it densely generated} by $D$. Certainly, this definition can be applied in the case $\du=\cu$.
Moreover, we will say that $D$ {\it strongly generates} $\cu$ if $\cu$ equals its own smallest strictly full triangulated subcategory that contains $D$.\footnote{Clearly, this condition is fulfilled if and only if $ \cu$ equals the extension-closure of $\cup_{j\in \z}D[j]$.}
\item For $X,Y\in \obj \cu$ we will write $X\perp Y$ if $\cu(X,Y)=\ns$. For
$D,E\subset \obj \cu$ we write $D\perp E$ if $X\perp Y$ for all $X\in D,\
Y\in E$. Given $D\subset\obj \cu$ we write $D^\perp$ for the class $$\{Y\in \obj \cu:\ X\perp Y\ \forall X\in D\}.$$
Dually, ${}^\perp{}D$ is the class $\{Y\in \obj \cu:\ Y\perp X\ \forall X\in D\}$.
\item Given $f\in\cu (X,Y)$, where $X,Y\in\obj\cu$, we call the third vertex
of (any) distinguished triangle $X\stackrel{f}{\to}Y\to Z$ a {\it cone} of
$f$.\footnote{Recall that different choices of cones are connected by non-unique isomorphisms.}\
\item Below $\au$ will always denote some abelian category; $\bu$ is an additive category.
\item We write $C(\bu)$ for the category of (cohomological) complexes over $\bu$; $K(\bu)$ is its homotopy category.
The full subcategory of $K(\bu)$ consisting of bounded complexes will be denoted by $K^b(\bu)$.
We write $M=(M^i)$ if $M^i$ are the terms of a complex $M$.
\item We will say that an additive covariant (resp. contravariant) functor from $\cu$ into $\au$ is {\it homological} (resp. {\it cohomological}) if it converts distinguished triangles into long exact sequences.
For a (co)homological functor $H$ and $i\in\z$ we write $H_i$ (resp. $H^i$) for the composition $H\circ [-i]$.
\end{itemize}
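As promised above, let us illustrate the splitting of idempotents in Karoubi envelopes; the following computation is standard and straightforward. For an idempotent $p\in \bu(A,A)$ the element $p$ yields both a morphism $(A,\id_A)\to (A,p)$ and a morphism $(A,p)\to (A,\id_A)$ in $\kar(\bu)$, and the composition $(A,p)\to (A,\id_A)\to (A,p)$ equals $p=\id_{(A,p)}$. Arguing similarly for $\id_A-p$ one obtains a direct sum decomposition
$$(A,\id_A)\cong (A,p)\bigoplus (A,\id_A-p)$$
in $\kar(\bu)$; thus $(A,p)$ is a ``formal image'' of $p$ indeed.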
\subsection{Weight structures: basics}\label{ssws}
\begin{defi}\label{dwstr}
I. A pair of subclasses $\cu_{w\le 0},\cu_{w\ge 0}\subset\obj \cu$
will be said to define a weight
structure $w$ on a triangulated category $\cu$ if
they satisfy the following conditions.
(i) $\cu_{w\ge 0}$ and $\cu_{w\le 0}$ are retraction-closed in $\cu$ (i.e., contain all $\cu$-retracts of their objects).
(ii) {\bf Semi-invariance with respect to translations.}
$\cu_{w\le 0}\subset \cu_{w\le 0}[1]$ and $\cu_{w\ge 0}[1]\subset \cu_{w\ge 0}$.
(iii) {\bf Orthogonality.}
$\cu_{w\le 0}\perp \cu_{w\ge 0}[1]$.
(iv) {\bf Weight decompositions}.
For any $M\in\obj \cu$ there exists a distinguished triangle $$LM\to M\to RM {\to} LM[1]$$
such that $LM\in \cu_{w\le 0} $ and $ RM\in \cu_{w\ge 0}[1]$.
Moreover, if $\cu$ is endowed with a weight structure then we will say that $\cu$ is a {\it weighted} (triangulated) category.
\end{defi}
We will also need the following definitions.
\begin{defi}\label{dwso}
Let $i,j\in \z$; assume that a triangulated category $\cu$ is endowed with a weight structure $w$.
\begin{enumerate}
\item\label{idh}
The full subcategory $\hw$ of $ \cu$ whose objects are $\cu_{w=0}=\cu_{w\ge 0}\cap \cu_{w\le 0}$ is called the {\it heart} of $w$.
\item\label{id=i}
$\cu_{w\ge i}$ (resp. $\cu_{w\le i}$, resp. $\cu_{w= i}$) will denote the class $\cu_{w\ge 0}[i]$ (resp. $\cu_{w\le 0}[i]$, resp. $\cu_{w= 0}[i]$).
\item\label{id[ij]}
$\cu_{[i,j]}$ denotes $\cu_{w\ge i}\cap \cu_{w\le j}$; hence this class equals $\ns$ if $i>j$.
$\cu^b\subset \cu$ will be the category whose object class is $\cup_{i,j\in \z}\cu_{[i,j]}$; we say that its objects are the {\it $w$-bounded} objects of $\cu$.
\item\label{idbo}
We say that $(\cu,w)$ is {\it bounded} if $\cu^b=\cu$ (i.e., if $\cup_{i\in \z} \cu_{w\le i}=\obj \cu=\cup_{i\in \z} \cu_{w\ge i}$).
\item\label{idbob} We call $\cup_{i\in \z} \cu_{w\ge i}$ (resp. $\cup_{i\in \z} \cu_{w\le i}$) the class of {\it $w$-bounded below} (resp., {\it $w$-bounded above}) objects of $\cu$.
\item\label{idwe} Let $\cu'$ be a triangulated category endowed with a weight structure $w'$; let $F:\cu\to \cu'$ be an exact functor.
Then $F$ is said to be {\it weight-exact} (with respect to $w,w'$) if it maps $\cu_{w\le 0}$ into $\cu'_{w'\le 0}$ and
sends $\cu_{w\ge 0}$ into $\cu'_{w'\ge 0}$.
\item\label{idrest}
Let $\du$ be a full triangulated subcategory of $\cu$.
We will say that $w$ {\it restricts} to $\du$ whenever the couple $(\cu_{w\le 0}\cap \obj \du,\ \cu_{w\ge 0}\cap \obj \du)$ is a weight structure on $\du$.
\item\label{ilrd}
We will say that $M$ is left (resp., right) {\it $w$-degenerate} (or {\it weight-degenerate} if the choice of $w$ is clear) if $M$ belongs to $ \cap_{i\in \z}\cu_{w\ge i}$ (resp. to $\cap_{i\in \z}\cu_{w\le i}$).
\item\label{iwnlrd} We say that $w$ is left (resp., right) {\it non-degenerate} if all left (resp. right) weight-degenerate objects are zero.
\end{enumerate}
\end{defi}
\begin{rema}\label{rstws}
1. A simple (and still useful) example of a weight structure comes from the stupid filtration on the homotopy category $K(\bu)$ of complexes over an arbitrary additive $\bu$ (it can also be restricted to bounded complexes; see Definition \ref{dwso}(\ref{idrest})).
We set $K(\bu)_{\wstu\le 0}$ (resp. $K(\bu)_{\wstu\ge 0}$) to be the class of complexes that are
homotopy equivalent to complexes concentrated in degrees $\ge 0$ (resp. $\le 0$); see Remark 1.2.3(1) of \cite{bonspkar} for more detail. We will use this notation below.
The heart of this weight structure is the retraction-closure of $\bu$ in $K(\bu)$; hence it is equivalent to $\kar(\bu)$.
2. A weight decomposition (of any $M\in \obj\cu$) is almost never canonical.
Still for any $m\in \z$ the axiom (iv) gives the existence of a distinguished triangle \begin{equation}\label{ewd} w_{\le m}M\to M\to w_{\ge m+1}M\to (w_{\le m}M)[1] \end{equation} with some $ w_{\ge m+1}M\in \cu_{w\ge m+1}$ and $ w_{\le m}M\in \cu_{w\le m}$; we call it an {\it $m$-weight decomposition} of $M$.
We will often use this notation below even though $w_{\ge m+1}M$ and $ w_{\le m}M$ are not canonically determined by $M$. We call any possible choice either of $w_{\ge m+1}M$ or of $ w_{\le m}M$ (for any $m\in \z$) a {\it weight truncation} of $M$. Moreover, when we write arrows of the type $w_{\le m}M\to M$ or $M\to w_{\ge m+1}M$ we always assume that they come from some $m$-weight decomposition of $M$.
3. In the current paper we use the ``homological convention'' for weight structures; it was previously used in \cite{wildshim}, \cite{wildcons}, \cite{hebpo},
\cite{brelmot}, \cite{bonivan}, \cite{bonspkar},
\cite{bokum}, \cite{bgn}, \cite{bkwn}, \cite{bvtr}, and \cite{bpws}, whereas in \cite{bws}, \cite{bger}, and \cite{bontabu} the ``cohomological convention'' was used. In the latter convention the roles of $\cu_{w\le 0}$ and $\cu_{w\ge 0}$ are essentially interchanged, i.e., one
considers the classes $\cu^{w\le 0}=\cu_{w\ge 0}$ and $\cu^{w\ge 0}=\cu_{w\le 0}$. Consequently, a
complex $X\in \obj K(\bu)$ whose only non-zero term is the fifth one (i.e.,
$X^5\neq 0$) has weight $-5$ in the homological convention, and has weight $5$
in the cohomological convention. Thus the conventions differ by ``signs of
weights''; $K(\bu)_{[i,j]}$ is the class of retracts of complexes concentrated in degrees $[-j,-i]$.
We also recall that
D. Pauksztello has introduced weight structures independently (see \cite{paucomp}); he called them co-t-structures. \end{rema}
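Let us also describe weight decompositions for $\wstu$ explicitly; this description is standard and easily verified, and we will use the corresponding notation below. For $M=(M^i)\in \obj K(\bu)$ and $m\in \z$ the terms of $M$ in degrees $\ge -m$ form a subcomplex that we will denote by $\sigma_{\ge -m}M$ (the corresponding {\it stupid truncation}), and the degreewise split short exact sequence of complexes $0\to \sigma_{\ge -m}M\to M\to \sigma_{\le -m-1}M\to 0$ yields a distinguished triangle
$$\sigma_{\ge -m}M\to M\to \sigma_{\le -m-1}M\to (\sigma_{\ge -m}M)[1]$$
in $K(\bu)$. Since $\sigma_{\ge -m}M\in K(\bu)_{\wstu\le m}$ and $\sigma_{\le -m-1}M\in K(\bu)_{\wstu\ge m+1}$, one may take $w_{\le m}M=\sigma_{\ge -m}M$ and $w_{\ge m+1}M=\sigma_{\le -m-1}M$ in (\ref{ewd}). The orthogonality axiom is easy to verify in this example as well: if $X\in \obj K(\bu)$ is concentrated in degrees $\ge 0$ and $Y$ is concentrated in degrees $\le -1$ then even $C(\bu)(X,Y)=\ns$ for the obvious reason of degrees.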
\begin{pr}\label{pbw}
Let $m\le l\in\z$, $M,M'\in \obj \cu$, $g\in \cu(M,M')$.
\begin{enumerate}
\item \label{idual}
The axiomatics of weight structures is self-dual, i.e., for $\cu'=\cu^{op}$ (so $\obj\cu'=\obj\cu$) there exists the (opposite) weight structure $w'$ for which $\cu'_{w'\le 0}=\cu_{w\ge 0}$ and $\cu'_{w'\ge 0}=\cu_{w\le 0}$.
\item\label{iort}
$\cu_{w\ge 0}=(\cu_{w\le -1})^{\perp}$ and $\cu_{w\le 0}={}^{\perp} \cu_{w\ge 1}$.
\item\label{icoprod} $\cu_{w\le 0}$ is closed with respect to all coproducts that exist in $\cu$.
\item\label{iext} $\cu_{w\le 0}$, $\cu_{w\ge 0}$, and $\cu_{w=0}$ are additive and extension-closed.
\item\label{isplit} If $A\to B\to C\to A[1]$ is a $\cu$-distinguished triangle and $A,B,C\in \cu_{w=0}$ then this distinguished triangle splits, that is, $B\cong A\bigoplus C$.
\item\label{igenlm}
The class $\cu_{[m,l]}$ is the extension-closure of $\cup_{m\le j\le l}\cu_{w=j}$.
\item\label{ibond} If $M$ is bounded above (resp. below) and also left (resp. right) $w$-degenerate then it is zero.
\item\label{ifact} Assume $M'\in \cu_{w\ge m}$. Then any $g\in \cu(M,M')$ factors through $w_{\ge m}M$ (for any choice of the latter object).
Dually, if $M\in \cu_{w\le m}$ then any $g\in \cu(M,M')$ factors through $w_{\le m}M'$.
\item\label{iwdmod} If $M$ belongs to $ \cu_{w\le 0}$ (resp. to $\cu_{w\ge 0}$) then it is a retract of any choice of $w_{\le 0}M$ (resp. of $w_{\ge 0}M$).
\item\label{iwd0}
If $M\in \cu_{w\ge m}$ then $w_{\le l}M\in \cu_{[m,l]}$ (for any $l$-weight decomposition of $M$).
Dually, if $M\in \cu_{w\le l}$ then $w_{\ge m}M\in \cu_{[m,l]}$.
\item\label{icompl} For any (fixed) $m$-weight decomposition of $M$ and an $l$-weight decomposition of $M'$ (see Remark \ref{rstws}(2)) $g$ can be extended to a morphism of the corresponding distinguished triangles:
\begin{equation}\label{ecompl} \begin{CD} w_{\le m} M@>{c}>>
M@>{}>> w_{\ge m+1}M\\
@VV{h}V@VV{g}V@VV{j}V \\
w_{\le l} M'@>{}>>
M'@>{}>> w_{\ge l+1}M' \end{CD}
\end{equation}
Moreover, if $m<l$ then this extension is unique (provided that the rows are fixed).
\item\label{iwdext} For any distinguished triangle $M\to M'\to M''\to M[1]$ and any
weight decompositions $LM\stackrel{a_{M}}{\to} M\stackrel{n_{M}}{\to} RM\to LM[1]$ and $LM''\stackrel{a_{M''}}{\to} M''\stackrel{n_{M''}}{\to} RM''\to LM''[1]$ there exists a commutative diagram
$$\begin{CD}
LM @>{}>>LM'@>f>> LM''@>{}>>LM[1]\\
@VV{a_M}V@VV{a_{M'}}V @VV{a_{M''}}V@VV{a_{M}[1]}V\\
M@>{}>>M'@>{}>>M''@>{}>>M[1]\\
@VV{n_M}V@VV{n_{M'}}V @VV{n_{M''}}V@VV{n_{M}[1]}V\\
RM@>{}>>RM'@>{}>>RM''@>{}>>RM[1]\end{CD}
$$
in $\cu$ whose rows are distinguished triangles and the second column is a weight decomposition (along with the first and the third one).
\end{enumerate}
\end{pr}
\begin{proof}
Assertions \ref{idual}--\ref{igenlm} and
\ref{icompl}--\ref{iwdext}
were proved in \cite{bws} (cf. Remark 1.2.3(4) of \cite{bonspkar} and pay attention to Remark \ref{rstws}(3) above!).
To prove assertion \ref{ibond} it suffices to consider the case where $M$ is bounded above and left $w$-degenerate since the remaining case is its dual (see assertion \ref{idual}). Now, these assumptions imply that $M\in \cu_{w\le n}$ and $M\in \cu_{w\ge n+1}$ for any large enough $n\in \z$; hence $M\perp M$, i.e., $M=0$.
Assertion \ref{ifact} follows from assertion \ref{icompl} immediately. Next, assertion \ref{iwdmod} is straightforward from the previous assertion (applied to the morphism $\id_M$).
Lastly, assertion \ref{iwd0} is an easy consequence of assertion \ref{iext}; cf. Proposition 1.3.3(6) of \cite{bws}.
\end{proof}
\subsection{On weight Postnikov towers and weight complexes}\label{sswc}
To define the weight complex functor we need the following definitions.
\begin{defi}\label{dfilt}
Let $M\in \obj \cu$.
1. A datum consisting of $M_{\le i}\in \obj \cu$, $h_i\in \cu(M_{\le i},M)$, and $j_i\in \cu(M_{\le i},M_{\le i+1})$ for $i$ running through integers will be called a {\it filtration on $M$} if we have $h_{i+1}\circ j_i=h_i$ for all $i\in \z$; we write $\fil_M$ for this filtration.
A filtration will be said to be {\it bounded} if there exist $l\le m\in \z$ such that $M_{\le i}=0$ for all $i<l$ and $h_i$ are isomorphisms for all $i\ge m$.
2. A filtration as above equipped with distinguished triangles \begin{equation}\label{etpt} M_{\le i-1}\stackrel{j_{i-1}}{\to}M_{\le i} \stackrel{c_{i}}{\to} M_i\stackrel{e_{i-1}}{\to} M_{\le i-1}[1]\end{equation}
for all $i\in \z$ will be called a {\it Postnikov tower} for $M$ or for $\fil_M$; this tower will be denoted by $Po_{\fil_M}$.
We use the symbol $M^p$ to denote $M_{-p}[p]$.
3. If $\fil_{M'}=(M'_{\le i}, h'_i, j'_i)$ is a filtration on $M'\in \obj \cu$ and $g\in \cu(M,M')$ then we call $g$ along with a collection of $g_{\le i}\in \cu(M_{\le i}, M'_{\le i})$ a {\it morphism of filtrations compatible with $g$} if $g\circ h_i=h'_i\circ g_{\le i}$ and $j'_i\circ g_{\le i} =g_{\le i+1}\circ j_i$ for all $i\in \z$.
Moreover,
if we have Postnikov towers $Po_{\fil_M}$ and $Po_{\fil_{M'}}$ for $\fil_{M}$ and $\fil_{M'}$, respectively, then a datum consisting of a morphism of filtrations compatible with $g$ along with $g_i:M_i\to M'_i$ will be said to give a Postnikov tower morphism $Po_{\fil_M}\to Po_{\fil_{M'}}$ if all the
diagrams of the form
$$\begin{CD}
M_{\le i}@>{c_i}>>M_i@>{e_{i-1}}>>M_{\le i-1}[1]\\
@VV{g_{\le i}}V@VV{g_i}V@VV{g_{\le i-1}[1]}V \\
M'_{\le i}@>{c'_i}>>M'_i@>{e'_{i-1}}>>M'_{\le i-1}[1]
\end{CD}$$
are commutative. Lastly, we will write $g^i$ for $g_{-i}[i]$.
\end{defi}
Let us recall a few simple properties of these notions.
\begin{lem}\label{lrwcomp}
1. For a Postnikov tower as above the morphisms \break $d^i= c_{-i-1}[i+1] \circ e_{-i-1}[i]: M^i\to M^{i+1}$ (for $i\in \z$) give a complex.
We call it the {\it complex associated with} $Po_{\fil_M}$.
2. Any filtration can be completed to a Postnikov tower uniquely up to a non-unique isomorphism, and any morphism of filtrations extends to a morphism of the corresponding Postnikov towers.
Moreover, any morphism of Postnikov towers gives a morphism of the associated complexes.
3. If a filtration of $M$ is bounded then $M$ belongs to the extension-closure of the corresponding $\{M_i\}$. \end{lem}
\begin{proof}
1. We have $d^{i+1}\circ d^i=c_{-i-2}[i+2] \circ (e_{-i-2}[i+1]\circ c_{-i-1}[i+1]) \circ e_{-i-1}[i]=0$.
2. The existence and uniqueness of these extensions is
a straightforward application of the axioms TR 1 and TR 3 of triangulated categories.
Lastly, $g^i$ give a morphism of complexes since all the diagrams of the form
$$\begin{CD}
M^i@>{e_{-1-i}[i]}>>M_{\le -i-1}[1+i]@>{c_{-1-i}[i+1]}>>M^{i+1} \\
@VV{g^i}V@VV{g_{\le -i-1}[1+i]}V@VV{g^{i+1}}V \\
M'{}^i@>{e'_{-1-i}[i]}>>M'_{\le -i-1}[1+i]@>{c'_{-1-i}[i+1]}>>M'{}^{i+1}
\end{CD}$$
are commutative.
3. Assume that $M_{\le l}=0$
for some $l\in \z$. Then obvious induction yields that the corresponding objects $M_{\le n}$ belong to the extension-closure of $\{M_i\}$ for all $n\ge l$. Thus if $M_{\le m}\cong M$ for some $m\ge l$ then $M$ belongs to this extension-closure as well.
\end{proof}
Now let us relate these notions with weight structures.
\begin{defi}\label{dwpt}
Assume that $\cu$ is a weighted triangulated category (see Definition \ref{dwstr}).
1. We call a filtration (see Definition \ref{dfilt}) $\fil_M$ on $M\in \obj \cu$ a {\it weight filtration} (on $M$) if the morphisms $h_i:M_{\le i}\to M$ yield $i$-weight decompositions for all $i\in \z$ (in particular,
$M_{\le i}=w_{\le i}M$; see Remark \ref{rstws}(2)).
We will call the corresponding $Po_{\fil_M}$ (see Lemma \ref{lrwcomp}(2)) a {\it weight Postnikov tower} for $M$.
2. $\pwcu$ will denote the category whose objects are objects of $\cu$ endowed with
weight Postnikov towers and whose morphisms are morphisms of Postnikov towers.
$\cuw$ will be the category whose objects are the same as for $\pwcu$ and such that $\cuw(Po_{\fil_M},Po_{\fil_{M'}})=\imm (\pwcu(Po_{\fil_M},Po_{\fil_{M'}})\to \cu(M,M'))$ (i.e., we kill those morphisms of towers that are zero on the underlying objects).
3. For an additive category $\bu$, complexes $M,N\in \obj C(\bu)$, and morphisms $m_1,m_2\in C(\bu)(M,N)$ we write $m_1\backsim m_2$ if $m_1-m_2=d_N\circ x+y\circ d_M$ for some collections of arrows $x^*,y^*:M^*\to N^{*-1}$, where $d_M$ and $d_N$ are the corresponding differentials.
We call this equivalence relation the {\it weak homotopy (equivalence)} one (cf. Remark \ref{rwc}(\ref{irwc2}) below).
\end{defi}
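Let us give a simple example of these notions; the verification is standard and straightforward. Take $\cu=K(\bu)$ and $w=\wstu$ (see Remark \ref{rstws}(1)). Then for $M=(M^i)\in \obj K(\bu)$ the stupid truncations $M_{\le i}=\sigma_{\ge -i}M$ (see the example following Remark \ref{rstws}) along with the obvious inclusions give a weight filtration on $M$; in the corresponding weight Postnikov tower we have $M_i\cong M^{-i}[i]$ (see the triangle (\ref{etpt})), so that the terms of the complex associated with this tower (see Lemma \ref{lrwcomp}(1)) are the terms of $M$ itself. Moreover, one can check that the differentials of this associated complex coincide with those of $M$ at least up to signs; cf. Remark \ref{rwc}(\ref{irwc7}) below.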
\begin{pr}\label{pwt}
In addition to the notation introduced above (in particular, note that $g\in \cu(M,M')$) assume that $\bu$ is an additive category and $n\in \z$.
\begin{enumerate}
\item\label{iwpt1}
Any choice of $i$-weight decompositions of $M$ for $i$ running through integers gives a unique weight filtration on $M$ with $M_{\le i}=w_{\le i}M$, and $M^i\in \cu_{w=0}$.
\item\label{iwpt2} Any $g\in \cu(M,M')$ can be extended to a morphism of (any choice of) weight filtrations for $M$ and $M'$, respectively; hence it also extends to a morphism of weight Postnikov towers.
\item\label{iwpt3} The obvious functor $\cuw\to \cu$ is an equivalence of categories.
\item\label{iwhecat} Factoring morphisms in $K(\bu)$ by the weak homotopy equivalence relation yields an additive category $\kw(\bu)$. Moreover, the corresponding full functor $K(\bu)\to \kw(\bu)$ is (additive and) conservative.
\item\label{iwhefu}
Let $\ca:\bu\to \au$ be an additive functor, where $\au$ is any abelian category. Then for any $B,B'\in \obj K(\bu)$ any pair of weakly homotopic morphisms $m_1,m_2\in C(\bu)(B,B')$ induces equal morphisms of the homology $H_*((\ca(B^i)))\to H_*((\ca(B'^i)))$.
\item\label{iwhefun}
Sending an object of $\cuw$ into the complex given by Lemma \ref{lrwcomp}(1) and a morphism of Postnikov towers into the corresponding $(g^i)$ (see Definition \ref{dfilt}(3)) yields a well-defined additive functor
$t=t_w:\cuw\to \kw(\hw)$.
We call this functor the {\it weight complex} one.
We
will often write $t(M)$ for $M\in \obj \cu$ (resp. $t(g)$) assuming that some weight Postnikov tower for $M$ (resp. a lift of $g$ to $\cuw$) is chosen; we say that $t(M)$ is {\it a choice of a weight complex} for $M$.
\item\label{irwcsh} $t\circ [n]_{\cuw}\cong [n]_{\kw(\hw)}\circ t$, where $[n]_{\cuw}$ and $[n]_{\kw(\hw)}$ are the obvious shift by $[n]$ (invertible) endofunctors of the categories $\cuw$ and $\kw(\hw)$, respectively.
\item\label{iwcons} Assume that $M$ is bounded above (resp. below). Then $M\in \cu_{w\le n}$ (resp. $M\in \cu_{w\ge n}$) if and only if $t(M)$ belongs to $K(\hw)_{\wstu\le n}$ (resp. to $K(\hw)_{\wstu\ge n}$; cf. Remark \ref{rwc}(\ref{irwco}) below).
\item\label{iwcex}
If $M\stackrel{g}{\to} M' \stackrel{f}{\to} M''$ is a distinguished triangle in $\cu$ then for any choice of $t(M)$ and $t(M'')$
there exists a compatible choice of $(t(g),t(f))$ (so, the domain of this $t(g)$ is the chosen $t(M)$ and the target of $t(f)$ is $t(M'')$) along with their lifts to $K(\hw)$ that can be completed to a distinguished triangle in $K(\hw)$.
\item\label{irwcons}
If $M\in \cu_{w\ge n}$ for some $n\in \z$ then there exists a weight Postnikov tower $Po_M$ with $M_{\le i}=w_{\le i}M=0$ for all $i<n$; consequently, $M^i=0$ for $i>-n$.
Moreover, if $M\in \cu_{w\le n}$ then we can take $M_{\le i}=w_{\le i}M=M$ for all $i\ge n$ to obtain $M^i=0$ for $i<-n$.
Consequently, if $M$ is left or right weight-degenerate then $t(M)=0$ for the corresponding choice of a weight Postnikov tower for $M$; the composition of the obvious embedding $\hw\to \cuw$ with $t$ is isomorphic to the obvious embedding $\hw \to \kw(\hw)$.
\item\label{iwch} Assume $N\in \cu_{w=0}$, and $N$ is a retract of $M$. Then $N$ is also a retract of
the object $M^0$ for any choice of $t(M)=(M^i)$.
\item\label{iwcfunct} Let $\cu'$ be a triangulated category endowed with a weight structure $w'$; let
$F:\cu\to \cu'$ be a weight-exact functor. Then $F$ is compatible with a naturally defined functor $F_w:\cuw\to \cuw'$, and the composition $t'\circ F_w$
equals $\kw(\hf)\circ t$, where $t'$ is the weight complex functor corresponding to $w'$, and the functor $\kw(\hf):\kw(\hw)\to \kw(\hw')$ is the obvious $\kw(-)$-version of the restriction $\hf:\hw\to \hw'$ of $F$ to $\hw$.
\item\label{iwc2342} Assume that $t(g)=(g^i)$, and the arrows $g^i$ come from an actual $\pwcu$-morphism (between the corresponding weight Postnikov towers) that is compatible with $g$. Then any family $({\tilde g}^{i})$ such that $({\tilde g}^{i})=(g^i)$ in $K(\hw)$ (that is, $(\tilde g^{i})$ is homotopic to $(g^{i})$) extends to a morphism of these towers that is compatible with $g$ as well.
\end{enumerate}
\end{pr}
\begin{proof}
Taking into account our definitions (cf. also Lemma \ref{lrwcomp}(2)), assertions \ref{iwpt1}--\ref{iwcons}
easily follow from the results of \cite{bws}; see Lemma 1.5.1(1,2),
Lemma 3.1.4(I.1,II.1), Remark 3.1.7(2), Theorem 3.2.2(II), and Theorem 3.3.1(IV,I) of ibid., respectively.
Moreover, in Appendix \ref{sdwc}
below
we
prove assertions \ref{iwpt1}--\ref{iwpt3}, \ref{iwhefun}, \ref{iwcons}, and \ref{iwcex}, whereas assertions \ref{iwhecat} and \ref{iwhefu} are contained in Proposition \ref{pwwh}.
Next, all the statements in assertions \ref{irwcsh} and \ref{irwcons} follow from our definitions immediately.
Assertion \ref{iwch} is easy as well. We can set $t(N)=N$ and $t(M)=(M^i)$. Applying $t$ to the fact that $N$ is a retract of $M$ we obtain that $\id_N$ factors through $t(M)$ in $\kw(\hw)$. Looking at the corresponding
$C(\hw)$-morphisms we deduce that $N$ is a retract of $M^0$ indeed (cf. the end of assertion \ref{iwcex}).
To prove assertion \ref{iwcfunct}
we note that weight-exact functors send weight Postnikov towers and their morphisms in $\cu$ into that in $\cu'$. Hence we can define the functor $F_w$ in the obvious way, and the equality $t'\circ F_w=\kw(\hf)\circ t$ is automatic.
Assertion \ref{iwc2342} is rather technical and will not be applied in this paper. For this reason, we place this proof in \S\ref{sirwc} as well.
\end{proof}
\begin{rema}\label{rwc}
\begin{enumerate}
\item\label{irwco}
Combining parts \ref{iwhefun}, \ref{iwpt3}, and \ref{iwhecat} of our proposition we obtain that all possible choices of $t(M)$ are homotopy equivalent.
Thus the assumptions that $t(M)\in K(\hw)_{\wstu\ge n}$ and $t(M)\in K(\hw)_{\wstu\le n}$ do not depend on the choice of $t(M)$.
More generally, we will not mention choices when speaking of those properties of $t(M)$ that are preserved by $K(\hw)$-isomorphisms.
\item\label{irwc2} The weak homotopy equivalence relation was introduced in \S3.1 of \cite{bws} independently of the earlier and closely related notion of {\it absolute homology}; cf. Theorem 2.1 of \cite{barrabs}.
\item\label{irwc7}
$t$ can ``usually'' be ``enhanced'' to an exact functor $t^{st}:\cu\to K(\hw)$; see Corollary 3.5 of \cite{sosnwc} and \S6.3 of \cite{bws}. However, the author currently does not know how to obtain exact weight complex functors for the Chow weight structures studied in \cite{bokum}.
Note also that to obtain a strong weight complex functor that is compatible with $t$ one
has to change the signs of differentials either in Lemma \ref{lrwcomp}(1) or in the category $K(\hw)$ (see \S2 and Definition 5.7 of \cite{schnur}). However, since this matter does not appear to affect any of the applications of weight complexes known to the author, the reader may probably ignore it.
\item\label{irwc51} Our definition of weight complexes is not (quite) self-dual; see Remark \ref{rwcbws}(\ref{irwc52}) below for more detail. Note also that we do not
use any octahedra in our definitions (in contrast to Definition 1.5.8 of \cite{bws} and Definition 5.7 of \cite{schnur}); yet we will need an octahedral diagram in Appendix \ref{sdwc} below.
Lastly, our weight Postnikov towers generalize {\it cellular towers} of \cite{marg}; see Theorem \ref{top}(\ref{itopcell},\ref{itopskel}) below and Theorem 4.2.3(7) of \cite{bkwn}.
\end{enumerate}
\end{rema}
\subsection{Weight spectral sequences: reminder}\label{swss}
Let us recall weight spectral sequences for cohomology and homology.
\begin{pr}\label{pwss}
Assume that $\cu$ is a weighted category.
1. If $H$ is a cohomological functor from $\cu$ into an abelian category $\au$ then for any $M\in \obj \cu$ and any
choice of $t(M)$ there exists a spectral sequence $T=T_w(H,M)$ with $E_1^{pq}=H^{q}(M^{-p})$, such that $M^i$ and the boundary morphisms of $E_1(T)$ come from this $t(M)$.
Moreover, $T_w(H,M)$ is $\cu$-functorial in $M$
starting from $E_2$; in particular, these levels of $T$ do not depend on any choices.
Furthermore, $T$ converges to $H^{p+q}(M)$ whenever $H$ kills $\cu_{w\ge i}$ and $\cu_{w\le -i}$ for $i$ large enough, or if $M$ is bounded above and $H$ kills $\cu_{w\le -i}$ for $i$ large enough.
2. Dually, for a homological $H':\cu\to \au$, any $M\in \obj \cu$, and any
choice of $t(M)$ there exists a spectral sequence $T=T_w(H',M)$ with $E_1^{pq}(T)=H'_{-q}(M^{p})$, such that the boundary morphisms of $E_1(T)$ come from $t(M)$ as well.
Moreover, $T_w(H',M)$ is $\cu$-functorial in $M$
starting from $E_2$, and converges to $H'_{-p-q}(M)$ whenever $H'$ kills $\cu_{w\ge i}$ and $\cu_{w\le -i}$ for $i$ large enough.
\end{pr}
\begin{proof}
1. This is (most of) Theorem 2.4.2 of \cite{bws} (yet take into account Remark \ref{rstws}(3)!).
2. See Theorem 2.3.2 of \cite{bws} (yet note that the numeration of homology in the current paper is opposite
to that in loc. cit.!).
\end{proof}
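For example, if $M\in \cu_{[a,b]}$ for some $a\le b\in\z$ then one can choose $t(M)$ with $M^i=0$ for $i\notin [-b,-a]$ (cf. Proposition \ref{pwt}(\ref{irwcons})); hence in part 1 we have $E_1^{pq}=0$ unless $a\le p\le b$. Since the corresponding filtration is finite in this case, one can easily check that $T$ converges to $H^{p+q}(M)$ for any cohomological $H$ whatsoever whenever $M$ is $w$-bounded.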
\begin{rema}\label{re1ss}
Recall that weight spectral sequences in \cite[\S2]{bws} were constructed using weight Postnikov towers of objects.
Now, all choices of $t(M)$ (that come from weight Postnikov towers for this object) are homotopy equivalent (see Remark \ref{rwc}(\ref{irwco})); yet it is not quite true that all complexes that are $K(\hw)$-isomorphic to a given $t(M)$ can be ``realized'' by $w$-Postnikov towers (see Remark \ref{realiz} below). However, this subtlety is not really essential for the theory of weight spectral sequences.
Let us justify this claim for cohomological weight spectral sequences (coming from part 1 of our proposition). For {\bf any} complex $(M'{}^i)\in \obj K(\hw)$ that is homotopy equivalent to $t(M)$ the homology of the complex $H^{q}(M'{}^{-*})$ is isomorphic to that for $H^{q}(M{}^{-*})$ (for our fixed choice $t(M)=(M^i)$).
Hence for any $(M'{}^i)$ of this sort the corresponding $E_1$-level of $T_w(H,M)$ ``fits'' with higher levels of $T$ computed using any fixed weight Postnikov tower for $M$.
\end{rema}
\subsection{An application to ``detecting weights'' by weight-exact functors}\label{sweap}
Proposition \ref{pwt} easily implies that certain weight-exact functors are conservative. We will discuss some motivic applications of the following theorem in Remark \ref{rayoub}(2,3) below.
\begin{theo}\label{twcons}
Let $\cu$ and $\cu'$ be triangulated categories endowed with weight structures $w$ and $w'$, respectively; let $F:\cu\to \cu'$ be a weight-exact functor (see Definition \ref{dwso}(\ref{idwe})).
Assume that the induced functor $\hf:\hw\to \hw'$ is full and conservative, $M$ is an object of $\cu$, and $n\in \z$.
1. Suppose that $M$ is $w$-bounded above (resp. below). Then $F(M)$ belongs to $ \cu'_{w'\le n}$ (resp. to $ \cu'_{w'\ge n}$) if and only if $M$ belongs to $ \cu_{w\le n}$ (resp. to $ \cu_{w\ge n}$).
2. Assume that $M$ is $w$-bounded above (resp., below) and $F(M)=0$. Then $M$ is right (resp. left) $w$-degenerate.
Moreover, if $M$ is $w$-bounded then it is zero.
3. Suppose that $M$ is $w$-bounded above; for any $i\in \z$ such that $M\in \cu_{w\le i}$ and any $N\in \cu_{w=i}$ assume that the homomorphism $\cu(M,N)\to \cu'(F(M),F(N))$ induced by $F$ is zero.
Then $M$ is right $w$-degenerate.
\end{theo}
\begin{proof}
1. The ``if'' implication is immediate from the definition of weight-exactness. So we verify the converse implication.
It clearly suffices to consider the case where $M$ is $w$-bounded above and $F(M)$ belongs to $ \cu'_{w'\le n}$ since the remaining case is its dual (see Proposition \ref{pbw}(\ref{idual})). We will write $t(M)=(M^i)$ for a choice of a weight complex for $M$; the boundary morphism of this complex will be denoted by $d^i$.
According to Proposition \ref{pwt}(\ref{iwcons}) it suffices to verify that $t(M)\in K(\hw)_{\wstu\le n}$ (see Remark \ref{rstws}(1)). Now choose the minimal integer $m\ge n$ such that $t(M)\in K(\hw)_{\wstu\le m}$; we should prove that $m=n$.
Next, we can assume that $M^i=0$ for $i<-m$ (see Proposition \ref{pwt}(\ref{irwcons})). By Proposition \ref{pwt}(\ref{iwcfunct}), the complex $t_{w'}(F(M))=(M'{}^i)$ can be obtained from $(M^i)$ by means of termwise application of $F$; hence $M'{}^i=0$ for $i<-m$ (and this choice of $t_{w'}(F(M))$) as well.
Now suppose that $m-1\ge n$. Then $(M'{}^i)\in K(\hw')_{\wstu\le m-1}$; hence the morphism $d'{}^{-m}$ is split monomorphic, i.e., there exists $s'\in \hw'(M'{}^{1-m},M'{}^{-m})$ such that $s'\circ d'{}^{-m}=\id_{M'{}^{-m}}$. Since $\hf$ is full and conservative and $\hw$ is a retraction-closed subcategory of $\cu$, the morphism $d^{-m}$ is isomorphic to the split monomorphism $M^{-m}\to (M^{-m}\bigoplus C)\cong M^{1-m}$ according to Lemma \ref{lsplit} below (here $C$ is some object of $\hw$). Thus $t(M)$ actually belongs to $ K(\hw)_{\wstu\le m-1}$ and we obtain a contradiction as desired.
2. If $F(M)=0$ then $F(M)$ belongs to $\cu'_{w'\le m}$ and also to $\cu'_{w'\ge m}$ for all $m\in \z$. Hence the assertion follows from the previous one immediately.
Next, if $M$ is right or left degenerate and also $w$-bounded then
$M=0$
according to Proposition \ref{pbw}(\ref{ibond}).
3. To prove that $M$ is right $w$-degenerate it clearly suffices to verify for any $m\in \z$ that $M$ belongs to $ \cu_{w\le m-1}$ whenever it belongs to $ \cu_{w\le m}$. We assume the latter and choose an $(m-1)$-weight decomposition triangle $$w_{\le m-1}M\to M\to w_{\ge m}M\to w_{\le m-1}M[1].$$ We will write $N$ for $w_{\ge m}M$; note that $N$ belongs to $\cu_{w=m}$ according to Proposition \ref{pbw}(\ref{iwd0}).
Since $F$ is weight-exact, the corresponding triangle
\begin{equation}\label{efwd}
F(w_{\le m-1}M)\to F(M)\stackrel{z}\to F(N)\to F(w_{\le m-1}M)[1]
\end{equation}
is an $(m-1)$-weight decomposition of $F(M)$.
Similarly to the proof of assertion 1, we choose $t(M)=(M^i)$ to satisfy $M^i=0$ for $i<-m$. Then Proposition \ref{pwss}(1) easily implies that $\cu(M,N)$ is the quotient of $\hw(M^{-m},N[-m])$ by the corresponding image of $\hw(M^{1-m},N[-m])$. Moreover, since the functor $F$ is weight-exact, we can take $t_{w'}(F(M))=(F(M^i))$ (see Proposition \ref{pwt}(\ref{iwcfunct})); thus the group $\cu(F(M),F(N))$ is the corresponding quotient of $\hw'(F(M^{-m}),F(N[-m]))$. Since $\hf$ is full, it follows that the homomorphism $\cu(M,N)\to \cu'(F(M),F(N))$ is surjective under our assumptions; thus $\cu'(F(M),F(N))=0$. We obtain that the morphism $z$ in (\ref{efwd}) is zero. Hence the object $F(M)$ is a retract of $F(w_{\le m-1}M)$; thus it belongs to $ \cu'_{w'\le m-1}$. Applying assertion 1 we conclude that $M$ belongs to $ \cu_{w\le m-1}$ indeed.
\end{proof}
Thus it remains to prove the following statement.
\begin{lem}\label{lsplit}
Let $G:\bu\to \bu'$ be a full and conservative additive functor (between additive categories),
and $h\in \bu(M,N)$ for some objects $M,N\in \obj \bu$.
1.
If $G(h)$ is split injective then $h$ is split injective as well.
2. Suppose that $\bu$ is a full retraction-closed subcategory of a triangulated category $\cu$ and
$h$ is split injective. Then $N$ can be presented as the direct sum $M\bigoplus N'$ for some $N'\in \obj \bu$ so that
$h=\id_M\bigoplus 0:M\to N$.
\end{lem}
\begin{proof}
1. Since $G(h)$ splits, there exists $s'\in \bu'(G(N),G(M))$ such that $s'\circ G(h)=\id_{G(M)}$. Since $G$ is full, there exists $s\in \bu(N,M)$ such that $s'=G(s)$. Since $G$ is conservative and $G(s\circ h)=\id_{G(M)}$, the composition $s\circ h$ is an automorphism $c$ of $M$. Thus the morphism $h$ is split by $c\ob \circ s$.
2. Since $\cu$ is a triangulated category, $h$ can be presented in the desired form in $\cu$. Since $\bu$ is retraction-closed in $\cu$, we obtain that the corresponding object $N'$ actually belongs to $\bu$.
\end{proof}
\begin{rema}\label{rwcons}
1. Respectively, one may say that any functor $F$ as in our theorem detects weights of $w$-bounded objects, i.e., looking at $F(M)$ one can find out whether $M\in \cu_{w\ge n}$ and whether $M\in \cu_{w\le n}$. This property of $F$ was called {\it $w$-conservativity} in \cite{bachinv}; see Proposition 17 and Theorem 22 of ibid.
2.
The bounded (``moreover'')
part of Theorem \ref{twcons}(2) substantially generalizes Theorem 2.5 of \cite{wildcons}, where $\hw$ was assumed to be {\it semi-primary} (in the sense of \cite[Definition 2.3.1]{andkahn}) and Karoubian. Moreover, part 1 of our theorem obviously implies Theorem 2.8(a--c) of ibid. (where $\hw$ was assumed to be semi-primary as well).
3. Proposition 3.2.1 of \cite{bkwn} also gives a generalization of part d of loc. cit.; however, a somewhat stronger assumption on $\hf$ is imposed (cf. Corollary \ref{consb}(2) below). Under this condition other parts of our theorem are
extended to objects that are not necessarily $w$-bounded either above or below. An important tool for obtaining (unbounded) results of this sort was the theory of {\it morphisms killing weights (in a range)}; we do not treat this notion in the current paper.
4. Even though the conditions of the theorem
seem to be somewhat restrictive, it appears to be rather difficult to weaken them. Obviously, the ``heart-conservativity'' assumption in it cannot be dropped (cf. Remark \ref{rdetect}(4) below).
It is an interesting question
to what degree the fullness condition can be weakened. Looking at the proof of the theorem one can easily see that instead of assuming that $\hf$ is (conservative and) full it suffices to assume the existence of an additive functor $G:\hw'\to \bu$ such that the composition $G\circ \hf$ satisfies this condition. In particular, here one can take $G$ to be the functor that kills the {\it radical} morphism ideal of $\hw'$ (see Definition 1.4 of \cite{wildshim}).
\end{rema}
\section{On pure functors and detecting weights}\label{spuredet}
In \S\ref{spured} we define pure functors as those that kill all weights except $0$, and prove that they can be expressed in terms of weight complexes.
In \S\ref{sdet} we study conditions ensuring that a pure functor detects weights of objects. The results of this section are important for the study of Picard groups of weighted triangulated categories carried out in \cite{bontabu}.
In \S\ref{smash} we prove a rich collection of properties of {\it smashing} weight structures (these are weight structures
``coherent with (small) coproducts'') along with those pure functors that preserve coproducts.
\subsection{Pure functors: equivalent definitions}\label{spured}
Let us define an important class of (co)homological functors from weighted categories.
\begin{defi}\label{dpure}
Assume that $\cu$ is endowed with a weight structure $w$.
We will say that a (co)homological functor $H$ from $\cu$ into an abelian category $\au$ is {\it $w$-pure} (or just pure if the choice of $w$ is clear) if $H$ kills both $\cu_{w\ge 1}$ and $\cu_{w\le -1}$.
\end{defi}
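Let us mention a basic example; its verification is straightforward. Take $\cu=K(\au)$ for an abelian category $\au$ and $w=\wstu$ (see Remark \ref{rstws}(1)). Then the functor that sends a complex $M\in \obj K(\au)$ into its zeroth cohomology object (computed in $\au$) is homological, and it kills all complexes that are homotopy equivalent to ones concentrated in degrees $\ge 1$ or $\le -1$; hence it is $\wstu$-pure.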
Now we give an explicit description of pure functors.
\begin{theo}\label{tpure}
1. Let $\ca:\hw\to \au$ be an additive functor, where $\au$ is an abelian category. Choose a weight complex $t(M)=(M^j)$ for each $M\in \obj \cu$, and denote by $H(M)=H^{\ca}(M)$ the zeroth homology of the complex $\ca(M^{j})$. Then $H(-)$ yields a homological functor that
does not depend on the choices of weight complexes. Moreover, the assignment $\ca\mapsto H^\ca$ is natural in $\ca$.
2. The correspondence $\ca\mapsto H^\ca$ is an equivalence of categories between the following (not necessarily locally small) categories of functors: $\adfu(\hw,\au)$ and the category of pure homological functors from $\cu$ into $\au$.
3. Assume that $w$ is bounded. Then a (co)homological functor $H$ from $\cu$ into $\au$ is $w$-pure if and only if it annihilates $\cu_{w=i}$ for all $i\neq 0$.
\end{theo}
\begin{proof}
1. Proposition \ref{pwt}(\ref{iwpt3},\ref{iwhefu},\ref{iwcex}) immediately implies that for any additive $\ca:\hw\to \au$ the functor $H^{\ca}$ is homological and does not depend on the choices of weight complexes.
2. Firstly, let us prove that any functor of the form $H^{\ca}$ is pure. For any $M\in \cu_{w\ge 1}\cup \cu_{w\le -1}$ we can choose $t(M)$ with $M^0=0$ according to Proposition \ref{pwt}(\ref{irwcons}); thus $H^{\ca}(M)=0$ in this case.
To finish the proof of the statement it suffices to verify that a $w$-pure functor can be functorially recovered from its restriction to $\hw$. This is immediate from assertion 1 combined with Proposition \ref{pwss}(2).
3. Immediate from our definitions combined with Proposition \ref{pbw}(\ref{igenlm}).
\end{proof}
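Let us make the argument for part 2 a little more explicit. If $H'$ is a pure homological functor and $M\in \obj \cu$ then $E_1^{pq}(T_w(H',M))=H'_{-q}(M^p)=H'(M^p[q])=0$ whenever $q\neq 0$, since $M^p[q]\in \cu_{w=q}$; thus this spectral sequence is concentrated in the row $q=0$ and degenerates at $E_2$. Moreover, the convergence condition of Proposition \ref{pwss}(2) is fulfilled (take $i=1$); hence $H'(M)\cong E_2^{0,0}$, and the latter group is the zeroth homology of the complex $H'(t(M))$ indeed.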
\begin{rema}\label{rpuresd}
1. The definition of pure functors is obviously self-dual (cf. Proposition \ref{pbw}(\ref{idual})), i.e., $H$ is pure homological (resp. cohomological) if and only if the functor from $\cu$ into $\au\opp$ obtained from $H$ by means of inversion of arrows is pure cohomological (resp. homological).
Combining this observation with our theorem we obtain that the correspondence $(w,\ca)\mapsto H^{\ca}$ is self-dual as well (cf. Remark \ref{rwcbws}(\ref{irwc52}) below).
We will also use the following notation: for a contravariant additive functor $\ca'$ from $\hw$ into $\au$ we will write $H_{\ca'}$ for the corresponding cohomological pure functor; it clearly sends $M$ as above into the zeroth homology of the complex $\ca'(M^{-j})$.
2. Immediately from the definition of pure functors, a representable functor $\cu(-,M)$ is pure if and only if $M$ belongs to $ (\cu_{w\ge 1}\cup \cu_{w\le -1})\perpp$.
3. The author is using the term ``pure'' due to the relation of pure functors to Deligne's purity of cohomology.
To explain it we recall that various categories of Voevodsky motives are endowed with so-called Chow weight structures; we will say more on them in Remarks \ref{rayoub} and \ref{rwchow}(1) below.
Now, for any $r\in \z$ the $r$th level of Deligne's weight filtration on both singular and \'etale cohomology of motives clearly kills $\chow[i]$ for all values of $i$ except one (and the remaining value of $i$ is either $r$ or $-r$
depending on the choice of the convention for Deligne's weights).\footnote{Certainly, singular (co)homology (of motives) is only defined if the base field $k$ is a subfield of complex numbers,
whereas Deligne's weight filtration on \'etale (co)homology can be defined (at least) if $k$ is a finitely generated field. For both of these cases the comparison of the corresponding weight factors with the ones computed in terms of $w_{\chow}$ is carried out in
Theorem 3.5.4(2) of \cite{bsoscwhn} (cf. also Remark 2.4.3 of \cite{bws}). Note also that
in \cite[\S3.4,3.6]{brelmot} certain ``relative perverse'' versions of these weight calculations were discussed.}\ Thus
(the corresponding shifts of) Deligne's pure factors of (singular and \'etale) cohomology are pure with respect to $w_{\chow}$.
4. Note however that in the context of equivariant stable homotopy categories (see Theorem \ref{tshg} below) pure (co)homology theories are usually called {\it ordinary} or {\it classical} ones. Functors of this sort were essentially introduced by Bredon (see condition (4) in \S I.2 of \cite{bred} and Remark \ref{rshg}(\ref{iruniv}) below), and the spectral sequence argument used for the proof of Theorem \ref{tpure}(2) is rather similar to that applied in \S IV.5 of ibid.
5. Part 1 of our theorem was applied in \cite{bontabu} (see Proposition 3.5(ii) of ibid.).\footnote{Since \cite{bontabu} was written earlier than the current paper, it actually referred to other texts of the author. Note however that all the weight structure pre-requisites for ibid. are gathered and described more accurately in the current paper.}
\end{rema}
We will also need the following easy statement in succeeding papers.
\begin{lem}\label{lsubc}
Assume that $\au'$ is a strict abelian subcategory of $\au$, i.e., $\au'$ is its strictly full subcategory that contains the $\au$-kernel and the $\au$-cokernel of any morphism in $\au'$.
Then an additive functor $\ca:\hw\opp\to \au$ factors through $\au'$ if and only if the values of the pure functor $H_{\ca}$ (see Remark \ref{rpuresd}(1)) belong to $\au'$. \end{lem}
\begin{proof}
Obvious.
\end{proof}
\subsection{Detecting weights via pure functors}\label{sdet}
Let us prove that pure functors can be used to detect weights (cf. Remark \ref{rwcons}(1)); these results are crucial for \cite{bontabu}.
It will be convenient for us to use the following definitions.
\begin{defi}\label{dbh}
Let $\hu$ be a (not necessarily additive) subcategory of an additive category $\bu$.
1. We call the full additive subcategory of $\bu$ whose objects are the retracts of all (small) coproducts of objects of $\hu$ in $\bu$ (we take all coproducts that exist in $\bu$) the {\it coproductive hull} of $\hu$ in $\bu$.
2. Let $\hu'$ be the category of ``formal coproducts'' of objects of $\hu$, i.e., the objects of
$\hu'$ are of the form $\coprod_i P_i$ for (families of) $P_i\in \obj \hu$ and $\hu'(\coprod M_i,\coprod N_j)=\prod_i (\bigoplus_j \hu(M_i,N_j))$. Then we will call $\kar(\hu')$ the {\it formal coproductive hull} of $\hu$.
\end{defi}
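An easy example (whose verification is standard): if $\bu=\ab$ and $\hu$ is the full subcategory of $\ab$ whose only object is $\z$ then the coproductive hull of $\hu$ in $\ab$ is the full subcategory of free abelian groups (recall that any retract, i.e., any direct summand, of a free abelian group is free), and this subcategory is also equivalent to the formal coproductive hull of $\hu$.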
\begin{rema}\label{rbh}
Obviously, the coproductive hull of $\hu$ in $\bu$, the category $\hu'$ mentioned in our definition, and the formal coproductive hull of $\hu$ are additive categories. Moreover, there exist fully faithful embeddings of the retraction-closure of $\hu$ in $\bu$ into both aforementioned ``coproductive hulls'' of $\hu$.
\end{rema}
\begin{pr}\label{pdetect}
1. Let $\ca:\hw\to \au$ be an additive functor (where $\au$ is an abelian category; cf. Theorem \ref{tpure}) and assume that the following conditions are fulfilled.
(i) the image of $\ca$ consists of $\au$-projective objects only;
(ii) if an $\hw$-morphism $h$ is not split surjective (i.e., it is not a retraction) then $\ca(h)$ is not split surjective either.
Then for any $M\in \obj \cu$, $n\in \z$, and $H=H^\ca$ we have the following: $M$ is $w$-bounded below and $H_i(M)=0$ (see \S\ref{snotata}) for all $i< n$ if and only if $M\in \cu_{w\ge n}$.
2. Assume that $\hw$ is an $R$-linear category, where $R$ is a commutative unital ring (certainly, one may take $R=\z$ here), $\hu$ is a full small subcategory of $\hw$, $\au$ is the category of $R$-linear functors from $\hu\opp$ into the category of $R$-modules (i.e., the objects and morphisms of $\au$ respect the $R$-module structure on morphism groups), and $\ca$ is the corresponding Yoneda-type functor, i.e., $\ca(M)$ sends $N\in\obj \hu$ into the $R$-module $\hw(N,M)$ for any $M\in \obj \hw$; suppose in addition that the image of $\ca$ is contained in the coproductive hull of the image of its restriction to $\hu$ in $\au$.
Then $\ca$ fulfils condition (i) of assertion 1.
3. Assume that $\ca$ is full and conservative. Then it fulfils condition (ii) of assertion 1.
\end{pr}
\begin{proof}
1. The ``if'' part of the statement is very easy (for any $\ca$); just combine the definition of $H^\ca$ with Proposition \ref{pwt}(\ref{irwcons}).
Now we prove the converse implication; we argue similarly to the proof of Theorem \ref{twcons}.
So, we choose the maximal integer $m\le n$ such that $M\in \cu_{w\ge m}$, and choose $t(M)=(M^i)$ such that $M^i=0$ for $i>-m$. According to Proposition \ref{pwt}(\ref{iwcons}), we have $t(M)\notin K(\hw)_{\wstu\ge m+1}$; applying Lemma \ref{lsplit}(2) (in the dual form) we obtain that the boundary morphism $d^{-m-1}:M^{-m-1}\to M^{-m}$ is not split surjective.
Applying our assumption (ii) on $\ca$ we obtain that the morphism $\ca(d^{-m-1})$ is not split surjective either. Since $\ca(M^{-m})$ is projective in $\au$ (so that any epimorphism onto it splits), it follows that $\cok\ca(d^{-m-1})\cong H_{m}(M)\neq 0$. Thus $m\ge n$, and we obtain the result in question.
2. This statement is much easier than its formulation. Objects of $\hu$ obviously become projective in the category $\au$
(see Remark \ref{rabver}(2) below). Since coproducts and retracts of projective objects are projective, all elements of $\ca(\obj \hw)$ are projective in $\au$ as well.
3. This is just a particular case of Lemma \ref{lsplit}(1).
\end{proof}
\begin{rema}\label{rabver}
1. Part 1 of our proposition is really similar to Theorem \ref{twcons}(1), and it may be called its "homological functor version". Its advantage as a weight detection method is that pure functors are somewhat easier to construct than weight-exact ones (thanks to Theorem \ref{tpure}; cf. also Corollary \ref{cbontabu}(2) below). In particular, one can easily deduce Theorem \ref{twcons} from Proposition \ref{pdetect} using certain functors similar to the ones that we will now describe.
2. In the current section we will only apply Proposition \ref{pdetect}(2) (in Corollary \ref{cbontabu}(2)) in the simple case where $\hu$ is an essentially wide small subcategory of $\hw$ (see \S\ref{snotata}; consequently, we assume that $\hw$ is essentially small and then we can take $\hu$ to be a skeleton of $\hw$); in this case it follows immediately both from Lemma 5.1.2 of \cite{neebook} and from Lemma 8.1 of \cite{vbook}.
We have formulated the general case of our proposition for the sake of applying it in Theorem \ref{thegcomp} below. So we give some more detail for this setting.
Choose an essentially wide small subcategory $\hu_0$ of the retraction-closure of $\hu$ in $\cu$. It is easily seen that the category
$\au$ is equivalent to the category $\au'$ of $R$-linear functors from $\hu_0\opp$ into $R$-modules.
The category $\au'$ is obviously Grothendieck abelian. Moreover, the projectives in it are precisely the objects of the coproductive hull of the image of $\hu_0$ in $\au'$ (see Definition \ref{dbh}(1)) and there are enough of them according to Lemma 8.1 of \cite{vbook}. Note also that this hull is equivalent to the formal coproductive hull of $\hu_0$ (see Definition \ref{dbh}(2)).
\end{rema}
Let us now deduce Propositions 4.6 and 5.2 of \cite{bontabu} from our proposition.
\begin{coro}\label{cbontabu}
1. Let $\ca:\hw\to \au$ be a full additive conservative functor whose target is semi-simple. Then for a $w$-bounded object $M$ of $\cu$ we have $M\in \cu_{w=0}$ if and only if $H^\ca_i(M)=0$ for all $i\neq 0$.
2. The assumptions of Proposition \ref{pdetect}(1) are fulfilled whenever $\hu$ is an essentially wide small subcategory of the category $\hw$, whereas $R$ and $\ca$ are as in part 2 of that proposition.
\end{coro}
\begin{proof}
1. Once again, the ``if'' part of the statement immediately follows from Proposition \ref{pwt}(\ref{irwcons}).
To prove the converse implication we note that our assumptions on $\cu,\ca$, and $M$ are self-dual (see Proposition \ref{pbw}(\ref{idual}) and Remark \ref{rpuresd}(1)). Thus it suffices to prove that $M\in \cu_{w\le 0}$. According to Proposition \ref{pdetect}(1), for this purpose it remains to verify that the image of $\ca$ consists of projective objects only. The latter is automatic since $\au$ is semi-simple.
2. This is very easy. Proposition \ref{pdetect}(2) implies that all objects in the image of $\ca$ are projective in $\au$. Thus it remains to note that the functor $\ca$ is a full embedding according to the Yoneda lemma; hence it is also conservative.
\end{proof}
\begin{rema}\label{rdetect}
1. Immediately from Lemma \ref{lsplit}(1) (that we have just applied) condition (ii) of Proposition \ref{pdetect} is fulfilled both for $\ca$ and for the opposite functor $\ca^{op}:\hw^{op}\to \au^{op}$ whenever $\ca$ is a full
conservative functor. In particular, it suffices to assume that $\ca$ is a full embedding.
Hence it may be useful to demand (in addition to assumption (i) of the proposition) that the image of $\ca$ consists of injective objects only (cf. Corollary \ref{cbontabu}(1)).
2. The objects in the essential image of the functor provided by Corollary \ref{cbontabu}(2) may be called {\it purely $R$-representable homology}. Since they are usually not injective in $\psvr(\hu)$, a dual construction may be useful for checking whether $M\in \cu_{w\le -m}$.
3. The boundedness assumption in Proposition \ref{pdetect} cannot be dropped (unless certain additional restrictions are imposed).
Indeed, take $\bu$ to be the category of free finitely generated $\z/4\z$-modules, $\cu=K(\bu)$, and $w=\wstu$. Then the complex $M=\dots \stackrel{\times 2}{\to}\z/4\z\stackrel{\times 2}{\to}\z/4\z\stackrel{\times 2}{\to}\z/4\z\stackrel{\times 2}{\to}\dots$ does not belong to $\cu_{\wstu\ge i}$ for any $i\in \z$, whereas its purely $\z$-representable homology (as well as the $\z/4\z$-representable one) is obviously zero in all degrees.
4. Condition (ii) of Proposition \ref{pdetect} is certainly necessary. Indeed, if $h\in \mo (\hw)$ does not split whereas $\ca(h)$ does, then one can easily check that $\co(h)\in \cu_{w\ge 0}\setminus \cu_{w\ge 1}$ and $H_i(\co(h))=0$ for $i\neq 1$.
\end{rema}
\subsection{On smashing weight structures and pure functors respecting coproducts}\label{smash}
It is currently well known (in particular, from \cite{neebook}) that the existence of all (small) coproducts
is a very reasonable assumption on a ("big") triangulated category. Now we relate it to weight structures and pure functors.
\begin{defi}\label{djsmash}
1. We will say that a triangulated category $\cu$ is {\it smashing} whenever it is closed with respect to
coproducts.
2. We say that a weight structure $w$ on $\cu$ is {\it smashing} if the class $\cu_{w\ge 0}$ is closed with respect to $\cu$-coproducts (cf. Proposition \ref{pbw}(\ref{icoprod})).
3. We will say that a full strict triangulated subcategory $\du\subset \cu$ is {\it localizing} whenever it is closed with respect to $\cu$-coproducts. Respectively, we call the smallest localizing subcategory of $\cu$ that contains a given class $\cp\subset \obj \cu$ the {\it localizing subcategory of $\cu$ generated by $\cp$}. \end{defi}
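A basic family of examples (that is easily verified): if an additive category $\bu$ is closed with respect to all small coproducts then $K(\bu)$ is smashing (with coproducts computed termwise), and the weight structure $\wstu$ on it (see Remark \ref{rstws}(1)) is smashing as well, since a coproduct of complexes concentrated in degrees $\le 0$ is obviously concentrated in degrees $\le 0$.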
Let us prove several simple properties of smashing weight structures.
\begin{pr}\label{ppcoprws}
Let $w$ be a smashing weight structure (on a smashing triangulated category $\cu$), and $i,j\in \z$. Then the following statements are valid.
\begin{enumerate}
\item\label{icopr1} The classes $\cu_{w\le j}$, $\cu_{w\ge i}$, and $\cu_{[i,j]}$ are closed with respect to (small) $\cu$-coproducts.
\item\label{icoprhw} In particular, the category $\hw$ is closed with respect to $\cu$-coproducts, and the embedding $\hw\to \cu$ preserves coproducts.
\item\label{icopr2} Coproducts of weight decompositions are weight decompositions.
\item\label{icopr3} Coproducts of weight Postnikov towers are weight Postnikov towers.
\item\label{icopr4} The categories $\cuw$ and $\kw(\hw)$ are closed with respect to coproducts, and the functor $t$ preserves coproducts.
\item\label{icopr5} Assume that $\au$ is an AB4 abelian category. Then pure functors $\cu\to \au$
that respect coproducts are exactly the functors of the form $H^\ca$ (see Theorem \ref{tpure}), where $\ca:\hw\to \au$ is an additive functor preserving coproducts. Moreover, this correspondence is an equivalence of (possibly, big)
categories.
\item\label{icopr5p} Assume that $\au'$ is an AB4* abelian category. Then pure cohomological
functors from $\cu$ into $\au'$ that convert coproducts into products are exactly those of the form $H_{\ca'}$ (see Remark \ref{rpuresd}(1)), where $\ca':\hw^{op}\to \au'$ is an additive functor that sends $\hw$-coproducts into products.
\item\label{icopr5b} Assume that $\cu$ satisfies the following {\it Brown representability} property: any cohomological functor from $\cu$ into $\ab$ that converts $\cu$-coproducts into products of groups is representable in $\cu$.
Then the pure functor $H_{\ca'}$ (as above) is representable if and only if $\ca'$ is a contravariant additive functor from $\hw$ into $\ab$ that converts $\hw$-coproducts into products of groups. Thus the corresponding Yoneda functor embeds the category of functors satisfying
this condition into $\cu$.
\item\label{icopr7} Let $\du$ be the localizing subcategory of $\cu$ generated by a class of objects $\cp$, and assume that for any
$P\in \cp$ a choice of (the terms of) its weight complex $t(P)=(P^k)$ is fixed. Then for any $M\in \obj \du$
its weight complex $t(M)$ belongs to
the localizing subcategory $K$ of $K(\hw)$ (see assertion \ref{icoprhw} and Remark \ref{rwc}(\ref{irwco})) generated by all of the $P^k$. In particular, if $\cp\subset \cu_{w=0}$ then $t(M)$ belongs to
the localizing subcategory of $K(\hw)$ generated by $\cp$.
Moreover, any element of $\cu_{w=0}\cap \obj \du$
belongs to the coproductive hull of $\{P^k\}$ in $\hw$ (see Definition \ref{dbh}(1)).
\item\label{icopr7p}
For $\cp$ and $\du$ as in the previous assertion assume that for any $P\in \cp$ and any $k\in\z$ some choices of $w_{\le k}P$ and of $w_{\ge k}P$ are fixed. Then for any $D\in\obj \du$
there exists a choice of $w_{\le 0}D$ (resp. of $w_{\ge 0}D$) that belongs to the smallest class $D_1$ (resp. $D_2$) of objects of $\cu$ that is closed with respect to extensions and coproducts and contains ($0$ and) the corresponding objects $(w_{\le k}P)[-k]$ (resp. $(w_{\ge k}P)[-k]$) for all $P\in \cp$ and $k\in\z$. Moreover, $\cu_{w\le 0}\cap \obj \du\subset D_1$ and $\cu_{w\ge 0}\cap \obj \du\subset D_2$.
\item\label{icoprhcl} For a sequence of objects $Y_i$ of $\cu$ for $i\ge 0$ and maps $\phi_i:Y_{i}\to Y_{i+1}$
we consider $C=\coprod_{i\ge 0} Y_i$ along with the morphism $a:C\to C$ whose restriction to the summand $Y_i$ equals $\id_{Y_i}\bigoplus (-\phi_i):Y_i\to Y_i\bigoplus Y_{i+1}\subset C$.
Then a cone $Y$ of $a$ (that is a {\it countable homotopy colimit} of $Y_i$ as defined in \cite{bokne}; respectively, we will write $Y=\hcl Y_i=\hcl_{i\ge 0} Y_i$) is right (resp. left) weight-degenerate whenever for all $i\ge 0$ we have $Y_i\in\cu_{w\le -i}$ (resp. $Y_i\in\cu_{w\ge i}$).
\end{enumerate}
\end{pr}
\begin{proof}
\begin{enumerate}
\item The class $\cu_{w\le j}$ is closed with respect to $\cu$-coproducts according to Proposition \ref{pbw}(\ref{icoprod}). Next, $\cu_{w\ge i}$ is closed with respect to $\cu$-coproducts immediately from Definition \ref{djsmash}(2). It clearly follows that $\cu_{[i,j]}$ possesses this property as well.
\item Immediate from the previous assertion.
\item If the triangles $L_i\to M_i\to R_i\to L_i[1]$ are weight decompositions of certain $M_i\in \obj \cu$ then the triangle $$\coprod L_i\to \coprod M_i\to \coprod R_i\to \coprod L_i[1]$$ is distinguished according to Remark 1.2.2 of \cite{neebook}. Hence this triangle is a weight decomposition of $\coprod M_i$ according to assertion \ref{icopr1}.
\item Straightforward from
assertions \ref{icopr1} and \ref{icopr2}.
\item Immediate from assertions \ref{icopr1} and \ref{icopr3}.
\item Recall that pure functors are exactly those of the type $H^\ca$, and the correspondence $\ca\to H^{\ca}$ is functorial; see Theorem \ref{tpure}(2). Next, if $H$ respects coproducts then its restriction to $\hw$ also does according to assertion \ref{icoprhw}. Conversely, if $\ca$ preserves coproducts then $H^\ca$ also does according to assertion \ref{icopr4}.
\item Similarly to assertion \ref{icopr5},
$\ca'$ sends coproducts into products if $H_{\ca'}$ does by
assertion \ref{icoprhw}. Conversely, if $\ca'$ sends coproducts into products then $H_{\ca'}$ also does according to assertion \ref{icopr4}.
\item This is an obvious combination of assertions \ref{icopr5p} and \ref{icoprhw} (cf. also
assertion \ref{icopr5}).
\ref{icopr7},\ref{icopr7p}.
Consider the class $W$ of those $D\in \obj \cu$ that satisfy the following conditions: $t(D)$ (considered as an object of $K(\hw)$) belongs to
$K$, and for any $m\in \z$ there exists a choice of $(w_{\le m}D)[-m]$ that belongs to $D_1$ and a choice of $(w_{\ge m}D)[-m]$ that belongs to $D_2$.
Obviously, $W$ is closed with respect to shifts ($[1]$ and $[-1]$). Moreover, $W$ is closed with respect to coproducts according to assertions \ref{icopr2} and \ref{icopr4}. Furthermore, Proposition \ref{pbw}(\ref{iwdext}) along with Proposition \ref{pwt}(\ref{iwcex}) imply that $W$ is extension-closed. Thus $W$ is the class of objects of a localizing subcategory of $\cu$. Since $W$ contains $\cp$, we obtain that it contains $\obj \du$ as well.
Hence for any $M\in \obj \du$ there exists a choice of $t(M)$
that belongs to $K$; note also that in the case $\cp\subset \cu_{w=0}$ one can take $\{P^k\}=\cp$. Next, any object of $K$ is clearly $K(\hw)$-isomorphic to a complex whose terms are coproducts of $P^k$. Thus if $M$ also belongs to $\cu_{w=0}$ then
$M$ belongs to the coproductive hull of $\{P^k\}$ in $\hw$ by Proposition \ref{pwt}(\ref{iwch}) applied to the object $t(M) $ of $K(\hw)$ (here we take the stupid weight structure on $K(\hw)$). This finishes the proof of assertion \ref{icopr7}.
Lastly, if
$M\in \obj \du$ belongs to $ \obj \du \cap \cu_{w\le 0}$ (resp. to $ \obj \du \cap \cu_{w\ge 0}$) then the existence of $w_{\le 0}M$ that belongs to $D_1$ (resp. of $w_{\ge 0}M$ that belongs to $D_2$) implies that $M$ belongs to the retraction-closure of $D_1$ (resp. of $D_2$) according to Proposition \ref{pbw}(\ref{iwdmod}).
Now, $D_1$ and $D_2$ are retraction-closed in $\cu$ according to Corollary 2.1.3(2) of \cite{bsnew}; this concludes the proof of assertion \ref{icopr7p}.
\ref{icoprhcl}. Recall that $Y=\hcl_{i\ge 0} Y_i\cong \hcl_{i\ge 0} Y_{i+j}$ for any integer $j\ge 0$ according to Lemma 1.7.1 of \cite{neebook}. Since the classes $\cu_{w\le -j}$ and $\cu_{w\ge j}$ are closed with respect to extensions and coproducts (see assertion \ref{icopr1} and Proposition \ref{pbw}(\ref{iext})), we obtain that $Y$ belongs to $\cu_{w\le 1-j}$ (resp. to $\cu_{w\ge j}$) whenever $Y_i\in\cu_{w\le -i}$ (resp. $Y_i\in\cu_{w\ge i}$) for all $i\ge 0$; since $j\ge 0$ is arbitrary here, we obtain the result in question.
\end{enumerate}
\end{proof}
\begin{rema}\label{ral}
1. In all the parts of our proposition one can replace arbitrary small coproducts by coproducts of less than $\al$ objects,
where $\al$ is any regular infinite cardinal; cf. Proposition \ref{pral} below.
Note however that the corresponding version of
Proposition \ref{ppcoprws}(\ref{icopr5b}) appears to be vacuous since the corresponding
modification of the Brown representability condition cannot be fulfilled for a non-zero category $\cu$.
2.
On the other hand the Brown representability property
is known to hold for several important classes of triangulated categories (thanks to the foundational results of A. Neeman and others). In particular, it suffices to assume that either $\cu$ or $\cu\opp$ is generated by a set of {\it compact} objects (see Definition \ref{dcomp2} below) as its own localizing subcategory.
3. Some of these statements can be generalized to so-called {\it torsion theories}; cf. Propositions 2.4(5) and 3.2(2,4) of \cite{bvt}.
\end{rema}
Now let $\al$ be an infinite {\it regular} cardinal number, i.e., $\al$ cannot be presented as a sum of less than $\al$ cardinals that are less than $\al$.
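Recall the standard examples here: $\aleph_0$ and all successor cardinals (in particular, $\aleph_1$) are regular, whereas $\aleph_{\omega}=\sum_{n<\omega}\aleph_n$ is the smallest infinite cardinal that is not regular.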
\begin{pr}\label{pral}
Assume that $\cu$ is closed with respect to $\cu$-coproducts of less than $\al$ objects and $\cu_{w\le 0}$ is closed with respect to $\cu$-coproducts of this cardinality.
Then the following statements are valid.
1. The category $\hw$ is closed with respect to $\cu$-coproducts of less than $\al$ objects, and the embedding $\hw\to \cu$ preserves these coproducts.
2. The categories $\cuw$ and $\kw(\hw)$ are closed with respect to coproducts of less than $\al$ objects, and the functor $t$ preserves coproducts of this sort.
3. Assume that $\cu$ equals its own minimal strict full triangulated subcategory that is closed with respect to coproducts of less than $\al$ objects and contains a class of objects $\cp\subset \obj \cu$, and assume that for any
$P\in \cp$ a choice of (the terms of) its weight complex $t(P)=(P^k)$ along with $w_{\le k}P$ and $w_{\ge k}P$ for all $k\in \z$ are fixed.
Then any element of $\cu_{w=0}$ is a retract of a coproduct of a family of $P^k$ of cardinality less than $\al$. Moreover, for any object $M$ of $\cu$ there exists a choice of $w_{\le 0}M$ (resp. of $w_{\ge 0}M$) that belongs to the smallest class of objects of $\cu$ that is closed with respect to extensions and coproducts of cardinality less than $\al$, and contains ($0$ and) the corresponding objects $(w_{\le k}P)[-k]$ (resp. $(w_{\ge k}P)[-k]$) for all $P\in \cp$ and $k\in\z$.
\end{pr}
\begin{proof}
The proofs can be obtained from that of Proposition \ref{ppcoprws}(\ref{icoprhw}, \ref{icopr4}, \ref{icopr7},\ref{icopr7p}) simply by replacing arbitrary small coproducts by coproducts of cardinality less than $\al$ in all occurrences.
\end{proof}
\begin{rema}\label{rpralz}
1. Certainly, for any $\cu$ and $w$ one can take $\al=\aleph_0$. However, parts 1 and 2 of our proposition do not say anything new in this case. On the other hand, part 3 gives a non-trivial statement; note that in this case $\cu$ is strongly generated by $\cp$ (see \S\ref{snotata}).
2.
One can also easily prove that any element $N$ of $\cu_{w=0}$ is a retract of the coproduct of a finite family of the corresponding $P^k$ whenever $\cu$ is densely generated by $\cp$; see \S\ref{snotata} and Proposition \ref{pwt}(\ref{iwch}).
Let us also make the following observation: if $P\in \cp$ is a $w$-bounded object then we can take $P^k=0$ for almost all values of $k$; see Proposition \ref{pwt}(\ref{irwcons}).
\end{rema}
\section{On "explicit" weight structures and pure functors} \label{sexamples}
In this section we recall two earlier statements on the construction of weight structures "with a given heart", prove a new result of this sort, and describe motivic examples to these assertions.
In \S\ref{sconstrneg} we recall that any {\it connective} densely generating subcategory $\hu$ of $\cu$ gives a weight structure on it. Combining this statement with Theorem \ref{twcons} we obtain a conservativity result that does not mention weight structures. Moreover, we discuss Chow weight structures on categories of Voevodsky motives, and demonstrate that Theorem \ref{twcons} allows us to deduce the conservativity of
the \'etale realization on the category $\dmgq$ of $\q$-linear geometric motives from Theorem II of \cite{ayoubcon}.
In \S\ref{spcgw} we recall that one can obtain a smashing weight structure on (a smashing triangulated category) $\cu$ from a connective compactly generating subcategory $\hu$ of $\cu$. We also study pure functors and detecting weights for weight structures of this sort. Moreover, we prove that the Chow weight structure $\wchow$ on the "big" motivic category $\dmr$ is degenerate (if $R$ is not torsion and the base field $k$ is "big enough").
In \S\ref{subtlety} we study in detail the relation between different choices of weight complexes and weight Postnikov towers for a fixed object $M$ of $\cu$. This allows us to construct some new weight structures; in particular, we obtain
a new conservative weight-exact motivic functor.
\subsection{Constructing weight structures starting from connective subcategories, and the conservativity of motivic functors}\label{sconstrneg}
\begin{defi}\label{dcomp1}
Let $\hu$ be a full subcategory of a triangulated category $\cu$.
We will say that $\hu$ is {\it connective} (in $\cu$) if $\obj \hu\perp (\cup_{i>0}\obj (\hu[i]))$.\footnote{ In earlier texts of the author connective subcategories were called {\it negative} ones. Moreover, in several papers (mostly, on representation theory and related matters) a connective subcategory satisfying certain additional assumptions was said to be {\it silting}.}
\end{defi}
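Let us note a simple example that is easily verified: for any additive category $\bu$ the subcategory of complexes concentrated in degree $0$ is connective in $K(\bu)$ (as well as in $K^b(\bu)$), since there are no non-zero morphisms of complexes from $X$ into $Y[i]$ whenever $X,Y\in \obj \bu$ and $i\neq 0$. A much deeper example of this sort is given by the subcategory $\chowr$ of $\dmr$; see Remark \ref{rayoub}(1) below.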
First we recall a statement that allows us to construct all bounded weight structures (cf. Proposition \ref{pbw}(\ref{igenlm})).
\begin{pr}\label{pexw}
Assume that $\cu$ is densely generated by its connective additive subcategory $\bu$.
1. Then there exists a unique weight structure $w$ on $\cu$ whose heart contains $\bu$.
Moreover, this weight structure is bounded, $\hw=\kar_{\cu}\bu$, and $\cu_{w\ge 0}$ (resp. $\cu_{w\le 0}$) is the smallest class of objects that contains $\bu[i]$ for all $i\ge 0$ (resp. $i\le 0$), is extension-closed and retraction-closed in $\cu$.
2. Let $w'$ be a weight structure on a triangulated category $\du$. Then an exact functor $F:\cu\to \du$ is weight-exact with respect to $(w,w')$ (for $w$ as above) if and only if it sends $\bu$ inside the heart $\hw'$.
\end{pr}
\begin{proof}
1. This is essentially Corollary 2.1.2 of \cite{bonspkar}; see also Theorem 5.5 of \cite{mendoausbuch}.
2. This is an immediate consequence of the description of $w$ given in assertion 1 along with the fact that both $\du_{w'\ge 0}$ and $\du_{w'\le 0}$ are retraction-closed and extension-closed in $\du$; cf. also Lemma 2.7.5 of \cite{bger}.
\end{proof}
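For example, any additive category $\bu$ is connective in $K^b(\bu)$ (see the example after Definition \ref{dcomp1} above) and densely generates it, since any bounded complex belongs to the extension closure of the shifts of its terms. In this case one can easily check that the weight structure given by assertion 1 is just the stupid one (cf. Remark \ref{rstws}(1)), and its heart equals $\kar_{K^b(\bu)}\bu$.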
Let us combine this statement with Proposition \ref{pdetect} to obtain a conservativity result that does not mention weight structures.
\begin{coro}\label{consb}
1. Let $F:\cu\to \cu'$ be an exact functor; assume that there exists a connective additive subcategory $\bu$ of $\cu$ that densely generates it and such that the full subcategory $\bu'$ of $\cu'$ whose object class equals $F(\obj \bu)$ is connective (in $\cu'$), whereas the restriction of $F$ to $\bu$ is full and conservative.
Then $F$ is conservative itself, i.e., it does not kill non-zero objects.
2. The conservativity condition in assertion 1 is fulfilled if all endomorphisms of objects of $\bu$ that are killed by $F$ are nilpotent. \end{coro}
\begin{proof}
1. Obviously, we can assume that the category $\cu'$ is densely generated by its subcategory $\bu'$, and we will do so. Thus Proposition \ref{pexw} yields the following: $\cu$ and $\cu'$ are endowed with bounded weight structures $w$ and $w'$, respectively, such that $\hw=\kar_{\cu}\bu$, $\hw'=\kar_{\cu'}\bu'$, and $F$ is weight-exact with respect to them. Now we verify that the remaining assumptions of Theorem \ref{twcons} are fulfilled in this setting.
Since all objects of $\hw$ are retracts of objects of $\bu$, the fullness of the restriction of $F$ to $\bu$ implies the fullness of the
corresponding functor $\hf:\hw\to \hw'$. Thus it remains to verify that $\hf$ is conservative. For an $\hw$-morphism $g:M\to N$ we should check that it is invertible whenever $F(g)$ is. Now the fullness assumption gives us the existence of a morphism $h\in \hw(N,M)$ such that $F(h)$ is the inverse to $F(g)$. It remains to prove that the endomorphisms $g\circ h$ and $h\circ g$
are automorphisms.
Thus it suffices to verify that for any $Q\in \cu_{w=0}$ a morphism $p\in \cu(Q,Q)$ is an automorphism whenever $F(p)$ is. Next, $Q$ is a retract of an object $S$ of $\bu$, and since $\cu$ is triangulated, $S\cong Q\bigoplus R$
(for some $R\in \obj \cu$; actually, $R$ belongs to $\cu_{w=0}$ as well). Thus $F(p\bigoplus \id_{R})$ is an automorphism; since the restriction of $F$ to $\bu$ is conservative, it follows that $p\bigoplus \id_{R}$ is an automorphism as well. Hence $p$ is an automorphism indeed.
2. A well-known easy fact; see Remark 3.1.5 of \cite{bws}.
\end{proof}
\begin{rema}\label{rayoub}
Let us describe the relation of our results to Voevodsky motives.
1. Let $k$ be a perfect field of characteristic $p$ ($p$ may be a prime or zero); let $R$ be a $\zop$-algebra, where we set $\zop=\z$ if $p=0$. Denote by $\dmr$ the smashing category of $R$-linear Voevodsky motives over $k$ (see \S4.2 of \cite{degmod} or \S1.1 of \cite{cdint}). The category $\dmr$ contains the additive category $\chowr$ of $R$-linear Chow motives that is connective in it (see Remark 3.1.2 of \cite{bokum}, Remark \ref{rwchow}(1), or Proposition \ref{paydegen} below). Thus we obtain a {\it Chow} weight structure on the subcategory $\dmgr$ densely generated by $\chowr$ in $\dmr$. Moreover, this weight structure extends to a smashing weight structure on the whole $\dmr$; see Proposition \ref{paydegen} below.
This method of constructing the Chow weight structure originates from \S6.5 of \cite{bws} (cf. \cite{bzp} for the case $p>0$); it was carried over to relative motives in \cite{hebpo} and \cite[\S2.1]{brelmot}, whereas in \S2.3 of ibid., \S2.1 of \cite{bonivan}, and \cite{bokum} some other methods were described.
Moreover, applying Theorem \ref{tpure}(3) we obtain that a (co)homological functor from $\dmgr$ is pure whenever it kills $\chowr[i]$ for all $i\neq 0$. Furthermore, Theorem \ref{thegcomp}(\ref{itnp2}) below gives a similar characterization of those pure functors from $\dmr$ that respect coproducts.
2. Let us now describe an interesting weight-exact functor from $\dm$ that is quite relevant for \cite{ayoubcon}.
In this treatise the case $p=0$ and $R=k$ was considered. The category $\dmk$ is equivalent to the category $\da$ (of \'etale $k$-linear motives). Now let us use the symbol $\omp$ for the truncated de Rham spectrum $\tau_{\ge 0}\omdr$ (see Theorem II of ibid.); note that it is a highly structured ring spectrum with respect to the model structure on $\da$ that was considered in ibid.
Thus we can take $\cu'$ to be the derived category of highly structured $\omp$-modules in $\da$, $\cu=\dmgk$,
and take $F$ to be the restriction to $\dmgk$ of the "free module" functor $-\otimes \omp:\da\to \cu'$.
Next, we can take $\bu$ either to be the subcategory $\chowk\subset \da\cong \dmk$ (of $k$-linear Chow motives) or the category of twists of motives of smooth projective varieties by $-(i)[2i]=\lan i \ra$ for $i\in \z$. The images of these motives in $\cu'$ give a connective category whose Karoubi envelope is the $k$-linear category of $k$-motives up to algebraic equivalence (cf. the formula (xxviii) of \cite{ayoubcon}; these statements are based on simple cohomological dimension and Poincar\'e duality arguments along with Remark 7.6 of \cite{blog}).
It remains to note that the nilpotence assumption of Corollary \ref{consb}(2) easily follows from Corollary 3.3 of \cite{voevnilp}. We obtain that the functor $F$ is conservative (and detects weights; cf. Remark \ref{rwcons}(1)).
3. Furthermore, Theorem II of \cite{ayoubcon}
appears to imply that for any $M\in \obj \dmgk$ whose de Rham cohomology is zero the functor $F$ kills all morphisms from $M$ into $\chowk[i]$ for any $i\in \z$ such that $M\perp \cup_{j>i}\obj \chowk[j]$. Applying Theorem \ref{twcons}(3) we obtain that $M=0$ (cf. Remark \ref{rwcons}(2)). Thus loc. cit. implies that de Rham cohomology is conservative on $\dmgk$ (this is Conjecture II of ibid.). This is a very interesting observation that is substantially stronger than Theorem I of ibid. This conservativity conjecture was shown to have very interesting implications in \cite{bcons}; yet see Remark \ref{rgap} above.
4. We also recall that certain functors that are pure with respect to the corresponding Chow weight structures were crucial for the recent papers \cite{kellyweighomol}, \cite{bachinv}, \cite{bontabu}, and \cite{bsoscwhn}. All of these pure functors were defined in terms of weight complex functors (cf. Theorem \ref{tpure}). Moreover, in \cite{bgn} functors that are pure with respect to certain {\it Gersten weight structures} are considered.
\end{rema}
\subsection{On purely compactly generated weight structures}\label{spcgw}
Now we pass to the study of a particular family of smashing weight structures.
In this subsection we will always assume that $\cu$ is smashing.
\begin{defi}\label{dcomp2}
1. An object $M$ of $\cu$ is said to be {\it compact} if the functor $H^M=\cu(M,-):\cu\to \ab$ respects coproducts.
2. We will say that $\cu$ is {\it compactly generated} by $\cp\subset \obj \cu$ if $\cp$ is a {\bf set} of compact objects
that generates $\cu$ as its own localizing subcategory (see Definition \ref{djsmash}(3)).
\end{defi}
First we recall the following well-known statement.
\begin{lem}\label{lcg}
Let $\cp$ be a set of compact objects of $\cu$.
Then $\cp$ compactly generates $\cu$ if and only if $(\cup_{i\in \z}\cp[i])^\perp=\ns$. \end{lem}
\begin{proof}
This is (a part of) \cite[Proposition 8.4.1]{neebook}.\end{proof}
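Let us recall probably the simplest example of these notions: if $R$ is an associative unital ring and $\cu=D(R)$ is the (unbounded) derived category of left $R$-modules then the module $R$ (placed in degree $0$) is compact, since $D(R)(R,C[i])\cong H^i(C)$ for any complex $C$, and cohomology respects coproducts. Moreover, these groups vanish for all $i\in \z$ if and only if $C$ is acyclic, i.e., zero in $D(R)$; hence the set $\{R\}$ compactly generates $D(R)$ according to Lemma \ref{lcg}.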
\begin{theo}\label{thegcomp}
Let $\hu$ be a connective subcategory of $\cu$ such that $\obj \hu$ compactly generates $\cu$, and
$\cp=\obj \hu$. Then the following statements are valid.
\begin{enumerate}
\item\label{itnpb} $\cu$ satisfies the Brown representability property (see Proposition \ref{ppcoprws}(\ref{icopr5b})).
\item\label{itnp1}
There exists a unique smashing weight structure $w$ on $\cu$ such that $\cp\subset \cu_{w=0}$; this weight structure is left non-degenerate.
\item\label{itnp1d} For this weight structure $\cu_{w\le 0}$ (resp. $\cu_{w\ge 0}$) is the smallest subclass of $\obj \cu$ that is closed with respect to coproducts, extensions, and contains $\cp[i]$ for $i\le 0$ (resp. for $i\ge 0$), and $\hw$ is the coproductive hull of $\hu$ in $\cu$ (see Definition \ref{dbh}(1)).
Moreover, $\cu_{w\ge 0}=(\cup_{i<0}\cp[i])^{\perp}$.
\item\label{itnp2} Let $H$ be a cohomological functor from $\cu$ into an abelian category $\au$ that converts all small coproducts into products. Then it is pure if and only if it kills $\cup_{i\neq 0}\cp[i]$.
\item\label{itnwe} Let $F:\cu\to \du$ be an exact functor that respects coproducts, where $\du$ is (a triangulated category) endowed with a smashing weight structure $v$. Then $F$ is weight-exact if and only if it sends $\cp$ into $\du_{v=0}$.
\item\label{itnp3} The category $\hrt\subset \cu$ of $w$-pure representable functors from $\cu$ (so, we identify an object of $\hrt$ with the functor from $\cu$ that it represents) is equivalent to the category $\au_{\cp}$ of
additive contravariant functors from $\hu$ into $\ab$ (i.e., we take those functors that respect the addition of morphisms).
Moreover, $\aucp$ (and so also $\hrt$) is Grothendieck abelian, has enough projectives, and possesses an injective cogenerator.
Restricting the functors represented by objects of $\hw$ to $\hu$, one obtains a full embedding $\hw\to \aucp$ whose essential image is the subcategory of projective objects of $\aucp$.
\item\label{itnpr} $w$ restricts (see Definition \ref{dwso}(\ref{idrest})) to a bounded weight structure on the subcategory of compact objects of $\cu$, and the heart of this restriction is the retraction-closure of $\hu$ in $\cu$ (so, we consider only retracts of finite coproducts $\coprod P_i$ for $P_i\in \cp$).
\end{enumerate}
\end{theo}
\begin{proof}
Assertion \ref{itnpb} is a particular case of Proposition 8.4.2 of \cite{neebook}.
Assertions \ref{itnp1}--\ref{itnwe} follow from Corollary 2.3.1 and Lemma 2.3.3 of \cite{bsnew} easily (see Remark 2.3.2(2) of ibid.).
\ref{itnp3}. Since objects of $\hu$ are compact in $\cu$, $\hw$ is naturally equivalent to the formal coproductive hull of $\hu$ (see Definition \ref{dbh}(2)). Thus $\hw$ embeds into the category $\aucp$. As we have noted in Remark \ref{rabver}(2) (we take $R=\z$ in that remark), the essential image of this embedding is the subcategory of projective objects of $\aucp$. Moreover, the category $\aucp$ has enough projectives; since it is Grothendieck abelian, it also possesses an injective cogenerator.
It remains to prove that $\hrt$ is equivalent to $\aucp$. We recall that $\aucp$ is equivalent to the category $\adfu(\hu_0\opp,\ab)$, where $\hu_0$ is an essentially wide small subcategory of the retraction-closure of $\hu$ in $\cu$. Moreover, recall from Remark \ref{rpuresd}(2) that pure representable functors are the ones represented by elements of $(\cu_{w\le -1}\cup \cu_{w\ge 1})^{\perp}$. Combining these statements with Theorem 4.5.2(II.2) of \cite{bws} (along with Lemma \ref{lcg} above) we easily obtain the equivalence in question (cf. Proposition \ref{padj} below); one can also deduce it from Proposition \ref{ppcoprws}(\ref{icopr5b}).
\ref{itnpr}. According to Lemma 4.4.5 of \cite{neebook}, the subcategory of compact objects of $\cu$ equals $\lan \cp \ra$. Thus it remains to apply Proposition \ref{pexw}(1).
\end{proof}
\begin{rema}\label{rgensmash}
\begin{enumerate}
\item\label{irspure}
Let us make
two simple observations related to pure functors in this setting.
If $H': \cu\to \au$ is a homological functor that respects coproducts then applying part \ref{itnp2} of our theorem to the opposite (cohomological) functor $H$ from $\cu$ into $\au\opp$ we obtain that $H'$ is pure if and only if it kills $\cup_{i\neq 0}\cp[i]$.
Next, the description of $\hw$ (combined with Theorem \ref{tpure}) immediately implies that two pure (co)homological functors $H'_1$ and $H'_2$ from $\cu$ that respect coproducts (resp. $H_1$ and $H_2$ that convert coproducts into products) are isomorphic if and only if their restrictions to $\hu$ are.
\item\label{irs3}
Let us now describe a simple example to our theorem that will be useful for us below.
So, let $\bu$ be a small additive category; define $\hu'$ as the formal coproductive hull of $\obj \bu$ (see Definition \ref{dbh}(2)). Then we take $\cu\subset K(\hu')$ to be the localizing subcategory that is generated by $\obj \hu'$. Obviously, the set $\cp=\obj \bu$ compactly generates $\cu$, and $\bu$ is connective in $\cu$. Thus there is a unique smashing weight structure $w$ on $\cu$ whose heart contains $\bu$. It is easily seen that this $w$ is the restriction of the weight structure $\wstu$ (see Remark \ref{rstws}(1)) from $K(\hu')$ to $\cu$; see Remark 2.3.2(1) of \cite{bsnew}. Thus $\cu_{w\ge 0}=K(\hu')_{\wstu\ge 0}$, and part \ref{itnp1d} of our theorem tells us that this class also equals $(\cup_{i<0}\cp[i])^{\perp}$.
\item\label{irs1}
In \S2.3 of \cite{bsnew} (cf. also \S2.2 of ibid.) a much more general setting of {\it class-generated} weight structures was considered. In particular, it is not necessary to assume that $\cp$ is a set to have parts \ref{itnp1}--\ref{itnp2} of our theorem.
So the main problem with the corresponding weight structures is that it is not clear whether the categories of pure representable functors are "nice enough".
Another generalization of our theorem was obtained in \S3.3 of \cite{bvtr}.
\item\label{irs5} Furthermore, by Theorem 5 of \cite{paucomp}, for {\bf any} set $\cp$ of compact objects there exists a weight structure with $\cu_{w\ge 0}=(\cup_{i<0}\cp[i])^{\perp}$.
However, it does not follow that $w$ restricts to the subcategory of compact objects of $\cu$ (as it does in our theorem and in Theorem 3.3.1 of \cite{bvtr}). Respectively, one may call weight structures studied in our theorem {\it purely compactly generated} ones (to distinguish them from general compactly generated ones).
\item\label{irst} As we recall in Proposition \ref{padj} below, the category $\hrt$ is the heart of a $t$-structure $t$ that is {\it right adjacent} to $w$ in the sense of \cite[\S1.3]{bvt}; whence the notation.
\end{enumerate}
\end{rema}
Now let us study pure functors and detecting weights for weight structures provided by our theorem (cf. Remark \ref{rpgcomp} below).
\begin{pr}\label{pgcomp}
Adopt the notation and the assumptions of Theorem \ref{thegcomp}.
Let us choose an injective cogenerator $I$ of the category $\hrt$; we will write $H_I$ for the representable functor $\cu(-,I)$, and $\cacp$ for the Yoneda embedding functor $Q\mapsto (P\mapsto \cu(P,Q)):\ \hw\to \aucp$ (where $P$ runs through $\cp$). Let $M$ be an object of $\cu$, $n\in \z$.
Then the following conditions are equivalent.
(i). $t(M)\in K(\hw)_{\wstu\ge n}$ (cf. Remark \ref{rwc}(\ref{irwco})).
(ii). $H_j^{\cacp}(M)=0$ for $j<n$.
(iii). $M\perp \cup_{j<n}\{I[j]\}$.
(iv). $H_j(M)=0$ for any $j<n$ and any pure homological functor $H$.
\end{pr}
\begin{proof}
We can clearly assume that $n=0$. Applying Proposition \ref{ppcoprws}(\ref{icopr7}) we obtain that $t(M)$ is an object of the localizing subcategory $K'(\hw)$ of $K(\hw)$ generated by $\obj \hw$. Recalling Remark \ref{rgensmash}(\ref{irs3}) we deduce that $t(M)\in K(\hw)_{\wstu\ge 0}$ if and only if $N[j]\perp t(M)$ for all $j<0$ and $N\in \obj \hw$. Combining this statement with Theorem \ref{thegcomp}(\ref{itnp1d}) we conclude that conditions (i) and (ii) are equivalent.
Next, for any $J\in \obj \hrt$ and $j\in \z$ the group $\cu(M[-j],J)$ is isomorphic to the $-j$th cohomology of the complex $\cu(M^s,J)$ according to Theorem \ref{tpure}(2). If $J$ is an injective object of $\hrt$ then this group is isomorphic to $\aucp(H_j^{\cacp}(M), J)$. Since $I$ cogenerates $\hrt$, we obtain that $H_j^{\cacp}(M)=0$ if and only if $M\perp I[j]$; hence conditions (ii) and (iii) are equivalent.
Lastly, condition (iv) clearly implies condition (ii), and it follows from condition (i) according to Theorem \ref{tpure}(2).
\end{proof}
\begin{rema}\label{rpgcomp}
1. This statement can be thought of as a complement to Proposition \ref{pwt}(\ref{iwcons}) (in our setting). Clearly, if $M\in \cu_{w\ge n}$ then it fulfils (all) the conditions of our proposition, and the converse implication is fulfilled whenever $M$ is $w$-bounded above.
2. A more or less explicit description of all objects fulfilling these conditions immediately follows from Corollary 4.1.4(1)
of \cite{bkwn}. \end{rema}
Let us now prove that the Chow weight structure on $\dmr$ is ("very often") degenerate.
\begin{pr}\label{paydegen}
Let $k$ be a perfect field, $p=\cha k$; let $R$ be a commutative associative $\zop$-algebra (where we set $\zop=\z$ if $p=0$). Denote by $\dmr$ the smashing category of $R$-linear Voevodsky motives over $k$ (cf. Remark \ref{rayoub}(1)).
Then the subcategory $\hu\subset \dmr$ of twists of $R$-motives of smooth projective $k$-varieties by $-(i)[2i]=-\lan i \ra$ is connective and compactly generates $\dmr$ (cf. Theorem \ref{thegcomp}); consequently, $\hu$ purely compactly generates a left non-degenerate weight structure $\wchow$ on $\dmr$.
Moreover, if $R$ is not torsion and $k$ is of infinite transcendence degree over its prime subfield then
$\wchow$ is right degenerate.
\end{pr}
\begin{proof}
These properties of $\hu$ are rather well-known.
The objects of $\hu$ are compact in $\dmr$ essentially by the definition of the latter category; see Remark 4.11 of \cite{degmod}, \S1.1 of \cite{cdint}, or \S1.3 and 3.1 of \cite{bokum}. In Remark 3.1.2 of \cite{bokum} it is shown that $\hu$ is
connective in $\dmr$; this is an easy application of Corollary 6.7.3 of \cite{bev}. Lastly, $\dmr$ is well-known to coincide with its localizing subcategory generated by the subcategory $\dmgr$ of compact objects; see \S3.1 of \cite{bokum}. Hence it suffices to recall that $\dmgr$ is densely generated by $\hu$; see Proposition 2.2.1(1) of \cite{bsoscwhn} or Theorem 2.2.1 of \cite{bzp}. Hence Theorem \ref{thegcomp}(\ref{itnp1}) gives the existence and the left non-degeneracy of $\wchow$.
Now we demonstrate that
$\dmr$ contains a right $\wchow$-degenerate object whenever $R$ is not torsion and $k$ is of infinite transcendence degree over its prime subfield. In this case Lemma 2.4 of \cite{ayconj} essentially gives a sequence of morphisms between objects $Y_i=R(i)[i]$ (for $i\ge 0$) such that the motif $Y=\hcl Y_i$ (see Proposition \ref{ppcoprws}(\ref{icoprhcl})) is not zero (note that in loc. cit. formally only the case $R=\q$ and $k\subset \com$ was considered; yet the simple proof of that lemma extends to our setting without any difficulty). Now, $R(i)[i]\in \obj \chowr[-i]\subset \dmr{}_{\wchow\le -i}$; hence Proposition \ref{ppcoprws}(\ref{icoprhcl}) gives the statement in question.
\end{proof}
\begin{rema}\label{rwchow}
1. Clearly, this weight structure $\wchow$ is also purely compactly generated by any small essentially wide subcategory of the category $\chowr=\kar(\hu)$ of $R$-Chow motives (whereas the category $\chowr$ itself is essentially small); whence we call it a Chow weight structure (as well).
2. Moreover, if $R$ is torsion but not zero then one can probably use a similar argument if $k$ is the perfect closure of a purely transcendental extension $k'$ of infinite degree of some field $k''\subset k'$.\end{rema}
\subsection{On "variations" of weight Postnikov towers and weight complexes}\label{subtlety}
Now we study the question of lifting $\wstu$-truncations and Postnikov towers of $t(M)$ to those of $M$; so we (try to) describe all possible choices of weight complexes for a fixed object of $\cu$ (see Corollary \ref{cenvel}(1)). These questions are rather natural and relevant, even though somewhat technical.
As a consequence we obtain a new statement on the existence of weight structures.
\begin{pr}\label{piwcex}
Let $M\in \obj \cu$, $m\in \z$; assume that $\hw$ is Karoubian.
1. Then for a complex $C\in \obj K(\hw)$ there exists a choice of $w_{\le m}M$ such that $t(w_{\le m}M)\cong C$ if and only if $C$ is a choice of $\wstu_{\le m}(t(M))$ (note that this assumption does not depend on the choice of $t(M)$;
see Remark \ref{rwc}(\ref{irwco})).
2. Assume that $N'=w_{\le m+1}M$. Then for any $m$-$\wstu$-decomposition triangle
\begin{equation}\label{ekwd}
C \stackrel{e}{\to} t(N')\stackrel{f}{\to} D\stackrel{g}{\to} C[1] \end{equation} (in $K(\hw)$),
where $t(N')$ comes from a weight Postnikov tower $Po_{N'}$ for $N'$, there exist
certain $\pwcu$-morphisms $Po_{N'}\stackrel{f_{\pwcu}}{\longrightarrow} Po_{\tilde D}\stackrel{g_{\pwcu}}{\longrightarrow} Po_{N[1]}$ satisfying the following conditions: the corresponding weight complex morphisms $(f_{K(\hw)}, g_{K(\hw)})$ form a couple $K(\hw)$-isomorphic to $(f,g)$ (with the first "component" of this isomorphism being just $\id_{t(N')}$), and the underlying $\cu$-morphisms $N'\stackrel{f_{\cu}}{\to} {\tilde D}\stackrel{g_{\cu}}{\to} N[1]$ extend to a $\cu$-distinguished triangle such that the composed morphism $N\to N'\to M$ gives an $m$-weight decomposition of $M$.
3. Let $\bu$ be a full additive subcategory of $\hw$. Then the full subcategory $\cu^{\bu}$ of $\cu$ consisting of those $N\in \obj \cu$ such that $t(N)$ is $K(\hw)$-isomorphic to an object of
$K(\bu)$, is triangulated. Moreover, $w$ restricts (see Definition \ref{dwso}(\ref{idrest})) to $\cu^{\bu}$, and the heart of this restriction $w^{\bu}$ equals $\kar_{\cu}\bu\cong \kar(\bu)$.
\end{pr}
\begin{proof}
1. Applying Proposition \ref{pwt}(\ref{iwcex}) to any $m$-weight decomposition of $M$ (see Remark \ref{rstws}(2)) we obtain the existence of a $K(\hw)$-distinguished triangle $t(w_{\le m}M)\to t(M)\to t(w_{\ge m+1}M)\to t(w_{\le m}M)[1]$. Next, $t(w_{\le m}M)\in K(\hw)_{\wstu\le m}$ and $t(w_{\ge m+1}M)\in K(\hw)_{\wstu\ge m+1}$ according to Proposition \ref{pwt}(\ref{iwcons}); hence $t(w_{\le m}M)$ is a choice of $\wstu_{\le m}(t(M))$.
Now we prove the converse implication, i.e., that for any $C=\wstu_{\le m}(t(M))$ there exists $N=w_{\le m}M$ such that $t(N)\cong C$. We take $N'$ to be a choice of $w_{\le m+1}M$; then the complex $C'=t(N')$ (as we have just proved) is a choice of $\wstu_{\le m+1}(t(M))$. So we take $f\in K(\hw)(C,C')$ to be
a morphism "compatible with $\id_{t(M)}$" via Proposition \ref{pbw}(\ref{icompl}).
Applying Proposition \ref{pwt}(\ref{iwpt1}) we obtain that $f$ can be completed to a distinguished triangle of the form (\ref{ekwd}). Thus the implication in question reduces to assertion 2.
2. We can certainly assume $m=-1$. The idea is to "truncate" $N'$ to obtain $N$.
The aforementioned Proposition \ref{pwt}(\ref{iwpt1}) actually implies that $D\in K(\hw)_{\wstu=0}$. Since $\hw$ is Karoubian, we can assume $D\in \obj \hw$; so we set ${\tilde D}=D$. Let us make a choice of $N'{}^0=w_{\ge 0}N'$; it belongs to $\cu_{w=0}$ according to Proposition \ref{pbw}(\ref{iwd0}), and arguing as above we obtain that $N'{}^0$ is also a choice of $\wstu_{\ge 0} C'$. According to Proposition \ref{pbw}(\ref{ifact}) the morphism $f$ factors through the weight truncation morphism $N'\to N'{}^0$. So we set $f_{\cu}$ to be the corresponding composition $N'\to N'{}^0\to D$ and complete this morphism to a distinguished triangle $N
{\to} N'\stackrel{f_{\cu}}{\to}
D \stackrel{g_{\cu}}{\to} N[1]$.
Next we apply Proposition \ref{pwt}(\ref{iwcex}) to the couple $(f_{\cu},g_{\cu})$ to obtain a choice of morphisms $Po_{N'}\stackrel{f_{\pwcu}}{\longrightarrow} Po_{\tilde D}\stackrel{g_{\pwcu}}{\longrightarrow} Po_{N[1]}$ such that $Po_{\tilde D}$ is a weight Postnikov tower for $D$.
Proposition \ref{pwt}(\ref{iwcex}) also says that the corresponding couple $(f_{K(\hw)}, g_{K(\hw)})$ can be completed to a distinguished triangle. Since $D\in K(\hw)_{\wstu=0}$, the arrow $f_{K(\hw)}$ is $K(\hw)$-isomorphic to $f$ (since non-zero morphisms in $K(\hw)(N,D)$ do not vanish in $\kw(\hw)(N,D)$).
Thus $t(N)\cong C$ and $(f_{K(\hw)}, g_{K(\hw)})\cong (f,g)$. Applying Proposition \ref{pwt}(\ref{iwcons}) we obtain that $N\in\cu_{w\le -1}$.\footnote{Here we use the fact that $N$ belongs to $\cu_{w\le 0}$, which is immediate from Proposition \ref{pbw}(\ref{iext}).} Lastly, for a cone $P$ of the composed morphism $N\to N'\to M$ we have a distinguished triangle $D\to P\to w_{\ge 1}M\to D[1]$ (by the octahedral axiom); hence $P\in \cu_{w\ge 0}$, and therefore $N=w_{\le -1}M$.
3. Proposition \ref{pwt}(\ref{iwhecat},\ref{iwcex}) clearly implies that $\cu^{\bu}$ is a triangulated subcategory of $\cu$. Next, to prove that $w$ restricts to $\cu^{\bu}$ it obviously suffices to verify that for any object $M$ of $\cu^{\bu}$ there exists a choice of $w_{\le 0}M$ that belongs to $\obj \cu^{\bu}$ as well. The latter statement follows immediately from the definition of $ \cu^{\bu}$ along with assertion 1.
Let us now calculate $\hw^{\bu}$. If $M$ is the image of an idempotent endomorphism $p:B\to B$ for $B\in \obj \bu$ then we clearly can take $t(M)=\dots\to 0\to B\stackrel{\id_B-p}{\to} B\stackrel{p}{\to} B \stackrel{\id_B-p}{\to} B\to \dots$ (the
first $B$ is in degree $0$).
Hence $\hw^{\bu}$ contains $\kar_{\cu}(\bu)\cong \kar(\bu)$.
Conversely, let $M$ belong to the heart of $w^{\bu}$. Then we choose $t(M)\in \obj K(\bu)$, and it remains to recall that $M$ is a retract of the object $M^0$ according to Proposition \ref{pwt}(\ref{iwch}).
\end{proof}
\begin{rema}\label{riwcex}
Adopt the assumptions of Proposition \ref{piwcex}(3).
1. Let us describe a funny example to this statement. According to Proposition \ref{paydegen} there exists a left non-degenerate weight structure $\wchow$ on
$\dmr$ whose heart is the coproductive hull of $\chowr$ in this category (see Remark \ref{rwchow}(1) and Theorem \ref{thegcomp}(\ref{itnp1},\ref{itnp1d})). Thus we can take $\bu=\chowr$ in our proposition and consider the corresponding subcategory $\dmr^{\chowr}\subset \dmr$ consisting of motives whose weight complexes are complexes of Chow motives
(and not of retracts of their small coproducts). According to part 3 of our proposition, this "big" $\wchow$ on $\dmr$ restricts to a weight structure on $\dmr^{\chowr}$ whose heart is equivalent just to $\chowr$.
Next we consider the functor $F':-\otimes \omp:\da\to \cu'$ (as mentioned in Remark \ref{rayoub}(2)). According to Theorem \ref{thegcomp}(\ref{itnp1},\ref{itnwe}) there also exists a weight structure on $\cu'$ such that $F'$ is weight-exact (here we use the negativity statement mentioned in Remark \ref{rayoub}(2)). Thus one may apply Theorem \ref{twcons} to the restriction $F$ of $F'$ to $\da^{\chowk}$ (so, we take $R=k$, where $k$ is our characteristic $0$ base field) as well; note that $\da^{\chowk}$ is obviously bigger than $\dmgk$ (look at $\coprod_{i\in \z}N[i]$ for any non-zero $N\in \obj \chowk$).
Moreover, it appears (cf. Remark \ref{rayoub}(3)) that combining Theorem II of \cite{ayoubcon} with Theorem \ref{twcons}(3) one can
deduce that all $\wchow$-bounded below objects of $\da^{\chowk}$ whose de Rham cohomology is zero are right $\wchow$-degenerate.
2. Actually, we would have lost no information if we had assumed that $\bu$ is retraction-closed in $\hw$ in Proposition \ref{piwcex}(3).
Indeed, let us demonstrate that $\cu^{\bu}=\cu^{\kar_{\cu}\bu}$. The latter assertion can be deduced from Remark 1.12(4) of \cite{neederex}; it is obviously equivalent to the
fact that any object $C$ of $K(\kar_{\cu}\bu)$ is isomorphic to an object of its full subcategory $K(\bu)$.
Since $K(\bu)$ is a full triangulated subcategory of $K(\kar_{\cu}\bu)$, it suffices to consider the case where $C$ is bounded either above or below. Moreover, applying duality
we reduce the statement to the case where $C=(C^i)$ is concentrated in non-negative degrees. Now we present each $C^i$ as the image of an idempotent endomorphism $p^i$ of some $B^i\in \obj \bu$ and consider the morphisms $e^{i}:B^{i}\to B^{i+1}$ that factor through the boundaries $d^{i}:C^{i}\to C^{i+1}$ (cf. the definition of $\kar(B)$ in \S\ref{snotata}). Then it is easily seen that the totalization of the double complex
$$\begin{CD}
B^0@>{\id_{B^0}-p^0}>> B^0 @>{p^0}>> B^0 @>{\id_{B^0}-p^0}>> B^0 @>{p^0}>> \dots\\
@VV{e^0}V@VV{0}V@VV{0}V @VV{0}V \\
B^1@>{\id_{B^1}-p^1}>> B^1 @>{p^1}>> B^1 @>{\id_{B^1}-p^1}>> B^1 @>{p^1}>> \dots\\
@VV{e^1}V@VV{0}V@VV{0}V @VV{0}V \\
B^2@>{\id_{B^2}-p^2}>> B^2 @>{p^2}>> B^2 @>{\id_{B^2}-p^2}>> B^2 @>{p^2}>> \dots\\
@VV{e^2}V@VV{0}V@VV{0}V @VV{0}V \\
\dots@>{}>>\dots @>{}>>\dots @>{}>>\dots
\end{CD}$$
is homotopy equivalent to $C$ (in $K(\kar_{\cu}\bu)$).
3. Now let us assume that $\bu$ is retraction-closed in $\hw$ (though this condition is not really important, as we have just demonstrated), and suppose that $\cu'$ is a full triangulated subcategory of $\cu$ such that $w$ restricts to it and the heart of this restriction lies in $\bu$. Then any object of $\cu'$ clearly possesses a $w$-weight complex
whose terms are objects of $\bu$. Thus $\cu'$ is a subcategory of $\cu^{\bu}$, and we obtain that $\cu^{\bu}$ is the maximal subcategory of $\cu$ such that $w$ restricts to it and the heart of this restriction is $\bu$.
Note in contrast that $w$ restricts to the subcategory of $\cu$ strongly generated by $\bu$ and the heart of this restriction equals $\bu$ according to Proposition \ref{pexw}(1) (combined with Proposition \ref{pbw}(\ref{igenlm})); this restriction of $w$ is essentially minimal among the restrictions whose heart equals $\bu$.
\end{rema}
Now we prove a corollary that is relevant for \cite{bsoscwhn}.
\begin{coro}\label{cenvel}
Assume that $M\in \obj \cu$ and $t(M)$ is homotopy equivalent to a complex $(M'{}^i)\in \obj K(\hw)$.
1. Assume that $M\in \cu_{w\le 0}$ and $M'{}^i=0$ for $i<0$. Then
there exists a weight Postnikov tower for $M$ such that the corresponding weight complex $t'(M)$ equals $(M'{}^i)$.
2. Assume that $M$ is $w$-bounded and the complex $(M'{}^i)$ is bounded. Then $M$ belongs to the $\cu$-extension closure of $M'{}^i[-i]$.
\end{coro}
\begin{proof}
1. According to Theorem 2.2.2(I.1,III.1) of \cite{bonspkar}, there exists a triangulated category $\cu'$ endowed with a weight structure $w'$ such that $\cu\subset \cu'$, $w$ is the restriction of $w'$ to $\cu$, and the category $\hw'$ is Karoubian. Moreover, we can certainly assume that $\cu$ is a strict subcategory of $\cu'$.
Now we apply Proposition \ref{piwcex}(2) to $\cu'$ repeatedly starting from $M_{\le 0}=M$ to obtain a $w'$-Postnikov tower for $M$ such that $M^i=M'{}^i$ for all $i\in \z$. Being more precise, we set $M_{\le i}=M$ for $i\ge 0$, choose a
$w'$-Postnikov tower for $M=M_{\le 0}$,
and starting from $i=0$, on each step we take the $w'$-Postnikov tower $Po_{M_{\le i}}$ constructed earlier and construct a distinguished triangle $M_{\le i}\stackrel{b_i}{\to} M'{}^{-i}[i]\stackrel{c_i}{\to} M_{\le i-1}[1]\to M_{\le i}[1]$ along with a lift of $(b_i,c_i)$ to $\operatorname{Post}_{w'}(\cu')$; we require $(b_{i,K(\hw)}, c_{i,K(\hw)})$ to be isomorphic to the corresponding morphisms in the tower corresponding to the complex $(M'{}^i)$. These compatibilities of choices (of $w'$-Postnikov towers) allow us to compute $t(d'^{-i}_{M}): M'{}^{-i}\to M'{}^{1-i}$ as $b_{i,K(\hw)}[1-i]\circ c_{i,K(\hw)}[-i]$. Since the restriction of $t$ to the lifts to $\cuw$ of $\hw$ is a full embedding, we obtain that the boundary morphisms in this $t'(M)$ are the same as those for $(M'{}^i)$ if $i\ge 0$, and clearly $M'{}^i=0$ for $i<0$.
It remains to check that this $w'$-Postnikov tower for $M$ is simultaneously a $w$-one. Since $w$ is the restriction of $w'$ to $\cu$ and $M'{}^i\in \obj \cu$, for this purpose it suffices to verify that all $M_{\le i}$ are objects of $\cu$ as well; the latter fact is given by an obvious downward induction (if $i<0$; whereas for $i\ge 0$ we have $M_{\le i}=M\in \obj \cu$).
2. We can certainly assume that $M'{}^i=0$ for $i<0$ and $M\in \cu_{w\ge 0}$.
Let us take some weight Postnikov tower for $M$ such that the corresponding complex $t'(M)$ equals $M'{}^i$ (as provided by the previous assertion). Assume that $M'{}^i=0$ for $i\ge j$. Then the corresponding weight complex $t(M_{\le -j})$ is zero, and Proposition \ref{pwt}(\ref{iwcons}) implies that $M_{\le -j}=0$. Hence this Postnikov tower is bounded (see Definition \ref{dfilt}(1)); thus $M$ belongs to the $\cu$-extension closure of $M'{}^i[-i]$ by
Lemma \ref{lrwcomp}(3).
\end{proof}
\begin{rema}\label{realiz}
1. The author suspects that a more detailed treatment of weight Postnikov towers would give a complete description of those (not necessarily bounded from any side) $(M'{}^i)\in \obj K(\hw)$ that come from $w$-Postnikov towers for $M$. Most probably, any $(M'{}^i)$ that is $K(\hw)$-isomorphic to $t(M)$ can be "realized" this way if $\hw$ is Karoubian.
It appears that this statement is easier to prove whenever there exists a "lift" of $t$ to an exact functor $t^{st}:\cu\to K(\hw)$
(cf. Remark \ref{rwc}(\ref{irwc7})). Indeed, then one can lift the stupid truncations of $(M'{}^i)$ via $t^{st}$ to weight truncations $w_{\le j}M$ (using the corresponding minor modification of Proposition \ref{piwcex}(2)); connecting these objects via Proposition \ref{pbw}(\ref{icompl}) gives a weight Postnikov tower as desired.
2. Note however that it is important to assume that $\hw$ is Karoubian. Indeed, if $p$ is an idempotent endomorphism of $B\in \obj \hw$ that does not give an $\hw$-splitting then the $2$-periodic complex $\dots \to B\stackrel{\id_B-p}{\longrightarrow} B\stackrel{p}{\to} B \stackrel{\id_B-p}{\longrightarrow} B\to \dots$ does not come from a weight Postnikov tower for any object $M$ of $\cu$ since the corresponding $M_{\le 0}$ is a retract of $B$ that cannot belong to $\obj \cu$.
This example demonstrates that the "lifting" questions treated in this section are not trivial.
\end{rema}
\section{"Topological" examples}\label{stop}
In \S\ref{shg} we prove that in the equivariant stable homotopy category $\shg$ of $G$-spectra there exists a weight structure $\wg$ generated by the stable orbit subcategory (that consists of the spectra of the form $S_H^0$,
where $H$ is a closed subgroup of $G$); i.e., $\wg$ is purely compactly generated by equivariant spheres. Applying the previous results to $\wg$ we obtain that this weight structure is closely related to the connectivity of spectra, and the corresponding pure cohomology is the Bredon one (coming from Mackey functors).
Our section \ref{shtop} is dedicated to the detailed study of the case $G=\{e\}$, i.e., of the stable homotopy category $\shtop$ (along with the corresponding weight structure $\wsp$). We prove that $\wsp$-Postnikov towers are the {\it cellular} ones in the sense of \cite{marg} (cf. Remark \ref{rshg}(\ref{irgcell}) for a conjectural generalization of this statement).
In \S\ref{spost} we discuss the relation of our results to $t$-structures (that are {\it adjacent} to the corresponding weight structures) along with certain results of \cite{axstab}; we do not prove anything new in this section.
\subsection{On equivariant spherical weight structures and Mackey functors}\label{shg}
Now let us apply the general theory to the study of equivariant stable homotopy categories. Let us list some notation and definitions related to this matter.
\begin{itemize}
\item $G$
is a (fixed) compact Lie group; we will write $\shg$ for the stable homotopy category of $G$-spectra indexed by a complete $G$-universe.
\item We take $\cp$ to be the set of spectra of the form $S_H^0$, where $H$ is a closed subgroup of $G$ (cf. Definition I.4.3 of \cite{lms}; recall that $S_H^0$ is constructed starting from the $G$-space $G/H$). We will use the notation $\hu$ for the corresponding (preadditive) subcategory of $\shg$; so, it is the (stable) {\it orbit category} of ibid.
Recall also that $S_H^0[n]$ is the corresponding sphere spectrum $S_H^n$ essentially by definition (see loc. cit.).
\item The equivariant homotopy groups of an object $E$ of $\shg$ are defined as $\pi_n^H(E)=\shg(S_H^n,E)$ (for all $n\in \z$; see \S I.6 and Definition I.4.4(i) of ibid.).
\item We will write $\emg$ for the full subcategory of $\shg$ whose object class is $(\cup_{i\in \z\setminus \ns} \cp[i])^{\perp}$ (its objects are the {\it Eilenberg-MacLane} $G$-spectra; see \S XIII.4 of \cite{mayeq}).
\item We
write $\macg$ for the category of additive contravariant functors from $\hu$ into $\ab$ (cf. Theorem \ref{thegcomp}(\ref{itnp3})); its objects are the {\it Mackey functors} in the sense of loc. cit. Respectively, $\cacp$ will denote the Yoneda embedding $\hu\to \macg$.
Recall here that for any Mackey functor $M$ the corresponding (Bredon cohomology) functor $H_G^0(-,M)$ is represented by some Eilenberg-MacLane $G$-spectrum (see loc. cit.).
\end{itemize}
Now we relate the theory of weight structures (cf. Definitions \ref{dwstr}, \ref{dwso}(\ref{idh},\ref{idrest}), \ref{dpure}, and \ref{dbh}) to $\shg$.
\begin{theo}\label{tshg}
The following statements are valid.
\begin{enumerate}
\item\label{itgfine}
The category $\cu=\shg$ and the class $\cp$ specified above satisfy the assumptions of Theorem \ref{thegcomp}. Thus $\cp$ gives a left non-degenerate smashing weight structure $\wg$ on $\shg$ whose heart is the coproductive hull of $\hu$ in $\shg$ and is equivalent to the formal coproductive hull of $\hu$.
\item\label{itgrest} $\wg$ restricts to the subcategory of compact objects of $\shg$; the heart of this restriction $\wgfin$ is the Karoubi envelope of the class of finite coproducts of elements of $\cp$.
\item\label{itgconn}
Let $n\in \z$. Then the class of $(n-1)$-connected spectra (see Definition I.4.4(iii) of \cite{lms}; i.e., this is the class $(\cup_{i<n}\cp[i])^{\perp}$) coincides with $\shg_{\wg\ge n}$. In particular, $\shg_{\wg\ge 0}$ is the class of {\it connective} spectra, and $\wg$-bounded below objects are the bounded below spectra of loc. cit.
Moreover, $\shg_{\wg\ge n}$ is the smallest class of objects of $\shg$ that contains $\cp[i]$ for $i\ge n$ and is closed with respect to coproducts and extensions.
\item\label{itgpure} A (co)homological functor from $\shg$ into an AB4 (resp. AB4*) abelian category $\au$ that respects coproducts (resp. converts them into products) is $\wg$-pure if and only if it kills $\cup_{i\neq 0}\cp[i]$.
In particular, all Eilenberg-MacLane $G$-spectra represent $\wg$-pure functors.
\item\label{itgcacp} The pure homological functor $H^{\cacp}$ (see Theorem \ref{tpure} and Proposition \ref{pgcomp}) is the
equivariant ordinary homology with Burnside ring coefficients functor $H^G_0$ considered in \cite{lewishur} (cf. also Definition X.4.1 of \cite{mayeq}), and for any Mackey functor $M$ the corresponding pure functor $H_M$ (see Remark \ref{rpuresd}(1)) coincides with $H_G^0(-,M)$ in Definition X.4.2 and \S XIII.4 of ibid.
\item\label{itgemg} The category $\emg$ is naturally isomorphic to $\macg$ (in the obvious way); thus $\emg$ is Grothendieck abelian and has an injective cogenerator $I$.
\item\label{itgneg} $\shg_{\wg\le 0}$ is the smallest subclass of $\obj \shg$ that is closed with respect to coproducts, extensions, and contains $\cp[i]$ for $i\le 0$. This class also equals $\perpp \shg_{\wg\ge 1}$;\footnote{Recall here that $\shg_{\wg\ge 1}$ is the class of $0$-connected $G$-spectra.}
moreover, it is annihilated by $H_i$ for all $i>0$, where $H$ is any pure homological functor from $\shg$.
\end{enumerate}
\end{theo}
\begin{proof}
\ref{itgfine}. The compactness of the spectrum $S_H^0$ for any closed subgroup $H$ of $G$ is given by Corollary A.3 of \cite{hukriz}; cf. also Lemma I.5.3 of \cite{lms}. The category $\hu$ is a connective subcategory of $\shg$ according to Lemma 2.3(i) of \cite{lewishur} (see also Proposition I.7.14 of \cite{lms} for a generalization).
Next, $\cp$ generates $\shg$ as its own localizing subcategory according to Theorem 9.4.3 of \cite{axstab}.\footnote{Alternatively, note that the definition of $\shg$ (see \S I.5 of \cite{lms}) immediately implies that $(\cup_{i\in \z}\cp[i])^{\perp}=\ns$, and it remains to apply Lemma \ref{lcg} to obtain this generation statement.}
Lastly, we apply Theorem \ref{thegcomp}(\ref{itnp1d}) to compute the heart of $\wg$.
\ref{itgrest}. This is just the corresponding case of Theorem \ref{thegcomp}(\ref{itnpr}).
\ref{itgconn}. By definition, a $G$-spectrum $N$ is $(n-1)$-connected whenever $\pi_i^H(N)\cong \shg(S_H^i,N)=\ns$ for any $i<n$ and any closed subgroup $H$ of $G$. Hence it remains to apply Theorem \ref{thegcomp}(\ref{itnp1d}) to obtain all the statements in question.
\ref{itgpure}. Immediate from Theorem \ref{thegcomp}(\ref{itnp2}) (cf. also Remark \ref{rgensmash}(\ref{irspure})).
\ref{itgcacp}. Since the functor $H^G_0$ (resp. $H_G^0(-,M)$) respects coproducts (resp. converts coproducts into products), combining the previous assertion with Remark \ref{rgensmash}(\ref{irspure}) we obtain that it suffices to compare the restrictions of $H^G_0$ and $H^{\cacp}$ (resp. of $H_G^0(-,M)$ and $H_M$) to the categories $\hu[i]$ for $i\in \z$. Now,
these restrictions are canonically isomorphic by definition (resp. by the "dimension axiom" (XIII.4.4) of \cite{mayeq}).
\ref{itgemg}. This is just the corresponding case of Theorem \ref{thegcomp}(\ref{itnp3}).
\ref{itgneg}. The first of these descriptions of $\shg_{\wg\le 0}$ is given by Theorem \ref{thegcomp}(\ref{itnp1d}). Next,
$ \shg_{\wg\le 0}= \perpp \shg_{\wg\ge 1}$ according to Proposition \ref{pbw}(\ref{iort}). It remains to recall the definition of pure functors to conclude the proof.
\end{proof}
\begin{rema}\label{rshg}
\begin{enumerate}
\item\label{irglam}
Our theory can also be applied to the subcategories of $\lam$-linear spectra in $\shg$; that is, for a set of prime numbers $S\subset \z$ and $\lam=\z[S\ob]$ one can invert all morphisms of the form $s\id_E$ for all $s\in S$ and $E\in \obj \shg$, and the right adjoint $l_S$ gives an embedding of this localization $\shg_{\lam}$ into $\shg$. These versions of our statements are described in \S4.2 of \cite{bkwn}.
\item\label{irgcell} It is certainly very convenient to compare pure functors by looking at their restrictions to the subcategory $\hu$ of $\hw$ only; this was crucial for the proof of part \ref{itgcacp} of our theorem. Recall however that the (co)homology functors coming from Mackey functors can be described in terms of {\it skeletal filtrations of $G$-CW-spectra} (see \S XIII.4 of \cite{mayeq}). Thus it would be nice to understand the relation between filtrations of this sort and $\wg$-Postnikov towers.
The author conjectures that a $\wg$-Postnikov tower of a spectrum $E$ lifts to a skeletal filtration (in the category of $G$-CW-spectra) if and only if the corresponding terms $E^i\in \shg_{\wg=0}$ are coproducts of elements of $\cp$ (i.e., one cannot take retracts of objects of this type here). Note that the proof of this statement would closely relate the filtrations on $\shg$ given by $\wg$ with {\it $(a,b)$-dimensional spectra} of \cite[Appendix A]{green} (one considers the quotients of possible skeletal filtrations for object of $\shg$ in this definition).\footnote{Probably, Corollary \ref{cenvel} can help to study this relation for {\it finite} $G$-spectra. One of the problems here is that the additive hull of $\cp$ is not Karoubian in general; cf. Theorem A.4 and Remarks A.5 of ibid.} The author believes that this conjecture is rather easy for those spectra that come from {\it $G$-CW-complexes}, and requires certain (countable) homotopy colimit and limit argument (cf. Definition 2.1 of \cite{bokne} or Proposition \ref{ppcoprws}(\ref{icoprhcl})) in general.
Note however that in the case $G= \{e\}$ there exists an alternative notion of {\it cellular tower} considered in \S6.3 of \cite{marg}. Theorem \ref{top}(\ref{itopcell}) below says that this notion is equivalent to that of a $\wsp$-Postnikov tower; this is related to the fact that in this case the category of coproducts of copies of $S^0$ (along with its subcategory of finite coproducts) is Karoubian.
\item\label{iruniv}
It appears that the results of \cite{lms} are actually sufficient to generalize all the assertions of our theorem to the case where $\cu$ is the stable homotopy category of $G$-spectra indexed on a (not necessarily complete) $G$-universe. In particular, this universe can be $G$-trivial (i.e., $G$ acts trivially on it); this allows us to apply it to the corresponding representable functors as considered in \S IV.1 of \cite{bred}.
\end{enumerate} \end{rema}
Now let us formulate our weight detection statements in this setting. We will freely use the notation introduced above.
\begin{pr}\label{pshgwd}
Consider the following conditions on an object $E$ of $\shg$.
(i). $E$ is connective.
(ii). $C_j(E)=0$ for any $\wg$-pure homological functor $C$ from $\shg$ and $j<0$.
(iii). $C^j(E)=0$ for any $\wg$-pure cohomological functor $C$ from $\shg$ and $j<0$.
(iv). $E\perp I[j]$ (see Theorem \ref{tshg}(\ref{itgemg})) for all $j<0$.
(v). $H_j^{\cacp}(E)=0$ for all $j<0$.
Then the following statements are valid.
1. Conditions (ii)-(v) are equivalent.
2. These conditions follow from condition (i).
3. The converse implication is fulfilled whenever $E$ is $m$-connected for some $m\in \z$.
\end{pr}
\begin{proof}
1. Conditions (ii) and (iii) are equivalent since we can just invert arrows in the target. It remains to apply Proposition \ref{pgcomp} to our setting.
2. Condition (i) implies condition (ii) just by the definition of purity.
3. Recall that Proposition \ref{pgcomp} also implies that condition (ii) (as well as conditions (iv) and (v)) is equivalent to $t_{\wg}(E)\in K(\hw^G)_{\wstu \ge 0}$. Thus it remains to apply Proposition \ref{pwt}(\ref{iwcons}).
\end{proof}
\subsection{The case of trivial $G$: cellular towers}\label{shtop}
Now we apply our results to the stable homotopy category $\shtop$ (whose detailed description can be found in \cite{marg}). This corresponds to the case of a trivial $G$ in Theorem \ref{tshg}. We will write $\emo$ for $\emg$ and $w^{sph}$ for $\wg$ in this case, and use the remaining notation from this theorem.
\begin{theo}\label{top}
Set $\cp=\{S^0\}$.
Then the following statements are valid.
\begin{enumerate}
\item\label{itoptriv}
The functor $\shtop(S^0,-)$ gives equivalences $\hwsp\to \abfr$ (the category of free abelian groups) and $\emo\to \ab$; thus $\aucp$ is equivalent to $\ab$ as well.
Moreover, $\wsp$ restricts to the category of finite spectra, and the heart of this restriction is equivalent to the category of finitely generated free abelian groups.
\item\label{itopsingh}
The functor $H^{\cacp}$ is essentially the singular homology functor $\hsing$.
\item\label{itopsingc}
For any abelian group $\gam$ and the corresponding spectrum $\egam\in \obj \emo$ the functor $\shtop(-,\egam)$ is isomorphic to the singular cohomology with coefficients in $\gam$.
\item\label{itopcell}
A $\wsp$-Postnikov tower of a spectrum $E\in \obj \shtop$ is a cellular tower for $E$ in the sense of
\cite[\S6.3]{marg}.
\item\label{itopskel}
Assume that the following statement is fulfilled for any object $E$ of $\shtop$: if $\hsing_i(E)=\ns$ for $i>0$ and $\hsing_0(E)$ is a free abelian group then $E\in \shtop_{\wsp\le 0}$.
Then (conversely) a cellular tower for $E$ in the sense of loc. cit. is a $\wsp$-Postnikov tower, and
$\shtop_{\wsp\le n}$ consists precisely of {\it$n$-skeleta} (of certain spectra) in the sense of ibid. (cf. also Definition 6.7 of \cite{christ}).
\end{enumerate}
\end{theo}
\begin{proof}
\ref{itoptriv}. Since the endomorphism ring of $S^0$ is $\z$, $\aucp$ is equivalent to $\ab$. Next, Theorem
\ref{tshg}(\ref{itgfine}) implies that $\hwsp$ is equivalent to the Karoubi envelope of the category of free abelian groups, i.e., to $\abfr$ itself. Moreover, applying
Theorem \ref{tshg}(\ref{itgrest}) we obtain that $\wsp$ restricts to finite spectra and compute the heart of this restriction. Lastly,
applying part \ref{itgemg} of that theorem we obtain that $\emo$ is equivalent to $\ab$ as well.
\ref{itopsingh}. Obviously, singular homology respects coproducts (since $\hsing(-)\cong \shtop(S^0,-\wedge \emz)$ and the smash product in $\shtop$ preserves coproducts; here $\emz$ is the Eilenberg-MacLane spectrum corresponding to the group $\z$). It is also pure since $\hsing_i(S^0)=\ns$ for $i\neq 0$. Thus comparing the restrictions of $\hsing$ and $H^{\cacp}$ to the category $\hu$ (whose only object is $S^0$ in this case) we obtain the isomorphism in question according to Remark \ref{rgensmash}(\ref{irspure}) (cf. also Theorem \ref{tshg}(\ref{itgpure})).
\ref{itopsingc}. This is a well-known statement; it can also be easily deduced from Theorem \ref{tshg}(\ref{itgemg}).
\ref{itopcell}. Let us recall that a cellular tower for $E$ is a certain Postnikov tower for $E$ whose term $E_{\le n}$ (in the notation of Definition \ref{dfilt}(2)) was denoted by $E^{(n)}$ in \S6.3 of \cite{marg}. Respectively, the assumption on $E_n=\co(E_{\le n-1}\to E_{\le n})$ in loc. cit. says that $E_n$ is a coproduct of copies of $S^0[n]$ (since $S^0[n]$ is the $n$-dimensional sphere spectrum) for all $n\in \z$; this statement is clearly fulfilled for $\wsp$-Postnikov towers (see Proposition \ref{pwt}(\ref{iwpt1})).
Next, $E$ should be the {\it minimal weak colimit} of $E_{\le n}$ in the sense of \S3.1 of ibid.
Now,
$\wsp$ is a smashing left non-degenerate weight structure; hence
$\hcl_n \wsp_{\le n}E\cong E$ by Theorem 4.1.3(1,2) of \cite{bsnew}; here $\hcl_n \wsp_{\le n}E$ denotes the {\it countable homotopy colimit} of these objects (see Proposition \ref{ppcoprws}(\ref{icoprhcl})). It easily follows that the induced homomorphism $\shtop(E,Y)\to \prli_n \shtop(\wsp_{\le n} E ,Y)$ is surjective for any $Y\in \obj \shtop$, and the stable homotopy groups $\pi_*(E)$ of $E$ are the colimits of those of $\wsp_{\le n}E$ (note that $\pi_i= \shtop(S^0[i],-)$); here one can apply the well-known Remark 4.1.2(2,3) of \cite{bsnew}.
Thus $E$ is the minimal weak colimit of $\wsp_{\le n}E$ by definition.
Furthermore, assertion \ref{itopsingh} of
this theorem (combined with the definition of pure homological functors) immediately implies that the inverse limit of singular homology of $\wsp_{\le n}E$ (when $n$ goes to $-\infty$) vanishes; this is the last of the conditions in the definition of cellular towers.
\ref{itopskel}. Assume that $E^{(n)}=E_{\le n}$ is a cellular tower for $E$ in the sense of Margolis. Applying Proposition 6.12
of \cite{marg} we obtain that $\hsing_i(E_{\le m})=\ns$ whenever $i>m\in \z$, and $\hsing_m(E_{\le m})$ is a free abelian group. Applying our assumption we obtain that $E_{\le m}\in \shtop_{\wsp\le m}$.
Next, since $E_n\in \shtop_{\wsp=n}$, the corresponding distinguished triangles (\ref{etpt}) imply that the homomorphism $\pi_i(E_{\le n})\to \pi_i(E_{\le n+1})$ is bijective if $i<n\in \z$ and surjective if $i=n$. Applying the isomorphism $\inli_n \pi_i(E_{\le n})\cong \pi_i(E)$ we obtain that $\pi_i(\co(E_{\le n}\to E))=\ns$ if $i\le n$.
Hence $\co(E_{\le n}\to E)\in \shtop_{\wsp\ge n+1}$ according to Theorem \ref{tshg}(\ref{itgconn}). Thus $E_{\le n}\to E\to \co(E_{\le n}\to E)\to E_{\le n}[1]$ is an $n$-weight decomposition of $E$, and we obtain that the objects $E_{\le n}$ give a weight Postnikov tower indeed.
To prove the second statement in our assertion it remains to recall that $n$-skeleta are just the spectra of the form $E^{(n)}$ for $E\in \obj \shtop$ (and some cellular tower for $E$).
\end{proof}
\begin{rema}\label{rmarg}
1. The assumption made in part \ref{itopcell} of our theorem is given by Theorem 4.2.3(6) of \cite{bkwn}.
Moreover, if we assume in addition that $E$ belongs to $\shtop_{\wsp\le m}$ for some $m\in \z$ then the statement easily follows from Proposition \ref{pwt}(\ref{iwcons}), whereas the general case requires the theory of {\it killing weights} (as developed in ibid.).
Note also that combining loc. cit. and related results with Theorem \ref{tshg} one obtains several new statements on $\shg$.
2. Parts \ref{itoptriv}--\ref{itopcell}
of our theorem were previously formulated by the author in \S4.6 of \cite{bws}. However, their proofs sketched in loc. cit. contained significant gaps, and the results of the current paper and of \cite{bkwn} really help in closing them.
Note also that part \ref{itopcell} of our theorem suggests that weight spectral sequences corresponding to $\wsp$ can be called Atiyah-Hirzebruch ones.
3. Since cellular towers are $\wsp$-Postnikov towers, we obtain that Proposition 6.18 of \cite{marg} is the corresponding case of the weak functoriality of weight Postnikov towers (see Proposition \ref{pwt}(\ref{iwpt2})).
\end{rema}
\subsection{The relation to adjacent $t$-structures and connective stable homotopy theory}\label{spost}
Now we relate the result of this section to $t$-structures. Though in \cite{bbd} (where this notion was introduced) and in previous papers of the author the ``cohomological convention'' for $t$-structures was used, we prefer to use the homological convention that is ``more coherent'' with the topological examples. The corresponding versions of the definitions are as follows.
\begin{defi}\label{dtstrh}
1. A couple of subclasses $\cu_{t\le 0},\cu_{t\ge 0}\subset\obj \cu$ will be said to be a
$t$-structure $t$ on $\cu$ if
they satisfy the following conditions:
(i) $\cu_{t\le 0}$ and $\cu_{t\ge 0}$ are strict, i.e., contain all
objects of $\cu$ isomorphic to their elements.
(ii) $\cu_{t\le 0}\subset \cu_{t\le 0}[1]$ and $\cu_{t\ge 0}[1]\subset \cu_{t\ge 0}$.
(iii) $\cu_{t\ge 0}[1]\perp \cu_{t\le 0}$.
(iv) For any $M\in\obj \cu$ there exists a {\it $t$-decomposition} distinguished triangle
\begin{equation}\label{tdec}
L_tM\to M\to R_tM{\to} L_tM[1]
\end{equation} such that $L_tM\in \cu_{t\ge 0}, R_tM\in \cu_{t\le 0}[-1]$.
2. $\hrt$ is the full subcategory of $\cu$ whose object class is $\cu_{t=0}=\cu_{t\le 0}\cap \cu_{t\ge 0}$.
\end{defi}
\begin{rema}\label{rtst}
Let us recall some well-known properties of $t$-structures (cf. \S1.3 of \cite{bbd}).
1.
The triangle (\ref{tdec}) is canonically and functorially determined by $M$; thus $L_t$ and $R_t$ are functors.
2. $\hrt$ is an abelian category with short exact sequences corresponding to distinguished triangles in $\cu$.
Moreover,
$L_t\circ [1] \circ R_t\circ [-1] \cong [1] \circ R_t\circ [-1] \circ L_t$ (if we consider these functors as endofunctors of $\cu$). The corresponding composite functor $H^t$ actually takes values in $\hrt\subset \cu$, and it is homological if considered this way.
3. We have $\cu_{t\le 0}=(\cu_{t\ge 0}^{\perp})[1]$. Thus $t$ is uniquely determined by $\cu_{t\ge 0}$ (and actually by $\cu_{t\le 0}$ as well).
\end{rema}
Now let us recall some relations of $t$-structures to the results and formulations above.
\begin{pr}\label{padj}
Adopt the notation and assumptions of Theorem \ref{thegcomp}.
Then the following statements are valid.
\begin{enumerate}
\item\label{ipa1}
$w$ is {\it left adjacent} to a certain (unique) $t$-structure $t$ in the sense of \cite[Proposition 1.3.3]{bvtr}, i.e., $\cu_{t\ge 0}=\cu_{w\ge 0}$.
\item\label{ipa2} The category $\hrt\subset \cu$ is the heart of this $t$, and the functor $H^t$ becomes isomorphic to $H^\cacp$ when combined with the equivalence $\hrt\to \aucp$ provided by Theorem \ref{thegcomp}(\ref{itnp3}).
\end{enumerate}
\end{pr}
\begin{proof} Immediate from Theorem 4.5.2 of \cite{bws}; one can also combine Theorem 3.2.3(I) of \cite{bvtr} with Theorem \ref{thegcomp}(\ref{itnp1d}).
\end{proof}
\begin{rema}\label{radjt}
1. For $\cu=\shtop$ (and $\cp=\{S^0\}$) the corresponding $t$-structure is often called the Postnikov $t$-structure; so the author suggests using this term for $\cu=\shg$ (and for $\cp$ as in Theorem \ref{tshg}) as well. In \S3.2.B of \cite{marg} the object $L_tE$ (resp. $R_tE$; here $E$ is an object of $\shtop$) in the distinguished triangle (\ref{tdec}) was said to be {\it of type $E[0,\infty]$} (resp. $E[-\infty,-1]$).
Respectively, in (the proof of) Proposition 3.8 of ibid. certain $t$-decompositions (\ref{tdec}) were constructed; see also Proposition 7.1.2(c) of \cite{axstab} for a more general (and "explicit") formulation of this sort.
Note also that this $t$ does not restrict to finite spectra in contrast to $\wsp$.
2. Let us now say more on the relation of weight structures to connective stable homotopy theory as discussed in
\S7 of \cite{axstab}.
In that section the corresponding triangulated category (we will write $\cu$ for it) was assumed to be {\it monogenic}, i.e., $\cu$ is compactly generated by a single element set $\{X\}$ for some $X\in \obj \cu$. Next, the {\it connectivity} assumption on $X$ in ibid. means that $X\perp X[i]$ for all $i>0$. Applying Theorem \ref{thegcomp}(\ref{itnp1},\ref{itnp1d}) we obtain a unique smashing weight structure on $\cu$ whose heart consists of all retracts of coproducts of copies of $X$.
Now, our weight structure approach gives some new statements on the corresponding class $\cu_{w\ge 0}=\cu_{t\ge 0}$. We also obtain the class $\cu_{w\le 0}$ that appears to be new and important (even in the case $\cu=\shtop$). In particular, we use this class to give a nice definition of weight Postnikov towers (that can also be called cellular towers) for arbitrary objects of $\cu$ (in contrast to Proposition 7.1.2(a) of ibid.). The notions of weight complex and weight spectral sequence are also new for this context.
Moreover, note that we currently have a good understanding of weight-exact localizations (achieved in \cite[\S4]{bos} and \cite[\S3]{bsnew}; cf. the beginning of \S7 of \cite{axstab}).
\end{rema}
\section{Introduction}
Water ice in the interior of asteroids and Near Earth Objects (NEOs) is of scientific and resource exploration interest \citep[e.g.][]{lodders99,zealey03}. Airless bodies gradually lose their ice to space by outward diffusion through the porous material, and the time-scale of this desiccation determines whether or not a body that initially contained ice still retains some of it in its interior. To address the question of how much ice a body retains after a given time, and at what depths, for given physical properties and environmental conditions, we obtain analytic solutions for 1) the latitude-dependent temperature field inside a fast-rotating and thermally equilibrated spherical body (section \ref{sec:T2d}), and 2) the time-dependent depth to retreating ice for a spherically symmetric body, including the effect of latent heat (section \ref{sec:i1d}).
A number of numerical models have been developed for icy bodies \citep{delbo15}. For example, \cite{guilbert11} have implemented a full three-dimensional model of ice evolution using spherical harmonics decomposition, and \cite{prialnik92} developed a numerical code for the thermal structure and composition of comet nuclei. \cite{schorghofer08a,schorghofer16a} investigated ice loss from the near-surface with an ensemble of one-dimensional models.
The approach chosen here differs from these previous works in that it seeks analytic solutions for the interior temperature field and ice retreat rates.
Analytical one-dimensional thermal models have been previously utilized for comets \citep[e.g.][]{klinger81,kuhrt84,mckay86}.
To obtain these solutions, simplifying assumptions are necessary, but because all parameter dependencies remain transparent, the results provide insight for a wide range of scales and parameters.
\begin{table}[tbh!]
$D$ ... vapor diffusivity \\
$L$ ... specific latent heat of water ice sublimation \\
$\bar Q$ ... mean annual insolation \\
$T$ ... temperature \\
$\bar T$ ... temperature averaged over surface area and orbit \\
$R$ ... body radius \\
$a$ ... semi-major axis of orbit \\
$a_\ell$ ... coefficients in a series expansion \\
$k_B$ ... Boltzmann constant \\
$r$ ... distance from body center \\
$r_i$ ... distance of ice table from body center \\
$t$ ... time \\
$t_D$ ... time to complete ice loss \\
$\Gamma$ ... Gamma function \\
$\zeta$ ... mean free path of water molecules \\
$\theta$ ... zenith angle (polar coordinate) \\
$\rho_s$ ... saturation vapor density \\
$\rho_v$ ... vapor density
\caption{Frequently used variables}
\end{table}
\section{Temperature in body interior}
\label{sec:T2d}
\subsection{Problem formulation}
The amplitude of a periodic surface temperature oscillation decays exponentially with depth.
A few such diurnal skin depths below the surface, the temperature varies little throughout a solar day, and below a few seasonal skin depths, it varies little even throughout an orbit around the sun. The spatial domain can be decomposed into a thin spherical shell, where lateral conduction is negligible, and the interior, where the temperature is cylindrically symmetric around the rotation axis of the body. The temperature in the deep interior of a body can hence be described by a two-dimensional solution, and does not require a three-dimensional description.
If the orbit of the asteroid does not change with time, the temperature in the deep interior is constant as well, and is a function of distance from the body center $r$ and zenith angle or co-latitude $\theta$, defined by the rotation axis of the body. The heat equation for a stationary temperature distribution $T(r,\theta)$ is
\begin{equation}
\nabla^2 T = 0
\label{eq:laplace}
\end{equation}
Throughout this work, it is assumed that the thermal conductivity is spatially uniform.
For small bodies radiogenic heating is negligible beyond the early solar system.
We can estimate how long it takes a body to thermally equilibrate. In a one-dimensional geometry, the time-scale for the propagation of a heat wave is $R^2/(2\kappa)$, where $\kappa$ is thermal diffusivity. In a spherical geometry, it proceeds faster, so we take $t_T = R^2/(6\kappa)$ as the equilibration time scale, where $R$ is the body radius.
For pure solid ice, the thermal conductivity would be $k\approx 3$~W/m\,K \citep{crc}. Assuming $k=1$~W/m\,K, a density $\rho=2000$~kg/m$^3$, and a heat capacity of $c=1000$~J/kg\,K for the regolith-ice mixture, the thermal diffusivity is $\kappa = k/(\rho c) \approx 5\times 10^{-7}$~m$^2$/s.
For $2R=1$~km, $t_T \approx 3\times 10^3$~yrs. Nearly all icy main belt asteroids should be thermally equilibrated, whereas dynamically young comets will not be thermally equilibrated.
We also numerically estimate the thickness of the shell where temperature changes over one orbit. The thermal skin depth is given by $\delta = \sqrt{k P /(\pi\rho c)}$, where $P$ is the period. For a typical orbital period of $P=5$~yr, a dry layer conductivity of $k=0.1$~W/m\,K, and a silicate heat capacity of $c=500$~J/kg\,K, the skin depth is about 2~m.
The idealized model is valid for bodies with a radius much larger than this seasonal skin depth.
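Both estimates are easy to reproduce. The following sketch (plain Python, using only the values quoted above; variable names are illustrative) recovers the $\sim 3\times 10^3$~yr equilibration time and the $\sim$2~m skin depth:

\begin{verbatim}
# Sketch: the two order-of-magnitude estimates quoted above.
import math

YR = 3.156e7                      # seconds per year
k, rho, c = 1.0, 2000.0, 1000.0   # W/(m K), kg/m^3, J/(kg K)
kappa = k / (rho * c)             # thermal diffusivity, ~5e-7 m^2/s
R = 500.0                         # m, body of diameter 2R = 1 km
print(R**2 / (6.0 * kappa) / YR)  # t_T ~ 3e3 yr

k_dry, c_sil, P = 0.1, 500.0, 5.0 * YR  # dry layer; 5-yr orbital period
delta = math.sqrt(k_dry * P / (math.pi * rho * c_sil))
print(delta)                      # seasonal skin depth ~ 2 m
\end{verbatim}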
\subsection{Solution ansatz}
In spherical coordinates with cylindrical symmetry, the two-dimensional Laplace equation (\ref{eq:laplace}) becomes
\begin{equation}
{\partial^2 T\over\partial r^2} +
\frac{2}{r} {\partial T\over\partial r} + {1\over r^2 \sin\theta} {\partial\over\partial\theta} \left(\sin\theta {\partial T\over\partial\theta} \right) = 0
\label{eq:3dlap}
\end{equation}
The partial differential equation (\ref{eq:3dlap}) can be decomposed with Legendre polynomials $P_\ell$. The temperature is expanded into orthogonal trigonometric polynomials
\begin{equation}
T(r,\theta) = \sum_{\ell=0}^\infty T_\ell(r) P_\ell (\cos\theta)
\label{eq:orthoexpansion}
\end{equation}
The eigenfunction property \citep{jeffrey}
\begin{equation}
{\partial\over\partial\theta} \left(\sin\theta {\partial P_\ell \over\partial\theta} \right) = - \ell(\ell+1) P_\ell
\end{equation}
decouples equation (\ref{eq:3dlap}) into
\begin{equation}
\frac{d^2 T_\ell}{dr^2}
+ \frac{2}{r}{d T_\ell\over d r} - {\ell(\ell+1)\over r^2} T_\ell = 0
\label{eq:rode}
\end{equation}
which has solutions
\begin{equation}
T_\ell(r) = A_\ell \left( {r\over R}\right) ^\ell + B_\ell \left({R\over r}\right)^{\ell+1}
\label{eq:gensolAB}
\end{equation}
The radial dependence is expressed relative to the body radius $R$.
The negative powers of $r$ have to vanish, $B_\ell=0$. To make the coefficients unitless, a constant temperature $\bar T$ can be factored out,
\begin{equation}
T_\ell(r) = \bar T a_\ell \left( {r\over R}\right) ^\ell
\label{eq:TofC}
\end{equation}
The temperature at the center of the body is $T(0)=a_0 \bar T$.
From the orthogonality relation
\begin{equation}
\int_{-1}^{1} P_m(x) P_n(x) dx = \frac{2}{2n+1} \delta_{mn}
\end{equation}
and (\ref{eq:orthoexpansion}) we have
\begin{equation}
T_n(r) = \frac{2n+1}{2}
\int T(r,\theta) P_n(\cos\theta) d\cos \theta
\label{eq:getcoeffs}
\end{equation}
which provides a way to calculate $a_n=T_n(R)/\bar T$ from the zonally averaged surface temperature $T(R,\theta)$.
For a body in thermal equilibrium, the total radial heat flux through a sphere around the center must vanish, at any depth,
\begin{equation}
0 = \int \frac{\partial T}{\partial r} dS = \frac{d}{dr} \int T dS
\end{equation}
where the integral is over the surface of the sphere. Hence the temperature averaged over the surface of the body, $\bar T$, must be the same as the temperature at the center of the body.
With this relation, the solution can be written as
\begin{equation}
T(r,\theta) = \bar T \sum_{\ell=0}^\infty a_\ell \left( {r\over R}\right) ^\ell P_\ell(\cos\theta)
\label{eq:fullsolution}
\end{equation}
where $a_\ell = T_\ell(R)/\bar T$ and $a_0=1$.
If the temperature is symmetric on the two hemispheres, the odd coefficients vanish, and
\begin{equation}
T(r,\theta) = \bar T
\left[ 1 + a_2 \left({r\over R}\right)^2 \left( \cos^2\theta-{1 \over 2}\right) + ... \right]
\end{equation}
The radial dependence involves only $r^0, r^2, r^4, ...$, which implies that the $\theta$-dependent terms ($\ell>0$) diminish rapidly for small $(r/R)$.
The slowest non-constant term decays as $r^2$. Hence, the depth-scale of this temperature change is determined by $r/R \approx 1/\sqrt{2}$, i.e., roughly 3/10 of a body radius below the surface.
\subsection{Fast rotator model as surface boundary condition}
A body's effective temperature is defined by
\begin{equation}
\epsilon\sigma T_{\rm eff}^4 = \frac{1}{4} (1-A) \bar Q
\label{eq:Teff}
\end{equation}
where $\sigma$ is the Stefan-Boltzmann constant, $A$ albedo, and $\bar Q$ the annual mean insolation (for a circular orbit the solar constant at the appropriate distance from the sun). The factor of 4 represents the ratio of the surface of a sphere to its cross-sectional area.
The fast rotator model \citep{lebofsky89} assumes the rotation of the body is sufficiently fast so that the temperature does not change with geographic longitude (local time), or, equivalently that the thermal inertia is infinite. This often is not a good approximation, but it is a well-known end-member thermal model of asteroids. It applies best to rocky bodies (high thermal inertia) that spin fast.
In the following it is also assumed that the body has zero axis tilt, so the incoming flux is simple to calculate.
The solar flux which falls onto each infinitesimal latitude band at co-latitude $\theta$ is distributed over a circle of latitude,
\begin{equation}
\epsilon\sigma T^4 = \frac{1}{\pi} (1-A) \bar Q \sin\theta
\end{equation}
The factor of $\pi$ represents the ratio of the circumference of a circle to its diameter.
In this simple situation, the surface temperature is
\begin{equation}
T(R,\theta)=T_{eq} \sin^{1/4} \theta
\end{equation}
and the temperature at the equator is
\begin{equation}
T_{eq} = \left[ \frac{(1-A) \bar Q }{\pi \epsilon\sigma} \right]^{1/4}
= \frac{\sqrt 2}{\pi^{1/4}} T_{\rm eff} \approx 1.062 \, T_{\rm eff}
\end{equation}
The temperature averaged over the surface area is
\begin{eqnarray}
\bar T &=& \frac{1}{2} \int^{\pi}_{0} T \sin\theta d\theta
= \frac{T_{eq}}{2} \int^{\pi}_{0} \sin^{5/4}\theta d\theta \\
&=& \frac{\sqrt\pi}{10} {\Gamma(1/8)\over\Gamma(5/8)} T_{eq}
\approx 0.93087 \, T_{eq}
\label{eq:Tbar}
\end{eqnarray}
where $\Gamma$ denotes the Gamma function.
Or, expressed in terms of $T_{\rm eff}$,
\begin{equation}
\bar T = \frac{\pi^{1/4}}{5\sqrt{2}} {\Gamma(1/8)\over\Gamma(5/8)} T_{\rm eff}
\approx 0.98882 \, T_{\rm eff}
\label{eq:TfromTeff}
\end{equation}
The average surface temperature (area-weighted) is about 1\% lower than the effective temperature.
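The Gamma-function prefactors in Eqs.~(\ref{eq:Tbar}) and (\ref{eq:TfromTeff}) are easily verified numerically (a sketch; \texttt{math.gamma} is part of the Python standard library):

\begin{verbatim}
# Sketch: the Gamma-function prefactors quoted above.
from math import gamma, pi, sqrt

print(sqrt(pi)/10 * gamma(1/8)/gamma(5/8))           # 0.93087 = Tbar/T_eq
print(pi**0.25/(5*sqrt(2)) * gamma(1/8)/gamma(5/8))  # 0.98882 = Tbar/T_eff
\end{verbatim}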
If the only source of energy is the sun, then it is always the case that $T_{\rm eff} \geq \bar T$. The effective temperature corresponds to the energy balance for a uniform surface temperature, and because the outward infrared radiation goes as the fourth power of temperature (Stefan-Boltzmann law), any deviation from uniformity leads to additional energy loss. For dust covered bodies (low thermal conductivity), this temperature can easily be 20~K lower \citep{schorghofer08a,schorghofer16a}, which has an enormous impact on sublimation rates.
\subsection{Solution for interior temperature}
We now determine the coefficients $a_\ell$ from the surface temperature.
Based on eqs.\ (\ref{eq:TofC},\ref{eq:getcoeffs}),
\begin{eqnarray}
a_\ell \bar T &=& \frac{2\ell +1}{2} T_{eq} I_\ell
\label{eq:ctoT}\\
I_\ell &=& \int_{-1}^1 (1-x^2)^{1/8} P_\ell(x) dx
\label{eq:Inu}
\end{eqnarray}
where $x=\cos\theta$.
Odd coefficients vanish, due to hemispheric symmetry.
\begin{widetext}
A special case of an integration formula given in \cite{jeffrey}, 7.132 is
\begin{equation}
\int_{-1}^1 (1-x^2)^{\lambda-1} P_\nu(x) dx =
{ \pi \Gamma^2(\lambda) \over
\Gamma\left(\lambda + \frac{\nu}{2}+\frac{1}{2} \right) \Gamma\left(\lambda-\frac{\nu}{2}\right) \Gamma\left(\frac{\nu}{2} + 1 \right) \Gamma\left( -\frac{\nu}{2} + \frac{1}{2}\right) }
\end{equation}
For $\lambda=9/8$, as needed in our case (\ref{eq:Inu}),
\begin{eqnarray}
I_\nu = \int_{-1}^1 (1-x^2)^{1/8} P_\nu(x) dx &=&
{ \pi \Gamma^2\left(\frac{9}{8} \right) \over
\Gamma\left(\frac{13}{8} + \frac{\nu}{2} \right) \Gamma\left(\frac{9}{8}-\frac{\nu}{2}\right) \Gamma\left(\frac{\nu}{2} + 1 \right) \Gamma\left( -\frac{\nu}{2} + \frac{1}{2}\right) }
\\ &=& {\pi \Gamma^2\left(\frac{1}{8} \right) \over
(5 + 4\nu) (1-4\nu) \Gamma\left(\frac{5}{8} + \frac{\nu}{2} \right) \Gamma\left(\frac{1}{8}-\frac{\nu}{2}\right)
\Gamma\left(\frac{\nu}{2} + 1 \right) \Gamma\left(-\frac{\nu}{2} + \frac{1}{2}\right) }
\label{eq:bigint2}
\end{eqnarray}
where we have used $\Gamma(x+1) = x \Gamma(x)$.
\end{widetext}
For $\nu=0$,
\begin{equation}
I_0 = {\sqrt\pi \over 5} { \Gamma\left(\frac{1}{8} \right) \over \Gamma\left(\frac{5}{8} \right) }
\end{equation}
From comparison with (\ref{eq:Tbar},\ref{eq:ctoT}), it is apparent that $a_0=1$, as we already determined above.
For $\nu=2$,
\begin{equation}
I_2 = - {\sqrt\pi \over 130 } {\Gamma\left(\frac{1}{8}\right) \over \Gamma\left(\frac{5}{8}\right) }
\end{equation}
The recursion relation, obtained from (\ref{eq:bigint2}),
\begin{equation}
I_{\nu+2} = { (4\nu - 1) (\nu+1) \over (13 + 4\nu) (\nu + 2) } I_\nu
\label{eq:Irecursion}
\end{equation}
involves no special functions. The first is
\begin{equation}
I_2 = - { 1 \over 26 } I_0
\end{equation}
Only the first recursion coefficient is negative, which means $I_0>0$, but $I_{2n}<0$ for $n>0$.
Combined with (\ref{eq:ctoT}), the recursion for the coefficients $a_n$ is
\begin{eqnarray}
a_n &=& \frac{2n+1}{2n-3} \frac{4n-9}{4n+5} \frac{n-1}{n} a_{n-2}
\label{eq:Crecursion}
\end{eqnarray}
Numerically,
$a_0 = 1$, $a_2 = -5/26 \approx -0.1923$, $a_4 = - 9/104 \approx -0.0865$, $a_6 \approx -0.0539$, ....
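As a cross-check, the following Python sketch (assuming \texttt{scipy} is available; function names are illustrative) evaluates the coefficients both from the recursion (\ref{eq:Crecursion}) and by direct quadrature of Eq.~(\ref{eq:getcoeffs}):

\begin{verbatim}
# Sketch: a_n from the recursion versus direct quadrature.
from math import gamma, pi, sqrt
from scipy.integrate import quad
from scipy.special import eval_legendre

def a_recursive(nmax):
    a = {0: 1.0}
    for n in range(2, nmax + 1, 2):
        a[n] = (2*n+1)/(2*n-3) * (4*n-9)/(4*n+5) * (n-1)/n * a[n-2]
    return a

Tbar_over_Teq = sqrt(pi)/10 * gamma(1/8)/gamma(5/8)   # = 0.93087

def a_direct(n):       # a_n = (2n+1)/2 * I_n * (T_eq/Tbar)
    I_n, _ = quad(lambda x: (1 - x**2)**0.125 * eval_legendre(n, x), -1, 1)
    return (2*n + 1)/2 * I_n / Tbar_over_Teq

a = a_recursive(6)
for n in (0, 2, 4, 6):
    print(n, round(a[n], 4), round(a_direct(n), 4))   # the pairs agree
\end{verbatim}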
Figure~\ref{fig:Tsolution}a shows the surface temperature for increasing orders in the series expansion, $T(R,\theta)/\bar T=\sum_{\ell=0}^n a_\ell P_\ell(\cos\theta)$, compared to the exact solution, $T=T_{eq} \sin^{1/4}\theta$. The deviation is largest at the poles, and that is welcome since the zero polar temperatures are an unrealistic aspect of the idealized model. The solution at $T(R/2,\theta)$ illustrates how quickly the temperature homogenizes with depth.
\begin{figure}[tbh!]
a)\\
\includegraphics[width=7.5cm]{sol_surface.pdf}\\
b)\\
\includegraphics[width=7.5cm]{sol_interior.pdf}
\caption{
Temperature distribution of a thermally equilibrated spherical fast-rotator with zero axis tilt.
a) Approximation of surface temperature $T\propto (\sin\theta)^{1/4}$ with a series of Legendre polynomials. At half body radius ($r=R/2$), the $\theta$-dependence is small.
b) Temperature distribution in the body interior, based on 20th order analytic solution.
Shown is the cross-section $T(r,\theta)$, and the solution is cylindrically symmetric around the rotation axis.
The color scale shows $T/\bar T$, where $\bar T$ is the temperature of the object averaged over its entire surface area and orbit. The white contours show $T/\bar T=1$, and the black contours $T/\bar T=0.9$.
\label{fig:Tsolution}}
\end{figure}
Figure~\ref{fig:Tsolution}b shows the interior temperature distribution as a function of $r/R$ and $\theta$. The cold polar areas are shallow, consistent with the above estimate that lateral temperature changes decay with a depth-scale of 3/10th of the body radius.
\subsection{Numerical values for interior temperatures}
Since the surface area averaged temperature $\bar T$ determines the interior temperature, we discuss here how $\bar T$ is related to orbital geometry and physical properties of the body.
Figure \ref{fig:Tfroma}a shows the results of numerical surface temperature calculations, carried out with a numerical 1D model for a range of latitudes that cover the entire globe. This standard thermophysical model is described in \cite{github}. The figure shows $\bar T$ as a function of thermal inertia. For zero axis tilt and high thermal inertia this temperature must agree with eqs.\ (\ref{eq:Teff},\ref{eq:TfromTeff}). At low thermal inertia, average temperature decreases significantly \citep{schorghofer16a}. A higher axis tilt also leads to lower average surface temperature, because the seasonal temperature amplitude becomes larger, and a larger amplitude leads to more thermal radiation to space.
\begin{figure}[tbh!]
a)\\
\includegraphics[width=7.5cm]{tav_fig.pdf}\\
b)\\
\includegraphics[width=7.5cm]{fig_avstu.pdf}
\caption{
a) Area-averaged surface temperature $\bar T$ according to numerical thermal model calculations for $a=3$~AU and a rotation period of 6~hr. A thermal inertia of 2000~J~m$^{-2}$~K$^{-1}$~s$^{-1/2}$ would represent a single block of rock or ice. Small thermal inertia is representative of dust-sized particles.
b) Estimated maximum interior temperature $\bar T_u$. The actual interior temperature (surface temperature) can be lower than $\bar T_u$ because of axis tilt or low thermal inertia.
For both panels, an albedo of 5\%, an emissivity of 0.95, and a circular orbit were assumed.
\label{fig:Tfroma}}
\end{figure}
The influence of orbital eccentricity $e$ can to some extent be assessed analytically. The mean annual insolation $\bar Q$ is proportional to $1/\sqrt{1-e^2}$ \citep[e.g.][]{ward74a,klinger81}.
Hence,
\begin{equation}
\bar Q = {1\over \sqrt{1-e^2}} \frac{S_o}{a^2}
\end{equation}
where $S_o$ is the solar constant at 1~AU and $a$ the semi-major axis.
The equivalent circular orbit would have radius $a (1-e^2)^{1/4}$.
Even for moderate eccentricity, this is not a large change compared to a circular orbit.
This does not imply that $\bar T$ has the same dependence on $e$ \citep{rubincam04}, but it roughly captures the magnitude of the effect.
For a fast rotator with zero axis tilt $\bar T = 0.99 T_{\rm eff}$, where the effective temperature (\ref{eq:Teff}) is calculated from the annual mean insolation. We call this temperature $\bar T_u$ because it appears to be an upper bound on $\bar T$,
\begin{equation}
\bar T_u = 0.9888 \left[{1\over \sqrt{1-e^2}} \frac{1-A}{4\epsilon\sigma} \frac{S_o}{a^2} \right]^{1/4}
\end{equation}
$\bar T$ may be colder due to axis tilt or low thermal inertia (Fig.~\ref{fig:Tfroma}), but based on the numerical evidence presented in Fig.~\ref{fig:Tfroma}a, $\bar T_u \geq \bar T$. The desiccation timescale obtained from $\bar T_u$ will correspondingly be a conservative estimate.
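$\bar T_u$ can be packaged as a short function of the orbital elements (a sketch; $S_o$ and $\sigma$ are standard constants, and the albedo and emissivity defaults are the values assumed in the figures):

\begin{verbatim}
# Sketch: the upper bound Tbar_u as a function of orbital elements.
def Tbar_u(a_AU, e=0.0, A=0.05, eps=0.95):
    S_o, sigma = 1361.0, 5.670e-8   # W/m^2 at 1 AU; W/(m^2 K^4)
    Qbar = S_o / (a_AU**2 * (1.0 - e**2)**0.5)
    return 0.9888 * ((1.0 - A) * Qbar / (4.0 * eps * sigma))**0.25

print(Tbar_u(3.0))   # ~159 K in the outer main belt
\end{verbatim}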
\section{Model of ice retreat}
\label{sec:i1d}
Here we consider long-term loss of ice by vapor diffusion through the overlying porous layer.
\subsection{Equation for ice retreat}
First we consider a fully spherically symmetric situation to calculate the mass loss. Since vapor moves much faster than ice and heat, it also equilibrates faster, so the vapor flux is stationary.
At the ice interface ($r=r_i$) the vapor flux is given by
\begin{equation}
J(r_i) = -\rho_i \frac{dr_i}{dt}
\label{eq:retreatdef}
\end{equation}
where $\rho_i$ is the density of ice, $r_i$ is the radial coordinate of the receding ice table, and the minus sign accounts for the inward motion of the ice table ($dr_i/dt<0$).
The vapor flux must be proportional to $1/r^2$,
\begin{equation}
J(r) = -D \frac{d\rho_v(r)}{d r} = \left( \frac{r_i}{r} \right)^2 J(r_i)
\end{equation}
where $D$ is the vapor diffusivity and $\rho_v$ the vapor density.
Upon integration,
\begin{equation}
\rho_v(r) = {r_i^2} \frac{J(r_i)}{D} \left(\frac{1}{r} - \frac{1}{R} \right)
\label{eq:misc}
\end{equation}
The boundary values for the vapor density are $\rho_v(r=R)=0$ on the body's surface and $\rho_v(r=r_i)=\rho_s$ at the ice interface, where $\rho_s$ is the saturation vapor density at temperature $T(r_i)$.
Hence
\begin{equation}
\rho_v(r) = \rho_s { \frac{1}{r} - \frac{1}{R} \over \frac{1}{r_i} - \frac{1}{R} }
\end{equation}
Taking the derivative thereof at $r=r_i$,
\begin{equation}
J(r_i) = \frac{D \rho_s}{ r_i (1- r_i/R)}
\label{eq:flux3}
\end{equation}
(If $r_i$ was close to $R$, $r_i=R-z$, this would reduce to $J(r_i) = D \rho_s / z $, as it should.)
Combining (\ref{eq:flux3}) with (\ref{eq:retreatdef}),
\begin{equation}
\frac{dr_i}{dt} = -D \frac{\rho_s}{\rho_i} \frac{1}{ r_i (1- r_i/R)}
\label{eq:drdt}
\end{equation}
$\rho_s$ only depends on temperature, so this is a differential equation that can be integrated over time.
\subsection{Solution for spherically averaged model}
For constant temperature (1-dimensional spherical model), $\rho_s(T)$ becomes a constant, $\rho_s(\bar T)$, and the differential equation (\ref{eq:drdt}) integrates to
\begin{equation}
-r_i \left(1- \frac{r_i}{R} \right) dr_i = D \frac{\rho_s}{\rho_i} dt
\end{equation}
and upon integration from $R$ to $r_i$, and 0 to $t$,
\begin{equation}
\frac{R^2}{6} - \frac{r_i^2}{2} + \frac{r_i^3}{3R} = D \frac{\rho_s}{\rho_i} t
\label{eq:tofr}
\end{equation}
The time to complete desiccation $t_D$ is given by $r_i=0$,
\begin{equation}
t_D = \frac{R^2}{6D} \frac{\rho_i}{\rho_s(\bar T)}
\label{eq:tD}
\end{equation}
In a planar situation the factor of 6 would be 2 instead.
In non-dimensional variables $r'=r/R$ and $t'=t/t_D$, (\ref{eq:tofr}) becomes
\begin{equation}
1 - 3 r_i'^2 + 2 r_i'^3 = t'
\label{eq:tofrprime}
\end{equation}
The ice has receded to half the body radius ($r_i'=1/2$) exactly at $t=t_D/2$. Only 1/8th of the ice volume is left at this stage. Half the ice volume is lost at $t/t_D = 2 -3/\sqrt[3]{4}= 0.11$, that is, after 11\% of the total desiccation time. Two thirds are lost after $t/t_D=5/3-3^{1/3}$ or 22\% of the time.
The inverse of (\ref{eq:tofrprime}), $r'_i(t')$, is the solution to a cubic equation.
With $r_i'= u + 1/2$ it turns into the form
\begin{equation}
u^3 - \frac{3}{4} u + \frac{1-2t'}{4} =0
\end{equation}
The discriminant is obtained as
\begin{equation}
\Delta
= \frac{27}{4} t' (1-t') \geq 0
\end{equation}
and hence there are three real roots that can be expressed by trigonometric functions.
The roots are
\begin{equation}
u_k = \cos\left(\frac{1}{3} \arccos(2t'-1) - \frac{2\pi k}{3} \right) \quad k=0,1,2
\end{equation}
The physically relevant one of the three solutions turns out to be $k=1$.
Undoing the reductions,
\begin{equation}
\frac{r_i(t)}{R} = \frac{1}{2} + \cos\left(\frac{1}{3} \arccos\left(2\frac{t}{t_D}-1\right) - \frac{2\pi}{3} \right)
\label{eq:roft}
\end{equation}
Figure~\ref{fig:cubic} shows this universal solution. The ice recedes fastest at the beginning and at the end. The retained ice volume is proportional to $(r_i/R)^3$, and this fraction is also plotted in Figure~\ref{fig:cubic}.
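The closed-form retreat (\ref{eq:roft}) and the milestones quoted above can be verified directly (a Python sketch):

\begin{verbatim}
# Sketch: universal retreat curve and the quoted milestones.
import numpy as np

def r_i_over_R(tp):                    # tp = t/t_D in [0, 1]
    th = np.arccos(2.0*np.asarray(tp) - 1.0) / 3.0
    return 0.5 + np.cos(th - 2.0*np.pi/3.0)

print(r_i_over_R(0.5))                 # 0.5: only 1/8 of the volume left
t_half = 2.0 - 3.0/4.0**(1.0/3.0)      # 0.1101
print(t_half, r_i_over_R(t_half)**3)   # half the ice volume is gone
\end{verbatim}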
\begin{figure}
\includegraphics[width=7.5cm]{cubic_sol.pdf}
\caption{Time dependence of ice retreat in a spherically averaged model according to equation (\ref{eq:roft}). The retreat is also shown in terms of relative volume retained.}
\label{fig:cubic}
\end{figure}
The depth to the ice table is
\begin{equation}
R - r_i(t) = \frac{R}{2} - R\cos\left(\frac{1}{3}\arccos\left(2\frac{t}{t_D} - 1 \right) - \frac{2\pi}{3} \right)
\label{eq:doft}
\end{equation}
For small $t$ this must reduce to the planar case.
Indeed, the series expansion is
\begin{equation}
R - r_i = R \sqrt{ t \over 3 t_D} + O(t) =
\sqrt{2 D t \frac{\rho_s}{\rho_i}} + O(t)
\end{equation}
The retreat speed as a function of time is
\begin{equation}
-\frac{d r_i}{dt} = \frac{R}{3} {\cos\left(\frac 1 3 \arcsin\left( 1 - 2 \frac{t}{t_D} \right) \right) \over \sqrt{ t(t_D -t)} }
\end{equation}
The outgassing rate is
$- \rho_i 4\pi r_i^2 (d r_i / dt)$. It is largest at the beginning and decreases monotonically toward zero. For example, at $t = t_D/2$ the outgassing rate is $\rho_i (2\pi /3) R^3/t_D$.
\subsection{Desiccation time}
To determine absolute desiccation time, we need to choose values for body size $R$, body temperature $\bar T$, and vapor diffusivity $D$. The saturation vapor density $\rho_s(\bar T)$ can be calculated from temperature with established formulae.
The saturation vapor pressure is \citep{bryson74,sack93,murphy05}
\begin{equation}
\ln p_s = b - \frac{m L}{k_B T}
\label{eq:psv}
\end{equation}
Here, $m$ is the mass of a water molecule, $k_B$ the Boltzmann constant, and $L$ the specific latent heat of ice.
For water ice, $m L/k_B \approx 6140$~K (0.53~eV) and $b\approx 28.9$ \citep{murphy05}.
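A minimal implementation of Eq.~(\ref{eq:psv}), combined with the ideal gas law, reads as follows (a sketch; the molecular mass corresponds to 18~u):

\begin{verbatim}
# Sketch: saturation vapor pressure/density over ice,
# with b = 28.9 and mL/k_B = 6140 K.
import math

K_B = 1.380649e-23    # J/K, Boltzmann constant
M_H2O = 2.99e-26      # kg, mass of one water molecule (18 u)

def p_s(T):
    """Saturation vapor pressure over ice, Pa."""
    return math.exp(28.9 - 6140.0 / T)

def rho_s(T):
    """Saturation vapor density, kg/m^3."""
    return M_H2O * p_s(T) / (K_B * T)

print(p_s(180.0), rho_s(180.0))   # ~5e-3 Pa, ~7e-8 kg/m^3
\end{verbatim}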
The vapor diffusion coefficient $D$ depends on pore sizes (related to grain size), pore structure, and temperature. In a free ideal gas it would be $(1/3) \bar v_{\rm th} \zeta$, where $\bar v_{\rm th}$ is the mean thermal speed and $\zeta$ the mean free path. For a Maxwell distribution of velocities $\bar v_{\rm th} = \sqrt{8 k_B T/(\pi m)}$. In the current context, the water molecules migrate by adsorption and desorption rather than collisions between water molecules.
In this case, a Maxwell distribution may no longer be applicable, but the change to the
mean thermal speed is negligible compared to other uncertainties.
The pore size $\zeta$ in the interior is related to grain size, often a major unknown \citep{asphaug02rev}.
\cite{herique18} indicate that regolith on asteroidal surfaces has a size range from microns to a meter, and the internal aggregate may be coarser than that.
We choose a value of $\zeta=1$~cm and note that $t_D$ is inversely proportional to $\zeta$, while the body size $R$ corresponding to a fixed desiccation time is proportional to $\sqrt\zeta$.
In a porous medium, $D$ also depends on porosity $\phi$, but the ice density $\rho_i$ is also proportional to porosity. So to leading order the two porosity factors cancel and we take $\rho_i/\phi = 927$ kg/m$^3$ and $D/\phi =(1/3) \bar v_{\rm th} \zeta$, because we are essentially only choosing a value for $\rho_i/D$.
We choose an average surface temperature $\bar T$, a body size $R$, and a vapor diffusion mean free path $\zeta$, and from these calculate vapor diffusivity $D$ and the desiccation time $t_D$. Figure~\ref{fig:thenumbers}a shows desiccation times as a function of $\bar T$ and for two body diameters.
Figure~\ref{fig:thenumbers}b is based on the same equations as Figure~\ref{fig:thenumbers}a, but instead of desiccation time for fixed body diameters, it shows the threshold body diameter for fixed desiccation times (ages). Bodies larger than this threshold diameter have retained some ice in their interior, if they started out as ice-rich objects, while bodies smaller than the threshold diameter can be expected to have lost all ice.
(Here, $\bar T$ can be obtained by any independent means, and this result is not limited to the fast rotator with zero axis tilt that was used for the two-dimensional temperature field.)
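This mapping from ($\bar T$, $R$, $\zeta$) to $t_D$ can be sketched in a few lines (not the production code behind the figures; the saturation vapor density is restated for self-containment):

\begin{verbatim}
# Sketch of Eq. (tD): desiccation time from Tbar, R, and zeta.
import math

K_B, M_H2O, YR = 1.380649e-23, 2.99e-26, 3.156e7

def rho_s(T):
    return M_H2O * math.exp(28.9 - 6140.0/T) / (K_B * T)

def t_D_years(Tbar, R, zeta=0.01, rho_i_over_phi=927.0):
    v_th = math.sqrt(8.0*K_B*Tbar / (math.pi*M_H2O))  # mean thermal speed
    D_over_phi = v_th * zeta / 3.0                    # vapor diffusivity
    return R**2 / (6.0*D_over_phi) * rho_i_over_phi / rho_s(Tbar) / YR

print(t_D_years(160.0, 500.0))   # ~8e8 yr for 2R = 1 km at 160 K
\end{verbatim}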
\begin{figure}[tbh!]
a)\\
\includegraphics[width=7.5cm]{fig_tD.pdf}\\
b)\\
\includegraphics[width=7.5cm]{fig_RD.pdf}
\caption{a) Desiccation time $t_D$ in Earth years as a function of mean surface temperature $\bar T$ for pore size $\zeta=1$~cm and body diameters $2R$, according to equation (\ref{eq:tD}) (solid lines).
The dashed lines are desiccation times $t_L$ with the latent heat effect included and thermal conductivity of the dry material of $k=0.1$~W/mK, according to equations (\ref{eq:DeltaTfinal}) and (\ref{eq:tL}).
b) Same as (a) but plotting threshold diameter for fixed desiccation times as a function of $\bar T$.
\label{fig:thenumbers}}
\end{figure}
\subsection{Latent heat effect}
The latent heat consumed at the ice interface is
\begin{equation}
Q = \rho_i \frac{dr_i}{dt} L
\label{eq:Q}
\end{equation}
where $L$ is the specific latent heat.
We consider a stationary temperature field where the latent heat of sublimation is compensated by heat flux through the ice-free shell. Heat can also be drawn toward the ice table from the sensible heat of the body interior. The following arguments justify that neglecting sensible heat should be a good assumption:
1) By the end of the desiccation process, the sensible heat will have to be returned, so sensible heat acts in part only to redistribute the energy over time.
2) As is well known, the latent heat of melting corresponds to the sensible heat of a temperature change of about 70$^\circ$C, and the latent heat of sublimation is about 7 times larger than that of melting, so the sensible heat stored in the entire body is small compared to the latent heat required to sublimate all of the ice.
The stationary solution is, based on (\ref{eq:gensolAB}),
\begin{equation}
T(r) = \left\{
\begin{array}{ll}
A + B/r \quad\mbox{for}\quad r>r_i \\
C \quad\mbox{for}\quad r<r_i
\end{array}
\right.
\end{equation}
From the boundary values, it quickly follows that
\begin{equation}
B = - {\bar T-T(r_i) \over \frac{1}{r_i} - \frac{1}{R} }
\end{equation}
The heat flux in the ice-free domain is
\begin{equation}
H = -k \frac{\partial T}{\partial r} = - k \frac{B}{r^2}
\end{equation}
where $k$ is the thermal conductivity of the ice-free domain.
Balancing the heat flux with the latent heat,
$H(r_i) = Q$,
\begin{equation}
B = - \frac{Q}{k} r_i^2
\end{equation}
The temperature at the ice table determines the vapor pressure.
The reduction in temperature due to latent heat is
\begin{equation}
\bar T - T(r_i) = \frac{Q}{k} r_i \left(1-\frac{r_i}{R}\right)
\label{eq:DeltaT}
\end{equation}
Combining (\ref{eq:drdt}), (\ref{eq:Q}), and (\ref{eq:DeltaT}) yields a simple relation
\begin{equation}
\Delta T = \bar T - T(r_i)
= \frac{D}{k} \rho_s L
\label{eq:DeltaTfinal}
\end{equation}
The temperature difference depends on neither $R$ nor directly on $r_i$, so the ice table is characterized by a single temperature $T_i \equiv T(r_i)$.
The simplicity of (\ref{eq:DeltaTfinal}) can be understood in the following way.
At the ice table, the heat flux balances evaporative cooling:
\begin{equation}
-k \frac{\partial T}{\partial r} = - D \frac{\partial \rho_v}{\partial r} L
\end{equation}
Since temperature $T$ and vapor density $\rho_v$ both obey the Laplace equation in the same geometry, the solutions are geometrically similar (the heat flux $H$ and the vapor flux $J$ are both proportional to $1/r^2$) and the ratio of the local derivatives can be replaced with the ratio of the differences between the respective boundary values, $\Delta T$ and $\Delta\rho_v=\rho_s$,
\begin{equation}
-k {\Delta T} = - D \rho_s L
\end{equation}
This reproduces (\ref{eq:DeltaTfinal}).
With $\rho_s(T(r_i))$, (\ref{eq:DeltaTfinal}) is a nonlinear equation for $T(r_i)$.
For small differences, the relation can be linearized
\begin{equation}
\Delta\rho_s = \frac{d\rho_s}{dT} \Delta T
\label{eq:Deltarhos}
\end{equation}
The saturation vapor pressure (\ref{eq:psv}) combined with the ideal gas law,
\begin{equation}
k_B T \rho_s = m p_s
\label{eq:idealgas}
\end{equation}
yields
\begin{equation}
\frac{d\rho_s}{dT} = \frac{\rho_s}{T} \left(\frac{m L}{k_B T} - 1 \right)
\approx \frac{\rho_s}{T} \frac{m L}{k_B T}
\label{eq:drhosdT}
\end{equation}
Combining (\ref{eq:DeltaTfinal}), (\ref{eq:Deltarhos}), and (\ref{eq:drhosdT}),
\begin{equation}
\frac{\Delta\rho_s}{\rho_s} \approx
\frac{D}{k} \rho_s \frac{m L^2}{k_B T^2}
= \frac{D}{kT} p_s \left(\frac{m L}{k_B T} \right)^2
\label{eq:lcriterion}
\end{equation}
Latent heat becomes noticeable when $\Delta\rho_s/\rho_s \approx 1/2$. For $D/\phi=3$~m$^2$/s and $k=0.1$~Wm$^{-1}$K$^{-1}$, the cross-over occurs at $T\approx 180$~K. In bodies warmer than that, latent heat will slow down the ice loss. Grain size changes both $D$ and $k$ in the same direction, so the ratio $D/k$ is less dependent on grain size than $D$ and $k$ are individually.
Since $\Delta T$ is constant in time, the only change in (\ref{eq:drdt}) is that $\rho_s$ has to be evaluated at $T_i$ instead of $\bar T$, and (\ref{eq:tD}) and (\ref{eq:roft}) are still valid. We rename $t_D$ to $t_L$ when latent heat is taken into account,
\begin{equation}
t_L = \frac{R^2}{6D} \frac{\rho_i}{\rho_s(T_i)}
\label{eq:tL}
\end{equation}
\begin{figure}
\includegraphics[width=7.5cm]{fig_Tdiff.pdf}
\caption{The temperature of the ice (red line) versus the average surface temperature $\bar T$ shows the temperature difference between the surface and the ice due to latent heat according to equation (\ref{eq:DeltaTfinal}). The dotted line is a 1:1 relation (no difference). Here, $\zeta=1$~cm and $k= 0.1$~W/m\,K.
\label{fig:latentTdiff}}
\end{figure}
To obtain $T_i$ the nonlinear equation (\ref{eq:DeltaTfinal}) with (\ref{eq:psv},\ref{eq:idealgas}) has been solved numerically.
Figure~\ref{fig:latentTdiff} shows the temperature difference due to the latent heat effect.
At low temperature the latent heat has no effect.
As anticipated, for temperatures above about 180~K, latent heat becomes noticeable, and above 190~K it is significant.
At higher temperatures, the ice is significantly cooler than the average surface, which prolongs desiccation times.
Figure~\ref{fig:thenumbers} also includes the desiccation time $t_L$ (dashed lines).
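A sketch of this root-finding step (Python with \texttt{scipy}; $L$ follows from $mL/k_B=6140$~K, and $D$ is evaluated at $\bar T$ for simplicity):

\begin{verbatim}
# Sketch: ice-table temperature T_i from the nonlinear balance
# Tbar - T_i = (D/k) rho_s(T_i) L.
import math
from scipy.optimize import brentq

K_B, M_H2O = 1.380649e-23, 2.99e-26
L_SUB = 6140.0 * K_B / M_H2O    # ~2.8e6 J/kg
K_DRY = 0.1                     # W/(m K), dry-layer conductivity

def rho_s(T):
    return M_H2O * math.exp(28.9 - 6140.0/T) / (K_B * T)

def T_ice(Tbar, zeta=0.01):
    v_th = math.sqrt(8.0*K_B*Tbar / (math.pi*M_H2O))
    D = v_th * zeta / 3.0       # evaluated at Tbar for simplicity
    f = lambda T: Tbar - T - (D/K_DRY) * rho_s(T) * L_SUB
    return brentq(f, 1.0, Tbar) # f(1) > 0 and f(Tbar) < 0 bracket the root

for Tbar in (160.0, 180.0, 200.0):
    print(Tbar, round(T_ice(Tbar), 1))  # depression grows above ~180 K
\end{verbatim}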
\subsection{Bilobate shape}
Equation (\ref{eq:drdt}) can also be used with the full $r$ and $\theta$ dependent temperature field, although in its derivation lateral vapor flux was neglected. Nevertheless, this still provides insight into how pronounced of a bilobate shape will emerge for the retained ice.
In non-dimensional variables $t' = t/t_D$ and $r'=r/R$, (\ref{eq:drdt}) becomes
\begin{equation}
\frac{dr'}{dt'} = -\frac{\rho_s(T)}{6\rho_s(\bar T)} {1\over r'(1-r')}
\label{eq:drdtprime}
\end{equation}
where $r'$ and $T$ are both functions of $\theta$ and $t$. The solution $r'(\theta,t)$ only depends on $\bar T$, since $T(r',\theta)$ also follows from $\bar T$.
Figure~\ref{fig:dumbbell} shows the numerically integrated results for the shape of the ice retained. The shallowest contour corresponds to $0.1 t_D$, and ice has retreated substantially at the equator, but not at the poles. Ultimately there is a pinch time $<t_D$ when the remaining ice is split into two hemispheric reservoirs of equal volume. Ice loss in the polar region is small even after a long time. These cold spots can harbor near-surface ice reservoirs, as long as they have remained cold throughout the body's orbital history.
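Substituting $g=3r'^2-2r'^3$, for which $dg/dt'=6r'(1-r')\,dr'/dt'=-\rho_s(T)/\rho_s(\bar T)$, removes the surface singularity of Eq.~(\ref{eq:drdtprime}). The following Python sketch integrates this form (the interior temperature series is truncated at $a_4$, which keeps the polar temperature finite; step sizes and grids are illustrative):

\begin{verbatim}
# Sketch: latitude-dependent retreat at Tbar = 170 K.
import numpy as np
from scipy.special import eval_legendre

K_B, M_H2O = 1.380649e-23, 2.99e-26

def rho_s(T):                              # vectorized
    return M_H2O * np.exp(28.9 - 6140.0/T) / (K_B*T)

def r_from_g(g):                           # invert 3r'^2 - 2r'^3 = g
    return 0.5 + np.cos(np.arccos(1.0 - 2.0*g)/3.0 - 2.0*np.pi/3.0)

Tbar, a = 170.0, {0: 1.0, 2: -5/26, 4: -9/104}
theta = np.linspace(0.0, np.pi/2, 91)      # pole to equator
g, dt = np.ones_like(theta), 1e-3          # g = 1 at the surface

for _ in range(int(0.3/dt)):               # integrate to t' = 0.3
    rp = r_from_g(np.clip(g, 0.0, 1.0))
    T = Tbar*sum(an*rp**n*eval_legendre(n, np.cos(theta))
                 for n, an in a.items())
    g = np.clip(g - dt*rho_s(T)/rho_s(Tbar), 0.0, 1.0)

rp = r_from_g(g)
print(rp[0], rp[-1])   # ice table at the pole vs. at the equator
\end{verbatim}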
This pattern of desiccation is notable for MBCs, which may become active when collisions or mass shedding from rotational destabilization excavate near-surface ice \citep{hsieh09,hirabayashi2015_rotationalmassshedding,haghighipour2016_mbcimpacts}. It suggests that icy asteroids with sufficiently abundant ice to produce sublimation-driven activity close to their surfaces could be common, provided that the axis tilt is small and that activation events occur near the poles of such objects.
\begin{figure}
\includegraphics[width=7.5cm]{retreatode_170.pdf}
\caption{Time and latitude dependent ice retreat according to equation (\ref{eq:drdtprime}) for $\bar T= 170$~K. White contours indicate ice table in time steps of $0.1t_D$, up to $0.5t_D$. Background color is temperature as in Figure~\ref{fig:Tsolution}b.}
\label{fig:dumbbell}
\end{figure}
In the present model the regolith matrix is assumed to stay in place after the ice has retreated. If the ice fraction were higher, material could erode away as the ice sublimated, potentially leaving behind an object with a dumbbell-like bilobate structure. Several studies suggest that the bilobate or contact binary structures observed for many comets and some small asteroids
\citep[e.g.,][]{harmon2010_8pcontactbinary,harmon2011_103pradar,magri2011_1996hw1contactbinary,jorda2016_67pshape,agarwal2017_binary288p}
are due to
the merger of a binary pair after tidal decay or
a low-speed collision of unrelated objects
\citep[e.g.,][]{magri2011_1996hw1contactbinary,taylor2014_tidalendstatebinaries,massironi2015_bilobate67porigin,jutzi2017_bilobate67porigin}. Differential sublimation rates could be an alternative mechanism for the formation of such structures.
As the ice retreats, the rotational inertia around the polar axis will become smaller than the rotational inertia of the other two principal axes, and the orientation of the object's rotation axis is expected to change. On the other hand, small bodies are never perfectly spherical, and the shape defines a preferred rotation axis that is robust with respect to small changes in inertia. If the body does reorient, the polar region is quickly devolatilized.
\subsection{Application to specific populations}
Desiccation on a global scale for an entire body is governed by the temperature $\bar T$. It is not determined by the body's effective temperature, which is larger than $\bar T$, nor by perihelion temperatures, which are only relevant for relatively shallow depths. However, $\bar T$ depends on more than just an object's semi-major axis. The analysis presented in Fig.~\ref{fig:Tfroma}a shows how $\bar T$ also depends on thermal inertia and axis tilt. A fast rotator with zero axis tilt will have the highest $\bar T$, and therefore serves as a ``hot'' end-member case.
Figure~\ref{fig:thenumbers2} shows desiccation rates for this hot end-member case as a function of semi-major axis $a$. As in Fig.~\ref{fig:thenumbers} these results assume a mean free path of $\zeta=1$~cm for water molecules and an albedo of 5\%. Desiccation times are proportional to $R^2$ and inversely proportional to $\zeta$.
\begin{figure}[tbh!]
a)\\
\includegraphics[width=7.5cm]{fig_tD2.pdf}\\
b)\\
\includegraphics[width=7.5cm]{fig_RD2.pdf}
\caption{Same as Figure~\ref{fig:thenumbers} but plotting desiccation time and threshold diameters for fixed desiccation times as functions of semi-major axis. See caption of Figure~\ref{fig:thenumbers} for further details. Plotted functions use $\bar T=\bar T_u$, and thus represent pessimistically short desiccation times.
\label{fig:thenumbers2}}
\end{figure}
The Beagle family has an estimated age of 10~Ma and is centered at 3.157~AU with orbital eccentricities around 0.15, and contains the main-belt comet (MBC) 133P/Elst-Pizarro \citep{nesvorny08}. For $\zeta = 1$~cm, the size threshold for complete desiccation of a Beagle family asteroid is $R=41$~m. If we take $0.11 t_D$, when half the ice is lost, the threshold is 125~m. The smallest known Beagle family members are significantly larger than that \citep[$R\gtrsim1$~km, assuming albedos of $\sim$5\%;][]{nesvorny2015_hcmfamilies_pds}, so even for larger $\zeta$, all known Beagle family members should have retained most of their ice.
MBCs 133P, 176P, 238P, 288P, 313P, 324P, 358P, P/2013 R3, and P/2016 J1 have semi-major axes that range from 3.0~AU to 3.2~AU \citep{hsieh06,snodgrass17}. Once the total insolation $\bar Q$ is adjusted for orbital eccentricity, the range of equivalent semi-major axes is still almost the same. Objects with radii of $R\gtrsim 0.5$~km, which include all of the known MBCs, have desiccation time scales that exceed the age of the solar system. Many MBCs may be much younger, being apparent members of asteroid families formed in catastrophic disruption events of larger parent bodies in the relatively recent past \citep[cf.][]{hsieh2018_mbcfamilies}. Recent formation of MBCs is also supported by the fact that they exhibit comet-like activity, implying the presence of ice in near-surface layers and not only in their deep interiors. Generally speaking, in the outer main belt ($a\gtrsim 2.8$~AU) bodies with radii larger than a few km ($R\gtrsim 3$~km for $\zeta=1$~cm) are expected to have retained ice.
Although collisionally sculpted, the current asteroid size distribution arose early in solar system history and has undergone little evolution since; it is nearly fossilized \citep{bottke05}. Nevertheless, small bodies are continually formed through collisions and either gradually ejected or destroyed by further collisions.
\cite{granvik16} conducted Yarkovsky effect modeling on a large number of asteroids, and found that most 100~m size objects were ejected over the integration period of 100~Myr. Hence, due to the Yarkovsky effect alone, asteroids of 100~m and smaller can be expected to be younger than 100~Myr.
The typical lifetimes of 100~m asteroids against collisional destruction have been similarly estimated to be roughly 100~Myr \citep{bottke2005_astcollisionaldepletion}.
Objects of this size and age have retained some of their ice at temperatures below about 160~K, which includes all bodies beyond a semi-major axis of 3.0~AU. Hence, only in the outermost main belt are 100~m-sized asteroids expected to retain some ice, since they are likely to be $<$100~Myr old.
Objects with $a>2.5$~AU (i.e., in the middle and outer asteroid belt) and $R>7$~km should have been able to retain ice in their interiors over the age of the solar system, assuming that they formed with ice and that ice survived the period of radiogenic heating. This is valid for vapor mean free paths of up to $\zeta=1$~cm. (Ceres, at $a=2.77$~AU, retained ice within the top meter of the surface \citep{prettyman17} only because of its very low thermal inertia and small axis tilt \citep{schorghofer16a}.)
The NEO population covers a wide range of semi-major axes, but many NEOs originate in the main belt, and especially from the inner main belt ($a<2.5$~AU) \citep{binzel15rev}.
Nearly a thousand NEOs have diameters larger than 1~km, but most are smaller (https://cneos.jpl.nasa.gov/\allowbreak{}stats/\allowbreak{}size.html).
For an NEO with a diameter of 1~km or less to have retained ice in its interior, one of the following conditions would have to be met:
1) a semi-major axis in the outer belt or beyond,
2) a mantle of very low thermal inertia, which lowers interior temperature,
3) a young age due to a recent break-up from an ice-rich body, or
4) a stable and moderately small axis tilt that would maintain cold polar regions.
\section{Conclusions}
An idealized model of interior temperatures enables us to evaluate ice loss over a wide range of scales and parameters. For the surface temperature distribution of a fast rotator with a rotation axis perpendicular to the orbital plane, the interior equilibrium temperature is given by eqs.\ (\ref{eq:fullsolution}) and (\ref{eq:Crecursion}). Lateral temperature variations decrease with a depth-scale of about 3/10th of the body radius, and the average surface temperature $\bar T$ is representative of much of the body interior. This assessment also applies to thermally equilibrated spherical bodies with other surface temperature distributions, only $\bar T$ needs to be determined by other means than eqs.\ (\ref{eq:Teff},\ref{eq:TfromTeff}).
The surface-area-averaged and orbit-averaged surface temperature $\bar T$ plays a key role in assessing ice loss rates, because it represents the temperature of the body center. It depends, among other factors, on semi-major axis, thermal inertia, and spin axis tilt. Numerical exploration shows that a fast rotator with zero axis tilt represents a hot end member case, with a temperature about 1\% lower than the effective temperature, eq.\ (\ref{eq:TfromTeff}).
For a spherically averaged model, the time evolution of the ice retreat is obtained analytically (\ref{eq:tofr},\ref{eq:roft}), and the time until complete desiccation $t_D$ is given by eq.\ (\ref{eq:tD}); half of the ice mass is lost after $0.11 t_D$, and 7/8th are lost after $t_D/2$. Figure~\ref{fig:thenumbers} shows results for the desiccation time scale.
The model equations also enable us to investigate the latitude dependence of ice retreat. Figure~\ref{fig:dumbbell} illustrates the emergence of a bilobate structure. When the tilt of the rotation axis is not large, the polar regions retain subsurface ice long after the center of the body has lost its ice. Hence, cold polar areas may harbor ice on NEOs and small main belt asteroids that are otherwise devoid of ice.
The latent heat of the sublimating ice causes the ice to be colder than the surface average. Within a thermally equilibrated and spherically averaged model, this temperature difference is given by (\ref{eq:DeltaTfinal}) and is independent of body size and remains constant as the ice retreats toward the body center. These temperatures depend on the physical properties of the material, and for the values provided above, latent heat starts to significantly retard ice loss at mean surface temperatures above 190~K. Desiccation times for temperatures exceeding this regime are calculated based on numerical solutions to a nonlinear equation.
Applying these formulae we can make inferences about specific small body populations. First, all known Beagle family members should have been able to retain ice from their parent body over the age of the family ($\sim$10~Myr). Next, in the outer belt, bodies with radii larger than a few km should be able to retain ice over the age of the solar system, and in the middle belt and beyond, bodies need to be nearly 10~km in diameter to have been able to retain most of their ice over this duration. These are conservative estimates because low thermal inertia and high axis tilt lower interior temperatures below the value used for these estimates and many dynamically younger objects are embedded in the main belt.
Lastly, NEOs (most of which are necessarily small) should only be able to retain ice under particularly favorable circumstances, such as very young ages, semi-major axes in the outer belt region, extremely low thermal inertia, or a small spin axis tilt and stable spin axis orientation.
\vspace{1em}
{\bf Acknowledgments:}
We thank Robert Jedicke, Karen Meech, and Abel M\'endez for insightful discussions.
This material is based upon work supported by the National Aeronautics and Space Administration under Grant No.\ 80NSSC17K0723 through the Solar System Workings Program and through the NASA Solar System Exploration Research Virtual Institute 2016 (SSERVI16) Cooperative Agreement (NNH16ZDA001N).
No data products to report.
\section{Exact solutions with electromagnetic fields}\label{S3}
Now we are going to apply the charged field equations of TEGR, Eqs. (\ref{fe}) and (\ref{fe1}), to the flat horizon spacetime, which directly gives rise to the following vierbein, written in terms of cylindrical coordinates ($t$, $r$, $\phi$, $z$) (see also \cite{CGSV13}):
\begin{equation}\label{tetrad}
\hspace{-0.3cm}\begin{tabular}{l}
$\left({b_{i}}^{\mu}\right)={\rm diag}\left( \sqrt{A(r)}, \; \frac{1}{\sqrt{A_1(r)}}, \; r, \; r\right)$,
\end{tabular}
\end{equation}
where $A(r)$ and $A_1(r)$ are two unknown functions of the radial coordinate $r$. Substituting Eq. (\ref{tetrad}) into Eq. (\ref{ts}), we evaluate the torsion scalar as\footnote{For the sake of simplicity, we will write $A(r)\equiv A$, \ \ $A_1(r)\equiv A_1$, \ \ $A'\equiv\frac{dA}{dr}$, $A'_1\equiv\frac{dA_1}{dr}$ $A''\equiv\frac{d^2A}{dr^2}$ and $A''_1\equiv\frac{d^2A_1}{dr^2}$.}
\begin{eqnarray}\label{ts1}
&\!\!\! &\!\!\! T=2\frac{A'A_1}{rA}+2\frac{A_1}{r^2}.
\end{eqnarray}
Applying Eq. (\ref{tetrad}) to the field equation (\ref{fe}) we get the following non-vanishing components:
\begin{eqnarray} \label{df1}
& & I_{t t}\equiv \frac{A}{r^4}\Biggl\{r^2A_1(a_\phi[2 b'_3-a_\phi]+a_{1z}[2 s'_1-a_{1z}])+s_\phi[ 2b_{z}-s_\phi]+b_{z}{}^2-r^2[A_1(s'_1{}^2+b'_1{}^2)+A_1\nonumber \\[.5mm]
&& +rA'_1+r^2\Lambda]\Biggr\}=0,\nonumber\\
& & I_{r r}\equiv \frac{1}{r^4AA_1}\Biggl\{r^2AA_1(a_\phi[2 b'_3-a_\phi]+a_{1z}[2 s'_1-a_{1z}])+A s_\phi[ s_\phi-2b_{z}]+As_{1z}{}^2-r^2[AA_1(s'_1{}^2+b'_1{}^2)\nonumber \\[.5mm]
&& -A_1A'r-A(A_1+r^2\Lambda)]\Biggr\}=0,\nonumber\\
&& I_{r \phi}\equiv I_{\phi r}=\frac{2(a_{1z}-s'_1)(s_\phi-b_{z})}{r^2}=0, \qquad \qquad I_{r z}=I_{zr}=\frac{2(a_{\phi}-b'_1)(s_\phi-b_{z})}{r^2}=0,\nonumber \\[.5mm]
&& I_{\phi z}=I_{z \phi}=(a_{\phi}-b'_1)(s'_1-a_{1z})=0,\nonumber\\
&& I_{\phi \phi}\equiv \frac{1}{4r^2A^2}\Biggl\{2r^4AA_1A''-r^4A_1A'^2 +r^3AA'[rA'_1+2A_1]+2A^2\Biggl[2r^2A_1a_\phi[2 b'_1-a_\phi]-2s_\phi[s_\phi-2 b_{z}]\nonumber\\
&&+r^3A'_1-2r^2A_1a_{1z}(2s'_1-a_{1z})-2b_{z}{}^2-2r^2(b'_1{}^2A_1-r^2\Lambda-A_1s'_1{}^2)\Biggr] \Biggr\}=0,\nonumber\\
&& I_{z z}\equiv \frac{1}{4r^2A^2}\Biggl\{2r^4AA_1A''-r^4A_1A'^2 +r^3AA'[rA'_1+2A_1]+2A^2\Biggl[2r^2A_1a_\phi[a_\phi-2 b'_3]-2s_\phi[s_\phi-2 a_{1z}]\nonumber\\
&&+r^3A'_1+2r^2A_1a_{1z}(2s'_1-a_{1z})-2b_{1z}{}^2+2r^2(b'_1{}^2A_1+r^2\Lambda-A_1s'_1{}^2)\Biggr] \Biggr\}=0,\nonumber\\
\end{eqnarray}
where $a_\phi=\frac{da(\phi)}{d\phi}$, $a_{1z}=\frac{da_1(z)}{dz}$, $b'_1=\frac{db_1(r)}{dr}$, $b_z=\frac{db(z)}{dz}$, $s'_1=\frac{ds_1(r)}{dr}$, $s_\phi=\frac{ds(\phi)}{d\phi}$ and $a(\phi)$, $a_1(z)$, $b(z)$, $b_1(r)$, $s_1(r)$ and $s(\phi)$ are the magnetic field strengths given by the general gauge potential as
\begin{equation}\label{p} v :=[a(\phi)+a_1(z)]dr+[b(z)+b_1(r)]d\phi+[s(\phi)+s_1(r)]dz.\end{equation}
The general solutions of the non-linear differential Eqs. (\ref{df1}) have the form:
\begin{eqnarray} \label{sol}
& & i)\; A(r)=A_1(r)=\left(\frac{\Lambda r^3-3c_1}{3r}\right), \quad a(\phi)=c_2\phi,\quad a_1(z)=c_3z,\quad s(\phi)=c_4\phi,\quad b(z)=c_5z,\nonumber \\[.5mm]
& & s_1(r)=c_6r,\quad b_1(r)=c_7r,\nonumber \\[.5mm]
& &ii)\; A(r)=A_1(r)=\left(\frac{\Lambda r^4-3c_1r-3c_8}{3r^2}\right), \quad a(\phi)=c_2\phi,\quad a_1(z)=c_3z,\quad s(\phi)=\pm \wp\phi, \nonumber \\[.5mm]
& & b(z)=\wp^2 z, \quad s_1(r)=c_6r,\quad b_1(r)=c_7r, \qquad \wp=\frac{1\pm\sqrt{1\pm4\sqrt{c_8}}}{2},
\end{eqnarray}
where $c_i$, $i=1\cdots 8$, are constants of integration. Eqs. (\ref{sol}) show that when the constant $c_8=0$, the second set becomes identical to the first set, and the constant $\wp$, after some re-scaling, can be related to the constants $c_4$ and $c_5$ of the first set.
\section{The physical properties of solutions}
The metrics of solutions (\ref{sol}) take the form
\begin{eqnarray} \label{me}
&&ds^2{}_1=-\left(\frac{\Lambda r^3-3c_1}{3r}\right) dt^2+\frac{dr^2}{\left(\frac{\Lambda r^3-3c_1}{3r}\right)}+r^2(d\phi^2+dz^2)\;,\nonumber \\[.5mm]
& &ds^2{}_2=-\left(\frac{\Lambda r^4-3c_1r-3c_8}{3r^2}\right)dt^2+\frac{dr^2}{\left(\frac{\Lambda r^4-3c_1r-3c_8}{3r^2}\right)}+r^2(d\phi^2+dz^2)\;. \end{eqnarray}
Eqs. (\ref{me}) show that the metrics asymptotically behave as AdS/dS spacetime. The first metric is static without any charge, while the second metric carries a charge, which enters through the term of order $O(\frac{1}{r^2})$. The second metric in Eq. (\ref{me}) can be rewritten as
\begin{eqnarray} \label{me1}
ds^2{}_2=-\left(\frac{\Lambda r^2}{3}-\frac{m}{r}-\frac{q_m{}^2}{r^2}\right)dt^2+\left(\frac{\Lambda r^2}{3}-\frac{m}{r}-\frac{q_m{}^2}{r^2}\right)^{-1}dr^2+r^2(d\phi^2+dz^2)\;, \end{eqnarray} where $m=c_1$ and $q_m=\sqrt{c_8}$. The metric (\ref{me1}) is similar to the AdS/dS Reissner-Nordstr\"om solution \cite{NC03}. Here we want to stress that the source of the term ${\cal O}(r^{-2})$ in metric (\ref{me1}) is the magnetic field, while in the Reissner-Nordstr\"om case such a term is related to the source of the electric field.
Inserting Eqs. (\ref{sol}) into Eq. (\ref{ts}) we get
\begin{equation} \label{tss}
T=2\Lambda, \qquad\qquad
T= \frac{2({c_8}+\Lambda r^4)}{r^4}, \end{equation}
which shows that the torsion scalar is not constant in the second case. From Eqs. (\ref{tss}), it is easy to see that the torsion scalar of the second case reduces to that of the first one as soon as the constant $c_8=0$. The behavior of the torsion scalar is shown in Fig. 1.
\begin{figure}
\centering
\includegraphics[scale=.3]{jm1}
\caption{The behavior of the torsion scalar for the second solution (\ref{sol}).}
\label{fig1}
\end{figure}
Now we are going to investigate the {\it singularities of solutions (\ref{sol})}. The first step to discuss this issue is to find at which values of $r$ the functions $A(r)$ and $A_1(r)$ become zero or infinite. The curvature and torsion invariants that arise from the first solution (\ref{sol}),
using the Levi-Civita and Weitzenb\"ock connections, take the form:
\begin{eqnarray} R^{\mu \nu \lambda \rho}R_{\mu \nu \lambda \rho} &\!\!\! = &\!\!\! \frac{4(2\Lambda^2r^6+9c_1{}^2)}{3r^6},
\qquad
R^{\mu \nu}R_{\mu \nu} = 4\Lambda^2,\qquad
R =-4\Lambda,\nonumber\\
T^{\mu \nu \lambda}T_{\mu \nu \lambda} &\!\!\!=&\!\!\! \frac{4\Lambda^2r^6-12\Lambda c_1r^3+27{c_1}^2}{2r^3(3{c_1}-\Lambda r^3)}, \qquad
T^\mu T_\mu = \frac{3(3{c_1}-2\Lambda r^3)^2}{4r^3(3{c_1}-\Lambda r^3)},\quad
T(r)=-2\Lambda, \nonumber\\
&& \nabla_\alpha T^\alpha=3\Lambda, \qquad \qquad \Rightarrow R=-T-2\nabla_\alpha T^\alpha.
\end{eqnarray}
For the second solution, we get the invariants
\begin{eqnarray} R^{\mu \nu \lambda \rho}R_{\mu \nu \lambda \rho} &\!\!\! = &\!\!\! \frac{4(2\Lambda^2r^8+6c_8[6c_1r+7{c_8}]+9r^2{c_1}^2)}{3r^8},
\quad
R^{\mu \nu}R_{\mu \nu} = \frac{4({c_8}^2+\Lambda^2r^8)}{r^8},\quad
R =-4\Lambda,\nonumber\\
T^{\mu \nu \lambda}T_{\mu \nu \lambda} &\!\!\!=&\!\!\! \frac{4\Lambda^2r^8-8\Lambda c_8r^4-12\Lambda{c_1}r^5+27r^2{c_1}^2+60{c_1}c_8r+36{c_8}^2}{2r^4(3{c_1}r-\Lambda r^4+3c_8)}, \nonumber\\
T^\mu T_\mu &\!\!\!=&\!\!\! \frac{3(3{c_1}r-2\Lambda r^4+2{c_8})^2}{4r^4(3{c_1}r-\Lambda r^4+3c_8)},\quad
T(r)=-\frac{2(\Lambda r^4+{c_8})}{r^4},\qquad \nabla_\alpha T^\alpha=\frac{(3\Lambda r^4+{c_8})}{r^4}, \nonumber\\
&& \Rightarrow R=-T-2\nabla_\alpha T^\alpha.
\end{eqnarray}
The above calculations show that:\vspace{0.1cm}\\
a)- Except for the scalars $R^{\mu \nu}R_{\mu \nu}$, $R$, $ \nabla_\alpha T^\alpha$, $T$ of the first solution and $R$ of the second solution, all the above invariants diverge at $r=0$, which represents a true singularity.\vspace{0.1cm}\\
b)- At the radius satisfying $c_1=\frac{r^3\Lambda}{3}$, the first metric develops a horizon. There, the curvature invariants are finite but the torsion invariants diverge, i.e.
\[T^{\mu \nu \lambda}T_{\mu \nu \lambda}\rightarrow \infty, \qquad \textrm{and} \qquad T^{\mu }T_{\mu} \rightarrow \infty.\] This means that, on the horizon, the torsion invariants diverge. The reason why the curvature invariants remain finite while the torsion invariants diverge lies in local Lorentz transformations. This can be seen clearly from the torsion scalar $T(r)$, which is finite on the horizon due to its invariance under local Lorentz transformations, whereas the scalars $T^{\mu \nu \lambda}T_{\mu \nu \lambda}$ and $T^\mu T_\mu$ are not finite because they are not invariant under local Lorentz transformations. The same discussion applies when $c_8=\frac{r^4\Lambda -3c_1r}{3}$ for the second set of solutions (\ref{sol}). \vspace{0.1cm}\\
c)- The horizons of solutions (\ref{sol}) are thus given by $c_1=\frac{r^3\Lambda}{3}$ and $c_8=\frac{r^4\Lambda -3c_1r}{3}$, respectively.
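As a quick cross-check of the invariants listed above, the identity $R=-T-2\nabla_\alpha T^\alpha$ can be verified symbolically; the following is a minimal sketch, assuming the SymPy library is available:
\begin{verbatim}
# Verify R = -T - 2*div(T) from the second solution's quoted scalars.
import sympy as sp

r, Lam, c8 = sp.symbols('r Lambda c_8', positive=True)
T    = -2*(Lam*r**4 + c8)/r**4        # torsion scalar T(r)
divT =  (3*Lam*r**4 + c8)/r**4        # divergence of the torsion vector

print(sp.simplify(-T - 2*divT))       # prints -4*Lambda, i.e. the Ricci scalar R
\end{verbatim}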
Let us now discuss some thermodynamical quantities related to the solution (\ref{me1}). To this aim, we look for the roots of the function \begin{equation}{\cal N}=\frac{\Lambda r^2}{3}-\frac{m}{r}-\frac{q^2}{r^2}.\end{equation} The above equation has four roots; only one of them is real and positive (the others being negative or complex), and it takes the form
\begin{equation}\frac{3^{2/3}[2^{5/6}\{X^{2/3}-4q^2 \Lambda_1{}^{1/3}\}^{3/4}+2^{7/12}\sqrt{X^{2/3}\sqrt{2(X^{2/3}-4q^2 \Lambda_1{}^{1/3})}+ 2^{5/2} \Lambda_1{}^{1/3}q^2(X^{2/3}-4q^2\Lambda_1^{1/3})-12m\sqrt{X}}]}{12\Lambda^{1/3}X^{1/6}[X^{2/3}-4q^2\Lambda_1{}^{1/3}]^{1/4}},\end{equation} where $\Lambda_1=12\Lambda$ and $X=9m^2+\sqrt{3(256q^6\Lambda+27m^4)}$. To ensure we have a real root, we must have ${\displaystyle \Lambda>-\frac{27m^4}{256 q^6}}$. The behavior of the horizon is drawn in Figure 2, which shows that we have only one horizon.
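The root structure can be cross-checked numerically. A minimal sketch follows (the values of $\Lambda$, $m$ and $q$ below are illustrative assumptions, not fitted numbers):
\begin{verbatim}
# Roots of N(r)=0, i.e. of Lam*r**4 - 3*m*r - 3*q**2 = 0.
import numpy as np

Lam, m, q = 1.0, 1.0, 0.5
roots = np.roots([Lam, 0.0, 0.0, -3.0*m, -3.0*q**2])
print(roots)   # a single positive real root -> one horizon
\end{verbatim}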
\begin{figure}
\centering
\includegraphics[scale=.3]{jm2}
\caption{The horizon of solution (\ref{me1}).}
\label{fig2}
\end{figure}
The Hawking temperature is defined as \cite{NO13}
\begin{equation}
T_h = \frac{{\cal N}'(r_h)}{4\pi},
\end{equation}
where the event horizon is located at $r = r_h$ which is the largest positive root of ${\cal N}(r_h) = 0$ that fulfills the condition ${\cal N}'(r_h)\neq 0$.
The Hawking temperature associated with the black hole solution (\ref{me1}) is calculated as
\begin{eqnarray} \label{m44}
{T_h}=\frac{r_h{}^4\Lambda+q^2}{4\pi r_h{}^3},
\end{eqnarray}
where ${T_h}$ is the Hawking temperature at the event horizon. The Hawking temperature is represented in Figure 3, which shows that it is always positive.
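Continuing the numerical sketch above (same illustrative values), the temperature can be cross-checked directly against ${\cal N}'(r_h)/4\pi$:
\begin{verbatim}
# Hawking temperature at the numerically determined horizon.
import numpy as np

Lam, m, q = 1.0, 1.0, 0.5
roots = np.roots([Lam, 0.0, 0.0, -3.0*m, -3.0*q**2])
r_h = max(rt.real for rt in roots if abs(rt.imag) < 1e-10 and rt.real > 0)

T_h    = (Lam*r_h**4 + q**2) / (4.0*np.pi*r_h**3)              # temperature formula above
T_h_ck = (2*Lam*r_h/3 + m/r_h**2 + 2*q**2/r_h**3) / (4*np.pi)  # N'(r_h)/(4 pi)
print(T_h, T_h_ck)   # the two values coincide
\end{verbatim}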
\begin{figure}
\centering
\includegraphics[scale=.3]{jm3}
\caption{The Hawking temperature of solution (\ref{me1}).}
\label{fig3}
\end{figure}
\section{Energy conditions}
An important issue is the possible violation of the energy conditions in cosmology or in the strong field regime. In GR, there are four types of energy conditions:
the strong energy condition (SEC), the null energy condition (NEC), the dominant energy condition (DEC) and the weak energy condition (WEC) \cite{HE,Nepjp,C4,Nahep}. The SEC and the NEC arise from the
structure of the gravitational field related to the dynamics of matter. They are related to the Raychaudhuri
equation, which gives the evolution
of the expansion scalar $\theta$ in terms of
quantities like the Ricci tensor, the shear tensor $\sigma^{\mu \nu}$ and the rotation
$\omega^{\mu \nu}$, for both time-like and light-like curves. These relations have the form:
\begin{eqnarray}
\frac{d\theta}{d\tau}=-\frac{1}{3}\theta^2-\sigma_{\mu \nu}\sigma^{\mu \nu}+\omega_{\mu \nu}\omega^{\mu \nu}-R_{\mu \nu} u^\mu u^\nu, \qquad
\frac{d\theta}{d\lambda}=-\frac{1}{2}\theta^2-\sigma_{\mu \nu}\sigma^{\mu \nu}+\omega_{\mu \nu}\omega^{\mu \nu}-R_{\mu \nu} k^\mu k^\nu,\end{eqnarray}
where $u^\mu$ is an arbitrary time-like
vector, $k^\mu$ is an arbitrary null vector, and $\tau$, $\lambda$ are the corresponding affine parameters.
Requiring gravity to be attractive, one obtains
\begin{equation} \label{rc} R_{\mu \nu} u^\mu u^\nu \geq 0, \qquad \qquad \qquad \qquad R_{\mu \nu} k^\mu k^\nu\geq 0.\end{equation} Eqs. (\ref{rc}) can be rewritten as
\begin{equation} \label{se} R_{\mu \nu} u^\mu u^\nu=\left({\cal T}_{\mu \nu}-\frac{\cal T}{2}g_{\mu \nu}\right)u^\mu u^\nu \geq 0, \qquad R_{\mu \nu} k^\mu k^\nu=\left({\cal T}_{\mu \nu}-\frac{\cal T}{2}g_{\mu \nu}\right)k^\mu k^\nu\geq 0,\end{equation}
which are the SEC and the NEC, respectively, for a given matter source ${\cal T}_{\mu\nu}$.
In the case of perfect-fluid matter, the SEC and NEC given
by (\ref{se}) impose the constraints $\rho+3p\geq 0$ and $\rho+p\geq 0$, while the WEC and DEC
require the conditions $\rho\geq 0$ and $\rho\pm p\geq 0$, respectively, for
consistency.
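For quick reference, these perfect-fluid constraints can be encoded in a few lines (a sketch; the density and pressure passed below are illustrative values only):
\begin{verbatim}
# Perfect-fluid forms of the energy conditions.
def energy_conditions(rho, p):
    return {"NEC": rho + p >= 0,
            "WEC": rho >= 0 and rho + p >= 0,
            "SEC": rho + 3*p >= 0 and rho + p >= 0,
            "DEC": rho >= 0 and rho - abs(p) >= 0}

print(energy_conditions(rho=1.0, p=-0.5))
\end{verbatim}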
The energy-momentum components of the first solution (\ref{sol}) vanish identically. This means that the first solution (\ref{sol}) is a vacuum solution. The non-vanishing components of the energy-momentum tensor of the second solution (\ref{sol}) have the form
\begin{equation} \label{st1}
{\cal T}^0{}_0=-{\cal T}^3{}_3={\cal T}^1{}_1=-{\cal T}^2{}_2=-\frac{c_8}{2r^4}.\end{equation}
Eqs. (\ref{st1}) show that the WEC is violated unless $c_8<0$; under this constraint, the NEC, SEC and DEC (the latter requiring $\rho-p\geq 0$) are satisfied as well.
\section{Einstein-Cartan theory and conserved currents}
The above considerations can be extended in the framework of the Einstein-Cartan theory. Let us define the
Einstein-Cartan Lagrangian as \cite{OR6}:
\begin{equation} \label{lg}
{\cal L}(\vartheta^i, {\Gamma^j}_{k})=-\frac{1}{2\kappa}\left(R^{i j}\wedge
\eta_{i j}-2\Lambda \eta\right),\end{equation} where $\vartheta^i$ is the co-frame, ${\Gamma^j}_{k}$ is the connection one-form and
$\kappa$ is the gravitational coupling constant, which we now write explicitly. Lagrangian (\ref{lg}) is invariant under diffeomorphisms and local Lorentz transformations \cite{OR6}. The variation of Eq. (\ref{lg}) leads to the
canonical energy-momentum and rotational gauge field momentum
with the forms \cite{OR6,OR7} \begin{eqnarray} && E_{i}:= -\frac{1}{2\kappa}\left(R^{ j k}\wedge
\eta_{i j k}-2\Lambda \eta_i\right) , \qquad \qquad H_{i j}:=\frac{1}{2\kappa}\eta_{i j},\end{eqnarray}
with $\eta_{i j}$ being a 2-form defined in Appendix A and $R^{ j k}$ the curvature 2-form. The conserved quantity of the
gravitational field of (\ref{lg}) is \cite{OR6}
\begin{equation} \label{ch} \jmath[\xi]=\frac{1}{2\kappa}d\left\{{}^{*}\left[dk+\xi\rfloor\left(\vartheta^i\wedge
T_i\right)\right]\right\},\end{equation}
where $ k=\xi_i\vartheta^i$ and $ \xi^i=\xi\rfloor\vartheta^i $.
Here $*$ denotes the Hodge duality, $\xi=\xi^i\partial_i$ is an arbitrary vector
field, and the $\xi^i$ are the four parameters $\xi^0$, $\xi^1$, $\xi^2$ and $\xi^3$.
We are working in the TEGR theory, which is equivalent to GR; evaluating Eq. (\ref{ch}) with the Levi-Civita connection, the torsion term vanishes
and the total charge, given by Eq. (\ref{ch}), takes the form
\begin{equation} \label{ch1} {{\cal Q}}[\xi]=\frac{1}{2\kappa}\int_{\partial S}{^*}dk. \end{equation}
This conserved quantity ${{\cal Q}}[\xi]$ was previously introduced by Komar \cite{Ka2}--\cite{Ka3}; for any given vector field $\xi$, it is conserved and invariant under general coordinate transformations.
The coframe $\vartheta^{\delta}$ of
solutions (\ref{sol}), using tetrad (\ref{tetrad}), has the form: \begin{eqnarray} \label{co} {\vartheta}^{{0}} =\sqrt{A(r)}dt,\qquad
{\vartheta}^{{1}}=\frac{dr}{\sqrt{A_1(r)}}, \qquad {\vartheta}^{{2}}= r d\phi, \qquad
{\vartheta}^{{3}}=rdz. \end{eqnarray}
Inserting Eqs.
(\ref{co}) into Eq. (\ref{ch}), we get \begin{equation} \label{k1} k=A(r)\xi_0dt - \frac{\xi_1dr}{A_1(r)}-r^2\xi_2d\phi -r^2\xi_3dz.\end{equation} After some algebra, the exterior derivative of Eq. (\ref{k1}) takes the
form \begin{equation} \label{dk} dk= A'(r)\xi_0(dr \wedge dt)+2r\xi_2(d\phi \wedge dr)+2r\xi_3(dz \wedge dr).
\end{equation}
Inverting Eqs. (\ref{co}) (i.e., writing $dt$, $dr$, $d\phi$ and $dz$ in terms of ${\vartheta}^{{0}}$, ${\vartheta}^{{1}}$, ${\vartheta}^{{2}}$ and ${\vartheta}^{{3}}$) and
substituting Eq. (\ref{dk}) in Eq. (\ref{ch1}) and applying
the Hodge-dual to $dk$, we finally get the total conserved charge in
the form
\begin{equation} \label{ch2} {{\cal Q}}[\xi_t]=\frac{\xi_0(2r^3\Lambda+3c_1)}{6}, \qquad \qquad {{\cal Q}}[\xi_r]={{\cal Q}}[\xi_\phi]={{\cal Q}}[\xi_z]=0.\end{equation}
Using the same algorithm for the second solution (\ref{sol}), we get
\begin{equation} \label{cc2} {{\cal Q}}[\xi_t]=\frac{\xi_0(2r^4\Lambda+3c_1r+6c_8)}{6r}, \qquad \qquad {{\cal Q}}[\xi_r]={{\cal Q}}[\xi_\phi]={{\cal Q}}[\xi_z]=0.\end{equation}
Eqs. (\ref{ch2}) and (\ref{cc2}) show that the total conserved charges of
solutions (\ref{sol}), using tetrad (\ref{tetrad}) and Eq. (\ref{ch1}), are divergent when $r\rightarrow \infty$.
Therefore, Eq. (\ref{ch1}) needs a regularization.
\section{Regularization via relocalization}
The conserved quantity given by Eq. (\ref{ch1}) is invariant under diffeomorphisms and local
Lorentz transformations. Besides these transformations,
there is another issue in the definition of the conserved
quantities which lies in the fact that the field equations
allow for a relocalization of the gravitational field
momenta \cite{OR6}. Thus, the conserved currents can be altered through the relocalization of
translational and rotational momenta. A relocalization generated by
altering the Lagrangian of the gravitational field by a total
derivative is given by \begin{equation} {\cal L}'={\cal L}+d\aleph, \qquad \textrm{where} \qquad
\aleph=\aleph(\vartheta^{i},
{\Gamma_i}^j, T^i, {R_i}^j).\end{equation} The second term,
$d\aleph$, modifies only the boundary part of the action, leaving the field equations invariant \cite{OR6}. It is then straightforward that the total conserved
quantities can be regularized by means of a relocalization of the gravitational field momenta.
The most suitable method to cure the divergent results
derived in Eqs. (\ref{ch2}) and (\ref{cc2}) is a relocalization generated by a boundary term in the Lagrangian. Here we use the relocalization
\[ H_{i j}\rightarrow H'_{i j}=H_{i j}-2\alpha\eta_{i j k l}R^{k l},\] which is generated by altering the Lagrangian as \cite{OR6}
\[{\cal L}\rightarrow {\cal L}'={\cal L}+\alpha d\aleph,\] where \[ H'_{i j}=\left(\frac{1}{2\kappa}-\frac{4\alpha \Lambda}{3}\right)\eta_{i j}-2\alpha \eta_{i j k l}\left(R^{k l}-\frac{\Lambda}{3}{\vartheta}^{k}\wedge{\vartheta}^{l}\right).\] We set $\alpha=\frac{3}{8\Lambda \kappa}$, so that the first term vanishes, ensuring the removal of the divergences that appear in Eqs. (\ref{ch2}) and (\ref{cc2}). Therefore, the conserved
charge, using the relocalization method, takes the form
\begin{equation} \label{ch4} {{\cal J}}[\xi]=-\frac{3}{4\kappa \Lambda }\int_{\partial S}
\eta_{i j k l}\Xi^{i j} W^{k l}, \end{equation} where
$W^{i j}$ is the Weyl 2-form defined by \begin{equation} W^{i j}=\frac{1}{2}{C_{k l}}^{i j}{\vartheta}^{k}\wedge {\vartheta}^{l},\end{equation} with ${C_{i j}}^{k l}={b_i}^\mu {b_j}^\nu {b^k}_\alpha {b^l}_\beta {C_{\mu \nu}}^{\alpha \beta}$ being the Weyl tensor, and $\Xi^{i j}$ is
defined as\footnote{The detailed derivation of Eq. (\ref{ch4}) can be found in references
\cite{OR6,OR7,OR8}.} \begin{equation} \Xi_{i j}:=\frac{1}{2}e_j\rfloor
e_i \rfloor dk.\end{equation} The conserved
currents ${{\cal J}}[\xi]$ are invariant under both coordinate and local Lorentz transformations. These
currents ${{\cal J}}[\xi]$ are associated with a given vector field $\xi$ on the
spacetime manifold.
We now calculate the components needed in Eq. (\ref{ch4}). The non-vanishing components of $\Xi^{i j}$ have the form\footnote{The non-vanishing components of the
Weyl tensor are given in Appendix B.}
\begin{eqnarray} \Xi_{01} &\!\!\! = &\!\!\!-\frac{\xi_0(2\Lambda r^3+3c_1)}{6r^2}, \qquad \Xi_{13}= \frac{\xi_3(3c_1-\Lambda r^3)}{\sqrt{3r}}. \end{eqnarray}
Using these components in Eq. (\ref{ch4}), we get \begin{equation}\label{ch5} \eta_{i j k l}\Xi^{i j}
W^{k l}=\frac{ 2c_1\xi_0(2\Lambda r^3+3c_1)(dz \wedge d\phi)}{3r^3}.\end{equation} Substituting Eq. (\ref{ch5}) in
(\ref{ch4}), we finally get \begin{equation} \label{ch6} {{\cal J}}[\xi_t]=\frac{c_1}{2},\qquad {{\cal J}}[\xi_r]={{\cal J}}[\xi_\phi]={{\cal J}}[\xi_z]=0.\end{equation} Eqs. (\ref{ch6}) show that the constant $c_1$ may take the value $c_1=\frac{M}{2}$, such that the total energy derived from Eqs. (\ref{ch6}) takes the form \cite{LZ, Dm}
\begin{equation} E=M+O\left(\frac{1}{r}\right).\end{equation}
By the same method, we can get the conserved charge of the second solution (\ref{sol}). It has the form
\begin{equation} E=M+\left(\frac{c_8}{r}\right)+O\left(\frac{1}{r^2}\right),\end{equation} which shows that the constant $c_8$ plays a role analogous to the electric charge.
\section{Discussion and conclusions}
Including a magnetic field in the metric is a challenging issue when looking for exact solutions in theories of gravity. Despite this difficulty, some analytic solutions
have been derived, like \cite{M64,B54}, where a magnetic ``universe'', including a magnetic field in
the $z$ direction, is considered. Furthermore, Gutsunaev and Man'ko found a solution where a magnetic dipole is present \cite{GM88}. Of course, it is always possible to assume an arbitrary shape for the magnetic field and solve the resulting Einstein equations. In this study, we have addressed the problem of deriving charged black hole solutions in TEGR theory, involving a cosmological constant and a general gauge potential including magnetic fields only. For this purpose, we have applied a tetrad field with two unknown functions and assumed a cylindrical symmetry for the charged field equations of TEGR. We have used a gauge potential which contains six unknown functions. Finally, we obtained a system of nonlinear differential equations that has been solved exactly. The solution of this system comprises two cases: in the first one, the torsion scalar has a constant value and all the components of the energy-momentum tensor, which depend on the charge fields, vanish identically, while all the components of the magnetic field have non-trivial values. The second case contains an integration constant which gives a non-trivial value to the torsion scalar; the latter becomes trivial when this constant is equal to zero. It is worth noticing that this constant is related to a component of the magnetic field.
We then discussed the energy conditions related to these solutions and showed that the first set satisfies them trivially, since its energy-momentum tensor vanishes. For the second set, the energy conditions are satisfied under certain constraints. We also discussed the singularities of the two sets, as well as the horizons of each set. Finally, we calculated the conserved quantities related to each set and showed that the Komar formula gives a divergent quantity for the temporal component.
Therefore, we applied the regularization through relocalization in order to calculate the conserved quantities. For the first set, we have shown that the only conserved quantity is the energy, and we have related the constant appearing in the calculation of the energy to the ADM mass. The conserved charge of the second set contains, besides the ADM mass, another term related to the constant that makes the torsion scalar dynamical. We can thus interpret the contribution of this constant as coming from the magnetic field. The most interesting point is that the sign of this term is different from the sign of the corresponding term in the Reissner-Nordstr\"om spacetime, whose charge is sourced by the electric field \cite{NS}. In a forthcoming paper, we will extend these considerations to more general TEGR models like those discussed in \cite{cai}.
It is important to stress that magnetic teleparallel solutions can also be obtained in spherical symmetry, adopting procedures similar to those considered in this paper. In fact, it is easy to see that the assumption (\ref{tetrad}) can be recast in spherical coordinates considering suitable vierbein fields. However, the magnetic field has to be adapted to the spherical symmetry, and one obtains different forms for the functions $A(r)$ and $A_1(r)$.
A final remark concerns possible astrophysical applications of the present results. As we said, the value of the torsion scalar depends on the strength of the magnetic field, and this fact could have observational consequences for magnetic astrophysical systems. As reported in \cite{andrade}, torsion plays a dynamical role on magnetic vortex line curves of magnetars. In particular, torsion contributes to the oscillations of the magnetar and to the equation of state of such systems. Furthermore, in \cite{lyutikov}, several observational evidences are reported for neutron star magnetospheres related to binary pulsars, the Crab pulsar and magnetars. In all these cases, the strict relation between torsion and magnetic field could help to figure out the dynamics. A detailed analysis in this direction will be developed in a forthcoming study.
\section{Introduction}
Modern physics is based on two fundamental pillars: quantum mechanics (QM) and Einsteinian general relativity (GR). When taken separately, these theories can claim success in satisfactorily describing many physical phenomena, but all attempts to make them compatible with each other have failed so far. The goal of quantum gravity (QG) research is to find a common approach to coherently merge quantum theory and GR. The QG problem has remained unsolved for more than eighty years now and keeps challenging physicists who, in the struggle to find a solution, have proposed a myriad of models \citep[see e.g.][]{Polyakov1981,Bombelli1987,Oriti2006,Reuter2006,Rovelli2007,Loll2012}. However, none of these models can claim full success. One of the main obstructions to progress in this field is the lack of experimental guidance. However, in the last two decades, the situation has changed, and recent years have held important advances in the field of QG phenomenology \citep{Mattingly2005,Amelino2013,Liberati2013}.
It is notoriously difficult to extract observable predictions from fully-fledged QG approaches. Different models usually start from different conceptual premises and use different mathematical formalisms in such a way it is difficult to determine whether they make compatible predictions. In some cases, the formal complexity forbids producing observable outcomes at all. Then, to guide experimental efforts, bottom-up approaches have been proposed \citep{Amelino2002,Kowalski2002,Smolin2004, Livine2011,Barrau2015,Ronco2018,Calcagni2019}. They rely on somewhat simpler models, suitable for describing only a subset of expected QG features, but have the advantage of producing opportunities for experimental testing.
In this regard, at the end of the 90s, independent semi-classical analyses inspired by QG models brought to the attention of the QG community the fact that it is a highly non-trivial task to retain Lorentz symmetries when quantizing the space-time geometry of GR. These models include, most notably, String Theory \citep[see e.g.][and references therein]{Mavromatos2010}, Loop Quantum Gravity \citep{Gamibini1999}, Non-commutative Geometry \citep{Carroll2001}, and Standard Model Extension \citep[][and references therein]{Kostelecky2008}. From then on, departures from Lorentz invariance have become one of the rare observable features we would expect in a QG theory and, as we shall see briefly, different bottom-up models to implement them have been proposed. According to this view, Lorentz invariance could be an emergent symmetry that arises in the low-energy limit but is modified at higher energies approaching the Planck scale, i.e. the energy scale at which both GR and QM effects should play an important role.
A much-studied way to encode departures from Lorentz invariance, either violations (noted LIV for Lorentz invariance violation) or deformations,
consists in modifying the energy-momentum dispersion relation of free relativistic particles as follows \citep{Amelino1998}:
\begin{equation}
\label{eq:disprel1}
E^2 \simeq p^2 c^2\times\left[1 \pm \sum_{n=1}^\infty \left(\frac{E}{E_{QG}}\right)^n\right],
\end{equation}
where $c$ is (the low energy limit of) the speed of light, and $E_{QG}$ the energy scale of QG effects which is usually expected to be around the Planck scale ($E_P = \sqrt{\hbar c^5 / G}\, \simeq 10^{19}$ GeV).
The sign $\pm$ in Equation~(\ref{eq:disprel1}) takes into account the possibility to have subluminal or superluminal effects.
Published one year after the first redshift of a gamma-ray burst (GRB) was measured, the article by \citet{Amelino1998} also proposed for the first time the use of transient, distant and high-energy gamma-ray sources as a way to probe the quantum nature of space-time by searching for energy-dependent delays. In the following, we will focus on this particular way to probe a dispersion relation such as the one of Equation~(\ref{eq:disprel1}) in the so-called
`time of flight' studies. Since then, other possibilities have emerged to search for QG effects in gamma-ray astronomy. For example, astrophysical sources have been used to search for vacuum birefringence \citep{Gotz2014}, and space-time `fuzziness' \citep{Vasileiou2015}. Possible modifications of the cross section of $\gamma\gamma$ interaction between high-energy photons and the extra-galactic background light were also investigated \citep{Biteau2015,Abdalla2019}. {Some of the limits published in these papers exceed the Planck scale, sometimes even by several orders of magnitude, but there is also a possibility that LIV could occur only through energy-dependent delays.} In principle, all these effects could {also} coexist, even if they have only been tested separately so far. {A comprehensive review of different possible effects of LIV on gamma rays, as well as on other messengers (cosmic rays, gravitational waves, neutrinos) is given by \citet{Addazi2021}.}
Heuristically, Equation~(\ref{eq:disprel1}) can be justified as follows: at Planckian distances ($\sim10^{-33}$ cm), QG effects are believed to cause fluctuations of space-time geometry which, then, would behave as a dynamical medium characterised by a non-trivial refractive index. Consequently, photons with different energies would have different interactions with the `foamy' structure of space-time (sometimes called `quantum space-time') and, thus, they would propagate in vacuum at different velocities thereby producing an effect of in-vacuo dispersion. This explains the dependence of Equation~(\ref{eq:disprel1}) on some power~$n$ of the energy $E$ of the probe. For simplicity,~$n$ is generally assumed to be an integer, and we will keep that assumption in this paper. However, in so-called fractional or (multi-)fractional models the modifications of the dispersion relation depend on non-integer powers of the energy \citep{Ronco2017,Calcagni2017}.
Regardless of the model to be used, the expected scale of QG effects is typically several orders of magnitude higher than the energy of observed photons. For this reason we can treat the anomaly induced by QG as a small correction to the photon group velocity and, in particular, only linear $n=1$ or quadratic $n=2$ modifications are of interest for experimental searches taking into account the sensitivity of current detectors. It is important to stress that there are counterexamples where $E_{QG}$ can be far away from the Planck scale (being either above or below $E_P$). Among others, let us highlight two particular cases. In the approach of Asymptotic Safety, renormalization group techniques generate a running of the gravitational constant thereby affecting the value of $E_{QG}$ \citep{Reuter2006}. In String Theory, the compactification of extra dimensions can produce testable effects at energies much lower than $E_P$, even of the order of tens of teraelectronvolts \citep[TeV, ][]{Arkani1998}. Some stringent constraints already exist on these models \citep{ATLAS2016}. Given that, different types of experiments play a crucial role in constraining the value of $E_{QG}$.
To compensate for the smallness of the effect ($E/E_{QG}$ is typically of the order $10^{-19} - 10^{-14}$), it has been recognized that very distant astrophysical sources could be used to probe properties of quantum space-time \citep{Amelino1998, Urrutia1999, Liberati2006}. Indeed, if the emitted photons travel over large distances, then even extremely tiny quantum-space-time effects could accumulate and eventually the overall effect could become detectable in the form of energy-dependent time delays in the light curves. Variable or transient sources at cosmological distances such as GRB and flaring active galactic nuclei (AGN) are good candidates looking for LIV, but it is important to stress that the involvement of cosmological distances forces us to face the problem of combining curvature with quantum-space-time effects. In other words, the delays should depend on the redshift. On the other hand, fast-spinning pulsars (PSRs) detected at TeV energies are within our Galaxy and, thus, their euclidean distances can be used instead of the redshift.
\begin{table*}[t!]
\begin{center}
\caption{A selection of limits for subluminal propagation obtained with various instruments and various types of objects.
\label{tab:all_res}}
\scriptsize
\begin{tabular}{llllllll}
\hline
\hline
Source & Experiment & Year & Distance$^a$ & Lower limit on $E_{QG,1}$ & Lower limit on $E_{QG,2}$ & Reference & Note \\
& & & & (95\% CL, GeV) & (95\% CL, GeV) & & \\
\hline
35 GRB & BATSE, HETE-2, Swift & - & - & $1.4\times10^{16}$ & - & 1 & $^{b}$ \\
8 GRB & Fermi LAT & - & - & $1.0\times10^{17}$ & - & 2 & \\
\object{GRB 090510} & Fermi LAT & 2009 & 0.903 & $9.3\times10^{19}$ & $1.3\times10^{11}$ & 3 & \\
\object{GRB 190114C} & MAGIC & 2019 & 0.4245 & $0.6\times10^{19}$ & $6.3\times10^{10}$ & 4 & \\
\object{Mrk~501} & MAGIC & 2005 & 0.034 & $0.3\times10^{18}$ & $5.7\times10^{10}$ & 5 & $^{c,\star}$ \\
\object{Mrk~501} & H.E.S.S. & 2014 & 0.034 & $3.6\times10^{17}$ & $8.5\times10^{10}$ & 6 &\\
\object{PKS 2155-304} & H.E.S.S. & 2006 & 0.116 & $2.1\times10^{18}$ & $6.4\times10^{10}$ & 7 & $^\star$ \\
\object{PG 1553+113} & H.E.S.S. & 2012 & $0.49\pm0.04$ & $4.1\times10^{17}$ & $2.1\times10^{10}$ & 8 & $^{d,\star}$\\
\object{PSR B0531+21} & VERITAS & 2007-14 & 2.2 kpc & $1.9\times10^{17}$ & - & 9 & \\
\object{PSR B0531+21} & MAGIC & - & 2.2 kpc & $5.5\times10^{17}$ & $5.9\times10^{10}$ & 10 & $^\star$ \\
\object{PSR B0833-45} & H.E.S.S. & - & 294 pc & $4.0\times10^{15}$ & - & 11 & $^\star$ \\
\hline
\end{tabular}
\end{center}
{Notes.}\\
$^a$ Redshift is given for extra galactic objects.
$^{b}$ The limits of \citet{Ellis2006} were corrected in \citet{Ellis2008}, taking into account the factor $(1+z')$ in the numerator of the integral in Eq. \protect{\ref{eq:kappaliv}}. Only the limit obtained for a linear correction is given.
$^{c}$ These numbers are actually reported as best fit values by \citet{Martinez2009}.
$^{d}$ The redshift of this source was not measured but only estimated.
$^\star$ Sources used as benchmark in the present paper.\\
{References.}\\
(1) \citet{Ellis2006, Ellis2008}, (2) \citet{Ellis2019}, (3) \citet{Vasileiou2013}, (4) \citet{Acciari2020}, (5) \citet{Martinez2009}, (6) \citet{Abdalla2019} , (7) \citet{Abramowski2011}, (8) \citet{Abramowski2015},
(9) \citet{Zitzer2013},
(10) \citet{Ahnen2017}, (11) \cite{Chretien2015}.
\end{table*}
Considering only the leading dominant term in Equation~(\ref{eq:disprel1}), either linear ($n=1$) or quadratic ($n=2$), it can be shown that the group velocity of photons acquires a dependence on their energies. In particular, the delay between two photons emitted at the same time by a source at redshift $z$ with energies $E_h > E_l$ is:
\begin{equation}
\label{eq:timez5}
\Delta t_n \simeq \pm\,\frac{n+1}{2}\,\frac{E_h^n - E_l^n}{\mathrm{H}_\mathrm{0} E_{QG}^n}\ \kappa_n(z),
\end{equation}
where $\kappa_n(z)$ is a parameter depending on the distance of the source. The symbol $\pm$ allows to take into account both a subluminal (sign $+$) or a superluminal (sign $-$) LIV effect. In this paper, two different expressions for $\kappa_n(z)$ will be compared for the first time: one obtained in a pure Lorentz invariance violation framework \citep{Jacob2008}, and another obtained in the doubly special relativity (DSR) approach \citep{Rosati2015}. This will be discussed in more detail in Section~\ref{sec:dist}.
The delay $\Delta t_n$ takes into account only Lorentz violation effects, therefore neglecting any time lag originating from emission mechanisms, also referred to as `source intrinsic' delay. Hints of such delays have been observed for GRB \citep{Ajello2019}, and one has also been reported in the case of an AGN, for the flare of \object{Mrk~501} in 2005 recorded by MAGIC\,\footnote{\textit{Major Atmospheric Gamma Imaging Cherenkov}, \url{https://magic.mpp.mpg.de}} \citep{Albert2007}. With only one source, and with only a rough knowledge of how particles are emitted and accelerated, intrinsic delays cannot be separated from propagation effects. Modeling of astrophysical sources is an on-going effort, and a first study of source-intrinsic effects in connection with Lorentz violation searches has been published recently in the case of blazar flares \citep{Perennes2020, Levy2021}. On the other hand, when several sources are combined, it could be possible, at least in principle, to separate intrinsic and propagation effects. Indeed, it is reasonable to assume that intrinsic delays do not depend on the distance. It is therefore essential that these studies be performed on a large population of objects.
From Equation~(\ref{eq:timez5}), another parameter $\lambda_n$ can be defined as
\begin{equation}
\label{eq:lambda}
\lambda_n \equiv \frac{\Delta t_n}{\Delta E_n\ \kappa_n(z)} = \pm \frac{n+1}{2 \mathrm{H}_\mathrm{0}\ E^{n}_{QG}},
\end{equation}
using the simplified notation $\Delta E_n \equiv E_h^n - E_l^n$. This parameter $\lambda_n$, which will be used later, has the advantage to be independent of the distance of the source and is therefore suitable for a multi-source analysis.
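For orientation, inverting Equation~(\ref{eq:lambda}) gives the QG scale corresponding to a measured or constrained $\lambda_1$; a minimal sketch follows (the $\lambda_1$ value is an arbitrary illustration, and the H$_\mathrm{0}$ value is the one quoted in Section~\ref{sec:dist}):
\begin{verbatim}
# E_QG,1 corresponding to a given lambda_1 (n = 1).
H0 = 67.4e3 / 3.0857e22               # Hubble constant, converted to 1/s
lam1 = 0.05                           # s GeV^-1, illustrative value
E_QG1 = (1 + 1) / (2.0 * H0 * lam1)   # in GeV; ~ 9e18 GeV here
print(E_QG1)
\end{verbatim}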
Since the late 90s, the field has rapidly expanded, with more and more sources being analyzed in the search for Lorentz invariance violation effects. With the notable exception of the flare of \object{Mrk~501} in 2005 already mentioned above, no significant delay has been reported so far when using only photon as the messenger. Constraints have been improving on a regular basis, even reaching the Planck scale in some cases in analyses of individual objects \citep[see e.g.][]{Vasileiou2013}. \mbox{Table}~\ref{tab:all_res} gives a partial selection of the best limits available on $E_{QG,n}$ for time-of-flight studies, with the three types of sources (AGN, GRB and PSR). This new notation $E_{QG,n}$ reflects the fact that LIV analyses have different sensitivities for linear and quadratic effects.
The results of \cite{Ellis2006, Ellis2008, Ellis2019} are of particular interest since they were obtained from the analyses of several GRB. This kind of analysis, repeated with different experiments \citep[e.g.][]{Bolmont2008, Bernardini2017} consists in two steps: first, the time-lags are computed for each individual source, and then the obtained data points are fitted with a function $\Delta t = a\,z + b\,(1 + z)$ (for $n=1$). The value of parameter $a$ is subsequently used to constrain $E_{QG,1}$ while $b$ represents source intrinsic effects, assumed to be identical for all bursts. In the present paper, we describe and test a more advanced and presumably more sensitive method to perform such a population study, based on a likelihood technique.
For completeness, let us mention that it was recently suggested from the analysis of several GRB that Lorentz invariance could be violated at a scale of \mbox{$\sim3.7\times 10^{17}$~GeV} \citep{XuMa2016a, XuMa2016b, XuMa2018}. This result contradicts the best limits listed in Table~\ref{tab:all_res} and still needs to be confirmed.
One of the main objectives of the first phase of QG phenomenology was to demonstrate that in-vacuo dispersion (or, more generally, Planck-scale effects) could be tested with current experiments. Now, the stringent limits established with GRB observations, together with the growing amount of relevant data and progress on the theory side, can bring us to a more mature phase where we can start constraining actual QG models in a robust manner. The present work can be considered a first step in this direction. We aim to combine, for the first time, the data obtained with the three major imaging atmospheric Cherenkov telescope (IACT) experiments, H.E.S.S.\,\footnote{\textit{High Energy Stereoscopic System}, \url{https://www.mpi-hd.mpg.de/hfm/HESS/}}, MAGIC and VERITAS\,\footnote{\textit{Very Energetic Radiation Imaging Telescope Array System}, \url{https://veritas.sao.arizona.edu}} in order to constrain QG effects through the time-of-flight technique \citep[see][for a recent review]{terzic2021}. This combination will extract the most information out of each type of source to produce robust constraints from existing data, while taking into account the redshift dependence of the LIV-induced time-lag. \\
The paper is divided into two parts. In the present article (part I), two possible ways to account for the dependence of time delays as a function of redshift will first be described (Section~\ref{sec:dist}). Then, the method used to compute and combine the likelihoods to measure time-lag parameters $\lambda_n$ and $\tau_n$ will be described in detail (Section~\ref{sec:method}). Several nuisance parameters are included in the computation to take into account various sources of systematic uncertainties. Then, in Section~\ref{sec:simulations}, the method is tested on simulated data sets mimicking the data of several representative sources observed in the TeV domain. These simulations are used to evaluate statistical errors and study the impact of various sources of systematic errors in the lag measurement. The results, as well as the impact of redshift dependence, will be given and discussed in Section~\ref{sec:res}.
In the second part of the paper, to appear later, the method will be used with available data from H.E.S.S., MAGIC and VERITAS, and possibly from other gamma-ray experiments, in order to produce a combined limit on $E_{QG,n}$.
\section{Redshift dependence of time delays}\label{sec:dist}
It is rather natural to believe that curvature and quantum effects are deeply intertwined since curvature is a key characteristic of space-time geometry. In light of this, a complete QG theory would be needed to tell us whether there is a phenomenon of in-vacuo dispersion and then compute its magnitude. However, in the absence of such a theory, simplified speculative approaches to model in-vacuo dispersion in curved spaces have been proposed. Among them, especially for reasons of simplicity, a model where Lorentz invariance is explicitly broken in a specific way proposed by Jacob and Piran \citep[][J\&P for short]{Jacob2008} attracted a particular interest and has been systematically used so far in experimental analyses constraining in-vacuo dispersion.
\begin{figure}
\plotone{Fig1.pdf}
\caption{Parameter $\kappa$ for $n=1$ (black) and $n=2$ (gray) in the J\&P case (solid line) and in the DSR case (dashed line).}
\label{fig:kappaz}
\end{figure}
In this approach, parameter $\kappa_n(z)$ is expressed as:
\begin{equation}
\label{eq:kappaliv}
\kappa^\mathrm{J\&P}_n(z) \equiv \int_0^z \frac{(1+z')^n}{\sqrt{\Omega_m\,(1+z')^3 + \Omega_\Lambda}}\ dz',
\end{equation}
where {the denominator relates to the Hubble parameter} $H(z) = \mathrm{H}_\mathrm{0} \sqrt{\Omega_m\,(1+z)^3 + \Omega_\Lambda}$. In the following, cosmological parameters values are taken from Planck results \citep{Planck:2018vyg} as recommended by the Particle Data Group \citep{Zyla:2020zbs}: $\mathrm{H}_\mathrm{0} = 67.4\pm0.5\ \mathrm{km\,s}^{-1}\,\mathrm{Mpc}^{-1}$, $\Omega_m = 0.315\pm0.007$ and $\Omega_\Lambda = 0.685\pm0.007$.
As recent literature has pointed out \citep{Rosati2015,Barcaroli2016,Pfeifer2018}, Equation~(\ref{eq:kappaliv}) offers only one possible parameterization among many others. It has been shown that if, following the Deformed Special Relativity (DSR) approach, Poincar\'e symmetries are modified in order to preserve the invariance of Equation~(\ref{eq:disprel1}) under relativistic transformations, then one can obtain a different result for the distance parameter \citep{Rosati2015}:
\begin{equation}
\label{eq:kappadsr}
\kappa^\mathrm{DSR}_n(z) \equiv \int_0^z \frac{h ^{2n}(z') dz'}{(1+z')^n\, \sqrt{\Omega_m\,(1+z')^3 + \Omega_\Lambda}},
\end{equation}
with
\begin{equation}
\begin{split}
h (z') \equiv 1+ z' - &\sqrt{\Omega_m\,(1+z')^3 + \Omega_\Lambda}\\
& \times \int_0^{z'} \frac{dz''}{\sqrt{\Omega_m\,(1+z'')^3 + \Omega_\Lambda}}\,
\end{split}
\end{equation}
and this can result in consistently different limits on $E_{QG,n}$. Note that Equation~(\ref{eq:kappadsr}) is only one possible outcome of DSR, chosen here as a benchmark. Using observations of multiple sources at different redshifts, we establish for the first time limits on these two different models.
Figure~\ref{fig:kappaz} shows functions $\kappa^\mathrm{DSR}$ and $\kappa^\mathrm{J\&P}$ as a function of redshift for $n=1$ and $n=2$. $\kappa^\mathrm{DSR}$ is smaller than $\kappa^\mathrm{J\&P}$ for both linear and quadratic cases. When no lag is measured, this leads to less stringent limits on $E_{QG,n}$.
{Both functions $\kappa^\mathrm{J\&P}$ and $\kappa^\mathrm{DSR}$ increase with redshift, thus increasing the expected time delay. However, it has to be pointed out that ultimately, the distance at which sources can be detected at high energies is limited by the absorption by the extragalactic background light (EBL). This distance depends on the energy, and does not exceed $z\sim1$ in the TeV range.}
To conclude this section, let us add that in case of nearby sources, such as pulsars, the euclidean approximation is valid, i.e. $\kappa_n(z) = d\,\mathrm{H}_\mathrm{0}/c$ where $d$ is the euclidean distance to the source. In addition, the ratio $\kappa^\mathrm{DSR}/\kappa^\mathrm{J\&P}$ converges to unity for low distances. As a result, a given pulsar will give the same constraints on $E_{QG,n}$ for both J\&P and DSR cases.
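Both distance parameterizations are straightforward to evaluate numerically. The following minimal sketch (an illustration only, assuming NumPy and SciPy are available; the redshift, energy and $E_{QG}$ values below are arbitrary choices, with $E_{QG}$ set to the Planck scale) computes $\kappa_1^\mathrm{J\&P}$, $\kappa_1^\mathrm{DSR}$ and the corresponding linear time delay of Equation~(\ref{eq:timez5}):
\begin{verbatim}
# Sketch: kappa_1(z) for the J&P and DSR cases, and the linear time delay.
import numpy as np
from scipy.integrate import quad

H0 = 67.4e3 / 3.0857e22          # Hubble constant, converted to 1/s
Om, OL = 0.315, 0.685
E_QG = 1.22e19                   # GeV; Planck scale, illustrative choice

Hz = lambda z: np.sqrt(Om*(1 + z)**3 + OL)

def kappa_JP(z, n=1):            # J&P distance parameter
    return quad(lambda zp: (1 + zp)**n / Hz(zp), 0.0, z)[0]

def h(zp):                       # the function h(z') entering the DSR case
    return 1 + zp - Hz(zp)*quad(lambda zpp: 1.0/Hz(zpp), 0.0, zp)[0]

def kappa_DSR(z, n=1):           # DSR distance parameter
    return quad(lambda zp: h(zp)**(2*n) / ((1 + zp)**n * Hz(zp)), 0.0, z)[0]

z, E = 0.5, 1.0e3                # redshift, photon energy in GeV (1 TeV)
for kappa in (kappa_JP, kappa_DSR):
    dt = E / (H0 * E_QG) * kappa(z)   # linear delay, E_l << E_h
    print(kappa.__name__, kappa(z), dt, "s")
\end{verbatim}
With these inputs, the J\&P delay is of the order of twenty seconds, the DSR one being smaller, in line with Figure~\ref{fig:kappaz}.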
\section{Methodology}\label{sec:method}
All observations considered for combination in this work are analyzed with a Maximum Likelihood (ML) method to search for linear or quadratic LIV delays and to extract limits on $E_{QG,n}$. Compared to alternative methods such as PairView \citep{Vasileiou2013}, Dispersion Correction \citep{Barres2012}, or peak comparison \citep{Ahnen2017}, the ML method allows optimal use of the information in the data and provides a relatively straightforward way to combine analyses of multiple sources and observatories. On the other hand, it relies on parameterizations of intrinsic photon emission time and energy distributions, which are currently not fully understood at the theoretical level. The uncertainties related to these parameterizations are taken into account when deriving the limits on $E_{QG,n}$.
The code for likelihood computation as well as for simulations was developed using the ROOT\,\footnote{\url{https://root.cern.ch}} framework \citep{ROOT}.
\subsection{Single Source Likelihood}
First applied by~\cite{Martinez2009} for analyzing the 2005 flare of \object{Mrk~501} observed by MAGIC, the ML method relies on defining a probability density function (PDF) that describes the probability of observing a gamma-ray photon at a certain arrival time and with a certain reconstructed energy.
In its simplest form, the PDF for signal photons is defined as a function of time and energy with $\lambda_n$ as the single parameter to be estimated. The PDF is obtained convolving the spectrum of the source $\Gamma_{s}(E_t)$ with the light curve $C_s$, both as observed on Earth, \textit{i.e.} after propagation:
\begin{equation}\label{eq:pdf source}
F_{s}(E_{t},t;\lambda_n) = \frac{\Gamma_{s}(E_t)\,C_{s}\left(t-D(E_{t},\lambda_n,z)\right)}{N_s},
\end{equation}
where $E_{t}$ is the true energy of the gamma-ray photon, $t$ is the arrival time corrected by the factor:
\begin{equation}\label{eq:Delay}
D(E_{t},\lambda_n,z) = \lambda_n \times \kappa_n(z) \times E_t^n,
\end{equation}
which defines the propagation delay due to LIV, where $\lambda_n$ is given by Equation~(\ref{eq:lambda}), and $N_s$ is a normalization term expressed as follows:
\begin{equation}\label{eq:pdf sourcenorm}
N_s = \iint \Gamma_{s}(E_t)\,C_{s}\left(t-D(E_{t},\lambda_n,z)\right)\,dE_t\,dt.
\end{equation}
In Equations~(\ref{eq:pdf source}) and (\ref{eq:pdf sourcenorm}), function $C_s$ is often called the \textit{template} light curve. It is usually obtained by fitting a light curve at low energies, where LIV effects are assumed to be weak or negligible. Since there is no fully accepted model available which reproduces the shape of the light curves for GRB, AGN or PSR, Gaussian or Lorentzian functions, or the sum of several of these functions are usually used. {The function $\Gamma_{s}$ is obtained from the data on the full energy range considered for the LIV analysis (see Section~\ref{sec:simulations}).}
\begin{table*}[t!]
\caption{Nuisance parameter uncertainties for the individual sources.}\label{tab:nuisance}
\begin{center}
\scriptsize
\begin{tabular}{llllll}
\hline
\hline
Source & Energy Scale & Background proportion & Spectral index$^a$ & Distance/redshift & References$^b$ \\
\hline
GRB 190114C & $17$\% & $11$\% & $0.21$ & $\Delta z = 1\times10^{-3}$ & 1, 2, 3, 4\\
PG 1553+113 & $10$\% & $20$\% & $0.31$ & $\Delta z = 4\times10^{-2}$ & 5, 6\\
Mrk~501 & $17$\% & $11$\% & $0.04$ & $\Delta z = 1\times10^{-4}$ & 1, 2, 3, 6\\
PKS 2155-304 & $10$\% & $20$\% & $0.1$ & $\Delta z = 1.7\times10^{-2}$ & 5, 7\\
Crab (M) & $17$\% & $11$\% & $0.07$ & $\Delta d = 506$ pc & 1, 2, 3, 8\\
Crab (V) & $20$\% & $22$\% & $0.5$ & $\Delta d = 506$ pc & 9, 8\\
Vela & $10$\% & $20$\% & $0.67$ & $\Delta d = 76$ pc & 5, 10\\ \hline
\end{tabular
\end{center}
{Note.}\\
$^a$ Uncertainty for spectral index includes both statistical and systematic errors. $^b$ References are in the same order as the columns: energy scale, background proportion, spectral index (the three values are sometimes given in the same reference), and distance.\\
{References.}\\
(1) \citet{Aleksic2012},
(2) \citet{Aleksic2016},
(3) \citet{Aleksic2015},
(4) \citet{Acciari2019b},
(5) \citet{Aharonian2006},
(6) \citet{Mao2011},
(7) \citet{Ganguly2013},
(8) \citet{Kaplan2008},
(9) For energy scale and spectral index, \citet{Pueschel2019}. The value for background proportion was provided by the VERITAS collaboration,
(10) \citet{Caraveo2001}
\end{table*}
Background events of several different origins are also taken into account. They include hadrons mis-reconstructed as gamma-rays and baseline photons emitted either by an AGN in its quiescent state or by the nebula surrounding a PSR. The PDF for background events of type \textit{k} (hadrons or baseline photons), which are not affected by LIV propagation effects, is written as:
\begin{equation}\label{eq:pdf bkg}
F_{b,k}(E_{t},t) = \frac{\Gamma_{b,k}(E_t)\,C_{b,k}}{N_{b,k}}.
\end{equation}
$\Gamma_{b,k}$ is the background spectrum taken as a power law. For hadrons, the index is set to 2.7 while the values for signal and baseline photons are given in Table~\ref{tab:simulation}. $C_{b,k}$ is the time distribution of background events assumed to be a constant, and $N_{b,k}$ the normalization term defined as:
\begin{equation}\label{eq:pdf bkgnorm}
N_{b,k} = \iint \Gamma_{b,k}(E_t)\,C_{b,k}\,dE_t dt.
\end{equation}
From Equations~(\ref{eq:pdf source}) and (\ref{eq:pdf bkg}), the complete definition of the PDF is obtained, accounting for detector performance assessed from instrument response functions (IRFs):
\begin{multline} \label{eq:pdf source+det}
\frac{dP}{dE_m dt}= w_s\ \frac{\int A(E_{t}, \vec{\varepsilon}) M(E_t,E_m) \times F_{s}(E_{t},t;\lambda_n) dE_{t}}{N'_{s}} \\
+ \sum_{k} w_{b,k}\ \frac{\int A(E_{t}, \vec{\varepsilon}) M(E_t,E_m) \times F_{b,k}(E_{t},t) dE_{t}}{N'_{b,k}},
\end{multline}
where the source and background terms $F_s$ and $F_{b,k}$ are convoluted with the detector effective area $A(E_{t},\vec{\varepsilon})$ and energy resolution $M(E_t,E_m)$. Source and background terms are weighted by $w_s$ and $w_{b,k}$ respectively, with $w_s+\sum_{k} w_{b,k} = 1$. $E_t$ still denotes the true energy while $E_m$ is the corresponding measured energy. Parameters $N'_{s}$ and $N'_{b,k}$ are the normalization factors of the PDF. In addition to energy $E_t$, effective area depends on a set of factors $\vec{\varepsilon}$ which vary with observation conditions and with the method used for event reconstruction and identification. The IRFs were kindly provided by the H.E.S.S., MAGIC and VERITAS collaborations. Distinct IRFs are used for each source and each observation period.
The confidence levels for either a measurement or the derivation of lower limits on $\lambda_n$ (and $E_{QG,n}$) can then be obtained summing the log-likelihood of all the events for a given source $S$:
\begin{equation}\label{eq:LikelihoodData}
L_{S}(\lambda_n) = -\sum_{\mathrm{i}} \log\left(\frac{dP}{dE_m dt}(E_{m,i},t_{i});\lambda_n\right).
\end{equation}
\subsection{Combining Likelihoods}
While each source may require a different analysis strategy, either using a single parameter likelihood or a profile likelihood, the combination of multiple sources is straightforward. Once log-likelihood functions $L_{S}(\lambda_n)$ are obtained for all sources, the combined log-likelihood $L_{comb}$ is simply given by their sum:
\begin{equation}\label{eq:combination of likelihood}
L_{comb}(\lambda_n) = \sum_{\mathrm{all\ sources}} L_{S}(\lambda_n).
\end{equation}
\subsection{Statistical and systematic uncertainties}\label{subsec:uncertainties}
Statistical and systematic uncertainties are propagated in the final result through the use of profile likelihood. The log-likelihood for each source is then written as:
\begin{multline}
L(\lambda_n,\vec{\theta}) = L_\mathrm{S}(\lambda_n,\vec{\theta}) + L_\mathrm{template}(\vec{\theta}_\mathrm{C}) + L_\mathrm{\gamma}(\theta_\mathrm{\gamma}) +\\
L_\mathrm{B}(\vec{\theta}_\mathrm{B}) + L_\mathrm{ES}(\theta_\mathrm{ES}) + L_\mathrm{z}(\theta_\mathrm{z}),
\end{multline}
where $\vec{\theta}$ is the vector of all nuisance parameters defined as:
\begin{enumerate}[label=\alph*.]
\item $\vec{\theta}_\mathrm{C}$, the parameters of the light curve analytic parameterization,
\item $\theta_\mathrm{\gamma}$, the power law index of signal events spectrum,
\item $\vec{\theta}_\mathrm{B}$, the ratio of signal and of background event numbers to the total number of events,
\item $\theta_\mathrm{ES}$, the energy scale,
\item $\theta_\mathrm{z}$, the distance or redshift.
\end{enumerate}
\begin{table*}[t!]
\caption{Simulation settings for the individual sources. \label{tab:simulation}}
\begin{center}
\scriptsize
\begin{tabular}{lllllll}
\hline
\hline
Source & Energy Range & Time Range$^a$ & Spectral index & Lightcurve shape & Number of events & Background proportion \\
& (TeV) & & $\Gamma_s$, $\Gamma_b$ & & likelihood$^b$, template$^c$ & hadronic, baseline \\
\hline
GRB 190114C & $0.3$ - $2$ & $60$ - $1200$ s & $5.43$, - & curved power law & $726$, - & $0.055$, $0.$\\
PG 1553+113 & $0.4$ - $0.8$ & $0$ - $8000$ s & $4.8$, $4.8$ & double Gauss & $72$, $82$ & $0.29$, $0.15$ \\
Mrk~501 & $0.25$ - $11$ & 0 - $1531$ s & $2.2$, $2.2$ & single Gauss & $1800$, - & $0.39$, $0.$\\
PKS 2155-304 & $0.28$ - $4$ & $0$ - $4000$ s & $3.46$, $3.32$ & 5 asymmetric Gauss & $2965$, $561$ & $0.$, $0.02$ \\
Crab (M) & $0.4$ - $7$ & $0.36$ - $0.45$ & $2.81$, $2.47$ & single Gauss + Baseline & $14869$, - & $0.$, $0.961$ \\
Crab (V) & $0.2$ - $10$ & $0.37$ - $0.43$ & $3.25$, $2.47$ & single Gauss + Baseline & $22764$, - & $0.$, $0.964$\\
Vela & $0.06$ - $0.15$ & $0.50$ - $0.60$ & $3.9$, $1.75$ & asymmetric Lorentzian & $330820$, - & $0.$, $0.998$ \\ \hline
\end{tabular
\end{center}
{Notes.}\\
$^a$ For pulsars, the phase range is given, \textit{i.e.} the time range normalized with respect to the rotation period.
$^b$ Number of photons considered when computing the likelihood, \textit{i.e.} excluding the ones used for template determination. $^c$ A sign '-' means no template was used (see Section~\ref{subsec:uncertainties} for details).
\end{table*}
As already mentioned above, the template light curve $C_s$ of Equation~(\ref{eq:pdf source}) is obtained by fitting a low energy light curve, for which LIV is assumed to be negligible. From this parameterization, it is possible to evaluate errors directly, defining $L_\mathrm{template}(\vec{\theta}_\mathrm{C})$ as the sum of the log-likelihoods of each event generated from the low energy template parameterization:
\begin{equation}
L_\mathrm{template}(\vec{\theta}_\mathrm{C}) = -\sum^{N_{\mathrm{template}}}_{i=1} \log\left( \frac{C_s(t_i,\vec{\theta}_\mathrm{C})}{N_c}\right),
\end{equation}
with $C_s$ the light curve and $N_c$ its normalization. In this equation, the new notation for the light curve $C_s(t_i,\vec{\theta}_\mathrm{C})$ denotes the fact the template is evaluated for a zero-lag ($D(E_{i},\lambda_n = 0,z) = 0$), and explicitly shows the parameter vector $\vec{\theta}_\mathrm{C}$ of the template function. On the other hand, some other analyses use the template fit results as nuisance parameters. In that case, $L_\mathrm{template}(\vec{\theta}_\mathrm{C}) = 0$ and the uncertainty on $\vec{\theta}_\mathrm{C}$ is then accounted for in the generated data sample log-likelihood $L_\mathrm{S}(\lambda,\vec{\theta})$ defined in Equation~(\ref{eq:LikelihoodData}).
$L_\mathrm{\gamma}(\theta_\mathrm{\gamma})$ is obtained from the statistical and systematical uncertainties of the spectral index as provided in the analyses of the different sources. The flux normalization and energy scale uncertainties provided by the different observatories are taken into account by $L_\mathrm{B}(\vec{\theta}_\mathrm{B})$ and $L_\mathrm{ES}(\theta_\mathrm{ES})$, respectively. The energy scale parameter is introduced in the data sample log-likelihood by a scale factor applied to the event energy. The uncertainties on redshift for extragalactic sources, or distance for galactic sources, are accounted for in $L_\mathrm{z}(\theta_\mathrm{z})$.
For the power law index, ratio of signal and of background, energy scale and redshift uncertainties, a normal distribution is assumed which allows to use a simple chi-square approach:
\begin{equation}
L_\mathrm{x}(\vec{\theta}_\mathrm{x}) = \sum_{i} \frac{(\theta_{\mathrm{x},i} - \Bar{\theta}_{\mathrm{x},i})^2}{2\sigma^2_{\theta_{\mathrm{x},i}}},
\end{equation}
where $\sigma_{\theta_{\mathrm{x},i}}$ is the uncertainty on the nuisance parameter $\theta_{\mathrm{x},i}$ and the index $\mathrm{x}$ runs over the different types of systematic uncertainties. The full list of uncertainties assigned to each nuisance parameter for each source is shown in Table~\ref{tab:nuisance}.
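For instance, the Gaussian penalty for a single nuisance parameter can be written in a few lines (a sketch; the 17\% energy-scale uncertainty used below is the value quoted in Table~\ref{tab:nuisance} for the MAGIC sources):
\begin{verbatim}
# Chi-square penalty added to the log-likelihood for one nuisance parameter.
def nuisance_penalty(theta, theta_bar, sigma):
    return (theta - theta_bar)**2 / (2.0 * sigma**2)

print(nuisance_penalty(1.05, 1.0, 0.17))  # energy-scale factor vs. 17%
\end{verbatim}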
In order to illustrate the impact of the different sources of uncertainties, the uncertainty on $\lambda_n$ is derived by varying only one of the nuisance parameters at a time. The systematic errors are then derived assuming the total uncertainty is the quadratic sum of the statistical and systematic uncertainties. They are presented in the Appendix with Table~\ref{tab:systematics_res} for the J\&P case and Table~\ref{tab:systematics_res_DSR} for the DSR case, for each source and each source combination. These results will be commented further in Section~\ref{sec:res}.
\section{Simulations}\label{sec:simulations}
\subsection{Simulated data sets}\label{subsec:sim_data sets}
\subsubsection{Data sets choice criteria}\label{subsubsec:srcs_criteria}
The sources used in this study are listed in Table~\ref{tab:simulation}. They have all been detected by the three experiments H.E.S.S., MAGIC and VERITAS, and have been selected to gather a representative sample. Namely, the three types of source were selected: one GRB, three flaring AGN and two PSR, with LIV results already published, and with the following additional criteria:
\begin{enumerate}[label=\alph*.]
\item The three flaring AGN show different signal to background ratios: negligible background for \object{PKS~2155-304} and \object{Mrk~501} and substantial background level for \object{PG~1553+113},
\item The sources show very different light curve shapes, from a single Gaussian pulse for \object{Mrk~501} flare of 2005 to multiple asymmetric spikes for \object{PKS~2155-304} flare of 2006,
\item The sources selected cover a wide range in distance, from 2~kpc for the Crab PSR to a redshift of 0.49 for \object{PG~1553+113},
\item The two PSR have different distances and were observed on very different time scales,
\item In addition, \object{PG~1553+113} has a large uncertainty on the distance which was taken into account in the analysis.
\end{enumerate}
In the following sub-section, the most important characteristics of the sources as taken from the references listed in Table~\ref{tab:all_res} are briefly summarized. The numbers given in Table~\ref{tab:simulation} were extracted from these references or provided by the authors in private communications. Then, the use of simulated data-sets to assess the performance of the method is described in Section~\ref{subsec:calperf}.
Unless specified otherwise, a spectral index $\Gamma_k = 2.7$ was used for hadrons.
\subsubsection{Source description}\label{subsubsec:srcs}
\object{GRB~190114C} is a gamma-ray burst detected on 2019 January 14 at 20:57:03 Universal Time (UT) and located at redshift $z=0.4245 \pm 0.005$~\citep{Selsing2019,Castro-Tirado2019}. Following the alert sent by \textit{Swift} \citep{Gropp2019}, MAGIC observed \object{GRB~190114C}, detecting a strong VHE $\gamma$-ray signal \citep{Acciari2019a}. The observations started 62 seconds after the beginning of the burst. A total of $\sim$700 events with energy ranging from 300~GeV to $\sim$2~TeV were recorded during the first 19 minutes of observations. The intrinsic energy distribution of the signal was fitted with a power law of index $2.5 \pm 0.2$, leading to an index of $\Gamma_s = 5.43\pm 0.22$ (statistical error only) when EBL absorption is taken into account. The time distribution of the events recorded by MAGIC follows a power law with index $1.51 \pm 0.04$. MAGIC did not observe the peak of the burst. Therefore, the light curve of the full burst, including the sharp rise to the peak flux followed by a power-law decay was modeled based on multiwavelength observations of the event and theoretical inference~\citep{Acciari2019b}.
{The prompt emission of GRB~190114C inferred from the keV--MeV light curves and spectra lasted no more than $25$ seconds after the onset of the GRB. This indicates that the emission observed by MAGIC is associated with the afterglow phase, rather than with the prompt phase, which typically shows irregular variability. However, as reported in \citet{Acciari2020}, a sub-dominant contribution from the prompt phase (at most 20\%) at early times of the afterglow ($t \lesssim 100$~s) cannot be entirely excluded. The lower bound of 60~s used in the present study was chosen to minimize this contribution while retaining statistics as high as possible.}
\object{Mrk~501} is a BL Lac object at redshift $z=0.03364$. The flare of 2005 July 9 was detected by the MAGIC telescope, at the time operating in monoscopic configuration \citep{Albert2007}. The flux of this flare reached a peak more than a factor of two higher than before and after the flare. A total of $\sim$1800 events with energy from 0.15 to 10 TeV were recorded during the flare, among which $\sim$700 could be associated with the background. The energy distribution of the signal and baseline events is well described by a power law of index $\Gamma_{s,b} = 2.2$ while the time distribution was parameterized by a single Gaussian spanning over $1600$ seconds.
\object{PKS 2155-304} is another BL Lac object at higher redshift $z=0.116$. The flare of 2006 July 28 detected by the H.E.S.S. telescopes is seemingly one of the brightest flares detected by the experiment so far, with a signal to noise ratio exceeding 300 \citep{Aharonian2007}. The light curve is parameterized by five asymmetric Gaussians with $2\%$ background over $4000$ seconds, for a total of $3526$ photons. The energy distribution is described by a power law of index $\Gamma_s = 3.46$ ranging from 0.25 to 4~TeV during the flare while the quiescent state leads to an index of~$\Gamma_b = 3.32$.
\object{PG 1553+113}, yet another BL Lac object, is the furthest source of this list with an estimated redshift $z=0.49\pm0.04$. The flare of 2012 April 26-27 was detected by H.E.S.S. telescopes where its flux increased three-fold as compared to its quiescent state \citep{Abramowski2015}. The time distribution was parameterized by two Gaussians with 154 photons over $8000$ seconds, where background accounts for $44\%$ of the events with $30\%$ gamma-like hadrons and $14\%$ baseline photons. The energy distribution spreading between $0.3$ and $0.8$ TeV is described by a power law of index $\Gamma_{s,b} = 4.8$ for signal and baseline photons.
The Vela Pulsar (\object{PSR B0833-45}), located at $294\pm76$~pc, rotates with a period of $89$~ms. The data simulated in this work are from a compilation of observations with the H.E.S.S. large telescope from March 2013 to April 2014, for which a LIV analysis was performed \citep{Chretien2015, ChretienPhD}. {330,820 pulsed events} between 60 and 150~GeV were recorded with a signal to noise ratio of $0.012$. The phase distribution is parameterized by an asymmetric Lorentzian between $0.5$ and $0.6$. Background accounts for $98.8\%$ of the events with only baseline photons. The energy distribution is described by a power law of index $\Gamma_s = 3.9$ for signal and $\Gamma_b = 1.75$ for baseline photons.
\begin{figure}[t!]
\plotone{Fig2.pdf}
\caption{Bias $\lambda_{rec} - \lambda_{inj}$ vs. number of bins for GRB~190114C in the linear case and J\&P formalism. The number of bins in the table is chosen so that the bias (black line) is compatible with zero within its 1$\sigma$ uncertainty range (gray envelope). The same number of bins is used for measured energy, non-delayed arrival times, and time delays.}
\label{fig:convergence}
\end{figure}
The Crab Pulsar (\object{PSR B0531+21}) has a $33.7$ ms period and is located at $2.0 \pm 0.5$ kpc. One of the data sets used in this work, noted ``Crab M'' hereafter, is a compilation of observations with MAGIC telescopes from 2005 to 2017 \citep{Ahnen2017}. $3080 \pm 460$ excess events from the P2 region of the phase were recorded, from which $544 \pm 92$ have a reconstructed energy above 400~GeV and are used in the LIV analysis. The phase distribution of the P2 peak was parameterized by a Gaussian. A profiling of the nuisance parameters yielded a mean of $\Phi=0.403$ (respectively $0.401$) and standard deviation $0.015$ ($0.011$) for $n=1$ ($n=2$). Background accounts for 96\% of the events with only baseline photons. The energy distribution was described by a power law of index $\Gamma_s = 2.81$ for signal and $\Gamma_{b,k} = 2.47$ for combined background events and baseline photons.
The other data set, noted ``Crab V'', is a compilation of high quality data taken with the VERITAS telescopes between 2007 and 2011. {22,764 pulsed events} were recorded from the P2 region and its baseline, where background accounts for $96.4\%$ of the events with again only baseline photons \citep{Zitzer2013}. The phase distribution was also parameterized with a Gaussian centered on $0.398$ with a standard deviation of $0.0116$. The energy distribution was again described by a power law of index $\Gamma_s = 3.25$ for signal and $\Gamma_b = 2.47$ for baseline photons.
\subsection{Method calibration and performance}\label{subsec:calperf}
The normalization factor $N'_{s}$ of the PDF of Equation~(\ref{eq:pdf source+det}) is a triple integral, the computation of which is particularly time-consuming since it needs to be done for each minimization step and for each event of the sample. To decrease the computation time, the PDF is pre-calculated and stored in tables binned over measured energy $E_{m}$, non-delayed arrival times $t$, and time delays $D(E_t,\lambda_n,z)$. The same number of bins is used for each of these three variables. A {trilinear} interpolation is performed on these tables to extract PDF values for the likelihood computation.
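For illustration, the lookup step can be sketched as follows. This is a minimal example assuming a regular grid with bin-center coordinates; the function name and array layout are hypothetical and do not correspond to the actual analysis code.
\begin{verbatim}
import numpy as np

def trilinear_lookup(table, axes, point):
    # table: PDF values on a regular 3-D grid over (E_m, t, D)
    # axes:  three 1-D arrays of bin-center coordinates
    # point: (E_m, t, D) where the PDF value is needed
    idx, frac = [], []
    for ax, x in zip(axes, point):
        i = int(np.clip(np.searchsorted(ax, x) - 1, 0, len(ax) - 2))
        idx.append(i)
        frac.append((x - ax[i]) / (ax[i + 1] - ax[i]))
    val = 0.0
    for corner in range(8):  # blend the 8 surrounding grid nodes
        w, sel = 1.0, []
        for d in range(3):
            bit = (corner >> d) & 1
            w *= frac[d] if bit else 1.0 - frac[d]
            sel.append(idx[d] + bit)
        val += w * table[tuple(sel)]
    return val
\end{verbatim}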
\begin{figure*}[t]
\plotone{Fig3.pdf}
\caption{The center plot shows the distribution of the reconstructed lag in the case of GRB~190114C, J\&P formalism for the linear case. The plot on the left (respectively on the right) shows the distribution of the lower (upper) limits of the confidence interval for 68\% CL. The three distributions are obtained with a zero injected lag. The histograms are fitted with asymmetric Gaussian functions, parameters of which are used in turn to produce calibration plots (see the text for details).}
\label{fig:LL_UL_distribs}
\end{figure*}
\begin{figure*}[t!]
\plottwo{Fig4a.pdf}{Fig4b.pdf}
\caption{Calibration plot showing $\lambda_{rec}$ vs. $\lambda_{inj}$ for GRB~190114C (left) and all sources combined (right) in the linear case and J\&P formalism. The light gray area corresponds to the standard deviation of the $\lambda_{rec}$ distribution while the dark gray region shows the statistical uncertainty. For both plots, a function $a\ \lambda_{inj} + b$ is fitted (black line).}
\label{fig:calib}
\end{figure*}
The number of bins used in the tables has been chosen for each source to minimize the bias $\lambda_{rec} - \lambda_{inj}$ between the injected ($\lambda_{inj}$) and reconstructed ($\lambda_{rec}$) time delays. An example is shown in Figure~\ref{fig:convergence} for \object{GRB~190114C}. In this particular case, the plot shows that a minimum of $\sim$140~bins for each variable is required, and a conservative number of 200 was actually chosen. The optimal number of bins was found to be independent of the injected lag.
Four sets of tables were produced for each source, corresponding to the four configurations explored in this work: the J\&P or DSR formalism for the distance, each for linear and quadratic LIV effects.
In order to assess the sensitivity and precision of the lag reconstruction, simulated data sets were produced with different values for $\lambda_{inj}$. For each value of the injected lag, one thousand realizations of the light curve were simulated. The distribution of reconstructed values of $\lambda_n$ is shown in the central panel of Figure~\ref{fig:LL_UL_distribs} for \object{GRB~190114C}, in the J\&P case and $n = 1$. Lower and upper limits of the confidence intervals are taken as the values of $\lambda_n$ for which $2\,[L_S(\lambda_n)-\mathrm{min}(L_S)]=1$ for 68\% CL and $2\,[L_S(\lambda_n)-\mathrm{min}(L_S)]=3.84$ for 95\% CL (Equation~\ref{eq:LikelihoodData}). Their distributions for 68\% CL are displayed in the left and right panels of Figure~\ref{fig:LL_UL_distribs} respectively. All three distributions were fitted with asymmetric Gaussian functions providing three parameters: the average ($\lambda_{LL}$, $\lambda_{rec}$ and $\lambda_{UL}$) and standard deviations separately defined on the left and on the right of the maxima ($\sigma_{\lambda,l}$, $\sigma_{\lambda,r}$). While the latter account for statistical uncertainties only, the lower and upper limits $\lambda_{LL}$ and $\lambda_{UL}$ account for both statistical and systematic uncertainties.
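The interval extraction itself reduces to a simple threshold scan of the profile likelihood curve. A minimal sketch is given below, assuming the likelihood has already been profiled on a grid of $\lambda_n$ values; the function name is hypothetical.
\begin{verbatim}
import numpy as np

def likelihood_interval(lam_grid, L_S, delta=1.0):
    # delta = 1.0 for 68% CL, 3.84 for 95% CL (one parameter)
    two_dll = 2.0 * (np.asarray(L_S) - np.min(L_S))
    inside = np.asarray(lam_grid)[two_dll <= delta]
    return inside.min(), inside.max()  # (lambda_LL, lambda_UL)
\end{verbatim}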
\begin{figure*}[t]
\plotone{fig5a.pdf}
\plotone{fig5b.pdf}
\caption{Limits obtained for all individual sources and combinations for the linear (top) and quadratic (bottom) cases, for the J\&P (dots) and DSR (crosses) redshift dependence. Blue markers correspond to the case where only statistical errors are taken into account (``stat only'') while orange markers correspond to the case where both statistical and systematic errors are included (``stat+syst'').}
\label{fig:result-comb}
\end{figure*}
For extragalactic sources, the range for $\lambda_{inj}$ goes from $-5\,\sigma_0$ to $+5\,\sigma_0$, where $\sigma_0 = \mathrm{max}(\sigma_{\lambda_{rec},l}, \sigma_{\lambda_{rec},r})$ for $\lambda_{inj} = 0$. In the case of PSR, the range for $\lambda_{inj}$ is chosen so that the highest energy photons are not shifted out of the phase range given in Table~\ref{tab:simulation}.
The plots for $\lambda_{rec}$ versus $\lambda_{inj}$ were then produced for individual sources as well as for combinations, for the two correction orders and the two lag-distance models. Figure~\ref{fig:calib} shows two examples of calibration plots for \object{GRB~190114C} alone (left) and for all sources combined (right) in the linear and J\&P case. The reconstructed lag is fitted with a linear function $\lambda_{rec} = a\ \lambda_{inj} + b$. The plot for the GRB alone shows a clear decrease of the reconstruction error as the injected lag increases. This is a consequence of the peculiar shape of the light curve, which has a narrow peak followed by a power-law decay. As the value of $\lambda_{inj}$ increases, the peak of the light curve progressively enters the time window where the likelihood is computed, resulting in an improvement of the reconstruction precision.
The plot on the right shows the same behavior, illustrating how the GRB dominates over the other sources. Other examples of calibration plots are shown in Appendix~\ref{sec:annexB}. As none of them include the GRB, the consistency in the reconstruction error is maintained. All the plots produced show a very good reconstruction of the injected lag, with slopes $a$ very close to unity, while the bias $b$ is found to be close to zero.
\section{Results and discussion}\label{sec:res}
\subsection{Systematic uncertainties}
All systematic uncertainties are listed for individual sources and combinations in Table~\ref{tab:systematics_res} (J\&P case) and Table~\ref{tab:systematics_res_DSR} (DSR case) in the Appendix. For most of the individual sources, the dominant systematic is the statistical uncertainty of the light curve template. Since the time lag intensifies as the correction order $n$ gets larger, the template uncertainties contribute comparatively less in the quadratic case than in the linear one. For other individual sources, the precision of the energy distribution of the events prevails. Indeed, the energy scale uncertainty is found to be the most important source of systematics for the Crab pulsar observed by MAGIC, the Vela pulsar and \object{Mrk~501}, for the quadratic case. This is expected since the time delay depends on the energy squared. A similar behavior is observed for \object{GRB~190114C} and the Crab pulsar observed by VERITAS, where the uncertainty on the spectral slope dominates.
\begin{table*}[t]
\caption{95\% CL limits obtained for individual objects and combinations.\label{tab:combined_res}}
\begin{center}
\scriptsize
\begin{tabular}{lrlrlrlrl}
\hline
\hline
Source & \multicolumn{4}{c}{$E_{QG,1}$} & \multicolumn{4}{c}{$E_{QG,2}$} \\
& \multicolumn{2}{c}{J\&P} & \multicolumn{2}{c}{DSR} & \multicolumn{2}{c}{J\&P} & \multicolumn{2}{c}{DSR} \\
& \multicolumn{2}{c}{($10^{18}$ GeV)} & \multicolumn{2}{c}{($10^{18}$ GeV)} & \multicolumn{2}{c}{($10^{10}$ GeV)} & \multicolumn{2}{c}{($10^{10}$ GeV)} \\
& w/o syst. & w/ syst. & w/o syst. & w/ syst. & w/o syst. & w/ syst. & w/o syst. & w/ syst. \\
\hline
GRB 190114C & 9.2 & 4.0 & 6.5 & 2.7 & 14.2 & 8.3 & 9.5 & 5.8 \\
PKS 2155-304 & 2.8 & 1.0 & 2.6 & 0.9 & 8.2 & 6.2 & 7.2 & 5.5 \\
Mrk~501 & 1.1 & 0.5 & 1.1 & 0.5 & 9.6 & 7.1 & 9.3 & 6.9 \\
PG 1553+113 & 0.17 & 0.11 & 0.10 & 0.07 & 1.3 & 1.0 & 0.87 & 0.68 \\
Crab (M) & 0.80 & 0.65 & - & - & 3.0 & 2.5 & - & - \\
Crab (V) & 0.48 & 0.10 & - & - & 1.5 & 0.94 & - & - \\
Vela & $5.1 \times 10^{-3}$ & $3.5 \times 10^{-3}$ & - & - & $5.6 \times 10^{-2}$ & $5.5 \times 10^{-2}$ & - & - \\
\hline
Crab (M+V) & 1.0 & 0.28 & - & - & 3.3 & 2.6 & - & - \\
PSR & 1.0 & 0.28 & - & - & 3.3 & 2.8 & - & - \\
AGN & 3.0 & 1.1 & 2.8 & 1.0 & 10.8 & 8.3 & 10.5 & 7.9 \\
AGN+PSR & 3.2 & 1.2 & 3.0 & 1.1 & 10.6 & 8.5 & 10.1 & 8.3 \\
GRB+PSR & 9.2 & 4.1 & 6.6 & 2.8 & 14.3 & 9.2 & 9.1 & 7.0 \\
GRB+AGN & 9.5 & 4.1 & 6.9 & 3.0 & 14.5 & 9.7 & 11.4 & 8.2 \\
\hline
All combined & 9.5 & 4.1 & 7.0 & 2.9 & 14.4 & 9.7 & 11.1 & 8.4 \\
\hline%
\end{tabular}%
\end{center}%
\end{table*}
For the combinations, the dominant systematic uncertainties are the ones of the sources that dominate the sample. The pulsar combinations are dominated by template statistics, while the combination of AGN shows a predominance of template statistics for $n=1$ and a domination of the energy scale for $n=2$, confirming the importance of the energy distribution uncertainty in the quadratic case. The combinations that include \object{GRB~190114C} follow a very similar trend due to the dominance of the GRB over the other sources. They show a clear dominance of the power-law index uncertainty, which is the main source of systematic uncertainty for the combination of all the sources.
\begin{figure*}[t]
\plottwo{Fig6a.pdf}{Fig6b.pdf}
\caption{Limits obtained from the simulated data sets for all individual sources for the linear (left) and quadratic (right) cases for the J\&P redshift dependence. Blue dots show the limits obtained taking into account statistical errors only (``stat only'') while yellow dots show the limits including both statistical and systematic errors (``stat+syst''). Blue crosses give the limits published from actual data sets (Table~\ref{tab:all_res}).}
\label{fig:result-1}
\end{figure*}
\subsection{Limits}
\begin{figure*}[t]
\plottwo{Fig7a.pdf}{Fig7b.pdf}
\caption{Comparison between the limits obtained in the J\&P framework (red dots) and the limits obtained in the DSR formalism (green crosses) in the linear case (left) and quadratic case (right). The limits shown include both statistical and systematic errors.}
\label{fig:result-3}
\end{figure*}
From Equation~(\ref{eq:lambda}), limits on $E_{QG}$ are given by:
\begin{equation}
\label{eq:limits}
\left[{ \frac{2}{n+1} \left( \lambda_{n,\pm} + \sqrt{\delta_{stat}^2 + \sigma^2\delta_{syst}^2} \right) \mathrm{H}_\mathrm{0}} \right] ^{\frac{1}{n}},
\end{equation}
where the subscript $\pm$ refers to subluminal and superluminal cases, $\delta_{stat}$ is the statistical error (standard deviation) on the normally distributed reconstructed value of $\lambda_{n}$, $\delta_{syst}$ is the overall systematic error obtained from the values listed in Tables \ref{tab:systematics_res} and \ref{tab:systematics_res_DSR} computed for a confidence level of 68\%, and $\sigma$ is a real number allowing for a shift in confidence level using the same systematic errors. Since systematic errors are computed for 68\%~CL, and statistical errors for 95\% CL, $\sigma$ is set to two in the following.
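As a literal transcription of Equation~(\ref{eq:limits}), the limit computation can be sketched as follows; the function and variable names are hypothetical and all inputs are assumed to be expressed in consistent units.
\begin{verbatim}
import numpy as np

def eqg_limit(lam, d_stat, d_syst, n, H0, sigma=2.0):
    # sigma = 2 shifts the 68% CL systematic errors to the
    # 95% CL used for the statistical errors and final limits
    total = lam + np.sqrt(d_stat**2 + (sigma * d_syst)**2)
    return (2.0 / (n + 1) * total * H0) ** (1.0 / n)
\end{verbatim}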
Using Equation~(\ref{eq:limits}), $E_{QG}$ limits were obtained for both subluminal and superluminal cases. Both approaches give comparable results and only the subluminal limits are shown in Table \ref{tab:combined_res}. In Figure \ref{fig:result-comb}, results are given with and without accounting for systematic uncertainties, clearly demonstrating the importance of taking them into account. In some cases, systematics lead to limits smaller by a factor of $\gtrsim 2$ compared to the case where they are not taken into account. For pulsars, the DSR and J\&P formalisms lead to the same limits, so they are given only for the J\&P case in the table.
Figure~\ref{fig:result-1} shows a comparison between the already published results taken from the references listed in Table~\ref{tab:all_res} and the ones obtained in the present study, for $n=1$ (left) and $n=2$ (right). Overall, the agreement between simulations and data is good, showing the simulated data sets represent the actual data well. The observed differences are most probably due to three different factors. First, the limits obtained in this work come from several hundred realizations of the light curves while already published limits were derived from one (measured) light curve. Second, the systematic uncertainties in previous publications were evaluated with different methods. These methods can vary from one analysis to another, but most use a frequentist approach, while in the present paper nuisance parameters and profile likelihood were used (Section~\ref{subsec:uncertainties}). Finally, IRFs were fully taken into account in the present analysis while they were often approximated as constant in energy in earlier articles. The latter point was fully justified at the time by the use of a somewhat reduced energy range, a restriction we chose to lift in the present analysis.
As expected from previously published results, GRB~190114C is the most constraining source due to its high redshift, high variability and large statistics, as well as the fact that it was observed over a wide energy range. Therefore, it dominates the final result whenever it is included in the combination.
When the GRB observation is not included, AGNs dominate, with a competition between \object{PKS~2155-304} and \object{Mrk~501}. Due to its smaller number of events and limited energy range, the \object{PG~1553+113} limit is less constraining, even though its redshift is the highest of all the sources included in this work. While \object{PKS~2155-304} dominates over the other sources in the linear case due to its higher redshift and event statistics, \object{Mrk~501} dominates the limit in the quadratic case due to its energy range extending twice as high as the one of \object{PKS~2155-304}.
PSR have only a marginal impact on the overall combination due to their proximity. The Crab pulsar dominates the combined PSR limit thanks to its higher statistics, wider energy range and greater distance. However, it is important to note that the limits provided by pulsars are independent of the redshift dependence model, providing model-free constraints.
Figure~\ref{fig:result-3} shows the limits on $E_{QG,n}$ as a function of the redshift for both the DSR and J\&P models. Differences in the results from the two approaches start to become significant for high redshift sources such as \object{GRB~190114C} or \object{PG~1553+113}, in accordance with the $\kappa_n$ parameter evolution shown in Figure~\ref{fig:kappaz}. Since $\kappa^\mathrm{J\&P} > \kappa^\mathrm{DSR}$ and $\kappa^\mathrm{J\&P}$ increases faster than $\kappa^\mathrm{DSR}$ (Section~\ref{sec:dist}), the J\&P model emphasizes the impact of large redshift sources on the limits. Therefore, the GRB dominates more in the J\&P case than in the DSR case, where all source contributions are more balanced.
\section{Conclusions}
In the present paper, we have described an implementation of the likelihood analysis designed to combine data from different sources and experiments in the search for LIV-induced energy-dependent time delays. One of the most important benefits of the likelihood technique is its simplicity for such a combination. In order to check the method and evaluate its performance, simulated data sets were produced mimicking actual observations of one GRB, three flaring AGN and two pulsars by the H.E.S.S., MAGIC and VERITAS experiments. We paid particular attention to the implementation of the algorithm, checking for any bias and carefully evaluating statistical and systematic errors, and their combination within the different experiments. For the first time, two different formalisms were studied concerning the way the distance is taken into account in the time-lag computation. Others could be added in the future \citep[see e.g.][]{Amelino2021}. As the next step, the software developed for this work will be applied to all available data sets recorded so far by H.E.S.S., MAGIC and VERITAS, and perhaps by other experiments, and the results will be published in the second part of this work.
Another important advantage of likelihood analysis is its adaptability. Indeed, nothing prevents, in principle, including other effects on production or propagation of photons in the probability density function. Two examples can be pointed out. First, as mentioned in the introduction, it is known that LIV could modify the absorption of VHE photons by the EBL changing the shape of high energy spectra. Assuming that QG affects both the photon group velocity and photon interactions, the likelihood technique could be used to provide combined EBL and delay constraints on QG models. It is not clear, however, whether the different effects would manifest at the same energy scale, or if a different energy scale is applicable for each effect.
Second, it should be possible to include other types of delays in the probability function to probe both propagation and source-intrinsic time lags. Despite some recent exploratory work on that topic \citep[see e.g.][for the case of blazar flares]{Perennes2020}, the latter are still poorly understood. In addition, intrinsic effects are most probably different from one type of source to another and even from one sub-type to another: short or long GRBs, blazars or flat spectrum radio quasars. We therefore chose not to include them in the present study. Intrinsic effects are a critical aspect and they will need to be addressed in the future.
The Cherenkov Telescope Array\,\footnote{\url{https://www.cta-observatory.org}} (CTA) will start operating soon, superseding the current-generation IACTs in the years 2025-2030 \citep{Acharya2019}. Thanks to its better overall performance and dedicated observation strategies to maximize the number of transient event detections, it is expected that both CTA arrays (one in each hemisphere) will be able to detect a large number of PSR, AGN flares and GRB. Different sub-array configurations will be used in order to optimize the observation program and combining data will therefore become very important. {As a result, CTA will be much more sensitive to LIV effects than current generation experiments.} The tools developed in this work will be made publicly available concurrently with the publication of the second paper and adapted to be used in CTA analysis software architecture.
\begin{acknowledgments}
The authors would like to thank collabora\-tions H.E.S.S., MAGIC and VERITAS for their support in the making of this joint effort and for allowing the use of IRFs for the set of sources used in this paper. They also would like to acknowledge networking support by the COST Action CA18108 (\url{https://qg-mm.unizar.es/}). This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No. 754510, from the ERDF under the Spanish Ministerio de Ciencia e Innovaci\'{o}n (MICINN), grant PID2019-107847RB-C41, the Centro de Excelencia ``Severo Ochoa'' (SEV-2016-0588) and from the CERCA program of the Generalitat de Catalunya.
T.T. acknowledges funding from the University of Rijeka, project number uniri-prirod-18-48, and from the Croatian Science Foundation (HrZZ), project number IP-2016-06-9782.
The authors would like to thank G. D'Amico for his useful comments on the draft as well as G. Rosati and C. Pfeifer for insightful discussions on lag-redshift dependence. {Finally, the authors express their gratitude to the anonymous referee who helped clarifying some parts of the paper.}
This paper is dedicated to the memory of our colleague and friend A. Jacholkowska, who initiated this work and put it on the best tracks towards a successful completion.
\end{acknowledgments}
\facilities{HESS,MAGIC,VERITAS}
\software{ROOT}
\newpage
\section{Introduction}
\allowdisplaybreaks
Interest in the control systems community in performance and robustness analysis of large-scale dynamical networks is rapidly growing \cite{Bamieh12, Siami13cdc, Young10, Bamieh11, abbas, Zelazo-Allgower, LovisariGarinZampieriResistance, Nicola, jovbamTAC05platoons, linfarjovTAC12platoons}. Improving global performance as well as robustness to external disturbances in large-scale dynamical networks is crucial for sustainability, from engineering infrastructures to living cells; examples include a group of autonomous vehicles in a formation, distributed emergency response systems, interconnected transportation networks, energy and power networks, metabolic pathways and even financial networks. One of the fundamental problems in this area is to determine to what extent uncertain exogenous inputs can steer the trajectories of a dynamical network away from its working equilibrium point. To tackle this issue, the primary challenge is to introduce meaningful and viable performance and robustness measures that can capture essential characteristics of the network. A proper measure should be able to encapsulate transient, steady-state, macroscopic, and microscopic features of the perturbed large-scale dynamical network.
In this paper, we propose a new methodology to classify several proper performance measures for a class of linear consensus networks subject to external stochastic disturbances. We take an axiomatic approach to quantify essential functional properties of a number of sensible measures by introducing the class of systemic performance measures and show that this class of measures should satisfy monotonicity, convexity, and orthogonal invariance properties. It is shown that several existing and widely used performance measures in the literature are in fact special cases of this class of systemic measures \cite{Siami14acc, Zelazo-Allgower,Young10,noami11,Jadbabaie13}.
The performance analysis of linear consensus networks subject to external stochastic disturbances has been studied in \cite{Bamieh12, Siami13cdc, Siami14arxiv, SiamiNecSys, noami11, Jadbabaie13, dorjovchebulTPS14}, where the $\mathcal H_2$-norm of the network was employed as a scalar performance measure. In \cite{Bamieh12}, the authors interpret the $\mathcal H_2$-norm of the system as a macroscopic performance measure capturing the notion of coherence. It has been shown that if the Laplacian matrix of the coupling graph of the network is normal, the $\mathcal H_2$-norm is a function of the eigenvalues of the Laplacian matrix \cite{noami11,Bamieh12, SiamiNecSys}. In \cite{Siami13cdc}, the authors consider general linear dynamical networks and show that tight lower and upper bounds can be obtained for the $\mathcal H_2$-norm of the network from the exogenous disturbance input to a performance output, which are functions of the eigenvalues of the state matrix of the network. Besides the commonly used $\mathcal H_2$-norm, there are several other performance measures that have been proposed in \cite{Bamieh12, Zelazo-Allgower, Olfati-saber07}. In \cite{Siami14acc}, a partial ordering on linear consensus networks is introduced and it is shown that several previously used performance measures are indeed Schur-convex functions in terms of the Laplacian eigenvalues. In a closely related work, the authors of \cite{Siami14cdc-2} show that performance measures that are defined based on some system norms, spectral, and entropy functions exhibit several useful functional properties that allow us to utilize them in network synthesis problems.
The first main contribution of this paper is the introduction of a class of systemic performance measures that are spectral functions of the Laplacian eigenvalues of the coupling graph of a linear consensus network. Several gold-standard and widely used performance measures belong to this class, to name only a few: the spectral zeta function, Gamma entropy, expected transient output covariance, system Hankel norm, convergence rate to the consensus state, logarithm of the uncertainty volume of the output, Hardy-Schatten system norm or $\mathcal{H}_p$-norm, and many more. All these performance measures are monotone, convex, and orthogonally invariant. Our main goal is to investigate a canonical network synthesis problem: growing a linear consensus network by adding new interconnection links to the coupling graph of the network and minimizing a given systemic performance measure. In the context of graph theory, it is known that a simpler version of this combinatorial problem, when the cost function is the inverse of the algebraic connectivity, is indeed NP-hard \cite{Mosk}. There have been some prior attempts to tackle this problem for some specific choices of cost functions (i.e., total effective resistance and the inverse of the algebraic connectivity) based on semidefinite programming (SDP) relaxation methods \cite{Kolla, Ghosh2006}. There is a similar version of this problem reported in \cite{Fardad}, where the author studies the convergence rate of circulant consensus networks after adding some long-range links. Moreover, a continuous (non-combinatorial) and relaxed version of our problem of interest has some connections to the sparse consensus network design problem \cite{mogjovACC15, wujovACC14, farlinjovTAC14sync}, where $\ell_1$-regularized $\mathcal H_2$-optimal control problems are considered. Other related works \cite{Summers16, SummersECC} argue that some metrics based on controllability and observability Gramians are modular or submodular set functions, and show that simple greedy heuristic algorithms have guaranteed sub-optimality bounds.
In our second main contribution, we propose two efficient polynomial-time approximation algorithms to solve the above mentioned combinatorial network synthesis problem: a linearization-based method and a simple greedy algorithm based on rank-one updates. Our complexity analysis shows that the computational complexity of our proposed algorithms is reasonable and makes them particularly suitable for the synthesis of large-scale consensus networks. To calculate sub-optimality gaps of our proposed approximation algorithms, we quantify the best achievable performance bounds for the network synthesis problem in Section \ref{sec:672}. These fundamental limits are particularly useful as they depend only on the spectrum of the original network and can be computed a priori. In Subsection \ref{subsec1}, we classify a subclass of differentiable systemic performance measures that are indeed supermodular. For this subclass, we show that our proposed simple greedy algorithm achieves a $(1- 1/e)$-approximation of the optimal solution of the combinatorial network synthesis problem. Our extensive simulation results confirm the effectiveness of our proposed methods.
\section{Preliminaries and Definitions}
\label{sec:123}
\allowdisplaybreaks
\subsection{Mathematical Background
The set of real numbers is denoted by ${\mathbb{R}}$, the set of non-negative real numbers by ${\mathbb{R}} _{+}$, and the set of positive real numbers by ${\mathbb{R}} _{++}$. The cardinality of set ${\mathcal{E}}$ is shown by $|{\mathcal{E}}|$. We assume that $\mathbbm{1}_n$, $I_n$, and $J_n$ denote the $n \times 1$ vector of all ones, the $n \times n$ identity matrix, and the $n \times n$ matrix of all ones, respectively. For a vector $v = [v_i] \in \mathbb R^n$, ${\text{diag}}(v) \in {\mathbb{R}}^{n \times n}$ is the diagonal matrix whose diagonal entries are the elements of $v$, and for $ A= [a_{ij}] \in \mathbb R^{n \times n}$, ${\text{diag}}(A) \in {\mathbb{R}}^{n}$ is the vector of diagonal elements of the square matrix $A$. We denote the generalized matrix inequality with respect to the positive semidefinite cone $\mathbb{S}^n_{+}$ by ``$\,\preceq\,$''.
{Throughout this paper, it is assumed that all graphs are finite, simple, undirected, and connected.} A graph herein is defined by a triple ${\mathcal{G}} = ({\mathcal{V}}, {\mathcal{E}},w)$, where ${\mathcal{V}}$ is the set of nodes, ${\mathcal{E}} \subseteq \big\{\{i,j\}~\big|~ i,j \in {\mathcal{V}}, ~i \neq j \big\}$ is the set of links, and $w: {\mathcal{E}} \rightarrow {\mathbb{R}}_{++}$ is the weight function.
{The adjacency matrix $A = [a_{ij}]$ of graph ${\mathcal{G}}$ is defined in such a way that $a_{ij} = w(e)$ if $e=\{i,j\} \in {\mathcal{E}}$, and $a_{ij}=0$ otherwise. The Laplacian matrix of ${\mathcal{G}}$ is defined by $L := \Delta - A$, where $\Delta={\text{diag}}[d_{1},\ldots,d_{n}]$ and $d_i$ is degree of node $i$.} We denote the set of Laplacian matrices of all connected weighted graphs with $n$ nodes by ${\mathfrak{L}_{n}}$. Since ${\mathcal{G}}$ is both undirected and connected, the Laplacian matrix $L$ has $n-1$ strictly positive eigenvalues and one zero eigenvalue. Assuming that $0 = \lambda_1 < \lambda_2 \leq \ldots \leq \lambda_n$ are eigenvalues of Laplacian matrix $L$, we define operator ${{\Lambda}}: \mathbb{S}^n_{+} \rightarrow {\mathbb{R}}^{n-1}_{++}$ by
\begin{equation}
{\Lambda}(L) ~=~ \begin{bmatrix} \lambda_2 & \ldots & \lambda_n \end{bmatrix}^{\text T}. \label{eigen-fcn}
\end{equation}
The Moore-Penrose pseudo-inverse of $L$ is denoted by $L^{\dag}=[l_{ij}^{\dag}]$, which is a square, symmetric, doubly-centered and positive semi-definite matrix. For a given link $e=\{i,j\}$, $r_e(L)$ denotes the effective resistance between nodes $i$ and $j$ in a graph with the Laplacian matrix $L$, where its value can be calculated as follows
\begin{equation}
r_{e}(L)~=~l_{ii}^{\dag}+l_{jj}^{\dag}-2 \hspace{0.02cm} l_{ij}^{\dag},
\label{eq:148}
\end{equation}
where $L^{\dag}=[l_{ij}^{\dag}]$. For every real $q$, the $q$-th power of the pseudo-inverse of $L$ is denoted by $L^{\dag,q} := \left(L^{\dag} \right)^q.$
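As a numerical illustration, the effective resistance \eqref{eq:148} can be evaluated directly from the Moore-Penrose pseudo-inverse. The sketch below assumes $L$ is the Laplacian matrix of a connected graph; the function name is illustrative.
\begin{verbatim}
import numpy as np

def effective_resistance(L, i, j):
    # r_e between nodes i and j, via the formula above
    Ld = np.linalg.pinv(L)  # Moore-Penrose pseudo-inverse
    return Ld[i, i] + Ld[j, j] - 2.0 * Ld[i, j]
\end{verbatim}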
\begin{definition}
The derivative of a scalar function $\rho(.)$, with respect to the $n$-by-$n$ matrix $X$, is defined by
\[\triangledown \rho(X) ~:= \left[\begin{array}{cccc}
\frac{\partial \rho}{\partial x_{11}} & \frac{\partial \rho}{\partial x_{12}} & \ldots & \frac{\partial \rho}{\partial x_{1n}} \\
\frac{\partial \rho}{\partial x_{21}} & \frac{\partial \rho}{\partial x_{22}} & \ldots & \frac{\partial \rho}{\partial x_{2n}} \\
\vdots & \vdots & \ddots &\vdots \\
\frac{\partial \rho}{\partial x_{n1}} & \frac{\partial \rho}{\partial x_{n2}} & \ldots & \frac{\partial \rho}{\partial x_{nn}}
\end{array}\right] \label{state-matrix},\]
where $X=[x_{ij}]$. The directional derivative of function $\rho(X)$ in the direction of matrix $Y$ is given by
\[\triangledown_{Y} \rho(X)~=~~\big < \triangledown {{\rho}}(X) , Y \big> ~ =~\tr \left(\triangledown {{\rho}}(X) Y\right),\]
where $\left < .,. \right > $ denotes the inner product operator.
\end{definition}
The following Majorization definition is from \cite{marshall11}.
\begin{definition}
For every $x \in {\mathbb{R}}_+^n$, let us define $x^{\downarrow}$ to be a vector whose elements are a permuted version of elements of $x$ in descending order. We say that $x$ majorizes $y$, which is denoted by $x \unrhd y$, if and only if $\mathbf{1}^{\text T}x \, = \, \mathbf{1}^{\text T}y$ and $\sum_{i=1}^k x_i^{\downarrow} \, \geq \, \sum_{i=1}^k y_i^{\downarrow}$ for all $k=1,\ldots,n-1$.
\end{definition}
The vector majorization is not a partial ordering. This is because from relations $x \unrhd y$ and $y \unrhd x$ one can only conclude that the entries of these two vectors are equal, but possibly with different orders. Therefore, relations $x \, \unrhd \, y$ and $y \, \unrhd \, x$ do not imply $x=y$.
\begin{definition}[\cite{marshall11}]
The real-valued function $F: {\mathbb{R}}_+^n \rightarrow {\mathbb{R}}$ is called Schur--convex if $F(x) \geq F(y)$ for every two vectors $x$ and $y$ with property $x \unrhd y$.
\end{definition}
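For instance, $x = (3,1,0)^{\text T}$ majorizes $y = (2,2,0)^{\text T}$, since both sum to $4$ and $3 \geq 2$; hence every Schur-convex function, such as $F(x)=\sum_{i=1}^n x_i^2$, satisfies $F(x) = 10 \geq 8 = F(y)$.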
\subsection{Noisy linear consensus networks }
\label{sec:158}
We consider the class of linear dynamical networks that consist of multiple agents with scalar state variables $x_i$ and control inputs $u_i$ whose dynamics evolve in time according to
\begin{eqnarray}
\dot{x}_i(t) & = & u_i(t) +\xi_i(t) \label{TI-consensus-algorithm} \\
y_i(t) & = & x_i(t) - \bar{x}(t) \label{TI-consensus-algorithm-2}
\end{eqnarray}
for all $i=1,\ldots,n$, where $x_i(0)=x_i^*$ is the initial condition and \[\bar{x}(t)=\frac{1}{n}\big(x_1(t)+\ldots+x_n(t)\big)\]
is the average of all states at time instant $t$. The impact of the uncertain environment on each agent's dynamics is modeled by the exogenous noise input $\xi_i(t)$.
By applying the following feedback control law to the agents of this network
\begin{equation}
u_i(t) ~=~\sum_{j=1}^{n} k_{ij} \big(x_j(t) - x_i(t)\big),\label{feedback-law}
\end{equation}
the resulting closed-loop system will be a first-order linear consensus network. The closed-loop dynamics of network (\ref{TI-consensus-algorithm})-\eqref{TI-consensus-algorithm-2} with feedback control law \eqref{feedback-law} can be written in the following compact form
\begin{eqnarray}
\dot x(t) & = & -L\, x(t)~+~\xi(t)\label{first-order}\\
y(t) & = & M_n \, x(t), \label{first-order-G}
\end{eqnarray}
with initial condition $x(0)= x^*$, where $x = [x_1, \ldots, x_n]^{\rm T}$ is the state, $y = [y_1, \ldots, y_n]^{\rm T}$ is the output, and $\xi = [\xi_1, \ldots, \xi_n]^{\rm T}$ is the exogenous noise input of the network.
The state matrix of the network is a graph Laplacian matrix that is defined by $L=[l_{ij}]$, where
\begin{equation}
\displaystyle l_{ij} := \left\{\begin{array}{ccc}
-k_{ij} & \textrm{if} & i \neq j \\
& & \\
k_{i1}+\ldots+k_{in}& \textrm{if} & i=j
\end{array}\right.
\end{equation}
and the output matrix is a centering matrix that is defined by
\begin{equation}
M_n~:=~I_{n} - \frac{1}{n}J_n.
\end{equation}
The underlying coupling graph of the consensus network \eqref{first-order}-\eqref{first-order-G} is a graph ${\mathcal{G}}=({\mathcal{V}},\mathcal E, w)$ with node set ${\mathcal{V}}=\{1,\ldots,n\}$, edge set
\begin{equation}
{\mathcal{E}}=\Big\{ \{i,j\}~\big|~\forall~i,j \in {\mathcal{V}},~k_{ij} \neq 0\Big\}, \label{edge-set}
\end{equation}
and weight function
\begin{equation}
w(e)=k_{ij} \label{edge-weight}
\end{equation}
for all $e=\{i,j\} \in {\mathcal{E}}$, and $w(e)=0$ if $e \notin {\mathcal{E}}$. The Laplacian matrix of graph ${\mathcal{G}}$ is equal to $L$.
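A minimal numerical sketch of this construction is given below; the function name is hypothetical, and the gain matrix $K=[k_{ij}]$ is assumed to satisfy the properties listed in Assumption \ref{assump-simple} below.
\begin{verbatim}
import numpy as np

def closed_loop_matrices(K):
    # K: symmetric, non-negative gain matrix with zero diagonal
    n = K.shape[0]
    L = np.diag(K.sum(axis=1)) - K        # coupling-graph Laplacian
    M = np.eye(n) - np.ones((n, n)) / n   # centering matrix M_n
    return L, M
\end{verbatim}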
\begin{assumption}\label{assump-simple}
All feedback gains (weights) satisfy the following properties for all $i,j \in {\mathcal{V}}$:
\vspace{0.1cm}
\noindent (a)~non-negativity: $k_{ij} \geq 0$, \\
\noindent (b)~symmetry: $k_{ij}=k_{ji}$,\\
\noindent (c)~simpleness: $k_{ii}= 0$.
\vspace{0.1cm}
\end{assumption}
Property (b) implies that feedback gains are symmetric and (c) means that there is no self-feedback loop in the network.
\vspace{0.1cm}
\begin{assumption}\label{assum-coupling-graph}
The coupling graph ${\mathcal{G}}$ of the consensus network \eqref{first-order}-\eqref{first-order-G} is connected and time-invariant.
\end{assumption}
\vspace{0.1cm}
According to Assumption \ref{assump-simple}, the underlying coupling graph is undirected and simple. Assumption \ref{assum-coupling-graph} implies that only one of the modes of network \eqref{first-order} is marginally stable with eigenvector $\mathbbm{1}_n$ and all other ones are stable. The marginally stable mode, which corresponds to the only zero Laplacian eigenvalue of $L$, is unobservable from the output \eqref{first-order-G}. The reason is that the output matrix of the network satisfies $M_n \mathbbm{1}_n= 0$. When there is no exogenous noise input, i.e., $\xi(t) = 0$ for all time, the states of all agents converge to a consensus state \cite{Olfati-saber07, Jadbabaie}, which in our case is
\begin{equation}
\lim_{t \rightarrow \infty} x(t) ~=~ \bar x(0) \mathbbm{1}_n~=~\frac{1}{n}\mathbbm{1}_n\mathbbm{1}_n^{\text T} x^*. \label{limit-zero} %
\end{equation}
When the network is fed with a nonzero exogenous noise input, the limit behavior \eqref{limit-zero} is not expected anymore and the states of all agents will fluctuate around the consensus state without converging to it. Before providing a formal statement of the problem of growing a linear consensus network, we need to introduce a new class of performance measures for networks \eqref{first-order}-\eqref{first-order-G} that can capture the effect of noise propagation throughout the network and quantify the degree to which the states of all agents are dispersed from the consensus state.
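This fluctuation behavior can be reproduced with a simple Euler--Maruyama discretization of \eqref{first-order}; the following sketch is purely illustrative and assumes a white-noise input of unit intensity.
\begin{verbatim}
import numpy as np

def simulate_noisy_consensus(L, x0, T, dt, seed=0):
    # Euler-Maruyama discretization of dx = -L x dt + noise
    rng = np.random.default_rng(seed)
    x, traj = np.array(x0, dtype=float), []
    for _ in range(int(T / dt)):
        x = x - dt * (L @ x) \
              + np.sqrt(dt) * rng.standard_normal(len(x))
        traj.append(x - x.mean())  # deviation from consensus, y = M_n x
    return np.array(traj)
\end{verbatim}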
\section{Systemic Performance Measures}
\label{sec: 176}
The notion of systemic performance measure refers to a real-valued operator over the set of all linear consensus networks governed by \eqref{first-order}-\eqref{first-order-G} with the purpose of quantifying the quality of noise propagation in these networks.
We have adopted an axiomatic approach to introduce and categorize a class of such operators that are obtained through our close examination of functional properties of several existing gold standard measures of performance in the context of network engineering and science. In order to state our findings in a formal setting, we observe that every network with dynamics \eqref{first-order}-\eqref{first-order-G} is uniquely determined by its Laplacian matrix. Therefore, it is reasonable to define a systemic performance measure as an operator over the set of Laplacian matrices ${\mathfrak{L}_{n}}$.
\begin{definition}\label{def-schur-systemic}
An operator ${{\rho}}: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}$ is called a systemic performance measure if it satisfies the following properties for all Laplacian matrices in ${\mathfrak{L}_{n}}$:
{\vspace{0.0cm}
\noindent 1. {\it Monotonicity:} If $L_{2} \preceq L_{1}$, then
\[{{\rho}} (L_{1}) ~\leq~ {{\rho}} (L_{2});\]
\vspace{0.05cm}
\noindent 2. {\it Convexity:} For all $0 \leq \alpha \leq 1$,
\[{{\rho}} (\alpha L_{1}+(1-\alpha)L_{2})~\leq~ \alpha {{\rho}} (L_{1})+{(1-\alpha)}{{\rho}} (L_{2});\]
\vspace{0.05cm}
\noindent 3. {\it Orthogonal invariance:} For all orthogonal matrices $U \in {\mathbb{R}}^{n \times n}$,
\[{{\rho}}(L) ~=~ {{\rho}}(U L U^{\text T}).\]}
\end{definition}
\vspace{0.0cm}
Property 1 guarantees that strengthening couplings in a consensus network never worsens the network performance with respect to a given systemic performance measure. The coupling strength among the agents can be enhanced by several means, for example, by adding new feedback interconnections and/or increasing weight of an individual feedback interconnection. The monotonicity property induces a partial ordering\footnote{This implies that the family of networks \eqref{first-order}-\eqref{first-order-G} can be ordered using a relation that has reflexivity, antisymmetry, and transitivity properties.} on all linear consensus networks governed by \eqref{first-order}-\eqref{first-order-G}. Property 2 requires that a viable performance measure should be amenable to convex optimization algorithms for network synthesis purposes. Property 3 implies that a systemic performance measure depends only on the Laplacian eigenvalues.
\begin{theorem}\label{thm:schur-convex}
Every operator ${{\rho}}: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}$ that satisfies Properties 2 and 3 in Definition \ref{def-schur-systemic} is indeed a Schur-convex function of Laplacian eigenvalues, i.e., there exists a Schur-convex spectral function $\Phi: {\mathbb{R}}^{n-1} \rightarrow {\mathbb{R}}$ such that
\begin{equation}
{{\rho}}(L)~=~ \Phi(\lambda_2, \ldots, \lambda_n). \label{spectral-rho}
\end{equation}
\end{theorem}
\begin{proof}
For every $L \in {\mathfrak{L}_{n}}$, the value of the systemic performance measure can be written as a composition of two functions as follows
\begin{equation}
{{\rho}} (L) ~=~ (\phi \circ \Lambda)(L), \label{spectral-fcn}
\end{equation}
where function $\Lambda: \mathbb{S}^n_{+} \rightarrow {\mathbb{R}}^{n-1}_{++}$ is defined by \eqref{eigen-fcn} and function $\phi: \mathbb R_{++}^{n-1} \rightarrow {\mathbb{R}}$ is characterized by
\begin{equation}
\phi(v)~=~{{\rho}}(W^{\text T} \textrm{diag}(v) W)
\label{eq:250}
\end{equation}
for any matrix $W=EU$ with $U \in {\mathbb{R}}^{n \times n}$ being an orthogonal matrix satisfying $L=U^{\text T} \textrm{diag}([0,{\Lambda}(L)^{\text T}]) U$ and $E \in {\mathbb{R}}^{(n-1) \times n}$ given by the following projection matrix
\begin{equation}
E ~=~ \left[\begin{array}{ccc}
0_{(n-1) \times 1} & \big| & I_{n-1} \end{array}
\right].
\end{equation}
Thus, we can conclude that \eqref{spectral-rho} holds with $\Phi(\lambda_2, \ldots, \lambda_n)=\phi(\Lambda(L))$. In the next step, we need to show that operator ${{\rho}}$ is convex and symmetric with respect to the Laplacian eigenvalues $\lambda_2, \ldots, \lambda_n$. Property 2 indicates that ${{\rho}}$ is convex on Laplacian matrices and any convex function on Laplacian matrices is also a convex function with respect to the Laplacian eigenvalues \cite{boyd2006}. Property 3 implies that operator ${{\rho}}$ is symmetric with respect to $\lambda_2, \ldots, \lambda_n$ as ${{\rho}}$ is invariant under any permutation of the eigenvalues. It is known that every function that is convex and symmetric is also Schur-convex \cite{boyd2006}.
\end{proof}
\begin{table*}[t]
{\small
\begin{center}
\begin{tabular}{ | p{6.5cm} | p{9cm} |}
\hline
Systemic Performance Measure & Matrix Operator Form \\ \hline \hline
Spectral zeta function ${\zeta}_{q}(L)$ & $\left(\mathrm{Tr}\big( L^{\dagger,q} \big)\right)^{\frac{1}{q}}$ \vspace{0.1cm} \\
\hline
Gamma entropy $I_{\gamma}(L)$ & $\displaystyle \gamma^2 \mathrm{Tr}\Big(L - \big( L^2 - \gamma^{-2} M_n \big)^{\frac{1}{2}} \Big)$
\vspace{0.1cm}
\\
\hline
Expected transient output covariance $\tau_t(L)$ & $\displaystyle \frac{1}{2} \mathrm{Tr}\big(L^{\dagger} (I- e^{-L t})\big)$
\vspace{0.1cm}
\\
\hline
System Hankel norm $\eta(L)$ & $\displaystyle \frac{1}{2} \max \big\{ \mathrm{Tr}(L^{\dagger} X)~\big|~X=X^{\rm T},~ \mathrm{rank}(X)=1, ~\mathrm{Tr}(X)=1 \big\}
\vspace{0.05cm}
$ \\
\hline
\vspace{0.05cm}
Uncertainty volume of the output $\upsilon(L)$ & \vspace{0.05cm} $\displaystyle (1-n) \log 2 - \mathrm{Tr}\left(\log \left(L+\frac{1}{n} J_n\right)\right)$
\vspace{0.1cm}
\\
\hline
Hardy-Schatten system norm or $\mathcal{H}_p$-norm $\theta_p(L)$ & $\displaystyle \alpha_0 \left( \tr \left( L^{\dag,\,p-1}\right)\right)^{\frac{1}{p}}$
\\
\hline
\end{tabular}
\caption{ \small Some important examples of spectral systemic performance measures and their corresponding matrix operator forms. }
\label{matrix-operator}
\end{center}}
\vspace{-0.6cm}
\end{table*}
The Laplacian eigenvalues of network \eqref{first-order}-\eqref{first-order-G} depend on global features of the underlying coupling graph. This is the reason why every performance measure that satisfies Definition \ref{def-schur-systemic} is tagged with the adjective {\it systemic}. Table \ref{matrix-operator} shows some important examples of systemic performance measures and their corresponding matrix operator forms. In the appendix, we prove functional properties and discuss applications of these measures in detail.
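Since each entry of Table \ref{matrix-operator} is a spectral function, it can be evaluated directly from the nonzero Laplacian eigenvalues. The sketch below illustrates this for two of the measures; the function names are hypothetical.
\begin{verbatim}
import numpy as np

def spectral_zeta(L, q=1.0):
    lam = np.sort(np.linalg.eigvalsh(L))[1:]  # lambda_2,...,lambda_n
    return np.sum(lam ** (-q)) ** (1.0 / q)

def uncertainty_volume(L):
    lam = np.sort(np.linalg.eigvalsh(L))[1:]
    return (1 - L.shape[0]) * np.log(2) - np.sum(np.log(lam))
\end{verbatim}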
\begin{comment}
\begin{table*}[t]
\begin{center}
\begin{tabular}{ | p{5.3cm} | l | l | p{5.7cm} |}
\hline
Systemic Performance Measure & Symbol & Mathematical Expression & Assumptions and Conditions \\ \hline \hline
Sum and normalized sum of functions & ${{\rho}}(L)$ & $\displaystyle \sum_{i=2}^n \varphi(\lambda_i)$ and $\displaystyle \bigg ( \sum_{i=2}^n \phi(\lambda_i) \bigg)^{\frac{1}{\kappa}}$ & $\varphi$ is a {differentiable}, decreasing, and convex; $\phi$ is a differentiable, decreasing, convex, and homogenous of order $-\kappa$ \\ \hline
Spectral zeta function & ${\zeta}_{q}(L)$ & $\displaystyle \bigg( \sum_{i=2}^n \lambda_i^{-q} \bigg)^{1/q}$ & $q \geq 1$ \\
\hline
Gamma entropy & $I_{\gamma}(L)$ & $\displaystyle -\gamma^2 \sum_{i=2}^n \Big( \big(\lambda_i^2-\gamma^{-2}\big)^{\frac{1}{2}}-\lambda_i \Big)$ & $\gamma \geq \lambda_2^{-1}$
\\
\hline
Expected transient output covariance & $\tau_t(L)$ & $\displaystyle \frac{1}{2} \sum_{i=2}^n \lambda_i^{-1} (1- e^{-\lambda_i t})$ & $t \geq 0$
\\
\hline
System Hankel norm & $\eta(L)$ & $\displaystyle \frac{1}{2}\lambda_2^{-1}
$ & \\
\hline
Uncertainty volume of the output & $\upsilon(L)$ & $\displaystyle (1-n) \log 2 - \sum_{i=2}^n \log \lambda_i $ &
\\
\hline
Hardy-Schatten system norm or $\mathcal{H}_p$-norm & $\theta_p(L)$ & $\displaystyle \left\{ \frac{1}{2\pi} \int_{-\infty}^{\infty} \sum_{k=1}^n \sigma_k(G(j \omega))^p \hspace{0.05cm} d\omega \right\}^{\frac{1}{p}} $ & $\sigma_k$ is a singular value of transfer matrix {$G(j \omega)= M_n (j \omega I + L )^{-1}$} and $2 \leq p \leq \infty$
\\
\hline
\end{tabular}
\caption{ \small Some important examples of spectral systemic performance measures.} \label{table-1}
\end{center}
\end{table*}
\end{comment}
\section{Growing a Linear Consensus Network}
\label{sec:218}
The network synthesis problem of interest is to improve the systemic performance of network \eqref{first-order}-\eqref{first-order-G} by establishing $k \geq 1$ new feedback interconnections among the agents. Suppose that the underlying graph of the network ${\mathcal{G}}=({\mathcal{V}},\mathcal E, w)$ is defined according to \eqref{edge-set}-\eqref{edge-weight} and a set of candidate feedback interconnection links $\mathcal E_c=\big\{\varepsilon_1, \ldots , \varepsilon_p \big\} {\, \subseteq \,} {\mathcal{V}} \times {\mathcal{V}}$, which is endowed with a weight function $\varpi: {\mathcal{E}}_c \rightarrow {\mathbb{R}}_{++}$, is also given. The weight of a link $\varepsilon_i \in \mathcal E_c$ is represented by $\varpi(\varepsilon_i)$ and we assume that it is pre-specified and fixed. The network growing problem is to select exactly $k$ feedback interconnection links from $\mathcal E_c$ and append them to ${\mathcal{G}}$ such that the systemic performance measure of the resulting network is minimized over all possible choices.
Let us represent the set of all possible appended subgraphs by
\begin{equation*}
\hat{\mathfrak{G}}_k:=\Big\{ \hat{{\mathcal{G}}} = (\mathcal V, \hat{{\mathcal{E}}}, \hat{w}){\hspace{0.05cm}}\Big|{\hspace{0.05cm}}\hat{{\mathcal{E}}} \in \Pi_k({\mathcal{E}}_c),~\forall \varepsilon_i \in \hat{{\mathcal{E}}}:~ \hat{w}(\varepsilon_i)= \varpi(\varepsilon_i) \Big\},
\end{equation*}
where the set of all possible choices to select $k$ links is denoted by
\begin{equation*}
\Pi_k({\mathcal{E}}_c) := \big\{ \hat{{\mathcal{E}}} \subseteq \mathcal E_c~\big|~|\hat{{\mathcal{E}}}|=k\big\}.
\end{equation*}
Then, the network synthesis problem can be cast as the following combinatorial optimization problem
\begin{equation}
\underset{\hat{{\mathcal{G}}} \in \hat{\mathfrak{G}}_k}{\textrm{minimize}} \hspace{0.6cm} {{\rho}} (L + \hat{L}), \label{k-link}
\end{equation}
where $\hat{L}$ is the Laplacian matrix of an appended candidate subgraph $\hat{{\mathcal{G}}}$ and the resulting network with Laplacian matrix $L+\hat{L}$ is referred to as the augmented network.
The role of the candidate set ${\mathcal{E}}_c$ is to pre-specify authorized locations to establish new feedback interconnections in the network.
The network synthesis problem \eqref{k-link} is inherently combinatorial and it is known that a simpler version of this problem with ${{\rho}}(L)=\lambda^{-1}_2$ is in fact NP-hard \cite{Mosk}. There have been some prior attempts to tackle problem \eqref{k-link} for some specific choices of performance measures, such as total effective resistance and the inverse of algebraic connectivity, based on convex relaxation methods \cite{Kolla, Ghosh2006} and greedy methods \cite{SummersECC}. In Sections \ref{subsec:B} and \ref{sec:algorithms}, we propose approximation algorithms to compute sub-optimal solutions for \eqref{k-link} with respect to the broad class of systemic performance measures.
We propose an exact solution for \eqref{k-link} when $k=1$ and two tractable and efficient approximation methods when $k >1$ with computable performance bounds. Moreover, in Section \ref{sec:algorithms}, we demonstrate that a subclass of systemic performance measures has a supermodularity property. This provides approximation guarantees for our proposed approximation algorithm.
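To fix ideas, a minimal version of the greedy strategy is sketched below: at each of the $k$ steps, it appends the candidate link whose rank-one Laplacian update yields the largest decrease of the measure. This is an illustrative baseline with hypothetical interfaces, not the full algorithm developed in Section \ref{sec:algorithms}.
\begin{verbatim}
import numpy as np

def greedy_grow(L, candidates, weight, k, rho):
    # candidates: list of node pairs (i, j); weight: dict of weights
    # rho: a systemic performance measure acting on Laplacians
    L_aug, pool, n = L.copy(), list(candidates), L.shape[0]
    for _ in range(k):
        def augmented(e):
            b = np.zeros(n)
            b[e[0]], b[e[1]] = 1.0, -1.0
            return L_aug + weight[e] * np.outer(b, b)  # rank-one update
        best = min(pool, key=lambda e: rho(augmented(e)))
        L_aug, pool = augmented(best), [e for e in pool if e != best]
    return L_aug
\end{verbatim}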
\section{Fundamental Limits on the Best Achievable Performance Bounds}
\label{sec:672}
In the following, we present theoretical bounds for the best achievable values for the performance measure in \eqref{k-link}. Let us denote the optimal cost value of the optimization problem \eqref{k-link} by $\mathbf{r}_k^*(\varpi)$.
For a given systemic performance measure ${{\rho}}: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}$, we recall that according to Theorem \ref{thm:schur-convex} there exists a spectral function $\Phi$ such that
\[ {{\rho}}(L)~=~ \Phi \big(\lambda_2, \ldots, \lambda_n\big). \]
\begin{theorem}\label{w-thm}
Suppose that a consensus network \eqref{first-order}-\eqref{first-order-G} with an ordered set of Laplacian eigenvalues $\lambda_2 \leq \ldots \leq \lambda_n$, a set of candidate links ${\mathcal{E}}_c$ endowed with a weight function $\varpi: {\mathcal{E}}_c \rightarrow {\mathbb{R}}_{++}$, and design parameter $1 \leq k \leq n-1$ are given. Then, the following inequality
\begin{equation}
\mathbf{r}_k^*(\varpi) ~>~ \Phi \big (\lambda_{k+2}, \ldots, \lambda_n,\underbrace{ \infty, \ldots, \infty}_{\text{$k$ times}}\big )
\label{fund-limit-1}
\end{equation}
holds for all weight functions $\varpi$. For $k \geq n$, all lower bounds are equal to $\Phi \big (\infty, \ldots, \infty\big )$. Moreover, if the systemic performance measure has the following decomposable form
\begin{equation*}
{{\rho}} \left(L\right)~=~\sum_{i=2}^n \varphi (\lambda_i),
\label{meas_0}
\end{equation*}
where $\varphi: {\mathbb{R}} \rightarrow {\mathbb{R}}_{+}$ is a decreasing convex function and $\lim_{\lambda \rightarrow \infty} \varphi(\lambda) = 0$, then the best achievable performance measure is characterized by
\begin{equation}
\mathbf{r}_k^*(\varpi)~ > ~\sum_{i=k+2}^{n}\varphi(\lambda_i). \label{lower-bound-rho_0}
\end{equation}
\end{theorem}
\begin{proof}
For a given weight function $\varpi: {\mathcal{E}}_c \rightarrow {\mathbb{R}}_{++}$, we show that inequality (\ref{fund-limit-1}) holds for every $\hat{{\mathcal{E}}} \in \Pi_k({\mathcal{E}}_c)$. Assume that $\hat L$ is the Laplacian of the graph formed by the $k$ added links. We note that ${\Rank}( \hat L)=k' \leq k$, and therefore ${\Dim}({\Ker}\hat L)=n-k' \geq n-k$. Hence, we can define the nonempty set $M_j$ for $2\leq j \leq n$, as follows
\begin{equation*}
M_j \,=\, {\Span}\{u_1,\ldots,u_{j+k'}\} \, \cap \, \Span\{v_j,\ldots,v_{n}\} \, \cap \, \Ker \hat L,
\end{equation*}
where $u_i$'s and $v_i$'s are orthonormal eigenvectors of $L$ and $L+\hat L$, respectively. We now choose a unit vector $v \in M_j$. It then follows that:
\begin{eqnarray}
\lambda_j (L+ \hat L) &\leq& v^{\text T}( L+ \hat L)v~=~v^{\text T}Lv \nonumber \\
&\leq& \lambda_{j+k'}( L)~\leq~ \lambda_{j+k}( L).
\label{eq-h}
\end{eqnarray}
Therefore, according to \eqref{eq-h} and the monotonicity property of the systemic measure ${{\rho}}$, we get
\begin{eqnarray}
{{\rho}}(L+\hat L) ~>~ \Phi \big (\lambda_{k+2}, \cdots, \lambda_n,\underbrace{ \infty, \cdots, \infty}_{\text{$k$ times}}\big )
\label{eq:369}
\end{eqnarray}
for all $\hat{{\mathcal{E}}} \in \Pi_k({\mathcal{E}}_c)$. Inequality (\ref{fund-limit-1}) now follows from (\ref{eq:369}) and this completes the proof. Note that inequality \eqref{lower-bound-rho_0} is a direct consequence of \eqref{fund-limit-1} and $\lim_{\lambda \rightarrow \infty} \varphi(\lambda) = 0$.
\end{proof}
\begin{theorem}
\label{w-prop}
Suppose that in optimization problem \eqref{k-link}, the set of candidate links form a complete graph, i.e., $|\mathcal E_c|=\frac{1}{2}n(n-1)$. Then, there exists a weight function $\varpi_0: {\mathcal{E}}_c \rightarrow {\mathbb{R}}_{++}$ and a choice of $k$ weighted links from ${\mathcal{E}}_c$ with weight function $\varpi: {\mathcal{E}}_c \rightarrow {\mathbb{R}}_{++}$ such that
\begin{equation}
\mathbf{r}_k^*(\varpi)~\leq~ \Phi \big (\lambda_{2}, \ldots, \lambda_{n-k}, \underbrace{ \infty, \ldots, \infty}_{\text{$k$ times}} \big )
\label{fund-limit-2}
\end{equation}
holds for all weight functions $\varpi$ that satisfy $\varpi(e) \geq \varpi_0(e)$ for all $e \in {\mathcal{E}}_c$. Moreover, if the systemic performance measure has the following decomposable form
\begin{equation*}
{{\rho}} \left(L\right)~=~\sum_{i=2}^n \varphi (\lambda_i),
\label{meas}
\end{equation*}
where $\varphi: {\mathbb{R}} \rightarrow {\mathbb{R}}_{+}$ is a decreasing convex function and $\lim_{\lambda \rightarrow \infty} \varphi(\lambda) = 0$, then the best achievable performance measure is characterized by
\begin{equation}
\mathbf{r}_k^*(\varpi)~ \leq ~\sum_{i=2}^{n-k}\varphi(\lambda_i). \label{lower-bound-rho}
\end{equation}
\end{theorem}
\begin{proof}
{We will show that there exists $\hat{{\mathcal{E}}} \in \Pi_k({\mathcal{E}}_c)$ for which (\ref{fund-limit-2}) is satisfied. Without loss of generality, we may assume that $k < n-1$; otherwise, by adding $n-1$ links that form a spanning tree and increasing their weights, the performance of the resulting network tends to $\Phi(\infty, \cdots, \infty)$ {(see Theorem \ref{tree-theorem})}. Let $\hat {\mathcal{E}} \subset {\mathcal{E}}_c$ be a set of $k$ links that do not form any cycle, with $\varpi_0(e)=\infty$ for all $e \in \hat {\mathcal{E}}$. Then, we know that
\begin{equation}
{\Lambda}(L + \hat L) ~\geq~ {\Lambda}(L)
\label{fund-limit-2-1}
\end{equation}
and the $k$ largest eigenvalues of $L + \hat L$ are equal to $\infty$. Using \eqref{fund-limit-2-1} and the monotonicity property of the systemic performance measure, we get
\begin{equation}
{{\rho}}(L+\hat L)~\leq~ \Phi \big (\lambda_{2}, \cdots, \lambda_{n-k}, \underbrace{ \infty, \cdots, \infty}_{\text{$k$ times}} \big )
\label{fund-limit-22}
\end{equation}
From $\mathbf{r}_k^*(\varpi) \leq {{\rho}}(L+\hat L)$ and using \eqref{fund-limit-22}, we obtain \eqref{fund-limit-2}. Note that inequality \eqref{lower-bound-rho} is a direct consequence of \eqref{fund-limit-2} and $\lim_{\lambda \rightarrow \infty} \varphi(\lambda) = 0$.
}
\end{proof}
Examples of systemic performance measures that satisfy the conditions of Theorem \ref{w-thm} include $\zeta_q^q(L)$ for $q \geq 1$, $I_\gamma(L),$ and $\tau_t(L)$.
\begin{theorem}
\label{tree-theorem}
Let us consider a linear consensus network \eqref{first-order}-\eqref{first-order-G} that is endowed with systemic performance measure $\rho: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}$. Then, the network performance can be arbitrarily improved\footnote{This implies that the value of the systemic performance measure can be made close enough to $\Phi(\infty, \cdots, \infty)$, the lower bound in inequality \eqref{fund-limit-1}.} by adding only $n-1$ links that form a spanning tree.
\end{theorem}
\begin{proof}
Let us denote the Laplacian matrix of the spanning tree by $L_{\mathcal T}$. In the following, we show that the performance of the resulting network can be arbitrarily improved by increasing the weights of the spanning tree links.
Based on the monotonicity property, we have
\begin{equation}
\rho(L+\kappa L_{\mathcal T}) \leq \rho ( \kappa L_{\mathcal T}), ~~~{\kappa >0}.
\label{eq:1512}
\end{equation}
Also, we know that $\Lambda(\kappa L_{\mathcal T}) = \kappa \Lambda(L_{\mathcal T})$. Therefore, using the fact that the spanning tree Laplacian has only one zero eigenvalue together with \eqref{spectral-rho}, we get
\[ \lim_{\kappa \rightarrow \infty } \rho ( \kappa L_{\mathcal T}) = \Phi (\infty, \cdots, \infty). \]
Using this limit and \eqref{eq:1512} we get the desired result.
\end{proof}
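As an illustrative aside, the following Python sketch gives a quick numerical check of Theorem \ref{tree-theorem} for the measure $\zeta_1$, for which $\Phi(\infty, \cdots, \infty)=0$. The ring coupling graph, the star spanning tree, and the helper names are our own arbitrary choices and are not part of the formal development.
\begin{verbatim}
import numpy as np

def laplacian(n, edges, w):
    # Weighted graph Laplacian from an edge list.
    L = np.zeros((n, n))
    for (i, j), wij in zip(edges, w):
        L[i, i] += wij; L[j, j] += wij
        L[i, j] -= wij; L[j, i] -= wij
    return L

def zeta1(L):
    # zeta_1(L): sum of reciprocals of the nonzero Laplacian eigenvalues.
    return np.sum(1.0 / np.linalg.eigvalsh(L)[1:])

n = 10
ring = [(i, (i + 1) % n) for i in range(n)]   # original coupling graph
star = [(0, i) for i in range(1, n)]          # a spanning tree (a star)
L = laplacian(n, ring, [1.0] * n)
for kappa in [1.0, 10.0, 100.0, 1000.0]:
    LT = laplacian(n, star, [kappa] * (n - 1))
    print(kappa, zeta1(L + LT))               # tends to 0 as kappa grows
\end{verbatim}
As expected from \eqref{eq:1512}, the reported values of $\zeta_1(L+\kappa L_{\mathcal T})$ decrease toward zero as $\kappa$ grows.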
It should be emphasized that by increasing the weights of all the edges, the network performance can be arbitrarily improved, {i.e.,}~ the value of the systemic performance measure can be made arbitrarily close to $\Phi(\infty, \cdots, \infty)$. Theorem \ref{tree-theorem} sheds more light on this fact by revealing the minimum number of required links and their graphical topology to achieve this goal.
The results of Theorems \ref{w-thm} and \ref{w-prop} can be effectively applied to select a suitable value for the design parameter $k$ in optimization problem \eqref{k-link}. Let us denote the value of the lower bound in \eqref{fund-limit-1} by $\varrho_k$. The performance of the original network is then $\varrho_0=\rho(L)$. The percentage of performance enhancement can be computed by the formula $\frac{\varrho_0 - \varrho_k}{\varrho_0} \times 100$ for all values of parameter $1 \leq k \leq n-1$. For a given desired performance level, we can look up these numbers and find the minimum number of links that need to be added to the network. This is explained in detail in Example \ref{ex:4} and Figure \ref{fig:882} in Section \ref{sec:simu}. In the next sections, we propose approximation algorithms to compute near-optimal solutions for the network synthesis problem \eqref{k-link}.
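The look-up procedure just described is straightforward to implement. The following Python sketch is an informal illustration: the path graph and the choice $\varphi(\lambda)=\lambda^{-1}$ (i.e., $\rho = \zeta_1$) are ours, and the function names are hypothetical.
\begin{verbatim}
import numpy as np

def enhancement_percentages(L, phi):
    # varrho_k = sum_{i=k+2}^{n} phi(lambda_i): the lower bound in
    # (fund-limit-1) for a decomposable measure with phi decreasing,
    # convex, and phi(inf) = 0.
    lam = np.linalg.eigvalsh(L)[1:]        # lambda_2 <= ... <= lambda_n
    n = len(lam) + 1
    rho0 = phi(lam).sum()                  # performance of the original network
    out = []
    for k in range(1, n):                  # bound after adding k links
        rho_k = phi(lam[k:]).sum()         # keep lambda_{k+2}, ..., lambda_n
        out.append(100.0 * (rho0 - rho_k) / rho0)
    return out

# Example with zeta_1, i.e., phi(x) = 1/x, on a path graph with 8 nodes.
n = 8
L = 2.0 * np.eye(n)
L[0, 0] = L[-1, -1] = 1.0
for i in range(n - 1):
    L[i, i + 1] = L[i + 1, i] = -1.0
print(enhancement_percentages(L, lambda x: 1.0 / x))
\end{verbatim}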
\section{A Linearization-Based Approximation Method} \label{sec:linearization}
\label{subsec:B}
Our first approach is based on a linear approximation of the systemic performance measure when the weights of the candidate links in ${\mathcal{E}}_c$ are small enough. In the next result, we calculate the Taylor expansion of a systemic performance measure using the notion of directional derivatives for spectral functions.
\begin{lemma}
\label{second-approx}
Suppose that a linear consensus network \eqref{first-order}-\eqref{first-order-G} endowed with a differentiable systemic performance measure $\rho$ is given. Let us consider the cost function in optimization problem \eqref{k-link}. If $\hat L$ is the Laplacian matrix of an appended subgraph $\hat{{\mathcal{G}}} = (\mathcal V, \hat{{\mathcal{E}}}, \varpi)$, then
\begin{equation*}
{{{\rho}}(L+\epsilon \hat L) ~=~ {{\rho}} (L) + \epsilon \tr \big( \triangledown {{\rho}}(L) \hat L \big) +\mathcal{O}(\epsilon^2)}
\end{equation*}
where the derivative of $\rho$ at $L$ is given by
\begin{equation}
\triangledown {{\rho}}(L) ~=~ W^{\text T} \left( {\text{diag}} \triangledown \phi \left ({\Lambda}(L)\right ) \right) W \label{derivative-rho}
\end{equation}
for any matrix $W$ that is defined by \eqref{eq:250}.
\end{lemma}
\begin{proof}
The expression \eqref{derivative-rho} can be calculated using the spectral form of a given systemic performance measure described by \eqref{spectral-fcn} and according to \cite[Corollary 5.2.7]{borwein}. Using the directional derivative of ${{\rho}}$ along matrix $\hat L$, the Taylor expansion of ${{\rho}}(L+\epsilon \hat L) $ is given by
\begin{equation}
{{\rho}}(L+ \epsilon \hat L) ~=~ {{\rho}}(L) + \epsilon \triangledown_{\hat L} {{\rho}} (L) +\mathcal{O}(\epsilon^2),
\label{1182}
\end{equation}
where $ \triangledown_{\hat L} {{\rho}} (L) $ is the directional derivative of ${{\rho}}$ at $L$ along matrix $\hat L$
\begin{equation}
\triangledown_{\hat L} {{\rho}} (L)~=~\big < \triangledown {{\rho}}(L) , \hat L \big> ~=~\tr \big( \triangledown {{\rho}}(L) \hat L \big),
\label{d-d}
\end{equation}
where $\left < .,. \right > $ denotes the inner product operator. Then, substituting \eqref{d-d} in \eqref{1182} yields the desired result.
\end{proof}
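As a numerical sanity check of Lemma \ref{second-approx}, the following Python sketch compares the exact change of $\zeta_1$ with its first-order prediction. Here we use $\triangledown \zeta_1(L) = -L^{\dag,2}$, which is consistent with Theorem \ref{col:h2-added} below; the cycle graph and the added link are arbitrary test data.
\begin{verbatim}
import numpy as np

def pinv_lap(L):
    # Moore-Penrose pseudo-inverse of a Laplacian:
    # L^+ = (L + J/n)^{-1} - J/n.
    n = L.shape[0]
    J = np.ones((n, n)) / n
    return np.linalg.inv(L + J) - J

def zeta1(L):
    return np.trace(pinv_lap(L))

def edge_lap(n, i, j, w=1.0):
    L = np.zeros((n, n))
    L[i, i] = L[j, j] = w
    L[i, j] = L[j, i] = -w
    return L

n = 6
L = sum(edge_lap(n, i, (i + 1) % n) for i in range(n))  # cycle graph
Lhat = edge_lap(n, 0, 3)                                # one candidate link
grad = -np.linalg.matrix_power(pinv_lap(L), 2)          # grad zeta_1(L)
for eps in [1e-1, 1e-2, 1e-3]:
    exact = zeta1(L + eps * Lhat) - zeta1(L)
    linear = eps * np.trace(grad @ Lhat)
    print(eps, exact, linear)               # agree up to O(eps^2)
\end{verbatim}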
According to the monotonicity property of systemic performance measures, the inequality \[ \tr \big( \triangledown {{\rho}}(L) \hat L\big) ~ \leq ~ 0 \]
holds for every Laplacian matrix $\hat L$.
This implies that when weights of the candidate links are small enough, one can approximate the optimization problem \eqref{k-link} by the following optimization problem
\begin{equation}
\underset{\hat{{\mathcal{E}}} \in \Pi_k({\mathcal{E}}_c)}{\textrm{minimize}} \hspace{0.6cm} \tr \big( \triangledown {{\rho}}(L) \hat L\big), \label{k-link-B}
\end{equation}
where $\hat{L}$ is the Laplacian matrix of an appended candidate subgraph $\hat{{\mathcal{G}}} = (\mathcal V, \hat{{\mathcal{E}}}, \varpi)$. Therefore, the problem boils down to selecting the $k$ smallest (i.e., most negative) elements of the following set
\[\Big\{ \varpi(e) \big(\triangledown {{\rho}}(L)_{ii}+\triangledown {{\rho}}(L)_{jj}-\triangledown {{\rho}}(L)_{ij}-\triangledown {{\rho}}(L)_{ji}\big)\big|e=\{i,j\} \in {\mathcal{E}}_c \Big\},\]
where $\varpi(e)$ is the weight of link $e$. Table \ref{table-linear} presents our linearization approach as an algorithm. In some special cases, one can obtain an explicit closed-form formula for the systemic performance measure of the resulting augmented network.
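A minimal Python sketch of this selection rule is given below. It assumes the gradient $\triangledown \rho(L)$ is available as a matrix-valued function; for concreteness we use $\triangledown \upsilon(L) = -(L+\frac{1}{n}J_n)^{-1}$, which appears in Subsection \ref{subsec1}, and the graph data in the usage example are illustrative.
\begin{verbatim}
import numpy as np

def linearized_topk(L, candidates, weights, k, grad_rho):
    # Score each candidate e = {i, j} by
    #   w(e) * (G_ii + G_jj - G_ij - G_ji),   G = grad_rho(L),
    # and return the k candidates with the most negative scores,
    # i.e., the largest first-order decrease of rho.
    G = grad_rho(L)
    scores = [w * (G[i, i] + G[j, j] - G[i, j] - G[j, i])
              for (i, j), w in zip(candidates, weights)]
    order = np.argsort(scores)      # ascending: most negative first
    return [candidates[t] for t in order[:k]]

def grad_upsilon(L):
    # Gradient of the uncertainty volume: -(L + J/n)^{-1}.
    n = L.shape[0]
    return -np.linalg.inv(L + np.ones((n, n)) / n)

# Usage on a path graph with 5 nodes and three candidate links:
n = 5
L = np.diag([1.0, 2.0, 2.0, 2.0, 1.0])
for i in range(n - 1):
    L[i, i + 1] = L[i + 1, i] = -1.0
cands = [(0, 2), (0, 4), (1, 3)]
print(linearized_topk(L, cands, [1.0] * 3, k=2, grad_rho=grad_upsilon))
\end{verbatim}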
\begin{table}[t]
\centering
\caption{\small Linearization-based algorithm }
{\small
\begin{tabular}{ |l| }
\hline
\quad{\bf Algorithm:} Adding $k$ links using linearization \\ \hline \hline
{\it Input:} $L$, $\mathcal E_c$, $\varpi$, and $k$ \\
1: \quad {\it set} $\hat L=\mathbf 0$ \\
2: \quad {\it for} $i = 1$ {\it to} $k$\\
3: \quad\quad \indent \indent {\it find} $e=\{i,j\} \in \mathcal E_c$ that returns the minimum value for \\
4: \quad\quad \indent \indent ~~~~$\varpi(e) \big ( \triangledown {{\rho}}(L)_{ii}+\triangledown {{\rho}}(L)_{jj}-\triangledown {{\rho}}(L)_{ij}-\triangledown {{\rho}}(L)_{ji} \big)$
\\5: \quad\quad \indent \indent {\it set} the solution $e^{\star}$
\\6: \quad\quad \indent \indent {\it update}
\\7: \quad\quad\quad \indent \indent
$\hat L= \hat L + \varpi (e^\star)L_{e^{\star}}$, and
\\8: \quad\quad\quad \indent \indent
$\mathcal E_c = \mathcal E_c \, \backslash \, \{e^\star \}$\\ 9: \quad {\it end for}
\\ \hline
\end{tabular} \label{table-linear}}
\end{table}
\begin{theorem}
\label{col:h2-added}
Suppose that linear consensus network \eqref{first-order}-\eqref{first-order-G} with Laplacian matrix $L$ is endowed with systemic performance measure \eqref{zeta-measure} for $q=1$. Let us consider optimization problem \eqref{k-link}, where $\hat{L}$ is the Laplacian matrix of a candidate subgraph $\hat{{\mathcal{G}}} = (\mathcal V, \hat{{\mathcal{E}}}, \varpi)$. Then,
\begin{equation*}
\zeta_1 (L+\epsilon \hat L) ~=~ \zeta_1 (L) - \epsilon \sum_{e \in \hat{\mathcal E}} \varpi(e)r_e(L^2)+\mathcal{O}(\epsilon^2),
\end{equation*}
where $r_e(L^2)$ is the effective resistance between the two ends of $e$ in a graph with node set $\mathcal V$ and Laplacian matrix $L^2$.
\end{theorem}
\begin{proof}
We use the following identity
\begin{equation}
\left({A} + \epsilon{X}\right)^{-1}
= {A}^{-1}
- \epsilon {A}^{-1} {X} {A}^{-1} + \mathcal{O}(\epsilon^2),
\label{w1}
\end{equation}
for given matrices $A, X \in {\mathbb{R}}^{n \times n}$.
Based on \cite[Theorem 4]{Siami13cdc}, the performance measure $\zeta_1(.)$ can be calculated by
\begin{equation}
\zeta_1 (L+\epsilon \hat{L}) ~=~ \tr((L+\epsilon \hat{L})^{\dag}).
\label{w2}
\end{equation}
Moreover, according to the definition of the Moore-Penrose generalized matrix inverse, we have
\begin{equation*}
\left (L+\epsilon \hat L \right)^{\dag}~=~\left (\bar L+\epsilon \hat L\right)^{-1} - ~\frac{1}{n}J_n,
\end{equation*}
where $\bar L=L+\frac{1}{n}J_n$.
Using \eqref{w1} and \eqref{w2}, it follows that
\begin{eqnarray}
\left (L+\epsilon \hat L \right)^{\dag}~=~\bar L^{-1} - \frac{1}{n}J_n- \epsilon {\bar L}^{-1} {\hat L} {\bar L}^{-1} + \mathcal{O}(\epsilon^2).
\label{w3}
\end{eqnarray}
Moreover, using the cyclic property of the trace together with $\hat L J_n = 0$ and $\bar L^{-1} = L^{\dag} + \frac{1}{n}J_n$, we have
\begin{equation}
\tr ({\bar L}^{-1} {\hat L} {\bar L}^{-1} )~=~\tr ({\hat L} {\bar L}^{-2} )~=~\sum_{e \in \hat{\mathcal E}} \varpi(e)r_e(L^2).
\label{eq:624}
\end{equation}
Using \eqref{w2}, \eqref{w3} and \eqref{eq:624}, we get the desired result.
\end{proof}
According to Theorem \ref{col:h2-added}, when the weights of the candidate links are small, in order to solve problem \eqref{k-link}, it is enough to find the $k$-largest elements of the following set
\[ \big\{ \varpi(e)r_e(L^2)~\big|~ e \in \mathcal E_c\big\}.\]
Since the weights of the candidate links are given, we only need to calculate the effective resistance $r_e(L^2)$ for all $e \in \mathcal E_c$.
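In other words, for $\zeta_1$ the linearization step only requires one pseudo-inverse computation. The following Python sketch (with illustrative helper names; not a polished implementation) computes these scores from $L^{\dag}$:
\begin{verbatim}
import numpy as np

def zeta1_link_scores(L, candidates, weights):
    # Score(e) = w(e) * r_e(L^2); picking the k largest scores
    # (approximately) minimizes zeta_1 when candidate weights are small.
    n = L.shape[0]
    J = np.ones((n, n)) / n
    Ldag = np.linalg.inv(L + J) - J     # Moore-Penrose pseudo-inverse
    Ldag2 = Ldag @ Ldag
    def r(M, i, j):                     # effective resistance w.r.t. M
        return M[i, i] + M[j, j] - 2.0 * M[i, j]
    return {e: w * r(Ldag2, *e) for e, w in zip(candidates, weights)}
\end{verbatim}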
As we discussed earlier, the design problem \eqref{k-link} is generally NP-hard. Our proposed approximation algorithm in this section works in polynomial time. In Example \ref{comparing}, we discuss and compare the optimality gap and time complexity of this method with those of other methods. The computational complexity of the linearization-based algorithm in Table \ref{table-linear} is $\mathcal O(n^3)$ for a given differentiable systemic performance measure from Table \ref{matrix-operator}. This involves computation of $\triangledown \rho$ for the original graph, which requires $\mathcal O(n^3)$ operations. The rest of the algorithm can be done in $\mathcal O(p k)$ operations for small $k$ and $\mathcal O(p \log p)$ operations for large $k$.
\section{Greedy Approximation Algorithms}\label{sec:algorithms}
In this section, we propose an optimal algorithm to solve the network growing problem \eqref{k-link} when $k=1$. It is shown that for some commonly used systemic performance measures, one can obtain a closed-form solution for $k=1$. We exploit our results and propose a simple greedy approximation algorithm for \eqref{k-link} with $k > 1$ by adding candidate links one at a time. For some specific subclasses of systemic performance measures, we prove that our proposed greedy approximation algorithm enjoys guaranteed performance bounds with respect to the optimal solution of the combinatorial problem \eqref{k-link}. Finally, we discuss the time complexity of our proposed algorithms.
\begin{table}[t]
\centering
\caption{\small Simple greedy algorithm}
{\small
\begin{tabular}{|l| }
\hline
\quad{{\bf Algorithm: } Adding links Consecutively} \\ \hline \hline
{\it Input:} $L$, $\mathcal E_c$, $\varpi$, and $k$ \\
1: {\it set} $\tilde L = L $\\
2: {\it for} $i = 1$ {\it to} $k$\\
3: \quad \indent \indent {\it find} link $e \in \mathcal E_c$ with maximum ${{\rho}}(\tilde L)-{{\rho}}(\tilde L+\varpi(e)L_e)$ \\
4: \quad \indent \indent {\it set} the solution $e^{\star}$ \\
5: \quad \indent \indent {\it update} \\
6: \quad\quad \indent \indent
$\tilde L= \tilde L + \varpi (e^\star)L_{e^{\star}}$, and\\
7: \quad\quad \indent \indent $\mathcal E_c = \mathcal E_c \, \backslash \, \{e^\star \}$\\
8: {\it end for}
\\ \hline
\end{tabular} }
\label{greedy-table}
\vspace{-0.4cm}
\end{table}
\subsection{Simple Greedy by Sequentially Adding Links}
\label{sec:291}
The problem of adding only one link can be formulated as follows
\begin{equation}
\underset{ e \in {\mathcal{E}}_c}{\textrm{minimize}} \hspace{0.6cm} {{\rho}} (L + L_e), \label{1-link}
\end{equation}
where $L_e$ is the Laplacian matrix of a candidate subgraph $\hat{{\mathcal{G}}}_e = (\mathcal V, \{e\}, \varpi)$. Let us denote the optimal cost of \eqref{1-link} by $\mathrm{r}_1^*(\varpi)$. In order to formulate the optimal cost value of \eqref{1-link}, we need to define the notion of a companion operator for a given systemic performance measure.
\begin{lemma}\label{psi-thm}
For a given systemic performance measure ${{\rho}}: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}$, there exists a companion operator ${{\psi}}: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}$ such that
\begin{equation}
{{\rho}}(L) = {{\psi}}(L^{\dag}),
\label{eq:353}
\end{equation}
for all $L \in \mathfrak L_n$. Moreover, the companion operator of ${{\rho}}$ is characterized by
\begin{equation}
\psi(X) = \Phi(\mu_n^{-1}, \ldots, \mu_2^{-1}),
\label{eq:453}
\end{equation}
for all $X \in \mathfrak L_n$ with eigenvalues $\mu_2 \leq \ldots \leq \mu_n$, where operator $\Phi: {\mathbb{R}}^{n-1} \rightarrow {\mathbb{R}}$ is defined by \eqref{spectral-rho}.
\end{lemma}
\begin{proof}
According to Theorem \ref{thm:schur-convex}, there exists a Schur-convex spectral function $\Phi: {\mathbb{R}}^{n-1} \rightarrow {\mathbb{R}}$ such that
\[ {{\rho}}(L)~=~ \Phi(\lambda_2, \ldots, \lambda_n). \]
In addition, we know that for the Moore-Penrose pseudo-inverse of matrix $L \in \mathfrak L_n$, we have the following
\[ \lambda_i(L^\dag)~=~\lambda_{n-i+1}^{-1}(L) ~=~ \lambda_{n-i+1}^{-1},\]
for $i=2, \ldots, n$, and $\lambda_1(L)=\lambda_1(L^{\dag})=0$. Consequently, we can rewrite $\rho(L)$ using its companion operator as
\[ \rho(L)=\Phi \left (\lambda_n^{-1}(L^{\dag}), \ldots, \lambda_2^{-1}(L^{\dag}) \right).\]
Therefore, by defining ${{\psi}}: \mathfrak L_n \rightarrow {\mathbb{R}}$ as \eqref{eq:453}, we get identity \eqref{eq:353}.
\end{proof}
Table \ref{table-11} shows some important examples of systemic performance measures and their corresponding companion operators.
\begin{theorem}\label{lem-ex}
Suppose that a linear consensus network \eqref{first-order}-\eqref{first-order-G} endowed with a systemic performance measure ${{\rho}}: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}$ is given. The optimal cost value of the optimization problem \eqref{1-link} is given by
\begin{equation}
\mathrm{r}_1^*(\varpi)~=~ \min_{e \in {\mathcal{E}}_c} ~{{\psi}} \left(L ^{\dag}- \frac{1}{\varpi^{-1}(e)+r_e(L)}U_e\right),
\label{eq:472}
\end{equation}
where ${{\psi}}$ is the corresponding companion operator of $\rho$ and $U_e$ for a link $e=\{i,j\}$ is a rank-one matrix defined by
\begin{equation}
U_e~=~(L^{\dag}_i-L^{\dag}_j) (L^{\dag}_i-L^{\dag}_j)^{\text T}, \label{U-e}
\end{equation}
in which $L^{\dag}_i$ is the $i^{\text{th}}$ column of matrix $L^{\dag}$.
\end{theorem}
\begin{proof}
We use the following matrix identity
\[(L+L_{e})^{\dag}~=~\left (\bar L+E_e\varpi(e)E^{\text T}_e \right )^{-1}-\frac{1}{n}J_n,\]
where $E_e$ is the incidence matrix of graph $\hat{{\mathcal{G}}}_e$ and $\bar L=L+\frac{1}{n}J_n$. By utilizing the Woodbury matrix identity, we get
\begin{equation}
(L+L_{e})^{\dag} ~=~ L^{\dag}- \bar L^{-1}E_e\left(\varpi^{-1}(e)+E^{\text T}_e\bar L^{-1}E_e\right)^{-1}E^{\text T}_e\bar L^{-1}.
\label{eq:1325}
\end{equation}
From the definition of the effective resistance between nodes $i$ and $j$, it follows that
\[r_e(L)~=~E^{\text T}_e\bar L^{-1}E_e~=~ l^{\dag}_{ii}+l^{\dag}_{jj}-l^{\dag}_{ij}-l^{\dag}_{ji}.\]
On the other hand, we have
\begin{equation}
\bar L^{-1}E_e~=~\left(L^{\dag}+\frac{1}{n}J_n\right)E_e~=~L^{\dag}E_e~=~L^{\dag}_i-L^{\dag}_j.
\label{eq:1333}
\end{equation}
Therefore, using \eqref{eq:1325} and \eqref{eq:1333}, we have
\begin{eqnarray}
&&\hspace{-.6 cm}(L+L_{e})^{\dag}=L^{\dag}- \frac{1}{\varpi^{-1}(e)+r_e(L)} (L^{\dag}_i-L^{\dag}_j)(L^{\dag}_i-L^{\dag}_j)^{\text T} \nonumber\\
&&~~~~~~~= L ^{\dag}- \frac{1}{\varpi^{-1}(e)+r_e(L)}U_e.
\label{eq:522}
\end{eqnarray}
From \eqref{eq:353} and \eqref{eq:522}, we can conclude the desired equation \eqref{eq:472}.
\end{proof}
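The rank-one update \eqref{eq:522} is easy to implement. The following Python sketch verifies it against a direct computation of the pseudo-inverse; the cycle graph and the chosen link are arbitrary test data.
\begin{verbatim}
import numpy as np

def pinv_after_adding_link(Ldag, i, j, w):
    # (L + L_e)^+ = L^+ - u u^T / (1/w + r_e(L)),
    # with u = L^+_i - L^+_j and r_e(L) the effective resistance of e.
    u = Ldag[:, i] - Ldag[:, j]
    r = Ldag[i, i] + Ldag[j, j] - 2.0 * Ldag[i, j]
    return Ldag - np.outer(u, u) / (1.0 / w + r)

# Sanity check on a cycle graph with 5 nodes:
n = 5
J = np.ones((n, n)) / n
L = 2.0 * np.eye(n)
for t in range(n):
    L[t, (t + 1) % n] = L[(t + 1) % n, t] = -1.0
Ldag = np.linalg.inv(L + J) - J
Le = np.zeros((n, n))
Le[0, 0] = Le[2, 2] = 0.5
Le[0, 2] = Le[2, 0] = -0.5                  # link {0, 2} with weight 0.5
print(np.allclose(pinv_after_adding_link(Ldag, 0, 2, 0.5),
                  np.linalg.inv(L + Le + J) - J))   # prints True
\end{verbatim}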
In some special cases, the optimal solution \eqref{eq:472} can be computed very efficiently using a simple separable update rule.
\begin{theorem}\label{coro:1241}
Suppose that linear consensus network \eqref{first-order}-\eqref{first-order-G} with Laplacian matrix $L$ is given. Then, for every link $e \in \mathcal{E}_c$ we have
\begin{eqnarray*}
& &\hspace{-.8cm} \Scale[1.1]{\zeta_1(L+L_{e}) = \zeta_{1} (L) - \frac{r_{e}(L^2)}{\varpi^{-1}({e})+r_{e}(L)},} \\
& &{ \hspace{-.8cm} \Scale[1.1]{\zeta_{2}^2 (L+L_{e}) = \zeta_{2}^2(L) + \left[\frac{r_e(L^2)}{\varpi^{-1}(e)+r_e(L)}\right]^2 - \frac{ 2 r_e(L^3)}{\varpi^{-1}(e)+r_e(L)}}},\\
& & \hspace{-.8cm} \Scale[1]{\upsilon(L+L_{e}) ~=~ \upsilon (L) ~-~ \log \big(1+ r_{e}(L)\varpi(e)\big)},
\end{eqnarray*}
where $r_{e}(L^m)$ is the effective resistance between the two ends of link $e$ in a graph with node set $\mathcal{V}$ and Laplacian matrix $L^m$ for $m \in \{1,2,3\}$.
\end{theorem}
\begin{proof}
Based on Theorem \ref{lem-ex}, it is straightforward to get the desired result for $\zeta_1(.)$ and $\zeta_2(.)$. For the last part, using the definition of $\upsilon(.)$ and \eqref{measure:uncertainty}, we get
\begin{eqnarray}
\upsilon(L+L_{e}) &=& \log \det \left(\frac{1}{2}(L+L_e)^\dag+\frac{1}{n}J_n\right)\nonumber \\
&=& \log \det \left( 2(L+L_e)+\frac{1}{n}J_n\right)^{-1}.
\label{eq:2303}
\end{eqnarray}
According to the matrix determinant lemma we have
\begin{equation}
\det( A+uv^\mathrm{T}) ~=~ (1 + {v}^\mathrm{T}{A}^{-1}{u})\,\det({A}).
\label{eq:1520}
\end{equation}
Now using \eqref{eq:2303} and \eqref{eq:1520}, it follows that
\begin{eqnarray*}
\det \left( 2(L+L_e)+\frac{1}{n}J_n\right)^{-1} & = & \\
& & \hspace{-2.5cm} \det \left (\left (2L+\frac{1}{n}J_n\right) ^{-1}- \frac{1}{2\varpi^{-1}(e)+2r_e(L)}U_e \right)\\
&&\hspace{-2.5cm} = \left(1 - \frac{r_e(L)}{\varpi^{-1}(e)+r_e(L)}\right)\det \left (2L+\frac{1}{n}J_n\right) ^{-1},
\end{eqnarray*}
Taking the logarithm of both sides, we get the desired result.
\end{proof}
In these special cases, the computational complexity of calculating the optimal solution for network design problem \eqref{1-link} is relatively low. For $q=1$, the optimal cost value is equal to $\zeta_1(L+L_{e^*})$, where
\begin{equation}
e^* = \arg \max_{e \in {\mathcal{E}}_c} ~ \frac{r_e(L^2)}{\varpi^{-1}(e)+r_e(L)}, \label{1-link-B}
\end{equation}
and for $q=2$, the optimal cost value is equal to $\zeta_{2} (L+L_{e^*})$, where
\begin{equation*}
e^* = \arg \min_{e \in {\mathcal{E}}_c} ~ \left (\left[\frac{r_e(L^2)}{\varpi^{-1}(e)+r_e(L)}\right]^2 - \frac{ 2 r_e(L^3)}{\varpi^{-1}(e)+r_e(L)} \right). \end{equation*}
Moreover, for \eqref{measure:uncertainty}, the optimal cost value is equal to $\upsilon(L+L_{e^*})$, where
\begin{equation*}
e^* = \arg \max_{e \in {\mathcal{E}}_c} ~ \log \big(1+ r_{e}(L)\varpi(e)\big).
\end{equation*}
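These separable rules translate directly into a few lines of code. The Python sketch below is illustrative: the function name and the restriction to $\zeta_1$ and $\upsilon$ are our own choices.
\begin{verbatim}
import numpy as np

def best_link(L, candidates, weights, measure="zeta1"):
    # Pick the single link maximizing the closed-form improvement:
    #   zeta1  : maximize r_e(L^2) / (1/w(e) + r_e(L))
    #   upsilon: maximize log(1 + r_e(L) * w(e))
    n = L.shape[0]
    J = np.ones((n, n)) / n
    Ldag = np.linalg.inv(L + J) - J
    Ldag2 = Ldag @ Ldag
    def r(M, i, j):
        return M[i, i] + M[j, j] - 2.0 * M[i, j]
    best, best_gain = None, -np.inf
    for (i, j), w in zip(candidates, weights):
        if measure == "zeta1":
            gain = r(Ldag2, i, j) / (1.0 / w + r(Ldag, i, j))
        else:  # "upsilon"
            gain = np.log(1.0 + r(Ldag, i, j) * w)
        if gain > best_gain:
            best, best_gain = (i, j), gain
    return best, best_gain
\end{verbatim}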
The location of the optimal link is sensitive to its weight. For example when optimizing with respect to $\zeta_1$,
maximizers of $r_e(L)$, $r_e(L^2)$ and $r_e(L^2)/r_e(L)$ can be three different links. In Example \ref{ex:2} and Fig. \ref{fig:1618} of Section \ref{sec:simu}, we illustrate this point by means of a simulation. Furthermore, one can obtain the following useful fundamental limits on the best achievable cost values.
\begin{theorem}\label{funda-limit}
Let us denote the value of performance improvement by adding an edge $e$ with an arbitrary positive weight to linear consensus network \eqref{first-order}-\eqref{first-order-G} by
\[ \Delta \rho(L) = \rho(L) - \rho(L+L_{e}).\] Then, the maximum achievable performance improvement is
\begin{equation}
\Delta \rho(L) ~\leq~\psi(L ^{\dag}) - \psi \Big(L ^{\dag}- r_e(L)^{-1}U_e\Big), \label{max-rho}
\end{equation}
where $U_e$ is given by \eqref{U-e} and the upper bound is approached as $\varpi(e)$ tends to infinity. Moreover, we have the following explicit fundamental limits
\begin{eqnarray}
\Delta \zeta_1(L) & \leq & \frac{r_e(L^2)}{r_e(L)}, \\
\Delta \zeta_2^2(L) & \leq & 2\frac{r_e(L^3)}{r_e(L)} - \left[\frac{r_e(L^2)}{r_e(L)}\right]^2.
\end{eqnarray}
\end{theorem}
\begin{table*}[t]
{ \small
\begin{center}
\begin{tabular}{ | p{5.29cm} | l | p{5.0cm} | p{5.7cm} |}
\hline
Systemic Performance Measure & Symbol & Spectral Representation & The Corresponding Companion Operator \\ \hline \hline
Spectral zeta function & ${\zeta}_{q}(L)$ & $\displaystyle \Big( \sum_{i=2}^n \lambda_i^{-q} \Big)^{1/q}$ & $\displaystyle \Big( \sum_{i=2}^n \mu_i^{q} \Big)^{1/q}$ for $q \geq 1$ \\
\hline
Gamma entropy & $I_{\gamma}(L)$ & $\displaystyle \gamma^2 \sum_{i=2}^n \Big(\lambda_i- \big(\lambda_i^2-\gamma^{-2}\big)^{\frac{1}{2}} \Big)$ & $\displaystyle \gamma^2 \sum_{i=2}^n \Big(\mu_i^{-1}- \big(\mu_i^{-2}-\gamma^{-2}\big)^{\frac{1}{2}} \Big)$
\\
\hline
Expected transient output covariance & $\tau_t(L)$ & $\displaystyle \frac{1}{2} \sum_{i=2}^n \lambda_i^{-1} (1- e^{-\lambda_i t})$ & $\displaystyle \frac{1}{2} \sum_{i=2}^n \mu_i (1- e^{-\frac{t}{\mu_i}})$
\\
\hline
System Hankel norm & $\eta(L)$ & $\displaystyle \frac{1}{2}\lambda_2^{-1}
$ & $\frac{1}{2} \mu_n$ \\
\hline
Uncertainty volume of the output & $\upsilon(L)$ & $\displaystyle (1-n) \log 2 - \sum_{i=2}^n \log \lambda_i $ & $\displaystyle (1-n) \log 2 + \sum_{i=2}^n \log \mu_i $
\\
\hline
Hardy-Schatten system norm or $\mathcal{H}_p$-norm & $\theta_p(L)$ & $\displaystyle \left\{ \frac{1}{2\pi} \int_{-\infty}^{\infty} \sum_{k=1}^n \sigma_k(G(j \omega))^p \hspace{0.05cm} d\omega \right\}^{1/p}$ $ = \alpha_0 \left( \tr \left( L^\dag \right)^{p-1}\right)^{\frac{1}{p}}$ & $\alpha_0 \displaystyle \bigg( \sum_{i=2}^n \mu_i^{p-1} \bigg)^{1/p}$ for $2 \leq p \leq \infty$, where $\alpha_0^{-1}=\sqrt[p]{-\beta(\frac{p}{2},-\frac{1}{2})}$.
\\
\hline
\end{tabular}
\caption{ \small Some important examples of spectral systemic performance measures and their corresponding companion operators.} \label{table-11}
\end{center}
\vspace{-0.5cm}}
\end{table*}
\begin{proof}
We utilize the monotonicity property of the companion operator of a systemic performance measure, i.e., if $L_1^{\dag} \preceq L_2^{\dag}$, then
\[ \psi(L_1^{\dag}) ~\leq~ \psi(L_2^{\dag}), \]
and the inequality
\[ L ^{\dag}- r_e(L)^{-1}U_e ~\preceq~ L ^{\dag}- \frac{1}{\varpi^{-1}(e)+r_e(L)}U_e \]
to show that
\[\psi \Big(L ^{\dag}- r_e(L)^{-1}U_e\Big) ~\leq~ \psi\Big(L ^{\dag}- \frac{1}{\varpi^{-1}(e)+r_e(L)}U_e\Big). \]
From this inequality, we can directly conclude \eqref{max-rho}. For systemic performance measure $\zeta_1(.)$, inequality \eqref{max-rho} reduces to
\begin{eqnarray}
\Delta \zeta_1(L) &\leq& \tr(L ^{\dag}) - \tr \Big(L ^{\dag}- r_e(L)^{-1}U_e\Big), \nonumber \\
&=&\tr \left ( r_e(L)^{-1}U_e\right) ~=~ r_e(L)^{-1} \tr \left (U_e\right).
\label{eq:1869}
\end{eqnarray}
Moreover, based on the definition of $U_e$, we have
\[ \tr (U_e) ~=~ \tr (L^{\dag}E_e E_e^{\rm T} L^{\dag})~=~E_e^{\rm T} L^{\dag, 2} E_e~=~r_e(L^2). \]
Using this and \eqref{eq:1869}, it follows that
\[\Delta \zeta_1(L) ~\leq~\frac{r_e(L^2)}{r_e(L)}.\]
Similarly, for $\zeta_2^2(.)$, using \eqref{max-rho} and the definition of $\zeta_2(.)$, we obtain
\begin{eqnarray}
\Delta \zeta_2^2(L) &\leq& \tr \left (L ^{\dag,2}\right ) - \tr \left(\Big(L ^{\dag}- r_e(L)^{-1}U_e\Big)^2\right) \nonumber \\
&=& 2 \tr \left( r_e(L)^{-1}U_e L^{\dag}\right) - \frac{1}{r_e^2(L)} \tr \left (U_e^2\right) \nonumber \\
&=& 2\frac{ \tr \left( U_e L^{\dag}\right)}{r_e(L)} - \left [ \frac{r_e(L^2)}{r_e(L)}\right ]^2 \nonumber \\
&=& 2\frac{ \tr \left( L^{\dag}E_e E_e^{\rm T} L^{\dag, 2} \right) }{r_e(L)} - \left [ \frac{r_e(L^2)}{r_e(L)}\right ]^2 \nonumber \\
&=& 2\frac{ r_e(L^3) }{r_e(L)} - \left [ \frac{r_e(L^2)}{r_e(L)}\right ]^2.
\end{eqnarray}
This completes the proof.
\end{proof}
The result of Theorem \ref{funda-limit} asserts that, in general, performance improvement may not be arbitrarily large by adding only one new link. In some cases, however, performance improvement can be arbitrarily good. For instance, for the uncertainty volume of the output, we have
\begin{equation}
\lim_{\varpi(e) \rightarrow +\infty} ~\Delta \upsilon(L) = +\infty.
\end{equation}
The result of Theorem \ref{lem-ex} can be utilized to devise a greedy approximation method by decomposing \eqref{k-link} into $k$ successive tractable problems in the form of \eqref{1-link}. In each iteration, the Laplacian matrix of the network is updated and then optimization problem \eqref{1-link} finds the next best candidate link as well as its location. Since the value of the systemic performance measure can be calculated explicitly in each step using Theorem \ref{lem-ex}, one can track the performance of the resulting augmented network throughout the iterations and use it to assess the effectiveness of this method. Table \ref{greedy-table} summarizes all steps of our proposed greedy algorithm, where the output of the algorithm is the Laplacian matrix of the resulting augmented network. In Section \ref{sec:simu}, we present several supporting numerical examples.
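For completeness, a compact Python sketch of the greedy loop of Table \ref{greedy-table} is given below. It evaluates a user-supplied spectral measure directly and is meant as a readable reference implementation rather than an efficient one; the fast variants use the rank-one updates discussed above.
\begin{verbatim}
import numpy as np

def greedy_add_links(L, candidates, weights, k, rho):
    # Repeatedly add the candidate link with the largest one-step
    # decrease rho(L) - rho(L + w(e) L_e).
    L = L.copy()
    cand = list(zip(candidates, weights))
    chosen = []
    for _ in range(k):
        def gain(item):
            (i, j), w = item
            Le = np.zeros_like(L)
            Le[i, i] = Le[j, j] = w
            Le[i, j] = Le[j, i] = -w
            return rho(L) - rho(L + Le)
        best = max(cand, key=gain)
        (i, j), w = best
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
        cand.remove(best)
        chosen.append(best)
    return L, chosen

# Example of a spectral measure, here zeta_1:
zeta1 = lambda L: np.sum(1.0 / np.linalg.eigvalsh(L)[1:])
\end{verbatim}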
\begin{remark}
The optimization problem \eqref{1-link} with performance measure $\zeta_{\infty}(L)=\lambda^{-1}_2$ was previously considered in \cite{Ghosh2006}, where a heuristic algorithm was proposed to compute an approximate solution. Later on, another approximate method for this problem was presented in \cite{Kolla}. Also, there is a similar version of this problem that is reported in \cite{Fardad}, where the author studies convergence rate of circulant consensus networks by adding some long-range links. Moreover, a non-combinatorial and relaxed version of our problem of interest has some connections to the sparse consensus network design problem \cite{mogjovACC15, wujovACC14, farlinjovTAC14sync}, where they consider $\ell_1$-regularized $\mathcal H_2$ optimal control problems. When the candidate set $\mathcal E_c$ is the set of all possible links except the network links, i.e., ~$\mathcal E_c= \mathcal V \times \mathcal V \, \backslash \, \mathcal E$, and the performance measure is the logarithm of the uncertainty volume, our result reduces to the result reported in \cite{Summers16}.
\end{remark}
\subsection{Supermodularity and Guaranteed Performance Bounds}\label{subsec1}
A systemic performance measure is a continuous function of link weights on the space of Laplacian matrices ${\mathfrak{L}_{n}}$. Moreover, we can represent a systemic performance measure equivalently as a set function over the set of weighted links. Let us denote by $\mathfrak{G}({\mathcal{V}})$ the set of all weighted graphs with a common node set ${\mathcal{V}}$.
\begin{definition}
\label{defin:3} For a given systemic performance measure ${{\rho}}: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}$, we associate a set function $\tilde \rho: \mathfrak{G}({\mathcal{V}}) \rightarrow {\mathbb{R}}$ that is defined as
\[ \tilde {{\rho}}({\mathcal{G}})~=~ {{\rho}} \bigg(\sum_{e \in {\mathcal{E}}} w(e)L_e \bigg)~=~{{\rho}} (L),\]
where $L$ is Laplacian matrix of ${\mathcal{G}}=({\mathcal{V}}, {\mathcal{E}}, w)$ and $L_e$ is the Laplacian matrix of $({\mathcal{V}}, \{e\}, 1)$, which is an unweighted graph formed by a single link $e$.
\end{definition}
\begin{definition} The union of two weighted graphs ${\mathcal{G}}_1=({\mathcal{V}}, {\mathcal{E}}_1, w_1)$ and ${\mathcal{G}}_2=({\mathcal{V}}, {\mathcal{E}}_2, w_2)$ is defined as follows
\[{\mathcal{G}}_1 \vee {\mathcal{G}}_2~ := ~ ({\mathcal{V}}, {\mathcal{E}}_1 \cup {\mathcal{E}}_2, w)\]
in which
\begin{eqnarray}
w(e):=\begin{cases}\max\{w_1(e),w_2(e)\} ~~\text{if}~e \in {\mathcal{E}}_1 \cup {\mathcal{E}}_2 \\
0~~~~~~~~~~~~~~~~~~~~~~~~~\text{otherwise}
\end{cases}.
\end{eqnarray}
\end{definition}
\begin{definition} The intersection of two weighted graphs ${\mathcal{G}}_1=({\mathcal{V}}, {\mathcal{E}}_1, w_1)$ and ${\mathcal{G}}_2=({\mathcal{V}}, {\mathcal{E}}_2, w_2)$ is defined as follows
\[{\mathcal{G}}_1 \wedge {\mathcal{G}}_2~ := ~ ({\mathcal{V}}, {\mathcal{E}}_1 \cap {\mathcal{E}}_2, w)\]
in which
\begin{eqnarray*}
w(e):=\begin{cases}\min\{w_1(e),w_2(e)\} ~~\text{if}~e \in {\mathcal{E}}_1 \cap {\mathcal{E}}_2 \\
0~~~~~~~~~~~~~~~~~~~~~~~~~\text{otherwise}
\end{cases}.
\end{eqnarray*}
\end{definition}
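These two operations translate directly into operations on edge-weight dictionaries, as in the following illustrative Python sketch.
\begin{verbatim}
def graph_union(w1, w2):
    # Edge weights as dicts {edge: weight}; union takes the maximum.
    return {e: max(w1.get(e, 0.0), w2.get(e, 0.0))
            for e in set(w1) | set(w2)}

def graph_intersection(w1, w2):
    # Intersection keeps common edges with the minimum weight.
    return {e: min(w1[e], w2[e]) for e in set(w1) & set(w2)}
\end{verbatim}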
The following definition is adapted from \cite{combinatorial} for our graph theoretic setting.
\begin{definition}
A set function $\tilde \rho: \mathfrak{G}({\mathcal{V}}) \rightarrow {\mathbb{R}}$ is supermodular with respect to the link set if it satisfies
\begin{equation}
\tilde {{\rho}}({\mathcal{G}}_1 \wedge {\mathcal{G}}_2) \, + \, \tilde {{\rho}}({\mathcal{G}}_1 \vee {\mathcal{G}}_2) \, \geq \, \tilde {{\rho}}({\mathcal{G}}_1) \, + \, \tilde {{\rho}}({\mathcal{G}}_2)
\label{eq:352}
\end{equation}
for all ${\mathcal{G}}_1, {\mathcal{G}}_2 \in \mathfrak{G}({\mathcal{V}})$.
\end{definition}
\begin{figure}[t]
\centering
\begin{tabular}{c c c}
\includegraphics[trim = 50 10 30 10, clip,width=.14 \textwidth]{ex_2_A.eps}&
\includegraphics[trim = 50 10 30 10, clip,width=.14 \textwidth]{ex_2_B.eps} &
\includegraphics[trim = 50 10 30 10, clip,width=.14 \textwidth]{ex_2_C.eps}\\ (a) & (b) & (c)
\end{tabular}
\caption{\small The interconnection topologies of the three graphs are identical except for their highlighted blue links; they show the coupling graph of the linear consensus network in Example \ref{ex:2}. The coupling graph shown here is a generic connected graph with $50$ nodes and $100$ links, which are drawn by black lines. The optimal links are shown by blue line segments. }
\label{fig:1618}
\vspace{-0.8cm}
\end{figure}
\begin{theorem}
\label{submodular:th}
Suppose that systemic performance measure $\rho: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}$ is differentiable and $\triangledown {{\rho}}: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}^{n \times n}$ is monotonically increasing with respect to the cone of positive semidefinite matrices\footnote{$L_1 \preceq L_2 \Longrightarrow \triangledown {{\rho}}(L_1) \preceq \triangledown {{\rho}}(L_2)$.}. Then, the corresponding set function $\tilde \rho: \mathfrak{G}({\mathcal{V}}) \rightarrow {\mathbb{R}}$, from Definition \ref{defin:3}, is supermodular.
\end{theorem}
\begin{proof}
We know that
\begin{equation}
\frac{d}{dt}{{\rho}}(L+tX)~=~\tr(\triangledown {{\rho}}(L+tX) X),
\label{eq:409}
\end{equation}
where $t \in {\mathbb{R}}_+$ and $L, X \in {\mathfrak{L}_{n}}$.
From \eqref{eq:409}, we get
\begin{eqnarray}
&& \hspace{-.2cm}\frac{d}{dt}\big( {{\rho}}(L_1+tX) - {{\rho}}(L_2+tX) \big) \, = \, \nonumber \\
&&~~~~~~~~~~~~~~ \tr\left (\big(\triangledown {{\rho}}(L_1+tX) - \triangledown {{\rho}}(L_2+tX) \big)X\right),
\label{eq:415}
\end{eqnarray}
where $ L_1 ,L_2 \in {\mathfrak{L}_{n}}$ and $L_1 \preceq L_2$. From the monotonicity property of $\triangledown {{\rho}}$ and \eqref{eq:415}, we get
\begin{equation}
\frac{d}{dt}\big( {{\rho}}(L_1+tX) - {{\rho}}(L_2+tX) \big) \, \leq \, 0.
\label{eq:474}
\end{equation}
Then, by integrating both sides of \eqref{eq:474} from $0$ to $1$, we have
\begin{eqnarray*}
&&\hspace{-.6cm}\int_0^1 \frac{d}{dt}{{\rho}}(L_1+tX) dt - \int_0^1 \frac{d}{dt}{{\rho}}(L_2+tX) dt ~\leq~ 0,
\end{eqnarray*}
which directly implies that
\begin{equation}
{{\rho}}(L_1+X)-{{\rho}}(L_1) ~\leq~ {{\rho}}(L_2+X)-{{\rho}}(L_2).
\label{eq:428}
\end{equation}
On the other hand, the corresponding Laplacian matrices of $\mathcal G_1$, $\mathcal G_2$, $\mathcal G_1 \wedge \mathcal G_2$, and $\mathcal G_1 \vee \mathcal G_2$ are given as follows
\begin{eqnarray}
\begin{cases}
L_{\mathcal G_1}:=\sum_{e \in \mathcal E_1 } w_1(e)L_e, \\
L_{\mathcal G_2}:=\sum_{e \in \mathcal E_2 } w_2(e)L_e, \\
L_{\mathcal G_1 \wedge \mathcal G_2}:=\sum_{e \in \mathcal E_1 \cap \mathcal E_2} \min \{w_1(e), w_2(e)\} L_e,\\
L_{\mathcal G_1 \vee \mathcal G_2}:=\sum_{e \in \mathcal E_1 \cup \mathcal E_2} \max \{w_1(e), w_2(e)\} L_e.
\end{cases}
\label{eq:4322}
\end{eqnarray}
Based on these definitions, we have
\begin{equation}
L_{\mathcal G_1 \wedge \mathcal G_2} ~\preceq~ L_{\mathcal G_1}, L_{\mathcal G_2} ~\preceq~ L_{\mathcal G_1 \vee \mathcal G_2}.
\label{eq:360}
\end{equation}
By setting $L_1=L_{\mathcal G_1 \wedge \mathcal G_2}$, $L_2=L_{\mathcal G_1} $, and $X = L_{\mathcal G_2}- L_{\mathcal G_1 \wedge \mathcal G_2}$ in inequality \eqref{eq:428}, we get
\begin{eqnarray}
&&\hspace{-.5cm} {{\rho}}(L_{\mathcal G_1 \wedge \mathcal G_2}+L_{\mathcal G_2}-L_{\mathcal G_1\wedge \mathcal G_2 })-{{\rho}}(L_{\mathcal G_1 \wedge \mathcal G_2}) = {{\rho}}(L_{\mathcal G_2})-{{\rho}}(L_{\mathcal G_1 \wedge \mathcal G_2}) \nonumber \\
&&~~~~~~~~~~~\leq~ {{\rho}}(L_{\mathcal G_1}+L_{\mathcal G_2}-L_{\mathcal G_1\wedge \mathcal G_2 })-{{\rho}}(L_{\mathcal G_1}).
\label{eq:448}
\end{eqnarray}
According to \eqref{eq:4322}, we have
\begin{equation}
L_{\mathcal G_1 \vee \mathcal G_2}+L_{\mathcal G_1 \wedge \mathcal G_2 }~=~ L_{\mathcal G_1}+L_{\mathcal G_2}.
\label{eq:452}
\end{equation}
Therefore, based on equality \eqref{eq:452}, we can rewrite the right hand side of inequality \eqref{eq:448} as follows
\begin{equation}
{{\rho}}(L_{\mathcal G_1}+L_{\mathcal G_2}-L_{\mathcal G_1 \wedge \mathcal G_2 })-{{\rho}}(L_{\mathcal G_1})= {{\rho}}(L_{\mathcal G_1 \vee \mathcal G_2})-{{\rho}}(L_{\mathcal G_1}).
\label{eq:458}
\end{equation}
Finally, using Definition \ref{defin:3}, \eqref{eq:448} and \eqref{eq:458}, we can conclude \eqref{eq:352}.
\end{proof}
It should be emphasized that the convexity property of a systemic performance measure $\rho$ implies that $\triangledown \rho$, if it exists, is a monotone mapping\footnote{$\tr \left ( (\triangledown {{\rho}}(L_1) - \triangledown {{\rho}}(L_2))(L_1 -L_2) \right) \geq 0$, where $ L_1, L_2 \in {\mathfrak{L}_{n}}$.}. However, this property is not sufficient for supermodularity of its corresponding set function $\tilde \rho$.
\begin{example}
In our first example, we show that the uncertainty volume of the output \eqref{measure:uncertainty} satisfies conditions of Theorem \ref{submodular:th}. The gradient operator of this systemic performance measure is
\[ \triangledown \upsilon(L) = -\Big(L + \frac{1}{n} J_n \Big)^{-1}.\]
It is straightforward to verify that $\triangledown \upsilon(L)$ is monotonically increasing with respect to the cone of positive semidefinite matrices. Thus, the set function associated with $\upsilon$ in the sense of Definition \ref{defin:3} is supermodular.
\end{example}
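A randomized numerical sanity check of inequality \eqref{eq:352} for $\upsilon$ is immediate; the following Python sketch (a test on small graphs, not a proof) samples pairs of connected weighted graphs and verifies the inequality.
\begin{verbatim}
import numpy as np

def upsilon(L):
    # Uncertainty volume: (1 - n) log 2 - sum log lambda_i.
    n = L.shape[0]
    lam = np.linalg.eigvalsh(L)[1:]
    return (1 - n) * np.log(2.0) - np.sum(np.log(lam))

def laplacian_from(wdict, n):
    L = np.zeros((n, n))
    for (i, j), w in wdict.items():
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

rng = np.random.default_rng(0)
n = 6
path = {(i, i + 1): 1.0 for i in range(n - 1)}   # keeps graphs connected
for _ in range(100):
    w1 = {**path, (0, 3): rng.uniform(0.1, 2.0), (1, 4): rng.uniform(0.1, 2.0)}
    w2 = {**path, (0, 3): rng.uniform(0.1, 2.0), (2, 5): rng.uniform(0.1, 2.0)}
    wu = {e: max(w1.get(e, 0.0), w2.get(e, 0.0)) for e in set(w1) | set(w2)}
    wi = {e: min(w1[e], w2[e]) for e in set(w1) & set(w2)}
    lhs = upsilon(laplacian_from(wi, n)) + upsilon(laplacian_from(wu, n))
    rhs = upsilon(laplacian_from(w1, n)) + upsilon(laplacian_from(w2, n))
    assert lhs >= rhs - 1e-9        # the supermodularity inequality
print("supermodularity inequality held on all samples")
\end{verbatim}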
\begin{figure}[t]
\centering
\begin{tabular}{c c}
~~~\includegraphics[trim = 45 10 35 10, clip,width=0.17\textwidth]{fig_2_1.eps}~~~~ &~~~
\includegraphics[trim = 45 10 35 10, clip,width=0.17\textwidth]{Fig_2_B.eps}~~~\\ (a) &~ (b)
\end{tabular}
\caption{\small{ The coupling graph of the network used in Example \ref{ex:1} is shown in (a); it consists of $60$ nodes and $176$ links. The location of the optimal link, highlighted by the blue color, is shown in (b). }}
\label{Fig:751}
\vspace{-0.6cm}
\end{figure}
\begin{example}
In our second example, we consider a new class of systemic performance measures that are defined as
\begin{equation}
\mathfrak{m}_q(L) ~=~ - \sum_{i=2}^n \lambda_i^{q},
\label{z-q}
\end{equation}
where $0\leq q \leq 1$.
According to Theorem \ref{f-sum}, this spectral function is a systemic performance measure, as the function $-\lambda^q$ for $0\leq q \leq 1$ is a decreasing convex function on ${\mathbb{R}}_+$. Moreover, its gradient operator, which is given by $\triangledown \mathfrak{m}_q(L) = -q L^{q-1}$, is monotonically increasing for all $0\leq q \leq 1$. Therefore, according to Theorem \ref{submodular:th}, systemic performance measure \eqref{z-q} is supermodular over the set of all weighted graphs with a common node set.
\end{example}
\begin{remark}
For a given performance measure $\rho$, there are several different ways to define an extended set function for $\rho$. These set functions may have different properties. For instance, the extended set function of $\zeta_1$ is supermodular over principal submatrices \cite{Submodular}, but it is not supermodular over the set of all weighted graphs with a common node set (see Definition \ref{defin:3}).
\end{remark}
For those systemic performance measures that satisfy the conditions of Theorem \ref{submodular:th}, one can provide guaranteed performance bounds for our proposed greedy algorithm in Subsection \ref{sec:291}. The following result is based on a well-known result from \cite[Chapter III, Section 3]{combinatorial}.
\begin{theorem}
Suppose that systemic performance measure ${{\rho}}: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}$ is differentiable and $\triangledown {{\rho}}: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}^{n \times n}$ is monotonically increasing with respect to the cone of positive semidefinite matrices. Then, the greedy algorithm in Table \ref{greedy-table}, which starts with $\hat{\mathcal E}$ as the empty set and at every step selects an element $e \in \mathcal E_c$ that minimizes the marginal cost ${{\rho}}(L+L_{\hat{ \mathcal E}}+L_e)-{{\rho}}(L+L_{\hat{\mathcal E}})$, provides a set $\hat{\mathcal E}$ that achieves a $(1- 1/e)$-approximation\footnote{~ This means that $\frac{{{\rho}}(L+\tilde L)- {{\rho}}(L)}{{{\rho}}(L+L^*) - {{\rho}}(L)} \, \geq \, 1 -\frac{1}{e}$, where $L^*$ is the optimum solution and $\tilde L$ is the solution of the greedy algorithm, or equivalently: $ \frac{{{\rho}}(L+\tilde L)- {{\rho}}(L+L^*)}{{{\rho}}(L) - {{\rho}}(L+L^*) } \leq \frac{1}{e}$, where $e$ is Euler's number. } of the optimal solution of the combinatorial network synthesis problem \eqref{k-link}.
\end{theorem}
Since supermodular systemic performance measures are monotone, the combinatorial network synthesis problem \eqref{k-link} admits polynomial-time approximations with provable optimality bounds \cite{combinatorial}. Supermodularity is not a ubiquitous property for all systemic performance measures. Nevertheless, our simulation results in Section \ref{sec:simu} assert that the proposed greedy algorithm in Table \ref{greedy-table} is quite powerful and provides tight and near-optimal solutions for a broad range of systemic performance measures.
\begin{figure}[t]
\begin{center}
\psfrag{X}[c][c]{\footnotesize Label of a candidate link}
\psfrag{Y}[c][c]{\footnotesize $\zeta_1$}
\psfrag{B}[c][c]{\footnotesize ~~~~~~~~~~~~the optimal value }
\includegraphics[width=0.35\textwidth]{Fig2_modified.eps}
\end{center}
\caption{\small{This plot is discussed in Example \ref{ex:1}. }}
\label{Fig:740}
\vspace{-0.5cm}
\end{figure}
\subsection{Computational Complexity Discussion}
As we discussed earlier, the network synthesis problem \eqref{k-link} is in general NP-hard. However, this problem is solvable when $k=1$ and the best link can be found by running an exhaustive search over all possible scenarios, i.e., by calculating the value of a performance measure for all possible $p$ augmented networks, where $p$ is the number of candidate links. The computational complexity of evaluating the performance of a given linear consensus network depends on the specific choice of a systemic performance measure. Let us denote the computational complexity of a given systemic performance measure $\rho: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}$ by $\mathcal O\left(M_\rho(n)\right)$.
In the simple greedy algorithm of Table \ref{greedy-table}, the difference term
\begin{equation}
{{\rho}}(\tilde L)-{{\rho}} \big(\tilde L+\varpi(e)L_e \big)
\label{dif}
\end{equation}
is calculated and updated for each candidate link at each step, for a total of $k \big(p -\frac{k-1}{2}\big)$ times. Thus, the total computational complexity of our simple greedy algorithm is $\mathcal O\left(M_{\rho}(n) (p -\frac{k-1}{2})k\right)$ operations. This computational complexity is at most $\mathcal O \left ( M_{\rho}(n) n^2 k\right)$, where $p=\binom{n}{2}$, i.e., when the candidate set contains all possible links. The complexity of the brute-force method is $\mathcal O\left (M_{\rho}(n) \binom{p}{k}\right)$\footnote{~This corresponds to calculating the value of a performance measure for all $\binom{p}{k}$ possible augmented networks.}. This can be at most $\mathcal O \left( M_\rho(n) 2^p/\sqrt p\right)$.
Moreover, if $k \leq \sqrt p$, then the computational complexity will be $\mathcal O\left (M_{\rho}(n) p^k/k! \right)$.
In some occasions, we can take advantage of the rank-one updates in Theorems \ref{lem-ex} and \ref{coro:1241}, where it is shown that a rank-one deviation in a matrix results in a rank-one change in its inverse matrix as well. This helps reduce the
computational complexity of \eqref{dif} to the order of $\mathcal O(n^2)$ instead of $\mathcal O(n^3)$ operations. As it is shown in \cite{Yaser16necsys}, one can apply the rank-one update on the matrix of effective resistances. As a result, we can update the effective resistances of all links in order of $\mathcal O(n^2)$. More specifically, the matrix of effective resistances is given by
\begin{equation}
R(L^m):= \mathbbm{1}_n \, {\text{diag}} \big( L^{\dag, m} \big)^{\text T} + {\text{diag}} \big( L^{\dag, m} \big)\, \mathbbm{1}_n^{\text T} - 2 L^{\dag, m},
\label{R_matrix}
\end{equation}
for $m \in \{1,2,3\}$, where $R(L^m)_{ij}=r_{\{i,j\}}(L^m)$ and ${\text{diag}}(\cdot)$ stacks the diagonal entries into a column vector. The update rule for \eqref{R_matrix} is obtained by substituting the rank-one update of $(L+L_e)^{\dag}$ from \eqref{eq:522} into \eqref{R_matrix}; the $m$-th power of the rank-one update can be calculated in $\mathcal O(n^2)$ operations, as it can be cast as matrix-vector products only.
Using these facts and the result of Theorem \ref{coro:1241}, the computational cost of \eqref{dif} for systemic performance measures $\zeta_1$, $\zeta_2$, and $\upsilon$ can be significantly reduced; more specifically, the computational complexity of our algorithm reduces to
\[ \mathcal O \left ( \underbrace{n^3}_{\text{calculating}~ L^{\dag,m} {\text{'s at the beginning}} } + \underbrace{ n^2}_{\text{rank-one update} } \times \underbrace{k}_{\text{number of steps}}\right).\]
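A Python sketch of this bookkeeping is given below; it assumes $L^{\dag}$ and $L^{\dag,2}$ are kept in memory, and the helper names are ours. Note that only matrix-vector products appear in the update.
\begin{verbatim}
import numpy as np

def resistance_matrix(Ldag_m):
    # R(L^m)_{ij} = (L^{+,m})_{ii} + (L^{+,m})_{jj} - 2 (L^{+,m})_{ij}.
    d = np.diag(Ldag_m)
    return d[:, None] + d[None, :] - 2.0 * Ldag_m

def update_pinv_powers(Ldag, Ldag2, i, j, w):
    # O(n^2) rank-one update after adding link e = {i, j}:
    # with u = L^+_i - L^+_j, c = 1 / (1/w + r_e(L)), and v = L^+ u,
    #   (L + L_e)^+     = L^+     - c u u^T
    #   (L + L_e)^{+,2} = L^{+,2} - c (v u^T + u v^T)
    #                     + c^2 (u^T u) u u^T.
    u = Ldag[:, i] - Ldag[:, j]
    r = Ldag[i, i] + Ldag[j, j] - 2.0 * Ldag[i, j]
    c = 1.0 / (1.0 / w + r)
    v = Ldag @ u
    Ldag_new = Ldag - c * np.outer(u, u)
    Ldag2_new = (Ldag2 - c * (np.outer(v, u) + np.outer(u, v))
                 + c * c * (u @ u) * np.outer(u, u))
    return Ldag_new, Ldag2_new
\end{verbatim}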
\begin{figure}[t]
\begin{center}
\includegraphics[trim = 60 30 30 30, clip,width=0.25\textwidth]{0.eps}
\end{center}
\caption{\small{This is the coupling graph of the network in Example \ref{comparing} with $30$ nodes, where the graph has $50$ original (black) links and the candidate set includes all $15$ dashed red line segments.}}
\label{Fig:candidate-coupling}
\vspace{-0.55cm}
\end{figure}
For a generic systemic performance measure $\rho: {\mathfrak{L}_{n}} \rightarrow {\mathbb{R}}$, according to Theorem \ref{thm:schur-convex}, calculating its value requires knowledge of all Laplacian eigenvalues of the coupling graph. It is known that the eigenvalue problem for symmetric matrices requires $\mathcal O (n^{2.376} \log n)$ operations \cite{Yau93}. Suppose that calculating the value of spectral function $\Phi: {\mathbb{R}}^{n-1} \rightarrow {\mathbb{R}}$ in Theorem \ref{thm:schur-convex} needs $\mathcal O \left(M_{\Phi}(n)\right)$ operations. Thus, the value of systemic performance measure $\rho(L)$ in equation \eqref{spectral-rho}, and similarly \eqref{dif}, can be calculated in $\mathcal O (n^{2.376} \log n + M_{\Phi}(n) )$. Based on this analysis, we conclude that the complexity of the greedy algorithm in Table \ref{greedy-table} is at most
\[\mathcal O\left( \left (n^{2.376} \log n + M_{\Phi}(n) \right) \left (p -\frac{k-1}{2} \right )k\right).\]
\begin{figure*}
\centering
\subfloat[Spectral zeta function $\zeta_1$]{
\psfrag{x}[c][c]{\footnotesize $k$}
\psfrag{y}[c][c]{\footnotesize $\pi_k$}
\includegraphics[width=60mm]{enhance_1_new.eps}
}
\subfloat[Spectral zeta function $\zeta_2$]{
\psfrag{x}[c][c]{\footnotesize $k$}
\psfrag{y}[c][c]{\footnotesize $\pi_k$}
\includegraphics[width=60mm]{enhance_2_new.eps}
}
\hspace{0mm}
\subfloat[Expected transient covariance $\tau_t$ where $t= 1$]{
\psfrag{x}[c][c]{\footnotesize $k$}
\psfrag{y}[c][c]{\footnotesize $\pi_k$}
\includegraphics[width=60mm]{enhance_3_new.eps}
}
\subfloat[$\gamma$-entropy $I_\gamma(.)$ where $\gamma = 2$]{
\psfrag{x}[c][c]{\footnotesize $k$}
\psfrag{y}[c][c]{\footnotesize $\pi_k$}
\includegraphics[width=60mm]{enhance_4_new.eps}
}
\caption{\small These plots are discussed in Example \ref{ex:4}. }
\label{fig:882}
\vspace{-0.5cm}
\end{figure*}
\section{Numerical Simulations} \label{sec:simu}
In this section, we support our theoretical findings by means of some numerical examples.
\begin{example}\label{ex:2}
This example investigates the sensitivity of the location of an optimal link as a function of its weight. Let us consider a linear consensus network \eqref{first-order}-\eqref{first-order-G}, whose coupling graph is shown in Fig. \ref{fig:1618}, endowed with systemic performance measure \eqref{zeta-measure} with $q=1$. The graph shown in Fig. \ref{fig:1618} is a generic unweighted connected graph with $n=50$ nodes and $100$ links. We solve the network synthesis problem \eqref{1-link} for the candidate set with $|\mathcal E_c|=\frac{1}{2}n(n-1)$ that covers all possible locations in the graph. It is assumed that all candidate links have an identical weight $\varpi_0$. We use our rank-one update method in Theorem \ref{coro:1241} to study the effect of $\varpi_0$ on the location of the optimal link. In Fig. \ref{fig:1618}, we observe that by increasing $\varpi_0$, the location of the optimal link changes. When $\varpi_0=1$, our calculations reveal that the optimal link in Fig. \ref{fig:1618}(a), shown by a blue line segment, maximizes $r_e(L^2)$ among all possible candidate links in set $\mathcal E_c$. By increasing the value of our design parameter to $\varpi_0=1.2$ in Fig. \ref{fig:1618}(b), we observe that the location of the optimal link moves. In our last scenario in Fig. \ref{fig:1618}(c), by setting $\varpi_0=1.6$, the optimal link moves to a new location that maximizes the quantity $r_e(L^2)/r_e(L)$ among all possible candidate links.
\end{example}
\begin{example} \label{ex:1}
The usefulness of our theoretical fundamental hard limits in Theorem \ref{w-thm} in conjunction with our results in Theorem \ref{coro:1241} is illustrated in Fig. \ref{Fig:740}. Suppose that a linear consensus network \eqref{first-order}-\eqref{first-order-G} over a generic coupling graph with $n=60$ nodes, as shown in Fig. \ref{Fig:751}(a), is given. Let us consider the network design problem \eqref{1-link} with systemic performance measure \eqref{zeta-measure} for $q = 1$. The set of candidate links is the set of all possible links in the coupling graph, i.e., $|\mathcal E_c|=\frac{1}{2}n(n-1)$, where it is assumed that all candidate links have an identical weight $\varpi_0=20$. Our goal is to compare the optimality of our low-complexity update rule against a brute-force search over all $|\mathcal E_c|=1770$ possible augmented graphs. The value of the systemic performance measure for each candidate graph is marked by a blue star in Fig. \ref{Fig:740}. In this plot, the black circle highlights the value of the performance measure for the network resulting from the rank-one search \eqref{1-link-B}. The red dashed line in Fig. \ref{Fig:740} shows the best achievable value for $\zeta_1$ according to Theorem \ref{w-thm}. The value of this hard limit can be calculated merely using the Laplacian eigenvalues of the original graph shown in Fig. \ref{Fig:751}(a). The location of the optimal link is shown in Fig. \ref{Fig:751}(b). One observes from Fig. \ref{Fig:740} that our theoretical fundamental limit justifies the near-optimality of our rank-one update strategy \eqref{1-link-B} for networks with generic graph topologies.
\end{example}
\begin{figure*}
\centering
\begin{tabular}{c c c}
{\psfrag{x}[c][c]{\footnotesize $k$}
\psfrag{y}[c][c]{\footnotesize $\zeta_1$}
\psfrag{1}[c][c]{\footnotesize}
\includegraphics[width=50mm]{1.eps}} & {\psfrag{x}[c][c]{\footnotesize $k$}
\psfrag{y}[c][c]{\footnotesize $\zeta_2$}
\psfrag{2}[c][c]{\footnotesize }
\includegraphics[width=50mm]{2.eps}} & {
\psfrag{x}[c][c]{\footnotesize $k$}
\psfrag{y}[c][c]{\footnotesize $\eta$}
\psfrag{0}[c][c]{\footnotesize }
\includegraphics[width=50mm]{6.eps}
} \\ (a) \small Spectral zeta function $\zeta_1$& (b) \small Spectral zeta function $\zeta_2$& (c) \small Hankel norm $\eta$\\
{\psfrag{x}[c][c]{\footnotesize $k$}
\psfrag{y}[c][c]{\footnotesize $I_\gamma$}
\psfrag{5}[c][c]{\footnotesize }
\includegraphics[width=50mm]{5.eps}} &{
\psfrag{x}[c][c]{\footnotesize $k$}
\psfrag{y}[c][c]{\footnotesize $\upsilon$}
\psfrag{3}[c][c]{\footnotesize }
\includegraphics[width=50mm]{3.eps}
}&{\psfrag{x}[c][c]{\footnotesize $k$}
\psfrag{y}[c][c]{\footnotesize $\tau_t$}
\psfrag{4}[c][c]{\footnotesize }
\includegraphics[width=50mm]{4.eps}} \\
(d) \small $\gamma$-entropy $I_\gamma$ where $\gamma = 20$ & (e) \small Uncertainty Volume $\upsilon$ & (f) \small Expected output covariance $\tau_t$ where $t=10$
\end{tabular}
\caption{\small These plots compare optimality gaps of five different methods for solving the network synthesis problem \eqref{k-link} in Example \ref{comparing}.}
\label{Fig:2423}
\vspace{-.9cm}
\end{figure*}
\begin{example}\label{ex:4}
This example follows up on our discussion at the end of Section \ref{sec:672}, where it is explained that the result of Theorem \ref{w-thm} can be utilized to choose reasonable values for design parameter $k$ in the network design problem \eqref{k-link}. We explain the procedure by considering a linear consensus network \eqref{first-order}-\eqref{first-order-G} with a given coupling graph by Fig. \ref{Fig:751}(a). The value of the lower bound (i.e., hard limit) in \eqref{fund-limit-1} is used to form the following quantity
\[ \pi_k := \frac{\varrho_0 - \varrho_k}{\varrho_0} \times 100\]
that represents the percentage of performance enhancement for all values of parameter $1 \leq k \leq n-1$. Fig. \ref{fig:882} illustrates the value of $\pi_k$ with respect to four systemic performance measures: $\zeta_1$, $\zeta_2$, $\tau_t$ and $I_\gamma$. Depending on the desired level of performance, one can compute a sensible value for design parameter $k$ merely by looking up the corresponding plots. For instance, in order to achieve $50 \%$ performance improvement, one should add at least $13$, $10$, $16$, and $12$ weighted links with respect to $\zeta_1$, $\zeta_2$, $\tau_t$ and $I_\gamma$, respectively. We verified tightness of this estimate by running our greedy algorithm in Table \ref{greedy-table}, where the candidate set is equal to the set of all possible links with identical weight $10$. Our simulation results reveal that by adding $13$, $10$, $16$, and $12$ links from the candidate set, the network performance improves by $40.60\%$, $45.10\%$, $37.76\%$, and $40.61\%$ with respect to $\zeta_1$, $\zeta_2$, $\tau_t$, and $I_\gamma$, respectively. Our theoretical bounds predict that network performance can be further improved by increasing the weights of the candidate links. In our example, if we increase the weight from $10$ to $500$, the network performance improves by more than $46\%$ for all mentioned systemic performance measures.
\end{example}
\begin{example} \label{comparing}
We compare the optimality gaps of our proposed greedy (see Table \ref{greedy-table}) and linearization-based (see Table \ref{table-linear}) methods versus the brute-force and simple-random-sampling methods. The brute-force method runs an exhaustive search to find the global optimal solution of problem \eqref{k-link}; however, it cannot be used for medium to large size networks. In order to make our comparison possible, we consider a linear consensus network \eqref{first-order}-\eqref{first-order-G} with $n=30$ nodes over the graph shown in Fig. \ref{Fig:candidate-coupling}. Weights of all links, both in the coupling graph and the candidate set, are equal to $1$. Our control objective is to solve the network synthesis problem \eqref{k-link}, where the candidate set consists of $15$ links that are shown by red-dashed lines in Fig. \ref{Fig:candidate-coupling}. The outcomes of our simulations are depicted in Fig. \ref{Fig:2423}, where we run our algorithms and compute the corresponding values of the systemic performance measures for all $k=1,\ldots, 15$. One observes that our greedy algorithm performs nearly as well as the brute-force method. This is mainly due to the convexity and monotonicity properties of the class of systemic performance measures, which enable the greedy algorithm to produce near-optimal solutions with respect to this class of measures. As one expects, our greedy algorithm outperforms our linearization-based method. It is noteworthy that the time complexity of the linearization method is considerably lower than that of the greedy algorithm. The usefulness of the linearization-based method accentuates itself when the weights of the candidate links are small and/or $k$ is large.
\end{example}
\section{Discussion and Conclusion}
\label{sec:913}
In the following, we provide explanations for some of the outstanding and remaining problems related to this paper.
\vspace{0.1cm}
\noindent {\it Convex Relaxation:}
The constraints of the combinatorial problem \eqref{k-link} can be relaxed by allowing the link weights to vary continuously. The relaxed problem will be a spectral convex optimization problem \cite{lewis}. In some special cases, such as when the cost function is $\zeta_1$ or $\zeta_2$, the relaxed problem can be equivalently cast as a semidefinite programming problem \cite{Siami14acc, Siami14cdc-2}. However, for a generic systemic performance measure, we need to develop some low-complexity specialized optimization techniques to solve the corresponding spectral optimization problem, which is beyond the scope of this paper.
\vspace{0.1cm}
\noindent {\it Higher-Order Approximations:}
In Subsection \ref{subsec:B}, we employed the first-order approximation of a systemic performance measure. One can easily extend our algorithm by considering second-order approximations of a systemic performance measure in order to gain better optimality gaps.
\vspace{0.1cm}
\noindent {\it Non-spectral Systemic Performance Measures:} The class of spectral systemic performance measures can be extended to include non-spectral measures as well. This can be done by replacing the orthogonal invariance property with the weaker permutation invariance property. The local deviation error is an example of a non-spectral systemic performance measure \cite{Siami14cdc-2, Siami15necsys}. Our ongoing research involves a comprehensive treatment of this class of measures.
\section{Introduction}
Time varying maps (also called non-autonomous or time-dependent dynamical systems) describe situations where the dynamics may vary with time, and they yield more flexible models than autonomous systems for the study and description of real-world processes. They may be used to describe the evolution
of a wider class of phenomena, including systems which are forced or driven. For example, any moving picture
on a television screen is an example of a time varying dynamical system.
In the recent past, many studies have addressed dynamical properties of such systems, but a global
theory is still out of reach. Kolyada et al. \cite{KS,KST} gave a definition of topological entropy of time varying maps and discussed minimality of these systems. Also, $\omega$-limit sets and attraction of time varying maps were studied in \cite{JSC,KRR,LLCB}. Later, stability of time varying maps was investigated \cite{BV, KWW}. Thakkar and Das \cite{DTRD} studied expansiveness, shadowing and topological stability of time varying maps.
In \cite{HXWXZF,JNSFHG1,K1,K2,K3,JNSFHG}, the authors studied topological entropy, topological pressure and thermodynamic properties of time varying maps. Weak mixing and chaos of time varying maps
were also studied in \cite{JSC1,OPWP,SYCGG,TCCGG}. Ott et al. \cite{O} studied the evolution of probability distributions and exponential loss of memory for certain time-dependent dynamical systems.
In general, time varying maps can be rather complicated. Thus, we are inclined to look at approximations of orbits, also called pseudo orbits. Systems for which pseudo orbits can be approximated by true orbits are said to satisfy the shadowing property. The shadowing property plays a key role in the study of the stability of dynamical systems.
This property is found in hyperbolic dynamics, and it was used to prove their stability; see for example \cite{MLLL}.
In this line of research, several remarkable results were obtained through the works of a number of authors,
see e.g. \cite{ASAMMR,BADGCOP,CBKDDD,FNMSAA,VGVG,KR,SKKK}.
Since the approximation by true orbits can be expressed in various ways, different notions of shadowing have been introduced. In this paper, we study the shadowing, h-shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties of time varying maps.
\textbf{The paper is organized as follows:}
In Section \ref{section2}, we give a precise definition of a time varying map, review the main concepts and set up our notation. In this section, the shadowing, h-shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties for time varying maps are introduced. We study the basic properties of these notions of shadowing in Section \ref{section3}. In particular, we show that the h-shadowing, limit shadowing and s-limit shadowing properties are conjugacy invariant. Also, by considering these notions of shadowing, we extend earlier results from other papers and identify some subtle changes to the theory in this setting. In Section \ref{section333}, we investigate the relationships between these notions of shadowing for time varying maps and examine the role that expansivity plays in the shadowing properties of such dynamical systems. In particular, we prove some results linking the s-limit shadowing property to the limit shadowing property, and the h-shadowing property to the s-limit shadowing and limit shadowing properties. Moreover, under the assumption of expansivity, we show that the shadowing property implies the h-shadowing, s-limit shadowing and limit shadowing properties. Finally, in Section \ref{section4}, we prove that uniformly expanding and uniformly contracting time varying maps exhibit the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties. Also, we show that any time varying map of a finite set of hyperbolic linear homeomorphisms on a Banach space with the same stable and unstable subspaces has the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties.
\section{Preliminaries}\label{section2}
Throughout this paper we consider $(X,d)$ to be a metric space, $f_{n}:X\to X$, $n\in\mathbb{N}$, to be a sequence of continuous maps and $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ to be a time varying map
on $X$ whose time evolution is defined by composing the maps $f_{n}$ in the following way
\begin{equation}
\mathcal{F}_{n}:=f_{n}\circ f_{n-1}\circ\cdots\circ f_{1},\ \textnormal{for}\ n\geq 1,\ \textnormal{and}\ \mathcal{F}_{0}:=Id_{X}.
\end{equation}
For time varying map $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ defined on $X$, we set $\mathcal{F}_{[i,j]}:=f_{j}\circ f_{j-1}\circ\cdots\circ f_{i+1}\circ f_{i}$ for $1\leq i\leq j$,
and $\mathcal{F}_{[i,j]}:=Id_{X}$ for $i>j$. Also, for any $k>0$, we define a
time varying map ($k^{th}$-iterate of $\mathcal{F}$) $\mathcal{F}^{k}=\{g_{n}\}_{n\in\mathbb{N}}$ on $X$, where
\begin{equation}
g_{n}=f_{nk}\circ f_{(n-1)k+k-1}\circ\ldots\circ f_{(n-1)k+2}\circ f_{(n-1)k+1}\ \text{for}\ n\geq 1.
\end{equation}
Thus $\mathcal{F}^{k}=\{\mathcal{F}_{[(n-1)k+1,nk]}\}_{n\in\mathbb{N}}$. Moreover, if the time varying map $\mathcal{F}$ is shifted $k$ times ($k\geq 1$), then we denote it by $\mathcal{F}(k,\textnormal{shift})$, i.e. $\mathcal{F}(k,\textnormal{shift})=\{f_{n}\}_{n=k+1}^{\infty}$.
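For readers who wish to experiment numerically, the following Python sketch illustrates how $\mathcal{F}_{n}$ and the maps $g_{n}$ of the $k^{th}$-iterate $\mathcal{F}^{k}$ can be computed. It is an informal illustration only; the particular family $f_{n}(x)=x^{n+1}$ (which also appears in the conjugacy example below) is a hypothetical choice for demonstration.
\begin{verbatim}
# A minimal sketch with assumed maps f_n(x) = x^{n+1} on [0,1].
from itertools import islice

def evolve(maps, x, n):
    # F_n(x) = f_n o ... o f_1 (x); maps[i] plays the role of f_{i+1}
    for f in islice(maps, n):
        x = f(x)
    return x

def kth_iterate(maps, k, n):
    # g_n = f_{nk} o ... o f_{(n-1)k+1}, the n-th map of F^k
    def g(x):
        for f in maps[(n - 1) * k : n * k]:
            x = f(x)
        return x
    return g

maps = [lambda x, p=n: x ** (p + 1) for n in range(1, 50)]
print(evolve(maps, 0.9, 3))           # F_3(0.9) = ((0.9^2)^3)^4
print(kth_iterate(maps, 3, 1)(0.9))   # g_1(0.9) = F_{[1,3]}(0.9)
\end{verbatim}
The two printed values agree, reflecting the identity $g_{1}=\mathcal{F}_{[1,k]}=\mathcal{F}_{k}$ for $k=3$.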
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a time varying map on a metric space $(X,d)$. For a point $x_{0}\in X$, put $x_{n}:=\mathcal{F}_{n}(x_{0})$ for all $n\geq 0$. Then the sequence $\{x_{n}\}_{n\geq 0}$, denoted by $\mathcal{O}(x_{0})$, is said to be the \emph{orbit} of $x_{0}$ under time varying map $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$. Moreover, a subset $Y$ of $X$ is said to be \emph{invariant} under $\mathcal{F}$ if $f_{n}(Y)=Y$ for all $n\geq 1$, equivalently $\mathcal{F}_{n}(Y)=Y$ for all $n\geq 0$.
\begin{definition}[Conjugacy]
Let $(X,d_{1})$ and $(Y,d_{2})$ be two metric spaces. Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ and $\mathcal{G}=\{g_{n}\}_{n\in\mathbb{N}}$ be time varying maps on $X$ and $Y$, respectively. If there exists a homeomorphism $h:X\to Y$ such that $h\circ f_{n}=g_{n}\circ h$, for all $n\in\mathbb{N}$, then $\mathcal{F}$ and $\mathcal{G}$ are said to be \emph{conjugate} (with respect to the map $h$) or $h$-\emph{conjugate}. In particular, if $h:X\to Y$ is a uniform homeomorphism, then $\mathcal{F}$ and $\mathcal{G}$ are said to be \emph{uniformly conjugate} or \emph{uniformly} $h$-\emph{conjugate}. (Recall that a homeomorphism $h:X\to Y$ such that $h$ and $h^{-1}$ are uniformly continuous is called a uniform homeomorphism.)
\end{definition}
For example, if $\mathcal{F}=\{x^{n+1}\}_{n\in\mathbb{N}}$ on $[0,1]$ and $\mathcal{G}=\{2((x+1)/2)^{n+1}-1\}_{n\in\mathbb{N}}$ on $[-1,1]$, then $\mathcal{F}$ is uniformly $h$-conjugate to $\mathcal{G}$, where $h:[0,1]\to [-1,1]$ is defined by $h(x)=2x-1$, see \cite{DTRD}.
\begin{definition}[Shadowing property]
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a time varying map on a metric space $(X,d)$ and $Y$ be a subset of $X$. Then,
\begin{enumerate}
\item
for $\delta>0$, a sequence $\{x_{n}\}_{n\geq 0}$ in $X$ is said to be a $\delta$-\emph{pseudo orbit} if
\begin{equation*}
d(f_{n+1}(x_{n}),x_{n+1})<\delta\ \textnormal{for all}\ n\geq 0;
\end{equation*}
\item
for given $\varepsilon>0$, a $\delta$-pseudo orbit $\{x_{n}\}_{n\geq 0}$ is said to be
$\varepsilon$-\emph{shadowed} by $x\in X$ if $d(\mathcal{F}_{n}(x),x_{n})<\varepsilon$ for all $n\geq 0$;
\item
the time varying map $\mathcal{F}$ is said to have \emph{shadowing property} on $Y$ if, for every $\varepsilon>0$, there exists a $\delta>0$ such that every $\delta$-pseudo orbit in $Y$ is $\varepsilon$-shadowed by
some point of $X$. If this property holds on $Y=X$, we simply say that $\mathcal{F}$ has \emph{shadowing property}.
\end{enumerate}
\end{definition}
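Both quantifiers in this definition can be checked directly on finite data. The following Python sketch is a minimal, informal illustration (the contracting family, the noise level and the tolerances are hypothetical choices, not part of the theory): it builds a $\delta$-pseudo orbit by perturbing a true orbit and verifies that it is $\varepsilon$-shadowed by its own initial point.
\begin{verbatim}
# A minimal sketch under assumed maps f_n(x) = x/2 + 1/n on the line.
import random

def is_pseudo_orbit(maps, xs, delta):
    # d(f_{n+1}(x_n), x_{n+1}) < delta for all consecutive pairs
    return all(abs(maps[n](xs[n]) - xs[n + 1]) < delta
               for n in range(len(xs) - 1))

def shadows(maps, x, xs, eps):
    # d(F_n(x), x_n) < eps along the finite pseudo orbit
    for n in range(len(xs)):
        if abs(x - xs[n]) >= eps:
            return False
        if n < len(xs) - 1:
            x = maps[n](x)
    return True

maps = [lambda x, n=n: 0.5 * x + 1.0 / n for n in range(1, 101)]
xs = [1.0]
for n in range(99):                 # delta-pseudo orbit, delta ~ 0.01
    xs.append(maps[n](xs[-1]) + random.uniform(-0.01, 0.01))
print(is_pseudo_orbit(maps, xs, 0.011), shadows(maps, xs[0], xs, 0.05))
\end{verbatim}
For this family the accumulated deviation is at most $\delta/(1-\alpha)=0.02$ with $\alpha=1/2$, so both tests print \texttt{True}; this foreshadows the constructions of Section \ref{section4}.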
\begin{remark}
If $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ is a time varying map on a compact metric space $(X,d)$ and $Y$ is a subset of $X$, then it is easy to see that the time varying map $\mathcal{F}$ has the shadowing property on $Y$ if and only if for every $\varepsilon>0$ there is a $\delta>0$ such that every finite $\delta$-pseudo orbit in $Y$ is $\varepsilon$-shadowed by some point of $X$.
\end{remark}
\begin{definition}[h-shadowing property]
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a time varying map on a metric space $(X,d)$ and $Y$ be a subset of $X$. We say that $\mathcal{F}$ has \emph{h-shadowing property} on $Y$ if
for every $\varepsilon>0$ there exists $\delta>0$ such that for every finite $\delta$-pseudo orbit
$\{x_{0},x_{1},\ldots,x_{m}\}$ in $Y$ there is $x\in X$ such that $d(\mathcal{F}_{n}(x),x_{n})<\varepsilon$ for every $0\leq n<m$ and $\mathcal{F}_{m}(x)=x_{m}$. If this property holds on $Y=X$, we simply say that $\mathcal{F}$ has \emph{h-shadowing property}.
\end{definition}
\begin{remark}\label{remark1}
It is easy to see that every time varying map on a compact metric space with h-shadowing property has shadowing property, but the converse is not true, see \cite[Example 6.4]{BADGCOP}. Note that each continuous map generates a time varying map.
\end{remark}
Now, we define the limit shadowing and s-limit shadowing properties on time varying maps.
\begin{definition}[Limit shadowing property]
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a time varying map on a metric space $(X,d)$ and $Y$ be a subset of $X$. Then,
\begin{enumerate}
\item a sequence $\{x_{n}\}_{n\geq 0}$ in $X$ is called a \emph{limit pseudo orbit} if $d(f_{n+1}(x_{n}),x_{n+1})\to 0$ as $ n\to +\infty$;
\item a sequence $\{x_{n}\}_{n\geq 0}$ in $X$ is said to be \emph{limit shadowed} if there is $x\in X$ such that $d(\mathcal{F}_{n}(x),x_{n})\to 0$, as $n\to +\infty$;
\item the time varying map $\mathcal{F}$ has the \emph{limit shadowing property} on $Y$ whenever every limit pseudo orbit in $Y$ is limit shadowed by some point of $X$. If this property holds on $Y=X$, we simply say that $\mathcal{F}$ has \emph{limit shadowing property}.
\end{enumerate}
\end{definition}
The notion of limit shadowing property was extended to the so-called s-limit shadowing property, to account for the fact that many systems have the limit shadowing property but not the shadowing property.
\begin{definition}[s-Limit shadowing property]
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a time varying map on a metric space $(X,d)$ and $Y$ be a subset of $X$. We say that $\mathcal{F}$ has \emph{s-limit shadowing property} on $Y$ if for
every $\varepsilon > 0$ there is $\delta > 0$ such that
\begin{enumerate}
\item for every $\delta$-pseudo orbit $\{x_{n}\}_{n\geq 0}$ in $Y$, there exists $x\in X$ satisfying $d(\mathcal{F}_{n}(x),x_{n})<\varepsilon$ for all $n\geq 0$, and,
\item if in addition, $\{x_{n}\}_{n\geq 0}$ is a limit pseudo orbit in $Y$ then $d(\mathcal{F}_{n}(x),x_{n})\to 0$ as $n\to +\infty$.
\end{enumerate}
If this property holds on $Y=X$, we simply say that $\mathcal{F}$ has \emph{s-limit shadowing property}.
\end{definition}
\begin{example}\label{example1}
Let $X=[0,1]\cup\{-1/2^{n}:n\geq 1\}$ and $f:X\to X$ be any
homeomorphism such that $f(x)=x$ for $x=1$ or $x\leq 0$ and $f(x)<x$ for
$x\in(0,1)$. Put $f_{n}=f$ for every $n\in\mathbb{N}$. Then time varying map $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ on $X$ has shadowing and limit shadowing properties, but it
does not have s-limit shadowing property, see \cite[Example 3.5]{BADGCOP}.
\end{example}
We say that a sequence $\{a_{n}\}_{n\geq 0}$ of real numbers converges to zero with rate $\theta\in(0,1)$ and write $a_{n}\xrightarrow{\theta} 0$ as $ n\to +\infty$, if there exists a constant $L>0$ such that $|a_{n}|\leq L\theta^{n}$ for all $n\geq 0$.
\begin{definition}[Exponential limit shadowing property]
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a time varying map on a metric space $(X,d)$ and $Y$ be a subset of $X$. Then,
\begin{enumerate}
\item
for $\theta\in(0,1)$, a sequence $\{x_{n}\}_{n\geq 0}$ in $X$ is called a $\theta$-\emph{exponentially
limit pseudo orbit} of $\mathcal{F}$ if $d(f_{n+1}(x_{n}),x_{n+1})\xrightarrow{\theta} 0$ as $ n\to +\infty$;
\item
the time varying map $\mathcal{F}$ has the \emph{exponential limit shadowing property with exponent} $\xi$ on $Y$ if there exists $\theta_{0}\in(0,1)$ so that for any $\theta$-exponentially limit pseudo orbit $\{x_{n}\}_{n\geq 0}\subseteq Y$ with $\theta\in(\theta_{0},1)$, there is $x\in X$ such that $d(\mathcal{F}_{n}(x),x_{n})\xrightarrow{\theta^{\xi}} 0$, as $n\to +\infty$. In the case $\xi=1$ we say that $\mathcal{F}$ has the \emph{exponential limit shadowing property} on $Y$. If this property holds on $Y=X$, we simply say that $\mathcal{F}$ has \emph{exponential limit shadowing property}.
\end{enumerate}
\end{definition}
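The rate condition $a_{n}\xrightarrow{\theta}0$ is straightforward to test on finite data: one looks for a moderate constant $L$ with $|a_{n}|\leq L\theta^{n}$. The following Python sketch is an informal illustration; the error sequence is a hypothetical example.
\begin{verbatim}
# A minimal sketch; the error sequence below is an assumed example.
def rate_constant(errors, theta):
    # smallest L with |a_n| <= L * theta^n over the finite sample;
    # a moderate value certifies theta-exponential decay on the sample
    return max(abs(a) / theta ** n for n, a in enumerate(errors))

errors = [0.7 ** n * (1 + 0.1 * (-1) ** n) for n in range(30)]
print(rate_constant(errors, 0.7))  # about 1.1: rate theta = 0.7 holds
print(rate_constant(errors, 0.5))  # huge: decay is slower than 0.5^n
\end{verbatim}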
\section{Basic properties of various notions of shadowing}\label{section3}
Our aim in this section is to characterize the basic properties of various notions of shadowing (i.e. the shadowing, h-shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties) for time varying maps. In particular, we show that the h-shadowing, limit shadowing and s-limit shadowing properties are conjugacy invariant. Moreover, by considering these notions of shadowing, we are able to extend earlier results from other papers, identifying some subtle changes to the theory in this setting.
Thakkar and Das \cite[Theorem 3.1]{DTRD} showed that the shadowing property of time varying maps is conjugacy invariant. In the following theorem, we show that the h-shadowing, limit shadowing and s-limit shadowing properties are also conjugacy invariant. Our approach is similar to that of \cite[Theorem 3.1]{DTRD}, and we give the proof for completeness.
\begin{theorem}
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ and $\mathcal{G}=\{g_{n}\}_{n\in\mathbb{N}}$ be time varying maps on metric spaces $(X,d_{1})$ and $(Y,d_{2})$, respectively, such that $\mathcal{F}$ is uniformly conjugate to $\mathcal{G}$. Then, the following statements hold:
\begin{itemize}
\item[\textnormal{(a)}]
If $\mathcal{F}$ has the h-shadowing property, then so does $\mathcal{G}$.
\item[\textnormal{(b)}]
If $\mathcal{F}$ has the limit shadowing property, then so does $\mathcal{G}$.
\item[\textnormal{(c)}]
If $\mathcal{F}$ has the s-limit shadowing property, then so does $\mathcal{G}$.
\end{itemize}
\end{theorem}
\begin{proof}
Since $\mathcal{F}$ is uniformly conjugate to $\mathcal{G}$, there exists a uniform homeomorphism
$h:X\to Y$ such that $h\circ f_{n}=g_{n}\circ h$ for all $n\in\mathbb{N}$, which implies
$f_{n}\circ h^{-1}=h^{-1}\circ g_{n}$ for all $n\in\mathbb{N}$. Hence, for all $n\geq 0$,
\begin{eqnarray*}
h\circ\mathcal{F}_{n}
&=& h\circ f_{n}\circ f_{n-1}\circ\cdots\circ f_{1}\\
&=& g_{n}\circ h\circ f_{n-1}\circ\cdots\circ f_{1}\\
&\vdots &\\
&=& g_{n}\circ g_{n-1}\circ\cdots\circ g_{1}\circ h\\
&=& \mathcal{G}_{n}\circ h.
\end{eqnarray*}
\textnormal{(a)}. Let $\varepsilon>0$ be given. By uniform continuity of $h$ there exists an $\varepsilon_{0}>0$ such that $d_{1}(x,y)<\varepsilon_{0}$ implies $d_{2}(h(x),h(y))<\varepsilon$. Since $\mathcal{F}$ has h-shadowing property there exists a $\delta_{0}>0$ such that any finite $\delta_{0}$-pseudo orbit of $\mathcal{F}$ is $\varepsilon_{0}$-shadowed (with exact hit at the end) by $\mathcal{F}$ orbit of some point of $X$. Since $h^{-1}$ is uniformly continuous, for $\delta_{0}>0$ there exists a $\delta>0$ such that $d_{2}(x,y)<\delta$ implies $d_{1}(h^{-1}(x),h^{-1}(y))<\delta_{0}$. Now, let $\{x_{0},x_{1},\ldots,x_{m}\}$ be a finite $\delta$-pseudo orbit
for $\mathcal{G}$, i.e. $d_{2}(g_{n+1}(x_{n}),x_{n+1})<\delta$ for every $0\leq n<m$. Hence
$d_{1}(h^{-1}(g_{n+1}(x_{n})),h^{-1}(x_{n+1}))<\delta_{0}$, and so
$d_{1}(f_{n+1}(h^{-1}(x_{n})),h^{-1}(x_{n+1}))<\delta_{0}$. Therefore
$\{h^{-1}(x_{0}),h^{-1}(x_{1}),\ldots,h^{-1}(x_{m})\}$ is a finite $\delta_{0}$-pseudo orbit
for $\mathcal{F}$. Thus there exists $x\in X$ such that $d_{1}(\mathcal{F}_{n}(x),h^{-1}(x_{n}))<\varepsilon_{0}$ for every $0\leq n<m$ and $\mathcal{F}_{m}(x)=h^{-1}(x_{m})$. Hence, $d_{2}(h(\mathcal{F}_{n}(x)),x_{n})<\varepsilon$ for every $0\leq n<m$ and $h(\mathcal{F}_{m}(x))=x_{m}$. Thus $d_{2}(\mathcal{G}_{n}(h(x)),x_{n})<\varepsilon$ for every $0\leq n<m$ and $\mathcal{G}_{m}(h(x))=x_{m}$, so $\mathcal{G}$ also has the h-shadowing property.
\textnormal{(b)}. Let $\mathcal{F}$ have the limit shadowing property, and let $\{y_{n}\}_{n\geq 0}$ be a limit pseudo orbit of $\mathcal{G}$, i.e. $d_{2}(g_{n+1}(y_{n}),y_{n+1})\to 0$ as $n\to +\infty$. Then, by uniform continuity of $h^{-1}$,
$d_{1}(h^{-1}\circ g_{n+1}(y_{n}),h^{-1}(y_{n+1}))\to 0$ as $n\to +\infty$, and so $d_{1}(f_{n+1}\circ h^{-1}(y_{n}),h^{-1}(y_{n+1}))\to 0$ as $n\to +\infty$. Thus $\{h^{-1}(y_{n})\}_{n\geq 0}$ is a limit pseudo orbit of $\mathcal{F}$. Hence, there exists $x\in X$ such that $d_{1}(\mathcal{F}_{n}(x),h^{-1}(y_{n}))\to 0$ as $n\to +\infty$. Again, by uniform continuity of $h$, $d_{2}(h\circ\mathcal{F}_{n}(x),h\circ h^{-1}(y_{n}))\to 0$ as $n\to +\infty$, and so $d_{2}(\mathcal{G}_{n}\circ h(x),h\circ h^{-1}(y_{n}))\to 0$ as $n\to +\infty$.
Hence, $d_{2}(\mathcal{G}_{n}(h(x)),y_{n})\to 0$ as $n\to +\infty$, which implies $\{y_{n}\}_{n\geq 0}$ is limit shadowed by $h(x)\in Y$. Consequently, any limit pseudo orbit of $\mathcal{G}$ can be limit shadowed by some point of $Y$. Thus $\mathcal{G}$ also has the limit shadowing property.
Finally, part (c) is a direct consequence of part (b) and \cite[Theorem 3.1]{DTRD}.
\end{proof}
Thakkar and Das \cite[Theorem 3.3]{DTRD} showed that a finite direct product of time varying maps has the shadowing property if and only if each of its factors has the shadowing property. In the following theorem, we extend this property to the other notions of shadowing.
\begin{theorem}\label{theorem1}
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ and $\mathcal{G}=\{g_{n}\}_{n\in\mathbb{N}}$ be time varying maps on metric spaces $(X,d_{1})$ and $(Y,d_{2})$, respectively. Define metric $d$ on $X\times Y$ by
\begin{equation*}
d((x_{1},y_{1}),(x_{2},y_{2})):=\max\{d_{1}(x_{1},x_{2}),d_{2}(y_{1},y_{2})\}\quad \text{for \ any} \ (x_{1},y_{1}),(x_{2},y_{2})\in X\times Y.
\end{equation*}
Then,
\begin{itemize}
\item[\textnormal{(a)}]
$\mathcal{F}$ and $\mathcal{G}$ have the h-shadowing property if and only if so does $\mathcal{F}\times\mathcal{G}:=\{f_{n}\times g_{n}\}_{n\in\mathbb{N}}$.
\item[\textnormal{(b)}]
$\mathcal{F}$ and $\mathcal{G}$ have the limit shadowing property if and only if so does $\mathcal{F}\times\mathcal{G}$.
\item[\textnormal{(c)}]
$\mathcal{F}$ and $\mathcal{G}$ have the exponential limit shadowing property if and only if so does $\mathcal{F}\times\mathcal{G}$.
\item[\textnormal{(d)}]
$\mathcal{F}$ and $\mathcal{G}$ have the s-limit shadowing property if and only if so does $\mathcal{F}\times\mathcal{G}$.
\end{itemize}
\end{theorem}
\begin{proof}
By the definitions and \cite[Theorem 3.3]{DTRD}, the proofs of parts (a), (b) and (d) are not difficult, hence we only prove part (c).
\textnormal{(c)}. Let time varying maps $\mathcal{F}$ and $\mathcal{G}$ have the exponential limit shadowing property. Then, there exists $\theta_{\mathcal{F}}\in(0,1)$ such that for any $\theta$-exponentially limit pseudo orbit $\{x_{n}\}_{n\geq 0}$ of $\mathcal{F}$ with $\theta\in(\theta_{\mathcal{F}},1)$, there is $x\in X$ such that $d(\mathcal{F}_{n}(x),x_{n})\xrightarrow{\theta} 0$,
as $n\to +\infty$. Also, there exists $\theta_{\mathcal{G}}\in(0,1)$ such that for any $\theta$-exponentially limit pseudo orbit $\{y_{n}\}_{n\geq 0}$ of $\mathcal{G}$ with $\theta\in(\theta_{\mathcal{G}},1)$, there is $y\in Y$
such that $d(\mathcal{G}_{n}(y),y_{n})\xrightarrow{\theta} 0$, as $n\to +\infty$. Put $\theta_{0}=\max\{\theta_{\mathcal{G}},\theta_{\mathcal{F}}\}$, and let $\{(x_{n},y_{n})\}_{n\geq 0}$ be a $\theta$-exponentially limit pseudo orbit of $\mathcal{F}\times\mathcal{G}$ with $\theta\in(\theta_{0},1)$, i.e. $d((f_{n+1}\times g_{n+1})(x_{n},y_{n}),(x_{n+1},y_{n+1}))\xrightarrow{\theta} 0$ as $ n\to +\infty$.
Then, $d_{1}(f_{n+1}(x_{n}),x_{n+1})\xrightarrow{\theta} 0$ and $d_{2}(g_{n+1}(y_{n}),y_{n+1})\xrightarrow{\theta} 0$, as $ n\to +\infty$. Hence, there exist $x\in X$ and $y\in Y$ such that $d_{1}(\mathcal{F}_{n}(x),x_{n})\xrightarrow{\theta} 0$ and $d_{2}(\mathcal{G}_{n}(y),y_{n})\xrightarrow{\theta} 0$, as $n\to +\infty$. Therefore,
\begin{eqnarray*}
d((\mathcal{F}\times\mathcal{G})_{n}(x,y),(x_{n},y_{n}))
&=& d((\mathcal{F}_{n}(x),\mathcal{G}_{n}(y)),(x_{n},y_{n}))\\
&\leq& \max\{d_{1}(\mathcal{F}_{n}(x),x_{n}),d_{2}(\mathcal{G}_{n}(y),y_{n})\}\xrightarrow{\theta} 0\ \textnormal{as}\ n\to +\infty
\end{eqnarray*}
which implies $\mathcal{F}\times\mathcal{G}$ also has the exponential limit shadowing property.
Conversely, let the direct product $\mathcal{F}\times\mathcal{G}$ have the exponential limit shadowing property, so there exists $\theta_{0}\in(0,1)$ such that for any $\theta$-exponentially limit pseudo orbit $\{(x_{n},y_{n})\}_{n\geq 0}$ of $\mathcal{F}\times\mathcal{G}$ with $\theta\in(\theta_{0},1)$, there is $(x,y)\in X\times Y$ such that $d((\mathcal{F}\times\mathcal{G})_{n}(x,y),(x_{n},y_{n}))\xrightarrow{\theta} 0$,
as $n\to +\infty$. Now, let $\{x_{n}\}_{n\geq 0}$ and $\{y_{n}\}_{n\geq 0}$ be $\theta$-exponentially limit pseudo orbits of $\mathcal{F}$ and $\mathcal{G}$ with $\theta\in(\theta_{0},1)$, respectively, i.e. $d_{1}(f_{n+1}(x_{n}),x_{n+1})\xrightarrow{\theta} 0$ and $d_{2}(g_{n+1}(y_{n}),y_{n+1})\xrightarrow{\theta} 0$, as $ n\to +\infty$. Hence,
\begin{eqnarray*}
d((f_{n+1}\times g_{n+1})(x_{n},y_{n}),(x_{n+1},y_{n+1}))
&=& d((f_{n+1}(x_{n}), g_{n+1}(y_{n})),(x_{n+1},y_{n+1}))\\
&=& \max\{d_{1}(f_{n+1}(x_{n}),x_{n+1}),d_{2}(g_{n+1}(y_{n}),y_{n+1})\}\\
&\leq& d_{1}(f_{n+1}(x_{n}),x_{n+1})+d_{2}(g_{n+1}(y_{n}),y_{n+1})\xrightarrow{\theta} 0
\end{eqnarray*}
as $ n\to +\infty$. Therefore, $\{(x_{n},y_{n})\}_{n\geq 0}$ is a $\theta$-exponentially limit pseudo orbit of $\mathcal{F}\times\mathcal{G}$, and so there exists $(x,y)\in X\times Y$ such that
$d((\mathcal{F}\times\mathcal{G})_{n}(x,y),(x_{n},y_{n}))=d((\mathcal{F}_{n}(x),\mathcal{G}_{n}(y)),(x_{n},y_{n}))\xrightarrow{\theta} 0$ as $n\to +\infty$. Hence, $d_{1}(\mathcal{F}_{n}(x),x_{n})\xrightarrow{\theta} 0$ and $d_{2}(\mathcal{G}_{n}(y),y_{n})\xrightarrow{\theta} 0$, as $n\to +\infty$, which implies $\mathcal{F}$ and $\mathcal{G}$ have also the exponential limit shadowing property.
\end{proof}
Hence, a finite direct product of time varying maps has the h-shadowing, limit shadowing, s-limit shadowing
or exponential limit shadowing property if and only if each of its factors has the h-shadowing, limit shadowing, s-limit shadowing or exponential limit shadowing property, respectively.
\begin{theorem}
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a time varying map on metric space $(X,d)$ and $k\in\mathbb{N}$.
Then, the following statements hold:
\begin{itemize}
\item[\textnormal{(a)}]
If $\mathcal{F}$ has the limit shadowing property, then so does $\mathcal{F}^{k}$.
\item[\textnormal{(b)}]
If $\mathcal{F}$ has the exponential limit shadowing property, then so does $\mathcal{F}^{k}$.
\item[\textnormal{(c)}]
If $\mathcal{F}$ has the s-limit shadowing property, then so does $\mathcal{F}^{k}$.
\end{itemize}
\end{theorem}
\begin{proof}
\textnormal{(a)}. The case $k=1$ is trivial, so let $k\geq 2$ and $\{y_{n}\}_{n\geq0}$ be a limit pseudo orbit of $\mathcal{F}^{k}$. Then $d(g_{n+1}(y_{n}),y_{n+1})\to 0$ as $ n\to +\infty$, where $g_{n}=\mathcal{F}_{[(n-1)k+1,nk]}$ for all $n\in\mathbb{N}$, and so $d(\mathcal{F}_{[nk+1,(n+1)k]}(y_{n}),y_{n+1})\to 0$ as $ n\to +\infty$. Put
\begin{equation}\label{jjj13}
x_{nk+j}:=\mathcal{F}_{[nk+1,nk+j]}(y_{n})\ \textnormal{for}\ 0\leq j<k\ \textnormal{and}\ n\geq 0.
\end{equation}
$\mathbf{Claim.}$ The sequence $\{x_{n}\}_{n\geq0}$ is a limit pseudo orbit for $\mathcal{F}$, i.e. $d(f_{nk+j+1}(x_{nk+j}),x_{nk+j+1})\to 0$ as $ n\to +\infty$, for all $n\geq 0$ and $0\leq j<k$.
To prove the claim, choose any $n\geq 0$. Then for any $0\leq j<k-1$,
\begin{equation*}
f_{nk+j+1}(x_{nk+j})=f_{nk+j+1}(\mathcal{F}_{[nk+1,nk+j]}(y_{n}))=\mathcal{F}_{[nk+1,nk+j+1]}(y_{n})=x_{nk+j+1}.
\end{equation*}
Thus $d(f_{nk+j+1}(x_{nk+j}),x_{nk+j+1})=0$, for all $n\geq 0$ and $0\leq j<k-1$. Now for $j=k-1$,
\begin{eqnarray*}
d(f_{nk+k}(x_{nk+k-1}),x_{nk+k})
&=& d(f_{nk+k}(\mathcal{F}_{[nk+1,nk+k-1]}(y_{n})),x_{nk+k})\\
&=& d(\mathcal{F}_{[nk+1,(n+1)k]}(y_{n}),y_{n+1})\\
&=& d(g_{n+1}(y_{n}),y_{n+1}).
\end{eqnarray*}
Hence, $d(f_{n+1}(x_{n}),x_{n+1})\to 0$ as $ n\to +\infty$, since $d(g_{n+1}(y_{n}),y_{n+1})\to 0$ as $ n\to +\infty$, which completes the proof of the claim.
Now, by the limit shadowing property of $\mathcal{F}$, $\{x_{n}\}_{n\geq0}$ is limit shadowed by some $x\in X$, i.e.
$d(\mathcal{F}_{n}(x),x_{n})\to 0$ as $n\to +\infty$. In particular, $d(\mathcal{F}_{kn}(x),x_{kn})\to 0$ as $n\to +\infty$. Thus $d(\mathcal{F}^{k}_{n}(x),y_{n})\to 0$ as $n\to +\infty$, since $y_{n}=x_{kn}$ and $\mathcal{F}^{k}_{n}=\mathcal{F}_{kn}$, which implies $\{y_{n}\}_{n\geq 0}$ is limit shadowed by $x\in X$. Consequently, any limit pseudo orbit of $\mathcal{F}^{k}$ can be limit shadowed by some point of $X$. Thus $\mathcal{F}^{k}$ has also the limit shadowing property.
\textnormal{(b)}.
For $k=1$ this is trivial, so let $k\geq 2$. Since the time varying map $\mathcal{F}$ has the exponential limit shadowing property, there exists $\theta_{0}\in(0,1)$ such that for any $\theta$-exponentially limit pseudo orbit $\{x_{n}\}_{n\geq 0}$ of $\mathcal{F}$ with $\theta\in(\theta_{0},1)$, there is $x\in X$ such that $d(\mathcal{F}_{n}(x),x_{n})\xrightarrow{\theta} 0$, as $n\to +\infty$. Put $\theta_{1}:=\theta_{0}^{1/k}$. Now, let $\{y_{n}\}_{n\geq0}$ be a $\theta$-exponentially limit pseudo orbit of $\mathcal{F}^{k}$ with $\theta\in(\theta_{1},1)$, i.e. $d(g_{n+1}(y_{n}),y_{n+1})\xrightarrow{\theta} 0$ as $ n\to +\infty$, where $g_{n}=\mathcal{F}_{[(n-1)k+1,nk]}$ for all $n\in\mathbb{N}$. Hence, there is $L>0$ such that $d(g_{n+1}(y_{n}),y_{n+1})\leq L\theta^{n}$ for all $n\geq 0$. Consider the sequence $\{x_{n}\}_{n\geq 0}$ given by relation (\ref{jjj13}). Then, one has for all $n\geq 0$ and $0\leq j<k-1$:
\begin{equation*}
d(f_{nk+j+1}(x_{nk+j}),x_{nk+j+1})=0\leq (L\theta^{\frac{1-k}{k}})(\theta^{1/k})^{nk+j},
\end{equation*}
and for $j=k-1$:
\begin{equation*}
d(f_{nk+k}(x_{nk+k-1}),x_{nk+k})=d(g_{n+1}(y_{n}),y_{n+1})\leq L\theta^{n}=(L\theta^{\frac{1-k}{k}})(\theta^{1/k})^{nk+k-1}.
\end{equation*}
Thus $\{x_{n}\}_{n\geq0}$ is a $\theta^{1/k}$-exponentially limit pseudo orbit for $\mathcal{F}$. Hence, by exponential limit shadowing property of $\mathcal{F}$, there is $x\in X$ such that
$d(\mathcal{F}_{n}(x),x_{n})\xrightarrow{\theta^{1/k}} 0$ as $n\to +\infty$. In particular, $d(\mathcal{F}_{kn}(x),x_{kn})\xrightarrow{\theta^{1/k}} 0$ as $n\to +\infty$. Thus $d(\mathcal{F}^{k}_{n}(x),y_{n})\xrightarrow{\theta} 0$ as $n\to +\infty$, which implies $\mathcal{F}^{k}$ has the exponential limit shadowing property.
Finally, part $\textnormal{(c)}$ is a direct consequence of part $\textnormal{(a)}$ and \cite[Theorem 3.3]{DTRD}, which completes the proof of the theorem.
\end{proof}
In the following theorem, we show that if some iterate of a time varying map has the h-shadowing property, then the time varying map itself has the h-shadowing property. We need the assumption of equicontinuity.
\begin{definition}[Equicontinuity]
Time varying map $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ on a metric space $(X,d)$ is said to be \emph{equicontinuous} if for each $\varepsilon>0$ there exists $\delta>0$ such that $d(x,y)<\delta$ implies $d(\mathcal{F}_{[i,j]}(x),\mathcal{F}_{[i,j]}(y))<\varepsilon$ for all $1\leq i\leq j$.
\end{definition}
\begin{theorem}
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be an equicontinuous time varying map on a compact metric space $(X,d)$ and $Y$ be an invariant subset of $X$. Then, the following conditions are equivalent:
\begin{itemize}
\item[\textnormal{(a)}]
$\mathcal{F}$ has the h-shadowing property on $Y$.
\item[\textnormal{(b)}]
$\mathcal{F}^{k}$ has the h-shadowing property on $Y$ for some $k\in\mathbb{N}$.
\item[\textnormal{(c)}]
$\mathcal{F}^{k}$ has the h-shadowing property on $Y$ for all $k\in\mathbb{N}$.
\end{itemize}
\end{theorem}
\begin{proof}
First we prove $(a)\Rightarrow(c)$. By the h-shadowing property of $\mathcal{F}$, for $\varepsilon>0$ there exists $\delta>0$ such that every finite $\delta$-pseudo orbit in $Y$ is $\varepsilon$-shadowed (with exact hit at
the end) by $\mathcal{F}$ orbit of some point in $X$. Now, let $\{x_{0},x_{1},\ldots,x_{m}\}\subseteq Y$ be a finite $\delta$-pseudo orbit for $\mathcal{F}^{k}=\{g_{n}\}_{n\in\mathbb{N}}$, where $g_{n}=\mathcal{F}_{[(n-1)k+1,nk]}$. Then the sequence
\begin{eqnarray*}
&&\{x_{0},\mathcal{F}_{[1,1]}(x_{0}),\mathcal{F}_{[1,2]}(x_{0}),\ldots,\mathcal{F}_{[1,k-1]}(x_{0}),x_{1},\mathcal{F}_{[k+1,k+1]}(x_{1}),\mathcal{F}_{[k+1,k+2]}(x_{1}),\ldots,\mathcal{F}_{[k+1,2k-1]}(x_{1}),\\
&&\ \ x_{2},\ldots,x_{m-1},\mathcal{F}_{[(m-1)k+1,(m-1)k+1]}(x_{m-1}),\mathcal{F}_{[(m-1)k+1,(m-1)k+2]}(x_{m-1}),\ldots,\mathcal{F}_{[(m-1)k+1,mk-1]}(x_{m-1}),x_{m}\}
\end{eqnarray*}
is a finite $\delta$-pseudo orbit for $\mathcal{F}$, which is $\varepsilon$-shadowed by some point $x\in X$
such that $\mathcal{F}_{mk}(x)=x_{m}$. Hence $\mathcal{F}^{k}$ has the h-shadowing property on $Y$ for all $k\in\mathbb{N}$, because $g_{n}\circ\cdots\circ g_{1}=\mathcal{F}_{nk}$ for $1\leq n\leq m$.
Implication $(c)\Rightarrow(b)$ is trivial.
To prove $(b)\Rightarrow(a)$, fix $\varepsilon>0$ and suppose that $\mathcal{F}^{k}$ has h-shadowing property on $Y$ for some $k\in\mathbb{N}$. Since time varying map $\mathcal{F}$ is equicontinuous and $X$ is compact, there exists $\eta>0$ such that $d(x,y)<\eta$ implies
$d(\mathcal{F}_{[n,n+i]}(x),\mathcal{F}_{[n,n+i]}(y))<\frac{\varepsilon}{2}$ for every $n\geq 1$ and $0\leq i\leq k$.
By the h-shadowing property of $\mathcal{F}^{k}$ there exists $0<\delta<\frac{\varepsilon}{2}$ such that each finite $\delta$-pseudo orbit of $\mathcal{F}^{k}$ is $\eta$-shadowed by $\mathcal{F}^{k}$ orbit of some point of $X$ which hits the last element of the pseudo orbit. Since time varying map $\mathcal{F}$ is equicontinuous and $X$ is compact, there exists $0<\gamma<\frac{\delta}{k}$ such that $d(x,y)<\gamma$ implies $d(\mathcal{F}_{[n,n+i]}(x),\mathcal{F}_{[n,n+i]}(y))<\frac{\delta}{k}$ for every $n\geq 1$ and $0\leq i\leq k$.
Now, let $\{x_{0},x_{1},\ldots,x_{m}\}\subseteq Y$ be any finite $\gamma$-pseudo orbit for $\mathcal{F}$ and write $m=sk+r$ for some $s\geq 0$ and some $0\leq r<k$. Then the sequence
\begin{center}
$\{x_{0},x_{1},\ldots,x_{m},\mathcal{F}_{[m+1,m+1]}(x_{m}),\mathcal{F}_{[m+1,m+2]}(x_{m}),\ldots,\mathcal{F}_{[m+1,m+k-r]}(x_{m})\}\subseteq Y$
\end{center}
is a finite $\gamma$-pseudo orbit for $\mathcal{F}$ (note that $Y$ is an invariant subset of $X$), which we relabel as the sequence $\{x_{0},x_{1},\ldots,x_{(s+1)k}\}$. We claim that $\{x_{0},x_{k},x_{2k},\ldots,x_{(s+1)k}\}$
is a finite $\delta$-pseudo orbit for $\mathcal{F}^{k}$. Indeed,
\begin{eqnarray*}
d(\mathcal{F}_{[1,k]}(x_{0}),x_{k})
&\leq& d(x_{k},\mathcal{F}_{[k,k]}(x_{k-1}))+d(\mathcal{F}_{[k,k]}(x_{k-1}),\mathcal{F}_{[k-1,k]}(x_{k-2}))\\
&& +\cdots+d(\mathcal{F}_{[2,k]}(x_{1}),\mathcal{F}_{[1,k]}(x_{0}))\\
&<& \gamma+(k-1)\dfrac{\delta}{k}<\delta.
\end{eqnarray*}
Similarly for $1\leq i\leq s$, we have $d(\mathcal{F}_{[(i-1)k+1,ik]}(x_{(i-1)k}),x_{ik})<\delta$. Finally,
\begin{eqnarray*}
d(\mathcal{F}_{[sk+1,(s+1)k]}(x_{sk}),x_{(s+1)k})
&\leq& d(x_{(s+1)k},\mathcal{F}_{[(s+1)k,(s+1)k]}(x_{(s+1)k-1}))\\
&& +d(\mathcal{F}_{[(s+1)k,(s+1)k]}(x_{(s+1)k-1}),\mathcal{F}_{[(s+1)k-1,(s+1)k]}(x_{(s+1)k-2}))\\
&& +\cdots+d(\mathcal{F}_{[sk+2,(s+1)k]}(x_{sk+1}),\mathcal{F}_{[sk+1,(s+1)k]}(x_{sk}))\\
&<& r(\dfrac{\delta}{k})<\delta.
\end{eqnarray*}
By h-shadowing property of $\mathcal{F}^{k}$ there is $x\in X$ such that $d(\mathcal{F}_{[1,ik]}(x),x_{ik})<\eta$ for every $0\leq i\leq s$ and $\mathcal{F}_{[1,(s+1)k]}(x)=x_{(s+1)k}$. Now, for every $0\leq j<k$ we have
\begin{center}
$d(\mathcal{F}_{[1,ik+j]}(x),\mathcal{F}_{[ik+1,ik+j]}(x_{ik}))<\dfrac{\varepsilon}{2}$
\end{center}
and
\begin{eqnarray*}
d(\mathcal{F}_{[ik+1,ik+j]}(x_{ik}),x_{ik+j})
&\leq& d(x_{ik+j},\mathcal{F}_{[ik+j,ik+j]}(x_{ik+j-1}))\\
&& +d(\mathcal{F}_{[ik+j,ik+j]}(x_{ik+j-1}),\mathcal{F}_{[ik+j-1,ik+j]}(x_{ik+j-2}))\\
&& +\cdots+d(\mathcal{F}_{[ik+2,ik+j]}(x_{ik+1}),\mathcal{F}_{[ik+1,ik+j]}(x_{ik}))\\
&<& \gamma+(k-1)\dfrac{\delta}{k}<\delta<\dfrac{\varepsilon}{2}.
\end{eqnarray*}
So $d(\mathcal{F}_{[1,ik+j]}(x),x_{ik+j})<\varepsilon$. Also,
$f_{m+k-r}\circ\cdots\circ f_{1}(x)=x_{(s+1)k}=f_{m+k-r}\circ\cdots\circ f_{m+1}(x_{m})$ which implies $\mathcal{F}_{m}(x)=x_{m}$. Hence $\mathcal{F}$ has the h-shadowing property on $Y$.
\end{proof}
\begin{lemma}
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a time varying map on a metric space $(X,d)$ and $k\in\mathbb{N}$ such that $f_{1},f_{2},\ldots,f_{k}$ are surjective. Then, $\mathcal{F}$ has the limit shadowing
property if and only if $\mathcal{F}(k,\textnormal{shift})=\{f_{n}\}_{n=k+1}^{\infty}$ has the limit shadowing property.
\end{lemma}
\begin{proof}
Let $\mathcal{F}(k,\textnormal{shift})$ have the limit shadowing property, and let $\{x_{n}\}_{n\geq 0}$ be a limit pseudo orbit of $\mathcal{F}$, i.e. $d(f_{n+1}(x_{n}),x_{n+1})\to 0$ as $n\to +\infty$. Then, the sequence $\{y_{n}\}_{n\geq 0}$ in which $y_{n}=x_{n+k}$ is a limit pseudo orbit of $\mathcal{F}(k,\textnormal{shift})$, i.e. $d(f_{n+k+1}(y_{n}),y_{n+1})\to 0$ as $ n\to +\infty$. Hence, there exists $y\in X$ such that $d(\mathcal{F}_{n}(k,\textnormal{shift})(y),y_{n})\to 0$ as $n\to +\infty$, where $\mathcal{F}_{n}(k,\textnormal{shift})=f_{n+k}\circ\cdots\circ f_{k+2}\circ f_{k+1}$. Now, consider a preimage $x$ of $y$ under $\mathcal{F}_{k}$, i.e. $\mathcal{F}_{k}(x)=y$ (note that $f_{1},f_{2},\ldots,f_{k}$ are surjective). Then $d(\mathcal{F}_{n}(x),x_{n})\to 0$ as $n\to +\infty$, which implies $\{x_{n}\}_{n\geq 0}$ is limit shadowed by $x\in X$. Consequently, any limit pseudo orbit of $\mathcal{F}$ can be limit shadowed by some point of $X$. Thus $\mathcal{F}$ also has the limit shadowing property.
Conversely, let $\mathcal{F}$ have the limit shadowing property, and let $\{x_{n}\}_{n\geq 0}$ be a limit pseudo orbit of $\mathcal{F}(k,\textnormal{shift})$, i.e. $d(f_{n+k+1}(x_{n}),x_{n+1})\to 0$ as $ n\to +\infty$. Then the sequence $\{y_{n}\}_{n\geq 0}$, where $y_{n+k}=x_{n}$ for $n\geq 0$ and $y_{0},y_{1},\ldots,y_{k-1}$ are arbitrary points of $X$, is a limit pseudo orbit of $\mathcal{F}$, i.e. $d(f_{n+1}(y_{n}),y_{n+1})\to 0$ as $ n\to +\infty$. Hence, there exists $y\in X$ such that $d(\mathcal{F}_{n}(y),y_{n})\to 0$ as $n\to +\infty$.
Now, put $x=f_{k}\circ\cdots\circ f_{2}\circ f_{1}(y)$. Then $d(\mathcal{F}_{n}(k,\textnormal{shift})(x),x_{n})\to 0$ as $n\to +\infty$, which implies $\{x_{n}\}_{n\geq 0}$ is limit shadowed by $x\in X$. Consequently, any limit pseudo orbit of $\mathcal{F}(k,\textnormal{shift})$ can be limit shadowed by some point of $X$. Thus $\mathcal{F}(k,\textnormal{shift})$ also has the limit shadowing property.
\end{proof}
\section{Shadowing properties and expansivity}\label{section333}
In this section, we investigate the relationships between various notions of shadowing for time varying maps and examine the role that expansivity plays in shadowing properties of such dynamical systems. We prove some results linking s-limit shadowing property to limit shadowing property, and h-shadowing property to s-limit shadowing and limit shadowing properties. Finally, under the assumption of expansivity, we show that the shadowing property implies the h-shadowing, s-limit shadowing and limit shadowing properties.
\begin{lemma}\label{lemma1}
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a time varying map on a metric space $(X,d)$ and $Y$ be a subset of $X$. If $Y\subseteq f_{n}(Y)$ for every $n\in\mathbb{N}$ and $\mathcal{F}$ has s-limit shadowing property on $Y$ then $\mathcal{F}$ also has limit shadowing property on $Y$. In particular, if $\mathcal{F}$ is a time varying map of surjective maps and has s-limit shadowing property then $\mathcal{F}$ also has limit shadowing property.
\end{lemma}
\begin{proof}
Fix $\varepsilon>0$ and let $\delta>0$ be given by the s-limit shadowing property.
Let $\{x_{n}\}_{n\geq 0}\subseteq Y$ be a limit pseudo orbit of $\mathcal{F}$, i.e. $d(f_{n+1}(x_{n}),x_{n+1})\to 0$ as $ n\to +\infty$. Then for some $n_{0}\in\mathbb{N}$,
$d(f_{n+1}(x_{n}),x_{n+1})<\delta$ for every $n\geq n_{0}$. Since $Y\subseteq f_{n}(Y)$ for every $n\in\mathbb{N}$, there is $y_{0}\in Y$ such that $\{y_{0},\mathcal{F}_{1}(y_{0}),\ldots,\mathcal{F}_{n_{0}-1}(y_{0})\}\subseteq Y$ and $\mathcal{F}_{n_{0}}(y_{0})=x_{n_{0}}$. Hence, the sequence $\{y_{0},\mathcal{F}_{1}(y_{0}),\ldots,\mathcal{F}_{n_{0}-1}(y_{0}),x_{n_{0}},x_{n_{0}+1},\ldots\}$ is both a $\delta$-pseudo orbit and a limit pseudo orbit. By the s-limit shadowing property, it is $\varepsilon$-shadowed and limit shadowed by some point $y\in X$. Therefore $\{x_{n}\}_{n\geq 0}$ is limit shadowed by $y$, which implies that $\mathcal{F}$ has the limit shadowing property on $Y$.
\end{proof}
\begin{theorem}\label{theorem22}
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a time varying map on a compact metric space $(X,d)$ and $Y$ be a closed subset of $X$. Then, the following statements hold:
\begin{itemize}
\item[\textnormal{(a)}]
If there is an open set $U$ such that $Y\subseteq U$ and $\mathcal{F}$ has h-shadowing property on
$U$, then $\mathcal{F}$ has s-limit shadowing property on $Y$. If in addition, $Y\subseteq f_{n}(Y)$ for every $n\in\mathbb{N}$ then $\mathcal{F}$ has limit shadowing property on $Y$.
\item[\textnormal{(b)}]
If $Y$ is invariant and $\mathcal{F}|_{Y}$ has h-shadowing property then $\mathcal{F}|_{Y}$ has s-limit shadowing property and limit shadowing property.
\item[\textnormal{(c)}]
If $\mathcal{F}$ has h-shadowing property then $\mathcal{F}$ has s-limit shadowing property. If in addition, $\mathcal{F}$ is a time varying map of surjective maps then $\mathcal{F}$ has limit shadowing property.
\end{itemize}
\end{theorem}
\begin{proof}
\textnormal{(a)}. Since $X$ is compact, by Remark \ref{remark1} every time varying map with the h-shadowing property has the shadowing property. Hence the first half of the definition of the s-limit shadowing property is satisfied trivially.
So fix $\varepsilon>0$ such that $B(Y,3\varepsilon)\subseteq U$ and denote $\varepsilon_{n}=2^{-n-2}\varepsilon$ for every $n\in\mathbb{N}\cup\{0\}$ (note that $B(Y,r)$ is the $r$-neighborhood of the set $Y$). By the definition of h-shadowing property there are $\{\delta_{n}>0\}_{n\in\mathbb{N}\cup\{0\}}$ such that every finite $\delta_{n}$-pseudo orbit in $U$ is $\varepsilon_{n}$-shadowed by some point of $X$ (with exact hit at the end). Fix any $\delta_{0}$-pseudo orbit $\{x_{n}\}_{n\geq 0}\subseteq Y$ such that $d(f_{n+1}(x_{n}),x_{n+1})\to 0$ as $ n\to +\infty$. There is an increasing sequence $\{k_{i}\}_{i\in\mathbb{N}\cup\{0\}}$ such that $\{x_{n}\}_{n\geq k_{i}}$ is a $\delta_{i}$-pseudo orbit for $\mathcal{F}(k_{i},\textnormal{shift})=\{f_{n}\}_{n=k_{i}+1}^{\infty}$ and obviously $k_{0}=0$. Note that if $w$ is a point such that $\mathcal{F}_{k_{i}}(w)=x_{k_{i}}$ then
the sequence
\begin{center}
$\{w,\mathcal{F}_{1}(w),\ldots,\mathcal{F}_{k_{i}}(w),x_{k_{i}+1},\ldots,x_{k_{i+1}}\}$
\end{center}
is a finite $\delta_{i}$-pseudo orbit. Let $z_{0}$ be a point which $\varepsilon_{0}$-shadows the finite $\delta_{0}$-pseudo orbit $\{x_{0},\ldots,x_{k_{1}}\}$ with exact hit at the end, i.e. $\mathcal{F}_{k_{1}}(z_{0})=x_{k_{1}}$. Note that $\mathcal{F}_{j}(z_{0})\in U$ for $0\leq j\leq k_{1}$.
For $i\in\mathbb{N}$, assume that $z_{i}$ is a point which $\varepsilon_{i}$-shadows the finite $\delta_{i}$-pseudo orbit
\begin{center}
$\{z_{i-1},\mathcal{F}_{1}(z_{i-1}),\ldots,\mathcal{F}_{k_{i}}(z_{i-1}),x_{k_{i}+1},\ldots,x_{k_{i+1}}\}\subseteq U$
\end{center}
with exact hit at the end. Then by h-shadowing property there is a point $z_{i+1}$ which $\varepsilon_{i+1}$-shadows the finite $\delta_{i+1}$-pseudo orbit
\begin{center}
$\{z_{i},\mathcal{F}_{1}(z_{i}),\ldots,\mathcal{F}_{k_{i+1}}(z_{i}),x_{k_{i+1}+1},\ldots,x_{k_{i+2}}\}\subseteq U$
\end{center}
with exact hit at the end. Thus we can produce a sequence $\{z_{i}\}_{i\geq 0}$ with the following
properties:
\begin{itemize}
\item[\textnormal{(1)}]
$d(\mathcal{F}_{j}(z_{i-1}),\mathcal{F}_{j}(z_{i}))<\varepsilon_{i}$ for $0\leq j\leq k_{i}$ and $i\geq 1$;
\item[\textnormal{(2)}]
$d(\mathcal{F}_{j}(z_{i}),x_{j})<\varepsilon_{i}$ for $k_{i}<j\leq k_{i+1}$ and $i\geq 0$;
\item[\textnormal{(3)}]
$\mathcal{F}_{k_{i+1}}(z_{i})=x_{k_{i+1}}$ for $i\geq 0$;
\item[\textnormal{(4)}]
$d(\mathcal{F}_{j}(z_{i}),Y)<\varepsilon$ for $j\leq k_{i+1}$.
\end{itemize}
Since $X$ is compact, there is an increasing sequence $\{s_{i}\}_{i\geq 1}$ such that the limit $z=\lim_{i\to\infty}z_{s_{i}}$ exists. Hence, for any $j,n\in\mathbb{N}$ there exist $i_{0}\geq 0$ and $m\geq i_{0}$ such that $k_{i_{0}}<j\leq k_{i_{0}+1}$
and $d(\mathcal{F}_{j}(z),\mathcal{F}_{j}(z_{s_{m}}))<\varepsilon_{n+1}$. So we get
\begin{eqnarray*}
d(\mathcal{F}_{j}(z),x_{j})
&\leq& d(\mathcal{F}_{j}(z),\mathcal{F}_{j}(z_{s_{m}}))
+d(\mathcal{F}_{j}(z_{i_{0}}),x_{j})
+\sum_{i=i_{0}}^{s_{m}-1}d(\mathcal{F}_{j}(z_{i}),\mathcal{F}_{j}(z_{i+1}))\\
&<& \varepsilon_{n+1}+\varepsilon_{i_{0}}+\sum_{i=i_{0}}^{s_{m}-1}\varepsilon_{i+1}
<2^{-n-3}\varepsilon+\sum_{i=i_{0}}^{\infty}2^{-i-2}\varepsilon\\
&=& \varepsilon(2^{-n-3}+2^{-i_{0}-1})<\varepsilon.
\end{eqnarray*}
Furthermore, for any $n$, let $j>k_{n+2}$. Then, there is $i_{1}\geq n+2$ such that $k_{i_{1}}<j\leq k_{i_{1}+1}$
and there is $m>i_{1}$ such that $d(\mathcal{F}_{j}(z),\mathcal{F}_{j}(z_{s_{m}}))<\varepsilon_{n+1}$. Hence, as
before we obtain
\begin{center}
$d(\mathcal{F}_{j}(z),x_{j})<\varepsilon(2^{-n-3}+2^{-i_{1}-1})\leq\varepsilon(2^{-n-3}+
2^{-n-3})=\varepsilon_{n}.$
\end{center}
This immediately implies that $\limsup_{j\to\infty}d(\mathcal{F}_{j}(z),x_{j})\leq\varepsilon_{n}$. Since $n$
was arbitrary, we have $\lim_{j\to\infty}d(\mathcal{F}_{j}(z),x_{j})=0$. This shows that $\mathcal{F}$ has
s-limit shadowing property on $Y$.
Finally, (b) and (c) follow directly from (a) and Lemma \ref{lemma1} (since $U=Y$ is open in $Y$), which
completes the proof of the theorem.
\end{proof}
\begin{definition}[Expansivity]
A time varying map $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ on a metric space $(X,d)$ is called \emph{strongly expansive} if there exists $\gamma>0$ (called an expansivity constant) such that for any two distinct points $x,y\in X$ and every $N\in\mathbb{N}$, $d(\mathcal{F}_{[N,n]}(x),\mathcal{F}_{[N, n]}(y))>\gamma$ for some $n\geq N$. Equivalently, if for $x,y\in X$ and some $N\in\mathbb{N}$,
$d(\mathcal{F}_{[N,n]}(x),\mathcal{F}_{[N, n]}(y))\leq\gamma$ for all $n\geq N$, then $x=y$.
\end{definition}
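As an informal illustration, the time varying map generated by the linear expanding map $f(x)=2x$ on the real line is strongly expansive: for any distinct $x,y$ and any $N$, the distance $2^{\,n-N+1}|x-y|$ eventually exceeds any fixed $\gamma>0$. The following Python sketch (an assumed, illustrative system, not part of the formal development) measures the first separation time.
\begin{verbatim}
# A minimal sketch: the assumed system f_n(x) = 2x on the real line.
f = lambda x: 2.0 * x

def separation_time(x, y, N, gamma=1.0, max_n=200):
    # first n >= N with d(F_{[N,n]}(x), F_{[N,n]}(y)) > gamma
    for n in range(N, max_n):
        x, y = f(x), f(y)      # apply f_n to the original points
        if abs(x - y) > gamma:
            return n
    return None

# separation after about log2(gamma / 1e-9) ~ 30 applications
print(separation_time(0.2, 0.2 + 1e-9, N=5))
\end{verbatim}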
\begin{corollary}
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a time varying map on a compact metric space $(X,d)$.
\begin{itemize}
\item[\textnormal{(a)}]
If $\mathcal{F}$ is strongly expansive then $\mathcal{F}$ has the shadowing property if and only if $\mathcal{F}$ has the h-shadowing property.
\item[\textnormal{(b)}]
If $\mathcal{F}$ is strongly expansive and has the shadowing property then $\mathcal{F}$ has the h-shadowing and s-limit shadowing properties. If in addition, $\mathcal{F}$ is a time varying map of surjective maps then $\mathcal{F}$ has the limit shadowing property.
\end{itemize}
\end{corollary}
\begin{proof}
If $\mathcal{F}$ has the h-shadowing property then $\mathcal{F}$ has the shadowing property (see Remark \ref{remark1}). So suppose that $\mathcal{F}$ has the shadowing property. Let $\varepsilon<\gamma$ and let $\delta>0$ be provided by shadowing property for $\varepsilon$, where $\gamma$ is the expansivity constant. Fix any finite $\delta$-pseudo orbit $\{x_{0},x_{1},\ldots,x_{m}\}$ and extend it to the infinite $\delta$-pseudo orbit
\begin{center}
$\{x_{0},x_{1},\ldots,x_{m},\mathcal{F}_{[m+1,m+1]}(x_{m}),\mathcal{F}_{[m+1,m+2]}(x_{m}),\mathcal{F}_{[m+1,m+3]}(x_{m}),\ldots\}.$
\end{center}
If $x$ is a point which $\varepsilon$-shadows the above $\delta$-pseudo orbit, then
\begin{center}
$d(\mathcal{F}_{[m+1,m+j]}(\mathcal{F}_{m}(x)),\mathcal{F}_{[m+1,m+j]}(x_{m}))<\varepsilon<\gamma$
\end{center}
for all $j\geq 0$ which implies that $\mathcal{F}_{m}(x)=x_{m}$. Thus $\mathcal{F}$ has the h-shadowing property. Finally, (b) is a direct consequence of part (a) and Theorem \ref{theorem22}, which completes the proof.
\end{proof}
\section{Uniformly contracting and uniformly expanding time varying maps}\label{section4}
In this section, we investigate various notions of shadowing for uniformly contracting and uniformly expanding time varying maps. We show that the uniformly contracting and uniformly expanding time varying maps exhibit the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties. Moreover, we show that any time varying map of a finite set of hyperbolic linear homeomorphisms on a Banach space with the same stable and unstable subspaces has the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties.
\begin{definition}[Uniformly contracting and uniformly expanding time varying map]
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a time varying map on a metric space $(X,d)$. Then,
\begin{enumerate}
\item
the time varying map $\mathcal{F}$ is \emph{uniformly contracting} if its contracting ratio, denoted by $\alpha$, exists and is less than one, where
\begin{equation*}
\alpha:=\sup_{n\in\mathbb{N}}\sup_{\substack{x,y\in X \\
x\neq y}}\dfrac{d(f_{n}(x),f_{n}(y))}{d(x,y)};
\end{equation*}
\item
the time varying map $\mathcal{F}$ is \emph{uniformly expanding} if its expanding ratio, denoted by $\beta$, exists and is greater than one, where
\begin{equation*}
\beta:=\inf_{n\in\mathbb{N}}\inf_{\substack{x,y\in X \\
x\neq y}}\dfrac{d(f_{n}(x),f_{n}(y))}{d(x,y)}.
\end{equation*}
\end{enumerate}
\end{definition}
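Both ratios can be estimated empirically by sampling the quotients $d(f_{n}(x),f_{n}(y))/d(x,y)$; whenever the ratios exist, every sampled quotient lies between $\beta$ and $\alpha$. The following Python sketch is an informal illustration with a hypothetical affine family.
\begin{verbatim}
# A minimal sketch; the affine family below is an assumed example.
def ratio_bounds(maps, samples):
    # every quotient lies in [beta, alpha]; the sampled min (max)
    # overestimates beta (underestimates alpha)
    quots = [abs(f(x) - f(y)) / abs(x - y)
             for f in maps
             for x in samples for y in samples if x != y]
    return min(quots), max(quots)

# f_n(x) = x/3 + c_n is uniformly contracting with alpha = 1/3
maps = [lambda x, c=c: x / 3 + c for c in (0.0, 0.1, 0.2)]
print(ratio_bounds(maps, [i / 10 for i in range(11)]))  # both near 1/3
\end{verbatim}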
In the following theorem, we show that uniformly contracting time varying maps exhibit the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties.
\begin{theorem}\label{contracting}
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a uniformly contracting time varying map on a metric space $(X,d)$. Then,
\begin{itemize}
\item[\textnormal{(a)}]
$\mathcal{F}$ has the shadowing property;
\item[\textnormal{(b)}]
$\mathcal{F}$ has the limit shadowing property;
\item[\textnormal{(c)}]
$\mathcal{F}$ has the exponential limit shadowing property;
\item[\textnormal{(d)}]
$\mathcal{F}$ has the s-limit shadowing property.
\end{itemize}
\end{theorem}
\begin{proof}
Assume that the time varying map $\mathcal{F}$ is uniformly contracting with the contracting ratio $\alpha$.
$\textnormal{(a)}$. Given $\varepsilon>0$ take $\delta=(1-\alpha)\frac{\varepsilon}{2}\leq\frac{\varepsilon}{2}$, and let $\{x_{n}\}_{n\geq 0}$ be a $\delta$-pseudo orbit of $\mathcal{F}$, i.e. $d(f_{n+1}(x_{n}),x_{n+1})<\delta$
for all $n\geq 0$. Consider a point $x\in X$ with $d(x,x_{0})\leq\frac{\varepsilon}{2}$. We show that the
$\delta$-pseudo orbit $\{x_{n}\}_{n\geq 0}$ is $\varepsilon$-shadowed by $x$, i.e. $d(\mathcal{F}_{n}(x),x_{n})<\varepsilon$ for all $n\geq 0$.
Observe that $d(\mathcal{F}_{0}(x),x_{0})\leq\frac{\varepsilon}{2}$ and
\begin{equation*}
d(\mathcal{F}_{1}(x),x_{1})\leq d(\mathcal{F}_{1}(x),f_{1}(x_{0}))+d(f_{1}(x_{0}),x_{1})\leq\alpha d(x,x_{0})+\delta.
\end{equation*}
Similarly,
\begin{eqnarray*}
d(\mathcal{F}_{2}(x),x_{2})
&\leq & d(\mathcal{F}_{2}(x),f_{2}(x_{1}))+d(f_{2}(x_{1}),x_{2})\\
&=& d(f_{2}(\mathcal{F}_{1}(x)),f_{2}(x_{1}))+d(f_{2}(x_{1}),x_{2})\\
&\leq & \alpha d(\mathcal{F}_{1}(x),x_{1})+\delta\\
&\leq & \alpha^{2} d(x,x_{0})+\delta\alpha+\delta.
\end{eqnarray*}
By induction, for each $n\geq 0$, one can show that
\begin{equation*}
d(\mathcal{F}_{n}(x),x_{n})\leq\alpha^{n} d(x,x_{0})+\delta(\alpha^{n-1}+\alpha^{n-2}+\cdots+\alpha+1).
\end{equation*}
Now, the last inequality together with $d(x,x_{0})\leq\frac{\varepsilon}{2}$ imply
\begin{equation*}
d(\mathcal{F}_{n}(x),x_{n})\leq d(x,x_{0})+\delta(\dfrac{1}{1-\alpha})\leq\dfrac{\varepsilon}{2}+\dfrac{\varepsilon}{2}=\varepsilon.
\end{equation*}
Hence, the $\delta$-pseudo orbit $\{x_{n}\}_{n\geq 0}$ is $\varepsilon$-shadowed by $x$ and so time varying map $\mathcal{F}$ has the shadowing property, which completes the proof of part $\textnormal{(a)}$.
$\textnormal{(b)}$. Let $\{x_{n}\}_{n\geq 0}$ be a limit pseudo orbit of $\mathcal{F}$, i.e. $d(f_{n+1}(x_{n}),x_{n+1})\to 0$ as $n\to +\infty$. Put $\tau_{n}=d(f_{n+1}(x_{n}),x_{n+1})$ for all $n\geq 0$ (note that $\tau_{n}\to 0$ as $n\to +\infty$). Now, we show that $d(\mathcal{F}_{n}(x_{0}),x_{n})\to 0$ as $n\to +\infty$, which implies $\{x_{n}\}_{n\geq 0}$ is limit shadowed by $x_{0}$.
To prove this, suppose $\varepsilon$ is an arbitrary positive real number and $M=\sup_{n\geq 0}\tau_{n}$. We can find $k\in\mathbb{N}$ such
that $M\dfrac{\alpha^{k}}{1-\alpha}<\dfrac{\varepsilon}{2}$ and $\tau_{i}<\varepsilon\dfrac{1-\alpha}{2}$ for all $i\geq k$. Obviously, $d(\mathcal{F}_{0}(x_{0}),x_{0})=0$ and
\begin{equation*}
d(\mathcal{F}_{1}(x_{0}),x_{1})\leq d(\mathcal{F}_{1}(x_{0}),f_{1}(x_{0}))+d(f_{1}(x_{0}),x_{1})=d(f_{1}(x_{0}),f_{1}(x_{0}))+\tau_{0}=\tau_{0}.
\end{equation*}
Similarly,
\begin{eqnarray*}
d(\mathcal{F}_{2}(x_{0}),x_{2})
&\leq & d(\mathcal{F}_{2}(x_{0}),f_{2}(x_{1}))+d(f_{2}(x_{1}),x_{2})\\
&=& d(f_{2}(\mathcal{F}_{1}(x_{0})),f_{2}(x_{1}))+d(f_{2}(x_{1}),x_{2})\\
&\leq & \alpha d(\mathcal{F}_{1}(x_{0}),x_{1})+\tau_{1}\\
&\leq & \alpha\tau_{0}+\tau_{1},
\end{eqnarray*}
and
\begin{eqnarray*}
d(\mathcal{F}_{3}(x_{0}),x_{3})
&\leq & d(\mathcal{F}_{3}(x_{0}),f_{3}(x_{2}))+d(f_{3}(x_{2}),x_{3})\\
&=& d(f_{3}(\mathcal{F}_{2}(x_{0})),f_{3}(x_{2}))+d(f_{3}(x_{2}),x_{3})\\
&\leq & \alpha d(\mathcal{F}_{2}(x_{0}),x_{2})+\tau_{2}\\
&\leq & \alpha(\alpha\tau_{0}+\tau_{1})+\tau_{2}\\
&=& \alpha^{2}\tau_{0}+\alpha\tau_{1}+\tau_{2}.
\end{eqnarray*}
By induction, for each $n\geq 0$ we have that
\begin{equation*}
d(\mathcal{F}_{n}(x_{0}),x_{n})\leq\alpha^{n-1}\tau_{0}+\alpha^{n-2}\tau_{1}+\cdots+\alpha\tau_{n-2}+\tau_{n-1}.
\end{equation*}
Hence, for $n\geq 2k$, we have
\begin{eqnarray*}
d(\mathcal{F}_{n}(x_{0}),x_{n})
&\leq & \alpha^{n-1}\tau_{0}+\alpha^{n-2}\tau_{1}+\cdots+\alpha^{k+1}\tau_{n-(k+2)}+\alpha^{k}\tau_{n-(k+1)}\\
&& +\alpha^{k-1}\tau_{n-k}+\cdots+\alpha\tau_{n-2}+\tau_{n-1}\\
&\leq & M\alpha^{k}(1+\alpha+\cdots+\alpha^{n-k-1})+\varepsilon\dfrac{1-\alpha}{2}(1+\alpha+\cdots+\alpha^{k-1})\\
&\leq& M\dfrac{\alpha^{k}}{1-\alpha}+\varepsilon\dfrac{1-\alpha}{2}\cdot\dfrac{1}{1-\alpha}\\
&\leq & \dfrac{\varepsilon}{2}+\dfrac{\varepsilon}{2}=\varepsilon.
\end{eqnarray*}
Therefore $d(\mathcal{F}_{n}(x_{0}),x_{n})\leq\varepsilon$ for all $n\geq 2k$. Since $\varepsilon>0$ was arbitrary, we conclude that $d(\mathcal{F}_{n}(x_{0}),x_{n})\to 0$ as $n\to +\infty$. Hence, the time varying map $\mathcal{F}$ has the limit shadowing property, which completes the proof of part $\textnormal{(b)}$.
$\textnormal{(c)}$. We choose $\theta_{0}\in(\alpha,1)$ and show that $\mathcal{F}$ has the exponential limit shadowing property with respect to this $\theta_{0}$. Let $\{x_{n}\}_{n\geq 0}$ be a $\theta$-exponentially limit pseudo orbit of $\mathcal{F}$ with $\theta\in(\theta_{0},1)$, i.e.
$d(f_{n+1}(x_{n}),x_{n+1})\xrightarrow{\theta} 0$ as $ n\to +\infty$. Then, there exists $L>0$ such that
$d(f_{n+1}(x_{n}),x_{n+1})\leq L\theta^{n}$ for all $n\geq 0$. Hence,
\begin{eqnarray*}
d(\mathcal{F}_{n}(x_{0}),x_{n})
&\leq & d(\mathcal{F}_{n}(x_{0}),f_{n}(x_{n-1}))+d(f_{n}(x_{n-1}),x_{n})\\
&=& d(f_{n}(\mathcal{F}_{n-1}(x_{0})),f_{n}(x_{n-1}))+d(f_{n}(x_{n-1}),x_{n})\\
&\leq & \alpha d(\mathcal{F}_{n-1}(x_{0}),x_{n-1})+L\theta^{n}\\
&\leq & \alpha^{2}d(\mathcal{F}_{n-2}(x_{0}),x_{n-2})+\alpha L\theta^{n-1}+L\theta^{n}\\
&\vdots &\\
&\leq & \alpha^{n-1}L\theta+\alpha^{n-2}L\theta^{2}+\cdots+\alpha^{2}L\theta^{n-2}+\alpha L\theta^{n-1}+L\theta^{n}\\
&= & L(1+\alpha\theta^{-1}+\alpha^{2}\theta^{-2}+\cdots+\alpha^{n-1}\theta^{-n+1})\theta^{n}\\
&\leq & L(1+\alpha\theta^{-1}+\alpha^{2}\theta^{-2}+\cdots+\alpha^{n-1}\theta^{-n+1})\theta^{n-1}\\
&\leq & (\dfrac{L}{\theta-\alpha})\theta^{n}.
\end{eqnarray*}
Therefore $d(\mathcal{F}_{n}(x_{0}),x_{n})\xrightarrow{\theta} 0$ as $n\to +\infty$, i.e. the time varying map $\mathcal{F}$ has the exponential limit shadowing property, which completes the proof of part $\textnormal{(c)}$.
Finally, part $\textnormal{(d)}$ is a direct consequence of the arguments in parts $\textnormal{(a)}$ and $\textnormal{(b)}$, which completes the proof of the theorem.
\end{proof}
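The proof of part (b) is constructive: the initial point $x_{0}$ of the limit pseudo orbit itself limit shadows it. The following Python sketch illustrates this numerically for a hypothetical uniformly contracting family; it is an informal check, not a proof.
\begin{verbatim}
# A minimal sketch, assumed maps f_n(x) = x/2 + 1/(n+1), alpha = 1/2.
maps = [lambda x, n=n: 0.5 * x + 1.0 / (n + 1) for n in range(2000)]

xs = [0.0]                     # limit pseudo orbit with tau_n = 1/(n+1)
for n in range(1999):
    xs.append(maps[n](xs[-1]) + 1.0 / (n + 1))

y, errs = xs[0], []            # shadowing point: x_0 itself, as in (b)
for n in range(len(xs)):
    errs.append(abs(y - xs[n]))
    if n < len(xs) - 1:
        y = maps[n](y)
print(errs[10], errs[100], errs[1000])  # tends to 0, roughly like 2/n
\end{verbatim}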
\begin{remark}
In general, a time varying map with the shadowing and limit shadowing properties need not have the s-limit shadowing property; see Example \ref{example1}.
\end{remark}
\begin{corollary}
Let $I$ be a non-empty finite set and, for every $i\in I$, let $f_{i}:\mathbb{R}\to\mathbb{R}$ be a differentiable function. Assume that the maps $f_{i}$ have a common attracting fixed point $p\in\mathbb{R}$, i.e. $f_{i}(p)=p$
and $|f_{i}^{\prime}(p)|<1$ for all $i\in I$. Set $\mathcal{A}=\{f_{i}\}_{i\in I}$. Then, there is an open interval $U$ about $p$ such that $f(U)\subset U$ for all $f\in\mathcal{A}$. Moreover, each time varying map $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ with $f_{n}\in\mathcal{A}$ on $U$ has the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties.
\end{corollary}
\begin{proof}
By \cite[Proposition 4.4]{RDEV}, for each $i\in I$ there is an open interval $U_{i}$ about $p$ such that if $x\in U_{i}$, then $f_{i}(x)\in U_{i}$ and $f_{i}^{n}(x)\to p$ as $n\to +\infty$. Hence, we can find an open interval $U\subset\cap_{i\in I}U_{i}$ and $0<\varepsilon<1$ such that if $x\in U$, then $|f_{i}^{\prime}(x)|<1-\varepsilon$, $f_{i}(x)\in U$ and $f_{i}^{n}(x)\to p$ as $n\to +\infty$, for every $i\in I$. By the mean value theorem, this implies that
$\dfrac{|f_{i}(x)-f_{i}(y)|}{|x-y|}<1-\varepsilon$ for all distinct $x,y\in U$ and every $i\in I$. Hence, each time varying map $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ with $f_{n}\in\mathcal{A}$ on $U$ is uniformly contracting, and so by Theorem \ref{contracting} it has the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties.
\end{proof}
In the following, we provide two examples of uniformly contracting time varying maps.
\begin{example}
Let $\Sigma_{2}:=\{0,1\}^{\mathbb{N}}=\{x=(x_{1}x_{2}\ldots): x_{n}\in\{0,1\}\}$ be the Bernoulli space.
Consider in $\Sigma_{2}$ the distance defined by
\begin{equation*}
d(x,y) = \left\{
\begin{array}{rl}
2^{-N} & \text{if } x\neq y\ \textnormal{and}\ N=\min\{i:x_{i}\neq y_{i}\},\\
0 & \text{if } x=y.
\end{array} \right.
\end{equation*}
Now, let $f,g:\Sigma_{2}\to\Sigma_{2}$ be two maps defined as
follows
\begin{equation*}
f((x_{1}x_{2}\ldots))=(0x_{1}x_{2}\ldots)\qquad g((x_{1}x_{2}\ldots))=(1x_{1}x_{2}\ldots).
\end{equation*}
Then, any time varying map $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ with $f_{n}\in\{f,g\}$ is uniformly contracting, and so by Theorem \ref{contracting} it has the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties.
\end{example}
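The contraction here can be seen concretely: prepending a symbol pushes the first disagreement of two sequences one position deeper, which halves their distance. The following Python sketch is an informal illustration, with finite tuples standing in for one-sided infinite sequences.
\begin{verbatim}
# A minimal sketch; finite tuples approximate points of Sigma_2.
def d(x, y):
    # 2^{-N}, N being the first index (counted from 1) of disagreement
    for i, (a, b) in enumerate(zip(x, y)):
        if a != b:
            return 2.0 ** -(i + 1)
    return 0.0

f = lambda x: (0,) + x   # prepend the symbol 0
g = lambda x: (1,) + x   # prepend the symbol 1

x, y = (0, 1, 1, 0), (1, 1, 1, 0)
print(d(x, y), d(f(x), f(y)), d(g(x), g(y)))   # 0.5, 0.25, 0.25
\end{verbatim}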
\begin{example}
Let $X\subset \mathbb{R}^2$ be the Sierpinski triangle on the solid equilateral triangle
which is constructed by repeatedly removing inverted maximal equilateral (solid) triangles from a given equilateral (solid)
triangle. Denote the sets in this construction by $X_0, X_1, \cdots$, whereby $X=\bigcap_{n=0}^{\infty}X_n$.
Then $X$ is self-similar,
\begin{equation*}
X=\bigcup_{i=1}^3 g_i(X),
\end{equation*}
where $g_1, g_2, g_3:\mathbb{R}^2 \to \mathbb{R}^2$ are the homotheties of ratio $1/2$ that keep one of the three
vertices of $X_0$ fixed; see \cite{Ri} for more details.
Each $g_i$, $i=1, 2, 3$, is a contraction of ratio $1/2$. Hence, each time varying map $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ with $f_{n}\in \{g_1, g_2, g_3\}$ is uniformly contracting, and so by Theorem \ref{contracting} it has the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties.
\end{example}
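A quick way to visualize such a time varying map is the so-called chaos game: iterating a random choice of the three homotheties drives every orbit towards the Sierpinski triangle. The following Python sketch is an informal illustration; the vertex coordinates are one hypothetical choice of $X_{0}$.
\begin{verbatim}
# A minimal sketch; X_0 is the equilateral triangle on these vertices.
import random

V = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
G = [lambda p, v=v: ((p[0] + v[0]) / 2, (p[1] + v[1]) / 2) for v in V]

p = (0.3, 0.3)                  # any starting point
orbit = []
for _ in range(10000):          # a random time varying map f_n in G
    p = random.choice(G)(p)
    orbit.append(p)             # accumulates on the Sierpinski triangle
print(orbit[-1])
\end{verbatim}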
In the following theorem, we show that every uniformly expanding time varying map of surjective maps has the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties. Note that Castro, Rodrigues and Varandas \cite[Lemma 2.1]{CARFBVP} showed that uniformly expanding time varying maps have the shadowing property. Also, Nazarian Sarkooh and Ghane \cite[Proposition 4.12]{JNSFHG} showed that uniformly Ruelle-expanding time varying maps have the shadowing property. Here, we give a different proof of the shadowing property for uniformly expanding time varying maps and use it to obtain the limit shadowing, s-limit shadowing and exponential limit shadowing properties. Recall that an expanding and surjective map is invertible.
\begin{theorem}\label{expanding}
Let $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ be a uniformly expanding time varying map of surjective maps on a complete metric space $(X,d)$. Then,
\begin{itemize}
\item[\textnormal{(a)}]
$\mathcal{F}$ has the shadowing property;
\item[\textnormal{(b)}]
$\mathcal{F}$ has the limit shadowing property;
\item[\textnormal{(c)}]
$\mathcal{F}$ has the exponential limit shadowing property;
\item[\textnormal{(d)}]
$\mathcal{F}$ has the s-limit shadowing property.
\end{itemize}
\end{theorem}
\begin{proof}
Here, we adapt the approach of the proof of \cite[Theorem 2.2]{VGVG}. Given $n\in\mathbb{N}$, consider the function $\varphi_{n}:X\times X\to[0,+\infty)$, defined by
\begin{equation*}
\varphi_{n}(x,y) = \left\{
\begin{array}{rl}
\dfrac{d(f_{n}(x),f_{n}(y))}{d(x,y)} & \text{if } x\neq y,\\
\beta & \text{if } x=y,
\end{array} \right.
\end{equation*}
where $\beta>1$ is the expanding ratio of the uniformly expanding time varying map $\mathcal{F}$. Hence, one has
\begin{equation}\label{jjj1}
d(x,y)=\dfrac{d(f_{n}(x),f_{n}(y))}{\varphi_{n}(x,y)},\quad \varphi_{n}(x,y)\geq\beta\quad\textnormal{for all}\ x,y\in X\ \textnormal{and}\ n\in\mathbb{N}.
\end{equation}
$\textnormal{(a)}$. Given $\varepsilon>0$ take $\delta=(\beta-1)\varepsilon$, and let $\{x_{n}\}_{n\geq 0}$ be a $\delta$-pseudo orbit of $\mathcal{F}$, i.e. $d(f_{n+1}(x_{n}),x_{n+1})<\delta$ for all $n\geq 0$. Consider
the sequence $\{z_{n}\}_{n\geq 0}$ in $X$, defined as follows
\begin{equation}\label{jjj7}
z_{0}=x_{0},\quad z_{n}=f_{1}^{-1}\circ f_{2}^{-1}\circ\cdots\circ f_{n}^{-1}(x_{n})\quad\textnormal{for all}\ n\in\mathbb{N}.
\end{equation}
Then, $x_{n}=f_{n}\circ f_{n-1}\circ\cdots\circ f_{1}(z_{n})$ for all $n\in\mathbb{N}$. Given $n\in\mathbb{N}$ and $1\leq k\leq n$, denote
\begin{equation}\label{jjj2}
z_{n}^{(k)}=f_{k}\circ f_{k-1}\circ\cdots\circ f_{1}(z_{n}).
\end{equation}
Therefore, for any $n\in\mathbb{N}$ and $1\leq k\leq n$, one has
\begin{equation}\label{jjj3}
z_{n}^{(k)}=f_{k}(z_{n}^{(k-1)})\qquad x_{n}=z_{n}^{(n)}=f_{n}(z_{n}^{(n-1)}).
\end{equation}
We claim that $\{z_{n}\}_{n\geq 0}$ is a Cauchy sequence. Firstly, fixing $n\in\mathbb{N}$ and
$p\geq 1$, and using (\ref{jjj1}), (\ref{jjj2}) and (\ref{jjj3}), we obtain
\begin{eqnarray}\label{jjj4}
d(z_{n},z_{n+p})
&=& \dfrac{d(f_{1}(z_{n}),f_{1}(z_{n+p}))}{\varphi_{1}(z_{n},z_{n+p})}\notag\\
&=& \dfrac{d(z_{n}^{(1)},z_{n+p}^{(1)})}{\varphi_{1}(z_{n},z_{n+p})}\notag\\
&=& \dfrac{d(z_{n}^{(2)},z_{n+p}^{(2)})}{\varphi_{1}(z_{n},z_{n+p})\varphi_{2}(z_{n}^{(1)},z_{n+p}^{(1)})}\notag\\
&\vdots &\notag\\
&=& \dfrac{d(z_{n}^{(n)},z_{n+p}^{(n)})}{\varphi_{1}(z_{n},z_{n+p})\prod_{i=2}^{n}\varphi_{i}(z_{n}^{(i-1)},z_{n+p}^{(i-1)})}\notag\\
&=& \dfrac{d(x_{n},z_{n+p}^{(n)})}{\varphi_{1}(z_{n},z_{n+p})\prod_{i=2}^{n}\varphi_{i}(z_{n}^{(i-1)},z_{n+p}^{(i-1)})}.
\end{eqnarray}
Secondly, by induction on $p\geq 1$ we show that the following inequality holds
uniformly with respect to $n\in\mathbb{N}$:
\begin{equation}\label{jjj5}
d(x_{n},z_{n+p}^{(n)})\leq\delta\sum_{k=1}^{p}\beta^{-k}.
\end{equation}
Indeed, for $p=1$ the inequality (\ref{jjj5}) follows from (\ref{jjj1}) and (\ref{jjj3}):
\begin{equation}
d(x_{n},z_{n+1}^{(n)})=\dfrac{d(f_{n+1}(x_{n}),f_{n+1}(z_{n+1}^{(n)}))}{\varphi_{n+1}(x_{n},z_{n+1}^{(n)})}=
\dfrac{d(f_{n+1}(x_{n}),x_{n+1})}{\varphi_{n+1}(x_{n},z_{n+1}^{(n)})}\leq\dfrac{\delta}{\beta}.
\end{equation}
Assume that (\ref{jjj5}) holds for some $p=q\geq 1$ uniformly on $n\in\mathbb{N}$. Taking into account this assumption, as well as (\ref{jjj1}) and (\ref{jjj3}), we prove (\ref{jjj5}) for
$p=q+1$:
\begin{eqnarray}\label{jjj9}
d(x_{n},z_{n+q+1}^{(n)})
&=& \dfrac{d(f_{n+1}(x_{n}),f_{n+1}(z_{n+q+1}^{(n)}))}{\varphi_{n+1}(x_{n},z_{n+q+1}^{(n)})}\notag\\
&=& \dfrac{d(f_{n+1}(x_{n}),z_{n+1+q}^{(n+1)})}{\varphi_{n+1}(x_{n},z_{n+q+1}^{(n)})}\notag\\
&\leq & \dfrac{d(f_{n+1}(x_{n}),x_{n+1})+d(x_{n+1},z_{n+1+q}^{(n+1)})}{\varphi_{n+1}(x_{n},z_{n+1+q}^{(n)})}\notag\\
&\leq & \dfrac{1}{\beta}(\delta+\delta\sum_{k=1}^{q}\beta^{-k})=\delta\sum_{k=1}^{q+1}\beta^{-k}.
\end{eqnarray}
Therefore (\ref{jjj5}) holds.
Now, the relations (\ref{jjj1}), (\ref{jjj4}) and (\ref{jjj5}) give us the following estimate for $d(z_{n},z_{n+p})$ with any $n\in\mathbb{N}$ and $p\geq 1$:
\begin{eqnarray}\label{jjj6}
d(z_{n},z_{n+p})
&\leq & \dfrac{\delta\sum_{k=1}^{p}\beta^{-k}}{\varphi_{1}(z_{n},z_{n+p})\prod_{i=2}^{n}\varphi_{i}(z_{n}^{(i-1)},z_{n+p}^{(i-1)})}\notag\\
&\leq & \dfrac{\delta}{(\beta-1)\varphi_{1}(z_{n},z_{n+p})\prod_{i=2}^{n}\varphi_{i}(z_{n}^{(i-1)},z_{n+p}^{(i-1)})}\notag\\
&=& \dfrac{\varepsilon}{\varphi_{1}(z_{n},z_{n+p})\prod_{i=2}^{n}\varphi_{i}(z_{n}^{(i-1)},z_{n+p}^{(i-1)})}\leq\varepsilon\beta^{-n}.
\end{eqnarray}
This inequality proves the claim, i.e. $\{z_{n}\}_{n\geq 0}$ is a Cauchy sequence.
Therefore, the sequence $\{z_{n}\}_{n\geq 0}$ is convergent to some point $x\in X$. From (\ref{jjj2}) and (\ref{jjj6}) one has
\begin{equation*}
\lim_{n\to\infty} z_{n}^{(k)}=f_{k}\circ f_{k-1}\circ\cdots\circ f_{1}(x)=\mathcal{F}_{k}(x)\ \textnormal{for any}\ k\geq 1,
\end{equation*}
and
\begin{equation*}
d(z_{n},x)\leq\dfrac{\varepsilon}{\varphi_{1}(z_{n},x)\prod_{i=2}^{n}\varphi_{i}(z_{n}^{(i-1)},\mathcal{F}_{i-1}(x))}\quad\textnormal{(letting}\ p\to\infty\ \textnormal{in (\ref{jjj6})), for}\ n\geq 1.
\end{equation*}
Hence, for $n\geq 1$ we get
\begin{eqnarray*}
d(\mathcal{F}_{n}(x),x_{n})
&=& d(f_{n}\circ\cdots\circ f_{1}(x),f_{n}\circ\cdots\circ f_{1}(z_{n}))\\
&=& \varphi_{n}(\mathcal{F}_{n-1}(x),z_{n}^{(n-1)})d(f_{n-1}\circ\cdots\circ f_{1}(x),f_{n-1}\circ\cdots\circ f_{1}(z_{n}))\\
&=& \varphi_{n}(\mathcal{F}_{n-1}(x),z_{n}^{(n-1)})\varphi_{n-1}(\mathcal{F}_{n-2}(x),z_{n}^{(n-2)})d(f_{n-2}\circ\cdots\circ f_{1}(x),f_{n-2}\circ\cdots\circ f_{1}(z_{n}))\\
&\vdots&\\
&=& \varphi_{n}(\mathcal{F}_{n-1}(x),z_{n}^{(n-1)})\varphi_{n-1}(\mathcal{F}_{n-2}(x),z_{n}^{(n-2)})\cdots\varphi_{1}(x,z_{n})
d(x,z_{n})\leq\varepsilon.
\end{eqnarray*}
Also, for the remaining case $n=0$,
\begin{eqnarray*}
d(\mathcal{F}_{0}(x),x_{0})
&=& d(x,x_{0})=\dfrac{d(f_{1}(x),f_{1}(x_{0}))}{\varphi_{1}(x,x_{0})}\\
&\leq & \dfrac{d(f_{1}(x),x_{1})+d(x_{1},f_{1}(x_{0}))}{\varphi_{1}(x,x_{0})}\\
&=& \dfrac{d(\mathcal{F}_{1}(x),x_{1})+d(x_{1},f_{1}(x_{0}))}{\varphi_{1}(x,x_{0})}\\
&\leq & \dfrac{\varepsilon+\delta}{\beta}=\dfrac{\varepsilon+\beta\varepsilon-\varepsilon}{\beta}=\varepsilon.
\end{eqnarray*}
Hence, $\{x_{n}\}_{n\geq 0}$ is $\varepsilon$-shadowed by $x$. Thus $\mathcal{F}$ has the shadowing property which completes the proof of part $\textnormal{(a)}$.
$\textnormal{(b)}$. Let $\varepsilon>0$ and $\{x_{n}\}_{n\geq 0}$ be a limit pseudo orbit of $\mathcal{F}$, i.e. $d(f_{n+1}(x_{n}),x_{n+1})\to 0$ as $ n\to +\infty$. Therefore, there is $N_{0}\in\mathbb{N}$ such that $d(f_{n+1}(x_{n}),x_{n+1})<\varepsilon$, for all $n\geq N_{0}$. Now, consider the sequence $\{z_{n}\}_{n\geq 0}$ and notation $z_{n}^{(k)}$ as given by relations (\ref{jjj7}) and (\ref{jjj2}). We claim that $\{z_{n}\}_{n\geq 0}$ is a Cauchy sequence. To prove this, by induction on $p\geq 1$ we show that the following inequality holds
uniformly with respect to $n\geq N_{0}$:
\begin{equation}\label{jjj11}
d(x_{n},z_{n+p}^{(n)})\leq\varepsilon\sum_{k=1}^{p}\beta^{-k}.
\end{equation}
Indeed, for $p=1$ the inequality (\ref{jjj11}) follows from (\ref{jjj1}) and (\ref{jjj3}):
\begin{equation*}
d(x_{n},z_{n+1}^{(n)})=\dfrac{d(f_{n+1}(x_{n}),f_{n+1}(z_{n+1}^{(n)}))}{\varphi_{n+1}(x_{n},z_{n+1}^{(n)})}=
\dfrac{d(f_{n+1}(x_{n}),x_{n+1})}{\varphi_{n+1}(x_{n},z_{n+1}^{(n)})}\leq\dfrac{\varepsilon}{\beta}.
\end{equation*}
Assume that (\ref{jjj11}) holds for some $p=q\geq 1$ uniformly on $n\geq N_{0}$. Taking into account this assumption, as well as (\ref{jjj1}), (\ref{jjj3}) and (\ref{jjj9}), we prove (\ref{jjj11}) for
$p=q+1$:
\begin{eqnarray*}
d(x_{n},z_{n+q+1}^{(n)})
&\leq & \dfrac{d(f_{n+1}(x_{n}),x_{n+1})+d(x_{n+1},z_{n+1+q}^{(n+1)})}{\varphi_{n+1}(x_{n},z_{n+1+q}^{(n)})}\\
&\leq & \dfrac{1}{\beta}(\varepsilon+\varepsilon\sum_{k=1}^{q}\beta^{-k})
=\varepsilon\sum_{k=1}^{q+1}\beta^{-k}.
\end{eqnarray*}
Therefore (\ref{jjj11}) holds. Now, the relations (\ref{jjj4}) and (\ref{jjj11}) give us the following estimate for $d(z_{n},z_{n+p})$ with any $n\geq N_{0}$ and $p\geq 1$:
\begin{eqnarray}\label{jjj12}
d(z_{n},z_{n+p})
&\leq & \dfrac{\varepsilon\sum_{k=1}^{p}\beta^{-k}}{\varphi_{1}(z_{n},z_{n+p})\prod_{i=2}^{n}\varphi_{i}(z_{n}^{(i-1)},z_{n+p}^{(i-1)})}\notag\\
&\leq & \dfrac{\varepsilon}{(\beta-1)\varphi_{1}(z_{n},z_{n+p})\prod_{i=2}^{n}\varphi_{i}(z_{n}^{(i-1)},z_{n+p}^{(i-1)})}\notag\\
&\leq & \dfrac{\varepsilon}{(\beta-1)}\beta^{-n}.
\end{eqnarray}
This inequality proves the claim, i.e. $\{z_{n}\}_{n\geq 0}$ is a Cauchy sequence.
Therefore, the sequence $\{z_{n}\}_{n\geq 0}$ is convergent to some point $x\in X$. Also, from (\ref{jjj12}) one has
\begin{equation*}
d(z_{n},x)\leq\dfrac{\varepsilon}{(\beta-1)\varphi_{1}(z_{n},x)\prod_{i=2}^{n}\varphi_{i}(z_{n}^{(i-1)},\mathcal{F}_{i-1}(x))}\quad\textnormal{(letting}\ p\to\infty\ \textnormal{in (\ref{jjj12})), for}\ n\geq N_{0}.
\end{equation*}
Hence, for $n\geq N_{0}$ we get
\begin{eqnarray*}
d(\mathcal{F}_{n}(x),x_{n})
&=& d(f_{n}\circ\cdots\circ f_{1}(x),f_{n}\circ\cdots\circ f_{1}(z_{n}))\\
&=& \varphi_{n}(\mathcal{F}_{n-1}(x),z_{n}^{(n-1)})d(f_{n-1}\circ\cdots\circ f_{1}(x),f_{n-1}\circ\cdots\circ f_{1}(z_{n}))\\
&\vdots&\\
&=& \varphi_{n}(\mathcal{F}_{n-1}(x),z_{n}^{(n-1)})\varphi_{n-1}(\mathcal{F}_{n-2}(x),z_{n}^{(n-2)})\cdots\varphi_{1}(x,z_{n})
d(x,z_{n})\leq\dfrac{\varepsilon}{\beta-1}
\end{eqnarray*}
which implies $d(\mathcal{F}_{n}(x),x_{n})\to 0$ as $n\to\infty$, because $\varepsilon$ is arbitrary. Thus the time varying map $\mathcal{F}$ has the limit shadowing property.
$\textnormal{(c)}$. Let $\{x_{n}\}_{n\geq 0}$ be a $\theta$-exponentially limit pseudo orbit of $\mathcal{F}$ with rate $\theta\in(0,1)$, i.e. $d(f_{n+1}(x_{n}),x_{n+1})\xrightarrow{\theta} 0$ as $ n\to +\infty$. Hence, there exists a constant $L>0$ such that $d(f_{n+1}(x_{n}),x_{n+1})\leq L\theta^{n}$ for $n\geq 0$.
Consider the sequence $\{z_{n}\}_{n\geq 0}$ and notation $z_{n}^{(k)}$ given by relations (\ref{jjj7}) and (\ref{jjj2}). We claim that $\{z_{n}\}_{n\geq 0}$ is a Cauchy sequence. To prove this, by induction on $p\geq 1$ we show that the following inequality holds
uniformly with respect to $n\in\mathbb{N}$:
\begin{equation}\label{jjj8}
d(x_{n},z_{n+p}^{(n)})\leq L\theta^{n}\sum_{k=1}^{p}\beta^{-k}.
\end{equation}
Indeed, for $p=1$ the inequality (\ref{jjj8}) follows from (\ref{jjj1}) and (\ref{jjj3}):
\begin{equation*}
d(x_{n},z_{n+1}^{(n)})=\dfrac{d(f_{n+1}(x_{n}),f_{n+1}(z_{n+1}^{(n)}))}{\varphi_{n+1}(x_{n},z_{n+1}^{(n)})}=
\dfrac{d(f_{n+1}(x_{n}),x_{n+1})}{\varphi_{n+1}(x_{n},z_{n+1}^{(n)})}\leq\dfrac{L\theta^{n}}{\beta}.
\end{equation*}
Assume that (\ref{jjj8}) holds for some $p=q\geq 1$ uniformly on $n\in\mathbb{N}$. Taking into account this assumption, as well as (\ref{jjj1}), (\ref{jjj3}) and (\ref{jjj9}), we prove (\ref{jjj8}) for
$p=q+1$:
\begin{eqnarray*}
d(x_{n},z_{n+q+1}^{(n)})
&\leq & \dfrac{d(f_{n+1}(x_{n}),x_{n+1})+d(x_{n+1},z_{n+1+q}^{(n+1)})}{\varphi_{n+1}(x_{n},z_{n+1+q}^{(n)})}\\
&\leq & \dfrac{1}{\beta}(L\theta^{n}+L\theta^{n+1}\sum_{k=1}^{q}\beta^{-k})\\
&\leq & \dfrac{1}{\beta}(L\theta^{n}+L\theta^{n}\sum_{k=1}^{q}\beta^{-k})=L\theta^{n}\sum_{k=1}^{q+1}\beta^{-k}.
\end{eqnarray*}
Therefore (\ref{jjj8}) holds. Now, the relations (\ref{jjj4}) and (\ref{jjj8}) give us the following estimate for $d(z_{n},z_{n+p})$ with any $n\in\mathbb{N}$ and $p\geq 1$:
\begin{eqnarray}\label{jjj10}
d(z_{n},z_{n+p})
&\leq & \dfrac{L\theta^{n}\sum_{k=1}^{p}\beta^{-k}}{\varphi_{1}(z_{n},z_{n+p})\prod_{i=2}^{n}\varphi_{i}(z_{n}^{(i-1)},z_{n+p}^{(i-1)})}\notag\\
&\leq & \dfrac{L\theta^{n}}{(\beta-1)\varphi_{1}(z_{n},z_{n+p})\prod_{i=2}^{n}\varphi_{i}(z_{n}^{(i-1)},z_{n+p}^{(i-1)})}\notag\\
&\leq & \dfrac{L}{(\beta-1)}\Big(\dfrac{\theta}{\beta}\Big)^{n}.
\end{eqnarray}
This inequality proves the claim, i.e. $\{z_{n}\}_{n\geq 0}$ is a Cauchy sequence.
Therefore, the sequence $\{z_{n}\}_{n\geq 0}$ is convergent to some point $x\in X$. Also, from (\ref{jjj10}) one has
\begin{equation*}
d(z_{n},x)\leq\dfrac{L\theta^{n}}{(\beta-1)\varphi_{1}(z_{n},x)\prod_{i=2}^{n}\varphi_{i}(z_{n}^{(i-1)},\mathcal{F}_{i-1}(x))}\quad\textnormal{(letting}\ p\to\infty\ \textnormal{in (\ref{jjj10})), for}\ n\geq 1.
\end{equation*}
Hence, for $n\geq 1$ we get
\begin{eqnarray*}
d(\mathcal{F}_{n}(x),x_{n})
&=& d(f_{n}\circ\cdots\circ f_{1}(x),f_{n}\circ\cdots\circ f_{1}(z_{n}))\\
&=& \varphi_{n}(\mathcal{F}_{n-1}(x),z_{n}^{(n-1)})d(f_{n-1}\circ\cdots\circ f_{1}(x),f_{n-1}\circ\cdots\circ f_{1}(z_{n}))\\
&\vdots&\\
&=& \varphi_{n}(\mathcal{F}_{n-1}(x),z_{n}^{(n-1)})\varphi_{n-1}(\mathcal{F}_{n-2}(x),z_{n}^{(n-2)})\cdots\varphi_{1}(x,z_{n})
d(x,z_{n})\leq\Big(\dfrac{L}{\beta-1}\Big)\theta^{n}
\end{eqnarray*}
which implies $d(\mathcal{F}_{n}(x),x_{n})\xrightarrow{\theta} 0$ as $ n\to +\infty$. Thus the time varying map $\mathcal{F}$ has the exponential limit shadowing property.
Finally, part $\textnormal{(d)}$ is a direct consequence of the arguments in parts $\textnormal{(a)}$ and $\textnormal{(b)}$, which completes the proof of the theorem.
\end{proof}
\begin{remark}
Note that the surjectivity of the maps $f_{n}$ of the time varying map $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ in Theorem \ref{expanding} is essential. Indeed, let $X=\{1\}\cup[2,+\infty)$ be a complete metric space endowed with the standard metric from $\mathbb{R}$, and consider the function $f:X\to X$, $f(x)=2x$ for all $x\in X$. Then $f$ is uniformly expanding but not surjective, and the time varying map $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ on $X$ with $f_{n}=f$ for all $n\in\mathbb{N}$ does not possess the shadowing property; for more details see \cite[Example 2.1]{VGVG}.
\end{remark}
\begin{remark}
Memarbashi and Rasuli (\cite[Theorem 2.8]{MMMMMHHH}) show that each uniformly expanding time varying map satisfies the h-shadowing property. Hence, by Theorem \ref{theorem22}, each uniformly expanding time varying map on a compact metric space has the s-limit shadowing property and, under some conditions, the limit shadowing property. Nevertheless, for Theorem \ref{expanding} we gave a different proof and used it to obtain the exponential limit shadowing property; this proof may also be useful for future studies.
\end{remark}
\begin{example}\label{example2}
Let $f_{A}:\mathbb{T}^{d}\to\mathbb{T}^{d}$ be the linear endomorphism of the torus $\mathbb{T}^{d}=\mathbb{R}^{d}/\mathbb{Z}^{d}$ induced by
some matrix $A$ with integer coefficients and determinant different from zero. Assume that all the
eigenvalues $\lambda_{1},\lambda_{2},\ldots,\lambda_{d}$ of $A$ are larger than $1$ in
absolute value. Then, given any $1<\sigma<\inf_{i}|\lambda_{i}|$, there exists an inner product
in $\mathbb{R}^{d}$ relative to which $\|Av\|\geq\sigma \|v\|$ for every $v\in\mathbb{R}^{d}$. This shows that the transformation $f_{A}$ is expanding, see \cite[Example 6.2]{JNSFHG}.
Now, let $\mathcal{A}$ be a non-empty finite set of different matrices enjoying the above conditions. Then, each time varying map $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ with $f_{n}\in\{f_{A}:A\in\mathcal{A}\}$ is uniformly expanding. Hence, by Theorem \ref{expanding}, it has the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties.
\end{example}
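For illustration, the spectral condition of this example is easy to check numerically; the integer matrix below is our own choice:
\begin{verbatim}
import numpy as np

A = np.array([[3, 1],
              [1, 3]])                 # eigenvalues 4 and 2, det(A) = 8

eigvals = np.linalg.eigvals(A)
assert np.linalg.det(A) != 0
assert np.all(np.abs(eigvals) > 1)     # all eigenvalues exceed 1 in modulus
# Any 1 < sigma < 2 then works as an expanding rate for f_A.
\end{verbatim}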
In what follows, we show that any time varying map of a finite set of hyperbolic linear homeomorphisms on a Banach space with the same stable and unstable subspaces has the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties.
\begin{definition}
Let $f:X\to X$ be a linear homeomorphism on a Banach space $X$. Then, $f$ is said to be \emph{hyperbolic} if there exist Banach subspaces $X_{s},X_{u}\subset X$, called stable and unstable subspaces, respectively, and a norm $\|\cdot\|$ on $X$ compatible with the original Banach structure such that
\begin{equation*}
X=X_{s}\oplus X_{u},\quad f(X_{s})=X_{s},\quad f(X_{u})=X_{u},\quad \|f|_{X_{s}}\|<1\ \ \textnormal{and}\ \
\|f^{-1}|_{X_{u}}\|<1.
\end{equation*}
\end{definition}
\begin{theorem}\label{hyperbolic}
Let $X$ be a Banach space, and let $\mathcal{A}$ be a finite set of hyperbolic linear homeomorphisms with the same stable and unstable subspaces. Then, any time varying map $\mathcal{F}=\{f_{n}\}_{n\in\mathbb{N}}$ with $f_{n}\in\mathcal{A}$ has the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties.
\end{theorem}
\begin{proof}
Take $\mathcal{G}=\{f_{n}|_{X_{s}}\}_{n\in\mathbb{N}}$ and
$\mathcal{H}=\{f_{n}|_{X_{u}}\}_{n\in\mathbb{N}}$. Then, obviously $\mathcal{G}$ is uniformly
contracting and $\mathcal{H}$ is uniformly expanding on $X_{s}$ and $X_{u}$, respectively. Hence, by Theorems \ref{contracting} and \ref{expanding}, $\mathcal{G}$ and $\mathcal{H}$ have the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties. On the other hand, $\mathcal{F}=\mathcal{G}\times \mathcal{H}$. So, by \cite[Theorem 3.2]{DTRD} and Theorem \ref{theorem1}, the time varying map $\mathcal{F}$ has the shadowing, limit shadowing, s-limit shadowing and exponential limit shadowing properties.
\end{proof}
\section*{Acknowledgements}
We would like to thank the anonymous reviewer whose remarks improved
the presentation of the paper.
\section{Abstract}
\section{Introduction}
Randomized trials often need large sample sizes to achieve adequate statistical power, and recruiting patients across the spectrum of illness severity can make it easier to find a sufficiently large cohort. However, this recruitment strategy can be a double-edged sword, as patients with mild or severe illness may receive little benefit from treatment~\cite{redelmeier2020approach}. For patients with mild illness, treatment may be superfluous; for patients with severe illness, treatment may be futile. Including these patients in a study may bias a study toward the null, as the study results rely on a subset of patients with illness severity in the middle range.
We present a simple approach that identifies the single contiguous range of illness severity where the estimated treatment benefit is maximized. We consider this range to be the ``sweet spot'' for treatment. We further present methods to compute a bias-corrected estimate of the conditional average treatment effect (CATE), and to control type I error. Because we identify a single sweet spot and compute a $p$-value, we believe our method to be straightforward to interpret and actionable: results from our method can inform future clinical trials and help clinicians make personalized treatment recommendations.
As a running example to illustrate our method, we use the AQUAMAT trial~\cite{dondorp2010artesunate}, which compares artesunate to quinine in the treatment of severe falciparum malaria in African children. This randomized trial studied $\num{5488}$ children with severe malaria, with primary outcome in-hospital mortality. This study was conducted between Oct 3, 2005, and July 14, 2010. Half of the patients were randomized to receive artesunate, and half to receive quinine. The trial found that artesunate substantially reduces mortality in African children with severe malaria. The patients in this study were diverse across $45$ measured covariates including age, sex, complications on admission, vitals, and labs (for a full description of covariates, we refer to \cite{dondorp2010artesunate}). This diversity makes it more likely that some patients would do well or poorly regardless of care, though it is not obvious how to identify them: a Mantel-Haenszel subgroup analysis showed no evidence of any differences in outcomes between subgroups. However, the reanalysis of this trial in \cite{watson2020graphing} suggests that there may be treatment effect heterogeneity along the axis of illness severity.
In Figure \ref{fig:intro}, we visualize the smoothed treatment effect estimate as a function of illness severity for the patients in this trial. From this image, it is tempting to determine that there is a range of illness severity where patients receive more benefit from treatment; acting on this determination can be dangerous, however, as the apparent sweet spot could be due to chance alone. Our statistical framework protects against this by finding sweet spots and judging their significance.
\begin{figure}[H]
\centering
\includegraphics[width=.6\linewidth]{images/intro_image_binary_AQUAMAT.pdf}
\caption{Smoothed treatment effect as a function of illness severity on the AQUAMAT randomized trial. By visual inspection, it seems that patients with illness severity in the shaded range seem to benefit more from treatment.}
\label{fig:intro}
\end{figure}
\section{Related work}
There has been recent interest in developing methods to estimate heterogeneous treatment effects in randomized trials \cite{redelmeier2020approach, zhao2013effectively, athey2016recursive, athey2019generalized, kunzel2019metalearners, watson2020machine, watson2020graphing}.
Athey and Imbens~\cite{athey2016recursive} model treatment effect as a function of patient covariates: the causal tree estimator is a tree that partitions the covariate space into subsets where patients have lower-than or higher-than-average treatment effects, and allows valid inference for causal effects in randomized trials. Causal trees can be combined in an ensemble to form a causal forest~\cite{athey2019generalized} which is more flexible, though harder to interpret. Instead of modeling treatment effect, Watson and Holmes~\cite{watson2020machine} identify the presence of treatment effect heterogeneity with control of type I error. This method uses a statistical test to compare treated and untreated outcomes among a subgroup of patients predicted to benefit from treatment. This is repeated many times on different subsets of the data, and the tests are summarized into a single $p$-value that accounts for multiple hypothesis testing. As in \cite{athey2016recursive}, this method may uncover treatment effect heterogeneity even when the relationship between covariates and the outcome is complex.
However, heterogeneous treatment effects are often small and difficult to identify, and methods which are very general can suffer from low power. Rather than search the full covariate space, the methods in Redelmeier and Tibshirani~\cite{redelmeier2020approach}, Watson and Holmes~\cite{watson2020graphing} and Kent et al~\cite{kent2010assessing} look directly at the relationship between a precomputed measure of illness severity and treatment effect. The method in \cite{redelmeier2020approach} orders patients by increasing illness severity, computes the cumulative treatment effect, and compares the goodness of fit of a line and a Gompertz CDF -- there is no heterogeneity when the cumulative treatment effect is linear. The method in \cite{watson2020graphing} models individual treatment effect as a function of illness severity using a localised reweighting kernel. Finally, the method in~\cite{kent2010assessing} stratifies patients by risk, and then estimates the treatment effect separately on each stratum. None of these methods quantify type I error or statistical power.
\section{Our proposed method}
Our method exploits any existing relationship between illness severity and treatment effect, and searches for the contiguous range of illness severity where the estimated treatment benefit is maximized. We further compute a bias-corrected estimate of the conditional average treatment effect (CATE) in the sweet spot, and compute a $p$-value.
\begin{algorithm}
\caption{Finding and assessing a sweet spot on clinical trial data\\for a randomized trial design with a control$:$treated ratio of $k$:$1$}
\begin{enumerate}
\item Compute a predilection score for each patient that indicates illness severity.
\item Create sets of patients with similar scores, consisting of $k$ controls and one treated patient. On each matched set, compute the treatment effect: the difference between the treated and control outcomes, oriented so that benefit from treatment is positive (for a harmful outcome such as mortality, the average control outcome minus the treated outcome).
\item Perform an exhaustive search to identify the sweet spot --- the range of illness severity scores where the treatment effect is maximal.
\item Test the null hypothesis that there is no sweet spot related to illness severity with a permutation test.
\item Debias the estimate of the treatment effect inside and outside the sweet spot using the parametric bootstrap.
\end{enumerate}
\label{alg:overview}
\end{algorithm}
\subsection{Computing predilection scores}
Illness severity is characterized by prognosis at baseline: we refer to this as a \textit{predilection score}, as it represents the patient's baseline predilection to the outcome. Predilection scores are computed from a model trained to predict the outcome from the baseline covariates, and they may take on any real values. To model continuous or binary outcomes, we recommend linear or logistic regression.
There are two important considerations when fitting the predilection score model. First, the model must be trained only on the control patients, as the intervention may have altered the natural history of the treated patients. Second, prevalidation must be used to avoid overfitting to the controls~\cite{tibshirani2002pre, abadie2018endogenous}; prevalidation ensures that every patient's prediction comes from a model trained solely on other patients. To do $k$-fold prevalidation, partition the controls evenly into $k$ sets, train a predilection score model on $k-1$ sets and use this model to compute scores on the remaining set. This is repeated so that every set is held out, and as a result, no patient is used to train the model that ultimately computes their predilection score. We illustrate this on a small example in Table~\ref{table:preval}. We thank Lu Tian, Trevor Hastie and Rocky Aikens for bringing this to our attention, and we further motivate the need for prevalidation in Section~\ref{section:preval}.
We note that it may not be necessary to train a new predilection score model when there already exists a model trained on an external dataset of patients who received the ``control'' treatment.
\begin{table}[H]
\begin{center}
\begin{tabular}{ l | l }
train & predict\\
\hline
controls 3--10 & controls 1, 2 \\
controls 1, 2, 5--10 & controls 3, 4 \\
controls 1--4, 7--10 & controls 5, 6 \\
controls 1--6, 9, 10 & controls 7, 8 \\
controls 1--8 & controls 9, 10 \\
\end{tabular}
\caption{An example of five-fold prevalidation on $10$ controls. More generally, when doing $k$-fold prevalidation, we use $k$ models, each trained on $\frac{(k-1)n}{k}$ controls and used to compute the score on the remaining $\frac{n}{k}$ controls.}
\label{table:preval}
\end{center}
\end{table}
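A minimal sketch of prevalidated predilection scores, assuming a logistic model fit with scikit-learn and scores reported on the log-odds scale (both our own choices, not prescribed by the text):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def prevalidated_scores(X_ctrl, y_ctrl, k=10):
    # Each control's score comes from a model trained on the other k-1 folds
    scores = np.empty(len(y_ctrl))
    kf = KFold(n_splits=k, shuffle=True, random_state=0)
    for train, test in kf.split(X_ctrl):
        model = LogisticRegression(max_iter=1000)
        model.fit(X_ctrl[train], y_ctrl[train])
        scores[test] = model.decision_function(X_ctrl[test])
    return scores

def treated_scores(X_ctrl, y_ctrl, X_trt):
    # Treated patients are scored by a model fit on all controls
    model = LogisticRegression(max_iter=1000).fit(X_ctrl, y_ctrl)
    return model.decision_function(X_trt)
\end{verbatim}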
\begin{example*}
To compute predilection scores on the AQUAMAT data, we choose logistic regression: our outcome is in-hospital mortality. We begin with $10$-fold prevalidation to compute scores for the $\num{2743}$ control patients in the trial, and then we fit a model on all $\num{2743}$ controls and compute the predilection scores of the treated patients. The predilection scores have moderate goodness-of-fit (AUROC = $0.82$, AUPRC = $0.78$), and we report the odds ratios in Figure~\ref{fig:logreg}. We also illustrate the distribution of scores for the treated and control patients.
\end{example*}
\begin{figure}[H]
\begin{minipage}[b]{0.5\linewidth}
\centering
\begin{tabular}{ l r }
& odds ratio \\
\hline
intercept& $0.080$\\
coma& $4.765$\\
shock & $1.563$\\
convulsions & $1.371$\\
base deficit & $0.986$\\
BUN & $1.016$\\
hematocrit & $0.995$\\
respiratory rate & $1.013$\\
bicarbonate & $0.874$\\
hypoglycaemia & $2.410$\\
\hline
\end{tabular}
\par\vspace{0pt}
\end{minipage}%
\begin{minipage}[b]{0.48\linewidth}
\centering
\includegraphics[width=\linewidth]{images/predilection_score_comparison_binary_AQUAMAT.pdf}
\par\vspace{0pt}
\end{minipage}
\caption{Odds ratios for the predilection score model, and the distributions of predilection scores for treated and control patients. An odds ratio above $1$ indicates increased risk of death.}
\label{fig:logreg}
\end{figure}
\subsection{Estimating treatment effects}
\label{section:treatmenteffects}
We now estimate treatment effect as a function of predilection score. For a trial design with a control$:$treated ratio of $k$:$1$, we use optimal matching~\cite{hansen2019package} to form groups of $k$ controls and one treated patient with similar predilection scores. Each group's predilection score is their average predilection score, and for binary or continuous outcomes, their conditional average treatment effect (CATE) is the mean difference between the treated and control outcomes within the group.
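For a $1$:$1$ design with equal group sizes, pairing the sorted treated scores with the sorted control scores minimizes the total within-pair score discrepancy and can stand in for optimal matching; the sketch below uses this shortcut (our simplification) and orients each pair's effect as control minus treated outcome, as in Table~\ref{table:examplescores} below:
\begin{verbatim}
import numpy as np

def match_pairs(score_trt, y_trt, score_ctrl, y_ctrl):
    # Pair sorted treated with sorted controls (stand-in for optimal matching)
    it, ic = np.argsort(score_trt), np.argsort(score_ctrl)
    mean_score = (score_trt[it] + score_ctrl[ic]) / 2
    cate = y_ctrl[ic] - y_trt[it]   # control minus treated: benefit is positive
    order = np.argsort(mean_score)  # order pairs by increasing severity
    return mean_score[order], cate[order]
\end{verbatim}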
\begin{example*}
On the AQUAMAT data, we have a control$:$treated ratio of $1$:$1$, and we form $\num{2743}$ sets containing one control and one treated patient with similar predilection scores. On each matched set, we compute the treatment effect as the difference in in-hospital mortality between the control and treated patient, so that a positive value indicates benefit. For example matched sets and their estimated treatment effects, see Table~\ref{table:examplescores}; for the treatment effect as a function of predilection score, see Figure~\ref{fig:matched_triplets}.
\end{example*}
\begin{table}[H]
\centering
\begin{tabular}{ r r | r r | r r }
\multicolumn{2}{c }{predilection score} & \multicolumn{2}{c }{in-hospital mortality} & & \\
\hline
\multicolumn{1}{c}{control} & \multicolumn{1}{c}{treated} & \multicolumn{1}{c}{control} & \multicolumn{1}{c}{treated} & \multicolumn{1}{c}{mean score} & \multicolumn{1}{c}{CATE} \\
\toprule
$-4.21$ & $-4.21$ & $0$ & $1$ & $-4.21$ & $-1$\\
\rule{0pt}{2.6ex}
$-5.34$ & $-5.38$ & $0$ & $0$ & $-5.36$ & $0$\\
\rule{0pt}{2.6ex}
$-1.98$ & $-1.97$ & $1$ & $1$ & $-1.97$ & $0$\\
\rule{0pt}{2.6ex}
$-4.78$ & $-4.78$ & $1$ & $0$ & $-4.78$ & $1$\\
\end{tabular}
\caption{Predilection scores, outcomes and estimated treatment effects for four example matched sets in the AQUAMAT data. A lower predilection score indicates less severe illness.}
\label{table:examplescores}
\end{table}
\begin{figure}[H]
\centering
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{images/matched_triplets_binary_AQUAMAT.pdf}
\end{subfigure}%
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{images/matched_triplets_smooth_binary_AQUAMAT.pdf}
\end{subfigure}
\caption{On the left, treatment effect as a function of illness severity on the AQUAMAT randomized trial. On the right, a smoothing spline fit to the treatment effects suggests a region of predilection scores where patients respond more strongly to treatment.}
\label{fig:matched_triplets}
\end{figure}
\subsection{Finding the sweet spot}
\label{section:sweetspot}
We have defined $\mathbf{t} = \{t_k\}_{k=1}^n$ and $\mathbf{s} = \{s_k\}_{k=1}^n$, the sequences of estimated treatment effects and predilection scores of our matched sets, both ordered by increasing predilection score. That is, $s_1 \leq s_2 \leq \cdots \leq s_n$, and $t_i$ is the treatment effect for the set with predilection score $s_i$.
We would like to identify a contiguous subsequence of $\mathbf{t}$ that (1) is long (to cover as many patients as possible), and (2) has a large average treatment effect. To measure the extent to which any subsequence meets our requirements, we use the length of the sequence (which captures criterion 1) times the difference between the sequence average and the global average (which captures criterion 2). Explicitly, for the subsequence of $\mathbf{t}$ consisting of $\{t_i, t_{i+1}, \dots, t_j\}$, compute:
\[
Z(i, j) = (j-i+1) \left( \text{mean}\left(\{t_k\}_{k=i}^j\right) - \text{mean}\left(\{t_k\}_{k=1}^n\right) \right).
\]
The values of $i$ and $j$ that maximize $Z$ indicate the location of the sweet spot, and they are found by an exhaustive search over $i \in [1, n-1],\, j \in [i+1, n]$. We write these values as $(\hat{i}, \hat{j}) = \arg \max_{i, j} Z(i, j)$; the sweet spot includes patients with predilection scores between $s_{\hat{i}}$ and $s_{\hat{j}}$.
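A direct sketch of the exhaustive search, using cumulative sums so that each $Z(i,j)$ costs $O(1)$ (see also the appendix for a vectorized version):
\begin{verbatim}
import numpy as np

def find_sweet_spot(t):
    # Exhaustive O(n^2) search for the 1-based (i, j) maximizing Z(i, j)
    t = np.asarray(t, dtype=float)
    n = len(t)
    gbar = t.mean()
    cum = np.concatenate([[0.0], np.cumsum(t)])  # cum[w] = t_1 + ... + t_w
    best = (-np.inf, 1, 2)
    for i in range(1, n):
        for j in range(i + 1, n + 1):
            z = (cum[j] - cum[i - 1]) - (j - i + 1) * gbar
            if z > best[0]:
                best = (z, i, j)
    return best  # (Z_hat, i_hat, j_hat)
\end{verbatim}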
\begin{example*} On the AQUAMAT data, the maximum value of $Z$ is $\widehat{Z} = 47.47$, with sweet spot $(\hat{i}, \hat{j}) = (2153, 2631)$ corresponding to patients with predilection scores between $-1.70$ and $-0.20$. The mean treatment effect in this range is $0.12$; outside this range, it is $0.00$. This is illustrated in Figure~\ref{fig:sweet_spot_1}.
\end{example*}
\begin{figure}[H]
\centering
\includegraphics[width=.8\linewidth]{images/sweet_spot_found_bothplots_binary_AQUAMAT.pdf}
\caption{The sweet spot identified on the AQUAMAT data. On the left, we highlight the range of predilection scores in the sweet spot. On the right, we show the mean treatment effect estimate inside the sweet spot. For illustration, we include a smoothing spline fit to the treatment effect estimate.}
\label{fig:sweet_spot_1}
\end{figure}
\subsection{Calibrating}
\label{section:pvalue}
We wish to test the null hypothesis that there is no sweet spot related to illness severity. We ask: if there were \textit{no sweet spot}, how often would we observe a value at least as large as $\widehat{Z} = Z(\hat{i}, \hat{j})$?
Suppose there is no sweet spot, and the true treatment benefit is the same across the entire range of illness severity. In this case, the ordering of the treatment effect sequence $\mathbf{t}$ does not matter: with the same probability, we could have observed any permutation of $\mathbf{t}$, and the maximum value of $Z$ on the permuted sequence would be similar to $\widehat{Z}$. However, if there is a sweet spot, the ordering of $\mathbf{t}$ is meaningful, and $\widehat{Z}$ would be larger than most of the maximum values of $Z$ on the permuted sequences.
We test our null hypothesis with a permutation test: we repeatedly shuffle the values of $\mathbf{t}$ and find the maximum value of $Z$ on the permuted sequence. The $p$-value is the relative frequency that the maximum value on the permuted sequence is at least as large as $\widehat{Z}$.
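A sketch of the permutation test, reusing find_sweet_spot from the sketch above; reporting the raw exceedance frequency matches the example below, though an add-one correction is sometimes preferred:
\begin{verbatim}
import numpy as np

def sweet_spot_pvalue(t, n_perm=1000, seed=0):
    # Shuffle t, recompute the maximal Z, and compare with the observed Z_hat
    rng = np.random.default_rng(seed)
    z_obs = find_sweet_spot(t)[0]
    z_perm = np.array([find_sweet_spot(rng.permutation(t))[0]
                       for _ in range(n_perm)])
    return np.mean(z_perm >= z_obs)
\end{verbatim}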
\begin{example*} We do a permutation test on the AQUAMAT data. In Section~\ref{section:treatmenteffects}, we computed the ordered sequence of treatment effects, and in Section~\ref{section:sweetspot} we chose the sweet spot corresponding to $\widehat{Z} = 47.47$. For \num{1000} iterations, we permuted the sequence of treatment effects and computed the maximum value of $Z$. On one of those permutations, we observed a value of $Z$ that was larger than $47.47$, which corresponds to $p$-value $0.001$. We visualize our permutation test in Figure~\ref{fig:pvalue}.
\end{example*}
\begin{figure}[H]
\centering
\includegraphics[width=.5\linewidth]{images/pvalue_histogram_binary_AQUAMAT.pdf}
\caption{Sweet spot permutation test on the AQUAMAT data.
}
\label{fig:pvalue}
\end{figure}
\subsection{Debiasing}
Finally, we wish to estimate the treatment effect in the sweet spot. The naive choice is the mean treatment effect in the sweet spot: $\hat{\tau} = \text{mean}(\{t_{\hat{i}}, \dots, t_{\hat{j}}\})$. However, this may be optimistic, as searching for the sweet spot may bias the treatment effect. We debias our estimate with the parametric bootstrap~\cite{tibshirani1993introduction}, using our sweet spot as a model to simulate data.
Having computed treatment effects $\mathbf{t} = \{t_k\}_{k=1}^n$ and a sweet spot location $[\hat{i}, \hat{j}]$, we generate a new sequence of treatment effects $\mathbf{t}^*$. The values inside the sweet spot $\{t^*_k\}_{k=\hat{i}}^{\hat{j}}$ are sampled with replacement from $\{t_k\}_{k=\hat{i}}^{\hat{j}}$, the values inside the sweet spot on the original sequence. Likewise, the values outside the sweet spot, $\{t^*_k\}_{k=1}^{\hat{i}-1} \cup \{t^*_k\}_{k=\hat{j}+1}^{n}$, are sampled with replacement from $\{t_k\}_{k=1}^{\hat{i}-1} \cup \{t_k\}_{k=\hat{j}+1}^{n}$. We repeatedly simulate data using this method, find its sweet spot, and estimate the CATE in the sweet spot. Our bootstrapped bias estimate is the difference between the mean bootstrapped CATE estimate, $\hat{\tau}_\text{boot}$, and $\hat{\tau}$. To bias-correct $\hat{\tau}$, subtract the bias:
\begin{align*}
\hat{\tau}_{\text{corrected}} &= \hat{\tau}- \widehat{\text{bias}}\\
&= \hat{\tau}- \left(\hat{\tau}_\text{boot} - \hat{\tau}\right)\\
&= 2\, \hat{\tau}- \hat{\tau}_\text{boot}.
\end{align*}
In every run of the bias-correction bootstrap, we also obtain an estimate of the location of the sweet spot. These estimates form an empirical distribution around the values of $\hat{i}$ and $\hat{j}$ in the original estimation of the sweet spot.
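A sketch of the de-biasing parametric bootstrap (again reusing find_sweet_spot; the number of replicates is our own choice):
\begin{verbatim}
import numpy as np

def debiased_cate(t, i_hat, j_hat, n_boot=1000, seed=0):
    # Resample inside and outside the sweet spot separately, re-run the
    # search, and correct: tau_corrected = 2 * tau_hat - mean(tau_boot)
    rng = np.random.default_rng(seed)
    t = np.asarray(t, dtype=float)
    inside = t[i_hat - 1:j_hat]           # 1-based range [i_hat, j_hat]
    outside = np.concatenate([t[:i_hat - 1], t[j_hat:]])
    tau_hat = inside.mean()
    taus = []
    for _ in range(n_boot):
        t_star = np.concatenate([rng.choice(outside, i_hat - 1),
                                 rng.choice(inside, j_hat - i_hat + 1),
                                 rng.choice(outside, len(t) - j_hat)])
        _, i_b, j_b = find_sweet_spot(t_star)
        taus.append(t_star[i_b - 1:j_b].mean())
    return 2 * tau_hat - np.mean(taus)
\end{verbatim}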
\begin{example*}
On the AQUAMAT data, we found $\hat{\tau} = 0.123$. Our bootstrap estimate was $\hat{\tau}_\text{boot} = 0.126$, so we overestimated by $0.003$ on average. We adjust our estimate down to $\hat{\tau}_\text{corrected} = 0.120$ (Figure~\ref{fig:biashistogram}).
\end{example*}
\begin{figure}[H]
\centering
\begin{subfigure}{.5\linewidth}
\includegraphics[width=.9\linewidth]{images/bias_histogram_binary_AQUAMAT.pdf}
\end{subfigure}%
\begin{subfigure}{.5\linewidth}
\includegraphics[width=.9\linewidth]{images/start_end_binary_AQUAMAT.pdf}
\end{subfigure}
\caption{Visualizations from the de-biasing parametric bootstrap on the AQUAMAT data. On the left, we visualize bias: we overestimate the CATE by $0.003$. On the right, we visualize uncertainty around the location of the sweet spot.}
\label{fig:biashistogram}
\end{figure}
\subsection{More results on real data}
We summarise our results on the AQUAMAT data in a single image (Figure~\ref{fig:final}). Our results include the original and debiased estimates of the CATE inside and outside the sweet spot, together with our bootstrapped distributions of the start and end index of the sweet spot. We also visualize the smoothed treatment effect as a function of predilection score.
\begin{figure}[H]
\centering
\includegraphics[width=.6\linewidth]{images/final_binary_AQUAMAT.pdf}
\caption{Sweet spot found on the AQUAMAT data.}
\label{fig:final}
\end{figure}
On the AQUAMAT data, we compare our method to the Gompertz fit in \cite{redelmeier2020approach}, the causal forest in \cite{athey2019generalized}, and the reference class approach in \cite{watson2020graphing}. To compute a $p$-value for the Gompertz method, we use the bootstrap as defined in Section~\ref{section:pvalue}.
\begin{figure}[H]
\centering
\includegraphics[width=.7\linewidth]{images/model_comparison_real_data_binary_AQUAMAT.pdf}
\caption{Comparison of methods on the AQUAMAT data.}
\label{fig:comparison}
\end{figure}
We also illustrate our method on the SEAQUAMAT trial as in \cite{watson2020machine}, which compared quinine to artesunate for the treatment of severe malaria in Asian adults. The superiority of artesunate for severe malaria is now well established~\cite{white2014451}, and in this retrospective analysis, we consider artesunate to be standard of care. We follow the data preparation and experimental setup in \cite{watson2020machine}, and our finding agrees with theirs: we fail to reject the null hypothesis that there is no range of illness severity for which quinine is superior.
\begin{figure}[H]
\centering
\includegraphics[width=.7\linewidth]{images/SEAQUAMAT.pdf}
\caption{No sweet spot found on the SEAQUAMAT trial data.}
\label{fig:seaquamat}
\end{figure}
\section{Simulation studies}
\subsection{Type I error}
\label{section:type1error}
To compute the type I error of our method, we simulate randomized trial data with no sweet spot. In each of our $\num{1000}$ simulations, covariates for $400$ patients are drawn from a standard multivariate normal distribution in $10$ dimensions, and patients are assigned to receive treatment with probability $0.5$. The probability of a negative outcome is determined by a logistic model with coefficients drawn from a standard multivariate normal distribution in $10$ dimensions and a normally distributed error term. For patients who receive treatment, this probability is lowered by $0.05$ (the treatment effect). On our simulated data, we find our method to be well-calibrated, and this is illustrated in Figure~\ref{fig:type1}.
\begin{figure}[H]
\centering
\includegraphics[width=.5\linewidth]{images/type1error.pdf}
\caption{Type I error on simulated data.}
\label{fig:type1}
\end{figure}
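A minimal sketch of this null data-generating process (the clipping of probabilities to $[0,1]$ is our own safeguard, implicit in the description):
\begin{verbatim}
import numpy as np

def simulate_null_trial(n=400, p=10, effect=0.05, seed=0):
    # Null data: the same treatment effect across the range of illness severity
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))
    beta = rng.standard_normal(p)
    treated = rng.random(n) < 0.5
    prob = 1 / (1 + np.exp(-(X @ beta + rng.standard_normal(n))))
    prob = np.clip(prob - effect * treated, 0, 1)  # uniform benefit if treated
    y = (rng.random(n) < prob).astype(int)         # 1 = negative outcome
    return X, treated, y
\end{verbatim}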
\subsection{Power}
\label{section:power}
To compute the power of our method, we again simulate randomized trial data as in Section~\ref{section:type1error}, but now we add an extra treatment effect for patients in the middle range of illness severity. In Figure~\ref{fig:power}, we examine power along two axes: the first is the extra treatment effect in the sweet spot, and the second is the size of the sweet spot. We compare our method to a causal forest~\cite{athey2019generalized} and to the method in \cite{watson2020machine}, ``ML analysis plans for randomized controlled trials''. The latter computes a $p$-value and is directly comparable to our method. To compare our method to the former, we do the following: on our simulated data, we fit a causal forest and predict outcomes, and then we perform a one-sided $t$-test of whether the mean predicted outcome in the sweet spot is larger than that outside the sweet spot.
In this setting, our method has the highest power: this is expected, as we simulated data that matches our method's assumptions. This comparison is included to illustrate that all methods struggle when the sweet spot is small (covering only about $10\%$ of the data), and when there is little extra benefit in the sweet spot ($ \leq 20\%$).
\begin{figure}[H]
\centering
\includegraphics[width=.8 \linewidth]{images/sweet_spot_power_dots.pdf}
\caption{Power as a function of sweet spot size and magnitude on simulated data. ``ML $p$-value'' refers to the method in \cite{watson2020machine}.}
\label{fig:power}
\end{figure}
We repeat this experiment, this time defining the sweet spot location as a region defined by three of the ten covariates. The methods in \cite{athey2019generalized} and \cite{watson2020machine} are tree based, and should be able to discover the relevant covariates. In all methods, the power is low compared to what the sweet spot method achieves in Figure~\ref{fig:power}, and the method in \cite{watson2020machine} has superior performance when the true sweet spot is large and strong. A sweet spot search over the full covariate space can only have large power when there is a large effect.
\begin{figure}[H]
\centering
\includegraphics[width=.8 \linewidth]{images/sweet_spot_power_dots_sparse_2.pdf}
\caption{Power as a function of sweet spot size and magnitude on simulated data, where the sweet spot is defined by three of the ten covariates. ``ML $p$-value'' refers to the method in \cite{watson2020machine}.}
\label{fig:power2}
\end{figure}
\section{Further notes and discussion}
\subsection{Prevalidation}
\label{section:preval}
To compute treatment effects, we match a treated patient with a control who has similar illness severity, and we compute the difference in their outcomes. However, we do not know the \textit{true} illness severity for any patient; rather, we fit a predilection score model, and use predilection score as a proxy for illness severity. So, though we think of our matched sets as sets of patients with similar illness severity, they are more precisely sets of patients with similar \textit{predilection scores}.
Though subtle, this distinction is important. Without prevalidation, we may overfit our predilection score model to the controls. If we imagine overfitting to the extreme, a large predilection score for a control indicates only that the model knows the control had a negative outcome; a small score likewise indicates a positive outcome. Our model has no such knowledge of the future for the treated patients.
Overfitting will have downstream effects: we use the predilection scores to pair treated and control patients. In the pairs with larger predilection scores, we overestimate the treatment effect: the control is more likely to have had a negative outcome than the treated patient. Similarly, we underestimate the treatment effect in the pairs with smaller predilection scores, as the control is more likely to have had a positive outcome.
As a result, without prevalidating the predilection score model, we inject treatment effect heterogeneity into our data, which causes us to lose control of type I error: we are more likely to find a sweet spot when there is none. This is discussed in detail in \cite{abadie2018endogenous}.
We illustrate this in Figure~\ref{fig:preval} on simulated data, with $n=800$ trial participants and $p=10$ and $p=100$ covariates. In scenarios where we are more prone to overfitting (in this example, when $p=100$), our problem becomes more pronounced.
\begin{figure}[H]
\centering
\includegraphics[width=.8 \linewidth]{images/prevalidation_both_2.pdf}
\caption{Type I error on simulated data, with and without prevalidation. We simulate data as in Section~\ref{section:type1error}.} \label{fig:preval}
\end{figure}
\subsection{Calibration}
Without a measure of calibration, it can be tempting to erroneously find a sweet spot. In Figure~\ref{fig:calibrationtrick}, we show a sample drawn from a data generating process with no sweet spot; by visual inspection, however, it is tempting to identify a sweet spot over the range of the $70^{\text{th}} - 84^{\text{th}}$ predilection scores. Our permutation test $p$-value is $0.160$; it correctly finds no sweet spot.
\begin{figure}[H]
\centering
\includegraphics[width=.6\linewidth]{images/calibration_4.pdf}
\caption{Simulated data with no sweet spot, though it is tempting to find a sweet spot by visual inspection.}
\label{fig:calibrationtrick}
\end{figure}
\section{Discussion}
The idea behind our method is simple: identify the range of illness severity where treatment benefit is maximized, estimate the benefit inside and outside this range, and test the null hypothesis that there is no treatment effect heterogeneity. There are existing methods for modeling treatment effect heterogeneity in randomized trials: some model the treatment effect as a function of covariates, and others identify the presence of treatment effect heterogeneity. Our method is unique as it does both, and our results are straightforward to visualize and interpret. Further, our method has a natural extension to multi-arm trials: treatments can be compared pairwise (as we have illustrated here with a treated and control group), or treatments may be compared to the group of all other treatments.
When the trial has a survival endpoint where patients may be right-censored (as in \cite{redelmeier2020approach}), estimating the treatment effect is complicated by censoring. We do not know the true outcome for all patients, and it is less obvious how to directly compare outcomes within a matched pair. This is an open area for future research.
In this paper, we exploited the relationship between treatment effect and illness severity. When the treatment effect is independent of illness severity (or it cannot be expressed by our predilection score model), our method will not find the sweet spot. In principle, another measure could be used in place of illness severity, though this is advisable only when there is a natural choice for a particular dataset. Simulations in Section~\ref{section:power} show that identifying small or weak sweet spots remains an open challenge.
\subsection{Acknowledgements}
The authors thank Lu Tian, Trevor Hastie, Alejandro Schuler, Rocky Aikens, and Stephen Pfohl for helpful discussions, and James Watson and Chris Holmes for data access.
\begin{appendices}
\section{Finding the sweet spot}
We can speed up the computation of $Z(i, j) = \sum_{k=i}^j t_k - \frac{j-i+1}{n} \sum_{k=1}^n t_k$ by vectorizing. For each window length $k$, we can simultaneously compute the vector of values of $Z(i, j)$ that satisfy $j-i+1=k$. First, we compute the cumulative sums of $\mathbf{t}$, denoted $\mathbf{t}^*$, with $t^*_0 = 0$ and $t_w^*= \sum_{m=1}^w t_m$. We then compute: \[ \mathbf{Z}(k) = \{ t^*_{k}, t^*_{k+1}, \dots, t^*_{n} \} - \{ t^*_0, t^*_1, \dots, t^*_{n-k} \} - k \frac{t^*_n}{n}. \]
For very large studies, we can conserve computing time by choosing to consider only e.g. ranges where $j-i$ is even. We may also restrict to ranges within a minimum and maximum size: for example, further study of the treatment may be practical only if the sweet spot covers at least $10\%$ of the patients in the trial.
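In code, the vectorized computation may be sketched as follows; it returns, for every window length $k$, the vector of all $Z(i,j)$ with $j-i+1=k$:
\begin{verbatim}
import numpy as np

def all_Z_values(t):
    # For each window length k, compute every Z(i, j) with j - i + 1 = k
    t = np.asarray(t, dtype=float)
    n = len(t)
    cum = np.concatenate([[0.0], np.cumsum(t)])  # cum[w] = t*_w, t*_0 = 0
    return {k: cum[k:] - cum[:n - k + 1] - k * cum[-1] / n
            for k in range(2, n + 1)}
\end{verbatim}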
\end{appendices}
\section{Introduction}
We establish for quaternions an analog of the trace formula obtained by Connes
in \cite{Co} for a commutative local field $K$. This formula has the form
$\Tr(\widetilde{P_\Lambda}P_\Lambda\,U_f) = 2\log(\Lambda)f(1) + W(f) + o(1)$
(for $\Lambda\to\infty$), where $f$ is a test-function on $K^\times$, $U_f$ is
the operator of multiplicative convolution with $f$, $P_\Lambda$ and
$\widetilde{P_\Lambda}$ are cut-off projections (precise definitions will be
given later), all acting on the Hilbert space of square-integrable functions on
$K$. The constant term $W(f)$ was shown by Connes to be exactly the term arising
in Weil's ``explicit formulae'' \cite{We1} of number theory.
We have shown in this abelian local case (see \cite{Bu1}, \cite{Bu2}, and the
related papers \cite{Bu3} and \cite{Bu4}) that the Weil term $W(f)$ can be
written as $-H(f)(1)$ for a certain dilation-invariant operator $H$. We study
this operator in the non-commutative context of quaternions and then derive the
(analog of) Connes's asymptotic formula. The proof would go through (with some
simplifications of course) equally well in the abelian case.
We first give some elementary lemmas of independent interest about self-adjoint
operators. We then study in multiplicative terms the additive Fourier Transform,
and this immediately leads to the definition of certain ``Quaternionic Tate
Gamma'' functions and to the analog of Tate's local functional equations
(\cite{Ta}, \cite{We2}). This is of course very much related to the
generalization to $GL(N)$ of Tate's Thesis in the work \cite{GJ} of
Godement-Jacquet (see \cite{Ja1}, \cite{Ja2} for reviews and references to
further works by other authors), where certain ``$\gamma(s,\pi,\psi)$''
functions, local $L$- and $\epsilon$-factors and associated functional equations
are studied. Also relevant is the classic monograph by Stein and Weiss \cite{St}
on harmonic analysis in euclidean spaces. In this paper we will follow a
completely explicit and accordingly elementary approach.
We introduce the ``conductor operator'' $H= \log(|x|) + \log(|y|)$ and show how
it gives an operator theoretic interpretation to the logarithmic derivatives of
the Gamma functions (which are involved in explicit formulae.) It is then a
simple matter to compute Connes's trace, and to obtain the asymptotic formula
$$\Tr(\widetilde{P_\Lambda}P_\Lambda\,U_f) = 2\log(\Lambda)f(1) - H(f)(1) +
o(1)$$ in a form directly involving our operator $H$. Further work leads to a
``Weil-like'' formulation for the constant term $H(f)(1)$, if so desired.
\section{Closed invariant operators}
It is well known that any bounded operator on $L^2(\RR,dx)$ which commutes with
translations is diagonalized by the additive Fourier transform (see for example
the Stein-Weiss monograph \cite{St}.) We need a generalization which applies to
(possibly) unbounded operators on Hilbert spaces of the form $L^2(G,dx)$ where
$G$ is a topological group. Various powerful statements are easily found in the
standard references on Hilbert spaces, usually in the language of spectral
representations of abelian von Neumann algebras. For lack of a reference
precisely suited to the exact formulation we will need, we provide here some
simple lemmas with their proofs.
\begin{definition} Let $L$ be a Hilbert space. A (possibly unbounded) operator
$M$ on $L$ with domain $D$ is said to commute with the bounded operator $A$ if
$$\forall v\in L: v\in D\Rightarrow \Big(\ A(v)\in D\ \hbox{and}\ M(A(v)) =
A(M(v))\ \Big)$$
\end{definition}
\begin{thm}\label{L1}
Let $L$ be a Hilbert space and $G$ a (not necessarily abelian) group of unitary
operators on $L$. Let $\cal A$ be the von Neumann algebra of bounded operators
commuting with $G$. Let $M$ be a (possibly unbounded) operator on $L$, with
dense domain $D$. If the three following conditions are satisfied\\ (1) $\cal
A$ is abelian\\ (2) $(M,D)$ is symmetric\\ (3) $(M,D)$ commutes with the
elements of $G$\\ then $(M,D)$ has a unique self-adjoint extension. This
extension commutes with the operators in $\cal A$.
\end{thm}
\begin{proof} We first replace $(M,D)$ by its double-adjoint so that we can
assume that $(M,D)$ is closed (it is easy to check that conditions (2) and (3)
remain valid). The problem is to show that it is self-adjoint. Let $K$ be the
range of the operator $M + i$. It is a closed subspace of $L$ (as $\|(M +
i)(\varphi)\|^2 = \|M(\varphi)\|^2 + \|\varphi\|^2$, and $M$ is closed). Let $R$
be the bounded operator onto $D$ which is orthogonal projection onto $K$
followed by the inverse of $M + i$. One checks easily that $R$ belongs to
$\cal A$, hence commutes with its adjoint $R^*$ which will also belong to $\cal
A$. Any vector $\psi$ in the kernel of $R$ is then in the kernel of $R^*$ (as
$<R^*\psi|R^*\psi>\ =\ <\psi|R\,R^*\psi>\ =\ 0$). So $\psi$ belongs to the
orthogonal complement to the range of $R$, that is $\psi = 0$ as the range of
$R$ is $D$. So $K = L$ and in the same manner $(M - i)(D) = L$. By the basic
criterion for self-adjointness (see \cite{Re}), $M$ is self-adjoint. Let
$A\in\cal A$. It commutes with the resolvent $R$ hence leaves stable its range
$D$. On $D$ one has $RA(M+i) = AR(M+i) = A = R(M+i)A$ hence $A(M+i) = (M+i)A$
so $A$ commutes with $M$.
\end{proof}
For the remainder of this section we let $G$ be a locally compact, Hausdorff,
topological \emph{abelian} group and $\widehat{G}$ its dual group. We refer to
\cite{Ru} for the basics of harmonic analysis on $G$. In particular we have a
Haar measure $dx$ (unique up to a multiplicative constant) and a Hilbert space
$L = L^2(G, dx)$. We also have a dual Haar measure $dy$ on $\widehat{G}$ such
that the Fourier transform $F(\varphi)(y) = \int \varphi(x) \overline{y(x)} dx$
is an isometry of $L$ onto $L^2(\widehat{G}, dy)$. We sometimes identify the
two Hilbert spaces without making explicit the reference to $F$: so when we
write $f(y)\in L$ we really refer to $F^{-1}(f)\in L$ with $f\in
L^2(\widehat{G}, dy)$. No confusion should arise. We will assume that $dy$ is a
$\sigma$-finite measure so that there exists $\psi \in L$ with the property
$\psi(y)\neq 0\ a.e.$\ .
Let $a(y)$ be a measurable function on $\widehat{G}$, not necessarily
bounded. Let $D_a\subset L$ be the domain of square-integrable (equivalence
classes of measurable) functions $\varphi(x)$ on $G$ such that
$a(y)F(\varphi)(y)$ belongs to $L^2(\widehat{G}, dy)$. And let $M_a$ be the
operator with domain $D_a$ acting according to $\varphi\mapsto M_a(\varphi) =
F^{-1}(a\cdot F(\varphi))$. We write $a=b$ if the two functions $a(y)$ and
$b(y)$ are equal almost everywhere on $\widehat{G}$.
\begin{lem} The operator $(M_a, D_a)$ on $L^2(G,dx)$ commutes
with $G$. Furthermore $D_a$ is dense and $(M_a, D_a)$ is a closed operator. If
$(M_b, D_b)$ extends $(M_a,D_a)$, then in fact $a = b$ and $(M_b, D_b) =
(M_a,D_a)$. The adjoint of $(M_a,D_a)$ is $(M_{\overline{a}},D_{\overline{a}})$
(of course $D_{\overline{a}} = D_a$.)
\end{lem}
\begin{proof} We give the proof for completeness. The commutation with
$G$-translations is clear. Then $D_a$ contains (the inverse
Fourier transform of) $\psi(y)\over\sqrt{1 + |a(y)|^2}$ and all its
translates. Hence if $f$ is orthogonal to $D_a$ then the function
$\overline{f(y)}\psi(y)\over\sqrt{1 + |a(y)|^2}$ on $\widehat{G}$ belongs to
$L^1(\widehat{G},dy)$ and has a vanishing ``inverse Fourier transform'', hence
$f = 0$ (almost everywhere). It is also clear using $\psi(y)\over\sqrt{1 +
|a(y)|^2}$ that if $(M_b, D_b)$ extends $(M_a,D_a)$, then $a = b$. Let us
assume that the sequence $\varphi_j$ is such that $\varphi=\mathop{\rm l{.}i{.}m{.}} \varphi_j$
and $\theta=\mathop{\rm l{.}i{.}m{.}} M_a(\varphi_j)$ both exist. Let us pick a pointwise on
$\widehat{G}$ almost everywhere convergent subsequence
$\varphi_{j_k}(y)$. Using Fatou's lemma we deduce that $\varphi$ belongs to
$D_a$. Using Fatou's lemma again we get the vanishing of $\int_{\widehat{G}}
|\theta(y) - a(y) \varphi(y)|^2\,dy$, and this shows that $(M_a, D_a)$ is a
closed operator. Finally let $f(y)$ be in the domain of the adjoint of
$(M_a,D_a)$. There exists then an element $\theta$ of $L$ such that for any
$\varphi \in D_a$ the equality $$\int f(y)\overline{a(y)\varphi(y)}\,dy = \int
\theta(y)\overline{\varphi(y)}\,dy$$ holds. This implies that the two
following functions of $L^1(\widehat{G}, dy)$:
$${f(y)\overline{a(y)\psi(y)}\over\sqrt{1 + |a(y)|^2}}\hbox{\quad
and\quad}{\theta(y)\overline{\psi(y)}\over\sqrt{1 + |a(y)|^2}}$$ have the same
Fourier transform on $G$, hence are equal almost everywhere. So $f \in
D_{\overline{a}}$ and $(M_a)^*(f) = (M_{\overline{a}})(f)$.
\end{proof}
\begin{thm}\label{T1} Let $(M, D)$ be a \emph{closed operator} on $L^2(G,dx)$
commuting with $G$-translations. Then $(M, D) = (M_a, D_a)$ for a (unique)
multiplicator $a$.
\end{thm}
\begin{note} For a bounded $M$ and $G = \RR$, this is proven in the classical
monograph by Stein and Weiss \cite{St}, as a special case of a more general
statement applying in $L^p$ spaces.
\end{note}
\begin{proof}
Let us first assume that $M$ is bounded. We use the (inverse Fourier transform
of the) function $\psi(y)$ and define $a(y)$ to be
${M(\psi)(y)\over\psi(y)}$. Let us consider the dense subspace $D_0$ consisting
of all finite linear combinations of translates of $\psi$ (it is dense by the
argument using unicity of the Fourier transform in $L^1$ that we have used
previously). Then $(M, D_0) \subset (M_a, D_a)$, hence $(M_a, D_a)$ is also an
extension of the closure of $(M, D_0)$. As $M$ is assumed to be bounded this
closure is $(M, L)$. But this means
that $D_a = L$ and that $M = M_a$ (we then note that necessarily $a$ is
essentially bounded).
The next case is when $M$ is assumed to be self-adjoint. Its resolvents $R_1 =
(M - i)^{-1}$ and $R_2 = (M + i)^{-1}$ are bounded and commute with $G$. Hence
they correspond to multiplicators $r_1(y)$ and $r_2(y)$. The kernel of $R_1$ is
orthogonal to the range of $R_2 = R_1^*$ which is all of $D$, so in fact it is
reduced to $\{0\}$. Hence $r_1(y)$ is almost everywhere non-vanishing. Let $f
\in D$ and $g = M(f)$. As $R_1(M(f) - i\,f) = f$ we get $g(y) = {1 +
i\,r_1(y)\over r_1(y)}\cdot f(y)$ and defining $a(y)$ to be ${1 + i\,r_1(y)\over
r_1(y)}$ we see that $(M_a, D_a)$ is an extension of $(M, D)$. Taking the
adjoints we deduce that $(M, D)$ is an extension of
$(M_{\overline{a}},D_{\overline{a}})$. So all three are equal (and $a$ is
real-valued).
For the general case we use the theorem of polar decomposition (see for example
\cite{Re}). There exists a non-negative self-adjoint operator $|M|$ with the
same domain as $M$ and a partial isometry $U$ such that $M = U|M|$. Further
conditions are satisfied which make $|M|$ and $U$ unique: so they also commute
with $G$. It follows from what was proven previously that $(M, D) \subset (M_a,
D_a)$ for an appropriate $a$ (the product of the multiplicators associated to
the self-adjoint $|M|$ and the bounded $U$). The adjoint $(M^*, D^*)$ also has a
dense domain and commutes with $G$, so in the same manner $(M^*, D^*) \subset
(M_b, D_b)$ for an appropriate $b$. The inclusion
$(M_{\overline{a}},D_{\overline{a}}) \subset (M^*, D^*) \subset (M_b, D_b)$
implies $b = \overline{a}$ and $(M_a, D_a) = (M, D)^{**}$. But the
double-adjoint coincides with the closed operator $(M,D)$.
\end{proof}
Let us mention an immediate corollary:
\begin{corollary} A closed symmetric operator on $L^2(G,dx)$ commuting with
$G$ is self-adjoint, and a symmetric operator which has a dense domain and
commutes with $G$ is essentially self-adjoint.
\end{corollary}
\section{Tate's functional equations}
Our first concern will be to introduce numerous notations. Let $\HH$ be the
space of quaternions with $\RR$-basis $\{1, i, j, k\}$ and table of
multiplication $i^2 = j^2 = k^2 = -1,\ ij = k = -ji,\ jk = i = -kj,\ ki = j =
-ik$. A typical quaternion will be denoted $x = x_0 + x_1 i + x_2 j + x_3 k$,
its conjugate $\overline{x} = x_0 - x_1 i - x_2 j - x_3 k$, its real part
$\mathop{\rm Re}(x) = x_0$, its (reduced) norm $n(x) = x\overline{x} = \overline{x}x =
x_0^2 + x_1^2 + x_2^2 + x_3^2$.
$\HH$ can also be considered as a left $\CC$-vector space with basis $\{1,
j\}$. We then write $a = x_0 + x_1 i$ and $b = x_2 + x_3 i$. Then $jaj^{-1} =
\overline{a}$, $x = a + bj$, and $n(x) = a\overline{a} + b\overline{b}$. The
action of $\HH$ on itself by right-multiplication sends $x = a + bj$ to the
$2\times2$ complex matrix
$$R_x = \pmatrix{a & -\overline{b} \cr b & \overline{a} \cr}$$ We write $V$ for
the complex vector space of complex-linear forms $\alpha: \HH\to\CC$. The forms
$A: x \mapsto a$ and $B: x \mapsto b$ are a basis of $V$. We have a left action
of $\HH$ on $V$ with $x\in\HH$ acting as $\alpha(y) \mapsto \alpha(yx)$. This
left action represents the quaternion $x$ by the matrix
$$L_x = \pmatrix{a & b \cr -\overline{b} & \overline{a} \cr}$$ Also let $V_N =
\hbox{SYM}^N(V)$, for $N = 0, 1, \dots$, be the $(N + 1)$-dimensional complex
vector space with basis the monomials $A^j B^{N-j}, 0\leq j\leq N$.
Let $G = \HH^\times$ be the multiplicative group (with typical element $g$) and
$G_0 = \{ g \in G |\ n(g) = 1\}$ its maximal compact subgroup. Through the
assignment $g \mapsto L_g$ an isomorphism $G_0 \sim SU(2)$ is obtained, and the
$V_N$'s give the complete list of (isomorphism classes of) irreducible
representations of $G_0$.
The additive Fourier Transform $\cal F$ is taken with respect to the additive
character $x \mapsto \lambda(x) = e^{-2\pi\,i (x + \overline{x})}$. We note that
$\lambda(xy) = \lambda(yx)$. The choice we make for the normalization of $\cal
F$ is:
$${\cal F}(\varphi)(y) = \widetilde{\varphi}(y) = \int
\varphi(x)\lambda(-xy)\,dx$$ where $dx = 4dx_0dx_1dx_2dx_3$ is the unique
self-dual Haar measure for $\lambda$. With these choices the function $\omega(x)
= e^{-2\pi x\overline{x}}$ is its own Fourier transform.
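Indeed, this can be checked coordinate-wise: $\lambda(-xy) = e^{4\pi i\,\mathop{\rm Re}(xy)}$
with $\mathop{\rm Re}(xy) = x_0y_0 - x_1y_1 - x_2y_2 - x_3y_3$, so that the integral
defining $\widetilde{\omega}$ factors into four one-dimensional Gaussian integrals
$$\int_\RR e^{-2\pi t^2} e^{\pm 4\pi i\,t u}\,dt = {1\over\sqrt{2}}\,e^{-2\pi u^2}$$
and the resulting factor $({1\over\sqrt{2}})^4 = {1\over 4}$ is exactly compensated
by the factor $4$ in $dx$.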
\begin{definition}
The module $|g|$ of $g\in \HH^\times$ is defined by the equality of additive
Haar measures on $\HH$: $d(gx) = d(xg) = |g|dx$. It is expressed in terms of the
reduced norm by $|g| = n(g)^2$.
\end{definition}
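\begin{note}
The value of the module can be checked directly: as the reduced norm is
multiplicative, multiplication by $g$ (on either side) is a Euclidean similarity of
$\HH \simeq \RR^4$ with ratio $\sqrt{n(g)}$, hence it multiplies the Lebesgue measure
by $(\sqrt{n(g)})^4 = n(g)^2$.
\end{note}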
\begin{note}
The multiplicative (left- and right-) Haar measures on $G$ are the multiples of
$dg \over |g|$.
\end{note}
One has a direct product $G = (0, \infty) \times G_0$, $g = r g_0$, $r =
\sqrt{n(g)} = |g|^{1/4}$. We write $d\sigma$ for the Euclidean surface element
on $G_0$ (for the coordinates $x_i$), so that $dx = 4r^3\, drd\sigma$. The rule
for integrating functions of $r$ is $\int g(r) dx = 8\pi^2\,\int_0^\infty g(r)
r^3 dr$ as is checked with $\omega(x)$. So $d\sigma = 2\pi^2\,d^*g_0$ where
$d^*g_0$ is the Haar measure on $G_0$ with total mass $1$.
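As a check: $\int_\HH \omega(x)\,dx = \widetilde{\omega}(0) = \omega(0) = 1$, while
the substitution $u = 2\pi r^2$ gives $8\pi^2\,\int_0^\infty e^{-2\pi r^2} r^3\,dr =
8\pi^2\cdot{\Gamma(2)\over 8\pi^2} = 1$. The constant $8\pi^2$ is the factor $4$
from $dx$ times $2\pi^2$, the Euclidean volume of the unit sphere $G_0 = S^3$.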
\begin{definition}
The normalized Haar measure on $G$ is defined to be $d^*g = {1 \over 2
\pi^2}{dg \over |g|} = 4 {dr\over r}\,d^*g_0$. It is chosen so that its
push-forward under the module map $g\mapsto u = |g|\in\RR^{\times+}$ is ${du
\over u} = 4 {dr\over r}$.
\end{definition}
The multiplicative group $G$ acts in various unitary ways on $L^2 := L^2(\HH,
dx)$:\\ \centerline{\hfill$L_1(g) : \varphi(x) \mapsto
|g|^{1/2}\varphi(xg)$\hfill $R_1(g) : \varphi(x) \mapsto
|g|^{1/2}\varphi(gx)$\hfill} and also $L_2(g) = R_1(g^{-1})$ and $R_2(g) =
L_1(g^{-1})$.
\begin{definition}
The \emph{Inversion} $I$ is the unitary operator on $L^2(\HH,dx)$ acting as
$\varphi(x) \mapsto {1\over|x|}\varphi({1\over x})$. The \emph{Gamma operator}
is the composite $\Gamma={\cal F}I$.
\end{definition}
\begin{thm}
The Gamma operator commutes with both left actions $L_1$ and $L_2$ and with both
right actions $R_1$ and $R_2$ of $G$ on $L^2$.
\end{thm}
\begin{proof}
One just checks that ${\cal F}$ intertwines $L_1$ with $L_2$, and also $R_1$
with $R_2$ and that the inversion $I$ also intertwines $L_1$ with $L_2$, and
$R_1$ with $R_2$.
\end{proof}
\begin{definition}
The \emph{basic isometry} is the map $\phi(x) \mapsto f(g) = \sqrt{2\pi^2\,|g|}
\; \phi(g)$ between $L^2(\HH,dx)$ and $L^2(G, d^*g)$.
\end{definition}
\begin{note}
It is convenient to avoid using any notation at all for the basic isometry. So
we still denote by ${\cal F}$ the additive Fourier transform transported to the
multiplicative setting. The inversion $I$ becomes $f(g) \mapsto f(g^{-1})$. The
Gamma operator is still denoted $\Gamma$ when viewed as acting on $L^2(G, d^*g)$.
\end{note}
The spectral decomposition of $L^2((0, \infty),{du\over u})$ is standard Fourier
(or Mellin) theory (alternatively we can apply Theorem {\bf\ref{T1}} here): any bounded
operator $M$ commuting with multiplicative translations is given by a measurable
bounded multiplier $a(\tau)$ in dual space $L^2(\RR, {d\tau \over 2\pi})$:
$$G_1(u) = \mathop{\rm l{.}i{.}m{.}}_{\Lambda \rightarrow \infty} \int_{-\Lambda}^{\Lambda}
\psi(\tau) u^{-i\tau} {d\tau \over 2\pi} \Longrightarrow M(G_1)(u) =
\mathop{\rm l{.}i{.}m{.}}_{\Lambda \rightarrow \infty} \int_{-\Lambda}^{\Lambda} a(\tau)\psi(\tau)
u^{-i\tau} {d\tau \over 2\pi}$$
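(Under the substitution $u = e^v$ this is just Plancherel's theorem on $L^2(\RR,
dv)$; the spectral function is recovered as $\psi(\tau) = \int_0^\infty
G_1(u)\,u^{i\tau}\,{du\over u}$.)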
On the other hand the spectral decomposition of $L^2(G_0,\,d^*g_0)$ is part of
the Peter-Weyl theory: it tells us that $L^2(G_0,\,d^*g_0)$ decomposes under
the $L_1 \times R_1$ action by $G_0 \times G_0$ into a countable direct sum
$\oplus_{N\geq0} W_N$ of finite dimensional irreducible, non-isomorphic,
modules. This is also the isotypical decomposition under either $L_1$ alone or
$R_1$ alone (for which $W_N$ then contains $N+1$ copies of $V_N$.) Using the
standard theory of tensor products of separable Hilbert spaces (see for example
\cite{Re}) we have:
\begin{lem} The isotypical
decomposition of $L^2(G,\,d^*g)$ under the compact group $G_0 \times G_0$ acting
through $L_1 \times R_1$ is
$$L^2(G,\,d^*g) = L^2((0, \infty),{du\over u}) \otimes L^2(G_0,\,d^*g_0) =
\oplus_N L^2((0, \infty),{du\over u})\otimes W_N$$
\end{lem}
\begin{lem}\label{L2}
Let $M$ be a bounded operator on $L^2$ which commutes with both the $L_1$ and
$R_1$ actions of $G$. Then to each integer $N\geq 0$ is associated an
(essentially bounded) multiplicator $a_N(\tau)$ on $\RR$, unique up to equality
almost everywhere, such that
$$\psi\in L^2(\RR, {d\tau \over 2\pi}), \ G_1(u) = \mathop{\rm l{.}i{.}m{.}}_{\Lambda \rightarrow
\infty} \int_{-\Lambda}^{\Lambda} \psi(\tau) u^{-i\tau} {d\tau \over 2\pi}$$
$$\Rightarrow \forall F\in W_N\quad M(FG_1) = FG_2$$
$$\hbox{with }G_2(u) = \mathop{\rm l{.}i{.}m{.}}_{\Lambda \rightarrow \infty}
\int_{-\Lambda}^{\Lambda} a_N(\tau)\psi(\tau) u^{-i\tau} {d\tau \over 2\pi}$$
and where $FG_1$ is the function $g\mapsto F({g\over|g|^{1/4}})G_1(|g|)$ and
$FG_2$ the function $g\mapsto F({g\over|g|^{1/4}})G_2(|g|)$.
\end{lem}
\begin{proof}
Let us take $f \in L^2((0, \infty),{du\over u})$ and consider the linear
operator on $L^2(G_0,d^*g_0)$:
$$F(g_0) \mapsto \left(g_0 \mapsto \int_0^\infty \overline{f(u)}M(f \otimes
F)(g_0\,u^{1/4}){du\over u}\right)$$ It commutes with the action of $G_0 \times
G_0$ hence stabilizes each $W_N$ and is a multiple $a_N^f$ of the identity
there. On the other hand, if we choose $F_1$ and $F_2$ in $W_N$ and consider
$$f \mapsto \left(u \mapsto \int_{G_0} \overline{F_2(g_0)} M(f \otimes
F_1)(g_0\,u^{1/4})\,d^*g_0\right)$$ we obtain a bounded operator $M(F_1,F_2)$ on
$L^2((0, \infty),{du\over u})$ commuting with dilations and such that
$$<f |M(F_1,F_2)(f)> = <F_2 | M_N^f(F_1) > = a_N^f <F_2 | F_1>$$ where the
left-hand bracket is computed in $L^2((0, \infty),{du\over u})$ while the next
two are in $L^2(G_0,\,d^*g_0)$. So $M(F_1,F_2)$ depends on $(F_1,F_2)$ only
through $<F_2 | F_1>$. We then let $a_N(\tau)$ be the spectral multiplier
associated to $M(F, F)$ for an arbitrary $F$ satisfying $<F|F> = 1$.
\end{proof}
\begin{corollary}
The von Neumann algebra $\cal A$ of bounded operators commuting simultaneously
with the left and right actions of the multiplicative quaternions on $L^2(\HH,
dx)$ is abelian.
\end{corollary}
\begin{lem}
A self-adjoint operator $M$ commuting with both left and right actions of $G$
commutes with any operator of the von Neumann algebra $\cal A$. In particular it
commutes with $\Gamma$.
\end{lem}
\begin{proof}
One applies Theorem {\bf\ref{L1}}.
\end{proof}
\begin{definition}\label{D1}
The \emph{quaternionic Tate Gamma functions} are the multiplicators
$\gamma_N(\tau)$ ($N\geq 0$) associated to the unitary operator $\Gamma$.
\end{definition}
\begin{note}
This generalizes the Gamma functions of Tate for $K=\RR$ and $K=\CC$
(\cite{Ta}). In all cases they are indexed by the characters of the maximal
compact subgroup of the multiplicative group $K^\times$.
\end{note}
\begin{lem}
There is a smooth function in the equivalence class of
$\gamma_N(\tau)$.
\end{lem}
\begin{proof}
If the function $G_1(u)$ on $(0, \infty)$ is chosen smooth with compact support
(so that $\psi(\tau)$ is entire) then, for any $F\in W_N$ the function $FG_1$,
viewed in the additive picture, is smooth on $\HH$, has compact support, and
vanishes identically in a neighborhood of the origin. So its image under the
inversion also belongs to the Schwartz class in the additive picture on
$\HH$. Hence $\Gamma(FG_1)$ can be written as $|g|^{1/2} \phi(g)$ for some
Schwartz function $\phi(x)$ of the additive variable $x$. One checks that this
then implies that $G_2(u)$ is a Schwartz function of the variable $\log(u)$ (we
assume that $F$ does not identically vanish of course), hence that
$\gamma_N(\tau) \psi(\tau)$ is a Schwartz function of $\tau$. The various
allowable $\psi$'s have no common zeros so the conclusion follows.
\end{proof}
\begin{note}
From now on $\gamma_N$ refers to this unique smooth representative. It is
everywhere of modulus $1$ as $\Gamma$ is a unitary operator.
\end{note}
\begin{note}
Any function $F \in W_N$ will now be considered as a function on all of $G =
\HH^\times$ after extending it to be constant along each radial line. It is not
defined at $x=0$ of course.
\end{note}
Let $F\in W_N$. For $\mathop{\rm Re}(s) > 0$, $F(x) |x|^{s-1}$ is a tempered distribution
on $\HH$, hence has a distribution-theoretic Fourier Transform. At first we only
consider $s = {1\over 2} + i\tau$:
\begin{lem}\label{L4}
As distributions on $\HH$
$${\cal F}(F({1\over x})\,|x|^{-{1\over 2} + i\tau}) = \gamma_N(\tau)
F(x)\,|x|^{-{1\over 2} - i\tau}$$
\end{lem}
\begin{proof}
We have to check the identity:
$$\int F({1\over y})\,|y|^{-{1\over 2} + i\tau}\widetilde{\varphi}(y)\,dy =
\gamma_N(\tau)\cdot\int F(x)\,|x|^{-{1\over 2} - i\tau}\varphi(x)\,dx$$ for all
Schwartz functions $\varphi(x)$ with Fourier Transform
$\widetilde{\varphi}(y)$. Both integrals are analytic in $\tau\in\RR$, hence
both sides are smooth (bounded) functions of $\tau$. It will be enough to prove
the identity after integrating against $\psi(\tau)\, {d\tau \over 2\pi}$ with an
arbitrary Schwartz function $\psi(\tau)$. With the notations of Lemma
{\bf\ref{L2}}, we have to check
$$\int F({1\over y})G_1({1\over y})|y|^{- {1\over 2}}\widetilde{\varphi}(y)\,dy
= \int F(x)\,G_2(x)|x|^{-{1\over 2}}\varphi(x)\,dx$$ But, by Lemma
{\bf\ref{L2}}, and by Definition {\bf\ref{D1}}, $F(x)\,G_2(x)|x|^{-{1\over 2}}$
is just the Fourier Transform in $L^2(\HH, dx)$ of $F({1\over y})G_1({1\over
y})|y|^{- {1\over 2}}$, so this reduces to the $L^2$-identity
$$\int \psi(y)\widetilde{\varphi}(y)\,dy = \int
\widetilde{\psi}(x)\varphi(x)\,dx$$
\end{proof}
\begin{thm}\label{T2}
Let $F\in W_N$. There exists an analytic function $\Gamma_N(s)$ in $0 < \mathop{\rm Re}(s)
<1$ depending only on $N\in \NN$ and such that the following identity of
tempered distributions on $\HH$ holds for each $s$ in the critical strip
($0<\mathop{\rm Re}(s)<1$):
$${\cal F}(F({1\over x})\,|x|^{s -1}) = \Gamma_N(s) F(x)\,|x|^{-s}$$
\end{thm}
\begin{proof}
We have to check an identity:
$$\int F({1\over y})\,|y|^{s-1}\widetilde{\varphi}(y)\,dy = \Gamma_N(s)\cdot\int
F(x)\,|x|^{-s}\varphi(x)\,dx$$ for all Schwartz functions $\varphi(x)$ with
Fourier Transform $\widetilde{\varphi}(y)$. Both integrals are analytic in the
strip $0 < \mathop{\rm Re}(s) <1$, their ratio is thus a meromorphic function, which
depends neither on $F$ nor on $\varphi$ as it equals $\gamma_N(\tau)$ on the
critical line. Furthermore for any given $s$ we can choose $\varphi(x) =
\overline{F(x)} \alpha(|x|)$, with $\alpha$ having very small support around
$|x| = 1$ to see that this ratio is in fact analytic.
\end{proof}
\begin{note}
This is the analog for quaternions of Tate's ``local functional equation''
\cite{Ta}, in the distribution theoretic flavor advocated by Weil \cite{We2}. We
followed a different approach from Tate's, as his proof does not go through as
easily in the non-commutative case.
\end{note}
Let $\Gamma(s)$ be Euler's Gamma function ($\int_0^\infty e^{-u}u^s\,{du\over
u}$).
\begin{thm}\label{T3} We have for each $N\in\NN$:
$$\Gamma_N(s) = i^N (2\pi)^{2-4s} {\Gamma(2s + {N\over 2})\over\Gamma(2(1-s) +
{N\over 2})}$$
\end{thm}
\begin{proof}
Let $0\leq j \leq N$ and $\omega_j(x) = \overline{A(x)}^{N-j}\overline{B(x)}^j
e^{-2\pi x\overline{x}} = \overline{a}^{N-j}\,\overline{b}^j\,\omega(x)$. One
checks that $\widetilde{\omega_j}(y) =
(-1)^j\,i^N\,{\alpha}^{N-j}\,\overline{\beta}^j \, \omega(y)$ ($y = \alpha +
\beta j$). We choose as homogeneous function $F_j(x) =
a^{N-j}\,b^j\,|x|^{-N/4}$. For these choices the identity of Theorem
{\bf\ref{T2}} becomes
$$i^N \int (\alpha\overline{\alpha})^{N-j}(\beta\overline{\beta})^{j} e^{-2\pi
y\overline{y}} |y|^{s - 1 - N/4} dy = \Gamma_N(s) \int
(a\overline{a})^{N-j}(b\overline{b})^{j} e^{-2\pi x\overline{x}} |x|^{-s - N/4}
dx$$ Adding a suitable linear combination of these identities for $0\leq j
\leq N$ gives
$$i^N \int (y\overline{y})^{N}\,e^{-2\pi y\overline{y}} |y|^{s - 1 - N/4} dy =
\Gamma_N(s) \int (x\overline{x})^{N}\,e^{-2\pi x\overline{x}} |x|^{-s - N/4}
dx$$ hence the result after evaluating the integrals in terms of $\Gamma(s)$.
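Explicitly, with $y\overline{y} = r^2$ and $|y| = r^4$, the radial integration rule
and the substitution $u = 2\pi r^2$ give
$$\int (y\overline{y})^{N}\,e^{-2\pi y\overline{y}} |y|^{s - 1 - N/4}\, dy =
8\pi^2 \int_0^\infty e^{-2\pi r^2} r^{4s + N - 1}\,dr =
4\pi^2\,(2\pi)^{-2s - {N\over 2}}\,\Gamma(2s + {N\over 2})$$
and the right-hand integral is given by the same formula with $s$ replaced by
$1-s$; taking the ratio (together with the factor $i^N$) yields the stated
expression for $\Gamma_N(s)$.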
\end{proof}
\section{The central operator $H = \log(|x|) + \log(|y|)$}
\begin{definition}
We let $\Delta \subset {\cal C}^\infty(G)$ be the vector space of finite linear
combinations of functions $f(g) = F(g_0)K(\log(|g|))$ with $F$ in one of the
$W_N$'s (hence smooth) and $K$ a Schwartz function on $\RR$. It is a dense
sub-domain of $L^2$.
\end{definition}
\begin{thm} $\Delta$ is stable under ${\cal F}$.\end{thm}
\begin{proof}
We have to show that $\gamma_N(\tau)$ is a multiplier of the Schwartz class. Let
$h_N(\tau) = -i {\gamma_N^\prime(\tau) \over \gamma_N(\tau)}$. Using Theorem {\bf\ref{T3}}
and the partial fraction expansion of the logarithmic derivative of $\Gamma(s)$
(as in \cite{Bu1} for the real and complex Tate Gamma functions), or
Stirling's formula, or any other means, one finds $h_N(\tau) =
O(\log(1+|\tau|))$, $h_N^{(k)}(\tau) = O(1)$, so that $\gamma_N^{(k)}(\tau) =
O(\log(1+|\tau|)^k)$.
\end{proof}
Let $A$ be the operator on $L^2(\HH,dx)$ of multiplication with $\log(|x|)$. As
it is unbounded, we need a domain and we choose it to be $\Delta$. Of course
$(A, \Delta)$ is essentially self-adjoint. It is unitarily equivalent to the
operator $(B, \Delta)$, $B = {\cal F}A{\cal F}^{-1}$. Clearly:
\begin{lem} The domain $\Delta$ is stable under $A$ and $B$.\end{lem}
\begin{definition}
The \emph{conductor operator} is the operator $H = A + B$:
$$H = \log(|x|) + \log(|y|)$$ This is an unbounded operator defined initially
on the domain $\Delta$.
\end{definition}
\begin{lem} The conductor operator $(H,\Delta)$ commutes with the left and with
the right actions of $G$ and is symmetric.
\end{lem}
This is clear. Applying now Theorem {\bf\ref{L1}} we deduce:
\begin{thm} The conductor operator $(H,\Delta)$ has a unique self-adjoint extension.
\end{thm}
We will simply denote this self-adjoint extension by $H$ and call it the
``conductor operator''.
\begin{thm} The conductor operator $H$ commutes with the inversion $I$.
\end{thm}
\begin{proof} Indeed, it commutes with $\Gamma$ by Theorem {\bf\ref{L1}} and it
commutes with $\cal F$ by construction.
\end{proof}
We now want to give a concrete description of its spectral functions.
\begin{definition} Let for each $N\in\NN$ and $\tau\in\RR$:
$$h_N(\tau) = -i {\gamma_N^\prime(\tau) \over \gamma_N(\tau)}$$
$$k_N(\tau) = - h_N^\prime(\tau)$$
\end{definition}
Explicit computations prove that the functions $h_N$ are left-bounded ($\exists
C\,\forall\tau\,\forall N\ h_N(\tau)\geq -C$) and that the functions $k_N$ are
bounded ($\exists C\,\forall\tau\,\forall N\ |k_N(\tau)|\leq C$.) We need not
reproduce these computations here, which use only the partial fraction expansion
of Euler's gamma function, as similar results are provided in \cite{Bu1} in the
real and complex cases.
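For the convenience of the reader, let us at least record the starting point: on
the critical line $\gamma_N(\tau) = \Gamma_N({1\over 2} + i\tau)$ (see the proof of
Theorem {\bf\ref{T2}}), so Theorem {\bf\ref{T3}} gives
$$h_N(\tau) = -4\log(2\pi) + 2{\Gamma^\prime\over\Gamma}(1 + {N\over 2} + 2i\tau)
+ 2{\Gamma^\prime\over\Gamma}(1 + {N\over 2} - 2i\tau) = -4\log(2\pi) +
4\,\mathop{\rm Re}\,{\Gamma^\prime\over\Gamma}(1 + {N\over 2} + 2i\tau)$$
The partial fraction expansion ${\Gamma^\prime\over\Gamma}(z) = -\gamma_e +
\sum_{n\geq 0}({1\over n+1} - {1\over n+z})$ (with $\gamma_e = -\Gamma^\prime(1)$
Euler's constant) shows that $\mathop{\rm Re}\,{\Gamma^\prime\over\Gamma}(z) \geq
-\gamma_e$ whenever $\mathop{\rm Re}(z) \geq 1$, whence the uniform lower bound
$h_N(\tau) \geq -4\log(2\pi) - 4\gamma_e$; the evenness $h_N(-\tau) = h_N(\tau)$ is
also apparent.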
Let $f(g) = F(g_0)\phi(|g|)$ be an element of $\Delta$, $F \in W_N \subset
L^2(G_0, dg_0)$, $\phi\in L^2((0,\infty), {du\over u})$, $\phi$ being a Schwartz
function of $\log(u)$ ($u = |g|$). We can also consider $f$ to be given as a
pair $\{F, \psi\}$ with $\psi(\tau) = \int_0^\infty \phi(u) u^{i\tau} {du\over
u}$ being a Schwartz function of $\tau$. Then $A(f)$ is given by the pair $\{F,
D(\psi)\}$ where $D$ is the differential operator ${1\over i}{d\over
d\tau}$. This implies that $\Gamma A\Gamma^{-1} (f)$ corresponds to the pair
$\{F, D(\psi) - h_N\cdot \psi\}$. On the other hand $\Gamma A\Gamma^{-1} = - B$
so $H(f)$ corresponds to the pair $\{F, h_N\cdot \psi\}$. The commutation with
the inversion $I$ translates into $h_N(-\tau) = h_N(\tau)$. Also: $K = i[B, A] =
-i[A, H]$ sends the pair $\{F, \psi\}$ to $\{F, k_N\cdot \psi\}$, hence is
bounded and anti-commutes with the Inversion. We have proved:
\begin{thm}\label{T4}
The operator $\log(|x|) + \log(|y|)$ is self-adjoint, left-bounded, commutes
with the left- and right- dilations, commutes with the Inversion, and its
spectral functions are the functions $h_N(\tau)$. The operator $i\,[\log(|y|),
\log(|x|)]$ is bounded, self-adjoint, commutes with the left- and right-
dilations, anti-commutes with the Inversion and its spectral functions are the
functions $k_N(\tau)$.
\end{thm}
We now conclude this chapter with a study of some elementary
distribution-theoretic properties of $H$. For this we need the analytic
functions of $s$ ($0 < \mathop{\rm Re}(s) < 1$) indexed by $N\in\NN$:
$$H_N(s) = {d\over ds} \log(\Gamma_N(s))$$ (so that $h_N(\tau) = H_N({1\over 2}
+ i\tau)$).
\begin{lem}
Let $\varphi(x)$ be a Schwartz function on $\HH$. Then $H(\varphi)$ is
continuous on $\HH \backslash\{0\}$, is $O(\log(|x|))$ for $x \rightarrow 0$,
and is $O(1/|x|)$ for $|x| \rightarrow\infty$. Furthermore, for any $F \in W_N$
(constant along radial lines), the following identity holds for $0 < \mathop{\rm Re}(s) <
1$:
$$\int H(\varphi)(x) F(x) |x|^{-s} dx = H_N(s) \int \varphi(x) F(x) |x|^{-s} dx$$
\end{lem}
\begin{proof}
Assuming the validity of the estimates we see that both sides of the identity
are analytic functions of $s$, so it is enough to prove the identity on the
critical line:
$$\int H(\varphi)(x) F(x) |x|^{-{1\over 2} - i\tau} dx = h_N(\tau) \int
\varphi(x) F(x) |x|^{-{1\over 2} - i\tau} dx$$
As in the proof of Lemma {\bf\ref{L4}}, it
is enough to prove it after integrating against an arbitrary Schwartz function
$\psi(\tau)$. With $G(u) = \int_{-\infty}^\infty \psi(\tau) u^{-i\tau} {d\tau
\over 2\pi}$, and using Theorem {\bf\ref{T4}} this becomes
$$\int H(\varphi)(x) F(x) G(|x|) |x|^{-{1\over 2}} dx = \int \varphi(x) H(FG)(x)
|x|^{-{1\over 2}} dx$$ (on the right-hand-side $H(FG)$ is computed in the
multiplicative picture, on the left-hand-side $H(\varphi)$ is evaluated in the
additive picture). The self-adjointness of $H$ reduces this to
$\overline{H(\overline{\varphi})} = H(\varphi)$, which is a valid identity.
For the proof of the estimates we observe that $B(\varphi)$ is the Fourier
transform of an $L^1$-function hence is continuous, so that we only need to show
that it is $O(1/|x|)$ for $|x| \rightarrow \infty$. For this we use that
$B(\varphi)$ is additive convolution of $-\varphi$ with the distribution $G =
{\cal F}(-\log(|y|))$. The estimate then follows from the formula for $G$ given
in the next lemma.
\end{proof}
\begin{lem}\label{L5} The distribution $G(x) = {\cal F}(-\log(|y|))$ is given as:
$$G(\varphi) = \int_{|x| \leq 1}(\varphi(x) - \varphi(0))\, {dx \over
2\pi^2\,|x|} + \int_{|x| > 1}\varphi(x)\, {dx \over 2\pi^2\,|x|}+\ (4\log(2\pi)
+ 4\gamma_e - 2)\varphi(0)$$
\end{lem}
\begin{proof}
We have used the notation $\gamma_e = - \Gamma^\prime(1)$ for the
Euler-Mascheroni constant ($=0.577\dots$). Let $\Delta_s$ for $\mathop{\rm Re}(s)>0$ be
the homogeneous distribution $|x|^{s-1}$ on $\HH$. It is a tempered
distribution. The formula
$$\Delta_s(\varphi) = \int_{|x| \leq 1}(\varphi(x) - \varphi(0))|x|^{s-1}\, dx +
\int_{|x| > 1}\varphi(x)|x|^{s-1}\, dx + {2\pi^2\over s}\varphi(0)$$ defines its
analytic continuation to $\mathop{\rm Re}(s) > -{1 \over 4}$, with a simple pole at $s =
0$. Using
$${\cal F}(\Delta_s) = \Gamma_0(s) \Delta_{1-s}$$ for $s = 1 - \varepsilon$,
$\varepsilon \rightarrow 0$, and expanding in $\varepsilon$ gives
$$\varphi(0) + \varepsilon G(\varphi) + O(\varepsilon^2) = \Gamma_0(1 -
\varepsilon)\cdot\left\{{2\pi^2\over \varepsilon}\varphi(0) + \int_{|x| \leq
1}(\varphi(x) - \varphi(0))\, {dx\over|x|} + \int_{|x| > 1}\varphi(x)\,
{dx\over|x|} + O(\varepsilon) \right\}$$ As $\Gamma_0(1 - \varepsilon) = {1\over
2\pi^2}(\varepsilon + (4\log(2\pi) + 4\gamma_e - 2) \varepsilon^2 +
O(\varepsilon^3))$ the result follows.
\end{proof}
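\begin{note}
The expansion of $\Gamma_0(1 - \varepsilon)$ used above follows from Theorem
{\bf\ref{T3}}: $\Gamma_0(1-\varepsilon) = (2\pi)^{4\varepsilon -
2}\,{\Gamma(2-2\varepsilon)\over\Gamma(2\varepsilon)}$, and combining
${1\over\Gamma(2\varepsilon)} = 2\varepsilon\,(1 + 2\gamma_e\varepsilon +
O(\varepsilon^2))$, $\Gamma(2-2\varepsilon) = 1 - 2(1 - \gamma_e)\varepsilon +
O(\varepsilon^2)$ and $(2\pi)^{4\varepsilon - 2} = {1\over 4\pi^2}(1 +
4\log(2\pi)\varepsilon + O(\varepsilon^2))$ gives the coefficient $4\log(2\pi) +
4\gamma_e - 2$.
\end{note}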
\section{The trace of Connes for quaternions}
Let $f(g)$ be a smooth function with compact support on $\HH^\times$. Let $U_f$
be the bounded operator $\int f(g) L_2(g)\,d^*g$ on $L^2(\HH, dx)$ of left
multiplicative convolution. So
$$U_f: \varphi(x) \mapsto \int_G f(g){1\over\sqrt{|g|}}\varphi(g^{-1}x)\,d^*g$$
The composition $U_f\,{\cal F}$ of $U_f$ with the Fourier Transform ${\cal F}$
acts as
\begin{eqnarray*}
\varphi(x) &\mapsto& \int_G\int_\HH
f(g){1\over\sqrt{|g|}}\lambda(-g^{-1}xy)\varphi(y)\,dy\,d^*g \\
&=& \int_\HH\int_G
f({1\over
g}){\sqrt{|g|}}\lambda(-gxy)\,d^*g\,\varphi(y)\,dy\\
&=&
{1\over\sqrt{2\pi^2}}\int_{y\in\HH} \left( \int_{Y\in\HH} f({1\over
Y}){1\over\sqrt{2\pi^2|Y|}}\lambda(-Yxy)\,dY\right) \varphi(y)\,dy \\
&=&
{1\over\sqrt{2\pi^2}}\int_\HH {\cal F}(I(f)_a)(xy)\varphi(y)\,dy\\
\end{eqnarray*}
In this last equation $I(f)_a$ is the additive representative
${1\over\sqrt{2\pi^2|Y|}}f({1\over Y})$ of $I(f)$. Finally denoting similarly
with $\Gamma(f)_a$ the additive representative of $\Gamma(f)$ we obtain
$$(U_f{\cal F})(\varphi)(x) = {1\over\sqrt{2\pi^2}}\int_\HH {\Gamma(f)_a}(xy)\varphi(y)\,dy$$
As $f$ has compact support on $\HH^\times$ we note that $I(f)_a$ is smooth with
compact support on $\HH$ and that $\Gamma(f)_a$ belongs to the Schwartz
class. Following Connes (\cite{Co}, for $\RR$ or $\CC$ instead of $\HH$), our
goal is to compute the trace $\Tr(\Lambda)$ of the operator
$\widetilde{P_\Lambda}P_\Lambda\,U_f$, where $\widetilde{P_\Lambda} = {\cal
F}P_\Lambda{\cal F}^{-1}$ and $P_\Lambda$ is the cut-off projection to functions
with support in $|x| \leq \Lambda$. Our reference for trace-class operators will
be \cite{Go}. We recall that if $A$ is trace-class then for any bounded $B$,
$AB$ and $BA$ are trace-class and have the same trace. Also if $K_1$ and $K_2$
are two Hilbert-Schmidt operators given for example as $L^2-$kernels $k_1(x,y)$
and $k_2(x,y)$ on a measure space $(X, dx)$ then $A = K_1^*\, K_2$ is
trace-class and its trace is the Hilbert-Schmidt scalar products of $K_1$ and
$K_2$:
$$\Tr(K_1^*\, K_2) = \int\int \overline{k_1(x,y)}\,k_2(x,y)\ dxdy$$
The operator $P_\Lambda {\cal F}^{-1} P_\Lambda$ is an operator with kernel a
smooth function restricted to a finite box (precisely it is $\lambda(xy),\
|x|,|y|\leq\Lambda$). Such an operator is trace class, as is well-known (one
classical line of reasoning is as follows: taking a smooth function $\rho(x)$
with compact support, identically $1$ on $|x|\leq\Lambda$, and $Q_\rho$ the
multiplication operator with $\rho$, one has $P_\Lambda {\cal F}^{-1} P_\Lambda
= P_\Lambda Q_\rho {\cal F}^{-1} Q_\rho P_\Lambda$, so that it is enough to
prove that $Q_\rho {\cal F}^{-1} Q_\rho$ is trace-class. This operator has a
smooth kernel with compact support, so we can put the system in a box, and
reduce to an operator $K$ with smooth kernel on a torus. Then $K = (1 +
\Delta)^{-n}(1 + \Delta)^{n}K$ with $\Delta$ the positive Laplacian. For $n$
large enough, $(1 + \Delta)^{-n}$ is trace-class, while $(1 + \Delta)^{n}K$ is
at any rate bounded.)\par
So Connes's operator $\widetilde{P_\Lambda}P_\Lambda\,U_f = {\cal F}\cdot
P_\Lambda {\cal F}^{-1} P_\Lambda\cdot U_f$ is indeed trace class and
$$\Tr(\widetilde{P_\Lambda}P_\Lambda\,U_f) = \Tr(P_\Lambda {\cal F}^{-1}
P_\Lambda\cdot U_f{\cal F}) = \Tr(P_\Lambda {\cal F}^{-1} P_\Lambda\cdot
P_\Lambda U_f{\cal F} P_\Lambda)$$ can be computed as a Hilbert-Schmidt scalar
product:
$$\Tr(\widetilde{P_\Lambda}P_\Lambda\,U_f) = {1\over\sqrt{2\pi^2}}\int\int_{|x|,
|y| \leq\Lambda}\lambda(xy)\Gamma(f)_a(xy)\,dxdy$$ using the change of
variable $(x,y) \mapsto (Y= xy, y)$
$$\Tr(\widetilde{P_\Lambda}P_\Lambda\,U_f) = \sqrt{2\pi^2}
\int_{|Y|\leq\Lambda^2} \lambda(Y) \Gamma(f)_a(Y)\left(\int_{{|Y|\over
\Lambda}\leq|y|\leq\Lambda} {dy\over 2\pi^2 |y|}\right)\,dY$$
$$\Tr(\widetilde{P_\Lambda}P_\Lambda\,U_f) = \sqrt{2\pi^2}
\int_{|Y|\leq\Lambda^2} \Big(2\log(\Lambda) - \log(|Y|)\Big)\lambda(Y)
\Gamma(f)_a(Y)\,dY\leqno\bf(C)$$
This integral is an inverse (additive) Fourier transform evaluated at $1$. As
$\Gamma = {\cal F}I$ itself involves a Fourier transform the final result is
just $\sqrt{2\pi^2}M_\Lambda(I(f)_a)(1)$ where $M_\Lambda$ is the self-adjoint
operator $(2\log(\Lambda) - B)_+ = \max(2\log(\Lambda) - B, 0)$. If we recall
that $\sqrt{2\pi^2}$ is involved in the basic isometry from the additive to the
multiplicative picture, we can finally express everything back in the
multiplicative picture:
\begin{thm} The Connes operator $\widetilde{P_\Lambda}P_\Lambda\,U_f$ is a
trace-class operator and satisfies
\begin{eqnarray*}
\Tr(\widetilde{P_\Lambda}P_\Lambda\,U_f) &=& (2\log(\Lambda) - B)_+(I(f))(1)\\
\Tr(\widetilde{P_\Lambda}P_\Lambda\,U_f) &=& 2\log(\Lambda)f(1) - H(f)(1) +
o(1)\\
\end{eqnarray*}
\end{thm}
For the last line we used that $B(I(f))(1) = H(I(f))(1) = H(f)(1)$, since
$A(I(f))(1) = \log(1)\,I(f)(1) = 0$ and $H = \log(|x|) + \log(|y|)$ commutes
with the Inversion $I$. The error is $o(1)$ for $\Lambda \rightarrow\infty$ as
it is bounded above in absolute value (assuming $\Lambda > 1$) by
$$\sqrt{2\pi^2} \int_{|Y|\geq\Lambda^2} \log(|Y|)\;
\left|\Gamma(f)_a(Y)\right|\,dY$$ and $\Gamma(f)_a$ is a Schwartz function of
$Y\in\HH$. We note that, if needed, Lemma {\bf\ref{L5}} gives the term
$H(f)(1)$ a form more closely akin to Weil's explicit formulae of number
theory. We note that Connes's computation in \cite{Co} also goes through an
intermediate stage essentially identical with {\bf (C)} and that the
identification of the constant term with Weil's expression for the explicit
formula of number theory then requires a further discussion. The main result of
\cite{Bu1} and of this paper is thus the direct connection between $H$ and the
logarithmic derivatives of the Tate Gamma functions involved in the explicit
formulae.
{\bf Acknowledgements}
I thank the SSAS (``Soci\'et\'e de secours des amis des sciences'', quai de
Conti, Paris) for its financial support while this work was completed.
\section{Introduction}\label{sec:intro}
Equational unification of two terms is of special relevance to many areas in computer science and consists of finding a substitution that, when applied to both terms, makes them equal modulo some equational properties. Several algorithms have been developed in the literature for specific equational theories, such as associative-commutative symbols, exclusive-or, Diffie-Hellman, or Abelian Groups (see~\cite{BS-handbook00}). Narrowing was proved to be complete for unification \cite{Hullot80,JKK83} and several cases have been studied where narrowing provides a decidable unification algorithm~\cite{AEI09,AEI11}. A new narrowing-based equational unification algorithm relying on the concept of the \emph{variants} of a term \cite{CD05} has been developed \cite{ESM12} and it is available in the most recent version of Maude, version 2.7.1, which provides quite sophisticated unification features \cite{maude-manual,Meseguer18-wollic}.
Several tools and techniques rely on Maude's advanced unification capabilities, such as termination~\cite{DLM09} and local confluence and coherence~\cite{DM12} proofs, narrowing-based theorem proving~\cite{Rusu10} or testing~\cite{Riesco14}, and \emph{logical model checking} \cite{EM07,BEM13}. The area of cryptographic protocol analysis has also benefited: the Maude-NPA tool~\cite{EMM09} is the most successful example of using variant-based equational unification in Maude and the Tamarin tool~\cite{MSCB13,DDKS17,DHRS18} also relies on variants. Numerous decision procedures for formula satisfiability modulo equational theories also rely on unification, either based on narrowing \cite{TGRK15} or by using variant generation in finite variant theories~\cite{Meseguer18-scp}.
However, variant-based unification may compute many more unifiers than necessary. In this paper, we explore how to improve the variant-based unification algorithm implemented in Maude to produce a smaller, yet complete, set of most general variant unifiers. After some preliminaries in Section~\ref{sec:preliminaries}, we recall variant-based unification in Section~\ref{sec:eq_unification} and propose how to compute a set of most general variant unifiers in Section~\ref{sec:mgvu}. In Section~\ref{sec:mgvu-fast}, we propose a new fast algorithm that considerably reduces the number of variant unifiers by computing a complete (yet not always minimal) set of most general unifiers modulo the considered theory. Our experiments in Section~\ref{sec:exp} demonstrate that this new adaptation of variant-based unification is more efficient, both in execution time and in the number of computed variant unifiers, than the original algorithm. We conclude in Section~\ref{sec:conc}.
\section{Preliminaries}\label{sec:preliminaries}
We follow the classical notation and terminology from \cite{Terese03} for term rewriting, from \cite{BS-handbook00} for unification, and from \cite{Meseguer92} for rewriting logic and order-sorted notions.
We assume an order-sorted signature $\mathsf{\Sigma} = (S, \leq, \Sigma)$ with a poset of sorts $(S, \leq)$. The poset $(\sort{S},\leq)$ of sorts for $\Sigma$ is partitioned into equivalence classes, called \emph{connected components}, by the equivalence relation $(\leq \cup \geq)^+$. We assume that each connected component $[\sort{s}]$ has a \emph{top element} under $\leq$, denoted $\top_{[\sort{s}]}$ and called the \emph{top sort} of $[\sort{s}]$. This involves no real loss of generality, since if $[\sort{s}]$ lacks a top sort, it can be easily added. We also assume an $\sort{S}$-sorted family $\caX=\{\caX_\sort{s}\}_{\sort{s} \in \sort{S}}$ of disjoint variable sets with each $\caX_\sort{s}$ countably infinite. $\TermsS{s}$ is the set of terms of sort \sort{s}, and $\GTermsS{s}$ is the set of ground terms of sort \sort{s}. We write $\TermsOn{\Symbols}{\Variables}{}{}{}$ and $\GTermsOn{\Symbols}{}$ for the corresponding order-sorted term algebras. Given a term $t$, $\var{t}$ denotes the set of variables in $t$.
A \textit{substitution} $\sigma\in\SubstOn{\Symbols}{\Variables}{}{}{}$ is a sorted mapping from a finite subset of $\caX$ to $\TermsOn{\Symbols}{\Variables}{}{}{}$. Substitutions are written as $\sigma=\{X_1 \mapsto t_1,\ldots,X_n \mapsto t_n\}$ where the domain of $\sigma$ is $\domain{\sigma}=\{X_1,\ldots,X_n\}$ and the set of variables introduced by terms $t_1,\ldots,t_n$ is written $\range{\sigma}$. The identity substitution is $\textit{id}$. Substitutions are homomorphically extended to $\TermsOn{\Symbols}{\Variables}{}{}{}$. The application of a substitution $\sigma$ to a term $t$ is denoted by $t\sigma$ or $\sigma(t)$. For simplicity, we assume that every substitution is idempotent, i.e., $\sigma$ satisfies $\domain{\sigma}\cap\range{\sigma}=\emptyset$. The restriction of $\sigma$ to a set of variables $V$ is $\subterm{\sigma}{V}$, i.e., $\forall x\in V$, $\subterm{\sigma}{V}(x)=\sigma(x)$ and $\forall x\not\in V$, $\subterm{\sigma}{V}(x)=x$. Composition of two substitutions $\sigma$ and $\sigma'$ is denoted by $\sigma\composeSubst\sigma'$. Combination of two substitutions $\sigma$ and $\sigma'$ such that $\domain{\sigma}\cap\domain{\sigma'}=\emptyset$ is denoted by $\sigma \cup \sigma'$. We call an idempotent substitution $\sigma$ a variable \emph{renaming} if there is another idempotent substitution $\sigma^{-1}$ such that $(\sigma\sigma^{-1})|_{Dom(\sigma)} = \textit{id}$.
A \textit{$\Sigma$-equation} is an unoriented pair $t = t'$, where $t,t' \in \TermsS{s}$ for some sort $\sort{s}\in\sort{S}$. An \emph{equational theory} $(\Sigma,E)$ is a pair with $\Sigma$ an order-sorted signature and $E$ a set of $\Sigma$-equations. Given $\Sigma$ and a set $E$ of $\Sigma$-equations, order-sorted equational logic induces a congruence relation $\congr{E}$ on terms $t,t' \in \TermsOn{\Symbols}{\Variables}{}{}{}$ (see~\cite{Meseguer97}). Throughout this paper we assume that $\GTermsS{s}\neq\emptyset$ for every sort \sort{s}, because this affords a simpler deduction system. An equational theory $(\Sigma,E)$ is \emph{regular} if for each $t = t'$ in $E$, we have $\var{t} = \var{t'}$. An equational theory $(\Sigma,E)$ is \emph{linear} if for each $t = t'$ in $E$, each variable occurs only once in $t$ and in $t'$. An equational theory $(\Sigma,E)$ is \textit{sort-preserving} if for each $t = t'$ in $E$, each sort \sort{s}, and each substitution $\sigma$, we have $t \sigma \in \TermsS{s}$ iff $t' \sigma \in \TermsS{s}$. An equational theory $(\Sigma,E)$ is \emph{defined using top sorts} if for each equation $t = t'$ in $E$, all variables in $\var{t}$ and $\var{t'}$ have a top sort. Given two terms $t$ and $t'$, we say $t$ is more general than $t'$, denoted as $t \sqsupseteq_{E} t'$, if there is a substitution $\eta$ such that $t\eta \congr{E} t'$. Similarly, given two substitutions $\sigma$ and $\rho$, we say $\sigma$ is more general than $\rho$ for a set $W$ of variables, denoted as $\subterm{\sigma}{W} \sqsupseteq_{E} \subterm{\rho}{W}$, if there is a substitution $\eta$ such that $\subterm{(\sigma\composeSubst\eta)}{W} \congr{E} \subterm{\rho}{W}$. The $\sqsupseteq_{E}$ relation induces an equivalence relation $\simeq_{E}$, i.e., $t \simeq_{E} t'$ iff $t \sqsupseteq_{E} t'$ and $t \sqsubseteq_{E} t'$.
An \textit{$E$-unifier} for a $\Sigma$-equation $t = t'$ is a substitution $\sigma$ such that $t\sigma \congr{E} t'\sigma$. For $\var{t}\cup\var{t'} \subseteq W$, a set of substitutions $\csuV{t = t'}{W}{E}$ is said to be a \textit{complete} set of unifiers for the equality $t = t'$ modulo $E$ away from $W$ iff: (i) each $\sigma \in \csuV{t = t'}{W}{E}$ is an $E$-unifier of $t = t'$; (ii) for any $E$-unifier $\rho$ of $t = t'$ there is a $\sigma \in \csuV{t=t'}{W}{E}$ such that $\subterm{\sigma}{W} \sqsupseteq_{E} \subterm{\rho}{W}$; and (iii) for all $\sigma \in \csuV{t=t'}{W}{E}$, $\domain{\sigma} \subseteq (\var{t}\cup\var{t'})$ and $\range{\sigma} \cap W = \emptyset$. Given a conjunction $\Gamma$ of equations, a set $U$ of $E$-unifiers of $\Gamma$ is said to be \textit{minimal} if it is complete and for all distinct elements $\sigma$ and $\sigma'$ in $U$, $\sigma \sqsupseteq_E \sigma'$ implies $\sigma \congr{E} \sigma'$. A unification algorithm is said to be \textit{finitary} and complete if it always terminates after generating a finite and complete set of unifiers. A unification algorithm is said to be \textit{minimal} and complete if it always returns a minimal and complete set of unifiers.
A \textit{rewrite rule} is an oriented pair $l \to r$, where $l \not\in \caX$ and $l,r \in \TermsS{s}$ for some sort $\sort{s}\in\sort{S}$. An \textit{(unconditional) order-sorted rewrite theory} is a triple $(\Sigma,E,R)$ with $\Sigma$ an order-sorted signature, $E$ a set of $\Sigma$-equations, and $R$ a set of rewrite rules. The set $R$ of rules is \textit{sort-decreasing} if for each $t \rightarrow t'$ in $R$, each $\sort{s} \in \sort{S}$, and each substitution $\sigma$, $t'\sigma \in \TermsS{s}$ implies $t\sigma \in \TermsS{s}$. The rewriting relation on $\TermsOn{\Symbols}{\Variables}{}{}{}$, written $t \rewrite{p,R} t'$ (or just $t \rewrite{R} t'$) holds between $t$ and $t'$ iff there exist $p \in \funocc{t}$, $l \to r\in R$ and a substitution $\sigma$, such that $\subterm{t}{p} = l\sigma$, and $t' = \replace{t}{p}{r\sigma}$. The relation $\rewrite{R/E}$ on $\TermsOn{\Symbols}{\Variables}{}{}{}$ is ${\congr{E} ;\rewrite{R};\congr{E}}$. The transitive (resp. transitive and reflexive) closure of $\rewrite{R/E}$ is denoted $\rewrite{R/E}^+$ (resp. $\rewrites{R/E}$).
Reducibility of $\rewrite{R/E}$ is undecidable in general since $E$-congruence classes can be arbitrarily large. Therefore, $R/E$-rewriting is usually implemented by $R,E$-rewriting under some conditions on $R$ and $E$ such as confluence, termination, and coherence (see~\cite{JK86,Meseguer17}). A relation $\rewrite{R,E}$ on $\TermsOn{\Symbols}{\Variables}{}{}{}$ is defined as: $t \rewrite{p,R,E} t'$ (or just $t \rewrite{R,E} t'$) iff there is a non-variable position $p \in \funocc{t}$, a rule $l \to r$ in $R$, and a substitution $\sigma$ such that $\subterm{t}{p} \congr{E} l\sigma$ and $t' = \replace{t}{p}{r\sigma}$. The narrowing relation $\narrow{}{R,E}$ on $\TermsOn{\Symbols}{\Variables}{}{}{}$ is defined as: $t \narrow{\sigma}{p,R,E} t'$ (or just $t \narrow{\sigma}{R,E} t'$) iff there is a non-variable position $p \in \funocc{t}$, a rule $l \to r$ in $R$, and a substitution $\sigma$ such that $\subterm{t}{p}\sigma \congr{E} l\sigma$ and $t' = (\replace{t}{p}{r})\sigma$.
We call $(\Sigma,B,E)$ a \emph{decomposition} of an order-sorted equational theory ${(\Sigma,E\uplus B)}$ if $B$ is regular, linear, sort-preserving, defined using top sorts, and has a finitary and complete unification algorithm, which implies that $B$-matching is decidable, and equations $E$ are oriented into rules $\overrightarrow{E}$ such that they are sort-decreasing and \emph{convergent}, i.e., confluent, terminating, and strictly coherent modulo $B$ \cite{DM12,LM16,Meseguer17}. The irreducible version of a term $t$ is denoted by $t\norm{R,E}$.
Given a decomposition $(\Sigma,B,E)$ of an equational theory and a term $t$, a pair $(t',\theta)$ of a term $t'$ and a substitution $\theta$ is an $E,B$-\emph{variant} (or just a variant) of $t$ if $t\theta\norm{E,B} \congr{B} t'$ and $\theta\norm{E,B} \congr{B} \theta$ ~\cite{CD05,ESM12}. A \emph{complete set of $E,B$-variants}~\cite{ESM12} (up to renaming) of a term $t$ is a subset, denoted by $\sem{t}$, of the set of all $E,B$-variants of $t$ such that, for each $E,B$-variant $(t',\sigma)$ of $t$, there is an $E,B$-variant $(t'', \theta) \in \sem{t}$ such that $(t'',\theta) \sqsupseteq_{E,B} (t',\sigma)$, i.e., there is a substitution $\rho$ such that $t' \congr{E} t''\rho$ and $\restrict{\sigma}{\var{t}} =_{E} \restrict{(\theta\rho)}{\var{t}}$. A decomposition $(\Sigma,B,E)$ has the \emph{finite variant property} (FVP)~\cite{ESM12} (also called a \emph{finite variant decomposition}) iff for each $\Sigma$-term $t$, there exists a complete and finite set $\sem{t}$ of variants of $t$. Note that whether a decomposition has the finite variant property is undecidable~\cite{BGLN13} but a technique based on the dependency pair framework has been developed in \cite{ESM12} and a semi-decision procedure that works well in practice is available in~\cite{CME14tr}.
\section{Variant-based Equational Unification in Maude 2.7.1}\label{sec:eq_unification}
Rewriting logic \cite{Meseguer92} is a flexible semantic framework within which different concurrent systems can be naturally specified
(see \cite{Meseguer12}). Rewriting logic is efficiently implemented in the high-performance system Maude~\cite{maude-manual}, which itself has a formal environment of verification tools thanks to its reflective capabilities (see \cite{Maude07,Meseguer12}).
Since 2007, several symbolic capabilities have been successively added to Maude (see~\cite{DEEM+18,Meseguer18-wollic} and references therein). First, Maude has been endowed with unification, i.e., \emph{order-sorted equational unification}. Second, Maude has been extended with symbolic reachability features that rely on Maude's unification, i.e., \emph{narrowing-based reachability analysis} as well as the more general \emph{symbolic LTL model checking of infinite-state systems}~\cite{EM07,BEM13}. However, Maude's unification features are quite general in nature: (i) they are applicable to order-sorted signatures; (ii) they work modulo any combination of the equational axioms of associativity (A), commutativity (C), and identity (U); and (iii) they work modulo a set of equations that are assumed convergent modulo axioms. The third part is supported via the concept of the \emph{variants} of a term \cite{CD05} and the \emph{folding variant narrowing strategy} \cite{ESM12}, which achieves termination when the equational theory has the \emph{finite variant property} \cite{CD05,ESM12}. All these unification capabilities are seamlessly provided by a variant-based unification command in Maude, as shown below.
Equational unification can be simply understood as variant computation in an extended equational theory.
\begin{definition}\label{def:extended}{\rm\cite{ESM12}}
Given a decomposition $(\Sigma,B,E)$ with a poset of sorts $(\sort{S}, \leq)$ of an equational theory $(\Sigma,\ensuremath{\mathcal E})$, we extend $(\Sigma,B,E)$ and $(\sort{S}, \leq)$ to $(\widehat{\Sigma},B,\widehat{E})$ and $(\sort{\widehat{S}}, \leq)$ as follows:
\begin{enumerate}
\item we add a new sort \sort{Truth} to $\sort{\widehat{S}}$, not related to any sort in $\Sigma$,
\item we add a constant operator $\pr{tt}$ of sort $\sort{Truth}$ to $\widehat{\Sigma}$,
\item for each top sort of a connected component \sort{[s]}, we add an operator $\pr{eq}$ : \sort{[s]} $\times$ \sort{[s]} $\rightarrow$ \sort{Truth} to $\widehat{\Sigma}$, and
\item for each top sort $\sort{[s]}$, we add a variable $X{:}\sort{[s]}$ and an extra rule $\pr{eq}(X{:}\sort{[s]},X{:}\sort{[s]}) \rightarrow \pr{tt}$ to $\widehat{E}$.
\end{enumerate}
\end{definition}
Then, given any two $\Sigma$-terms $t,t'$, if $\theta$ is an equational unifier of $t$ and $t'$, then the $E{,}B$-canonical forms of $t\theta$ and $t'\theta$ must be $B$-equal and therefore the pair $(\pr{tt},\theta)$ must be a variant of the term $\pr{eq}(t,t')$. Furthermore, if the term $\pr{eq}(t,t')$ has a finite set of most general variants, then we are \emph{guaranteed} that the set of most general $\mathcal{E}$-unifiers of $t$ and $t'$ is \emph{finite}.
Let us make explicit the relation between variants and equational unification. First, we define the intersection of two sets of variants. Without loss of generality, we assume in this paper that each variant pair $(t',\sigma)$ of a term $t$ uses new freshly generated variables.
\begin{definition}[Variant Intersection]{\rm\cite{ESM12}}
Given a decomposition $(\Sigma,B,E)$ of an equational theory, two $\Sigma$-terms $t_1$ and $t_2$ such that $W_\cap = \var{t_1}\cap\var{t_2}$ and $W_\cup = \var{t_1}\cup\var{t_2}$, and two sets $V_1$ and $V_2$ of variants of $t_1$ and $t_2$, respectively, we define
$V_1 \cap V_2 =
\{(u_1\sigma,\theta_1\sigma \cup \theta_2\sigma \cup \sigma) \mid (u_1,\theta_1) \in V_1 \wedge (u_2,\theta_2) \in V_2 \wedge
\exists \sigma: \sigma \in \csuV{u_1 = u_2}{W_\cup}{B}
\wedge
\restrict{(\theta_1\sigma)}{W_\cap}
\congr{B}
\restrict{(\theta_2\sigma)}{W_\cap}
\}
$.
\end{definition}
Then, we define variant-based unification as the computation of the variants of the two terms in a unification problem and their intersection.
\begin{proposition}[Variant-based Unification]{\rm\cite{ESM12}}
Let $(\Sigma,B,E)$ be a decomposition of an equational theory. Let $t_{1},t_{2}$ be two $\Sigma$-terms. Then, $\rho$ is a unifier of $t_{1}$ and $t_{2}$ iff $\exists (t',\rho)\in\sem{t_{1}}\cap\sem{t_{2}}$.
\end{proposition}
The most recent version 2.7.1 of Maude \cite{maude-manual} incorporates variant-based unification based on the folding variant narrowing strategy \cite{ESM12}. First, there exists a variant generation command of the form:
\noindent
{\small
\begin{verbatim}
get variants [ n ] in ModId : Term .
\end{verbatim}
}
\noindent
where $n$ is an optional argument providing a bound on the number of variants requested, so that if the cardinality of the set of variants is greater than the specified bound, the variants beyond that bound are omitted; and \texttt{ModId} is the identifier of the module where the command takes place. Second, there exists a variant-based unification command of the form:
\noindent
{\small
\begin{verbatim}
variant unify [ n ] in ModId : T1 =? T1' /\ ... /\ Tk =? Tk' .
\end{verbatim}
}
\noindent
where $k\geq 1$ and $n$ is an optional argument providing a bound on the number of unifiers requested, so that if there are more unifiers, those beyond that bound are omitted; and \texttt{ModId} is the identifier of the module where the command takes place.
\begin{example}\label{ex:xor}
Consider the following equational theory for exclusive-or that assumes three extra constants \verb+a+, \verb+b+, and \verb+c+. Note that the theory is not coherent modulo $AC$ without the second equation.
{\small
\begin{verbatim}
fmod EXCLUSIVE-OR is
sorts Elem ElemXor .
subsort Elem < ElemXor .
ops a b c : -> Elem .
op mt : -> ElemXor .
op _*_ : ElemXor ElemXor -> ElemXor [assoc comm] .
vars X Y Z U V : [ElemXor] .
eq [idem] : X * X = mt [variant] .
eq [idem-Coh] : X * X * Z = Z [variant] .
eq [id] : X * mt = X [variant] .
endfm
\end{verbatim}
}
\noindent
The attribute \verb+variant+ specifies that these equations will be used for variant-based unification. Since this theory has the finite variant property (see \cite{CD05,ESM12}), given the term \verb!X * Y! it is easy to verify that there are seven most general variants.
{\small
\begin{verbatim}
Maude> get variants in EXCLUSIVE-OR : X * Y .
Variant #1                                ...  Variant #7
[ElemXor]: #1:[ElemXor] * #2:[ElemXor]    ...  [ElemXor]: %1:[ElemXor]
X --> #1:[ElemXor]                        ...  X --> %1:[ElemXor]
Y --> #2:[ElemXor]                        ...  Y --> mt
}
\noindent Note that there are two forms of fresh variables, {\small\textit{\texttt{\#n:Sort}}} and {\small\textit{\texttt{\%n:Sort}}}, depending on whether they are generated by unification modulo axioms or by narrowing with the equations modulo axioms. Also note that the two forms have different counters.
When we consider a variant unification problem between terms $X * Y$ and $U * V$, there are $57$ unifiers:
{\small
\begin{verbatim}
Maude> variant unify in EXCLUSIVE-OR : X * Y =? U * V .
Unifier #1
X --> ...
Y --> ...
V --> ...
U --> ...
Unifier #2
X --> ...
Y --> ...
V --> ...
U --> ...
...
\end{verbatim}
}
\end{example}
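The extension of Definition~\ref{def:extended} can be materialized as an ordinary
Maude functional module. The following sketch realizes it for the theory of
Example~\ref{ex:xor}; the module name and the operator name \verb+equ+ are ours
(we avoid reusing the name \verb+eq+, which is a Maude keyword):
{\small
\begin{verbatim}
fmod EXCLUSIVE-OR-EQ is
  protecting EXCLUSIVE-OR .
  sort Truth .
  op tt : -> Truth .
  *** one equality predicate for the (single) connected component
  op equ : [ElemXor] [ElemXor] -> Truth .
  var W : [ElemXor] .
  eq [refl] : equ(W, W) = tt [variant] .
endfm
\end{verbatim}
}
\noindent The variants of the form $(\pr{tt},\theta)$ computed by
\verb+get variants in EXCLUSIVE-OR-EQ : equ(X * Y, U * V) .+ then correspond
exactly to the equational unifiers of $X * Y \? U * V$.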
Note that this method does not provide an equational unification algorithm in general: given an equational theory $(\Sigma,\mathcal{E})$ and two terms $t,t'$ that have a finite, minimal, and complete set of equational unifiers modulo $\ensuremath{\mathcal E}$, the equational theory $\ensuremath{\mathcal E}$ may not have a finite variant decomposition. An example is unification under homomorphism (or one-sided distributivity), where there is a finite number of unifiers of two terms but the theory does not satisfy the finite variant property (see \cite{CD05,ESM12}).
The following result from \cite{ESM12} ensures a complete set of unifiers for a finite variant decomposition.
\begin{corollary}[Finitary $\ensuremath{\mathcal E}$-unification]{\rm\cite{ESM12}}
\label{cor:finitary-unification}
Let $(\Sigma,B,E)$ be a finite variant decomposition of an equational theory. Given two terms $t,t'$, the set
$\csuV{t = t'}{\cap}{E\cup B}=\{\theta \mid (w,\theta)\in\sem{t}\cap\sem{t'}\}$ is a \emph{finite and complete} set of unifiers for $t = t'$.
\end{corollary}
However, Corollary~\ref{cor:finitary-unification} does not provide a minimal set of \emph{most general} unifiers w.r.t. the $\sqsupseteq_{E\cup B}$ relation.
For instance, it is well-known that unification in the exclusive-or theory is unitary, i.e., there exists only one most general unifier modulo exclusive-or
\cite{KN87}.
For the unification problem $X * Y \? U * V$ of Example~\ref{ex:xor}, the most general unifier w.r.t. $\sqsupseteq_{E\cup B}$ is
$\{X \mapsto Y * U * V\}$,
which should be appropriately written as
$$\sigma=\{X \mapsto Y' * U' * V', Y \mapsto Y', U \mapsto U', V \mapsto V'\}.$$
Note that
$\{Y \mapsto X * U * V\}$,
$\{U \mapsto Y * X * V\}$,
and
$\{V \mapsto Y * U * X\}$ are equivalent to the former unifier w.r.t. $\sqsupseteq_{E\cup B}$
by composing $\sigma$ with, respectively,
$\rho_1=\{Y' \mapsto X'' * U'' * V'',X' \mapsto X'', U' \mapsto U'', V' \mapsto V''\}$,
$\rho_2=\{U' \mapsto Y'' * X'' * V'',X' \mapsto X'', Y' \mapsto Y'', V' \mapsto V''\}$,
and
$\rho_3=\{V' \mapsto Y'' * U'' * X'',X' \mapsto X'', U' \mapsto U'', Y' \mapsto Y''\}$.
Similarly,
$\{X \mapsto U, Y \mapsto V\}$
and
$\{X \mapsto V, Y \mapsto U\}$
are equivalent to all the previous ones.
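For instance, $\sigma\composeSubst\rho_1$ maps $X$ to $(X'' * U'' * V'') * U'' * V''$, which simplifies modulo the variant equations to $X''$, so that $\sigma\composeSubst\rho_1$ is, up to renaming, the unifier $\{Y \mapsto X * U * V\}$.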
\section{Computing More General Variant Unifiers}\label{sec:mgvu}
Note that when $(\Sigma,B,E)$ is a finite variant decomposition and $B$-unification is finitary, we get an \emph{$E\cup B$-matching algorithm} as $\textit{Match}_{E\cup B}(u,v)=\{\theta \mid \bar{\theta} \in \csuV{u = \bar{v}}{\cap}{E\cup B}\}$, where $\bar{v}$ is obtained from $v$ by turning its variables $x_1,\ldots,x_n$ into fresh constants $\bar{x}_1,\ldots,\bar{x}_n$, and $\theta$ is obtained from $\bar{\theta}$ by, given a binding $x \mapsto \bar{t}\in\bar{\theta}$, adding the binding $x\mapsto t$ to $\theta$; the term $t$ is easily obtained from $\bar{t}$ by replacing every occurrence of a fresh constant $\bar{x}_1,\ldots,\bar{x}_n$ by its original. We say $t \sqsupseteq_{E\cup B} t'$ if $\textit{Match}_{E\cup B}(t,t')\neq\emptyset$, and $t \sqsupset_{E\cup B} t'$ if $t \sqsupseteq_{E\cup B} t'$ and $t \not=_{E\cup B} t'$.
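For instance, deciding $X * Y \sqsupseteq_{E\cup B} U * V$ in Example~\ref{ex:xor} amounts to a variant unification problem against a grounded version of $U * V$. The following sketch illustrates the construction; the module and constant names are ours:
{\small
\begin{verbatim}
fmod EXCLUSIVE-OR-GROUND is
  protecting EXCLUSIVE-OR .
  *** fresh constants standing for the variables U and V
  ops cU cV : -> [ElemXor] .
endfm
\end{verbatim}
}
\noindent Every unifier returned by
\verb+variant unify in EXCLUSIVE-OR-GROUND : X * Y =? cU * cV .+
instantiates only $X$ and $Y$ and, after replacing the constants back by the
original variables, is an $E\cup B$-matching substitution; an empty answer set
would mean $X * Y \not\sqsupseteq_{E\cup B} U * V$.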
It is easy to provide, at the theoretical level, a minimal set of most general variant unifiers by post-filtering the set of computed unifiers by using $\sqsupseteq_{E\cup B}$.
\begin{proposition}[Post-filtered Variant-based Unification]\label{prop:post}
Let $(\Sigma,B,E)$ be a finite variant decomposition of an equational theory. Given two terms $t,t'$, the set $\csuV{t = t'}{\cap,\sqsupset}{E\cup B}= \{\theta \mid \theta \in \csuV{t = t'}{\cap}{E\cup B} \wedge \nexists \theta'\in \csuV{t = t'}{\cap}{E\cup B}\setminus\{\theta\} : \theta' \sqsupset_{E\cup B} \theta\}$ is a \emph{finite and complete} set of unifiers for $t = t'$.
Even more, the quotient $\csuV{t = t'}{\cap,\sqsupset}{E\cup B}/_{\simeq_{E\cup B}}$ w.r.t.\ the equivalence relation $\simeq_{E\cup B}$ induced from $\sqsupseteq_{E\cup B}$ is a \emph{finite, minimal, and complete} set of unifiers for $t = t'$.
\end{proposition}
We have implemented both post-filtering stages $\csuV{t = t'}{\cap,\sqsupset}{E\cup B}$ and $\csuV{t = t'}{\cap,\sqsupset}{E\cup B}/_{\simeq_{E\cup B}}$ in an extended version of Full Maude version 27g \cite{full-maude} available at \url{http://safe-tools.dsic.upv.es/mgvu}. The new command implementing the algorithm $\csuV{t = t'}{\cap,\sqsupset}{E\cup B}$ is as follows:
\noindent
{\small
\begin{verbatim}
(post variant unify [ n ] in ModId : T1 =? T1' /\ ... /\ Tk =? Tk' .)
\end{verbatim}
}
\noindent where $k\geq 1$ and $n$ is an optional argument providing a bound on the number of unifiers requested, so that if there are more unifiers, those beyond that bound are omitted; and \texttt{ModId} is the identifier of the module where the command takes place.
When we consider the previous variant unification problem between terms $X * Y$ and $U * V$, now we get just $7$ unifiers from the $57$ unifiers above.
{\small
\begin{verbatim}
Maude> (post variant unify in EXCLUSIVE-OR : X * Y =? U * V .)
Unifier #1          ...          Unifier #7
X --> ...
Y --> ...
V --> ...
U --> ...
\end{verbatim}
}
\noindent The new command reporting the quotient $\csuV{t = t'}{\cap,\sqsupset}{E\cup B}/_{\simeq_{E\cup B}}$ is as follows:
\noindent
{\small
\begin{verbatim}
(post quotient variant unify [ n ] in ModId : T1 =? T1' /\ ... /\ Tk =? Tk' .)
\end{verbatim}
}
\noindent where $k\geq 1$ and $n$ is an optional argument providing a bound on the number of unifiers requested, so that if there are more unifiers, those beyond that bound are omitted; and \texttt{ModId} is the identifier of the module where the command takes place.
When we consider the previous variant unification problem between terms $X * Y$ and $U * V$, now we get just one unifier, since all the seven unifiers reported before are equivalent modulo exclusive-or.
{\small
\begin{verbatim}
Maude> (post quotient variant unify in EXCLUSIVE-OR : X * Y =? U * V .)
Unifier #1
X -->
Y -->
V -->
U -->
\end{verbatim}
}
\section{Fast Computation of More General Variant Unifiers}\label{sec:mgvu-fast}
The computation of both $\csuV{t = t'}{\cap,\sqsupset}{E\cup B}$ and $\csuV{t = t'}{\cap,\sqsupset}{E\cup B}/_{\simeq_{E\cup B}}$ is extremely expensive (see Section~\ref{sec:exp} below), both in execution time and in memory usage, because we must use the same variant-based unification machinery in Maude first for obtaining the variant unifiers and then for filtering them. In this section, we provide the main contribution of this paper on improving the computation of a set of most general variant unifiers. Let us motivate our main results with an example.
When we consider a variant unification problem between terms $X$ and $U * V$, we get an explosion of all the variants of $U * V$.
{\small
\begin{verbatim}
Maude> variant unify in EXCLUSIVE-OR : X =? U * V .
Unifier #1
X -->
V -->
U -->
Unifier #2
X --> mt
V --> #1:[ElemXor]
U --> #1:[ElemXor]
Unifier #3
X --> #2:[ElemXor] * #3:[ElemXor]
V --> #1:[ElemXor] * #2:[ElemXor]
U --> #1:[ElemXor] * #3:[ElemXor]
Unifier #4
X --> #1:[ElemXor]
V --> #1:[ElemXor] * #2:[ElemXor]
U --> #2:[ElemXor]
Unifier #5
X --> #1:[ElemXor]
V --> #2:[ElemXor]
U --> #1:[ElemXor] * #2:[ElemXor]
Unifier #6
X --> #1:[ElemXor]
V --> mt
U --> #1:[ElemXor]
Unifier #7
X --> #1:[ElemXor]
V --> #1:[ElemXor]
U --> mt
\end{verbatim}
}
\noindent but it is clear that the simplest, most general unifier is $\{X \mapsto U * V\}$:
{\small
\begin{verbatim}
Maude> (post quotient variant unify in EXCLUSIVE-OR : X =? U * V .)
Unifier #1
X -->
V -->
U -->
\end{verbatim}
}
The main idea here, common to any unification algorithm (see \cite{BS-handbook00}), is that when a variable is found, i.e., $X \? t$, there is no need to search for further unifiers, since any other unifier will be an instance of $X \mapsto t$. We have formalized this idea, extended to the case of an arbitrary context, $C[X] \? C[t]$. Indeed, we have formalized it for the very general case of contexts modulo $B$, i.e., $C_1[X] \? C_2[t]$ s.t. $C_1[\Box] =_{B} C_2[\Box]$. The following auxiliary result, stating that a narrowing step from $t$ need not interfere with $C_1$, $C_2$, and $X$, is essential.
\begin{lemma}\label{lem}
Given a decomposition $(\Sigma,B,E)$ of an equational theory, two $\Sigma$-terms $t_1$ and $t_2$ s.t.
$W_\cap = \var{t_1}\cap\var{t_2}$ and
$W_\cup = \var{t_1}\cup\var{t_2}$,
$(u_1,\theta_1) \in \sem{t_1}$,
$(u_2,\theta_2) \in \sem{t_2}$,
$\sigma \in \csuV{u_1 = u_2}{W_\cup}{B}$ s.t.
$\restrict{(\theta_1\sigma)}{W_\cap}
\congr{B}
\restrict{(\theta_2\sigma)}{W_\cap}$,
$(u'_1,\theta'_1) \in \sem{t_1}$ s.t. $(u'_1,\rho) \in \sem{u_1}$ and $\restrict{\theta'_1}{W_\cup} \congr{B} \restrict{\theta_1\rho}{W_\cup}$,
$\sigma' \in \linebreak \csuV{u'_1 = u_2}{W_\cup}{B}$ s.t.
$\restrict{(\theta'_1\sigma')}{W_\cap}
\congr{B}
\restrict{(\theta_2\sigma')}{W_\cap}$,
and
$\domain{\sigma}\cap\domain{\rho}=\emptyset$,
then
$\restrict{((\theta_1\cup\theta_2)\sigma)}{W_\cup}$ and $\restrict{((\theta'_1\cup\theta_2)\sigma')}{W_\cup}$ are both equational unifiers of $t_1$ and $t_2$
but
$\restrict{((\theta_1\cup\theta_2)\sigma)}{W_\cup} \sqsupseteq_{E\cup B} \restrict{((\theta'_1\cup\theta_2)\sigma')}{W_\cup}$.
\end{lemma}
\begin{proof}
The statement of the Lemma is depicted in Figure~\ref{fig:lem}.
The proof is done by realizing that
$\domain{\sigma}\cap\domain{\rho}=\emptyset$
implies that
$\restrict{((\theta_1\cup\theta_2)(\sigma\cup\rho))}{W_\cup}$
is also a unifier of $t_1$ and $t_2$, and then
$$\restrict{(\theta_1\cup\theta_2)\sigma}{W_\cup}
\sqsupseteq_{E\cup B}
\restrict{(\theta_1\cup\theta_2)(\sigma\cup\rho)}{W_\cup}
\sqsupseteq_{E\cup B}
\restrict{(\theta_1\cup\theta_2)\sigma\rho\sigma'}{W_\cup}
=_{B}
\restrict{(\theta'_1\cup\theta_2)\sigma'}{W_\cup}$$
\end{proof}
\begin{figure}[t]
$$
\xymatrix{
t_1\ar@{~>}_{\theta_1}_>{*}[d] & t_2\ar@{~>}^{\theta_2}^>{*}[d]\\
u_1\ar@{~>}_{\rho}_>{*}[d] & u_2\ar@{<=>}^{\sigma}[l]\ar@{<=>}^{\sigma'}[ld]\\
u'_1
}
$$
\caption{Sketch of the proof of Lemma~\ref{lem}}\label{fig:lem}
\end{figure}
We now redefine the intersection of two sets of variants. Note that this definition does not prevent the generation of the variants of both terms in a unification problem; techniques for avoiding the generation of variants are outside the scope of this paper.
\begin{definition}[Fast Variant Intersection]
Given a decomposition $(\Sigma,B,E)$ of an equational theory, two $\Sigma$-terms $t_1$ and $t_2$ such that $W_\cap = \var{t_1}\cap\var{t_2}$ and $W_\cup = \var{t_1}\cup\var{t_2}$, and two sets $V_1$ and $V_2$ of variants of $t_1$ and $t_2$, respectively, we define
\noindent
\begin{align*}
V_1 \doublecap V_2 =
\{
&(u_1\sigma,\theta_1\sigma \cup \theta_2\sigma \cup \sigma) \mid (u_1,\theta_1) \in V_1 \wedge (u_2,\theta_2) \in V_2 \wedge\\
&\exists \sigma: \sigma \in \csuV{u_1 = u_2}{W_\cup}{B}
\wedge
\restrict{(\theta_1\sigma)}{W_\cap}
\congr{B}
\restrict{(\theta_2\sigma)}{W_\cap}
\wedge\\
&
(\nexists (u'_1,\theta'_1) \in V_1,\nexists\rho:
(u_1,\rho)\in\sem{u'_1}\wedge\\
&\ \nexists \sigma': \sigma' \in \csuV{u'_1 = u_2}{W_\cup}{B}
\wedge
\restrict{(\theta'_1\sigma')}{W_\cap}
\congr{B}
\restrict{(\theta_2\sigma')}{W_\cap}
\wedge
\domain{\sigma'}\cap\domain{\rho}=\emptyset)
\wedge\\
&
(\nexists (u'_2,\theta'_2) \in V_2,\nexists\rho:
(u_2,\rho)\in\sem{u'_2}\wedge\\
&\ \nexists \sigma': \sigma' \in \csuV{u_1 = u'_2}{W_\cup}{B}
\wedge
\restrict{(\theta_1\sigma')}{W_\cap}
\congr{B}
\restrict{(\theta'_2\sigma')}{W_\cap}
\wedge
\domain{\sigma'}\cap\domain{\rho}=\emptyset)
\}
\end{align*}
\end{definition}
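To illustrate the definition, consider $t_1 = X$ and $t_2 = U * V$ in the exclusive-or theory. The only variant of $X$ is $(X,\mathit{id})$, whereas $\sem{U * V}$ contains $(U * V,\mathit{id})$ together with more instantiated variants such as $(\mathit{mt},\{U\mapsto W, V\mapsto W\})$. Pairing $(X,\mathit{id})$ with $(U * V,\mathit{id})$ yields $\sigma=\{X \mapsto U * V\}$, while every strictly more instantiated variant $(u_2,\theta_2)$ is discarded by the last minimality condition: $u_2$ is reachable by further narrowing from $U * V$, and $(X,\mathit{id})$ already $B$-unifies with $U * V$ itself. Hence $\sem{X}\doublecap\sem{U * V}$ contains a single element, matching the output of the \texttt{fast} unification command shown below.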
Then, we define fast variant-based unification as the computation of the variants of the two terms of a unification problem followed by their \emph{minimal} intersection; the proof of the following result is immediate by Lemma~\ref{lem}.
\begin{proposition}[Fast Variant-based Unification]\label{prop:mgvu-fast}
Let $(\Sigma,B,E)$ be a finite variant decomposition of an equational theory. Given two terms $t,t'$, on the one hand, the set $\csuV{t = t'}{\doublecap}{E\cup B}=\{\theta \mid (w,\theta)\in\sem{t}\doublecap\sem{t'}\}$ is a \emph{finite and complete} set of \emph{unifiers} for $t = t'$ and the quotient $\csuV{t = t'}{\doublecap}{E\cup B}/_{\simeq_{E\cup B}}$ is also a (generally smaller) \emph{finite and complete} set of unifiers for $t = t'$. On the other hand, the set $\csuV{t = t'}{\doublecap,\sqsupset}{E\cup B}= \{\theta \mid \theta \in \csuV{t = t'}{\doublecap}{E\cup B} \wedge \nexists \theta'\in \csuV{t = t'}{\doublecap}{E\cup B}\setminus\{\theta\} : \theta' \sqsupset_{E\cup B} \theta\}$ is a \emph{finite and complete} set of unifiers for $t = t'$. Furthermore, the quotient $\csuV{t = t'}{\doublecap,\sqsupset}{E\cup B}/_{\simeq_{E\cup B}}$ is a \emph{finite, minimal, and complete} set of unifiers for $t = t'$.
\end{proposition}
We have implemented these four fast unification methods in an extended version of Full Maude version 27g \cite{full-maude}, which is available at \url{http://safe-tools.dsic.upv.es/mgvu}:
\begin{itemize}
\item The new command implementing the algorithm $\csuV{t = t'}{\doublecap}{E\cup B}$ is
\noindent
{\small
\begin{verbatim}
(fast variant unify [ n ] in ModId : T1 =? T1' /\ ... /\ Tk =? Tk' .)
\end{verbatim}
}
\item The new command implementing the algorithm $\csuV{t = t'}{\doublecap}{E\cup B}/_{\simeq_{E\cup B}}$ is
\noindent
{\small
\begin{verbatim}
(fast quotient variant unify [ n ] in ModId : T1 =? T1' /\ ... /\ Tk =? Tk' .)
\end{verbatim}
}
\item The new command implementing the algorithm $\csuV{t = t'}{\doublecap,\sqsupset}{E\cup B}$ is
\noindent
{\small
\begin{verbatim}
(fast post variant unify [n] in ModId : T1 =? T1' /\ ... /\ Tk =? Tk' .)
\end{verbatim}
}
\item And the new command implementing the algorithm $\csuV{t = t'}{\doublecap,\sqsupset}{E\cup B}/_{\simeq_{E\cup B}}$ is
\noindent
{\small
\begin{verbatim}
(fast post quotient variant unify [n] in ModId : T1 =? T1'/\ ... /\ Tk =? Tk' .)
\end{verbatim}
}
\end{itemize}
For the unification problem $X * Y$ and $U * V$, the \texttt{fast} command delivers $8$ unifiers instead of the $57$ unifiers computed by standard variant unification. However, $7$ of those $8$ unifiers are equivalent, so the \texttt{fast\;quotient} command delivers only $2$ unifiers. Likewise, the \texttt{fast\;post} command returns the same $7$ unifiers as the \texttt{post} command, and the \texttt{fast\;post\;quotient} command returns the same (most general) unifier as the \texttt{post\;quotient} command above. Note that the \texttt{fast} and \texttt{fast\;quotient} unification commands compute these unifiers in a fraction of the time required by the \texttt{post} and \texttt{post\;quotient} unification commands (see unification problem $P_6$ in Section~\ref{sec:exp}).
When we consider the previous variant unification problem between terms $X$ and $U * V$, we now get just one unifier as desired, again in a fraction of the time required by $\csuV{t = t'}{\cap,\sqsupset}{E\cup B}$ (see unification problem $P_1$ in Section~\ref{sec:exp}).
{\small
\begin{verbatim}
Maude> (fast variant unify in EXCLUSIVE-OR : X =? U * V .)
Unifier #1
X -->
V -->
U -->
\end{verbatim}
}
\noindent
Note that, in this case, clearly the \texttt{fast\;post} and \texttt{fast\;post\;quotient} unification commands do not improve over the \texttt{fast} unification command.
\section{Experimental Evaluation}\label{sec:exp}
To evaluate the performance of both the post-filtering and the fast unification techniques, we have conducted a series of benchmarks available at \url{http://safe-tools.dsic.upv.es/mgvu}.
All the experiments were conducted on a PC with a 3.3GHz Intel Xeon E5-1660 and 64GB RAM. First, we created a battery of 20 different unification problems for both the {\it exclusive-or} and the {\it abelian group} theories. For each problem and theory, we computed: (i) the unifiers by using the standard {\tt variant\;unify} command provided by the C++ core system of Maude; (ii) the unifiers by using the {\tt post\;quotient\;variant\;unify} command implemented at the metalevel of Maude; (iii) the unifiers by using the {\tt fast\;quotient\;variant\;unify} command implemented at the metalevel of Maude; and (iv) the unifiers by using the {\tt fast\;post\;quotient\;variant\;unify} command, also implemented at the metalevel of Maude. We measured both the number of computed unifiers and the time required for their computation.
Since it is unfair to compare the performance between compiled code and interpreted code, i.e., the C++ core system of Maude and a Maude program using Maude's metalevel, we have reimplemented the {\tt variant unify} command at the metalevel and applied the post-filtering and the fast variant intersection to the output returned by this reimplementation.
Table~\ref{tab:xor} (resp.\ Table~\ref{tab:ag}) shows the results obtained for the {\it exclusive-or} (resp.\ {\it abelian group}) theory. \emph{T/O} indicates that a generous $24$-hour \emph{timeout} was reached without any response. The first column describes the unification problem, while the following $\#_{\mathit{maude}}$, $\#_{\mathit{post}}$, $\#_{\mathit{fast}}$, and $\#_{\mathit{fast,post}}$ columns show the number of unifiers computed by Maude's unification command, by the post-filtering technique producing the quotient w.r.t.\ $\sqsupseteq_{E\cup B}$, by the fast unification technique, and by the combination of the fast and post-filtering techniques, respectively. The $\mathcal{T}_{\mathit{maude}}$ column measures the time (in milliseconds) required to execute the {\tt variant unify} command for the given input problem, the $\mathcal{T}_{\mathit{post}}$ column measures the time required by the reimplementation of the {\tt variant unify} command together with the post-filtering technique,
the $\mathcal{T}_{\mathit{fast}}$ column measures the time required by the reimplementation together with the fast unification technique, and the $\mathcal{T}_{\mathit{fast,post}}$ column measures the time required by all three combined: the reimplementation, the fast technique, and the post-filtering.
\begin{table}[t]
\centering
\scriptsize
{\setlength{\tabcolsep}{0.5em}
\begin{tabular*}{\textwidth}{|c|c@{\extracolsep{\fill}}|r|r|r|r|r|r|r|r|}
\cline{1-10}
\multicolumn{2}{|c|}{\it Unification problem}
&\multicolumn{1}{c|}{$\#_{\mathit{maude}}$}
&\multicolumn{1}{c|}{$\mathcal{T}_{\mathit{maude}}$}
&\multicolumn{1}{c|}{$\#_{\mathit{post}}$}
&\multicolumn{1}{c|}{$\mathcal{T}_{\mathit{post}}$}
&\multicolumn{1}{c|}{$\#_{\mathit{fast}}$}
&\multicolumn{1}{c|}{$\mathcal{T}_{\mathit{fast}}$}
&\multicolumn{1}{c|}{$\#_{\mathit{fast,post}}$}
&\multicolumn{1}{c|}{$\mathcal{T}_{\mathit{fast,post}}$}\\
\cline{1-10}
$P_{1}$ & $ V_1 \? V_2 * V_3$ &7 &0 &1 &13 &1 &4 &1 &4\\
$P_{2}$ & $ V_1 \? V_2 * V_3 * V_4$ &57 &49 &1 &6545 &1 &1080 &1 &1168\\
$P_{3}$ & $ V_1 \? f_1(V_2 * V_3 * f_1(V_4))$ &21 &3 &1 &199 &1 &47 &1 &47\\
$P_{4}$ & $ V_1 \? f_2(V_2 * V_3, f_1(V_2 * V_4))$ &61 &98 &1 &18895 &1 &1463 &1 &1470\\
$P_{5}$ & $ V_1 \? f_3(V_2 * V_3,f_1(V_3 * V_4),f_2(V_2,f_1(V_4)))$ &61 &193 &1 &20949 &1 &1958 &1 &1966\\
\cline{1-10}
$P_{6}$ & $ V_1 * V_2 \? V_3 * V_4$ &57 &10 &1 &12240890 &2 &72 &1 &10005912\\
$P_{7}$ & $ V_1 * V_2 \? f_1(V_3 * V_4)$ &28 &8 &1 &697 &4 &17 &1 &41\\
$P_{8}$ & $ V_1 * V_2 \? f_1(V_3 * V_3 * f_1(V_4))$ &4 &0 &1 &6 &4 &3 &1 &5\\
$P_{9}$ & $ V_1 * V_2 \? f_2(V_3 * V_4,f_1(V_3 * V_5))$ &244 &741 &1 &30490862 &4 &2193 &1 &14836\\
$P_{10}$ & $ V_1 * V_2 \? f_3(V_3 * V_4,f_1(V_4 * V_5),f_2(V_3,f_1(V_5)))$ &244 &1277 &1 &30423527 &4 &2868 &1 &14802\\
\cline{1-10}
$P_{11}$ & $ f_1(V_1) \? f_1(V_2 * V_3)$ &7 &0 &1 &13 &1 &4 &1 &4\\
$P_{12}$ & $ f_1(V_1) * f_1(V_2) \? f_1(V_3) * f_1(V_3 * V_4)$ &13 &3 &2 &118 &2 &8 &2 &9\\
$P_{13}$ & $ f_1(V_1 * V_2) \? f_1(V_3 * V_4 * V_5)$ &973 &857 &- &T/O &8 &15539 &- &T/O\\
$P_{14}$ & $ f_2(V_1 * V_2,V_2 * V_3) \? f_2(V_4,V_5)$ &61 &97 &1 &32836 &1 &1471 &1 &1473\\
$P_{15}$ & $ f_3(V_1 * V_2,V_3 * V_4,V_5 * V_6) \? f_3(V_7,V_8,V_9)$ &343 &173 &1 &165260 &1 &20608 &1 &20634\\
\cline{1-10}
$P_{16}$ & $ V_1 \? a * b * V_2$ &8 &0 &1 &11 &1 &2 &1 &2\\
$P_{17}$ & $ V_1 * V_2 \? a * b * V_3$ &69 &9 &1 &2259 &5 &74 &1 &183\\
$P_{18}$ & $ V_1 * a \? V_2 * b$ &8 &0 &1 &11 &4 &2 &1 &4\\
$P_{19}$ & $ f_1(a) * f_1(V_1) \? f_1(V_2 * b) * f_1(V_3 * c)$ &16 &3 &3 &104 &10 &13 &3 &47\\
$P_{20}$ & $ f_2(a,V_1) \? f_2(V_2 * V_3,f_1(a * b))$ &4 &0 &1 &9 &4 &4 &1 &5\\
\cline{1-10}
\end{tabular*}
}
\caption{Experimental evaluation (exclusive-or)}\label{tab:xor}
\end{table}
\begin{table}[t]
\centering
\scriptsize
{\setlength{\tabcolsep}{0.5em}
\begin{tabular*}{\textwidth}{|c|c@{\extracolsep{\fill}}|r|r|r|r|r|r|r|r|}
\cline{1-10}
\multicolumn{2}{|c|}{\it Unification problem}
&\multicolumn{1}{c|}{$\#_{\mathit{maude}}$}
&\multicolumn{1}{c|}{$\mathcal{T}_{\mathit{maude}}$}
&\multicolumn{1}{c|}{$\#_{\mathit{post}}$}
&\multicolumn{1}{c|}{$\mathcal{T}_{\mathit{post}}$}
&\multicolumn{1}{c|}{$\#_{\mathit{fast}}$}
&\multicolumn{1}{c|}{$\mathcal{T}_{\mathit{fast}}$}
&\multicolumn{1}{c|}{$\#_{\mathit{fast,post}}$}
&\multicolumn{1}{c|}{$\mathcal{T}_{\mathit{fast,post}}$}\\
\cline{1-10}
$P_{21}$ & $ V_1 \? V_2 + V_3$ &47 &68 &1 &6185 &1 &778 &1 &806\\
$P_{22}$ & $ V_1 \? f_1(V_2 + V_3)$ &47 &68 &1 &6117 &1 &796 &1 &808\\
$P_{23}$ & $ V_1 \? f_1(V_2 + V_2 + f_1(V_3))$ &8 &13 &1 &125 &1 &43 &1 &43\\
$P_{24}$ & $ V_1 \? f_2(V_2 + V_3 + f_1(V_3),V_4)$ &103 &371 &1 &55662 &1 &10696 &1 &10696\\
$P_{25}$ & $ V_1 \? f_3(V_2,f_1(V_3 + V_4),f_2(V_3,V_5))$ &6 &2 &1 &30 &1 &7 &1 &7\\
\cline{1-10}
$P_{26}$ & $ V_1 + V_2 \? V_3 + V_4$ &3611 &21663 &- &T/O &167 &439304 &- &T/O\\
$P_{27}$ & $ V_1 + V_2 \? f_1(V_3 + V_4)$ &376 &13864 &1 &22207559 &8 &3830 &1 &27870\\
$P_{28}$ & $ V_1 + V_2 \? f_1(V_3 + V_3 + f_1(V_4))$ &64 &1239 &1 &82170 &8 &904 &1 &3382\\
$P_{29}$ & $ V_1 + V_2 \? f_2(V_3 + V_4,f_1(V_5))$ &376 &13373 &1 &19468887 &8 &4059 &1 &30537\\
$P_{30}$ & $ V_1 + V_2 \? f_3(V_3 + V_3,V_4,V_5)$ &32 &466 &1 &4743 &8 &836 &1 &1194\\
\cline{1-10}
$P_{31}$ & $ f_1(V_1) \? f_1(V_2 + V_3)$ &47 &71 &1 &9985 &1 &842 &1 &849\\
$P_{32}$ & $ f_1(V_1) + f_1(V_2) \? f_1(V_3) + f_1(V_3 + V_4)$ &93 &150 &1 &699872 &1 &1417 &1 &1449\\
$P_{33}$ & $ f_1(V_1 + V_2) \? f_1(V_3 + - V_4)$ &3702 &25277 &- &T/O &109 &283851 &1 &48028877\\
$P_{34}$ & $ f_2(V_1 + V_2,V_2 + V_3) \? f_2(V_4,- V_5)$ &188 &356 &1 &154443 &1 &2384 &1 &2409\\
$P_{35}$ & $ f_3(V_1 + V_2,f_1(V_3), - V_4) \? f_3(V_5,- V_6,V_6)$ &47 &1812 &1 &35992 &1 &25889 &1 &29674\\
\cline{1-10}
$P_{36}$ & $ V_1 \? a + - b + V_2$ &14 &5 &1 &117 &1 &20 &1 &29\\
$P_{37}$ & $ V_1 + V_2 \? a + b + V_3$ &510 &1411 &1 &1366009 &107 &5557 &1 &288552\\
$P_{38}$ & $ V_1 + a \? V_2 + b$ &14 &9 &1 &107 &8 &8 &1 &63\\
$P_{39}$ & $ f_1(a) + f_1(V_1) \? f_1(V_2 + - b) + f_1(V_3 + c)$ &12 &17 &2 &277 &2 &150 &2 &142\\
$P_{40}$ & $ f_2(a,V_1) \? f_2(V_2 + V_3,f_1(a + b))$ &8 &79 &2 &831 &8 &764 &1 &920\\
\cline{1-10}
\end{tabular*}
}
\caption{Experimental evaluation (abelian group)}\label{tab:ag}
\end{table}
Table~\ref{tab:xor} shows that, for the {\it exclusive-or} theory, the {\tt fast\;post\;quotient} unification command almost replicates the results obtained by the {\tt post\;quotient} unification command, but in a fraction of the time, as in unification problems $P_{9}$ and $P_{10}$. As for the number of unifiers, Maude reported $973$ unifiers for the unification problem $P_{13}$, while the fast unification technique delivers just $8$ unifiers, whereas applying the post-filtering technique to either standard or fast unification is hopeless. As for the execution time, the unification problem $P_6$ reports only $10$ milliseconds for $\mathcal{T}_{\mathit{maude}}$, $72$ milliseconds for $\mathcal{T}_{\mathit{fast}}$, $12240890$ milliseconds ($3.4$ hours) for $\mathcal{T}_{\mathit{post}}$, and $10005912$ milliseconds ($2.8$ hours) for $\mathcal{T}_{\mathit{fast,post}}$, demonstrating that the post-filtering technique is expensive in any case.
Table~\ref{tab:ag} shows the experimental results for the {\it abelian group} theory. Since this theory is far more complex than the {\it exclusive-or} theory, the execution times and the numbers of unifiers are larger than those in Table~\ref{tab:xor}. For the unification problem $P_{27}$, Maude reported $376$ unifiers and the fast unification technique reported just $8$ unifiers. The post-filtering technique delivers only one most general unifier, but it takes $22207559$ milliseconds ($6.2$ hours) to compute it from the $376$ unifiers and only $27870$ milliseconds (less than $28$ seconds) to compute it from the $8$ unifiers, demonstrating that applying the fast unification technique is advantageous in any case.
\section{Conclusion and Future Work}\label{sec:conc}
The variant-based equational unification algorithm implemented in the most recent version of Maude, version 2.7.1, may compute many more unifiers than necessary and, in this paper, we have explored how to strengthen such an algorithm to produce a smaller set of variant unifiers. Our experiments suggest that this new adaptation of variant-based unification is more efficient, both in execution time and in the number of computed variant unifiers, than the original algorithm.
As far as we know, this is the first work on reducing the number of variant unifiers. The closest related works are methods that combine standard unification algorithms with variant-based unification, such as \cite{EKMN+15,EEMR19}. This is just a first step towards developing new techniques for improving variant-based unification, and we plan to reduce the number of variant unifiers even further.
\label{sec:Intro}
The tunneling of conduction electrons into local charge traps is a prevalent
phenomenon in solid state physics. Traps can be realized by artificial
structures such as quantum dots,\cite{latta11} or by natural ``unwanted''
defects such as dangling bonds \cite{desousa07} and bound states in metal/oxide interfaces. \cite{choi09} It has long been recognized that trap fluctuation causes charge noise in electronic devices, with the signature of individual traps being observed with a Lorentzian $1/f^2$ noise spectral density in small structures \cite{ralls84, desousa05} and an ensemble of them causing $1/f$
noise in large structures. \cite{weissman88} Here we address the fundamental question of how the electron \textit{spin} alters trap noise.
One of the greatest developments of interacting electron physics was the discovery that a local trap interacting with a Fermi sea gives rise to the Kondo effect, the formation of a many-body singlet with conduction electron spins screening out the local trap spin. \cite{wilsonRMP} The signatures of the Kondo effect in transport phenomena are well studied, but key issues related to dynamics have only been addressed recently with the emergence of modern Numerical Renormalization Group (NRG) algorithms. \cite{Bulla:395:2008}
It is particularly interesting to find out whether trap noise will impact devices that are sensitive to \emph{magnetic fluctuations as opposed to charge}, e.g. spin-based or spintronic devices, \cite{zutic04, diao11} in the same way that
it affects conventional charge-based devices. Recent measurements of intrinsic magnetic flux noise in superconducting quantum interference devices do indeed confirm that trap spin fluctuation is the dominant source of noise.\cite{faoro08,sendelbach08,lanting14} Moreover, novel developments in spin noise spectroscopy \cite{crooker04} open several possibilities for the
detection of correlated spin fluctuations in quantum dot systems.
Given these interesting prospects, the question that we address here is the qualitative difference between pure charge/spin noise of a ``Kondo trap'' interacting with a Fermi sea, which we define as a local charge trap in the Kondo regime.
The interplay of Kondo physics and noise has been mostly explored in the context of transport through quantum-dot systems, with the Kondo trap right inside the transport path. In this case trap charge and spin fluctuation are intertwined in a non-trivial way. Calculations of the shot noise and current noise in different set-ups such as single \cite{Meir:Phys.Rev.Lett.:88:116802:2002,Moca:Phys.Rev.B:83:201303:2011,mueller10,Mueller:Phys.Rev.B:245115:2013,Moca:Phys.Rev.B:89:155138:2014}
and double quantum dots\cite{Lopez:Phys.Rev.B:69:235305:2004,Kubo:Phys.Rev.B:83:115310:2011,Breyel:Phys.Rev.B:84:155305:2011}
in the Kondo regime have been reported.
Much less studied is the role of the Kondo state in \textit{spin} noise. The case of spin-current noise was considered in Refs. \onlinecite{Moca:Phys.Rev.B:81:241305:2010} and \onlinecite{Moca:Phys.Rev.B:84:235441:2011}, and qualitative differences between spin-current and charge-current noise were found to exist.
In this article, we show that focusing on \emph{pure spin/charge trap noise} (i.e., finite frequency trap occupation noise)
allows for a different perspective on the problem of Kondo trap dynamics: it enables a clear separation between the contributions of single-particle excitations and the many-particle processes
connected with the formation of the Kondo singlet state. Moreover, considering pure spin (charge) trap noise is important for describing transport experiments with traps outside the transport channel. In this case, trap fluctuations produce bias magnetic (electric) noise that in turn may dominate the spin-current (charge-current) noise.
Our article is organized as follows. In Section~\ref{sec:Model} we outline our model for pure spin/charge trap noise, and establish its connection to the usual spin/charge susceptibilities. We demonstrate six exact results: four sum rules and two Shiba relations. In Section~\ref{sec:HF} we describe our Hartree-Fock (HF) or mean-field approximation, that mainly accounts for single-particle processes. In Section~\ref{sec:NRG} we present our non-perturbative NRG calculations, which account for single-particle and many-particle processes on the same footing. The NRG results show that finite-frequency spin/charge noise have quite distinct behaviors and are dominated by completely different processes. In Section~\ref{sec:analyticspin} we use NRG and the sum rules to obtain an analytic approximation to spin noise in the Kondo regime, and in Section~\ref{sec:disorder} we use this analytic approximation to study the interplay between disorder and Kondo correlations in an ensemble of Kondo traps. We show that, in the presence of disorder, the spin noise displays a temperature-dependent $1/f$ noise that is qualitatively distinct from the temperature-independent charge $1/f$ noise. Finally, Section~\ref{sec:Conclusion} presents our concluding remarks, with a discussion of the impact of our results in the effort to detect Kondo correlations in spin noise spectroscopy experiments, and our prediction of qualitatively different $1/f$ noise impacting spin-based and charge-based devices.
\section{Charge trap model and exact sum rules}
\label{sec:Model}
Our starting point is the Anderson model \cite{anderson61} for a
trapping-center interacting with a Fermi sea,
\begin{equation}
H \!=\!{\cal H}_{\rm band}+{\cal H}_{\rm hyb}+{\cal H}_{\rm trap},
\label{h}
\end{equation}
with
\begin{subequations}
\begin{eqnarray}
{\cal H}_{\rm band} &=&\sum_{k,\sigma}\epsilon_{k\sigma}n_{k\sigma},\\
{\cal H}_{\rm hyb} &=& \sum_{k,\sigma} V_{dk}\left(c^{\dag}_{k\sigma}d_{\sigma}+d^{\dag}_{\sigma}c_{k\sigma}\right),\\
{\cal H}_{\rm trap} &=&\epsilon_d\left(n_{\uparrow}+n_{\downarrow}\right) +U n_{\uparrow}n_{\downarrow}.
\end{eqnarray}
\end{subequations}
In the above, $c^{\dag}_{k\sigma}$ ($c_{k\sigma}$) is a creation
(destruction) operator for a conduction electron with wavevector $k$
and spin $\sigma=\uparrow,\downarrow$,
$n_{k\sigma}=c^{\dag}_{k\sigma}c_{k\sigma}$ counts the number of band
electrons in state $k,\sigma$ with energy
$\epsilon_{k\sigma}$. Similarly, the operators $d^{\dag}_{\sigma}$ and
$d_{\sigma}$ create and destroy a trap electron with spin
$\sigma$, respectively, with $n_{\sigma}=d^{\dag}_{\sigma}d_{\sigma}$
being the number operator for electrons with spin $\sigma$ occupying
the trap state with energy $\epsilon_d$. Finally, $U$ is the
Coulomb repulsion energy for the trap, with $\epsilon_d+U$ the
energy required to add a second electron to a trap site that already contains one electron.
Our goal is to calculate the trap \textit{spin} $S_s(\omega, T)$ and
\textit{charge} $S_c(\omega,T)$ noise spectral densities, defined by:
\begin{equation}
S_{i=s,c}(\omega,T) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dt \;\textrm{e}^{i\omega t}
\left\langle
\delta{\cal \hat{O}}_i(t)\delta{\cal \hat{O}}_i(0)\right\rangle,
\label{Eq:sz_ndnoise}
\end{equation}
where
$\delta{\cal \hat{O}}_i(t)={\cal \hat{O}}_i(t)-\langle{\cal
\hat{O}}_i\rangle$
with trap spin and charge operators given by
${\cal \hat{O}}_s=S_z=(n_{\uparrow}-n_{\downarrow})/2$ and
${\cal \hat{O}}_c=(n_{\uparrow}+n_{\downarrow})$, respectively, and
$\langle \cdot\rangle$ denoting the thermal equilibrium average.
We write an exact expression for the spin and
charge noise by performing a spectral decomposition of Eq.~(\ref{Eq:sz_ndnoise}) in the basis of energy eigenstates:
\begin{equation}
S_{i}(\omega)= \sum_{m,n} \frac{\textrm{e}^{-E_{m}/T}}{Z} \left|\left\langle n|{\cal \hat{O}}_i|m \right\rangle
\right|^{2} \delta(\omega - E_{nm}) - \langle {\cal \hat{O}}_i \rangle^2 \delta(\omega) \;,
\label{Eq:noise_Lehmann_delta}
\end{equation}
where $Z$ is the partition function, $|m \rangle$ are (many-body) eigenstates of the Hamiltonian (\ref{h}) with energy $E_{m}$ ($E_{nm}\equiv E_{n}-E_{m}$) and $\langle n|{\cal \hat{O}}_i|m \rangle$ are the many-body matrix elements of the local operator ${\cal \hat{O}}_i$. For simplicity, we set $\hbar=k_B=1$.
Note that Eq.~(\ref{Eq:noise_Lehmann_delta}) implies that $S_{i}(\omega,T)\geq 0$ and $S_{i}(-\omega,T)=\textrm{e}^{-\omega/T}S_{i}(\omega,T)$ as required by our assumption of thermal equilibrium.
The noise spectrum is closely related to the dynamical susceptibility
associated with the operator ${\cal \hat{O}}_i$. We shall explore this connection in order to derive the exact frequency sum rules and Shiba relations\cite{shiba75} for
$S_i(\omega,T)$.
These relationships will be used in Sec.~\ref{sec:analyticspin} to obtain analytical approximations for the noise spectra.
Assuming that an external field $F_i(t)$ couples to ${\cal \hat{O}}_i$ through
${\cal H}_{{\rm ext}}=-{\cal \hat{O}}_iF_i(t)$,
the linear response of ${\cal \hat{O}}_i$ to $F_i$ will be
$\langle {\cal \hat{O}}_i(t)\rangle_{F\neq 0} - \langle {\cal
\hat{O}}_i\rangle_{F=0}= 2\pi \int d\omega \textrm{e}^{-i\omega t}\chi_{i}(\omega,T)F_i(\omega)$,
where $\chi_{i}(\omega,T)$ is the dynamical susceptibility given by \cite{kubo91}
\begin{equation}
\chi_{i}(\omega,T)=\frac{i}{2\pi}\int_{0}^{\infty}dt\;\textrm{e}^{i\omega t}\left\langle \left[{\cal \hat{O}}_i(t),{\cal \hat{O}}_i(0)\right]\right\rangle.
\label{chio}
\end{equation}
Performing a spectral decomposition of Eq.~(\ref{chio}) and comparing to Eq.~(\ref{Eq:noise_Lehmann_delta}) leads to the following Lehmann representation:
\begin{equation}
\chi_{i}(\omega,T)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{d\omega'}{\omega-\omega'+i\eta}\left[S_{i}(-\omega',T)-S_{i}(\omega',T)\right],
\label{lehmannchi}
\end{equation}
with $\eta\rightarrow 0^+$. Separating the susceptibility into real and imaginary parts, $\chi_i=\chi_i' + i \chi_i''$, using $S_{i}(-\omega,T)=\textrm{e}^{-\omega/T}S_{i}(\omega,T)$,
and taking the imaginary part of Eq.~(\ref{lehmannchi}) leads to
\begin{equation}
\chi''_{i}(\omega,T)=\frac{1-\textrm{e}^{-\omega/T}}{2}S_{i}(\omega,T),
\label{fttheorem}
\end{equation}
which is known as the fluctuation-dissipation theorem. Moreover,
taking the real part of Eq.~(\ref{lehmannchi}) yields
\begin{equation}
\chi'_i(\omega,T) = \frac{1}{2\pi}{\cal P} \int_{-\infty}^{\infty} \frac{d\omega'}{\omega'-\omega}\left(1-\textrm{e}^{-\omega'/T}\right)S_i(\omega',T),
\label{kramerskronig}
\end{equation}
which is the Kramers-Kronig causality relation.
We now derive the frequency sum rules.
The first one is obtained by direct integration of Eq.~(\ref{Eq:noise_Lehmann_delta}) over all frequencies:
\begin{equation}
\int_{-\infty}^{\infty}S_{i}(\omega,T) \; d \omega = \langle {\cal
\hat{O}}^2_i \rangle - \langle {\cal \hat{O}}_i \rangle^2.
\label{sumrule1}
\end{equation}
We call this the \emph{spin} or the \emph{charge} sum rule depending on whether $i=s$ or $i=c$.
Another sum rule is obtained by setting $\omega=0$ in Eq.~(\ref{kramerskronig}), and
noting that Eq.~(\ref{lehmannchi}) implies $\chi_i(\omega=0,T)=\chi_i'(\omega=0,T)$:
\begin{equation}
\int_{-\infty}^{\infty}\frac{1-\textrm{e}^{-\omega'/T}}{2\pi \omega'}S_i(\omega',T)\,d\omega' = \chi_{i}(\omega=0,T).
\label{sumrule2}
\end{equation}
Accordingly, we call this the spin or charge \emph{susceptibility} sum rule. Altogether Eqs.~(\ref{sumrule1}),~(\ref{sumrule2}) form a set of four exact sum rules that are valid at any temperature $T$.
Finally, there are two additional exact relationships between noise and susceptibility, that apply only at $T=0$. These are the so called Shiba relations:\cite{shiba75}
\begin{subequations}
\begin{eqnarray}
{\rm Lim}_{\omega\rightarrow 0^+} \frac{S_s(\omega,T=0)}{8\pi^2\omega} &=& \left[\chi_s(\omega=0,T=0)\right]^{2},\label{shibaspin}\\
{\rm Lim}_{\omega\rightarrow 0^+} \frac{S_c(\omega,T=0)}{2\pi^2\omega} &=& \left[\chi_c(\omega=0,T=0)\right]^{2}.\label{shibacharge}
\end{eqnarray}
\end{subequations}
They imply that $S_i(\omega,T)$ is Ohmic (linear in
$\omega$) at $T=0$, with a slope related to the static susceptibility $\chi_i(\omega\!=\!0,T\!=\!0)$.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig1_ComparisonChargeNoiseNRG_MFT}
\caption{(color online) Charge noise as a function of frequency for the trap in
the symmetric case with $\epsilon_d=-U/2$.
NRG calculations are shown to be well approximated by
a mean-field Hartree-Fock
decomposition (HF) even when $U/\Gamma$ is large and the trap is deep in the
Kondo regime. This shows that charge noise is well described by single-particle
excitations.
}
\label{fig1}
\end{figure}
\section{Hartree-Fock approximation}
\label{sec:HF}
As a first approximation we calculate the noise spectral densities using a \emph{Hartree-Fock} (HF) decomposition, based on factoring expectation values into products of spectral functions.\cite{anderson61} The advantage of HF is that it becomes exact in the $U=0$ non-interacting limit.\cite{desousa05,desousa09} The result for charge noise is
\begin{equation}
S_{c}^{{\rm HF}}(\omega,T)=\sum_{\sigma=\uparrow,\downarrow}\int d\epsilon A_{\sigma\sigma}(\epsilon)A_{\sigma\sigma}(\epsilon-\omega)[1-f(\epsilon)]f(\epsilon-\omega),
\label{mftc}
\end{equation}
and for the spin noise we get simply $S_{s}^{{\rm HF}}(\omega,T)=\frac{1}{4}S_{c}^{{\rm HF}}(\omega,T)$: in the HF approximation the $\uparrow$ and $\downarrow$ occupations fluctuate independently (the cross-correlators $\langle \delta n_{\uparrow}\,\delta n_{\downarrow}\rangle$ vanish), so the magnetic noise is simply $\frac{1}{4}$ times the charge noise. In Eq.~(\ref{mftc}) $f(\epsilon)=1/[\exp{((\epsilon-\epsilon_F)/T)}+1]$ is the Fermi function, and
\begin{subequations}
\begin{eqnarray}
A_{\uparrow\uparrow}(\epsilon)&=&\frac{\Gamma/\pi}{(\epsilon-\epsilon_d)^{2}+\Gamma^{2}},\label{aup}\\
A_{\downarrow\downarrow}(\epsilon)&=&\frac{\Gamma/\pi}{(\epsilon-\epsilon_d-U)^{2}+\Gamma^{2}},\label{adown}
\end{eqnarray}
\end{subequations}
are the HF local densities of states for the trap with spin $\uparrow$ and
$\downarrow$, respectively. The energy scale $\Gamma \equiv \pi \rho V^2_d$ models the rate of escape of a trap electron into the Fermi sea, with $\rho$ the energy density at the Fermi level, and $V_{dk} \equiv V_d$ a $k$-independent coupling between trap and Fermi sea. Note that Eqs.~(\ref{aup})~and~(\ref{adown}) break the local spin symmetry by assuming that the energies of the $\uparrow$ and $\downarrow$ trap states are $\epsilon_d$ and $\epsilon_d+U$,
respectively. This result is well known to be incorrect, in that it misses Kondo physics, i.e. the screening of trap spin by the electron gas spins.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig2_NRGnoiseULinLinFine}
\caption{(color online) Spin noise as a function of frequency for the trap in
the symmetric case with $\epsilon_d=-U/2$. The NRG results
agree with HF only
at $U=0$. As $U$ increases NRG shows that
the magnetic noise increases, developing
a peak at $\omega\approx T_K$. In contrast, the single particle contributions
described by HF
decrease dramatically as $U$ increases. This
shows that magnetic noise is dominated by many-body processes.
}
\label{fig2}
\end{figure}
\section{NRG calculations}
\label{sec:NRG}
We shall compare the Hartree-Fock approach to non-perturbative NRG calculations of the noise spectra, which take into account local spin symmetry and the formation of the Kondo singlet. The NRG algorithm calculates, within some well-controlled
approximations,\cite{Bulla:395:2008} the many-body spectrum for the Anderson model.\cite{Wilson1980,Bulla:395:2008} Conduction electrons are assumed to have a continuum spectrum, forming a metallic band with a half-bandwidth $D$.
At zero temperature, the first term in Eq.\
(\ref{Eq:noise_Lehmann_delta}) can be computed from the NRG spectral
data \cite{CostiHZ94,Bulla:045103:2001,Bulla:395:2008} down to
arbitrarily small non-zero frequencies $|\omega|>0$. The spectral
weight at $\omega\!=\!0$ and the fulfillment of the sum-rules can be
obtained by calculating the expectation values
$\langle {\cal \hat{O}}_i \rangle$ and
$\langle {\cal \hat{O}}^2_i \rangle$ with NRG. Since we will be
interested in the large frequency regime and our spectral functions
obey well-defined sum rules, we have chosen to use the ``Complete Fock Space" (CFS) approach \cite{Peters:Phys.Rev.B:245114:2006,
Weichselbaum:Phys.Rev.Lett.:99:076402:2007} to calculate
$S_{i}(\omega>0)$ at zero temperature. As discussed in Appendix~\ref{sec:NRGdetails}, this choice has two important features: (i) the $T=0$ spectral functions are sum-rule-conserving by construction and (ii) broadening artifacts in the high frequency
regime, which can mask the correct power-law behavior, are
minimized.
\begin{figure}[t!]
\includegraphics[width=1.0\columnwidth]{fig3_Scaling_Gamma1em4_TKSw}
\caption{(Color online) Universal scaling for spin noise in the Kondo regime
for $\epsilon_d=-U/2$. (a, b) NRG results for spin noise $S_s(\omega)$. Note
how all the curves collapse into a single scaling relation when the noise is
written as a function of $\omega/T_K$. For $\omega \lesssim T_K$, the magnetic noise
scales linearly with $\omega$ (Ohmic noise), and for $T_K \lesssim \omega < U$ it
decreases with an anomalous power of frequency $\propto
1/[\omega\log^{2}{(\omega/T_K)}]$. For $\omega > U$, spin noise is cut-off
$\propto 1/\omega^{2}$. (c, d) NRG results for charge noise $S_c(\omega)$
\emph{do not show universal Kondo scaling}, and behave just like the single-particle approximation (HF), with noise peaked at
$\omega\approx\Max{\{\Gamma,U\}}$
with smooth
cut-off $1/\omega^2$ at $\omega > U$.} \label{fig3}
\end{figure}
Figure~\ref{fig1} shows the calculated charge noise in the case
$\epsilon_d=-U/2$ for $\Gamma=10^{-4}D$ and
several different $U$. Remarkably, HF remains a good
approximation to charge noise even at large $U$. We interpret this result to be
evidence that charge noise is dominated by single particle processes \emph{even
when the trap is deep in the Kondo regime ($U\gg \Gamma$ for $T=0$)}.
The situation is drastically different for magnetic noise, as shown in Fig.~\ref{fig2}. While NRG and HF agree with each other in the $U=0$ limit (when HF is exact), as soon as $U$ becomes non-zero the two methods show opposite trends. As $U$ increases, the single-particle noise (HF) decreases, while the many-body noise (NRG) increases. The low-frequency NRG results can be better visualized in Fig.~\ref{fig3}. We find (Fig.~\ref{fig3}-a,b) that the
magnetic noise for a single trap is Ohmic at low frequencies, with a peak at
$\omega \approx T_K$, where $T_K$ is the Kondo temperature.
The magnetic noise spectral densities all collapse onto the same universal curve
and scale as an anomalous power law $\propto
T_K/[\omega\log^{2}{(\omega/T_K)}]$ in the $T_K \!\ll\! \omega \!\ll\! U$
frequency range (Fig.~\ref{fig3}-b), consistent with previous results for the
dynamical spin susceptibility
\cite{KollerHM05,Garst:Phys.Rev.B:205125:2005,Glossop:Phys.Rev.B:104410:2007,fritsch10,Hoerig:Phys.Rev.B:165411:2014}
and the spin-current noise\cite{Moca:Phys.Rev.B:84:235441:2011}
in the Kondo regime.
\section{Analytical approximation for spin noise in the Kondo regime}
\label{sec:analyticspin}
While the HF approximation [Eq.~(\ref{mftc})] failed to describe spin
noise, it was shown to give a good description of charge noise at
$T=0$ [Fig.~\ref{fig1}]. In Appendix~\ref{sec:apphf} we show that \emph{HF
actually provides a good approximation for charge noise at
$T\geq 0$}, in the sense that it approximately satisfies the sum rules and Shiba relations demonstrated in
Section~\ref{sec:Model}. The goal of the current section is to use our NRG calculations,
sum rules and Shiba relations to obtain an analytical approximation
for spin noise at $T\geq 0$ in the Kondo regime.
It is well known\cite{Bulla:395:2008} that NRG has difficulty in calculating spectral features at frequencies $\omega < T$. Here, we propose an alternative approach to evaluate the spin noise over a broader $\omega/T$ range.
Motivated by the susceptibility sum rule Eq.~(\ref{sumrule2}) and the property $S_{s}(-\omega,T)=\textrm{e}^{-\omega/T}S_{s}(\omega,T)$ we propose
the following fit function
\begin{equation}
S_{s}^{\rm Fit}(\omega,T) = \frac{2\omega\chi_{s}(\omega=0,T)}{1-\textrm{e}^{-\omega/T}}\frac{\Gamma_s}{\omega^{2}+\Gamma_s^{2}},
\label{ssfit}
\end{equation}
with the $\omega=0$ susceptibility given by the following continuous piecewise fit to the NRG result\cite{hewsonbook} (the crossover temperatures $0.23\,T_K$ and $28.59\,T_K$ are fixed by continuity):
\begin{widetext}
\begin{equation}
\chi_s(\omega=0,T)=\left\{
\begin{array}{c c}
\frac{{\cal W}}{8\pi T_K},& {\rm for}\;T\leq 0.23T_K,\\
\frac{0.68}{8\pi\left(T+\sqrt{2}T_K\right)}, & {\rm for}\; 0.23 T_K < T \leq 28.59 T_K,\\
\frac{1}{8\pi T}\left[1-\frac{1}{\log{\left(T/T_K\right)}}-\frac{\log{\left(\log\left(T/T_K\right)\right)}}{2\log^{2}{\left(T/T_K\right)}}\right], & {\rm for}\;T>28.59T_K,
\end{array}
\right.
\label{chi0T}
\end{equation}
\end{widetext}
where ${\cal W}=0.4128$ is the Wilson number.
In Eq.~(\ref{ssfit}) $\Gamma_s\equiv\Gamma_s(\omega,T)$ is a fit function of frequency and temperature that will be determined by the exact sum rules and the Shiba relations. We recall that previous relaxational fits for $\Gamma_s$ assume no frequency dependence.\cite{Miranda:JournalofPhysics:CondensedMatter:9871:1996}
Here we allow $\Gamma_s(\omega,T)$ to vary on frequency so that the
logarithmic frequency decay discussed in Section~\ref{sec:NRG} is
properly accounted for.
For $T\gg T_K$, the perturbative method of Suhl-Nagaoka\cite{gruner74,fritsch09} yields the high temperature
limit (the Korringa law):
\begin{equation}
\Gamma_s(\omega,T\gg T_K) \approx \frac{1}{4\pi}\frac{T}{1+\frac{4}{3\pi^{2}}\log^{2}{\left(\frac{T}{T_K}\right)}}.
\label{korringa}
\end{equation}
At $T=0$ the Shiba relation (\ref{shibaspin}) applied to Eq.~(\ref{ssfit}) implies\cite{Miranda:JournalofPhysics:CondensedMatter:9871:1996}
\begin{equation}
\Gamma_s(\omega=0,T=0)=\frac{1}{4\pi^2\chi_s(0,0)}=\frac{2T_K}{\pi {\cal W}},
\label{gammaszero}
\end{equation}
where we used the NRG result $\chi_s(0,0)={\cal W}/(8\pi T_K)$.
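Explicitly, for $\omega>0$ the factor $(1-\textrm{e}^{-\omega/T})\rightarrow 1$ as $T\rightarrow 0$, so that Eq.~(\ref{ssfit}) gives
\begin{equation*}
{\rm Lim}_{\omega\rightarrow 0^{+}} \frac{S_{s}^{\rm Fit}(\omega,T=0)}{8\pi^{2}\omega}=\frac{\chi_s(0,0)}{4\pi^{2}\Gamma_s(0,0)},
\end{equation*}
which equals $\left[\chi_s(0,0)\right]^{2}$ precisely when $\Gamma_s(0,0)$ takes the value in Eq.~(\ref{gammaszero}).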
In order to interpolate between Eqs.~(\ref{korringa})~and~(\ref{gammaszero}) we propose the following expression:
\begin{equation}
\Gamma_s(\omega,T)=\frac{1}{4\pi}
\frac{T+\frac{8}{{\cal W}}T_K}{1+\frac{1}{3\pi^2}
\log^{2}{\left[1+\left(\frac{T}{T_{K}}\right)^{2}+\left(\frac{\omega}{\alpha T_K}\right)^{2}\right]}},
\label{gammasfit}
\end{equation}
where $\alpha$ is a fit parameter to be determined by the spin sum rule [Eq.~(\ref{sumrule1})]:
\begin{equation}
{\rm Sum}_{s}(T)=4\int_{-\infty}^{\infty}d\omega S_{s}^{\rm Fit}(\omega,T).
\label{sumsT}
\end{equation}
The prefactor of $4$ in Eq.~(\ref{sumsT}) normalizes the integral to the free-moment value $\langle S_z^{2}\rangle - \langle S_z\rangle^{2} = 1/4$ expected deep in the Kondo regime (where $\langle n_{\uparrow}+n_{\downarrow}\rangle = 1$ and $\langle S_z\rangle = 0$), so that ${\rm Sum}_s=1$ when the spin sum rule is saturated. This sum rule is most sensitive to $\alpha$ at $T=0$, and we find that the optimal fit value is quite close to $\alpha=3$, when ${\rm Sum}_s(T=0)=0.9994$. As an independent check, we evaluate the spin sum rule at $T>0$ and the susceptibility sum rule at $T\geq 0$:
\begin{equation}
{\rm Sum}_{\chi_s}(T)=\frac{1}{\chi_s(0,T)}\int_{-\infty}^{\infty}d\omega \frac{1-\textrm{e}^{-\omega/T}}{2\pi \omega}S_{s}^{\rm Fit}(\omega,T).
\label{sumchis}
\end{equation}
In all cases, we obtain agreement to within $31\%$. A few examples are shown in Table~\ref{tablespincheck}. Moreover, we find that Eqs.~(\ref{ssfit})~and~(\ref{gammasfit}) with $\alpha=3$ provide an excellent fit of our NRG results at $T=0$, as shown in Fig.~\ref{fig:NewFitSpinNoise}.
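Note also that Eq.~(\ref{gammasfit}) reproduces both limits by construction: at $\omega=T=0$ the logarithm vanishes and $\Gamma_s=2T_K/(\pi{\cal W})$, recovering Eq.~(\ref{gammaszero}), while for $T\gg T_K$ the argument of the logarithm is dominated by $(T/T_K)^{2}$, so that the squared logarithm approaches $4\log^{2}{(T/T_K)}$ and the Korringa law of Eq.~(\ref{korringa}) is recovered.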
\begin{figure}[t]
\includegraphics[width=1.0\columnwidth]{fig4_NewFitSpinNoiseTeq0}
\caption{ (Color online) Comparison of the spin noise fit $S^{\rm Fit}_s(\omega)$ [Eqs.~(\ref{ssfit})~and~(\ref{gammasfit}) with $\alpha=3$] (lines) with the NRG results (symbols) at $T\!=\!0$.
}
\label{fig:NewFitSpinNoise}
\end{figure}
Note that the choice of Eq.~(\ref{gammasfit}) implies that $T_K S^{\rm Fit}_{s}(\omega,T)$ is a universal function of $\omega/T_K$ and $T/T_K$, and the presence of the temperature-dependent functions $\chi_s(0,T)$ and $\Gamma_s(\omega,T)$ suggests that spin noise has a much stronger temperature dependence than charge noise.
In particular, Eq.~(\ref{ssfit}) fully accounts for the
Kondo screening for $T\!<\!T_K$ through $\chi_s(\omega\!=\!0,T)$.
\begin{figure*}
\centering
\subfigure[\;Low temperature behavior.\label{fig5a}]{\includegraphics[width=0.49\textwidth]{fig5a_k10_lmax5_varT}}
\subfigure[\;High temperature behavior.\label{fig5b}]{\includegraphics[width=0.49\textwidth]{fig5b_k10_lmax5_varT}}
\caption[]{(Color online) Spin noise in the presence of trap disorder. The calculated noise for $N$ traps was averaged according to the prescription $\Gamma= \Gamma_0 \textrm{e}^{-\lambda}$, with $\lambda$ the tunneling distance between trap and the Fermi gas uniformly distributed in the interval
$[0,\lambda_{{\rm max}}]$. This gives rise to the broad distribution of Kondo temperatures shown in Eq.~(\ref{ptk}). The resulting noise, shown here for $\kappa=10$ and $\lambda_{{\rm max}}=5$ displays $1/f$ behavior over a frequency range that decreases as the temperature increases. This is in contrast to the temperature independent
charge $1/f$ noise described in the literature.\cite{weissman88}}
\label{fig5}
\end{figure*}
\begin{table}
\begin{center}
\begin{tabular}{c |c |c }
$T/T_K$ & ${\rm Sum}_s$ & ${\rm Sum}_{\chi_s}$\\
\hline
$0$ & $0.9994$ & $0.9518$\\
$0.5$ & $0.9247$ & $0.9502$\\
$1$ & $0.8245$ & $0.9503$\\
$10$ & $0.6910$ & $0.9875$ \\
$100$ & $0.7777$ & $0.9979$\\
\hline
\end{tabular}
\caption{Sum rules [Eqs.~(\ref{sumsT})~and~(\ref{sumchis})] applied to
our analytical fit of spin noise, Eqs.~(\ref{ssfit}) and
(\ref{gammasfit}) with $\alpha=3$. For the spin sum rules we used
analytical approximations for $\chi_s(0,T)$ obtained by NRG
[Eqs.~(4.53)~and~(4.60) in Ref.~\onlinecite{hewsonbook}]. In all cases we find that the sum rules are satisfied to within 31\%.}
\label{tablespincheck}
\end{center}
\end{table}
\section{Spin noise in the presence of disorder}
\label{sec:disorder}
In the case of an ensemble of $N$ Kondo traps, the noise will be
affected by disorder. The usual model for trap disorder (the one
that gives rise to ubiquitous charge $1/f$ noise)\cite{weissman88} is
to assume trap tunneling rate
$\Gamma = \Gamma_0 \textrm{e}^{-\lambda}$, where $\lambda$ models the
tunneling distance between trap and Fermi sea. The model assumes
$\lambda$ uniformly distributed with density
$P'(\lambda)=N/\lambda_{{\rm max}}$ for
$\lambda\in [0,\lambda_{{\rm max}}]$, and $P'(\lambda)=0$ for
$\lambda$ outside this interval, resulting in
$P(\Gamma)=(N/\lambda_{{\rm max}})/\Gamma$ and the corresponding
$1/f$ frequency dependence for trap charge noise. As we shall show, this same model
applied to Kondo traps gives rise to a much broader
distribution of Kondo temperatures that we denote $P(T_K)$.
For definiteness, we assume all Kondo traps have fixed $\epsilon_d$ and $U$, with the disorder solely affecting the parameter $\Gamma(\lambda)$. The dependence of the Kondo temperature
on $\lambda$ is given by\cite{haldane78}
\begin{eqnarray}
T_K(\lambda)&=&\sqrt{\frac{\Gamma(\lambda) U}{2\pi}}\textrm{e}^{\frac{\sqrt{3}\epsilon_d(\epsilon_d+U)}{U}\frac{1}{\Gamma(\lambda)}}\nonumber\\
&=&T_{K}^{{\rm max}}\textrm{e}^{-\left[\frac{\lambda}{2}+\kappa \left(\textrm{e}^{\lambda}-1\right)\right]}.
\end{eqnarray}
Here $\kappa=-\sqrt{3}\epsilon_d (\epsilon_d+U)/(U\Gamma_0)>0$ characterizes the type of trap. We shall assume $\kappa\gg (\lambda_{{\rm max}}+1)/2$, a limit that is typically satisfied by Kondo traps with $U\gg \Gamma$. The maximum and minimum Kondo temperatures of the distribution are given by
$T_{K}^{{\rm max}}=T_K(\lambda=0)$ and
$T_{K}^{{\rm min}}=T_K(\lambda=\lambda_{{\rm max}})$,
respectively; for $T_K\in [T_{K}^{{\rm min}},T_{K}^{{\rm max}}]$
the trap density becomes
\begin{equation}
P(T_K)=\frac{P'(\lambda)}{\left|\frac{dT_K}{d\lambda}\right|}
\approx \frac{\frac{N}{\lambda_{{\rm max}}} }{T_K\left[\kappa-\log{
\left(\frac{T_K}{T_{K}^{{\rm max}}}\right)}\right]},
\label{ptk}
\end{equation}
with $P(T_K)=0$ for $T_K\not\in [T_{K}^{{\rm min}},T_{K}^{{\rm max}}]$; the approximation uses $|dT_K/d\lambda|=T_K(\kappa\,\textrm{e}^{\lambda}+1/2)\approx T_K[\kappa-\log{(T_K/T_{K}^{{\rm max}})}]$, valid for $\kappa\gg(\lambda_{{\rm max}}+1)/2$. Note how $P(T_K)$
is exponentially broader than $P(\Gamma)$: we have
$T_{K}^{{\rm max}}/T_{K}^{{\rm min}}\approx
\exp{\left[\kappa\exp{\left(\lambda_{{\rm max}}\right)}\right]}$,
in contrast to
$\Gamma_{{\rm max}}/\Gamma_{{\rm min}}=\exp{\left(\lambda_{{\rm
max}}\right)}$.
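For the parameter values used in Fig.~\ref{fig5} ($\kappa=10$, $\lambda_{{\rm max}}=5$), this amounts to $T_{K}^{{\rm max}}/T_{K}^{{\rm min}}=\exp{[\lambda_{{\rm max}}/2+\kappa\,(\textrm{e}^{\lambda_{{\rm max}}}-1)]}\sim 10^{641}$, compared to $\Gamma_{{\rm max}}/\Gamma_{{\rm min}}=\textrm{e}^{5}\approx 148$.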
In spite of this difference,
the normalization condition
$\int dT_K P(T_K)\approx N$ still holds since the logarithm in Eq.~(\ref{ptk}) makes $P(T_K)$ flatter than a $\sim 1/T_K$ distribution, thereby making the integral finite. We remark that our $P(T_K)$ is appropriate to describe highly disordered traps, such as traps randomly distributed at an insulator close to the metal/insulator interface. This situation is quite different from Kondo impurities in bulk alloys, whose $P(T_K)$ is considerably less
broad.\cite{Miranda:JournalofPhysics:CondensedMatter:9871:1996,bernal95}
Applying this averaging prescription to our spin noise Eq.~(\ref{ssfit}) yields
\begin{equation}
\left\langle S_{s}(\omega,T)\right\rangle = \int_{T_{K}^{{\rm min}}}^{T_{K}^{{\rm max}}}
d T_K\, P(T_K)\, S_{s}^{\rm Fit}(\omega,T),
\end{equation}
where the $T_K$ dependence of the integrand enters through $\chi_s(0,T)$ and $\Gamma_s(\omega,T)$.
The results are shown in Fig.~\ref{fig5}-a,b.
At low temperatures ($T<T_{K}^{{\rm max}}$) the noise shows
$1/f$ behavior up to frequencies of the order of $T_{K}^{{\rm max}}$; at
larger frequencies, Kondo-enhanced exchange processes lead to a
$1/[f\log^{2}(f)]$ behavior.
For higher temperatures ($T>T_{K}^{{\rm max}}$) the noise saturates
in the low frequency region, and the $1/[f\log^{2}(f)]$ behavior gets washed out of the high frequency region.
Interestingly, the frequency range with $1/f$ behavior gets \textit{reduced} as the temperature increases.
This shows that spin $1/f$ noise behavior is strongly temperature-dependent, in marked contrast to the usually temperature-independent charge $1/f$ noise. The additional temperature dependence implies that temperature actually competes against disorder, converting the spin $1/f$ noise into a Lorentzian.
\section{Concluding Remarks}
\label{sec:Conclusion}
In conclusion, we presented a theory of charge and spin noise of a Kondo trap interacting with a Fermi sea. We showed that trap spin noise is qualitatively different from charge noise, in that the former occurs due to many-body scattering processes, while the latter is mainly dominated by single-particle
tunneling. This difference implies that spin noise has a stronger temperature
dependence than charge noise, and that it is controllable by tuning the Kondo temperature $T_K$ rather than the trap tunneling rate $\Gamma$.
Kondo trap dynamics displays two quite distinct behaviors depending on which property is probed. The experimental methods of charge
\cite{latta11} and spin \cite{crooker04} noise spectroscopy use
optical absorption to detect noise via the fluctuation-dissipation
theorem [optical absorption at frequency $\omega$ is
directly proportional to $\chi_{i}''(\omega,T)$ and to noise as in
Eq.~(\ref{fttheorem})]. Our results elucidate how Kondo
correlations can be observed with these methods. Pure charge
absorption does not enable the detection of the Kondo effect; in
Ref.~\onlinecite{latta11} the formation of the exciton state mixes
charge and spin fluctuation, and this feature was critical in enabling
their observation of the Kondo effect. For spin noise spectroscopy,
universal scaling with Ohmic behavior at
$T\!<\!\omega \!<\! T_K$ coupled with a
$1/[\omega\log^{2}{(\omega/T_K)}]$ tail for $T_K\!\ll\!\omega\!\ll\!U$
can be taken as the signature of the Kondo effect, allowing the
extension of this technique to probe Kondo correlations. However, in the presence of strong disorder over a range of Kondo temperatures $T_K\in [T_{K}^{{\rm min}},T_{K}^{{\rm max}}]$, we find that the Ohmic behavior is washed out, and the signature of Kondo correlations is visible only for $\omega > T_{K}^{{\rm max}}$ and
$T < 10 T_{K}^{{\rm max}}$ [See Fig.~\ref{fig5}b].
The qualitative difference between spin and charge noise survives even in the presence of disorder and at high temperatures (namely $T\gg T_{K}^{{\rm max}}$). As the temperature increases, the range of $1/f$ behavior for spin noise decreases, while the range of $1/f$ charge noise remains essentially unaltered.
The additional temperature dependence for spin noise implies that temperature actually competes against disorder, converting the spin noise $1/f$ behavior into a Lorentzian-like dependence. Given that $1/f$ noise is notoriously difficult to control,\cite{paladino14} we reach the conclusion that ubiquitous trap noise can be more manageable in spin or flux-based devices that are sensitive to magnetic fluctuations rather than charge.
\begin{acknowledgements}
\emph{Acknowledgements.--}LGDS acknowledges support from Brazilian agencies
FAPESP (2013/50220-7), CNPq (307107/2013-2) and PRP-USP NAP-QNano. We acknowledge useful discussions with M.~Le~Dall, E. Miranda, K. Ingersent, H.~E. T\"{u}reci and I. \v{Z}uti\'{c}, and financial support from the Canadian program NSERC-Discovery
and a generous FAPESP-UVic exchange award.
\end{acknowledgements}
\newcommand{\acknowledgments}{\begin{small}\section*{Acknowledgments}\end{small}}
\newcommand\altaffilmark[1]{$^{#1}$}
\newcommand\altaffiltext[1]{$^{#1}$}
\voffset=-0.6in
\title[Multi-Stage AGN Feedback]{Quasar Feedback: More Bang for Your Buck}
\author[Hopkins \&\ Elvis]{
\parbox[t]{\textwidth}{
Philip F. Hopkins\altaffilmark{1}\thanks{E-mail:[email protected]},
\&\ Martin Elvis\altaffilmark{2} }
\vspace*{6pt} \\
\altaffiltext{1}{Department of Astronomy, University of California
Berkeley, Berkeley, CA 94720} \\
\altaffiltext{2}{Harvard-Smithsonian Center for Astrophysics, 60
Garden Street, Cambridge, MA 02138} \\
}
\date{Submitted to MNRAS, March 29, 2009}
\begin{document}
\maketitle
\label{firstpage}
\begin{abstract}
We propose a ``two-stage'' model for the effects of feedback from a bright quasar
on the cold gas in a galaxy. It is difficult for winds or
other forms of feedback from near the accretion disk to directly
impact (let alone blow out of the galaxy) dense molecular clouds at $\sim$kpc radii. But
if such feedback can drive a weak wind or outflow in the hot, diffuse ISM
(a relatively ``easy'' task), then in the wake of such an outflow passing over a
cold cloud, a combination of instabilities and simple pressure gradients
will drive the cloud material to effectively expand in the direction perpendicular
to the incident outflow. This shredding/expansion (and the corresponding decrease in
density) may alone be enough to substantially suppress star formation in the host.
Moreover, such expansion, by even a relatively small factor,
dramatically increases the effective cross section of the cloud material and makes it
much more susceptible to both ionization and momentum coupling from absorption of
the incident quasar radiation field. We show that even a moderate effect of this nature
can dramatically alter the ability of clouds at large radii to be fully ionized
and driven into a secondary outflow by radiation pressure. Since the momentum supplied, and the volume that can be ionized, by the observed quasar radiation field are more than sufficient to affect the entire cold gas supply once
it has been altered in this manner (and
the ``initial'' feedback need only initiate a moderate wind in the low-density
hot gas), this reduces by an order of magnitude the required energy budget
for feedback to affect a host galaxy.
Instead of the $\sim5\%$ of the radiated energy ($\sim100\%$ of the momentum) needed
if the initial feedback must directly heat or ``blow out'' the galactic gas,
this mechanism could be efficient if only $\sim0.5\%$ of the luminosity
($\sim10\%$ of the momentum) couples to drive the initial hot outflow.
This amounts to hot gas outflow rates from near the
accretion disk of only $\sim5-10\%$ of the BH accretion rate.
\end{abstract}
\begin{keywords}
quasars: general --- galaxies: active ---
galaxies: evolution --- cosmology: theory
\end{keywords}
\section{Introduction}
\label{sec:intro}
Observations have established that the masses of supermassive black
holes (BHs) are tightly correlated with various host
galaxy properties \citep{magorrian,FM00,Gebhardt00,
hopkins:bhfp.obs,aller:mbh.esph}. Together with constraints indicating that
most of the BH mass is assembled in optically bright quasar\footnote{In
this paper, we use the term ``quasar'' loosely as a proxy for
high-Eddington ratio activity, rather than as a reference to
specific optical properties.} phases
\citep{Soltan82,salucci:bhmf,yutremaine:bhmf,hopkins:old.age},
this has led to the development of models where feedback processes
from accretion self-regulate BH growth at a critical mass
\citep{silkrees:msigma,dimatteo:msigma,murray:momentum.winds}.
Gas inflows triggered by some process fuel rapid BH growth, until
feedback begins to expel nearby gas and dust. This ``blowout''
results in a short-lived, bright optical quasar that, having expelled its
fuel supply, fades and leaves a remnant on the observed
BH-host correlations \citep{hopkins:lifetimes.methods,hopkins:lifetimes.obscuration}.
These scenarios have been able to explain many quasar observables,
including luminosity functions, lifetimes, and BH mass functions
\citep{hopkins:lifetimes.interp,hopkins:merger.lfs,
hopkins:groups.qso,hopkins:seyfert.bimodality,volonteri:xray.counts,
menci:sam,somerville:new.sam,lapi:qlf.sam,
tortora:2009.agn.jet.fb.and.ell.colors}.
It is much less clear, however, what the impact of whatever feedback
regulates BH growth will be on the host galaxy. In models, such feedback
is invoked to explain the rapid ``quenching'' of star formation and sustained lack of
cooling in massive galaxies
\citep{granato:sam,scannapieco:sam,croton:sam,hopkins:groups.ell,
antonuccio-delogu:2008.jet.fb.destroying.sf.clouds}.
The argument in the models is that, under various simple assumptions,
if sufficient energy or momentum is injected into the ISM near the BH on a timescale
short enough to halt accretion, then it will yield a supersonic
pressure or momentum-driven outflow that propagates to large scales
\citep[see e.g.][]{monaco:feedback,hopkins:qso.all,
shin:mech.agn.fb.constraints}.
But the actual mechanisms of feedback and the physics of the ISM relevant for this
remain highly uncertain.
Highly energetic outflows are associated
with bright quasars \citep[for a review, see][]{veilleux:winds}; these range
from intense winds ($v\sim10^{4}\,{\rm km\,s^{-1}}$)
associated with the central engine seen in the broad
emission line regions and broad absorption line quasars \citep[e.g.][]{weymann:BALs}
to more moderate outflows ($v\sim10^{2}-10^{3}\,{\rm km\,s^{-1}}$) associated
with the narrow line region and the ``warm absorber''
\citep{laor:warm.absorber,crenshaw:nlr}
as well as with small-scale quasar absorption and occultation systems
\citep[e.g.][]{mckernan:1998.agn.occultation.by.clumpy.outflow,
turner:2008.clumpy.agn.disk.wind,miller:2008.clumpy.agn.disk.wind}.
Indeed, high-velocity winds
driven near the accretion disk are theoretically hard to avoid
\citep[see e.g.][]{blandfordpayne:mhd.jets,begelman:agn.compton.heating,
koniglkartje:disk.winds,elvis:outflow.model,
proga:disk.winds.2000,proga:disk.winds}.
However, these are probably tenuous, with an initial mass-loading
$\lesssim \dot{M}_{\rm BH}$
\citep[although in at least some cases, these outflows are extremely
dense, and might have much higher mass-loading factors; see e.g.][]{
hall:2003.high.density.balqso.outflows,hall:2007.agn.outflow.substructure}.
It is not clear whether such ``hot'' outflows could efficiently entrain gas
at larger radii. If most of the gas mass of the galaxy is at some appreciable
fraction of the galaxy effective radius $R_{e}$
and in the form of cold, dense giant molecular clouds (GMCs),
then it is difficult to imagine such a diffuse wind directly ``launching''
the clouds out of the galaxy. It remains unclear whether, in fact, the momentum
associated with the winds that {\em are} known to emanate from the
central engine of a quasar is sufficient to unbind the cold
gas in the host \citep[see e.g.][]{baum:radio.outflows,dekool:large.outflow.1,
dekool:large.outflow.2,Steenbrugge:outflow.mdot,
holt:merger.radio.warm.outflow,gabel:large.outflow,
krongold:warm.absorber.outflow.rate,
krongold:seyfert.outflow,batcheldor:outflow.mechanism,tremonti:postsb.outflows,
McKernan:agn.fb.outflow.rates, ganguly:qso.outflows,
prochaska:qso.outflow}.
In this paper, we argue that it is not necessary that the small-scale, high-velocity
AGN outflows directly entrain any cold gas at scales $\sim R_{e}$.
Rather, so long as these are sufficient to drive a significant wind
in the ``hot'' diffuse ISM, then clouds will be effectively destroyed or deformed and
``secondary'' feedback mechanisms -- namely the radiative effects of
dust absorption and ionization -- will be able to
act efficiently on the cold gas at large scales. This will effectively terminate star formation on a
short timescale, with greatly reduced energy/momentum requirements for
the ``initial'' outflow drivers.
\section{Radiative Feedback In the Presence of Hot Outflows}
\label{sec:radiative}
\begin{figure*}
\centering
\plotside{f1.eps}
\caption{An illustration from a simple simulation of the effects of a hot outflow on
radiation feedback. {\em Left:} Initial dense cloud in pressure
equilibrium with the diffuse ISM. Ionization and momentum flux from
the quasar is negligible, because the effective surface area for absorption is
small.
{\em Center:} At $t=t_{s}$, an outflow generated in the hot/diffuse ISM
hits the cloud. Ambient pressure drops in the wake (low-pressure regions form
on the perpendicular sides of the cloud). A combination of instabilities quickly
causes the cloud material to mix in the perpendicular direction.
{\em Right:} After a few cloud crossing times, the mixing
has increased the effective cross section by a factor
$(R_{c}/R_{0})^{2}\gtrsim10$ in the perpendicular direction.
The cloud may now (with lower density and higher cross section to
absorb quasar photon momentum) be vulnerable to radiative
feedback, and will be accelerated to $v\sim v_{\rm esc}$.
\label{fig:cartoon}}
\end{figure*}
Consider a typical galaxy, where the ISM gas is composed of a
mix of diffuse warm/hot gas and cold clouds.\footnote{By
``diffuse'' ISM, we refer to the warm/hot phases of the ISM,
with effective temperatures $T\sim 10^{4}-10^{6}\,$K and
densities $n_{H}\sim 10^{-2}-10^{0}\,{\rm cm^{-3}}$. The ``clouds,''
on the other hand, represent the cold phase (but {\em not}
collapsing molecular cloud cores) with
$T\lesssim 100\,$K molecular gas and
$n_{H}\sim 10^{2}-10^{4}\,{\rm cm^{-3}}$. The volume filling
factor of the clouds is small, but they dominate
the total gas mass; the hot gas with filling factor near
unity has a typical mass fraction $f_{\rm hot}\lesssim0.1$
\citep[see e.g.][]{mckee.ostriker:ism}.}
Radiation will always act on the cold clouds (in the form
of ionization and momentum injection from absorbed photons),
but they may be too dense and self-shielding to be
significantly affected.
In such a case, one could invoke a blastwave or cold
shell, driven by AGN feedback on small scales, to
entrain this material. Various models and
simulations have shown that if feedback needs to directly launch
a blastwave in {\em both} the hot and cold gas together (sufficient
to entrain most of the galactic gas),
then an efficiency $\eta_{\rm E}\sim0.05$ is the relevant value
(for energy injection where the feedback $\dot{E} = \eta_{\rm E}\,L =
\eta_{\rm E}\,\epsilon_{r}\,\dot{M}_{\rm BH}\,c^{2}$; $\epsilon_{r}\sim0.1$
is the radiative efficiency). If the outflow is instead
momentum-driven, the relevant value for driving hot+cold phases
is $\eta_{\rm p}\sim1$ ($\dot{p} = \eta_{\rm p}\,L/c$).
If, however, the ``initial'' feedback from the central source
need only drive a wind in the low-density hot gas, and
does not necessarily directly entrain the cold clouds,
then the energy required will be much less.
In both cases above, the efficiency needed
simply scales linearly with the mass of material to be driven
(scaling with its binding energy or momentum, respectively).
So, if a fraction $f_{\rm hot}$ of the gas is in the hot, diffuse ISM,
and only that needs to be initially driven, we obtain
\begin{equation}
\eta_{\rm E} = \frac{\dot{E}_{\rm fb}}{L} \sim 0.05\,f_{\rm hot}
\end{equation}
for energy-driven and
\begin{equation}
\eta_{\rm p} = \frac{\dot{p}_{\rm fb}}{L/c} \sim f_{\rm hot}
\end{equation}
for momentum-driven outflows.
For typical $f_{\rm hot}\lesssim0.1$ \citep[e.g.][]{blitz:gmc.properties},
this implies an order-of-magnitude
reduction in the necessary feedback input for ``interesting''
behavior. And in detail, since the hot-phase gas is already virialized
(rather than in e.g.\ a cold disk), the efficiency gains may be even higher.
The question is then, if most of the mass is in cold clouds and only
the hot gas is affected by this initial outflow, will any interesting
behavior result?
Consider a cold molecular cloud in a galaxy. The cloud has a mass
$M_{c}$, and an initial (``equilibrium'') effective spherical radius $R_{0}$.
In the observed typical ISM, these are related by
\begin{equation}
M_{c}\sim300\,M_{\sun}\,R_{0,{\rm pc}}^{2}
\label{eqn:cloud.sizemass}
\end{equation}
\citep[where $R_{0,{\rm pc}}\equiv R_{0}/{1\,{\rm pc}}$;][]{blitz:h2.pressure.corr}.
Since we will later allow the cloud to stretch or deform, define
the instantaneous radius of the cloud as $R_{c}$.
The cloud resides at a spherical distance $r$ from the center of
a \citet{hernquist:profile} profile bulge of total mass
$M_{\rm bul}$ and characteristic
scale length $R_{\rm eff}$, which hosts a BH on the characteristic BH-host
relations, $M_{\rm BH}=\mu_{\rm BH}\,M_{\rm bul}$
\citep[$\mu_{\rm BH}\approx0.0014$;][]{haringrix}.
The BH radiates with a luminosity
\begin{equation}
L_{\rm QSO}=\dot{m}\,L_{\rm Edd}\ ,
\end{equation}
where $\dot{m}$ is the dimensionless Eddington ratio and
$L_{\rm Edd}=1.3\,M_{\rm BH,8}\times10^{46}\,{\rm erg\,s^{-1}}$
is the Eddington luminosity
($M_{\rm BH,8}=M_{\rm BH}/10^{8}\,M_{\sun}$).
We are interested in the case where the cold clouds are
embedded in some kind of hot outflow generated by the ``primary'' AGN feedback.
Specifically, assume that the quasar somehow succeeds in driving an
outflow through the diffuse warm/hot ISM: to be conservative, the outflow
can be tenuous and we will assume that the outflow
``impacting'' the cloud carries negligible momentum
compared to the binding momentum of the cloud. In other words, assume
that at some smaller scale, a wind or outflow is generated sufficient to
sweep up the tenuous, diffuse ISM at large radii, but insufficient to affect cold dense
clouds, which contain most of the ISM gas mass.
This problem of the survival of cold clouds in a post-shock hot medium is
well-studied in the context of star formation and supernova feedback
\citep[see e.g.][and references therein]{kmc:shock.cloud}.
In general, the collision of a shock or
wind of velocity $v_{s}$ with a cloud of initial characteristic
(quasi-spherical) radius $R_{0}$ and density contrast $\chi$ (ratio of
cloud density to external medium density $\chi\equiv n_{c}/n_{0}$)
will launch secondary shocks within the cloud with velocity
$v_{s}/\chi^{1/2}$. This defines a ``cloud crushing'' timescale
$t_{\rm cc} = \chi^{1/2}\,R_{0}/v_{s}$.
In the simple case of a
pure hydrodynamic strong shock, if $t_{\rm cc}$ is
much less than the characteristic timescales for the density to
change behind the shock and $v_{s}/\chi^{1/2}$ is
comparable to or larger than the characteristic internal velocities
of the cloud, then the cloud will be stretched and ``shredded''
by a combination of Rayleigh-Taylor and Kelvin-Helmholtz instabilities
on a timescale $\sim$a few $t_{\rm cc}$ \citep[see e.g.][]{kmc:shock.cloud,
xustone:cloud.shock.3d,fragile:shock.cloud.2004,
orlando:cloud.shock.w.cooling,nakamura:varied.cloud.shock.interactions}.
Given the definitions above, for $v_{s}\sim v_{\rm esc}$,
$t_{\rm cc}$ is much less than the dynamical timescales of interest
at all the spatial scales of interest ($t_{\rm cc}\ll 10^{7}\,{\rm yr}$ for
all $r\gtrsim\chi^{1/2}\,R_{0}$ -- i.e.\ for clouds not in the very nuclear regions).
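As a rough illustration (with representative rather than derived numbers): for
$\chi=100$, $R_{0}=1\,$pc, and $v_{s}\sim300\,{\rm km\,s^{-1}}$,
\begin{equation}
t_{\rm cc} = \chi^{1/2}\,\frac{R_{0}}{v_{s}} \approx 10\times\frac{3.1\times10^{18}\,{\rm cm}}{3\times10^{7}\,{\rm cm\,s^{-1}}} \approx 3\times10^{4}\,{\rm yr}\ ,
\end{equation}
orders of magnitude below the galactic dynamical times at the radii of interest.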
The material stripped off the surface of the cloud by
these instabilities expands into
and mixes with the low-pressure zones created by the passage of the
shock on the sides of the cloud. This leads to an effective net expansion of
the cloud by a factor $\sim\chi^{1/2}$ in radius in the perpendicular shock direction.
Eventually, despite the initial compression, reflection shocks lead to an
expansion by a factor $\sim2$ in the parallel shock direction, bringing the
original cloud material into an effective density and pressure equilibrium with the
external medium. The surface area of the cloud can increase dramatically;
for our purposes, we are interested in the effective
cross section the cloud presents to the perpendicular shock direction.
The radii defined above should be thought of in this manner:
the initial cloud cross section to the hot shock is $\pi\,R_{0}^{2}$;
post-shock, the effective cross section owing to this expansion and
equilibration is $\pi\,R_{c}^{2}\sim\chi\pi\,R_{0}^{2}$.
We illustrate this behavior with a simple toy model system in
Figure~\ref{fig:cartoon}. Specifically, we show an example of a hydrodynamic simulation
of an idealized system, using the {\textsc{ZEUS}} code \citep{zeus:a,zeus:b}. The initial
conditions consist of a Plummer sphere cloud embedded in a uniform background, with density
contrast of $\chi=100$ (peak density of the cloud relative to background),
with the initial system in pressure equilibrium (uniform pressure),
and periodic boundary conditions in a large grid. At time $t=t_{s}$,
the low-density material is rapidly accelerated into a Mach $\sim2$ wind.
Color encodes the gas density, from black (the arbitrary background density) to
red (the initial maximum).
Note that the example shown is purely for illustrative purposes --
we do not include many possible complexities, such as gas cooling,
star formation, or magnetic fields. The behavior of clouds in response to
outflows with such sophistications has been extensively studied in the
references above, and more detailed extensions of such simulations to
the regime of interest here will be the subject of future work. Nevertheless,
this simple experiment illustrates much of the important qualitative behavior.
The qualitative behavior we care about -- the mixing/stretching/deformation
of the cloud in the perpendicular shock direction leading to an increase in the
effective cross section of the cloud -- is in fact quite general.
Simulations have shown that the same instabilities operate regardless of
whether the ``hot outflow'' is a strong shock, weak shock, or wind
(since we assume the hot material is being unbound in this wind, it cannot be substantially
sub-sonic). The timescale of
cloud expansion increases by a factor of a few in the weaker wind case, but it is still much less
than the relevant local galactic dynamical times
\citep{kmc:shock.cloud,jones:mhd.supersonic.cloud}.
The process is also similar in the case of a cloud being impacted by
AGN jets, despite the different densities, temperatures, and magnetic field
states associated with jets and ``bubbles'' \citep[see][]{krause:2007.turbulent.jet.cocoons,
antonuccio-delogu:2008.jet.fb.destroying.sf.clouds}.
A cloud could in principle be stabilized against such instabilities
by being strongly magnetically dominated
\citep{maclow:mhd.shock.cloud,
junjones:shock.cloud.mhd.cosmic.rays,fragile:shock.cloud.2005,
shinstone:3d.mhd.shock.cloud}. However, in this limit, as the hot outflow
sweeps up material, the pressure of the diffuse ISM trailing the outflow
will decline as a steep function of time $t/t_{\rm cc}$
\citep[][]{ostrikermckee:blastwaves}. Since the cloud
is then over-pressurized, it will expand isothermally as the exterior
post-shock pressure drops (the free expansion/equilibration time of the cloud
being short compared to the other timescales of interest).
Because this stops when the system is equilibrated, the
``effective'' net expansion of $R_{c}$ is the same as in the hydrodynamic
shock case, even though the details are quite different.
If nothing more were to happen to the cloud, this would only suppress
star formation for a short time. The cooling instabilities that
produced the cloud in the first place would
operate. In the ``typical'' ISM, clouds mix in the wake of
stellar or supernova-driven outflows until they reach equilibrium
with the ISM and
re-cool into new clouds.
However, we are interested in all of this occurring in the background of a luminous
AGN, which will both ionize and exert a radiation pressure force.
The cloud -- especially a realistic cloud with a large dust mass
and corresponding opacity -- is optically thick\footnote{
For any incident spectrum where a
significant fraction of the incident
energy is in the optical/UV or shorter wavelengths, the effective optical depth
from dust within the cloud will be
$\tau\sim1$ \citep{murray:momentum.winds,thompson:rad.pressure} --
this is simply a statement that the
clouds will be optically thick to some portion of that SED (whether
ionizing photons in the far UV, or, if the cloud is not yet ionized,
then there is dust which will absorb in the optical and IR) and re-radiate
that energy. The cloud can effectively be thought of as a single absorbing ``mega-grain''
with effective cross section $\sim\pi R_{c}^{2}$.
}
to the quasar radiation, with an
effective cross-section $\Delta\Omega\sim(\pi\,R_{c}^{2})/(4\pi\,r^{2})$.
There is therefore an inescapable deposition of
photon momentum from the radiation field with a
deposition rate $\dot{p}_{\rm rad} = L_{\rm abs}/c$
where $L_{\rm abs}=L_{\rm QSO}\,\Delta\Omega$.
Comparing this to the gravitational force
$F_{\rm grav} = -M_{c}\partial \phi /\partial r$ defines
an effective Eddington limit for the cloud: if the absorbed flux exceeds this limit,
the cloud will be unbound (equivalently, the momentum absorbed
in a single dynamical time, over which the cloud could redistribute
it, will exceed the cloud binding momentum
$\sim M_{c}\,v_{\rm esc}$).
The limit is reached when the two forces are equal:
\begin{equation}
\frac{L_{\rm QSO}}{c}\,
\frac{\pi\,R_{c}^{2}}{4\pi\,r^{2}} = M_{c}\,\frac{G\,M_{\rm bul}}{(r+R_{\rm eff})^{2}}\ .
\label{eqn:prad.vs.pedd}
\end{equation}
Assuming that the cloud lies on the observed size-mass
relation (Equation~\ref{eqn:cloud.sizemass})
and that the galaxy lies on the $M_{\rm BH}-M_{\rm bul}$ relation,
this reduces to the criterion for unbinding the cloud:
\begin{equation}
{\Bigl(}\frac{R_{c}}{R_{0}}{\Bigr)} \gtrsim 2.7\ \dot{m}^{-1/2}\,{\Bigl (} \frac{r}{r+R_{\rm eff}} {\Bigr )}\ .
\label{eqn:rcrit.prad}
\end{equation}
In other words, a cloud on the ``normal'' size-mass relation at
large radii $r\sim R_{\rm eff}$ is
sufficiently dense and sufficiently high column-density to avoid being
unbound by radiation pressure. But if the effective size of the cloud
(the effective coupling surface area) could be increased by a factor of a
couple, or the effective column lowered, the cloud would rapidly be unbound
by the incident radiation field momentum. This condition is easily
satisfied in post-shock clouds.
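As a purely illustrative check of the numbers: at $r\sim R_{\rm eff}$ the factor
$r/(r+R_{\rm eff})=1/2$, so Equation~\ref{eqn:rcrit.prad} requires only
\begin{equation}
\frac{R_{c}}{R_{0}} \gtrsim 1.4\,\dot{m}^{-1/2}\ ;
\end{equation}
since post-shock mixing drives $R_{c}/R_{0}\rightarrow\chi^{1/2}\sim10$ for the
typical density contrasts above, this is met for any Eddington ratio $\dot{m}\gtrsim0.02$.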
Figure~\ref{fig:cartoon} also illustrates this effect.
Specifically, we include a very simple, time-independent momentum
deposition rate in the cells ``facing'' the incident radiation field,
which we simply approximate as all cells with a density
above $\gtrsim3\,$ times the initial background density but with
no cell at $x<x_{i}$ above this density (i.e.\ implicitly assuming that
such a cell would shield the cells ``behind'' it with respect to the incident radiation).
The magnitude of this is initialized such that at $t<t_{s}$ the
``total'' deposition rate over the surface of the cloud is equal to a small
fraction of the ``binding'' momentum over the dynamical time
(assuming $v_{s}\sim V_{c}$), $\sim0.01\,M_{c}\,v_{s}/(R_{c}/v_{s})$. But the
details make little qualitative difference to the global acceleration.
A similar effect pertains to the ionization of the cloud
(although this is not explicitly included in Figure~\ref{fig:cartoon}).
Ignoring geometric effects of photon diffusion, the
volume of a cloud of mean density $n_{c}$ ionized
is $V_{\rm ion}=\dot{N}/(n_{c}^{2}\,\beta)$, where
$\beta\approx2\times10^{-13}\,{\rm cm^{3}\,s^{-1}}$ is the
recombination coefficient for gas at the temperature for
hydrogen ionization ($T_{e}\sim10^{4}\,$K) and $\dot{N}$
is the rate at which ionizing photons hit the cloud.
The total rate of production of ionizing photons from the quasar is
$\dot{N}_{Q}=\lambda L/(h\nu_{912})$ \citep[$\lambda\approx0.07$
comes from a proper integration over the quasar spectrum; here from][]{hopkins:bol.qlf},
and a fraction $\Delta\Omega$ are incident on the cloud.
Together with the typical values above, this
implies that clouds will be ionized to a depth $h_{\rm ionized}$
\begin{equation}
\frac{h_{\rm ionized}}{R_{c}}\approx 10^{-3}\,\dot{m}\,R_{0,\rm pc}\,
{\Bigl (}\frac{r+R_{\rm eff}}{r}{\Bigr )}^{2}\,
{\Bigl (}\frac{R_{c}}{R_{0}}{\Bigr )}^{5}.
\label{eqn:hionized}
\end{equation}
Given the cloud size-mass relation, this is equivalent to the statement that
all clouds below a mass
$M_{c}\lesssim10^{8}\,M_{\sun}\,(R_{c}/R_{0})^{-10}$ will be self-shielded
at $r\gtrsim R_{\rm eff}$.
For typical clouds, the depth ionized is clearly quite small, but there is a steep
dependence on cloud radius. As $R_{c}$ increases,
the ionized depth increases by a factor $\propto R_{c}^{2}$ owing to the
increased photon capture cross-section and a factor
$\propto R_{c}^{3}$ owing to the decreased density lowering the
recombination rate.
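To illustrate with representative numbers: a cloud with $R_{0,\rm pc}=1$ at
$r\sim R_{\rm eff}$ (so $[(r+R_{\rm eff})/r]^{2}=4$) around an AGN with $\dot{m}=0.1$
has $h_{\rm ionized}/R_{c}\approx4\times10^{-4}$ while compact, but after expanding by
$R_{c}/R_{0}\sim10$,
\begin{equation}
\frac{h_{\rm ionized}}{R_{c}} \approx 10^{-3}\times0.1\times1\times4\times10^{5} = 40 \gg 1\ ,
\end{equation}
i.e.\ the same cloud is ionized throughout.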
\section{Implications in a Global Feedback Scenario}
\label{sec:global}
There are many caveats to the simplified derivations above:
clouds have some size and mass spectrum, and are distributed at various radii,
with the background quasar changing in time.
Nevertheless, when this is embedded in
more detailed models for AGN feedback, the results are interesting.
Consider an $\sim L_{\ast}$ bulge with
$M_{\rm bul}=10^{11}\,M_{\sun}$ and $M_{\rm BH}=1.4\times10^{8}\,M_{\sun}$,
with a \citet{hernquist:profile} density profile
and scale radius $R_{\rm eff}=4\,$kpc. Assume gas traces stars (initially),
with mass fraction $f_{\rm gas}=0.1$, and that $90\%$ of the gas is in
cold clouds while $10\%$ is in a hot diffuse phase (which we assume
is in hydrostatic equilibrium).
This yields a density profile of hot or cold gas of
\begin{equation}
\rho_{i} = f_{i}\,f_{\rm gas}\,\frac{M_{\rm bul}}{2\pi}\,\frac{R_{e}}{R\,(R+R_{e})^{3}}
\end{equation}
where $f_{i}$ represents the fraction in either the hot or the cold phase,
($f_{\rm hot}=0.1$, $f_{\rm cold}=0.9$).
At a given radius (within some small radial annulus), the cold gas
(with mean volume density given above)
is assumed to be locked
into cold clouds with a small volume filling factor. The clouds
are initially placed on the observed size-mass relation (Equation~\ref{eqn:cloud.sizemass};
determining an initial radius $R_{0}$ for each cloud of mass $M_{c}$),
and are distributed in mass according to the observed mass spectrum
${\rm d}N/{\rm d}M_{c}\propto M_{c}^{-1.8}$ \citep[up to
a maximum $M_{c}=10^{7}\,M_{\sun}$; see][
and references therein]{rosolowsky:gmc.mass.spectrum}. Given
some total cold gas mass $\rho_{\rm cold}$ per unit volume
in an annulus, this mass spectrum,
integrated from arbitrarily small cloud mass (it makes no difference
if we adopt some lower-mass cutoff) to the maximum $M_{c}$,
must integrate to $\rho_{\rm cold}$ per unit volume; i.e.\ this determines
the number density $n_{c}$ of clouds at each galaxy
radius $R$ and mass interval $M_{c}\rightarrow M_{c}+{\rm d}M_{c}$ (and corresponding
initial cloud radius $R_{0}(M_{c})$).
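The insensitivity to the lower cutoff follows directly from the slope of the spectrum:
writing ${\rm d}N/{\rm d}M_{c}=A\,M_{c}^{-1.8}$, the mass integral
\begin{equation}
\int^{M_{\rm max}} M_{c}\,\frac{{\rm d}N}{{\rm d}M_{c}}\,{\rm d}M_{c} \propto A\,M_{\rm max}^{0.2}
\end{equation}
converges at the low-mass end and is dominated by clouds near $M_{\rm max}$, so
equating it to $\rho_{\rm cold}$ fixes the normalization $A$ at each radius.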
At a time $t=0$, we assume that the BH ``turns on,'' radiating (initially)
at the Eddington limit.
We allow it to drive a shock/wind through the diffuse ISM
according to the analytic solutions derived
in \citet{hopkins:seyferts}. In these models, the AGN is assumed to drive
a simple, Sedov-Taylor-type outflow via any ``small-scale'' feedback
channel. The detailed behavior is derived
and compared with hydrodynamic simulations
in \citet{hopkins:seyferts}, but can be simply summarized as follows:
the BH, once on, couples a fraction $\eta\approx0.05\,f_{\rm hot}$
of its luminous energy to the diffuse-phase gas in its vicinity.
Because the spatial and timescales in the vicinity of the BH are
small compared to the rest of the galaxy, this appears to the
galaxy as a point-like energy injection in a hot medium. The result
is therefore a roughly self-similar (power-law) Sedov-Taylor-type outflow.
Figure~\ref{fig:clouds} illustrates some of the basic behaviors
of this outflow. The energy injection leads to a shock
that expands outwards with radius $R_{s}(t)$, in an
approximately power-law fashion as $R_{s}\propto t^{\alpha}$,
where $\alpha$ is a function of the local density profile and gas
equation of state,
and is coupled (weakly) to the declining energy injection rate
of the BH (Figure~\ref{fig:clouds}; {\em bottom left}).
For typical galaxy density profiles, a $\gamma=5/3$ gas,
and the conditions
assumed here, \citet{hopkins:seyferts} show $\alpha\approx4/5$.
In the wake of the expanding bubble/shock, the post-shock density
drops in a related power-law manner. Since the spherical accretion rate onto a
BH scales roughly $\propto \rho$ (in Bondi-Hoyle
accretion; or similarly $\propto \Sigma$ for viscous accretion from a disk),
the accretion rate, and hence luminosity $L=0.1\,\dot{M}_{\rm BH}\,c^{2}$,
will decay as well (Figure~\ref{fig:clouds}; {\em top left}). Again, we refer to the full derivation in
\citet{hopkins:seyferts} for details, but the self-consistent
solution derived therein can be approximated as
$L\propto [1+t/t_{Q}]^{-\eta_{L}}$, with $\eta_{L}\sim1.5$ and
$t_{Q}\sim1-5\times10^{7}\,$yr for the
parameters here \citep[consistent with various observational constraints; see][]{martini04,
yu:mdot.dist,hopkins:mdot.dist}.
The value of $\eta_{L}$ follows from $\alpha$ and
generic behavior of Sedov-Taylor post-shock gas, and
$t_{Q}$ is simply related, modulo appropriate numerical coefficients,
to the local dynamical time near the BH radius of influence.
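Taken at face value (and purely for orientation, adopting the illustrative choice
$t_{Q}=3\times10^{7}\,$yr), this lightcurve gives
\begin{equation}
\frac{L(t)}{L(0)} = {\Bigl[}1+\frac{t}{t_{Q}}{\Bigr]}^{-1.5} \approx 0.03\ \ {\rm at}\ \ t=3\times10^{8}\,{\rm yr}\ ,
\end{equation}
i.e.\ the Eddington ratio decays to the percent level over a few $10^{8}\,$yr, a point
we return to below; meanwhile the bubble continues to expand as $R_{s}\propto t^{\alpha}$.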
The shock therefore crosses a radius $r$ at a time $t_{s}$ (when
$R_{s}=r$). In the wake of the shock, the post-shock ambient pressure
$P_{\rm ext}$ will drop, reflecting the density decline from material
being blown out (Figure~\ref{fig:clouds}; {\em top center}).
Again, this approximately follows a standard
decline in the wake of a
Sedov-Taylor blastwave; the exact solution for the
pressure internal to the blastwave under the conditions here must be obtained numerically,
but \citet{ostrikermckee:blastwaves} show that it can be approximated
as a double power-law. Roughly speaking, there is a rapid drop in pressure in the
immediate post-shock region, as the thin shell at the front of the
blastwave clears the diffuse material away from the region, and the pressure declines as a steep power
$P\propto ([t - t_{s}]/t_{c})^{-\beta}$, where $t_{c}$ is the crossing time of the
shock relative to the cloud ($\sim R_{c}/v_{s}$) and
$\beta\sim3-5$ (the exact index depends on the local density profile slope and
rate of decay of the driving source, so is not the same at all radii). This is followed by a more gradual decline,
as the diffuse medium internal to the shock relaxes, is heated, and expands,
with $P\propto ([t-t_{s}]/t_{s})^{-\beta^{\prime}}$ and $\beta^{\prime}\sim 2$
\citep[again, for the detailed derivation of the double power-law structure in
the wake of the blastwave, for the conditions of the feedback-driven
blastwaves considered here, we refer to][]{hopkins:seyferts}.
In the wake of the shock, with a rapidly declining background
pressure and density, the cloud will be mixed and effectively increase its
surface area. We can solve for the behavior of each cloud -- at least the key parameter of
interest, the effective radius of the cloud in the direction perpendicular to the shock
($R_{c}$) as a function of time, according to the approximations in
\citet{kmc:shock.cloud} in the wake of the shock defined above.
If the cloud could somehow resist shredding (say, via sufficient magnetic field or turbulent
support) this would be trivial: the cloud (initially in pressure equilibrium)
would expand isothermally (e.g.\ conserving
total magnetic energy) as it is now over-pressurized, such that pressure equilibrium
would be conserved. For a background pressure declining with some power law
$P\propto (t/t_{s})^{-\beta}$, this implies expansion of the cloud with
$R_{c}/R_{0} = (t/t_{s})^{\beta/3}$ (generally, $R_{c}/R_{0} = (P_{c}/P_{0})^{-1/3}$,
for isothermal expansion). If the cloud were supported by thermal energy with
no new inputs this index would be slightly modified (for adiabatic expansion,
$R_{c}/R_{0} = (P_{c}/P_{0})^{-1/5}$) but the behavior is qualitatively similar.
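Both indices follow from elementary closure relations, sketched here for a
quasi-uniform cloud:
\begin{equation}
P_{c}\propto\rho_{c}\,T_{c}\propto R_{c}^{-3}\ \ ({\rm isothermal})\ ,\ \ \ \
P_{c}\propto\rho_{c}^{5/3}\propto R_{c}^{-5}\ \ ({\rm adiabatic})\ ,
\end{equation}
so matching a declining external pressure gives $R_{c}\propto P^{-1/3}$ and
$R_{c}\propto P^{-1/5}$, respectively.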
Technically this approximation assumes that the time for the cloud to equilibrate
is short relative to the timescale on which the background is changing, but this is
easily satisfied. The cloud expands/equilibrates on its internal crossing time,
given by the effective sound speed as $\sim R_{c}/c_{s,\, \rm eff}$;
for quasi-virial clouds this is simply the dynamical time $1/\sqrt{G\,\rho}$
where the effective density $\rho$ follows from the observed size-mass
relation (Equation~\ref{eqn:cloud.sizemass}). Using the observed values,
this gives a timescale of $\sim 0.1-1\times10^{6}\,$yr (for cloud sizes
$\sim0.1-10\,$pc). Compare this to the characteristic timescale for the
evolution of the background, a few $t_{s}$ (itself of order $t_{\rm dyn}$, the galaxy dynamical
time at $R$). For a typical $\sigma \sim 200\,{\rm km\,s^{-1}}$ spheroid,
$t_{s}\gg R_{c}/c_{s,\, \rm eff}$ for all radii $\gtrsim100\,$pc -- in other words,
this condition is easily satisfied at the radii $\sim R_{e}$, which contain most of the
mass of the galaxy. Figure~\ref{fig:clouds} ({\em bottom center}) shows how
the clouds will expand in effective cross section ($\sim R_{c}^{2}$), relative
to their initial sizes, given this declining background pressure for the simple isothermal
case.
For the more complex case of cloud shredding, it turns out that,
in aggregate, a similar scaling obtains.
The characteristic time for the cloud to effectively be mixed via instabilities and
so effectively increase its cross section is a few cloud-crushing timescales
$t_{\rm cc} = \chi^{1/2}\,R_{0}/v_{s}$. But since the shock velocity is of order the galaxy
escape velocity for the ``interesting'' diffuse outflows considered here
($v_{s}\sim$ a few $\sigma$), and typical $\sigma\sim200\,{\rm km\,s^{-1}}$,
compared with a typical effective sound speed of a virialized cloud
$\sim1-10\,{\rm km\,s^{-1}}$, this time is almost always much shorter than
(or at least comparable to) the cloud dynamical time.
Following \citet{kmc:shock.cloud}, cloud shredding will equilibrate when
the system expands by a factor $\sim \chi^{1/2}$ in the perpendicular
direction (and a small, $\sim$ constant factor in the parallel direction), where
$\chi$ is the initial density contrast; in other words, until the {\em effective}
density and pressure drop to approximate equilibrium with the background.
Thus, for timescales $\sim t_{s}$ over which the background is evolving,
long compared to the cloud-crushing time, we can consider the systems to
effectively expand with an average effective radius scaling
in the same way in equilibrium with the external/hot medium background pressure
(i.e.\ similar effective net expansion, averaged over these timescales,
as in the isothermal expansion case).
\begin{figure*}
\centering
\plotside{f2.ps}
\caption{Effects of a two-stage feedback model
in the wake of a quasar-induced hot outflow,
with parameters of a typical $\sim L_{\ast}$ quasar. {\em Top Left:}
Quasar Eddington ratio as a function of time, after feedback from the
quasar begins to drive an outflow in the diffuse ISM (time $t=0$).
{\em Bottom Left:} Radius of the hot/diffuse outflow as a function of time
(compare the galaxy effective radius $R_{\rm eff}$).
{\em Top Center:} Change in pressure of the (post-shock) diffuse ISM at a given radius
$r$ from the BH, at time $t/t_{s}$ ($t_{s}$ is the time when the hot outflow
first reaches $r$). We show three different radii: $r=100\,$pc, $1\,$kpc,
and $10\,$kpc. {\em Bottom Center:} Change in effective
cross-section of a typical ($R_{0}\sim1\,$pc) cloud at each $r$
versus time $t/t_{s}$. {\em Top Right:} Ratio of
the radiation pressure force to the Eddington limit for expulsion of
each cloud (thick; $P_{\rm rad}/{\rm Edd}$), and fraction of the
cloud ionized (thin; $f_{\rm ion}$).
{\em Bottom Right:} Total fraction of cloud mass that can be expelled
by radiation pressure or ionized (we consider each separately),
as a function of time. The curves reflect the integral over the observed
mass spectrum of clouds at each radius, integrated over all
radii following the galaxy density profile.
\label{fig:clouds}}
\end{figure*}
We then solve for the behavior of each cloud in the wake of this
hot outflow, according to the approximations in
\citet{kmc:shock.cloud} and \S~\ref{sec:radiative} (Figure~\ref{fig:clouds}; {\em top right}).
In particular, given the time-evolution in $R_{c}/R_{0}$
shown in Figure~\ref{fig:clouds}, we use the scalings derived in \S~\ref{sec:radiative}
to estimate the fraction of the cloud
(at some initial radius $r$) which will be ionized (i.e.\
$f_{\rm ion}=h_{\rm ionized}/R_{c}$ from Equation~\ref{eqn:hionized};
where the AGN accretion rate $\dot{m}$ and cloud expansion $R_{c}/R_{0}$
are given as a function of time above, the initial galactocentric radius $r$ of the cloud is one of those
specified in the Figure, and we chose a representative initial cloud with
radius $R_{0,\, \rm pc}=1$ for illustrative purposes).
We also show the relative strength of radiation pressure on the cloud,
i.e.\ the radiation pressure relative to the local Eddington limit (that which would
unbind the cloud), $P_{\rm rad}/{\rm Edd} = 0.14\,\dot{m}\,(R_{c}/R_{0})^{2}\,[(r+R_{e})/r]^{2}$
(re-arranging Equations~\ref{eqn:prad.vs.pedd} \&\ \ref{eqn:rcrit.prad}; where
again $\dot{m}$, $R_{c}/R_{0}$, and $r$ for the clouds are given).
Since these both scale steeply with the increasing effective cloud size
($\propto R_{c}^{5}$ and $R_{c}^{2}$, respectively), both increase rapidly in time.
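For clarity, the coefficient quoted above is just Equation~\ref{eqn:rcrit.prad} inverted:
\begin{equation}
\frac{P_{\rm rad}}{\rm Edd} = {\Bigl[}\frac{R_{c}/R_{0}}{2.7\,\dot{m}^{-1/2}\,r/(r+R_{\rm eff})}{\Bigr]}^{2} = \frac{\dot{m}}{(2.7)^{2}}\,{\Bigl(}\frac{R_{c}}{R_{0}}{\Bigr)}^{2}\,{\Bigl(}\frac{r+R_{\rm eff}}{r}{\Bigr)}^{2}\ ,
\end{equation}
with $1/(2.7)^{2}\approx0.14$.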
At early times, only the clouds within a narrow region $\sim100\,$pc around the quasar
are efficiently ionized, and the effects of radiation pressure are weak --
similar to what is observed in the narrow-line regions of AGN
\citep{crenshaw:nlr,rice:nlr.kinematics}.
This changes rapidly in the wake of the outflow at large radii --
the deformation induced makes the clouds vulnerable to ionization and radiative
momentum driving.
Integrating over the entire cloud population and galaxy mass,
i.e.\ integrating over the initial cloud mass or size ($R_{0}$) spectrum at each
galactic radius $r$, and then over the total cold gas density at each radius $r$,
we obtain the {\em total} fraction of cloud mass that can be ionized
or effectively accelerated by radiation pressure (we define the latter
as the integral of mass in clouds where $P_{\rm rad}/{\rm Edd}>1$).
Figure~\ref{fig:clouds} ({\em bottom right}) shows this as a function of time
given the above outflow properties.
Most of the cold mass can be effectively accelerated by radiation pressure
at large radii in a timescale $\sim$a few galaxy dynamical times,
despite the declining AGN luminosity.
We have experimented with varying the
exact parameters adopted here, and find that the qualitative results are robust.
In a couple of dynamical times, $\sim90\%$ of the original cold cloud mass
becomes vulnerable to secondary radiative feedback; i.e.\ the fraction
that can be ionized and/or the strength of radiation pressure relative to the
Eddington limit approaches or exceeds unity.
This and the previous results also provide an important check of an
implicit assumption in this model: that the cloud acceleration time
is long relative to the timescale for the clouds to expand/equilibrate.
If this were not true, the two behaviors could not be treated independently, and
the physical consequences are unclear (it is possible, for example, that each ``parcel'' of
the cloud which is mixed or stripped off by instabilities would rapidly be accelerated, leading to
the edges of the clouds being effectively blown away or stripped but giving little acceleration
to the cloud core). As noted above, the expansion/equilibration time is
given by the cloud-crushing time, comparable to the internal
cloud crossing/dynamical times $<10^{6}\,$yr. In comparison, the acceleration times
are of order a couple to a few $t_{s}$, the dynamical time at $r$ in the galaxy,
which (as we discuss above) are generically much larger
than the cloud crossing time at all radii $\sim R_{e}$ (where most of the galaxy
mass is located), indeed all radii $\gtrsim100\,$pc (i.e.\ all radii which are not
already affected by feedback even without a diffuse outflow). In a global sense,
most of the mass is accelerated and/or ionized over a timescale
$\sim 10^{7}-10^{8}\,$yr, much longer than the crossing/collapse times of
all but the most massive molecular cloud complexes.
In fact, at large scales this acceleration time is relatively long
($\sim$a few $10^{8}\,$yr), by which point the AGN luminosity has
correspondingly decayed to $\sim 1\%$ of the
Eddington limit. The model above accounts for this, but an important
question remains whether real AGN can sustain even this level of energetic output
over these time intervals. If, for example,
AGN switch to a radiatively inefficient state above or around this
accretion rate, the driving will suddenly vanish. Clearly, this is
an interesting regime; better knowledge of how feedback-induced
hot outflows and subsequent lightcurve evolution proceed
will be important to understanding both how dramatic the effect on the
galaxy will be and, potentially, how much variation there may be
between galaxies.
\section{Discussion}
\label{sec:discuss}
``Feedback'' from bright AGN is a topic of
fundamental interest for galaxy evolution, but it remains
unknown whether or not any of the obvious candidate feedback mechanisms
are capable of effectively coupling to cold molecular gas, the dominant reservoir
for star formation, especially at kpc scales. Here, we demonstrate
that it is at least possible that the cold gas reservoir is destroyed and/or blown out
of the galaxy
{\em despite} inefficient coupling of ``initial'' feedback mechanisms that originate
near the BH.
If some coupling of energy or momentum near the BH -- whether from
e.g.\ Compton heating, radiation pressure, BAL winds, jets, or resonant line-driving --
can generate a wind or shock/blastwave in the warm/hot ISM, then when the outflow
passes by a cold cloud, even if it does not directly entrain the material, it will
generate various instabilities that ``shred'' the cloud and mix it, efficiently
enhancing the cloud cross section in the perpendicular direction. Even if the
cloud is magnetically supported or extremely dense and able to resist instabilities,
there is still a growing pressure imbalance that drives the cloud
to expand in the same manner.
This is well-studied in the context of supernova-driven winds, but there is
an important difference
in the presence of a bright quasar.
The effective increase in cross section means that momentum driving
and ionization heating from
the quasar radiation is quickly able to act in much more dramatic fashion on clouds
that were once too dense and too small (or at too large a distance from the
black hole) to be perturbed by the radiation field.
This effect can have dramatic implications for star formation in quasar host galaxies.
Because radiation pressure always acts, this means the energy needed in
``initial'' feedback from the central source to e.g.\ drive winds in the low-density
hot gas will be much less than if it were expected to act directly on the
cold clouds. We show that the energetic or momentum driving requirements for the
initially driven feedback are reduced by at least a factor $f_{\rm hot}\sim0.1$
(the mass fraction in the hot diffuse ISM); i.e.\ rather than
the canonical $\sim5\%$ of the radiant energy ($\sim100\%$
momentum) needed in
the initial outflow if it were to
entrain the entire gas supply directly, only $\sim0.5\%$ ($\sim10\%$ momentum) is
sufficient to drive the hot gas and enter the regime of interest here.
Another way of stating this is, for accretion with an Eddington ratio
$\dot{m}$ and BH mass $M_{\rm BH}$ relative to the expectation
$\langle M_{\rm BH} \rangle$ from the $M_{\rm BH}-\sigma$ relation,
the relevant outflows will be driven (and star formation
suppressed) when
\begin{equation}
\eta\,\dot{m}\,\frac{M_{\rm BH}}{\langle M_{\rm BH} \rangle}\sim 0.05\,f_{\rm hot}\ ,
\end{equation}
where $\eta$ is the feedback efficiency ($\dot{E}=\eta\,L$).
Given this criterion, sufficient momentum in photons
for the ``secondary'' feedback to act is guaranteed for all
but the most extremely gas-dominated systems.
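For the fiducial (illustrative) numbers used throughout, $f_{\rm hot}\sim0.1$, this
criterion reads
\begin{equation}
\eta\,\dot{m}\,\frac{M_{\rm BH}}{\langle M_{\rm BH} \rangle} \sim 5\times10^{-3}\ ,
\end{equation}
i.e.\ a BH on the $M_{\rm BH}-\sigma$ relation accreting near Eddington requires a
coupling efficiency of only $\sim0.5\%$, as quoted above.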
Note that the derivation here pertains to large clouds
($R_{0}\gtrsim$\,pc), observed to be in rough pressure equilibrium with
the ambient medium and containing most of the ISM mass. Dense cores ($R_{0}\ll$\,pc)
are observed to be in self-gravitating collapse; these will continue to collapse
and form stars on a very short timescale despite a diffuse outflow.
The important thing is that no {\em new} cold gas reservoir of large
clouds will be available to form new cores.
We have also neglected the possibility that galaxies are
highly self-shielding. For example, in dense nuclear star-forming regions
in e.g.\ ULIRGs, the column densities are so high
\citep[$N_{\rm H}\gtrsim 10^{23}\,{\rm cm^{-2}}$, see e.g.][]{komossa:ngc6240,
li:radiative.transfer} that the quasar can do little until star formation
exhausts more of the gas supply. In disk-dominated galaxies, gas near
$R_{\rm eff}$ can be similarly self-shielded. If the radiation is isotropic,
then for some disk mass and gas fraction, only a fraction $\sim h/R$
(the fractional scale height at $R$) of the radiation will couple to the relevant area (and
the radiation may, in fact, be preferentially polar, yielding even lower
efficiency). Only the most gas-poor disks, or the central regions of disks
(where systems are typically bulge-dominated) will be affected
by the coupling efficiencies above.
What we outline here is a simple model for the qualitative physical
effects that may happen when cold clouds in the ISM encounter a hot outflow
driven by an AGN. More detailed conclusions
will require study in hydrodynamic simulations which incorporate gas
phase structure, cooling, turbulence, self-gravity, radiation transport,
and possibly (if they provide significant pressure support) magnetic fields.
Detailed effects which we cannot follow analytically,
such as e.g.\ self-shielding within thin, dense fingers in
Rayleigh-Taylor or Kelvin-Helmholtz instabilities, may alter the effects of the
radiation field on the cloud material and change our conclusions.
Nevertheless, our simple
calculations here demonstrate that the process of cloud deformation in the wake
of a hot outflow can have dramatic implications for the susceptibility of
those clouds to other modes of feedback, and should motivate
further study.
According to these simple considerations,
outflows driven by AGN feedback may in fact be ``multi-stage'' or ``two-tiered'', with
an initial hot shockwave or strong wind driven by feedback mechanisms near the
BH, which is then supplemented by a successive wind driven out as clouds in the
wake of the former are deformed/mixed and increase their effective
cross-section to the AGN luminosity. The characteristic velocity of this secondary outflow,
which will carry most of the mass, should be $\sim v_{\rm esc}$ at
the radii of launching ($\sim10^{2}\,{\rm km\,s^{-1}}$), and it will behave similarly
to outflows from star formation. Indeed, because the driving occurs at large radii,
it is not clear whether it could be distinguished from
stellar-driven outflows at all, except indirectly (e.g.\ in cases where the observed
star formation is insufficient to power the outflow).
Characteristic timescales are
$\sim$ a few $t_{\rm dyn}$ of the galaxy,
so much of the outflow occurs at sub-Eddington luminosities
(as the AGN fades in the wake of launching the ``primary'' outflow)
and the systems will not appear gas-depleted until they have evolved by a significant
amount ($\sim$few $10^{8}$yr, at Eddington ratios $\sim0.01$ typical of ``quiescent''
ellipticals). These processes should nevertheless imply effective shutdown of
star formation and destruction/heating of the cold gas supply
in ``massive'' BH systems -- bulge-dominated systems on the $M_{\rm BH}-\sigma$
relation that have recently been excited to near-Eddington luminosities.
Most intriguingly, this reduces the energetic requirements for the ``initial'' feedback --
whatever might drive an outflow in the hot gas from the vicinity of the BH --
by an order of magnitude. Our estimates suggest that coupling only a fraction
$\sim10^{-3}$ of the luminosity of the AGN on small scales would be sufficient
to drive such a hot outflow and then allow $\sim100\%$ of the
radiative energy/momentum to couple to cold gas. If, for example,
quasar accretion-disk (or broad-line) winds (with characteristic velocities
$v\sim10^{4}\,{\rm km\,s^{-1}}$) do not immediately
dissipate all their energy, then
the hot outflows we invoke would be generated with a mass-loading in such
winds of just $\sim0.1\,\dot{M}_{\rm BH}$, a fraction of the BH accretion rate.
\begin{small}\section*{Acknowledgments}\end{small}
We thank Lars Hernquist and Eliot Quataert for helpful discussions
in the development of this work, and thank Vincenzo Antonuccio-Delogu,
Pat Hall, and Barry McKernan for helpful comments on an earlier draft.
We also appreciate the hospitality of the
Aspen Center for Physics, where this paper was partially developed.
Support for PFH was provided by the Miller Institute for Basic Research
in Science, University of California Berkeley.
\\
|
2,869,038,156,858 | arxiv |
\section{Introduction}
The nature of the particle content of the dark matter (DM) sector of the Universe remains unknown. The dominant paradigm of the last several decades has been based on the assumption that DM is made up of some weakly interacting massive particles (WIMPs) in the mass range of a few \gev\ to a few \tev. WIMP DM could naturally be produced in the early Universe via the popular thermal freeze-out mechanism, yielding the correct relic density for WIMP interactions set by a fraction of the electroweak interactions of the Standard Model (SM); cf., \eg, Refs.\cite{Arcadi:2017kky,Roszkowski:2017nbc,Billard:2021uyg} for reviews.\footnote{The other very popular and strongly motivated candidate is an axion; see, \eg, Ref.\cite{Marsh:2015xka} for a review.} However, the lack of an experimental signal in a wide array of worldwide searches has motivated the exploration of alternative candidates for DM, its production mechanisms, and detection methods; see, \eg, Refs.\cite{Battaglieri:2017aum,AlvesBatista:2021gzc}.
For instance, the standard freeze-out mechanism can be generalized to broad ranges of DM masses and coupling constants\cite{Feng:2008ya}; see also the discussion in Ref.\cite{Baer:2014eja}. While too large values of the DM couplings are constrained by unitarity, lowering them by even several orders of magnitude below the electroweak strength can still yield the correct relic density, albeit for DM masses much below the $\gev$ scale. In various alternative scenarios, some light mediator particle(s) beyond the Standard Model (BSM) spectrum are present. Their role is to mediate the interaction between the SM and a DM particle with mass in the $\gev$ range \cite{Boehm:2003hm,Pospelov:2007mp}, or below. Experimental searches for such light DM (LDM) via direct detection (DD) invoke new strategies\cite{Battaglieri:2017aum,Billard:2021uyg}, while further signatures of the associated frameworks may be possible in searches for unstable light mediator species; cf. Refs.\cite{Beacham:2019nyx,Alimena:2019zri,Agrawal:2021dbo} for recent reviews.
Such studies are usually performed within the framework of simplified models where the DM particle is assumed to be part of some secluded (dark) sector that interacts with the SM sector via one mediator particle only, for instance a dark photon or a dark Higgs boson; cf. Refs.\cite{Fabbrichesi:2020wbt,Filippi:2020kii,Arcadi:2021mag} for recent reviews. It is, however, understood that more UV-complete models could admit further, complementary experimental probes, which triggers interest in considering non-minimal scenarios.
In models with a richer dark sector, there are typically more BSM particles, with masses spanning a wide range of values, even up to several orders of magnitude. If such a rich dark sector remains secluded from the SM sector, with only a weak portal communicating between them, the dark couplings of the BSM species are only mildly constrained and can lead to new phenomenological effects. This is especially relevant for cosmological tests and indirect detection (ID) searches of DM, which can probe scatterings in the dark sector with the visible signals generated via subsequent decays of BSM species. Notable examples of such effects include, e.g., secluded\cite{Pospelov:2007mp,Batell:2009yf} or Sommerfeld-enhanced\cite{Bringmann:2016din} DM annihilations into dark sector particles, which modify ID bounds and can be constrained by Cosmic Microwave Background (CMB) radiation measurements.\footnote{Collider probes of light new physics can also be modified in the presence of large dark coupling constants via, e.g., possible secondary production of new BSM species in front of or inside the detectors\cite{Jodlowski:2019ycu}.}
In this study, we focus on such an interplay between light and heavy dark species in models that can simultaneously be probed by experiments targeting light and long-lived particles (LLPs). As we will show, already in a simple extension of the most minimal scenarios, in which we add a heavy DM candidate and much lighter secluded dark species, one can find new, complementary detection prospects in future CMB surveys and DM ID searches, in addition to standard searches. We will highlight possible unique non-local effects in DM ID that could appear in models with very long-lived light mediators; cf. Refs.\cite{Rothstein:2009pm,Kim:2017qaw,Chu:2017vao,Gori:2018lem,Agashe:2020luo}. In this case, the interplay between different channels of DM ID observations could be challenging to interpret in standard WIMP scenarios, while it could be indicative of the existence of a dark sector comprising both heavy WIMP-like DM particles and much lighter, long-lived species.
We will focus on a popular dark Higgs boson portal, which is often considered a convenient mediator between the SM and even very complicated BSM sectors\cite{Chacko:2005pe,Schabinger:2005ei,Patt:2006fw,Branco:2011iw,Craig:2015pha}. In particular, such a portal can naturally be related to the \textsl{hidden valley} scenarios predicting the existence of some LLPs\cite{Strassler:2006im}. Motivated by this, we will assume that the relevant new scalar species -- in our case an SM singlet mixing with the SM Higgs -- has a mass below $1\gev$ and suppressed couplings to the SM sector. The remaining non-minimal BSM content of our model will further contain particles spanning a wide range of mass scales between $\sim1\mev$ and $10\tev$, or so, including a heavy secluded DM candidate that could be targeted in future ID observations.
This paper is organized as follows. In \cref{sec:model}, we introduce the BSM model of our interest. In \cref{sec:relic}, we examine the relic abundance of both the stable and the long-lived dark species present in this scenario. In \cref{sec:constraintsandfuture}, we discuss current and prospective future bounds on this model. Finally, we present our results in \cref{sec:results} and conclude in \cref{sec:conclusions}. Technical details of our analysis, including, i.a., the involved cross section and decay width formulae, as well as the discussion of the photon spectrum in DM ID, are relegated to \cref{sec:sec_formulae,app:spectrum}.
\section{Model\label{sec:model}}
The BSM model that we study in this article contains a fairly rich dark sector that is coupled to the visible (SM) sector via the mixing between a light, sub-$\gev$ dark Higgs boson $h_D$ and its heavier SM counterpart $H$. It also includes a dark photon $A^\prime$, the gauge boson of the secluded $U(1)^\prime$ group. The dark Higgs boson is charged under $U(1)^\prime$, and the dark vector obtains its mass via the dark Higgs mechanism. Furthermore, we introduce an additional stable complex scalar field $\eta$ of a similar mass, which also carries a non-zero charge under the $U(1)^\prime$ group. The field $\eta$ will constitute some fraction of the DM in the Universe. Notably, this scenario corresponds to one of the prototype models with LDM at the $\mev$-$\gev$ scale that is discussed in intensity frontier studies; cf., \eg, Refs.\cite{Darme:2017glc,Darme:2018jmx,Duerr:2019dmv} for recent analyses. In \cref{fig:cartoon}, we schematically present the connection between the SM and the dark sectors in the model. The fields charged under the dark gauge group $U(1)^\prime$ are shown in green.
In this study, our aim is to identify possible new features that may arise when this simple scenario is extended to include additional heavy and light secluded dark species. In particular, we introduce a new, heavy and stable complex scalar field $\chi$ that will play the role of the dominant component of DM. It couples to $\eta$ (a minor DM component) via a heavy spectator real scalar field $\phi$, which plays only an auxiliary role in our model; a direct renormalizable contact operator, $|\chi|^2|\eta|^2$, could also be used instead. Finally, the full dark sector will likely contain some additional light degrees of freedom that are not essential for our analysis. We assume that they are charged under some $U(1)^{\prime\prime}$ group, which adds to the spectrum of particles a light dark gauge boson $A^{\prime\prime}$. As we shall see later, its presence will be needed for phenomenological reasons. We further assume that the dark Higgs field $h_D$ is not charged under $U(1)^{\prime\prime}$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.3]{./figures/DS_SM_cartoon_FINAL.pdf}
\caption{
Schematic illustration of the model. The dark sector (DS) is connected to the Standard Model (SM) through a light dark Higgs boson portal ($h_D$), which mixes with the Higgs particle. We denote the mixing angle by $\theta_{h h_D}$. The core of the model under study consists of the SM gauge groups extended by a single $U(1)^\prime$ group. The respective gauge boson is denoted by $A^\prime$. This vector field, as well as the dark Higgs field $h_D$ and the complex scalar dark matter $\eta$ charged under this group, are shown in green. In the study, we focus on the impact on this scenario of its possible extensions towards both heavy and light degrees of freedom. These can contain many fields, while in the figure we focus on the ones with a direct impact on our discussion, \ie, the heavy complex scalar dark matter $\chi$ and the auxiliary scalar field $\phi$, indicated in black, and the new light dark gauge boson $A^{\prime\prime}$ of the additional $U(1)^{\prime\prime}$ group, shown in blue. The $\eta$ field is a minor DM component, which also couples to the heavier dark sector species, $\chi$ and $\phi$. In turn, the interactions between the fields charged under the $U(1)^\prime$ and $U(1)^{\prime\prime}$ groups arise due to kinetic mixing. The presence of additional light and heavy dark sector components allows for avoiding stringent cosmological bounds and leads to non-standard phenomenological effects in DM indirect detection discussed in the text.
\label{fig:cartoon}}
\end{figure}
The Lagrangian of the model can be written as
\begin{equation}
\mathcal{L}=\mathcal{L}_{\mathrm{SM}}+\mathcal{L}_{\mathrm{DS}}+\mathcal{L}_{\text {portal }}
\end{equation}
where $\mathcal{L}_{\mathrm{SM}}$ is the SM Lagrangian, $\mathcal{L}_{\mathrm{DS}}$ corresponds to the dark sector, and $\mathcal{L}_{\text {portal }}$ describes interactions between the SM and the dark sector, as well as between the two parts of the dark sector charged under the gauge groups $U(1)^\prime$ and $U(1)^{\prime\prime}$. The portal part reads
\begin{equation}
\mathcal{L}_{\text {portal }} = -\lambda_{H h_D}|\Phi|^{2}|\sigma|^{2} -\frac{\epsilon^{\prime}}{2} F_{\mu \nu}^\prime F^{\mu \nu}-\frac{\epsilon^{\prime\prime}}{2} F_{\mu \nu}^{\prime\prime} F^{\mu \nu}-\frac{\tilde{\epsilon}^\prime}{2} F_{\mu \nu}^{\prime} F^{\prime\prime\,\mu \nu}.
\label{eq:L_portal}
\end{equation}
In the following, we assume that both dark vectors have vanishing kinetic mixing with the SM photon, $\epsilon^{\prime}=\epsilon^{\prime\prime}= 0$. The motivation behind it is that our goal is to primarily focus our study on identifying possible new phenomenological aspects of the model -- with respect to usual dark photon signatures -- that might appear in the presence of extremely long-lived dark vectors. Importantly, in our model, there is also no perturbative generation of the kinetic mixing terms with the SM photon, as will be discussed later. The vanishing kinetic mixing parameters also result in effective decoupling of the dark vectors from the SM $Z$ boson, cf., e.g., Ref.\cite{Berlin:2018jbm}. On the other hand, we allow for a non-zero kinetic mixing between the two dark vectors. This can be loop-induced by additional heavy fields charged under both the dark $U(1)$ groups, which leads to $\tilde{\epsilon}^\prime< 1$.
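To make the role of the $\tilde{\epsilon}^\prime$ term explicit: to leading order in $\tilde{\epsilon}^\prime$ (and schematically, with basis and sign conventions varying between references), the kinetic terms are diagonalized by the shift
\begin{equation}
A^{\prime}_{\mu} \rightarrow A^{\prime}_{\mu} - \tilde{\epsilon}^{\prime}\,A^{\prime\prime}_{\mu},
\end{equation}
which removes the cross term at $\mathcal{O}(\tilde{\epsilon}^{\prime})$ and, in turn, generates a coupling of the light vector $A^{\prime\prime}$ to the $U(1)^{\prime}$ current with strength $\tilde{\epsilon}^{\prime} g_{D}$. It is through this suppressed coupling that the fields charged under $U(1)^{\prime}$ communicate with the light secluded sector.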
In \cref{eq:L_portal} $\Phi$ denotes the SM Higgs doublet while $\sigma$ corresponds to the dark Higgs boson singlet. We parameterize both fields in the unitary gauge after the spontaneous symmetry breaking of both the electroweak and the dark gauge symmetries as follows
\begin{equation}
\Phi=\left(0,\left(v_{h}+h\right) / \sqrt{2}\right)^{T}, \quad \sigma=\left(v_{\mathrm{D}}+H_{\mathrm{D}}\right) / \sqrt{2}
\end{equation}
where the SM Higgs vacuum expectation value is $v_{h}=246\gev$, while the corresponding quantity for the dark Higgs field is denoted by $v_{D}$. The portal term $\lambda_{H h_D}|\Phi|^{2}|\sigma|^{2}$ introduces non-diagonal mixing of $h$ and $H_D$, which necessitates a change of basis into the mass eigenstate basis; for details see a recent review\cite{Lebedev:2021xey}. We denote the resulting physical eigenstates by $H$ and $h_D$, respectively.
As discussed above, we assume that the lighter DM component $\eta$ is coupled to both the dark vector $A^\prime$ and the dark Higgs boson $h_D$, while the auxiliary heavy scalar field $\phi$ provides the connection between the two DM components $\chi$ and $\eta$:
\begin{align}
\mathcal{L}_{\mathrm{DS}} \supset &\ \mu_{\chi} |\chi|^2 \phi + \mu_{\eta} |\eta|^2 \phi + (q_{H}^\prime g_D)^2 A^{\prime \mu} A^\prime_\mu |h_D|^2 \nonumber\\
&+ i q_{\eta}^\prime\,g_{D} A^\prime_\mu \left[\eta^{*}\left(\partial^{\mu} \eta\right)-\left(\partial^{\mu} \eta^{*}\right) \eta\right] + (q_{\eta}^\prime g_{D})^2 A^\prime_\mu A^{\prime \mu} |\eta|^2.
\label{eq:LagrDS}
\end{align}
Here, $q_i^\prime$ denotes the charges of particle $i$ under the dark gauge group $U(1)^\prime$. In further discussion, we set $q_H^\prime = q_\eta^\prime=1$. As we will discuss below, invoking two components of DM scalar fields allows one to relax otherwise stringent bounds on heavy WIMP-like scalars coupled to the SM sector via light mediator species, cf. Refs\cite{Ma:2017ucp,Duerr:2018mbd}.
The dark sector Lagrangian also describes the dark Higgs boson potential. It contains the usual mass terms, as well as cubic and quartic interaction terms between the dark Higgs boson and the lighter scalar $\eta$,
\begin{align}
\mathcal{L}_{\mathrm{DS}} \supset &\ \ \mu_{\mathrm{D}}^{2} |\sigma|^2-\frac{1}{2} \lambda_{\mathrm{D}}|\sigma|^4 \nonumber\\
&-m_{\chi}^{2}|\chi|^{2}-m_{\eta}^{2}|\eta|^{2}\nonumber\\
&-\lambda_{h_D \eta}h_D^2|\eta|^{2} -\left(\mu_{h_D \eta} h_D |\eta|^{2}+\mathrm{h.c.}\right).
\end{align}
As a result of the spontaneous breaking of the dark $U(1)^\prime$ gauge symmetry, the dark bosons $A^\prime$ and $h_D$ obtain their masses as
$m_{A^\prime}=g_{D} v_{D}$ and $m_{h_{D}}=\sqrt{\lambda_{D}} v_{D}$. Moreover, the mixing of the dark Higgs boson with the SM Higgs boson can be parameterized by the angle $\theta_{h_D H} \simeq \lambda_{H h_D} v_{\mathrm{D}} v_{h} / m_{H}^{2}$, valid whenever $m_{h_D}\ll m_{H}$, which will be the case in the following.
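As a quick numerical illustration of these tree-level relations, one may evaluate them for the benchmark point used later in \cref{sec:results} (a minimal sketch in Python; the value of $\lambda_D$ is inferred from the assumed masses rather than taken from the text):
\begin{verbatim}
import math

# benchmark values quoted later in the text; lam_D inferred from the masses
g_D, lam_HhD = 5.0, 1e-4
m_Ap, m_hD = 20.0, 0.25          # GeV
v_h, m_H = 246.0, 125.0          # GeV

v_D = m_Ap / g_D                 # from m_A' = g_D * v_D
lam_D = (m_hD / v_D)**2          # from m_hD = sqrt(lam_D) * v_D
theta = lam_HhD * v_D * v_h / m_H**2   # valid for m_hD << m_H

print(f"v_D = {v_D} GeV, lam_D = {lam_D:.1e}, theta = {theta:.1e}")
\end{verbatim}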
In the part of the dark sector charged under the group $U(1)^{\prime\prime}$, a similar dark Higgs mechanism with an additional light scalar $H_D^{\prime\prime}$ could also be present and generate a non-zero mass $m_{A^{\prime\prime}}$ for the corresponding gauge boson. For simplicity, we assume that $m_{A^{\prime\prime}}\ll m_{A^\prime}$ and that this part of the dark sector remains secluded from the SM, while the presence of additional light dark species, not indicated in the equations below, renders the $A^{\prime\prime}$ vector unstable. We will not focus on this part of the dark sector further; however, below we discuss the direct impact of $A^{\prime\prime}$ on the phenomenology of the species charged under the $U(1)^\prime$ group.
The most relevant couplings for our discussion appear after a dark vector field redefinition, $A^\prime \rightarrow A^\prime - \tilde{\epsilon}^\prime\delta\,A^{\prime\prime}$. They are induced by moving to the mass basis and removing the non-canonical kinetic mixing term between the two dark field-strength tensors in \cref{eq:L_portal}. We stress the presence of an additional multiplicative factor $\delta = (m_{A^{\prime\prime}}/m_{A^\prime})^2$ that modifies the field shift with respect to the case where one of the vector fields is massless\cite{Bauer:2018onh,Briannotes}. In the following, we will denote $\tilde{\epsilon} = \tilde{\epsilon}^\prime\delta$ and assume $\tilde{\epsilon}\sim 10^{-6}$ in our benchmark scenarios.
This requires $m_{A^{\prime\prime}}\gtrsim 10^{-3}\,m_{A^\prime}$ so that the $A^{\prime\prime}$ mass lies in the $\mev-\gev$ range.
The aforementioned field redefinition leads to
\begin{equation}
\mathcal{L}_{\textrm{DS}} \supset \frac{1}{2}\,m_{A^{\prime\prime}}^2\,A^{\prime\prime}_\mu A^{\prime\prime\,\mu} - i\,\tilde{\epsilon}g_D\,A^{\prime\prime}_\mu\,[\eta^\ast(\partial^\mu\eta) - (\partial^\mu\eta^\ast)\eta].
\end{equation}
As can be seen, the lighter DM scalar $\eta$ acquires an $\tilde{\epsilon}$-suppressed coupling to the light dark vector $A^{\prime\prime}$. We note that a similar coupling could be obtained for $\tilde{\epsilon}^\prime = 0$ by introducing a tiny (milli)charge of the $\eta$ field under the $U(1)^{\prime\prime}$ group. In this case, the $A^{\prime\prime}$ boson could even be much lighter.
\begin{figure}[t]
\centering
\hspace*{1.2cm}\includegraphics[scale=0.5]{./figures/mass_scheme.pdf}
\caption{
On the left side, a schematic illustration of the mass hierarchy across the energy scales for dark sector particles in the model is shown. The unstable mediators are denoted in dark-red, while the two stable DM species are in black. On the right side, Feynman diagrams for processes leading to DM indirect detection signatures are shown. From top to bottom, these include: the $2\to 3$ annihilation process of $\chi$ DM producing the heavier dark vector $A^\prime$, the loop-induced $A^\prime$ decay, and the subsequent decays of the dark Higgs boson $h_D$ into SM species.
\label{fig:idea}}
\end{figure}
The dark sector spans at least several orders of magnitude in mass, as shown on the left side of \cref{fig:idea}. Even for all the dark charges set to unity, the model is still characterized by a set of $12$ free parameters. In order to organize our discussion and better highlight interesting phenomenological prospects of this scenario, we assume below the following mass hierarchy in the BSM sector of the model
\begin{equation}
(m_{\phi}\gg)\ m_{\chi} > m_{\eta}>m_{A^\prime}> m_{h_D}>2m_{f}\ \ \textrm{and}\ \ m_{A^{\prime\prime}}\sim m_{h_D}.
\label{eq:mass_scheme}
\end{equation}
With this assumption, we find below that the dominant DM component is the heavier scalar field $\chi$, which decouples from the thermal plasma earlier than the lighter species $\eta$. In particular, the decoupling of $\chi$ proceeds via annihilations through an exchange of the intermediate heavy scalar, $\chi\chi\to (\phi^\ast)\to \eta\eta$. In addition, $2\to 3$ annihilation processes are possible with an outgoing dark vector emitted from the $\eta$ leg, $\chi\chi\to (\phi^\ast)\to \eta\eta A^\prime$. The latter process will play a crucial role in our discussion of ID observables. On the right side of \cref{fig:idea}, we show the respective Feynman diagram along with other key processes leading to indirect signals of DM in our model.
As can be seen, the dark photon produced in the $2\to 3$ process subsequently decays into the dark Higgs boson and the lighter dark vector, $A^\prime\to h_D A^{\prime\prime}$, which is kinematically allowed; cf. \cref{eq:mass_scheme}. This decay takes place via a radiative process with the $\eta$ particle exchanged in the triangle loop.\footnote{Note that other loop-induced decays into the two lighter dark vectors, $A^\prime\to A^{\prime\prime}A^{\prime\prime}$, or two scalars, $A^\prime\to h_D(h_D\ \textrm{or}\ H)$, are excluded by the scalar QED analog of the Furry theorem that affects loop diagrams with an odd number of external vector fields, cf. Ref.\cite{Peskin:1995ev}. Decays of $A^\prime$ into a pair of the SM Higgs and the light dark Higgs boson would, either way, correspond to only narrow regions of the parameter space of the model studied in \cref{sec:results}. A similar process involving two dark Higgs bosons in the final state is also excluded at a more general level, as it would involve an initial state particle with $J=1$ and two identical real scalar bosons in the final state that, due to Bose statistics, must have even $J$.} As a result, the decay width is naturally suppressed and $A^\prime$ may have a very large lifetime of astrophysical relevance,
\begin{equation}
c\tau_{A^\prime} \simeq 1~\textrm{kpc} \left(\frac{1}{g_D}\right)^2 \left(\frac{10^{-6}}{\tilde{\epsilon}}\right)^2 \left(\frac{4\times 10^{-6}}{\lambda_{h_D \eta}}\right)^2\left(\frac{m_{\eta}}{150 \gev}\right)^4 \left(\frac{10 \gev}{m_{A^\prime}}\right)^5\ .
\label{eq:ctauAprime}
\end{equation}
The approximate result in \cref{eq:ctauAprime} is derived from a full expression given in \cref{eq:DP_DHDH} under the assumption that $m_{\eta}\gg m_{A^\prime}\gg m_{h_D}$. The surprising dependence of $c\tau_{A^\prime}$ on only two powers of the coupling $g_D$ is due to the fact that, after the spontaneous symmetry breaking in the dark sector, the coupling $\mu_{\eta \eta h_D}$ between the $\eta$ particle exchanged in the loop and the dark Higgs boson is proportional to the dark vacuum expectation value, $\mu_{\eta \eta h_D} \propto v_D \propto m_{A^\prime}/g_D$. This partially cancels the dependence on $g_D$ while introducing additional powers of $m_{A^\prime}$ in the expression. We provide a complete list of decay width and annihilation cross sections relevant for our discussion in \cref{sec:sec_formulae}.
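For orientation, the scaling in \cref{eq:ctauAprime} is simple to evaluate numerically; a short sketch (our own illustration, not a substitute for the full loop expression in \cref{eq:DP_DHDH}) reads:
\begin{verbatim}
def ctau_Aprime_kpc(g_D, eps, lam, m_eta, m_Ap):
    """Approximate c*tau of A' in kpc, cf. Eq. (eq:ctauAprime);
    valid for m_eta >> m_A' >> m_hD."""
    return ((1.0 / g_D)**2 * (1e-6 / eps)**2 * (4e-6 / lam)**2
            * (m_eta / 150.0)**4 * (10.0 / m_Ap)**5)

# benchmark used later: g_D = 5, eps_tilde = 1e-6, lam_hDeta = 4e-7
print(ctau_Aprime_kpc(5.0, 1e-6, 4e-7, 150.0, 20.0), "kpc")
# the decay length in the Galactic frame is larger by gamma*beta
\end{verbatim}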
As can be seen from \cref{eq:ctauAprime}, the dark vector can travel galactic-scale distances before decaying and producing visible signals through subsequent decays of the dark Higgs bosons, $h_D\to f\bar{f}$, where $f\bar{f} = e^+e^-$, $\mu^+\mu^-$, or hadrons obtained from quark pairs $q\bar{q}$. We require $h_D$ to decay before Big Bang Nucleosynthesis (BBN) in the early Universe, \ie, $\tau_{h_D}\lesssim 0.1\second$.
In fact, the $h_D$ lifetime is often within the reach of intensity frontier searches for light new physics.
Before we discuss phenomenological aspects of this scenario, it is worthwhile to comment further on the vanishing kinetic mixing terms in \cref{eq:L_portal} that are proportional to $\epsilon^\prime$ and $\epsilon^{\prime\prime}$. We note that, in general, even if there is no such kinetic mixing between the hypercharge and dark-photon field-strength tensors at the tree level, it can be generated at the loop level\cite{Holdom:1985ag}, provided that charge conjugation symmetry is also broken in the dark sector; for details see a recent review\cite{Gherghetta:2019coi}. In order to obtain such a symmetry breaking, one can either introduce a new particle (or several new particles) charged under both the SM and dark gauge groups, or else introduce interactions in the dark sector that break this symmetry, in analogy with the SM, where the weak interactions are responsible for the violation. Otherwise, there can be no perturbative generation of the kinetic mixing term\cite{Gherghetta:2019coi}.
In the model considered here there is no such symmetry breaking and, therefore, non-zero kinetic mixing could only appear at the tree level. This can also be seen directly: for each loop diagram that could a priori contribute to kinetic mixing there exists the same diagram but with a reversed particle flow, corresponding to virtual antiparticles, and the contributions of the two diagrams cancel pairwise. In the following, we will therefore assume a negligible kinetic mixing which, in light of the discussion above, is not regenerated perturbatively. In this way we also avoid stringent terrestrial and astrophysical bounds on light dark vector species\cite{Fabbrichesi:2020wbt}, as well as possible BBN constraints\cite{Berger:2016vxi}.
\section{Relic density\label{sec:relic}}
We begin the discussion of specific features of our model by examining the relic density of both DM components, the dominant one $\chi$ and the minor one $\eta$ -- for which we require $\Omega_{\chi}h^2+\Omega_{\eta}h^2\simeq 0.12$\cite{Planck:2018vyg} -- as well as the abundance of the unstable long-lived dark vector $A^\prime$. To this end, we numerically solve a set of Boltzmann equations that extend the familiar assisted freeze-out mechanism discussed for two-component dark sectors\cite{Belanger:2011ww} to compute the relic densities of three dark species:\footnote{We note that, in the absence of a direct coupling between $\chi$ and $A^\prime$, the contribution to the Boltzmann equation for $Y_{A^\prime}$ corresponding to the $\chi-A^\prime$ interactions appears only at next-to-leading order and we neglect it in the following. Similarly, given the suppressed couplings of the light vector $A^{\prime\prime}$ to the dark sector species charged under the $U(1)^\prime$ group, its impact on their relic density is also subdominant and we neglect it as well. As mentioned above, we assume that the $A^{\prime\prime}$ field itself is unstable and decays into much lighter dark species, which is not relevant for the discussion of the $\chi$, $\eta$, and $A^\prime$ abundances. We solve the Boltzmann equations assuming a partial wave decomposition of the thermally averaged annihilation cross sections for each process, $\langle \sigma v\rangle$. Due to the large mass hierarchies among the three species, the times of their thermal freeze-out are well separated, which means that it is not essential to keep the full thermal dependence of $\langle \sigma v\rangle$.}
\begin{equation}
\begin{aligned}
\frac{d Y_{\chi}}{d x} &=-\frac{\lambda_{\chi}}{x^{2}}\left(Y_{\chi}^{2}-\frac{Y_{\eta}^{2}}{\left(Y_{\eta}^{\mathrm{eq}}\right)^{2}} \left(Y_{\chi}^{\mathrm{eq}}\right)^{2} \right), \\
\frac{d Y_{\eta}}{d x} &=-\frac{\lambda_{\eta}}{x^{2}}\left(Y_{\eta}^{2}-\left(Y_{\eta}^{\mathrm{eq}}\right)^{2}\frac{Y_{A^\prime}^{2}}{\left(Y_{A^\prime}^{\mathrm{eq}}\right)^{2}}\right)+\frac{\lambda_{\chi}}{x^{2}}\left(Y_{\chi}^{2}-\frac{Y_{\eta}^{2}}{\left(Y_{\eta}^{\mathrm{eq}}\right)^{2}} \left(Y_{\chi}^{\mathrm{eq}}\right)^{2} \right),\\
\frac{d Y_{A^\prime}}{d x} &=\frac{\lambda_{\eta}}{x^{2}}\left(Y_{\eta}^{2}-\left(Y_{\eta}^{\mathrm{eq}}\right)^{2}\frac{Y_{A^\prime}^{2}}{\left(Y_{A^\prime}^{\mathrm{eq}}\right)^{2}}\right)-\frac{\lambda_{A^\prime}}{x^{2}}\left(Y_{A^\prime}^{2}-\left(Y_{A^\prime}^{\mathrm{eq}}\right)^{2} \right),
\end{aligned}
\label{eq:boltzmann_three}
\end{equation}
where $Y_i$ is the respective yield, $i = \chi, \eta, A^\prime$, and $x = m_{\eta}/T$. We stress again that we assume the lifetime of the dark Higgs boson $h_D$ to be short enough for it to decay before the era of BBN. A calculation of its abundance is, therefore, not needed in further analysis. The parameters $\lambda_i$ depend on the annihilation cross sections of the processes that play the dominant role in determining the abundances in the dark sector, given the mass ordering shown in \cref{eq:mass_scheme}:
\begin{dmath}[labelprefix={eq:}]
{\lambda_{\chi} \equiv \frac{s(m_{\eta})}{H(m_{\eta})}\left\langle\sigma_{\chi \bar{\chi} \rightarrow \eta \bar{\eta}} v\right\rangle \simeq \frac{1.32\,g_{*s}(m_{\eta})}{\sqrt{g_{*}(m_{\eta})}}m_{\eta} m_{\textrm{Pl}} \left\langle\sigma_{\chi \bar{\chi} \rightarrow \eta \bar{\eta}} v\right\rangle},\\
{\lambda_{\eta} \equiv \frac{s(m_{\eta}) }{H(m_{\eta})}\left\langle\sigma_{\eta \bar{\eta} \rightarrow A^\prime A^\prime} v\right\rangle \simeq \frac{1.32\,g_{*s}(m_{\eta})}{\sqrt{g_{*}(m_{\eta})}}m_{\eta} m_{\textrm{Pl}} \left\langle\sigma_{\eta \bar{\eta} \rightarrow A^\prime A^\prime} v\right\rangle,}\\
{\lambda_{A^\prime} \equiv \frac{s(m_{\eta}) }{H(m_{\eta})}\left\langle\sigma_{A^\prime A^\prime \rightarrow h_D h_D} v\right\rangle \simeq \frac{1.32\,g_{*s}(m_{\eta})}{\sqrt{g_{*}(m_{\eta})}}m_{\eta} m_{\textrm{Pl}} \left\langle\sigma_{A^\prime A^\prime \rightarrow h_D h_D} v\right\rangle,}
\end{dmath}
where $m_{\textrm{Pl}}=2.44 \times 10^{18}\gev$ is the reduced Planck mass, $s\equiv s(T)$ is the entropy density and $H\equiv H(T)$ is the Hubble rate. The effective number of degrees of freedom for the entropy and energy densities of the thermalized SM-DS plasma at temperature $T$ is denoted by $g_{*s}(T)$ and $g_{*}(T)$, respectively. The equilibrium comoving yield of each particle is defined as $Y^{\mathrm{eq}}_i=n^{\mathrm{eq}}_i/s$, and it explicitly reads:
\begin{equation}
Y_{i}^{\mathrm{eq}}(x)=\frac{g_i}{g_{*s}(x)} \frac{45}{4 \pi^{4}} (r_i\,x)^{2} K_{2}[r_i\,x].
\end{equation}
Here, $r_i=m_i/m_{\eta}$ and the number of internal degrees of freedom of particle $i$ is denoted by $g_i$. The resulting relic density is obtained from $\Omega_{i} h^2=(\rho_{i}/\rho_{\mathrm{crit}})h^2=(s_0\,Y_{i,0}\,m_{i}/\rho_{\mathrm{crit}})h^2$, where $s_0$ is the present-day entropy density and $Y_{i,0}$ is the final yield of the dark species $i$ after its freeze-out.
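To illustrate the structure of \cref{eq:boltzmann_three}, a compact numerical sketch is given below. It is meant only as an illustration, not our full analysis code: the $\lambda_i$ are frozen at constant values of roughly realistic magnitude rather than computed from the model couplings, $g_{*s}$ is held fixed, and the equilibrium ratios are evaluated with exponentially scaled Bessel functions to avoid numerical under- and overflow:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kve   # exponentially scaled Bessel K

r = {"chi": 10.0, "eta": 1.0, "Ap": 20.0/150.0}   # m_i / m_eta
g_dof = {"chi": 2.0, "eta": 2.0, "Ap": 3.0}
g_star_s = 90.0                                   # frozen, for simplicity
lam = {"chi": 1e13, "eta": 1e14, "Ap": 1e16}      # illustrative magnitudes

def Yeq(x, sp):
    z = r[sp] * x   # equilibrium yield with K_2 = kve * exp(-z)
    return g_dof[sp]/g_star_s * 45.0/(4.0*np.pi**4) * z**2 * kve(2, z) * np.exp(-z)

def eq_ratio(x, num, den):
    # Yeq_num / Yeq_den, evaluated without under/overflow
    zn, zd = r[num]*x, r[den]*x
    return (g_dof[num]/g_dof[den]) * (zn/zd)**2 * kve(2, zn)/kve(2, zd) * np.exp(zd - zn)

def rhs(x, Y):
    Ychi, Yeta, YAp = Y   # collision terms of Eq. (eq:boltzmann_three)
    C_chi = lam["chi"]/x**2 * (Ychi**2 - (Yeta*eq_ratio(x, "chi", "eta"))**2)
    C_eta = lam["eta"]/x**2 * (Yeta**2 - (YAp*eq_ratio(x, "eta", "Ap"))**2)
    C_Ap  = lam["Ap"]/x**2 * (YAp**2 - Yeq(x, "Ap")**2)
    return [-C_chi, -C_eta + C_chi, C_eta - C_Ap]

x0, x1 = 1.0, 1e3
Y0 = [Yeq(x0, sp) for sp in ("chi", "eta", "Ap")]
sol = solve_ivp(rhs, (x0, x1), Y0, method="Radau", rtol=1e-8, atol=1e-30)
print("final yields (chi, eta, A'):", sol.y[:, -1])
\end{verbatim}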
In the case of the dominant heavy DM component $\chi$, additional contributions to the total relic density may come from the $2\to 3$ processes $\chi\bar{\chi}\to \eta\bar{\eta} A^\prime$. We take this into account when presenting the results below by properly tuning the $\mu_{\chi}$ and $\mu_{\eta}$ parameters in \cref{eq:LagrDS}, as well as by modifying the heavy scalar mass $m_\phi$, such that the total pair-annihilation cross section of $\chi$s assumes the thermal value. In the case of the relic abundance of $\eta$, the possible annihilation modes are into the $A^\prime A^\prime$ and $h_D h_D$, as well as the $A^\prime h_D$ final states. In practice, the annihilation mode into a pair of dark vectors dominates for typically large values of the dark coupling constant, $g_D\gtrsim 0.1$. The coupling $g_D$ takes such values in the cosmologically allowed region of the parameter space because it also determines the annihilation rate of dark photons into $h_D h_D$ pairs, and we require the cross section in the dark sector $\langle \sigma v\rangle_{A^\prime A^\prime\to h_D h_D}$ to be up to a few orders of magnitude larger than the thermal value. This suppresses the abundance of $A^\prime$ and allows one to avoid stringent cosmological bounds, cf. \cref{sec:constraints}. As a result, in the allowed region of the parameter space of the model we typically find
\begin{equation}
\Omega_{\chi}h^2\simeq 0.12 \gg \Omega_{\eta}h^2 \sim \Omega_{A^\prime}h^2\ .
\end{equation}
In the left panel of \cref{fig:relic_density_three_component_benchmarks} we illustrate the evolution of the yield $m_i\,Y_i$ for all three dark species as a function of $x$. We assume $m_{\chi} = 1.5~\tev$, $m_{\eta} = 150~\gev$, $m_{A^\prime} = 20~\gev$, and $m_{h_D}= 250~\mev$. We also fix the coupling constants in the Higgs sector of the model as $\lambda_{Hh_D} = 10^{-4}$ and $\lambda_{h_D\eta} = 4\times 10^{-7}$, cf. the discussion in \cref{sec:results} for the justification of this choice.
The heavy scalar $\phi$ mass and the coupling constants $\mu_{\chi}$ and $\mu_{\eta}$ are chosen such that the annihilation cross section $\langle\sigma v\rangle_{\chi\chi\to \eta\eta}$ achieves the thermal value. Due to the assumed mass hierarchy, the heavier DM species freezes out almost independently of the other two, resulting in $\Omega_{\chi}h^2\simeq 0.12$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.562]{./figures/relic_density_example_evolution.pdf}
\hfill
\includegraphics[scale=0.34]{./figures/mY_vs_gD.pdf}
\caption{Left: Comoving energy densities, $(mY)_i$, for the two DM components and the unstable dark vector undergoing thermal freeze-out, obtained by numerically solving the set of Boltzmann equations in \cref{eq:boltzmann_three}. We fixed the dark sector masses at the following values: $m_{\chi}=1.5~\tev$, $m_{\eta}=150~\gev$, $m_{A^\prime}=20~\gev$, and $m_{h_D}=250~\mev$. We also fixed the coupling constants $g_D=5$, $\lambda_{Hh_D} = 10^{-4}$, and $\lambda_{h_D\eta} = 4\times 10^{-7}$, while the values of $\mu_{\chi}$, $\mu_{\eta}$, and $m_\phi$ are chosen such that $\langle\sigma v\rangle_{\chi\chi\to \eta\eta}$ is at the thermal level and $\Omega_{\chi}h^2\simeq 0.12$, as indicated with the horizontal black dotted line. The brown horizontal line denotes the BBN limit on the late-time decaying $A^\prime$ with $\tau_{A^\prime}\lesssim 10^{12}\second$. Right: A schematic plot of the unstable dark vector yield, $(mY)_{A^\prime}$, as a function of $g_D$ is shown with the solid black line. The heavy DM yield, $(mY)_{\chi}$, is also shown with the horizontal black dotted line. We also present, with gray-shaded regions, cosmological constraints from BBN and the CMB, as well as a perturbativity bound. The DM indirect detection signal rates grow towards larger values of $g_D$, which are also preferred by cosmology, as indicated in the figure.
\label{fig:relic_density_three_component_benchmarks}
}
\end{figure}
Instead, for sufficiently large $g_D$ the $A^\prime$ abundance may become strongly suppressed with respect to the total DM relic density. For the assumed values of the model parameters and $g_D=5$, as shown in the figure, we obtain $\Omega_{A^\prime}\lesssim 10^{-5}\,\Omega_{\chi}$. With such a small abundance, one can avoid stringent BBN bounds even for very long-lived dark vectors. Importantly, one can see that in this case the predicted abundance of the lighter DM species $\eta$ is even smaller than that of the unstable dark vector, $(m Y)_{\eta}<(m Y)_{A^\prime}$. This is due to the fact that the dominant annihilation modes of both of these particles depend on $g_D$ in a way that guarantees $(\sigma v)_{\eta \eta\to A^\prime A^\prime} > (\sigma v)_{A^\prime A^\prime\to h_Dh_D}$ for $m_{\eta}\gtrsim m_{A^\prime}$, cf. \cref{eq:SBSBApAp,eq:DPDP_DHDH}.
The yield of both $A^\prime$ and $\eta$ grows with decreasing dark coupling constant $g_D$, e.g., $Y_{A^\prime}\propto 1/\langle\sigma v\rangle_{A^\prime A^\prime\to h_Dh_D}\propto g_D^{-4}$, cf. \cref{eq:DPDP_DHDH}. We illustrate this in the right panel of \cref{fig:relic_density_three_component_benchmarks}, which shows $(mY)$ as a function of $g_D$. Here, we also schematically present the BBN bounds that are violated for too low values of the coupling constant, which predict both an increased abundance of the late-time decaying $A^\prime$ and its large lifetime, $\tau_{A^\prime}\propto 1/g_D^{2}$, cf. \cref{eq:ctauAprime}. We defer the discussion of a more precise implementation of the BBN bounds in our analysis to \cref{sec:constraints}. CMB data constrain excessively large lifetimes for which $A^\prime$-induced electromagnetic energy injection could happen around recombination; this excludes even very suppressed values of the dark vector yield, if it survives until this time. As can be seen in the figure, both cosmological constraints become weaker for larger $g_D$. Larger $g_D$ simultaneously increases the sub-thermal $2\to 3$ annihilation cross section $\chi\chi\to \eta\eta A^\prime$ that is responsible, as we shall see below, for DM ID signals in our model. In the following, we will focus on regions of the parameter space of the model with $g_D\gtrsim 0.1$.
\paragraph{Correcting for the temperature difference between the SM and the dark sector} In the scenario of our interest, the dark sector couples to the SM via the light mediator particle $h_D$, while some of the stable or long-lived particles can freeze out late. In this case, it is important to take into account the impact of a possible early kinetic decoupling on the abundances of the dark species\cite{Duch:2017nbe,Binder:2017rgn,Bringmann:2020mgx}. We model this effect following Ref.\cite{Feng:2008mu} and implement the difference between the dark sector temperature $T_{\textrm{DS}}$ and the SM temperature $T$ in the above expressions determining the relic densities.
One has to note that this simple approach relies on a few assumptions which are not necessarily valid in our case. In particular, we assume that the chemical potential of $h_D$ vanishes until decoupling due to its interactions with the SM bath. In Ref.\cite{Bringmann:2020mgx}, however, it was shown that whenever the mass of the mediator is comparable to the mass of the DM particle, this can lead to underestimating the DM abundance by even more than an order of magnitude. On the other hand, for the mass hierarchy $m_{\mathrm{DM}}\gg m_{\mathrm{med}}$, this effect becomes negligible and we expect it to play a minor role in our discussion, cf. \cref{eq:mass_scheme}. We stress that the effect of the early kinetic decoupling affects only fractions of the available parameter space of our model presented in \cref{sec:results}, for dark Higgs boson masses close to the di-electron threshold, $m_{h_D}\gtrsim 2m_e$.
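A minimal sketch of the temperature rescaling adopted here, under the standard assumptions of separately conserved entropies after kinetic decoupling at $T_{\textrm{dec}}$, a constant number of dark sector degrees of freedom in the relevant range, and $T_{\textrm{DS}}=T$ at decoupling, reads:
\begin{verbatim}
def xi(T, T_dec, g_star_s_vis):
    """T_DS / T_SM for T < T_dec from separate entropy conservation,
    assuming constant dark sector dof and xi(T_dec) = 1."""
    return (g_star_s_vis(T) / g_star_s_vis(T_dec))**(1.0 / 3.0)

# the Boltzmann equations are then solved with the dark species'
# equilibrium yields evaluated at x_DS = m_eta / (xi * T)
\end{verbatim}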
\section{Current and future constraints from astrophysics, cosmology and colliders\label{sec:constraintsandfuture}}
Although most of the dark sector species present in the model considered here remain secluded from the SM, their indirect couplings via a light dark Higgs boson portal $h_D$ still induce experimental constraints. Below, we briefly summarize the resulting current bounds on our scenario. They will define allowed regions of the parameter space of the model, as will be discussed in \cref{sec:results}. We then discuss future experimental and observational prospects of the considered scenario.
\subsection{Current bounds\label{sec:constraints}}
\paragraph{Accelerator-based searches} Light dark Higgs bosons are certainly among the primary targets of intensity frontier searches for sub-$\gev$ new physics, as they correspond to one of only a few available simple renormalizable portals to the dark sector. Based on the available data, one can put upper bounds on the mixing angle $\theta_{h_D H}$ between $h_D$ and the SM Higgs boson $H$. Of particular relevance to our study are constraints obtained from E949 data on rare kaon decays\cite{E949:2008btt} and LHCb searches for rare $B$ meson decays\cite{LHCb:2015nkv,LHCb:2016awg}, as well as results of beam-dump searches in the CHARM\cite{CHARM:1985anb}, MicroBooNE\cite{MicroBooNE:2021usw}, and NA62\cite{NA62:2021zjw} experiments. We implement them following Refs\cite{Beacham:2019nyx,Winkler:2018qyg,Batell:2019nwo,Batell:2020vqn,Lanfranchi:2020crw}.
\paragraph{Astrophysical and cosmological bounds} Additional constraints on light dark scalars can be obtained from their possible impact on astrophysical and cosmological observations. In particular, below we employ applicable bounds that arise due to possible modifications of the supernovae cooling rate and the neutrino emission from SN1987A\cite{Chang:2018rso}, as well as BBN constraints on late-time decaying unstable BSM species\cite{Berger:2016vxi}.
Important bounds are also related to the residual relic abundance of the potentially very long-lived dark vector $A^\prime$. We implement them following Ref.\cite{Poulin:2016anj} for the BBN constraints on late-time electromagnetic energy injection. We also use the results of Ref.\cite{Lucca:2019rxf} for the CMB bounds based on the combined data from the Planck\cite{Planck:2018vyg} and COBE/FIRAS\cite{Fixsen:1996nj} satellite missions. In the latter case, we follow Refs\cite{Slatyer:2015jla,Slatyer:2015kla} and modify the CMB constraints from Ref.\cite{Lucca:2019rxf} by taking into account a finite fraction of the electromagnetic energy transferred to the intergalactic medium, $f_{\textrm{eff}}<1$, characteristic of the different cascade annihilation SM final states present in the model. The final shape of the cosmological bounds derived this way depends on the interplay between the $A^\prime$ lifetime, cf. \cref{eq:ctauAprime}, and its relic abundance obtained by solving the Boltzmann equations discussed in \cref{sec:relic}.
\medskip
The rich dark sector of our model also offers very good discovery prospects in various future searches targeting different BSM species. Below, we first discuss the expected constraints for the light dark Higgs boson, which are related to the upcoming intensity frontier searches for LLPs. We then move to complementary DM ID experiments that will probe the heavier part of the secluded BSM sector of the model. We finally focus on future prospects for detecting signatures of a very long-lived $A^\prime$ in CMB observations.
\subsection{Intensity frontier searches for light dark Higgs boson}
As discussed above, rare meson decays provide stringent bounds on light, sub-$\gev$ dark Higgs bosons from various past accelerator-based experiments. Similar searches in the future are expected to further constrain the available parameter space of the model. In the dark Higgs boson mass range of our interest, important bounds on light dark Higgs bosons below the di-muon threshold, $m_{h_D}<2\,m_\mu$, are expected to come from rare kaon decays in the proposed KLEVER experiment\cite{KLEVERProject:2019aks,Beacham:2019nyx}, a possible upgrade of the KOTO detector, referred to as KOTO step-2\cite{Nomura:2020oor}, and the next run of the NA62 experiment\cite{Bondarenko:2019vrb}. Further constraints on both light and somewhat heavier scalars, below the kaon threshold for $h_D$ production, $m_{h_D} < m_K-m_\pi$, can be obtained by the Fermilab Short-Baseline Neutrino experiments. We present expected bounds from the NuMI-ICARUS and BNB-SBND detectors following Ref.\cite{Batell:2019nwo}.
For a heavier dark Higgs boson, up to a mass of order several \gev, the most stringent future constraints are expected to come from displaced visible decays of $h_D$s produced in beam-dump or collider experiments. We present below future sensitivity reach contours for the proposed SHiP experiment\cite{Alekhin:2015byh} at CERN SPS, as well as the LHC searches at Codex-b ($300~\textrm{fb}^{-1}$)\cite{Gligorov:2017nwh,Aielli:2019ivi}, FASER 2\cite{Feng:2017uoz,FASER:2018eoc}, and MATHUSLA\cite{Chou:2016lxi,Curtin:2018mvb}. In addition, we also show expected sensitivity due to the search for dark scalars in rare $B$ meson decays
at Belle-II\cite{Filimonova:2019tuy,Kachanovich:2020yhi}.
We note that direct searches for the heavier dark species at the LHC will be very challenging in our model, since we assume that they couple to the SM only via the $h_D$ portal and the corresponding mixing angle with the SM Higgs $H$ is suppressed. This also leads to typically very low BSM decay rates of $H$, which will in this case be SM-like, with a suppressed invisible branching fraction $\mathcal{B}(H\to \textrm{inv.})<0.1\%$.
\subsection{Dark matter detection\label{sec:DMID}}
The parameters of our model can also be probed in DM experiments. This is especially the case for indirect searches, whose discovery prospects rely on annihilation rates that can depend only on unsuppressed couplings present in the secluded dark sector. In our case, however, further caveats to this scenario appear that lead to distinct phenomenological features, as discussed below. On the other hand, future DD searches remain much less promising. While the lighter DM species $\eta$ couples to the SM via $h_D$, its suppressed couplings to light quarks and negligible abundance typically yield very low expected signal rates in DD experiments. This is also true for the heavier DM species $\chi$, for which the scattering rates off nuclei appear only via intermediate $\eta$ and $\phi$ fields.
\paragraph{Indirect detection of $\chi$ DM} ID signatures in our model can arise from annihilations of both DM species, the heavier dominant scalar $\chi$ and the lighter one $\eta$. The latter process, however, is highly suppressed by the tiny abundance of $\eta$. This is not the case for annihilations of the dominant DM component. At the leading order, though, the main annihilation channel is purely into invisible final states, $\chi\chi\to \eta\eta$. Instead, the dominant contribution to ID signal rates appears at the next-to-leading order due to the $2\to 3$ process $\chi\chi\to \eta\eta A^\prime$ shown in \cref{fig:idea}. This is especially important for a growing value of the dark coupling constant $g_D$, which increases the chance of final-state $A^\prime$-strahlung off the $\eta$ leg, cf. \cref{eq:MSASA} and \cref{eq:brehm}.\footnote{The dark vectors could also be produced in cross interactions $\chi\eta\to \chi\eta A^\prime$ which, however, play a subdominant role with respect to the $2\to 3$ annihilations of the $\chi$s due to the suppressed relic density of $\eta$.} Notably, as we have already argued above, larger values of $g_D$ are also preferred by cosmological and collider constraints.
In order to examine possible DM ID signatures of the less-simplified scenario considered here, we first note that it has at least three distinct features that differentiate it from the simplest secluded DM models. Firstly, it relies on the $2\to 3$ annihilation process producing a continuous (not monochromatic) spectrum of the meta-stable $A^\prime$ mediator energies. Secondly, it employs multi-step cascade decays contributing to the final photon flux, $A^\prime\to h_DA^{\prime\prime}$, followed by $h_D\to 4e, 4\mu, \textrm{hadrons} \to \gamma$, cf. Refs\cite{Mardon:2009rc,Elor:2015tva}. Thirdly, it allows for interesting non-local effects in DM ID, present for very long-lived $A^\prime$\cite{Rothstein:2009pm,Kim:2017qaw,Chu:2017vao,Gori:2018lem,Agashe:2020luo}. In particular, the first two features result in a smearing of the spectrum of the final state SM particles, which can be further non-trivially changed by the non-local effects of the third feature. As a result, in the model one effectively avoids bounds from searches for peaked spectral features in the positron data, cf. Refs\cite{Bergstrom:2013jra,John:2021ugy}.
A particularly promising way of probing these scenarios is via searches for a diffuse DM-induced $\gamma$-ray flux. The corresponding differential flux of photons coming to a detector from the angular region in the sky $\Delta\Omega$ is given by
\begin{equation}
\left(\frac{d\Phi}{dE_\gamma}\right)_{\textrm{standard}} = \frac{1}{8\pi}\frac{\langle\sigma v\rangle}{m_{\chi}^2}\int_{\Delta\Omega}d\Omega\,\int_{\textrm{l.o.s.}}\rho_{\chi}^2\,ds\,\left(\frac{dN_\gamma}{dE_\gamma}\right)_{\chi},
\label{eq:flux}
\end{equation}
where the subscript on the left-hand side indicates that this result corresponds to the ``standard'' regime, in which the long-lived vector mediator $A^\prime$ decays after traveling distances much shorter than a kpc. In our analysis below, we employ the Einasto DM profile,
\begin{equation}
\rho_\chi\equiv\rho_{\text {Einasto}}(r)=\rho_{s} \exp \left(-\frac{2}{\alpha}\left[\left(\frac{r}{r_{s}}\right)^{\alpha}-1\right]\right),
\label{eq:Einasto}
\end{equation}
where $\rho_{s}=0.079\gev/\cmeter^3$, $r_{s}=20~\mathrm{kpc}$, and $\alpha=0.17$\cite{Pieri:2009je}.
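For concreteness, the line-of-sight integral entering \cref{eq:flux} can be evaluated with a short numerical sketch like the one below (our own illustration; the region of interest is simplified to a cone of half-angle $\theta_{\max}$ around the Galactic Center and the integration is truncated at $s_{\max}=100$~kpc):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

rho_s, r_s, alpha = 0.079, 20.0, 0.17   # GeV/cm^3, kpc
R0, KPC_CM = 9.0, 3.086e21              # Earth-GC distance (kpc), kpc in cm

def rho_einasto(r):
    return rho_s * np.exp(-2.0/alpha * ((r/r_s)**alpha - 1.0))

def J_factor(theta_max_deg, s_max=100.0):
    """J = int dOmega int ds rho^2 in GeV^2 cm^-5 for a cone around the GC."""
    def los(theta):   # line-of-sight integral at fixed angle from the GC
        r = lambda s: np.sqrt(R0**2 + s**2 - 2.0*R0*s*np.cos(theta))
        return quad(lambda s: rho_einasto(r(s))**2, 0.0, s_max)[0] * KPC_CM
    integrand = lambda th: 2.0*np.pi*np.sin(th)*los(th)
    return quad(integrand, 0.0, np.radians(theta_max_deg))[0]

print(f"J(1 deg) ~ {J_factor(1.0):.2e} GeV^2 cm^-5")
\end{verbatim}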
In \cref{eq:flux}, the photon spectrum from $2\to 3$ annihilations of $\chi$ and subsequent three-body and cascade decays of $A^\prime$ is denoted by $(dN_\gamma/dE_\gamma)_{\chi}$, cf. \cref{app:spectrum} for further discussion. Importantly, the above chain of processes results in a much softer photon spectrum than one could expect based on the DM mass $m_{\chi}$, which is set above the $\tev$ scale in the discussion below. In particular, a typical $A^\prime$ energy after the initial $\chi$ annihilation is of the order of $E_{A^\prime}\sim (0.1 - 0.2)\, m_{\chi}$. The peak energy of photons produced in $A^\prime$ decays is then further shifted towards smaller energies, $E_\gamma\lesssim 100~\gev$, although the spectrum contains a high-energy tail extending towards larger $E_\gamma$.
In \cref{sec:results}, we will present the expected sensitivity reach of the forthcoming DM ID experiment in the $\tev$ mass regime, namely the Cherenkov Telescope Array (CTA)\cite{CTAConsortium:2017dvg}, cf. also Refs\cite{Hryczuk:2019nql,CTA:2020qlo}. Instead of performing full detector simulations for the scenario of our interest, which is beyond the scope of our study, we illustrate the impact of the CTA on the parameter space of the model with some approximate expected bounds. We obtain them by employing the CTA sensitivity plots for the secluded DM regime following Ref.\cite{Siqueira:2021lqj}, which we, however, modify by taking into account the characteristic $A^\prime$ energies obtained after the $2\to 3$ annihilation process. We expect this simplified procedure to properly encompass the most essential effects in our study. Notably, the applicable CTA bounds for secluded DM depend only mildly on the DM mass in the regime $100~\gev \lesssim m_{\textrm{DM}}\lesssim 1~\tev$.
We treat in a similar way the past bounds from the Fermi-LAT search for DM-induced signals from dwarf spheroidal galaxies (dSphs) and present them following Ref.\cite{Profumo:2017obk}. We note that even stronger bounds could be derived based on Fermi-LAT searches towards the Galactic Center (GC)\cite{Abazajian:2020tww}. These could improve the current constraints by up to a factor of a few in $\langle\sigma v\rangle\sim g_D^2$ for the $2\to 3$ process, although their impact could be ameliorated both by the smeared photon spectrum and by a possible non-standard morphology of the signal discussed below.
\paragraph{Non-local effects in $\gamma$-ray DM ID for $\chi$} The presence of an exceptionally long-lived vector mediator $A^\prime$ has further important consequences for the ID of DM in our model. Signals can be modified by at least two additional non-local effects: enhanced DM-induced signal rates from extensive regions outside the GC, and decreased signals from individual small regions, e.g., around the dSphs.
In the non-local case, the DM-induced $\gamma$-ray flux is partially driven outside the dense region around the GC by a very long-lived mediator $A^\prime$. As a result, both the energy spectrum and morphology of the signal might be affected. The respective photon flux reads
\begin{align}
\label{eq:fluxnonlocal}
\left(\frac{d\Phi}{dE_\gamma}\right)\Bigg|_{\textrm{non-local}} &= \sum_{\textrm{bins}\,E_{A^\prime}}\Bigg[\frac{1}{8\,\pi}\frac{\langle\sigma v\rangle_{E_{A^\prime}}}{m_{\chi}^2}\frac{1}{\bar{d}_{A^\prime}}\,\int_{\Delta\Omega}{d\Omega}\int_{\textrm{l.o.s.}}{ds}\int_{V_\chi}{d^3\vec{r}_{\chi}}\,\times\\
& \hspace{-1.7cm}\times\,\frac{\rho_{\chi}^2(|\vec{r}_{\chi}-\vec{r}_{\textrm{GC}}|)}{|\vec{r}_{A^\prime}-\vec{r}_{\chi}|^2}\,\exp{\left(-\frac{|\vec{r}_{A^\prime}-\vec{r}_{\chi}|}{\bar{d}_{A^\prime}}\right)}\,\gamma_{A^\prime}\left(1-\beta_{A^\prime}\cos\theta\right)\,\frac{f(\theta)}{4\pi}\left(\frac{dN_\gamma}{dE_\gamma}\right)_{\chi}\Bigg|_{E_{A^\prime}}\Bigg]\nonumber
\end{align}
where $\bar{d}_{A^\prime} = c\tau_{A^\prime}\gamma_{A^\prime}\beta_{A^\prime}$ is the decay length of $A^\prime$ in the Galactic frame and the vectors $\vec{r}_{\chi}$, $\vec{r}_{A^\prime}$, and $\vec{r}_{\textrm{GC}}$ correspond, respectively, to the position of the $\chi$ annihilation, the $A^\prime$ decay, and the GC with respect to the detector on Earth. As can be seen, compared with the standard case, \cref{eq:flux}, in the non-local DM ID an additional integration appears over the position of the initial $\chi$ annihilation. This takes into account the fact that the long-lived mediator $A^\prime$ produced in $2\to 3$ processes at $\vec{r}_{\chi}$ can travel long distances before decaying at position $\vec{r}_{A^\prime}$. In particular, the initial position $\vec{r}_{\chi}$ can lie outside the region of interest (RoI) of a given DM ID analysis. The mediator decay probability decreases exponentially with the growing distance $|\vec{r}_{A^\prime}-\vec{r}_{\chi}|$. Hence, typically only a limited region in the Galaxy around the RoI contributes to the observed DM-induced photon flux, although this depends on the value of $\bar{d}_{A^\prime}$.
In \cref{eq:fluxnonlocal}, we also employ anisotropy factors that depend on the angle $\theta$ defined as the angle between the $A^\prime$ boost direction and the detector,
\begin{equation}
\cos\theta = \frac{\vec{r}_{A^\prime}\cdot (\vec{r}_{\chi}-\vec{r}_{A^\prime})}{|\vec{r}_{A^\prime}||\vec{r}_{\chi}-\vec{r}_{A^\prime}|}.
\label{eq:costheta}
\end{equation}
The function $f(\theta)$ then reads
\begin{equation}
f(\theta) = \frac{(1+\tan^2\theta)^{3/2}}{\tan^2\theta}\,\frac{\left[(\beta/\tilde{\beta}_{h_D})+\cos\tilde{\theta}\right]\,\sin^2\tilde{\theta}}{(\beta/\tilde{\beta}_{h_D})\cos\tilde{\theta}+1},
\label{eq:ftheta}
\end{equation}
in which $\tilde{\theta}$ is the relevant angle in the $A^\prime$ rest frame and we obtain $\cos\tilde{\theta} = \cos\tilde{\theta}_+$ for $\theta\leq \pi/2$ and $\cos\tilde{\theta} = \cos\tilde{\theta}_-$ otherwise, where
\begin{equation}
\cos\tilde{\theta}_{+,-} = \frac{-\gamma^2\tan^2\theta\,\frac{\beta}{\tilde{\beta}_{h_D}}\pm \sqrt{\gamma^2\tan^2\theta\,\left(1-\frac{\beta^2}{\tilde{\beta}_{h_D}^2}\right)+1}}{\gamma^2\tan^2\theta + 1}\ ,
\label{eq:costhposneg}
\end{equation}
and $\tilde{\beta}_{h_D} = \sqrt{1-(2m_{h_D}/m_{A^\prime})^2}$. In the simplest case (not relevant for our full model), in which the traveling mediator decays directly into a pair of photons, one reproduces the known expression for \textsl{radiative beaming}, $f(\theta) = \gamma_{A^\prime}^2\,(\beta_{A^\prime}\cos{\tilde{\theta}}+1)^2$. The anisotropy factors appear due to the boost of the decaying $A^\prime$ in the Galactic frame. They affect the final photon flux observed at Earth since, in each given region of the Galaxy, decaying mediators preferentially come from the direction of the GC. In the non-relativistic and local limit, however, we obtain $f(\theta)\to 1$.
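The kinematic functions in \cref{eq:costheta,eq:ftheta,eq:costhposneg} are straightforward to code up; a short sketch in our notation (with $\beta,\gamma$ the $A^\prime$ velocity and boost, and an illustrative boost value) is:
\begin{verbatim}
import numpy as np

def cos_theta_tilde(theta, gamma, beta, beta_hD):
    """Rest-frame angle, Eq. (eq:costhposneg); '+' branch for theta <= pi/2."""
    t2 = np.tan(theta)**2
    root = np.sqrt(gamma**2 * t2 * (1.0 - beta**2/beta_hD**2) + 1.0)
    sign = 1.0 if theta <= np.pi/2 else -1.0
    return (-gamma**2*t2*beta/beta_hD + sign*root) / (gamma**2*t2 + 1.0)

def f_aniso(theta, gamma, beta, beta_hD):
    """Anisotropy factor of Eq. (eq:ftheta)."""
    ct = cos_theta_tilde(theta, gamma, beta, beta_hD)
    t2 = np.tan(theta)**2
    return ((1.0 + t2)**1.5 / t2
            * (beta/beta_hD + ct) * (1.0 - ct**2)
            / (beta/beta_hD * ct + 1.0))

m_Ap, m_hD = 20.0, 0.25                          # GeV
beta_hD = np.sqrt(1.0 - (2.0*m_hD/m_Ap)**2)
gamma = 5.0; beta = np.sqrt(1.0 - 1.0/gamma**2)  # illustrative boost
print(f_aniso(np.radians(30.0), gamma, beta, beta_hD))
\end{verbatim}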
We illustrate the impact of non-local effects on the total integrated flux of $\gamma$-rays in a toy DM model with $m_{\textrm{DM}} = 100~\gev$ and a long-lived mediator mass $m_{\textrm{med}} = 10~\gev$ in the left panel of \cref{fig:fluxes}. Here, we assume for simplicity that the mediator decays directly into a pair of photons. We show with the red lines the ratio between the fluxes obtained in the long and short mediator lifetime regimes for several different RoIs. In particular, the red solid line corresponds to a large region around the GC characterized by $|b|,|l|<12^\circ$.\footnote{We use the Galactic coordinate system with the Galactic longitude $l$ and latitude $b$.} While this is a larger region than in typical CTA analyses, we employ it to better highlight the difference between searches focused on the close vicinity of the GC and a possible extended Galactic center survey in the non-local DM ID regime, see, e.g., Refs\cite{CTAConsortium:2017dvg,CTA:2020qlo} for further discussion of CTA analyses. The large RoI employed in our study extends to roughly $d_{\textrm{RoI}} \sim R_0\,\sin{b} \simeq 2.3~\textrm{kpc}$ away from the GC, where $R_0 = 9~\textrm{kpc}$ is the distance between the Earth and the GC.
As can be seen in the figure, for $\bar{d}_{\textrm{med}}\lesssim d_{\textrm{RoI}}$ the impact of non-local effects on the observed spectrum is very small and the photon spectrum resembles the one obtained in the short lifetime regime denoted by the horizontal black solid line, $\Phi_{\textrm{non-local}}/\Phi_{\textrm{stand.}}\simeq 1$. In contrast, for very large decay lengths of $A^\prime$ the expected DM-induced photon flux coming from the RoI drops well below the standard expectations. This is due to the efficient escape of mediators away from the GC before they decay. The flux decreases roughly as $1/\bar{d}_{\textrm{med}}$ for growing $\bar{d}_{\textrm{med}}$, as can be seen from \cref{eq:fluxnonlocal} in the limit of $\bar{d}_{\textrm{med}}\gg |\vec{r}_{\textrm{med}}-\vec{r}_{\chi}|\sim d_{\textrm{RoI}}$.
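Both limits follow from a simple estimate of the fraction of mediators decaying within a distance $d_{\textrm{RoI}}$ of their production point, $P \simeq 1-e^{-d_{\textrm{RoI}}/\bar{d}_{\textrm{med}}}$; a two-line numerical check:
\begin{verbatim}
import numpy as np
d_roi = 2.3                                     # kpc, large RoI from the text
for d_med in [0.1, 1.0, 2.3, 10.0, 100.0]:      # kpc
    print(d_med, 1.0 - np.exp(-d_roi/d_med))    # -> d_roi/d_med for large d_med
\end{verbatim}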
The relative increase of the DM-induced photon flux for intermediate values of $\bar{d}_{\textrm{med}}\sim d_{\textrm{RoI}}$ can be understood as follows\cite{Chu:2017vao}. In this case, most of the mediators produced close to the GC decay within the RoI. Only a small fraction of the dark photons produced at the GC generate photons traveling towards the Earth; in the standard case, the signal from the other mediators is lost. In the non-local regime, however, this can be partially overcome by dark vectors traveling away from the GC before decaying. At these remote positions, they can produce photons in the direction of the Earth that would not be seen had they been produced close to the GC. As a result, the DM ID signal rates from distant positions within the RoI receive contributions not only from DM annihilations happening locally but also from annihilations occurring close to the GC. This increase is even more pronounced for a modified RoI around the GC in which we exclude the innermost region of $2^\circ$ size, shown with the dashed red line in the figure. In this case, the photon flux in the non-local regime gains even more from the dark vectors traveling into the RoI from the innermost region close to the GC.
In addition, we also show in the figure the expected photon fluxes for much smaller RoIs. Here, with the red dotted line we present the results for a small RoI characteristic of the CTA Galactic center survey, in which we have additionally excluded part of the region very close to the GC, \ie, we assume $0.3^\circ < |b| < 1^\circ$ and $|l|<1^\circ$. Instead, with the red dash-dotted line we present the flux for an even smaller region around the GC with $|b|,|l| < 0.5^\circ$. The size of this region encompasses a typical DM halo size for dwarf galaxies in the Fermi-LAT analyses\cite{Fermi-LAT:2015att}. As can be seen, for both small RoIs the relative growth of the flux for smaller $\bar{d}_{\textrm{med}}$ is hard to reconstruct, while for decay lengths of order several kpc the flux is already suppressed. We note that if dSphs are modeled as point-like sources in the analysis, with the $0.1^\circ\times 0.1^\circ$ bin size\cite{Fermi-LAT:2016uux}, the impact of non-local effects is even stronger. Last but not least, we stress that in the non-local regime the DM-induced photons escaping from small RoIs could also affect analyses based on local background expectations around each of the dSphs. This could further ameliorate the relevant bounds.
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{./figures/GC_dwarf.pdf}
\hfill
\includegraphics[scale=0.37]{./figures/sigmav_ID_secluded_WIMP_LLP.pdf}
\caption{Left: The ratio of the integrated photon fluxes obtained for an increasing decay length $d$ of the mediator and in the standard regime of prompt decays, which is shown with the horizontal black solid line. The figure has been prepared for the toy model with the DM and mediator masses equal to $m_{\textrm{DM}} = 100~\gev$ and $m_{\textrm{med}} = 10~\gev$, respectively, and assuming that the mediator decays directly into a pair of photons. We integrate the fluxes over the energy range between $0.1\gev$ and $100\gev$. The solid (dashed, dotted, dash-dotted) red line corresponds to the DM ID observation region around the GC defined by different longitude and latitude limits, as indicated in the figure. Right: The CTA sensitivity for the secluded WIMP DM scenario in the ($m_{\textrm{DM}}$, $\langle\sigma v\rangle$) plane is shown with the black solid line following Ref.\cite{Siqueira:2021lqj}. For comparison, we also present with the red dotted line the corresponding expected reach for the toy model with a long-lived mediator with $\tau_{\textrm{med}} = 10^9\second$ and $m_{\textrm{med}} = 10~\gev$, which decays into light quarks. The red dashed line corresponds to a re-scaled sensitivity for a larger RoI, as indicated in the figure (see text for details).
\label{fig:fluxes}
}
\end{figure}
As can be seen, for $\bar{d}_{\textrm{med}}\sim \textrm{a few kpc}$, the difference between the impact of non-local effects on DM ID focusing on extensive regions around the GC and on searches targeting small RoIs can reach up to a factor of a few in the predicted photon flux. One can then expect a weakening of the relevant DM constraints derived from stacked dwarf analyses, while DM-induced signals could be stronger for searches using larger RoIs around the GC. This might open new possibilities in explaining persisting anomalies in DM searches, see, e.g., Ref.\cite{Agashe:2020luo} for the relevant discussion of the Galactic Center Excess (GCE) and non-local DM ID effects. However, as mentioned above, such effects would also modify the morphology of the DM-induced signals, which could then no longer follow the original DM profile but would appear less cuspy. In particular, as shown in Ref.\cite{Agashe:2020luo}, when compared to the standard short-lived regime, DM solutions to the GCE employing non-local effects struggle to improve the global $\chi^2$ fit for this anomaly. In the following, we will therefore focus on heavy DM $\chi$ with a mass above the $\tev$ scale, which corresponds to the next important target of the upcoming DM ID searches.
In the right panel of \cref{fig:fluxes} we illustrate the impact of non-local effects on DM ID searches for the aforementioned toy model. To this end, we compare the expected sensitivity reach of the CTA in the secluded WIMP DM scenario presented in Ref.\cite{Siqueira:2021lqj} with the relevant reach obtained for a very long-lived mediator with $\tau_{\textrm{med}} = 10^9\second$ and fixed $m_{\textrm{med}} = 10~\gev$. Here, we assume that the mediator decays into light quarks. In the figure, larger values of the mass of the annihilating DM $m_{\textrm{DM}}$ imply larger values of the boost factor of the mediator and of the corresponding decay length, $\bar{d}_{\textrm{med}}\simeq (m_{\textrm{DM}}/1~\tev)\,(10~\gev/m_{\textrm{med}})\times 1~\textrm{kpc}$. This results in an effective suppression of the DM-induced signal from a small RoI around the GC for $m_{\textrm{DM}}\gtrsim \textrm{a few hundred}~\gev$, as indicated with the dotted red line in the figure. We also show there, with the red dashed line, the expected sensitivity for a larger RoI, $|b|,|l|<12^\circ$, which, for illustrative purposes, has been re-scaled to match the sensitivity of the small RoI in the limit of low $m_{\textrm{DM}}$. As can be seen, for the extended RoI the weakening of the future bounds corresponds to larger DM masses than for the small RoI. In this case, we also observe a relative improvement of the bound in the intermediate region of $m_{\textrm{DM}}\sim \textrm{a few}~\tev$. This is due to the excess DM-induced photon flux for $\bar{d}_{\textrm{med}}\sim d_{\textrm{RoI}}$, as discussed above.
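The decay-length estimate above follows from $\bar{d}_{\textrm{med}} = c\tau_{\textrm{med}}\gamma\beta$ with $\gamma \simeq m_{\textrm{DM}}/m_{\textrm{med}}$ for a mediator carrying an energy of order the DM mass; numerically:
\begin{verbatim}
C_KPC_PER_S = 9.716e-12                   # speed of light in kpc/s

def d_med_kpc(tau_s, m_dm, m_med):
    gamma = m_dm / m_med                  # E_med ~ m_DM assumed
    beta = (1.0 - 1.0/gamma**2)**0.5
    return C_KPC_PER_S * tau_s * gamma * beta

print(d_med_kpc(1e9, 1000.0, 10.0))       # ~1 kpc, cf. the estimate above
\end{verbatim}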
\paragraph{DM ID of the lighter scalar DM component $\eta$} The subdominant DM component $\eta$ can also contribute to DM ID signals. The relevant annihilation rate is, however, suppressed by a very small abundance, typically $\Omega_{\eta}\lesssim 10^{-6}\,\Omega_{\chi}$. Since the DM ID signal rate depends quadratically on the number density of the annihilating species, this is not compensated for by the increasing annihilation cross section $\sigma_{\eta\bar{\eta}}$ for the process $\eta\bar{\eta}\to A^\prime A^\prime$, and the resulting signal is suppressed with respect to the dominant $\chi$ annihilations by several orders of magnitude. This suppression becomes smaller for a growing mass gap between $\eta$ and the dark vector, $m_{\eta}\gg m_{A^\prime}$, due to a possible Sommerfeld enhancement (SE) of $\sigma_{\eta\bar{\eta}}$. However, we have verified numerically that this enhancement is not larger than a factor of order $10^4$ in the regions of the parameter space of the model that we explore in \cref{sec:results}, which still renders the $\eta$ contribution to the total DM ID rate subdominant. To implement the Sommerfeld effect, we followed Refs\cite{Cassel:2009wt,Iengo:2009ni,Slatyer:2009vg} and approximated the Yukawa potential generated by the dark photon exchange by the Hulth\'en potential. Similarly, we do not obtain a large SE of $\eta$ annihilations induced by the lighter dark vectors $A^{\prime\prime}$, due to the strongly suppressed couplings between $A^{\prime\prime}$ and $\eta$.
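For reference, a compact implementation of the Hulth\'en approximation to the Sommerfeld factor (following the references above, with $\alpha_D = g_D^2/4\pi$, $\epsilon_v = v/\alpha_D$, and $\epsilon_\phi = m_{A^\prime}/(\alpha_D m_\eta)$; a sketch, not our full numerical treatment) reads:
\begin{verbatim}
import numpy as np

def sommerfeld_hulthen(v, alpha_D, m_dm, m_med):
    """s-wave Sommerfeld factor for an attractive Yukawa potential in the
    Hulthen approximation; cos -> cosh when its argument turns imaginary."""
    eps_v = v / alpha_D
    a = np.pi**2 * (m_med / (alpha_D * m_dm)) / 6.0
    arg = 1.0/a - (eps_v/a)**2
    osc = np.cos(2*np.pi*np.sqrt(arg)) if arg >= 0 else \
          np.cosh(2*np.pi*np.sqrt(-arg))
    return (np.pi/eps_v) * np.sinh(2*np.pi*eps_v/a) \
           / (np.cosh(2*np.pi*eps_v/a) - osc)

# illustrative, off-resonance point: eta pair with A' exchange, v ~ 1e-3
g_D = 2.0
print(sommerfeld_hulthen(1e-3, g_D**2/(4*np.pi), 150.0, 20.0))  # S ~ 40
\end{verbatim}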
The annihilations of the dominant DM species, $\chi\bar{\chi}\to\eta\bar{\eta}(A^\prime)$, also produce a flux of boosted lighter scalars $\eta$. These can propagate through the Galaxy, or arrive from distant galaxies, and can scatter off a background $\chi$ or annihilate with a background $\bar{\eta}$. In both cases, further unstable dark vectors can be produced, either via the next-to-leading order scattering process $\eta\chi\to \eta\chi A^\prime$ or in the annihilation $\eta\bar{\eta}\to A^\prime A^\prime$. The relevant contributions are, however, extremely small. In particular, the probability that an $\eta$ produced in the GC annihilates before leaving the Galaxy can be estimated as
\begin{equation}
\mathcal{P}_{\eta,\textrm{Galaxy}} \sim 1-\exp{\left(-\frac{\sigma_{\textrm{th}}}{m_{\eta}}\,\left[\frac{\sigma_{\eta\bar{\eta}}\,\Omega_{\eta}}{\sigma_{\textrm{th}}\,\Omega_{\chi}}\right]\int_{0}^{R_{\textrm{max}}}{\rho_{\textrm{Einasto}}(\ell)\,d\ell}\right)}\sim 10^{-16}\
\end{equation}
where $\sigma_{\textrm{th}}\sim \textrm{a few}\times 10^{-9}~\gev^{-2}$ is the typical thermal annihilation cross section. We note that the quantity in the square brackets is roughly equal to $1$. Here, we assume the Einasto DM profile \cref{eq:Einasto} for background $\bar{\eta}$ and set $R_{\textrm{max}} = 100~\textrm{kpc}$ for concreteness, although a precise value of the Galactic DM halo size has a negligible impact on the final result. In the last step, we have estimated the interaction probability for the benchmark scenario presented in the left panel of \cref{fig:relic_density_three_component_benchmarks}. As can be seen, lighter scalars $\eta$ produced in $\chi$ annihilations close to the GC can easily escape the Galaxy. A typical distance they can travel is set by their mean free path in the background DM in the Universe, $d_{\eta}\sim (m_{\eta}/\sigma_{\textrm{th}})\,(1/\rho_{\textrm{DM}}^{\textrm{av}})\sim 10^{17}~\textrm{Gpc}$, where we used $\rho_{\textrm{DM}}^{\textrm{av}} = 1.1\times 10^{-6}~\gev/\textrm{cm}^3$\cite{Planck:2018vyg}. This is, again, an extremely large distance, which renders any extragalactic contributions to DM ID signal rates negligible in our analysis.
\subsection{Future Cosmic Microwave Background surveys\label{sec:CMB}}
As we have already discussed in \cref{sec:constraints}, CMB observations provide a further complementary way of probing the BSM scenario of our interest; we implement them below following Ref.\cite{Lucca:2019rxf}. In particular, future surveys are expected to significantly improve the bounds from CMB spectral distortions and to essentially exclude BSM scenarios predicting a mediator lifetime between $\tau_{A^\prime}\sim 10^{5}\second$ and $10^{12}\second$. The relevant bounds will constrain the relic abundance of unstable, very long-lived species to be no larger than a tiny fraction of the total DM relic density, $\Omega_{A^\prime}\lesssim 10^{-6}\,\Omega_{\textrm{DM}}$; depending on $\tau_{A^\prime}$, even much more stringent limits, of order $10^{-12}\,\Omega_{\textrm{DM}}$, can be derived. For the CMB anisotropy data, the expected improvement over the current bounds is less pronounced, but will also result in constraints more stringent by about a factor of a few in $\Omega_{A^\prime}$. This is relevant for the large lifetime regime, $\tau_{A^\prime} > 10^{12}\second$.
In the following, we will present the expected combined bounds from the Planck\cite{Planck:2018vyg} and the proposed Primordial Inflation Explorer (PIXIE)\cite{Kogut:2011xw} satellite missions. In addition, we will also employ the more stringent constraints expected from the combination of future data from the Polarized Radiation Imaging and Spectroscopy Mission (PRISM)\cite{PRISM:2013fvg}, the ground-based CMB Stage-4 (CMB-S4) searches\cite{CMB-S4:2016ple}, and the LiteBIRD space mission\cite{Matsumura:2013aja}. Similarly to the discussion in \cref{sec:constraints}, for the future CMB surveys we also include finite $f_{\textrm{eff}}<1$ factors in our analysis. We stress that, while the CMB data remain complementary to DM ID searches for intermediate mediator lifetimes, $\tau_{A^\prime}\gtrsim 10^{5}\second$, they will provide the best way of probing scenarios with extremely long-lived $A^\prime$s that predict much-suppressed ID signal rates.
\section{Results\label{sec:results}}
The non-minimal content of our model results in $12$ free parameters, comprising the masses of the dark species and various dark sector couplings, cf. \cref{sec:model}. A thorough investigation of the rich phenomenology of the model would require extensive and sophisticated numerical methods.
We therefore limit our discussion to slices of this multidimensional parameter space. Below, we first justify our choice of the fixed parameters, and later we present the results of our analysis in the most convenient two-dimensional ($m_{A^\prime}$, $g_D$) plane.
We present our results for three different values of the dark Higgs boson mass: $m_{h_D} = 20$, $250$, and $500\mev$. They correspond to distinct dominant $h_D$ decays into $e^+e^-$, $\mu^+\mu^-$, and pion ($\pi\pi$) pairs, respectively, with important consequences for experimental searches. In our case, $h_D$ is the lightest dark sector particle charged under the $U(1)^\prime$ group. We further assume a large hierarchy between the dominant heavy scalar DM $\chi$ and the lighter dark species $\eta$ and $A^\prime$, such that the long-lived dark photons produced in $2\to 3$ annihilations $\chi\chi\to \eta \eta A^\prime$ can become additionally boosted and travel galactic-scale distances with $\bar{d}_{A^\prime}\gtrsim 1~\textrm{kpc}$. We note, though, that if the scalar $\eta$ is too light, the loop-suppressed $A^\prime$ lifetime is driven back to smaller values, cf. \cref{eq:ctauAprime}, in which case no non-local effects in DM ID would be expected. We therefore set $m_{\chi} = 1.5\tev$ and $m_{\eta} = 150\gev$ for concreteness. We note that a dominant $\chi$ DM mass around and above the $\tev$ scale corresponds to the best reach of DM ID searches in CTA, while it lies beyond the reach of the previous constraints applicable to lighter DM.
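The kinematics behind this choice can be illustrated with a minimal sketch of the lab-frame decay length, $\bar{d}_{A^\prime} = \gamma\beta\, c\tau_{A^\prime}$; the $A^\prime$ energy and mass used below are illustrative values of the order of the energy released in $\chi\chi$ annihilations, not outputs of our numerical analysis:
\begin{verbatim}
import math

# Lab-frame decay length of a boosted dark photon, d = gamma*beta*c*tau
# (a kinematic sketch only, not our full ID pipeline).
C_CM_S = 2.998e10           # speed of light in cm/s
KPC_CM = 3.086e21           # 1 kpc in cm

def decay_length_kpc(energy_gev, mass_gev, tau_s):
    """Decay length in kpc for given energy, mass, and lifetime."""
    gamma_beta = math.sqrt(energy_gev**2 - mass_gev**2) / mass_gev
    return gamma_beta * C_CM_S * tau_s / KPC_CM

# Illustrative: A' carrying a few hundred GeV from chi chi annihilation.
for tau in (1e5, 1e9, 1e11):   # lifetimes in s, cf. the contours below
    print(f"tau = {tau:.0e} s -> "
          f"d ~ {decay_length_kpc(500.0, 5.0, tau):.2e} kpc")
\end{verbatim}
For a lifetime of $\tau_{A^\prime} = 10^9\second$ this gives $\bar{d}_{A^\prime}\sim 1~\textrm{kpc}$, consistent with the regime of interest.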
In presenting our results we find it most convenient to vary the dark photon mass $m_{A^\prime}$ and the coupling constant $g_D$. The former is varied within the limited mass range $m_{h_D}<m_{A^\prime} < m_{\eta}$, cf. \cref{eq:mass_scheme}, which allows both the $A^\prime\to h_DA^{\prime\prime}$ decays and the $\eta\eta\to A^\prime A^\prime$ annihilation channel to be open. In particular, these drive the $\eta$ abundance to negligible levels. Instead, for $m_{\eta}<m_{A^\prime}$, direct annihilations of $\eta$ into the SM species via the light scalar portal $h_D$ and suppressed annihilations into the lighter vectors $\eta\bar{\eta}\to A^{\prime\prime}A^{\prime\prime}$ lead to an overabundance of thermally-produced DM, unless the induced mixing angle between the dark Higgs boson and the SM Higgs boson acquires large values close to the current bounds. Even in this case, however, the large hierarchy $m_{h_D}\ll m_{\eta}$ would generate too large, and already excluded, values of the Sommerfeld-enhanced annihilation cross section of $\eta$ around the time of recombination. In the following, we therefore focus on the case with $m_{A^\prime}<m_{\eta}$ and fit the auxiliary parameters in the dark sector, $m_\phi$, $\mu_A$, and $\mu_B$, such that at each point in the parameter space shown in the figures the heavy scalar DM obtains the correct value of the thermal relic density, $\Omega_{\chi}h^2\simeq 0.12$, while $\Omega_{\eta}h^2$ remains negligible.
The values of the dark coupling constant $g_D$ are \textsl{a priori} only limited by the perturbativity bound, which for simplicity we take to be $g_D< 4\pi$. On the other hand, as we have already discussed in \cref{sec:relic} and will also show below, the astrophysical, cosmological, and collider bounds discussed in \cref{sec:constraints} constrain too low values of this coupling constant, and we effectively obtain $g_D\gtrsim 0.1$. In particular, lower values of $g_D$ would lead to a too large abundance of $A^\prime$, whose late-time decays would violate BBN and CMB limits. As we will see in the figures, collider bounds also affect the region of the parameter space with too low values of $g_D$. This is because for fixed $m_{A^\prime}$ and $\lambda_{Hh_D}$ the mixing angle $\theta$ increases with decreasing $g_D$, cf. the discussion in \cref{sec:model}. It eventually reaches the level at which it has already been probed in previous searches for the dark Higgs boson.
Finally, we set the dark coupling constant between the dark Higgs boson and the lighter scalar DM component $\eta$ such that the lifetime of $A^\prime$ can be large and of astrophysical relevance, cf. \cref{eq:ctauAprime}. For concreteness, below we present our results for $\lambda_{h_D\eta} = 4\times 10^{-7}$, $4\times 10^{-6}$, and $4\times 10^{-5}$. We also assume a fixed small value of the kinetic mixing parameter between the two dark vectors, $\tilde{\epsilon} = 10^{-6}$.
In the left panel of \cref{fig:results_mDM_gD_1} we show the results for $m_{h_D} = 500\mev$ and $\lambda_{h_D\eta} = 4\times 10^{-6}$. Already excluded regions of the parameter space are shown in gray and labeled with the corresponding bounds. As can be seen, in this case the allowed region of the model's parameter space features values of the dark coupling constant $g_D$ right in the ballpark of future searches in the proposed CODEX-b, FASER 2, MATHUSLA, and SHiP detectors, as well as within the reach of Belle II. The sensitivity reach of each experiment corresponds to the region below the accordingly marked line in the figure. In particular, MATHUSLA could cover almost the entire available region in the parameter space, except for the upper left corner.
\begin{figure}[t]
\centering
\hspace*{-2.02cm}
\includegraphics[scale=0.625]{./figures/gD_mDP_lambdahH_1e-04_lambdaSBH_4e-06_mDH_0.50.pdf}\hspace*{0.2cm}
\includegraphics[scale=0.594]{./figures/gD_mDP_lambdahH_1e-04_lambdaSBH_4e-06_mDH_0.50_zoomed_in.pdf}
\caption{Left: The impact of various past bounds (gray-shaded region, see the text for details) and future searches on the parameter space of the model under study, shown in the $(m_{A^\prime},g_D)$ plane. We fix the other parameters of the model to $m_{h_D} = 500~\mev$, $m_{\eta} = 150~\gev$, and $m_{\chi} = 1.5~\tev$, as well as $\lambda_{h_D\eta} = 4\times 10^{-6}$ and $\tilde{\epsilon} = 10^{-6}$. At each point in the figure, the parameters $m_\phi$, $\mu_{\chi}$, and $\mu_{\eta}$ are chosen such that the correct value of the heavy DM relic density is obtained, $\Omega_{\chi}h^2\simeq 0.12$. The colorful lines indicate the expected sensitivity reach of future searches, as discussed in the text. We also show with black dash-dotted lines the contours of fixed $A^\prime$ lifetime, $\tau_{A^\prime} = 10^{11}$ and $10^9\second$ from left to right. Right: For the same benchmark scenario as in the left panel, the expected CTA sensitivity is shown for the standard case with vanishing decay length of $A^\prime$ (solid line) and for the true scenario that takes into account the non-local effects. For the latter, we present the results for the RoIs characterized by $0.3^\circ < |b|<1^\circ$, $|l|<1^\circ$ (dotted line) and $|b|, |l| < 12^\circ$ (dashed line). The smaller portion of the parameter space of the model presented in the right panel is indicated with a dotted rectangle in the left panel.
\label{fig:results_mDM_gD_1}
}
\end{figure}
For $m_{h_D} = 500~\mev$, the dark Higgs boson decays hadronically and further bounds on the model may arise from past and future ID searches for DM-induced $\gamma$-rays. We mark in gray the upper bounds obtained from null searches for DM signals in dwarf galaxies performed by Fermi-LAT, following Refs.~\cite{Fermi-LAT:2015att,Profumo:2017obk}. Importantly, this bound constrains too large values of the dark coupling constant $g_D$: for larger $g_D$ the $2\to 3$ annihilation cross section of $\chi$ grows, which in turn enhances the expected $A^\prime$-induced signal rates. We note that these bounds become much weaker with increasing dark vector lifetime $\tau_{A^\prime}$, as can be seen for lower values of $m_{A^\prime}\lesssim 10~\gev$. This is due to the diffusion of the DM-induced signal outside dwarf galaxies, as discussed in \cref{sec:DMID}: boosted and very long-lived dark vector particles often decay away from their parent dSphs. The relevant region of the parameter space is, however, at least partially constrained by past CMB surveys, because very long-lived dark vectors with $\tau_{A^\prime}\gtrsim 10^{12}\second$ could decay during the recombination epoch and the subsequent dark ages.
For smaller values of the dark vector lifetime, additional bounds can be derived from future DM ID searches at CTA. We show the relevant sensitivity curve with the purple dotted contour, which constrains the region of the parameter space above the line. It corresponds to DM-induced photons coming from the region around the GC with $0.3^\circ < |b| < 1^\circ$ and $|l|<1^\circ$. In this case, we also observe the diminishing reach in the limit of increasing $\tau_{A^\prime}$. Notably, in the large lifetime regime complementary probes of the model will be available thanks to future CMB surveys. These can cover parts of the allowed region below the dotted red and light blue lines corresponding, respectively, to the future Planck and PIXIE data, and to the combined data from CMB-S4, LiteBIRD, and PRISM (indicated as CMB-S4 in the figure), cf. \cref{sec:CMB}.
In the right panel of \cref{fig:results_mDM_gD_1} we compare the CTA bounds obtained by taking into account the non-local DM ID regime for large $\bar{d}_{A^\prime}$ with the respective ones obtained by neglecting these effects. The latter sensitivity reach line is denoted as ``Standard'' in the figure. As can be seen, the presence of non-local effects significantly weakens the impact of CTA on the parameter space of the model. This could be ameliorated in studies focusing on a larger RoI, as schematically illustrated in the figure for the RoI characterized by $|b|, |l|<12^\circ$. While we expect the bounds derived for the larger RoI to be shifted with respect to the ones obtained for the smaller region around the GC, in the plot, for illustration, we have artificially rescaled the reach for $|b|, |l|<12^\circ$ to match the standard one in the regime of small $A^\prime$ lifetime, in order to better highlight the relative difference in the non-local effect in the two cases. For the larger RoI, the weakening of the bounds is seen only for much smaller values of $m_{A^\prime}$. We also note a small improvement of the expected CTA sensitivity in the small region of the figure with $m_{A^\prime}\simeq (3-4)~\gev$, which is caused by the anisotropy effects discussed in \cref{sec:DMID}.
\begin{figure}[t]
\centering
\hspace*{-2.2cm}
\includegraphics[scale=0.625]{./figures/gD_mDP_lambdahH_1e-04_lambdaSBH_4e-07.pdf}\hspace*{0.2cm}
\includegraphics[scale=0.625]{./figures/gD_mDP_lambdahH_1e-04_lambdaSBH_4e-05_mDH_0.25.pdf}
\caption{Same as the left panel of \cref{fig:results_mDM_gD_1} but for $m_{h_D} = 20~\mev$, $\lambda_{h_D\eta} = 4\times 10^{-7}$ (left) and $m_{h_D} = 250~\mev$, $\lambda_{h_D\eta} = 4\times 10^{-5}$ (right).
\label{fig:results_mDM_gD_2}
}
\end{figure}
In \cref{fig:results_mDM_gD_2} we present similar bounds for $m_{h_D} = 20$ and $250~\mev$ in the left and right panel, respectively. We also fix the value of the coupling constant $\lambda_{h_D\eta} = 4\times 10^{-7}$ and $4\times 10^{-5}$ for the lighter and the heavier $h_D$, respectively. As expected from \cref{eq:ctauAprime}, a larger $\lambda_{h_D\eta}$ implies a shorter $A^\prime$ lifetime. As a result, for larger $\lambda_{h_D\eta}$ cosmological bounds constrain smaller fractions of the available parameter space and the respective bounds are shifted towards smaller values of the dark vector mass. This can be seen by comparing the size of the currently excluded gray-shaded regions in both panels. In both cases, the expected photon flux from DM ID is too low for the CTA to probe the allowed region in the parameter space of the model. On the other hand, the future CMB and collider searches will both be able to constrain these scenarios.
In the left panel of \cref{fig:results_mDM_gD_2}, where $m_{h_D} = 20~\mev$, the currently allowed region of the parameter space is constrained by the supernova SN1987a and NA62 bounds on the dark Higgs boson, and by the BBN and CMB constraints on long-lived $A^\prime$. In this case, $h_D$ decays predominantly into electrons and has a large lifetime, beyond the reach of most intensity frontier experiments targeting displaced decays of LLPs. It can, however, be searched for in rare kaon decays in the future KLEVER and NA62 detectors, which could cover almost the entire allowed region of the parameter space below the dark blue dotted line. On the other hand, the expected future CMB bounds are in this case somewhat weaker and can only cover the regions below the dotted red and light blue lines corresponding to $m_{A^\prime}\lesssim \textrm{a few}~\gev$.
The heavier dark Higgs boson with $m_{h_D} = 250~\mev$ (the right panel of \cref{fig:results_mDM_gD_2}) decays dominantly into a muon pair and is therefore characterized by a much smaller lifetime, as determined by its Yukawa-like couplings to the SM fermions. Hence, a good fraction of the parameter space can be probed by the future CODEX-b, FASER 2, KLEVER, MATHUSLA, and SHiP detectors, as well as by the neutrino experiments NuMI-ICARUS and BNB-SBND. The complementarity between these searches and future CMB surveys will therefore lead to probing a large region in the $(m_{A^\prime},g_D)$ plane across a wide range of the $A^\prime$ lifetime.
\section{Conclusions\label{sec:conclusions}}
Light sub-$\gev$ portal models of dark matter have gained much interest in recent years and have sparked both experimental and theoretical activity in the field. Most studies so far have focused on simplified frameworks with only a limited number of new species added to extend the SM. These typically correspond to the most popular interaction operators and are supposed to encompass the essential phenomenological aspects of such simplified scenarios. While this approach allows for a manageable comparison among many experimental proposals, it is also understood that new effects might be observed in more elaborate, and more realistic, models incorporating a larger number of BSM species with a mass hierarchy that can be quite complex and, in particular, span several orders of magnitude.
The presence of a light portal particle that thermally connects the SM sector and a heavy DM in the early Universe leaves important observational imprints that can be distinct both from those in the more popular heavy WIMP models and from those in scenarios predicting the existence of only light DM. The biggest effects can be expected in indirect DM searches, where a secluded heavy DM can lead to non-negligible interaction rates via displaced decays of the light portal particles, provided that the dark coupling constants remain large. Importantly, the light BSM species can be even very long-lived and additionally constrained by cosmology. This opens up new detection prospects for this kind of LLP in a lifetime regime that remains highly complementary to intensity frontier searches.
In this study, we have found such effects and exposed them in a model with a rich dark sector and the popular scenario in which the coupling between the SM and the dark sectors arises only due to the mixing between the sub-$\gev$ dark Higgs boson and the SM Higgs boson. The dark sector of the model further contains particles with masses up to $10\tev$ or so. In particular, it invokes a heavy scalar DM with the mass above the $\tev$ scale and a potentially very long-lived dark vector mediator $A^\prime$ that is secluded from the SM. This scenario remains beyond the reach of current and near-future DD searches. However, we have shown here that the best way of probing this type of model, besides intensity frontier searches for LLPs, is to employ DM-induced signatures in both future ID and CMB experiments.
The presence of unstable subdominant dark species produced in annihilations of much heavier dominant DM particles can lead to further striking signatures. We have illustrated this in the case of the dark vector $A^\prime$, which can feature a very large loop-suppressed lifetime and an astrophysically interesting decay length $\gamma\beta c\tau_{A^\prime}\sim \textrm{kpc}$. This can lead to interesting non-local effects in DM ID searches, for instance, enhanced DM signal rates from the GC and simultaneously suppressed corresponding rates expected from dwarf galaxies. Similarly, the DM-induced $\gamma$-ray flux might in this case be characterized by a distinct morphology which does not have to follow the true DM density distribution. A thorough experimental test of such scenarios will require going beyond the traditional approach to DM ID. In particular, studying possible DM ID signatures in both small and larger regions of interest around the GC might lead to important differences in the expected photon fluxes and in the bounds on the annihilation cross section compared to the simplified scenarios studied so far.
In searching for new physics it remains essential to encompass a broad range of possible BSM scenarios and to study all possible complementarities among different types of experiments. An attractive framework based on the popular thermal DM paradigm has led to numerous studies in the past years. Models of this class that predict the existence of a light sub-$\gev$ portal to heavy secluded DM deserve special attention in such efforts. As we have shown here, their characteristic signatures are absent in the more popular simplified scenarios but may be successfully probed in extensive experimental programs in the coming years.
\paragraph{Acknowledgements} We would like to thank Brian Batell for useful remarks and comments on the manuscript. We would also like to thank Luc Darm\'e, Iftah Galon, Andrzej Hryczuk, and Arvind Rajaraman for useful discussions. In our analysis, we employed the python module \texttt{vegas}\cite{peter_lepage_2021_4746454}, which uses an algorithm developed in Refs.~\cite{Lepage:1977sw,Lepage:2020tgj}. KJ and LR are supported by the National Science Centre, Poland, research grant No. 2015/18/A/ST2/00748. ST and LR are supported by the grant ``AstroCeNT: Particle Astrophysics Science and Technology Centre'' carried out within the International Research Agendas programme of the Foundation for Polish Science financed by the European Union under the European Regional Development Fund. ST is supported in part by the Polish Ministry of Science and Higher Education through its scholarship for young and outstanding scientists (decision no 1190/E-78/STYP/14/2019).
Throughout this paper, let $T>0$ be a given time horizon. As an extension of stochastic differential equations, mean-field stochastic differential equations (hereafter mean-field SDEs), also referred to as McKean-Vlasov equations, given by
\begin{align}\label{eq:RegMainMcKeanVlasov}
dX_t^x = \overline{b}\left(t,X_t^x,\mathbb{P}_{X_t^x}\right) dt + \overline{\sigma}\left(t,X_t^x, \mathbb{P}_{X_t^x} \right) dB_t,~ t\in [0,T], ~ X_0^x = x \in \mathbb{R}^d,
\end{align}
allow the coefficients to depend on the law of the solution in addition to the solution process. Here, $\overline{b}: [0,T] \times \mathbb{R}^d \times \mathcal{P}_1(\mathbb{R}^d) \to \mathbb{R}^d$ and $\overline{\sigma}: [0,T] \times \mathbb{R}^d \times \mathcal{P}_1(\mathbb{R}^d) \to \mathbb{R}^{d\times n}$ are some given drift and volatility coefficients, $(B_t)_{t\in[0,T]}$ is $n$-dimensional Brownian motion,
\begin{align*}
\mathcal{P}_1(\mathbb{R}^d) := \left\lbrace \mu \left| \mu \text{ probability measure on } (\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d)) \text{ with } \int_{\mathbb{R}^d} \Vert x \Vert d\mu(x) < \infty \right.\right\rbrace
\end{align*}
is the space of probability measures over $(\mathbb{R}^d,\mathcal{B}(\mathbb{R}^d))$ with existing first moment, and $\mathbb{P}_{X_t^x}$ is the law of $X_t^x$ with respect to the underlying probability measure $\mathbb{P}$. \par
Mean-field SDEs arose from Boltzmann's equation in physics, which is used to model weak interaction between particles in a multi-particle system, and were first studied by Vlasov \cite{Vlasov_VibrationalPropertiesofElectronGas}, Kac \cite{Kac_FoundationsOfKineticTheory} and McKean \cite{McKean_AClassOfMarkovProcess}. Nowadays the study of mean-field SDEs is an active research field with numerous applications. Various extensions such as replacing the driving noise by a Lévy process or considering backward equations have been examined e.g.~in \cite{BuckdahnDjehicheLiPeng_MFBSDELimitApproach}, \cite{BuckdahnLiPeng_MFBSDEandRelatedPDEs}, and \cite{JourdainMeleardWojbor_NonlinearSDEs}. A cornerstone in the application of mean-field SDEs in Economics and Finance was set by Lasry and Lions with their work on mean-field games in \cite{LasryLions_MeanFieldGames}, see also \cite{Cardaliaguet_NotesOnMeanFieldGames} for a readily accessible summary of Lions' lectures at Collège de France. Carmona and Delarue developed a probabilistic approach to mean-field games, as opposed to the analytic one taken in \cite{LasryLions_MeanFieldGames}, see e.g. \cite{CarmonaDelarue_ProbabilisticAnalysisofMFG}, \cite{CarmonaDelarue_MasterEquation}, \cite{CarmonaDelarue_FBSDEsandControlledMKV}, \cite{CarmonaDelarueLachapelle_ControlofMKVvsMFG}, and \cite{CarmonaLacker_ProbabilisticWeakFormulationofMFGandApplications} as well as the monographs \cite{CarmonaDelarue_Book}. A more recent application of the concept of mean fields is in the modeling of systemic risk, in particular in models for inter-bank lending and borrowing, see e.g. \cite{CarmonaFouqueMousaviSun_SystemicRiskandStochasticGameswithDelay}, \cite{CarmonaFouqueSun_MFGandSystemicRisk}, \cite{FouqueIchiba_StabilityinaModelofInterbankLending}, \cite{FouqueSun_SystemicRiskIllustrated}, \cite{GarnierPapanicolaouYang_LargeDeviationsforMFModelofSystemicRisk}, \cite{KleyKlueppelbergReichel_SystemicRiskTroughContagioninaCorePeripheryStructuredBankingNetwork}, and the cited sources therein. \par
In this paper we analyze (strong) solutions of multi-dimensional mean-field SDEs of the form
\begin{align}\label{lebesgueMFSDE}
dX_t^x = b\left(t,X_t^x,\int_{\mathbb{R}^d} \varphi\left(t,X_t^x,z\right) \mathbb{P}_{X_t^x}(dz) \right) dt + dB_t,~ t\in[0,T],~ X_0^x=x \in \mathbb{R}^d,
\end{align}
for $b, \varphi: [0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$. This mean-field SDE generalizes two commonly used models in the literature: the first one considers $b(t,y,z) = z$, see e.g. \cite{MishuraVeretennikov_SolutionsOfMKV}, or \cite{BuckdahnDjehicheLiPeng_MFBSDELimitApproach} where the authors consider backward mean-field SDEs, and the second model takes $\varphi(t,y,z) = \overline{\varphi}(z)$ for some $\overline{\varphi}: \mathbb{R}^d \to \mathbb{R}^d$, see e.g. \cite{Banos_Bismut}. Note that putting $\overline{\sigma} \equiv 1$ and
\begin{align}\label{eq:MFDrift}
\overline{b}(t,y,\mu) = (b \diamond \varphi)(t,y,\mu) := b\left(t,y,\int_{\mathbb{R}^d} \varphi(t,y,z) \mu(dz)\right),
\end{align}
yields that mean-field SDE \eqref{lebesgueMFSDE} is a special case of the general mean-field SDE \eqref{eq:RegMainMcKeanVlasov}.
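Although the analysis below is purely analytical, the structure of \eqref{lebesgueMFSDE} may be illustrated by the standard interacting-particle approximation, in which the law $\mathbb{P}_{X_t^x}$ is replaced by the empirical measure of $N$ particles. A minimal Euler-Maruyama sketch in Python for $d=1$ (the concrete choices of $b$ and $\varphi$ are purely illustrative and not taken from the references; note that $\varphi$ may be an indicator, in which case the empirical mean below is an empirical distribution function) reads:
\begin{verbatim}
import numpy as np

# Particle approximation of the mean-field SDE
#   dX_t = b(t, X_t, E[phi(t, X_t, Z)]) dt + dB_t,   Z ~ law(X_t),
# with the law replaced by the empirical measure of N particles.
rng = np.random.default_rng(0)

def b(t, y, z):               # illustrative drift: measurable, linear growth
    return -y + np.sin(z)

def phi(t, y, z):             # illustrative irregular phi: an indicator,
    return (z <= y) * 1.0     # whose empirical mean is F_{X_t}(y)

T, n_steps, N, x0 = 1.0, 100, 1000, 0.5
dt = T / n_steps
X = np.full(N, x0)

for k in range(n_steps):
    t = k * dt
    # empirical version of int phi(t, X_i, z) P_{X_t}(dz), per particle i
    law_term = phi(t, X[:, None], X[None, :]).mean(axis=1)
    X += b(t, X, law_term) * dt + np.sqrt(dt) * rng.standard_normal(N)

print("E[X_T] ~", X.mean(), " Var[X_T] ~", X.var())
\end{verbatim}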
The first main contribution of this paper is to establish existence and uniqueness of weak and strong solutions of mean-field SDE \eqref{lebesgueMFSDE} with irregular drift. Further, we show that the strong solutions are Malliavin differentiable. For coefficients $\overline{b}$ and $\overline{\sigma}$ in the general mean-field SDE \eqref{eq:RegMainMcKeanVlasov} fulfilling typical regularity assumptions such as linear growth and Lipschitz continuity, existence and uniqueness are well studied, see e.g.~\cite{BuckdahnLiPengRainer_MFSDEandAssociatedPDE}. In \cite{Chiang_MKVWithDiscontinuousCoefficients} the existence of strong solutions is shown for time-homogeneous mean-field SDEs \eqref{eq:RegMainMcKeanVlasov} with drift coefficients $\overline{b}$ that are of linear growth and allow for certain discontinuities in the space variable $y$ and are Lipschitz in the law variable $\mu$. In the time-inhomogeneous case it is shown in \cite{MishuraVeretennikov_SolutionsOfMKV} that there exists a strong solution of mean-field SDE \eqref{lebesgueMFSDE} in the special case $b(t,y,z) = z$ under the assumption that $\varphi$ is of linear growth. The special case of mean-field SDE \eqref{lebesgueMFSDE}, where $\varphi(t,y,z) = \overline{\varphi}(z)$, is treated in \cite{de2015strong}. Here the author assumes that the drift coefficient $b$ is bounded and continuously differentiable in the law variable $z$ and $\overline{\varphi}$ is $\alpha$-Hölder continuous for some $0< \alpha \leq 1$. The works closest to our analysis presented in the following are \cite{Bauer_MultiDim} and \cite{Bauer_StrongSolutionsOfMFSDEs}, where for additive noise, i.e.~$\overline{\sigma} \equiv 1$, existence and uniqueness of weak and Malliavin differentiable strong solutions of mean-field SDE \eqref{eq:RegMainMcKeanVlasov} is shown for irregular drift coefficients $\overline{b}$ including the case of bounded coefficients $\overline{b}$ that are allowed to be merely measurable in the space variable $y$ and continuous in the law variable $\mu$.\par
Considering mean-field SDE \eqref{lebesgueMFSDE}, first existence and uniqueness results of solutions for irregular drifts are inherited from results in \cite{Bauer_MultiDim} on the general mean-field SDE \eqref{eq:RegMainMcKeanVlasov} by specifying $b$ and $\varphi$ such that $\overline{b}$ in \eqref{eq:MFDrift} fulfills the assumptions in \cite{Bauer_MultiDim}. We derive these conditions in \Cref{sec:Recall}. However, in order to guarantee continuity in the law variable $\mu$ required in \cite{Bauer_MultiDim} we cannot allow for irregular $\varphi$, in particular we need that $\varphi$ is Lipschitz continuous in the third variable. This excludes interesting examples where $\varphi$ is irregular, as for example the case when $\varphi(t,x,z) = \mathbbm{1}_{\lbrace z\leq u \rbrace}$, $u\in \mathbb{R}$, and thus the case where the drift $b\left(t,X_t^x,F_{X_t^x}(u)\right)$ depends on the distribution function $F_{X_t^x}(\cdot)$ of the solution is not covered. The objective of this paper is thus to show existence and uniqueness of weak and Malliavin differentiable strong solutions of mean-field SDE \eqref{lebesgueMFSDE} where we relax the conditions on $\varphi$ even further and merely assume that $\varphi$ is measurable and of at most linear growth. The assumptions on the drift function $b$ are inherited from \cite{Bauer_MultiDim} which includes the case of merely measurable coefficients of at most linear growth that are continuous in the law variable $z$. As one application we obtain a global version of Carathéodory's existence theorem for ODEs.
In the second part of the paper the main objective is to study the differentiability in the initial condition $x$ of the expectation functional $\mathbb{E}[\Phi(X_T^x)]$ and to give a Bismut-Elworthy-Li type representation of $\partial_x \mathbb{E}[\Phi(X_T^x)]$\footnote{Here, $\partial_x$ denotes the Jacobian with respect to the variable $x$.}, where $\Phi: \mathbb{R} \to \mathbb{R}$ and $(X_t^x)_{t\in [0,T]}$ is the unique strong solution of the one-dimensional mean-field SDE \eqref{lebesgueMFSDE}, i.e. $d=1$. In \cite{Bauer_StrongSolutionsOfMFSDEs} it is shown that $\mathbb{E}[\Phi(X_T^x)]$ is Sobolev differentiable in its initial condition for a broad range of irregular drift coefficients and for $\Phi$ fulfilling merely some integrability condition, and a Bismut-Elworthy-Li formula is derived. However, for various purposes it is of interest to understand when the derivative $\partial_x \mathbb{E}[\Phi(X_T^x)]$ exists in a strong sense. For example, the weak derivative does not allow for a satisfactory interpretation of $\partial_x \mathbb{E}[\Phi(X_T^x)]$ as a sensitivity measure in the sense of the so-called {\it Delta} from Mathematical Finance.
For the case $\varphi(t,y,z) = \overline{\varphi}(z)$ and for smooth coefficients, \cite{Banos_Bismut} provides a Bismut-Elworthy-Li formula for the continuous derivative $\partial_x \mathbb{E}[\Phi(X_T^x)]$. We here show that $\mathbb{E}[\Phi(X_T^x)]$ is continuously differentiable for a large family of irregular drift coefficients. More precisely, we require $b$ and $\varphi$ in addition to the assumptions for existence and uniqueness of strong solutions to be sufficiently regular in the law variable $z$. For these coefficients the Bismut-Elworthy-Li representation from \cite{Bauer_StrongSolutionsOfMFSDEs} thus holds in a strong sense. As a first step to obtain this result, we also need to study strong differentiability of $X^x$ in its initial condition $x$. In particular, we show that if $b$ and $\varphi$ are continuously differentiable in the space variable $y$ and the law variable $z$ then $X_t^x$ is continuously differentiable in $x$.
The paper is structured as follows. In Section~\ref{sec:Recall} we recall the results from \cite{Bauer_MultiDim} and \cite{Bauer_StrongSolutionsOfMFSDEs} and apply it to the case of mean-field SDEs of type \eqref{lebesgueMFSDE}. These results will be employed in the remaining parts of the paper. In Section~\ref{sec:Solution} we weaken the assumptions on $\varphi$ and show existence, uniqueness, and Malliavin differentiability of solutions of mean-field equation \eqref{lebesgueMFSDE}. Finally, Section~\ref{sec:Regularity} deals with the first variation process $(\partial_x X_t^x )_{t\in[0,T]}$ and provides a Bismut-Elworthy-Li formula for the continuous derivative $\partial_x \mathbb{E}[\Phi(X_T^x)]$ for irregular drift coefficients in the one-dimensional case.
\vspace{1cm}
\textbf{Notation:} Subsequently we list some of the most frequently used notations. For this, let $(\mathcal{X},d_{\mathcal{X}})$ and $(\mathcal{Y},d_{\mathcal{Y}})$ be two metric spaces.
\begin{itemize}
\item By $\Vert \cdot \Vert$ we denote the euclidean norm.
\item $\mathcal{C}(\mathcal{X};\mathcal{Y})$ denotes the space of continuous functions $f:\mathcal{X} \to \mathcal{Y}$. If $\mathcal{X} = \mathcal{Y}$ we write $\mathcal{C}(\mathcal{X}) := \mathcal{C}(\mathcal{X}; \mathcal{X})$.
\item $\mathcal{C}_0^{\infty}(\mathcal{X})$ denotes the space of smooth functions $f: \mathcal{X} \to \mathbb{R}$ with compact support.
\item For every $C>0$ we define the space $\text{Lip}_C(\mathcal{X};\mathcal{Y})$ of functions $f:\mathcal{X}\to \mathcal{Y}$ such that
\begin{align*}
d_{\mathcal{Y}}(f(x_1),f(x_2)) \leq C d_{\mathcal{X}}(x_1,x_2), \quad \forall x_1,x_2 \in \mathcal{X}
\end{align*}
as the space of Lipschitz functions with Lipschitz constant $C>0$. Furthermore, we define $\text{Lip}(\mathcal{X};\mathcal{Y}) := \bigcup_{C>0} \text{Lip}_C(\mathcal{X};\mathcal{Y})$ and denote by $\text{Lip}_C(\mathcal{X}):= \text{Lip}_C(\mathcal{X};\mathcal{X})$ and $\text{Lip}(\mathcal{X}) := \text{Lip}(\mathcal{X};\mathcal{X})$, respectively, the space of Lipschitz functions mapping from $\mathcal{X}$ to $\mathcal{X}$.
\item $\mathcal{C}^{1,1}_{b,C}(\mathbb{R}^d)$ denotes the space of continuously differentiable functions $f: \mathbb{R}^d \to \mathbb{R}^d$ such that there exists a constant $C>0$ with
\begin{enumerate}[(a)]
\item $\sup_{y\in \mathbb{R}^d} \Vert f'(y) \Vert \leq C$, and
\item $(y \mapsto f'(y)) \in \text{Lip}_C(\mathbb{R}^d)$.
\end{enumerate}
Here $f'$ denotes the Jacobian of $f$. We define $\mathcal{C}^{1,1}_b(\mathbb{R}^d) := \bigcup_{C>0} \mathcal{C}^{1,1}_{b,C}(\mathbb{R}^d)$.
\item $\mathfrak{C}([0,T] \times \mathbb{R}^d \times \mathbb{R}^d)$ is the space of functions $f:[0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ such that there exists a constant $C>0$ with
\begin{enumerate}[(a)]
\item $(y \mapsto f(t,y,z)) \in \mathcal{C}_{b,C}^{1,1}(\mathbb{R}^d)$ for all $t\in[0,T]$ and $z \in \mathbb{R}^d$, and
\item $(z \mapsto f(t,y,z)) \in \mathcal{C}_{b,C}^{1,1}(\mathbb{R}^d)$ for all $t\in[0,T]$ and $y\in \mathbb{R}^d$.
\end{enumerate}
\item We say a function $f:[0,T]\times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ is in the space $\mathcal{L}([0,T] \times \mathbb{R}^d \times \mathbb{R}^d)$, if there exists a constant $C>0$ such that for every $t\in[0,T]$ and $y \in \mathbb{R}^d$ the function $(z \mapsto f(t,y,z)) \in \mathcal{C}_{b,C}^{1,1}(\mathbb{R}^d)$.
\item Let $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$ be a generic complete filtered probability space with filtration $\mathbb{F} = (\mathcal{F}_t)_{t\in[0,T]}$ and $B = (B_t)_{t\in[0,T]}$ be $d$-dimensional Brownian motion defined on this probability space. Furthermore, we write $\mathbb{E}[\cdot] := \mathbb{E}_{\mathbb{P}}[\cdot]$, if not mentioned differently.
\item $L^p(\Omega)$ denotes the Banach space of functions on the measurable space $(\Omega,\mathcal{F})$ integrable to some power $p$, $p\geq 1$.
\item $L^p(\Omega,\mathcal{F}_t)$ denotes the space of $\mathcal{F}_t$-measurable functions in $L^p(\Omega)$.
\item We define the weighted $L^p$-space over $\mathbb{R}$ with weight function $\omega: \mathbb{R} \to \mathbb{R}$ as
\begin{align*}
L^p(\mathbb{R};\omega) := \left\lbrace f:\mathbb{R} \to\mathbb{R} \text{ measurable : } \int_{\mathbb{R}} |f(y)|^p \omega(y) dy <\infty \right\rbrace.
\end{align*}
\item Let $f: \mathbb{R}^d \to \mathbb{R}^d$ be a (weakly) differentiable function. Then we denote by $\partial_y f(y):= \frac{\partial f}{\partial y} (y)$ its first (weak) derivative evaluated at $y \in \mathbb{R}^d$ and $\partial_k$ is the Jacobian in the direction of the $k$-th variable.
\item We denote the Doléans-Dade exponential for a progressive process $Y$ with respect to the corresponding Brownian integral, if well-defined, for $t\in[0,T]$ by $$\mathcal{E}\left( \int_0^t Y_u dB_u \right) := \exp \left\lbrace \int_0^t Y_u dB_u - \frac{1}{2} \int_0^t \Vert Y_u \Vert^2du \right\rbrace.$$
\item We define $B_t^x := x + B_t$, $t\in [0,T]$, for any Brownian motion $B$.
\item We write $E_1(\theta) \lesssim E_2(\theta)$ for two mathematical expressions $E_1(\theta),E_2(\theta)$ depending on some parameter $\theta$, if there exists a constant $C>0$ not depending on $\theta$ such that $E_1(\theta) \leq C E_2(\theta)$.
\item We denote the Wiener transform of some $Z \in L^2(\Omega,\mathcal{F}_T)$ in $f \in L^2([0,T])$ by
\begin{align*}
\mathcal{W}(Z)(f) := \mathbb{E} \left[ Z \mathcal{E}\left(\int_0^T f(s) dB_s \right) \right].
\end{align*}
\end{itemize}
\section{Results derived from the general mean-field SDE}\label{sec:Recall}
In this section we recall sufficient conditions on $b$ and $\varphi$ such that $\overline{b}$ as defined in \eqref{eq:MFDrift} fulfills the corresponding assumptions for existence, uniqueness, and regularity properties of strong solutions required in \cite{Bauer_MultiDim} and \cite{Bauer_StrongSolutionsOfMFSDEs}. These results will subsequently be applied in \Cref{sec:Solution,sec:Regularity} in order to weaken the assumptions on $\varphi$ such that mean-field SDE \eqref{lebesgueMFSDE} has a Malliavin differentiable strong solution and to show strong differentiability of this unique strong solution under sufficient conditions on $b$ and $\varphi$. We start by giving the definitions of some frequently used assumptions. \par
Let $f:[0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ be a measurable function. The function $f$ is said to be of linear growth, if there exists a constant $C>0$ such that for all $t\in[0,T]$ and $y,z\in \mathbb{R}^d$,
\begin{align}\label{linearGrowthMF}
\Vert f(t,y,z) \Vert &\leq C(1+\Vert y \Vert+\Vert z\Vert).
\end{align}
We say $f$ is continuous in the third variable (uniformly with respect to the first and second variable), if for all $z_1 \in \mathbb{R}^d$ and $\varepsilon>0$ there exists a $\delta>0$ such that for all $t\in [0,T]$ and $y\in \mathbb{R}^d$
\begin{align}\label{continuityThirdMF}
\left( \forall z_2 \in \mathbb{R}^d: \Vert z_1-z_2 \Vert < \delta \right) \Rightarrow \Vert f(t,y,z_1) - f(t,y,z_2) \Vert < \varepsilon.
\end{align}
Moreover, we say $f$ admits a modulus of continuity in the third variable, if there exists $\theta\in \lbrace \vartheta \in \mathcal{C}(\mathbb{R}_+;\mathbb{R}): \vartheta(z) >0 \text{ and } \int_0^z \frac{dy}{\vartheta(y)} = \infty \text{ } \forall z\in\mathbb{R}_+\rbrace$ such that for all $t\in[0,T]$ and $y,z_1,z_2\in\mathbb{R}^d$
\begin{align}\label{modulusOfContinuityMean}
\Vert f(t,y,z_1) - f(t,y,z_2)\Vert^2 \leq \theta \left(\Vert z_1-z_2\Vert^2\right).
\end{align}
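For instance, $\theta(v) = Kv$, $K>0$, is admissible and corresponds to Lipschitz continuity in the third variable, and so is the Osgood-type choice $\theta(v) = v \ln(1/v)$ for small $v$, whereas the Hölder-type bound $\theta(v) = K v^{\alpha}$ with $\alpha \in (0,1)$ is not admissible, since in this case
\begin{align*}
\int_0^z \frac{dy}{K y^{\alpha}} = \frac{z^{1-\alpha}}{K(1-\alpha)} < \infty.
\end{align*}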
The function $f$ is said to be Lipschitz continuous in the second, respectively third, variable (uniformly with respect to the other two variables), if there exists a constant $C>0$ such that for all $t\in[0,T]$ and $y_1,y_2,z \in \mathbb{R}^d$
\begin{align}\label{lipschitzSecondMF}
\left\Vert f(t,y_1,z) - f(t,y_2,z) \right\Vert \leq C \Vert y_1-y_2\Vert,
\end{align}
respectively, such that for all $t\in[0,T]$ and $y,z_1,z_2 \in \mathbb{R}^d$
\begin{align}\label{lipschitzThirdMF}
\left\Vert f(t,y,z_1) - f(t,y,z_2) \right\Vert \leq C \Vert z_1-z_2\Vert.
\end{align}
Concluding, we say the function $f$ is Lipschitz continuous in the second and third variable (uniformly with respect to the first variable), if it fulfills the Lipschitz assumptions \eqref{lipschitzSecondMF} and \eqref{lipschitzThirdMF}, i.e. there exists a constant $C>0$ such that for all $t\in[0,T]$ and $y_1,y_2,z_1,z_2\in\mathbb{R}^d$
\begin{align}\label{eq:lipschitzSecondThird}
\left\Vert f(t,y_1,z_1) - f(t,y_2,z_2) \right\Vert \leq C \left( \Vert y_1 - y_2 \Vert + \Vert z_1 - z_2 \Vert \right).
\end{align}
Note that when we talk about (Lipschitz) continuity in a certain variable, we always understand the continuity to hold uniformly with respect to the other variables. \par \vspace{0.3cm}
We start by deriving sufficient conditions on $b$ and $\varphi$ from \cite{Bauer_MultiDim} and \cite{Bauer_StrongSolutionsOfMFSDEs} for existence and uniqueness of solutions of mean-field SDE \eqref{lebesgueMFSDE}. For detailed definitions of the notions of weak and strong solution as well as pathwise uniqueness and uniqueness in law, as used subsequently, we refer the reader to these same papers.
\par
From \cite[Theorem 3.7]{Bauer_MultiDim} we obtain in the following corollary the assumptions on $b$ and $\varphi$ that ensure the existence of a strong solution of \eqref{lebesgueMFSDE}.
\begin{corollary}
Let $b:[0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ be a measurable function of at most linear growth \eqref{linearGrowthMF} and continuous in the third variable \eqref{continuityThirdMF}. Furthermore, assume that $\varphi:[0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ is a measurable functional which is of at most linear growth \eqref{linearGrowthMF} and Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}. Then mean-field SDE \eqref{lebesgueMFSDE} has a strong solution. \par
If in addition $b$ admits a modulus of continuity in the third variable \eqref{modulusOfContinuityMean}, the solution is pathwise unique.
\end{corollary}
Concerning Malliavin differentiability of the solution we obtain from \cite[Theorem 4.1]{Bauer_MultiDim}:
\begin{corollary}\label{cor:strongSolutionRecall}
Let $b$ be a bounded measurable function which is continuous in the third variable \eqref{continuityThirdMF}. Furthermore, assume that $\varphi$ is a measurable functional which is of at most linear growth \eqref{linearGrowthMF} and Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}. Then mean-field SDE \eqref{lebesgueMFSDE} has a Malliavin differentiable strong solution.
\end{corollary}
\begin{remark}
In the one-dimensional case, $d=1$, \Cref{cor:strongSolutionRecall} can be generalized in the following way due to \cite{Bauer_StrongSolutionsOfMFSDEs}. Let $(b\diamond \varphi)$ allow for a decomposition of the form
\begin{align}\label{formDriftMF}
(b\diamond \varphi)(t,y,\mu) & := \hat{b}\left(t,y,\int_\mathbb{R} \hat{\varphi}(t,y,z) \mu(dz) \right) + \tilde{b}\left(t,y,\int_\mathbb{R} \tilde{\varphi}(t,y,z) \mu(dz) \right),
\end{align}
where the drift $\hat{b}$ is merely measurable and bounded and the functional $\hat{\varphi}$ is of linear growth \eqref{linearGrowthMF} and Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}. Moreover, the drift $\tilde{b}$ is of at most linear growth \eqref{linearGrowthMF} and Lipschitz continuous in the second variable \eqref{lipschitzSecondMF} whereas the functional $\tilde{\varphi}$ is of at most linear growth \eqref{linearGrowthMF} and Lipschitz continuous in the second and third variable \eqref{eq:lipschitzSecondThird}. Then, mean-field SDE \eqref{lebesgueMFSDE} has a Malliavin differentiable unique strong solution and the Malliavin derivative admits for $0\leq s \leq t \leq T$ the representation
\begin{align}\label{eq:derivativeMalliavin}
D_sX_t^x = \exp \left\lbrace - \int_s^t \int_{\mathbb{R}} b \left(u,y, \int_\mathbb{R} \varphi(u,y,z) \mathbb{P}_{X_u^x}(dz) \right) L^{X^x}(du,dy) \right\rbrace.
\end{align}
Here, $L^{X^x}(du,dy)$ denotes integration with respect to local time of $X^x$ in time and space, see \cite{MeyerBrandisBanosDuedahlProske_ComputingDeltas} and \cite{Eisenbaum_LocalTimeIntegral} for more details. If in addition $b$ is continuously differentiable with respect to the second and third variable and $\varphi$ is continuously differentiable with respect to the second variable, representation \eqref{eq:derivativeMalliavin} can be written as
\begin{align*
D_sX_t^x = &\exp \left\lbrace \int_s^t \partial_2 b\left(u, X_u^x, \int_\mathbb{R} \varphi(u,X_u^x,z) \mathbb{P}_{X_u^x}(dz) \right) \right.\\
&\qquad +\left. \partial_3 b\left(u, X_u^x, \int_\mathbb{R} \varphi(u,X_u^x,z) \mathbb{P}_{X_u^x}(dz) \right) \int_\mathbb{R} \partial_2 \varphi(u,X_u^x, z) \mathbb{P}_{X_u^x}(dz) du \right\rbrace. \notag
\end{align*}
Here, $\partial_2$ and $\partial_3$ denote the derivatives with respect to the second and third variable, respectively.
\end{remark}
Next we state a result on the regularity of a strong solution of \eqref{lebesgueMFSDE} in its initial condition which is due to \cite[Theorem 4.3]{Bauer_MultiDim}.
\begin{corollary}\label{cor:SobolevDifferentiableRecall}
Let $b$ be a bounded measurable function which is Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}. Furthermore, assume that $\varphi$ is a measurable functional which is of at most linear growth \eqref{linearGrowthMF} and Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}. Then, the unique strong solution $(X_t^x)_{t\in[0,T]}$ of mean-field SDE \eqref{lebesgueMFSDE} is Sobolev differentiable in the initial condition $x$.
\end{corollary}
\begin{remark}
In the one-dimensional case, $d=1$, we further get due to \cite[Theorem 3.3 \& Proposition 3.4]{Bauer_StrongSolutionsOfMFSDEs} for $(b\diamond \varphi)$ allowing for a decomposition \eqref{formDriftMF} that the first variation process $(\partial_x X_t^x)_{t\in[0,T]}$ has for almost all $x \in K$, where $K\subset \mathbb{R}$ is a compact subset, the representation
\begin{small}
\begin{align}\label{eq:RegRepresentationFV}
\partial_x X_t^x &= \exp \left\lbrace - \int_0^t \int_\mathbb{R} (b\diamond \varphi)\left(s,y, \mathbb{P}_{X_s^x}\right) L^{X^x}(ds,dy) \right\rbrace \\
&\quad+ \int_0^t \exp \left\lbrace - \int_u^t \int_\mathbb{R} (b\diamond \varphi) \left(s,y,\mathbb{P}_{X_s^x} \right) L^{X^x}(ds,dy) \right\rbrace \partial_x (b\diamond \varphi) \left(u,y, \mathbb{P}_{X_u^x} \right)\Big\vert_{y= X_u^x} du. \notag
\end{align}
\end{small}
Moreover, for $0\leq s \leq t \leq T$ the following relationship with the Malliavin Derivative holds:
\begin{align}\label{eq:RegRelationDerivatives}
\partial_x X_t^x = D_sX_t^x \partial_x X_s^x + \int_s^t D_uX_t^x \partial_x (b\diamond \varphi) \left(u,y, \mathbb{P}_{X_u^x} \right) \Big\vert_{y=X_u^x} du \,.
\end{align}
\end{remark}
Furthermore, the unique strong solution is Hölder continuous in time and in the initial condition, which is due to \cite[Theorem 4.12]{Bauer_MultiDim}.
\begin{corollary}\label{cor:Hölder}
Let $b$ be a bounded measurable function which is Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}. Furthermore, assume that $\varphi$ is a measurable functional which is of at most linear growth \eqref{linearGrowthMF} and Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}. Let $(X_t^x)_{t\in[0,T]}$ be the unique strong solution of mean-field SDE \eqref{lebesgueMFSDE}. Then for every compact subset $K\subset \mathbb{R}^d$ there exists a constant $C > 0$ such that for all $s,t\in[0,T]$ and $x,y \in K$,
\begin{align}\label{eq:RegHoelderContinuity}
\mathbb{E} [\Vert X_t^x - X_s^y \Vert^2] \leq C(|t-s| + \Vert x-y\Vert^2).
\end{align}
In particular, there exists a continuous version of the random field $(t,x) \mapsto X_t^x$ with Hölder continuous trajectories of order $\alpha < \frac{1}{2}$ in $t\in[0,T]$ and $\alpha<1$ in $x\in \mathbb{R}^d$.
\end{corollary}
Finally, from \cite[Theorem 5.1]{Bauer_MultiDim} we get the following Bismut-Elworthy-Li type formula under the same assumptions as in \Cref{cor:SobolevDifferentiableRecall}.
\begin{corollary}\label{cor:BismutFormulaRecall}
Let $b$ be a bounded measurable function which is Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}. Furthermore, assume that $\varphi$ is a measurable functional which is of at most linear growth \eqref{linearGrowthMF} and Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}. Moreover, let $\Phi \in L^{2p}(\mathbb{R}^d;\omega_T)$ with $p:= \frac{1+\varepsilon}{\varepsilon}$, $\varepsilon>0$ sufficiently small with regard to \Cref{lem:RegBoundsSolution}, and $\omega_T(y) := \exp \left\lbrace - \frac{\Vert y \Vert^2}{4T} \right\rbrace$. Then, the expectation functional $ \mathbb{E}\left[ \Phi(X_T^x) \right]$ is Sobolev differentiable in the initial condition and the derivative $\partial_x \mathbb{E}\left[ \Phi(X_T^x) \right]$ admits for almost all $x\in K$, where $K \subset \mathbb{R}^d$ is a compact subset, the representation
\begin{footnotesize}
\begin{align}\label{eq:RegDelta}
\partial_x \mathbb{E}[\Phi(X_T^x)] = \mathbb{E} \left[ \Phi(X_T^x) \left( \int_0^T a(s) \partial_x X_s^x + \partial_x (b\diamond \varphi) \left(s,y,\mathbb{P}_{X_s^x} \right)\vert_{y=X_s^x} \int_0^s a(u) du dB_s \right) \right],
\end{align}
\end{footnotesize}
where $a: \mathbb{R} \to \mathbb{R}$ is any bounded, measurable function such that
\begin{align*}
\int_0^T a(s) ds = 1.
\end{align*}
\end{corollary}
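Representation \eqref{eq:RegDelta} lends itself directly to Monte Carlo evaluation. As a minimal Python sketch, consider the degenerate case $d=1$ in which the drift has no mean-field dependence, so that the term involving $\partial_x (b\diamond \varphi)$ vanishes and \eqref{eq:RegDelta} reduces to the classical Bismut-Elworthy-Li weight; we take $a \equiv 1/T$ and the illustrative Ornstein-Uhlenbeck drift $b(t,y,z) = -y$:
\begin{verbatim}
import numpy as np

# Monte Carlo sketch of the Bismut-Elworthy-Li weight with a(s) = 1/T for
# dX_t = -X_t dt + dB_t (no mean-field term, so the correction vanishes
# and d/dx E[Phi(X_T)] = E[ Phi(X_T) * (1/T) int_0^T (dX_s/dx) dB_s ]).
rng = np.random.default_rng(1)
T, n_steps, n_paths, x = 1.0, 200, 200_000, 0.5
dt = T / n_steps

X = np.full(n_paths, x)
dX_dx = np.ones(n_paths)    # first variation: d(dX/dx) = -(dX/dx) dt
weight = np.zeros(n_paths)  # accumulates (1/T) int (dX/dx) dB

for _ in range(n_steps):
    dB = np.sqrt(dt) * rng.standard_normal(n_paths)
    weight += dX_dx * dB / T    # Ito integrand at the left endpoint
    X += -X * dt + dB
    dX_dx += -dX_dx * dt

Phi = X**2                      # test functional Phi(y) = y^2
est = (Phi * weight).mean()
exact = 2.0 * x * np.exp(-2.0 * T)   # d/dx E[X_T^2] in closed form
print(f"BEL estimate: {est:.4f}, exact: {exact:.4f}")
\end{verbatim}
The estimate can be checked against the closed-form value $\partial_x \mathbb{E}[(X_T^x)^2] = 2x e^{-2T}$, since here $X_T^x$ is Gaussian with mean $x e^{-T}$.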
\section{Existence and Uniqueness of Solutions }\label{sec:Solution}
The results in \Cref{sec:Recall} presume Lipschitz continuity of the function $\varphi$. In this section we are interested in showing existence and uniqueness of strong solutions under weakened regularity assumptions on $\varphi$. In particular, this will allow to consider mean-field SDEs where the drift depends on the solution law in form of indicator and distribution functions, respectively.
\begin{theorem}\label{extensionSolution}
Let $b:[0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ be of at most linear growth \eqref{linearGrowthMF} and continuous in the third variable \eqref{continuityThirdMF}. Further, let $\varphi:[0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ be of at most linear growth \eqref{linearGrowthMF}. Then mean-field SDE \eqref{lebesgueMFSDE} has a strong solution. \par
If in addition $b$ is Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}, the solution is unique.
\end{theorem}
\begin{proof}
This proof is organized as follows. First we introduce a sequence $\lbrace Y^n \rbrace_{n\geq 1}$ of solutions to mean-field SDE \eqref{eq:RegMainMcKeanVlasov} with approximating coefficients and show that we can find a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ carrying a sequence $\lbrace X^k \rbrace_{k\geq 1}$ with the same finite-dimensional distributions as a subsequence of $\lbrace Y^n \rbrace_{n\geq 1}$, which converges in $L^2(\Omega)$ to some stochastic process $X$. We then prove that $X^k$ further converges weakly in $L^2(\Omega)$ to a solution of \eqref{lebesgueMFSDE} and thus by uniqueness of the limit $X$ is a weak solution of mean-field SDE \eqref{lebesgueMFSDE}. Afterwards we conclude the existence of a strong solution and prove uniqueness of the solution. \par
By standard arguments using mollifiers, we can define sequences $\lbrace b_n \rbrace_{n\geq 1}$ and $\lbrace \varphi_n\rbrace_{n\geq 1}$ in $\mathcal{C}_0^\infty([0,T] \times \mathbb{R}^d \times \mathbb{R}^d)$ such that $b_n$ converges to $b$ and $\varphi_n$ converges to $\varphi$, respectively, pointwise in $(t,y,z) \in [0,T]\times \mathbb{R}^d \times \mathbb{R}^d$ a.e. with respect to the Lebesgue measure. We denote the original functions $b$ and $\varphi$ by $b_0$ and $\varphi_0$, respectively. Due to the continuity assumption \eqref{continuityThirdMF} on the coefficient $b$, we can further assume that the family of coefficients $\lbrace b_n \rbrace_{n\geq 0}$ is pointwise equicontinuous in the third variable, i.e. for every $\varepsilon>0$ and $z_1 \in \mathbb{R}^d$ there exists a $\delta >0$ such that for all $n\geq 0$, $t\in [0,T]$, and $y \in \mathbb{R}^d$ we get
\begin{align}\label{continuityThirdEquiApprox}
\left( \forall z_2 \in \mathbb{R}^d: \Vert z_1-z_2 \Vert< \delta \right) \Rightarrow \Vert b_n(t,y,z_1) - b_n(t,y,z_2) \Vert < \varepsilon.
\end{align}
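For concreteness, one standard construction is to combine convolution with a smooth cutoff: for a mollifier $\rho \in \mathcal{C}_0^{\infty}(\mathbb{R}^d \times \mathbb{R}^d)$ with $\rho \geq 0$ and $\int \rho\, dw = 1$, and a smooth cutoff $\chi_n$ equal to $1$ on the ball of radius $n$ and vanishing outside the ball of radius $n+1$, one may, e.g., take
\begin{align*}
b_n(t,y,z) := \chi_n(y,z) \int_{\mathbb{R}^d \times \mathbb{R}^d} b(t,y',z')\, n^{2d} \rho\big(n(y-y'), n(z-z')\big)\, dy'\, dz',
\end{align*}
mollifying in addition in the time variable if smoothness in $t$ is needed, and analogously for $\varphi_n$.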
Then, by \Cref{cor:strongSolutionRecall}, for $n\geq 1$ mean-field SDEs
\begin{align}\label{eq:RegApproximatingMFSDE}
\begin{split}
dY_t^n &= b_n\left(t,Y_t^n,\int_{\mathbb{R}^d} \varphi_n\left(t,Y_t^n,z\right) \mathbb{P}_{Y_t^n}(dz) \right) dt + dW_t, ~ t\in[0,T],\\
Y_0^n &= x \in \mathbb{R}^d,
\end{split}
\end{align}
where $W= (W_t)_{t\in[0,T]}$ is Brownian motion, have unique strong solutions $\lbrace Y^n \rbrace_{n\geq 1}$ on some complete probability space $(\tilde{\Omega}, \tilde{\mathcal{F}}, \tilde{\mathbb{P}})$. Moreover, due to \Cref{lem:RegBoundsSolution} there exists some constant $C>0$ such that
\begin{enumerate}[(i)]
\item $\sup_{n\geq 1} \sup_{0\leq t \leq T} \EPpO{Y_t^n}{\tilde{\mathbb{P}}}{2} \leq C(1+\Vert x\Vert^2)$,
\item $\sup_{n\geq 1} \sup_{0\leq s \leq t \leq T; t-s \leq h} \EPpO{Y_t^n-Y_s^n}{\tilde{\mathbb{P}}}{2} \leq Ch$.
\end{enumerate}
Next, we show that the properties (i) and (ii) imply the assumptions of \Cref{thm:SkorohodRepresentationTheorem} and thus there exists a subsequence $\lbrace n_k \rbrace_{k\geq 1} \subset \mathbb{N}$ and a sequence of stochastic processes $\lbrace (X_t^k)_{t\in[0,T]} \rbrace_{k\geq 1}$ defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ such that the finite-dimensional distributions of the processes $Y^{n_k}$ and $X^k$ coincide for every $k\geq 1$, cf. \Cref{rem:finiteDistribution}, and $X_t^k$ converges in probability to $X_t$ as $k$ goes to infinity. Note first that the stochastic processes $\lbrace Y^n \rbrace_{n\geq 1}$ are almost surely continuous as solutions of mean-field SDE \eqref{eq:RegApproximatingMFSDE}. Furthermore, we get by Chebyshev's inequality that due to (i)
\begin{align*}
\tilde{\mathbb{P}}( \Vert Y_t^n \Vert > K) &\leq \frac{1}{K^2} \EPpO{Y_t^n}{\tilde{\mathbb{P}}}{2} \leq \frac{1}{K^2} C(1+\Vert x \Vert^2 ), \quad K>0,
\end{align*}
and thus
\begin{align*}
\lim_{K \to \infty} \lim_{n \to \infty} \sup_{t\in [0,T]} \tilde{\mathbb{P}}( \Vert Y_t^n \Vert > K) \leq \lim_{K \to \infty} \lim_{n \to \infty} \sup_{t\in [0,T]} \frac{1}{K^2} C(1+\Vert x \Vert^2 ) = 0.
\end{align*}
Analogously, we get due to property (ii) that for every $\varepsilon > 0$
\begin{align*}
\tilde{\mathbb{P}}(\Vert Y_t^n - Y_s^n \Vert > \varepsilon) \leq \frac{1}{\varepsilon^2} \EPpO{Y_t^n - Y_s^n}{\tilde{\mathbb{P}}}{2} \leq \frac{C h}{\varepsilon^2},
\end{align*}
and thus
\begin{align*}
\lim_{h\to 0} \lim_{n\to \infty} \sup_{\vert t-s \vert \leq h} \tilde{\mathbb{P}}(\Vert Y_t^n - Y_s^n \Vert > \varepsilon) \leq \lim_{h\to 0} \lim_{n\to \infty} \sup_{\vert t-s \vert \leq h} \frac{C h}{\varepsilon^2} = 0.
\end{align*}
Consequently, the assumptions of \Cref{thm:SkorohodRepresentationTheorem} are fulfilled. For the sake of readability, we assume in the following without loss of generality that $n_k = k$. Further note that due to the uniform integrability of $\lbrace \Vert X_t^k \Vert^2 \rbrace$ by property (i), we get that for every $t\in [0,T]$ the sequence $\lbrace X_t^k \rbrace_{k\geq 1}$ converges to $X_t$ in $L^2(\Omega)$. Due to property (ii) we further get in connection with Kolmogorov's continuity theorem that $(X_t)_{t\in[0,T]}$ can be assumed to have almost surely continuous paths. Using approximation by Riemann sums, we further have that $$\int_0^t b_k\left(s,X_s^k, \int_{\mathbb{R}^d} \varphi_k \left(s,X_s^k,z\right) \mathbb{P}_{X_s^k}(dz) \right) ds$$ and $$ \int_0^t b_k \left(s,Y_s^k, \int_{\mathbb{R}^d} \varphi_k \left(s,Y_s^k,z\right) \tilde{\mathbb{P}}_{Y_s^k}(dz) \right) ds$$ have the same distribution for every $k\geq 1$. Again by virtue of \Cref{thm:SkorohodRepresentationTheorem} we get that
\begin{align*}
B_t^k := X_t^k - \int_0^t b_k \left(s,X_s^k, \int_{\mathbb{R}^d} \varphi_k \left(s,X_s^k,z\right) \mathbb{P}_{X_s^k}(dz) \right) ds
\end{align*}
is $d$-dimensional Brownian motion on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and thus $X^k$ solves \eqref{eq:RegApproximatingMFSDE} on the stochastic basis $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P}, B^k)$.\par
Let us define the stochastic differential equation
\begin{align}\label{helpMFSDE}
d\overline{X}_t = b\left(t,\overline{X}_t, \int_{\mathbb{R}^d} \varphi\left(t,\overline{X}_t,z\right) \mathbb{P}_{X_t}(dz) \right) dt + dB_t,~ t\in[0,T], ~\overline{X}_0 = x \in \mathbb{R}^d.
\end{align}
Due to the result of Veretennikov given in \cite{veretennikov1981strong}, SDE \eqref{helpMFSDE} has a unique strong solution on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Therefore it is left to show that for every $t \in [0,T]$ the sequence $\lbrace X_t^k \rbrace_{k\geq 1}$ converges weakly in $L^2(\Omega)$ to $\overline{X}_t$. Indeed, if this holds true, we get by the uniqueness of the limit that $\mathbb{P}_{X_t} = \mathbb{P}_{\overline{X}_t}$ for all $t\in[0,T]$ and consequently mean-field SDE \eqref{lebesgueMFSDE} and \eqref{helpMFSDE} coincide. Hence, we have found a weak solution of \eqref{lebesgueMFSDE}. In order to prove weak convergence in $L^2(\Omega)$ we use the Wiener transform: since the stochastic exponentials $\mathcal{E}\left(\int_0^T f(s) dB_s \right)$, $f\in L^2([0,T])$, span a dense subspace of $L^2(\Omega,\mathcal{F}_T)$ and the sequence $\lbrace X_t^k \rbrace_{k\geq 1}$ is bounded in $L^2(\Omega)$ by (i), it suffices to show for every $f\in L^2([0,T])$,
\begin{align*}
\left\Vert\mathcal{W}\left(X_t^k \right)(f) - \mathcal{W}\left(\overline{X}_t \right)(f)\right\Vert \xrightarrow[n\to\infty]{} 0.
\end{align*}
Using the inequality
\begin{align}\label{eq:RegExponentialInequality}
|e^x - e^y| \leq |x-y|(e^x+e^y), \quad \forall x,y\in\mathbb{R},
\end{align}
Burkholder-Davis-Gundy's inequality, and Minkowski's integral inequality, we get for $p:= \frac{1+ \varepsilon}{\varepsilon}$, $\varepsilon>0$ sufficiently small with respect to \Cref{lem:RegBoundsSolution}, that
\begin{align*}
&\left\Vert \mathcal{W}\left(X_t^k\right)(f) - \mathcal{W}\left(\overline{X}_t\right)(f)\right\Vert \\
&\quad \leq \mathbb{E} \left[ \left\Vert B_t^x\right\Vert \left| \mathcal{E}\left( \int_0^T b_k\left(t,B_t^x,\int_{\mathbb{R}^d} \varphi_k\left(t,B_t^x,z\right) \mathbb{P}_{X_t^k}(dz) \right) + f(t) dB_t \right) \right.\right.\\
&\qquad \left.\left. - \mathcal{E}\left( \int_0^T b\left(t,B_t^x,\int_{\mathbb{R}^d} \varphi\left(t, B_t^x,z\right) \mathbb{P}_{X_t}(dz)\right) +f(t) dB_t \right) \right| \right]\\
&\quad \lesssim \left( \int_0^T \mathbb{E} \left[ \left\Vert b_k\left(t,B_t^x,\int_{\mathbb{R}^d} \varphi_k\left(t,B_t^x,z\right) \mathbb{P}_{X_t^k}(dz) \right) \right.\right.\right.\\
&\qquad - \left.\left. \left. b\left(t,B_t^x,\int_{\mathbb{R}^d} \varphi\left(t, B_t^x,z\right) \mathbb{P}_{X_t}(dz) \right)\right\Vert^p \right]^{\frac{2}{p}} dt \right)^{\frac{1}{2}} + C_k \\
&\quad =: A_k + C_k,
\end{align*}
where
\begin{align*}
C_n := \int_0^T &\mathbb{E}\left[ \left| \left\Vert b_k\left(t,B_t^x,\int_{\mathbb{R}^d} \varphi_k\left(t,B_t^x,z\right) \mathbb{P}_{X_t^k}(dz) \right)+f(t)\right\Vert^2 \right. \right.\\
&\quad \left. \left.- \left\Vert b\left(t,B_t^x, \int_{\mathbb{R}^d} \varphi\left(t,B_t^x,z\right) \mathbb{P}_{X_t}(dz) \right)+f(t)\right\Vert^2 \right|^p \right]^{\frac{1}{p}} dt.
\end{align*}
We show using dominated convergence that $A_k$ converges to $0$ as $k$ tends to infinity. Since the family of coefficients $\lbrace b_k \rbrace_{k\geq 0}$ is pointwise equicontinuous in the third variable \eqref{continuityThirdEquiApprox}, it suffices to show that for all $t\in [0,T]$ and $y \in \mathbb{R}^d$
\begin{align*}
\left\Vert \int_{\mathbb{R}^d} \varphi_k\left(t,y,z\right) \mathbb{P}_{X_t^k}(dz) - \int_{\mathbb{R}^d} \varphi\left(t,y,z\right) \mathbb{P}_{X_t}(dz) \right\Vert &\xrightarrow[k\to\infty]{}0, \text{ and} \\
\Ep{\overline{b}_k\left(t,B_t^x,\mathbb{P}_{X_t} \right) - \overline{b}\left(t,B_t^x,\mathbb{P}_{X_t} \right)}{p} &\xrightarrow[k\to\infty]{}0.
\end{align*}
The second convergence is an immediate consequence of the definition of $b_k$, \Cref{lem:RegBoundsSolution}, and dominated convergence. Thus, it remains to show the first convergence. Let $\delta>0$. Since $\varphi_k$ is of at most linear growth \eqref{linearGrowthMF} for all $k\geq 0$, we get by (i) that
\begin{align*}
&\sup_{k\geq 0} \mathbb{E} \left[ \left\Vert \varphi_k\left(t,y, X_t^k \right) \right\Vert \right] \leq C \left(1+ \Vert y \Vert + \sup_{k\geq 0} \Eabs{X_t^k} \right) \leq C_1,
\end{align*}
where $C_1>0$ is some constant independent of $k\geq 0$. Hence, due to dominated convergence we can find $N_1\in \mathbb{N}$ sufficiently large such that
\begin{align*}
\sup_{k\geq N_1} \mathbb{E}\left[\left\Vert \varphi_k \left(t,y, X_t \right) - \varphi\left(t,y, X_t\right) \right\Vert \right]<\frac{\delta}{3}.
\end{align*}
Note further that for $\varepsilon>0$ sufficiently small with respect to \Cref{lem:RegBoundsSolution},
\begin{align*}
\sup_{k\geq 0} \mathbb{E}\left[\mathcal{E}\left(\int_0^T b_k\left(t,B_t^x, \int_{\mathbb{R}^d} \varphi_k(t, B_t^x, z) \mathbb{P}_{X_t^k}(dz)\right) dB_t \right)^{1+\varepsilon}\right]^{\frac{1}{1+\varepsilon}} \leq C_2 < \infty,
\end{align*}
where $C_2>0$ is some constant. Thus we can find by Girsanov's theorem and again by dominated convergence an integer $N_2\in \mathbb{N}$ such that
\begin{align*}
&\sup_{m,k\geq N_2} \mathbb{E}\left[ \left\Vert \varphi_k(t,y,X_t^k) - \varphi_m(t,y,X_t^k) \right\Vert \right] \\
&\quad \leq \sup_{m,k\geq N_2} C_2 \mathbb{E}\left[\left\Vert \varphi_k(t,y,B_t^x)-\varphi_m(t,y,B_t^x) \right\Vert^p \right]^{\frac{1}{p}}<\frac{\delta}{3},
\end{align*}
where $p:= \frac{1+\varepsilon}{\varepsilon}$.
Therefore, using Minkowski's and Hölder's inequality we get for $N:= \max\lbrace N_1, N_2 \rbrace$ and $k\geq N$
\begin{align*}
&\left\Vert \mathbb{E}\left[ \varphi_k(t,y,X_t^k) \right] - \mathbb{E}\left[ \varphi(t,y,X_t) \right] \right\Vert \\
&\quad \leq \mathbb{E}\left[ \left\Vert \varphi_k(t,y, X_t^k) - \varphi_N(t,y,X_t^k) \right\Vert \right] + D_k \\
&\qquad + \mathbb{E}\left[ \left\Vert\varphi_N(t,y,X_t) - \varphi(t,y, X_t) \right\Vert \right] \\
&\quad \leq D_k + \frac{2\delta}{3},
\end{align*}
where
\begin{align*}
D_k:= \left\Vert \mathbb{E}\left[\varphi_N(t,y,X_t^k) \right] - \mathbb{E}\left[ \varphi_N(t,y,X_t) \right] \right\Vert.
\end{align*}
Since $\varphi_N$ is smooth and has compact support by the definition of mollification, $\varphi_N$ is also bounded. Hence, using the fact that $X_t^k$ converges in distribution to $X_t$ for every $t\in[0,T]$, we can find $k\geq N$ sufficiently large such that $D_k < \frac{\delta}{3}$. Analogously one can show that $C_k$ converges to $0$ as $k$ tends to infinity and therefore, $X$ is a weak solution of the mean-field SDE \eqref{lebesgueMFSDE}. Due to the proof of \cite[Theorem 3.7]{Bauer_MultiDim} we get as a direct consequence the existence of a strong solution of mean-field equation \eqref{lebesgueMFSDE} for the more general class of functionals $\varphi$.\par
Let $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P}, X, B)$ and $(\hat{\Omega}, \hat{\mathcal{F}}, \hat{\mathbb{F}}, \hat{\mathbb{P}}, Y, W)$ be two weak solutions of mean-field SDE \eqref{lebesgueMFSDE} and assume that the drift coefficient $b$ is Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}. In the following we show that $X$ and $Y$ have the same law, i.e. $\mathbb{P}_X = \hat{\mathbb{P}}_Y$. For the sake of readability we just consider the case $x=0$. The general case follows analogously. From \cite{Bauer_MultiDim} we know that there exist measures $\mathbb{Q}$ and $\hat{\mathbb{Q}}$ such that under these measures the processes $X$ and $Y$ are Brownian motions, respectively. Similar to the proofs of \cite[Theorem 3.7]{Bauer_MultiDim} and \cite[Theorem 2.7]{Bauer_StrongSolutionsOfMFSDEs} we use the idea of Li and Min in the proof of Theorem 4.2 in \cite{LiMin_WeakSolutions} and define the equivalent probability measure $\widetilde{\mathbb{Q}}$ by
\begin{align*}
\frac{\text{d} \widetilde\mathbb{Q}}{\text{d} \mathbb{P}} := \mathcal{E} \left( - \int_0^T \left( \overline{b}\left(t,X_t,\mathbb{P}_{X_t} \right) - \overline{b}\left(t,X_t, \hat{\mathbb{P}}_{Y_t} \right) \right) dB_t^X \right).
\end{align*}
Due to \cite{Bauer_MultiDim} and \cite{Bauer_StrongSolutionsOfMFSDEs}, $$\hat{\mathbb{P}}_{(Y,W)} = \widetilde{\mathbb{Q}}_{(X,B)}.$$ Thus, it is left to show that $\sup_{t\in[0,T]} \mathcal{K} \left(\widetilde\mathbb{Q}_{X_t}, \mathbb{P}_{X_t} \right) = 0$, from which we conclude that $\sup_{t\in[0,T]} \mathcal{K} \left(\hat{\mathbb{P}}_{Y_t}, \mathbb{P}_{X_t} \right) = 0$ and hence $\frac{d \widetilde\mathbb{Q}}{d \mathbb{P}} = 1$. Consequently, $\hat{\mathbb{P}}_{(Y,W)} = \mathbb{P}_{(X,B)}.$ Here, $\mathcal{K}$ denotes the Kantorovich metric defined by
\begin{align*}
\mathcal{K}( \mu, \nu) = \sup_{h \in \text{Lip}_1(\mathbb{R}^d;\mathbb{R})} \left\vert \int_{\mathbb{R}^d} h(x) (\mu - \nu)(dx) \right\vert, ~\mu, \nu \in \mathcal{P}_1(\mathbb{R}^d).
\end{align*}
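For intuition, in dimension one the Kantorovich metric coincides with the $1$-Wasserstein distance and can therefore be estimated directly from samples. The following Python sketch is a numerical aside, not part of the proof; the two Gaussian laws are arbitrary test inputs:
\begin{verbatim}
import numpy as np
from scipy.stats import wasserstein_distance

# In d = 1 the Kantorovich metric K(mu, nu) equals the 1-Wasserstein
# distance, which scipy evaluates from samples via quantile coupling.
rng = np.random.default_rng(0)
mu_samples = rng.normal(0.0, 1.0, size=200_000)  # samples from mu
nu_samples = rng.normal(0.5, 1.0, size=200_000)  # samples from nu

# For two normals with equal variance, K(mu, nu) = |mean difference|.
print(wasserstein_distance(mu_samples, nu_samples))  # ~ 0.5
\end{verbatim}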
Using Hölder's inequality for $p:= \frac{1+\varepsilon}{\varepsilon}$, where $\varepsilon>0$ is sufficiently small with regard to \Cref{lem:RegBoundsSolution}, inequality \eqref{eq:RegExponentialInequality}, Burkholder-Davis-Gundy's inequality, and the Lipschitz continuity of $b$ we get
\begin{align*}
&\mathcal{K} \left(\widetilde\mathbb{Q}_{X_t}, \mathbb{P}_{X_t}\right) = \sup_{h \in \text{Lip}_1(\mathbb{R}^d;\mathbb{R})} \left\vert \mathbb{E}_{\widetilde\mathbb{Q}} \left[h(X_t)-h(0) \right] - \mathbb{E}\left[h(X_t)-h(0) \right] \right\vert \\
&\quad \leq \mathbb{E} \left[ \left\Vert X_t \right\Vert \left\vert \mathcal{E} \left( - \int_0^t \left( \overline{b}\left(s,X_s,\mathbb{P}_{X_s}\right) - \overline{b}\left(s,X_s, \hat{\mathbb{P}}_{Y_s}\right) \right) dB_s \right) - 1 \right\vert\right] \\
&\quad \lesssim \mathbb{E} \left[ \left\vert \mathcal{E} \left( - \int_0^t \left( \overline{b}\left(s,X_s,\mathbb{P}_{X_s}\right) - \overline{b}\left(s,X_s, \hat{\mathbb{P}}_{Y_s}\right) \right) dB_s \right) - 1 \right\vert^{\frac{2(1+\varepsilon)}{2+\varepsilon}} \right]^{\frac{2+\varepsilon}{2(1+\varepsilon)}} \\
&\quad \lesssim \mathbb{E} \left[ \left\vert \int_0^t \left( \overline{b}\left(s,X_s,\mathbb{P}_{X_s} \right) - \overline{b}\left(s,X_s, \hat{\mathbb{P}}_{Y_s} \right) \right) dB_s \right.\right. \\
&\qquad \left. \left. + \frac{1}{2} \int_0^t \left\Vert \overline{b}\left(s,X_s,\mathbb{P}_{X_s}\right) - \overline{b}\left(s,X_s, \hat{\mathbb{P}}_{Y_s}\right) \right\Vert^2 ds \right\vert^{2p} \right]^{\frac{1}{2p}} \\
&\quad \lesssim \mathbb{E} \left[ \left\vert \int_0^t \left\Vert \overline{b}\left(s,X_s,\mathbb{P}_{X_s}\right) - \overline{b}\left(s,X_s, \hat{\mathbb{P}}_{Y_s} \right) \right\Vert^2 ds \right\vert^{p} \right]^{\frac{1}{2p}} \\
&\qquad + \mathbb{E} \left[ \left\vert \int_0^t \left\Vert \overline{b}\left(s,X_s,\mathbb{P}_{X_s} \right) - \overline{b}\left(s,X_s, \hat{\mathbb{P}}_{Y_s} \right) \right\Vert^2 ds \right\vert^{2p} \right]^{\frac{1}{2p}} \\
&\quad \lesssim \max_{q=1,2} \mathbb{E} \left[ \left( \int_0^t \left\Vert \int_{\mathbb{R}^d} \varphi(s,X_s,z) \left(\mathbb{P}_{X_s} - \hat{\mathbb{P}}_{Y_s}\right)(dz) \right\Vert^2 ds \right)^{qp} \right]^{\frac{1}{2p}} \\
&\quad = \max_{q=1,2} \mathbb{E} \left[ \left( \int_0^t \left\Vert \int_{\mathbb{R}^d} \varphi(s,B_s,z) \left(\mathbb{P}_{X_s} - \hat{\mathbb{P}}_{Y_s}\right)(dz) \right\Vert^2 ds \right)^{qp} \right. \\
&\qquad \left. \times \mathcal{E}\left(-\int_0^t \overline{b}(u,B_u,\mathbb{P}_{X_u}) dB_u \right) \right]^{\frac{1}{2p}} \\
&\quad \lesssim \max_{q=1,2} \mathbb{E} \left[ \left( \int_0^t \left\Vert \int_{\mathbb{R}^d} \varphi(s,B_s,z) \left(\mathbb{P}_{X_s} - \hat{\mathbb{P}}_{Y_s}\right)(dz) \right\Vert^2 ds \right)^{qp^2} \right]^{\frac{1}{2p^2}}.
\end{align*}
Analogously to the steps before, using $\sup_{t\in[0,T]}\EW{\Vert \overline{b}(t,B_t,\mu_t)\Vert^2} <\infty$ for all $\mu \in \mathcal{C}([0,T];\mathcal{P}_1(\mathbb{R}^d))$, we get that
\begin{align*}
&\left\Vert \int_{\mathbb{R}^d} \varphi(s,B_s,z) \left(\mathbb{P}_{X_s} - \hat{\mathbb{P}}_{Y_s}\right)(dz) \right\Vert^2\\
&\quad = \left\Vert \mathbb{E} \left[ \varphi(s,y,X_s) \right] - \mathbb{E}_{\hat\mathbb{P}}\left[\varphi(s,y,Y_s) \right] \right\Vert_{y= B_s }^2 \\
&\quad = \mathbb{E}\left[ \left\Vert \varphi(s,y,B_s) \right\Vert \right. \\
&\qquad \times \left.\left\vert \mathcal{E}\left( - \int_0^s \overline{b}(u,B_u,\mathbb{P}_{X_u}) dB_u \right) - \mathcal{E}\left( - \int_0^s \overline{b}(u,B_u,\hat\mathbb{P}_{Y_u}) dB_u \right) \right\vert \right]_{y= B_s}^2 \\
&\quad \lesssim \left( 1+ \left\Vert B_s \right\Vert \right)^2 \mathbb{E}\left[\left\vert \int_0^s \left( \overline{b}(u,B_u,\mathbb{P}_{X_u}) - \overline{b}(u,B_u,\hat\mathbb{P}_{Y_u}) \right) dB_u \right.\right.\\
&\qquad \left.\left. +\frac{1}{2} \int_0^s \left( \left\Vert\overline{b}(u,B_u,\mathbb{P}_{X_u})\right\Vert^2 - \left\Vert\overline{b}(u,B_u,\hat\mathbb{P}_{Y_u})\right\Vert^2\right) du \right\vert^{2p} \right]^{\frac{1}{p}} \\
&\quad \lesssim \left( 1+ \left\Vert B_s \right\Vert \right)^2 \mathbb{E}\left[ \left( \int_0^s \left\Vert \overline{b}(u,B_u,\mathbb{P}_{X_u}) - \overline{b}(u,B_u,\hat\mathbb{P}_{Y_u}) \right\Vert^2 du\right)^p \right]^{\frac{1}{p}} \\
&\quad + \left( 1+ \left\Vert B_s \right\Vert \right)^2 \mathbb{E}\left[\left( \int_0^s \left\vert \left\Vert\overline{b}(u,B_u,\mathbb{P}_{X_u})\right\Vert^2 - \left\Vert\overline{b}(u,B_u,\hat\mathbb{P}_{Y_u})\right\Vert^2 \right\vert du \right)^{2p} \right]^{\frac{1}{p}} \\
&\quad \lesssim \left( 1+ \left\Vert B_s \right\Vert \right)^2 \mathbb{E}\left[ \left( \int_0^s \left\Vert \overline{b}(u,B_u,\mathbb{P}_{X_u}) - \overline{b}(u,B_u,\hat\mathbb{P}_{Y_u}) \right\Vert^2 du\right)^p \right]^{\frac{1}{p}} \\
&\quad \lesssim \left( 1+ \left\Vert B_s \right\Vert \right)^2 \mathbb{E} \left[ \left( \int_0^s \left\Vert \int_{\mathbb{R}^d} \varphi(u,B_u,z) \left(\mathbb{P}_{X_u} - \hat{\mathbb{P}}_{Y_u}\right)(dz) \right\Vert^{2} du \right)^p \right]^{\frac{1}{p}}.
\end{align*}
Applying the $L^{p^2}(\Omega)$ norm on both sides yields
\begin{align*}
&\mathbb{E} \left[ \left\Vert \int_{\mathbb{R}^d} \varphi(s,B_s,z) \left(\mathbb{P}_{X_s} - \hat{\mathbb{P}}_{Y_s}\right)(dz) \right\Vert^{2p^2} \right]^{\frac{1}{p^2}}\\
&\lesssim \mathbb{E} \left[ \left( 1+ \left\Vert B_s \right\Vert \right)^{2p^2} \right] \mathbb{E} \left[ \left( \int_0^s \left\Vert \int_{\mathbb{R}^d} \varphi(u,B_u,z) \left(\mathbb{P}_{X_u} - \hat{\mathbb{P}}_{Y_u}\right)(dz) \right\Vert^{2} du \right)^{p^2} \right]^{\frac{1}{p^2}} \\
& \lesssim \int_0^s \mathbb{E} \left[ \left\Vert \int_{\mathbb{R}^d} \varphi(u,B_u,z) \left(\mathbb{P}_{X_u} - \hat{\mathbb{P}}_{Y_u}\right)(dz) \right\Vert^{2p^2} \right]^{\frac{1}{p^2}} du.
\end{align*}
Using a Grönwall argument yields that
\begin{align*}
\mathbb{E} \left[ \left\Vert \int_{\mathbb{R}^d} \varphi(s,B_s,z) \left(\mathbb{P}_{X_s} - \hat{\mathbb{P}}_{Y_s}\right)(dz) \right\Vert^{2p^2} \right]^{\frac{1}{p^2}} = 0.
\end{align*}
In particular,
\begin{align*}
\left\Vert \int_{\mathbb{R}^d} \varphi(s,B_s,z) \left(\mathbb{P}_{X_s} - \hat{\mathbb{P}}_{Y_s}\right)(dz) \right\Vert = 0, \quad \mathbb{P}\text{-a.s.}
\end{align*}
and consequently, $\mathcal{K} \left(\widetilde\mathbb{Q}_{X_t}, \mathbb{P}_{X_t}\right) =0$.
\end{proof}
Due to \cite[Theorem 4.1]{Bauer_MultiDim} we immediately get Malliavin differentiability of the strong solution of mean-field equation \eqref{lebesgueMFSDE} for a more general class of functionals $\varphi$.
\begin{theorem}\label{thm:Malliavin}
Let $b:[0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ be bounded and continuous in the third variable \eqref{continuityThirdMF}. Further, let $\varphi:[0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ be of at most linear growth \eqref{linearGrowthMF}. Then, the strong solution of mean-field SDE \eqref{lebesgueMFSDE} is Malliavin differentiable.
\end{theorem}
\begin{remark}
In the one-dimensional case, $d=1$, the class of drift coefficients $b$ and functionals $\varphi$ can be further extended in order to obtain Malliavin differentiability of the strong solution. Consider the decomposition
\begin{align}\label{eq:RegFormDrift}
(b\diamond \varphi)\left(t,y,\mu\right) := \hat{b}\left(t,y,\int_\mathbb{R} \hat{\varphi}(t,y,z) \mu(dz) \right) + \tilde{b}\left(t,y,\int_\mathbb{R} \tilde{\varphi}(t,y,z) \mu(dz) \right),
\end{align}
where the drift $\hat{b}$ is merely measurable and bounded and the functional $\hat{\varphi}$ is merely measurable and of linear growth whereas $\tilde{b}$ and $\tilde{\varphi}$ are of linear growth \eqref{linearGrowthMF} and Lipschitz continuous in the second variable \eqref{lipschitzSecondMF}. If $b$ is continuous in the third variable \eqref{continuityThirdMF}, the strong solution of mean-field SDE \eqref{lebesgueMFSDE} is Malliavin differentiable due to \cite[Theorem 2.12]{Bauer_StrongSolutionsOfMFSDEs}.
\end{remark}
\begin{example}
Let $b:[0,T] \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be a measurable and bounded function which is continuous in the third variable \eqref{continuityThirdMF}. Furthermore, define the functional $\varphi(t,y,z) = \mathbbm{1}_{\lbrace z\leq u \rbrace}$, where $u \in \mathbb{R}$ is some parameter. Then, the mean-field stochastic differential equation
\begin{align*}
dX_t^x = b\left(t,X_t^x, F_{X_t^x}(u)\right) dt + dB_t, ~ t\in[0,T], ~ X_0^x = x \in \mathbb{R},
\end{align*}
where $F_{X_t^x}$ denotes the cumulative distribution function of $X_t^x$, has a Malliavin differentiable strong solution due to \Cref{thm:Malliavin}. If $b$ is Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}, the solution is unique. Note that it is also possible to choose $u=t$ or $u = y$, where the latter yields the mean-field SDE
\begin{align*}
dX_t^x = b\left(t,X_t^x, F_{X_t^x}(X_t^x)\right) dt + dB_t,~ t\in[0,T], ~ X_0^x = x \in \mathbb{R}.
\end{align*}
\end{example}
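Although the preceding example is purely theoretical, such equations are straightforward to simulate by an interacting-particle approximation in which the law $\mathbb{P}_{X_t^x}$ is replaced by the empirical measure of $N$ particles. The following Python sketch is illustrative only; the function names are our own and the concrete drift is an arbitrary bounded choice that is continuous in the measure argument:
\begin{verbatim}
import numpy as np

def simulate_cdf_mfsde(b, u, x0, T=1.0, n_steps=200,
                       n_particles=5000, seed=0):
    """Euler-Maruyama particle scheme for
    dX_t = b(t, X_t, F_{X_t}(u)) dt + dB_t, X_0 = x0,
    with F_{X_t} replaced by the empirical CDF of the particles."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_particles, float(x0))
    for k in range(n_steps):
        F_u = np.mean(x <= u)      # empirical CDF at the threshold u
        x += (b(k * dt, x, F_u) * dt
              + np.sqrt(dt) * rng.standard_normal(n_particles))
    return x                       # samples approximating the law of X_T

# bounded drift, continuous in the measure argument (illustrative)
b = lambda t, y, F: np.tanh(y) * (1.0 - F)
print(simulate_cdf_mfsde(b, u=0.0, x0=0.0).mean())
\end{verbatim}
For the choice $u=y$ one would instead evaluate the empirical CDF at each particle's own position, i.e. replace \texttt{F\_u} by the vector of particle ranks divided by $N$.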
\bigskip
Using It\^o's formula we are able to extend our results on mean-field SDE \eqref{lebesgueMFSDE} to more general diffusion coefficients. For notational simplicity we just consider the time-homogeneous and one-dimensional case. However, the time-inhomogeneous and multi-dimensional cases can be shown analogously.
\begin{theorem}
Consider the time-homogeneous mean-field SDE
\begin{align}\label{generalizedMFSDE}
dX_t^x = b\left(X_t^x,\int_{\mathbb{R}} \varphi(X_t^x,z) \mathbb{P}_{X_t^x}(dz)\right) dt + \sigma(X_t^x) dB_t,~ t\in[0,T], ~ X_0^x = x \in \mathbb{R},
\end{align}
with measurable drift $b: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$, functional $\varphi:\mathbb{R} \times \mathbb{R} \to \mathbb{R}$, and volatility $\sigma: \mathbb{R} \to \mathbb{R}$. Moreover, let $\Lambda: \mathbb{R} \to \mathbb{R}$ be a twice continuously differentiable bijection with derivatives $\Lambda'$ and $\Lambda''$, such that for all $y\in\mathbb{R}$,
\begin{align*}
\Lambda'(y) \sigma(y) = 1,
\end{align*}
and such that $\Lambda^{-1}$ is Lipschitz continuous. Suppose that $(b^* \diamond \varphi^*): \mathbb{R} \times \mathcal{P}_1(\mathbb{R}) \to \mathbb{R}$, defined by
\begin{small}
\begin{align*}
&(b^*\diamond \varphi^*)(y,\mu) := \\
&\quad \Lambda'\left(\Lambda^{-1}(y)\right)b\left(\Lambda^{-1}(y),\int_{\mathbb{R}} \varphi(\Lambda^{-1}(y),\Lambda^{-1}(z)) \mu(dz)\right) + \frac{1}{2} \Lambda''\left(\Lambda^{-1}(y)\right) \sigma\left(\Lambda^{-1}(y)\right)^2,
\end{align*}
\end{small}
fulfills the assumptions of \Cref{extensionSolution} and \Cref{thm:Malliavin}, respectively. Then, there exists a (Malliavin differentiable) strong solution $(X_t^x)_{t\in[0,T]}$ of \eqref{generalizedMFSDE}. If moreover $b^*$ is Lipschitz continuous in the second variable \eqref{lipschitzThirdMF}, the solution is unique.
\end{theorem}
\begin{proof}
Since $(b^* \diamond \varphi^*)$ satisfies the conditions of \Cref{extensionSolution} and \Cref{thm:Malliavin}, respectively, mean-field SDE
\begin{align*}
dZ_t^x = b^*\left(Z_t^x,\int_\mathbb{R} \varphi^*\left(Z_t^x, z\right) \mathbb{P}_{Z_t^x}(dz)\right)dt + dB_t,~ t\in[0,T],~ Z_0^x = \Lambda(x),
\end{align*}
has a (Malliavin differentiable) (unique) strong solution. Thus $X_t^x := \Lambda^{-1}(Z_t^x)$ is a (unique) strong solution of \eqref{generalizedMFSDE} by the application of It\^{o}'s formula, and since $\Lambda^{-1}$ is Lipschitz continuous, $X^x$ is Malliavin differentiable.
\end{proof}
We conclude this section by applying our existence result on solutions of mean-field SDEs to construct solutions of ODEs. More precisely, consider the mean-field SDE
\begin{align}\label{MfOde}
dX_t^x = b(t,\mathbb{E}[X_t^x])dt + dB_t,~ t\in[0,T],~ X_0^x = x\in \mathbb{R}^d,
\end{align}
i.e.~the drift coefficient only depends on the solution via the expectation $\mathbb{E}[X_t^x]$. By \Cref{extensionSolution}, mean-field SDE \eqref{MfOde} has a strong solution if $b:[0,T] \times \mathbb{R}^d \to \mathbb{R}^d$ is of at most linear growth and continuous in the second variable. Now, by taking expectations on both sides, we lose the randomness and get that $u(t) := \mathbb{E}[X_t^x]$ solves the ODE
\begin{align}\label{eq:ODE}
d\,u(t) = b(t,u(t)) dt,~ t\in[0,T],~ u(0) = x\in \mathbb{R}^d.
\end{align}
We thus have developed a probabilistic approach to the following version of the theorem on existence of solutions of ODEs by Carathéodory, see e.g.~\cite[Theorem 1.1]{Persson_AGeneralizationOfCaratheodorysExistenceTheoremForODEs} or for a direct proof \cite[Chapter II, Theorem 3.2]{Reid_ODE}:
\begin{theorem}
Let $b:[0,T] \times \mathbb{R}^d \to \mathbb{R}^d$ be of at most linear growth and continuous in the second variable, i.e. $b$ fulfills the corresponding assumptions \eqref{linearGrowthMF} and \eqref{continuityThirdMF}. Furthermore, let $\left(X_t^x\right)_{t\in[0,T]}$ be a strong solution of \eqref{MfOde}. Then $u(t):=\mathbb{E}[X_{t}^x]$ is a solution of ODE \eqref{eq:ODE}.
\end{theorem}
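As a numerical illustration of this probabilistic construction, $u(t)=\mathbb{E}[X_t^x]$ can be approximated by a Monte Carlo average over Euler-Maruyama paths of \eqref{MfOde}. This is only a sketch under the theorem's assumptions; the function name and the concrete drift, a continuous but non-Lipschitz test function, are our own:
\begin{verbatim}
import numpy as np

def ode_via_mean_field(b, x0, T=1.0, n_steps=500,
                       n_paths=20000, seed=0):
    """Approximate the Caratheodory solution of u'(t) = b(t, u(t)),
    u(0) = x0, as u(t) = E[X_t] for dX_t = b(t, E[X_t]) dt + dB_t."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    u = [x.mean()]
    for k in range(n_steps):
        drift = b(k * dt, x.mean())   # drift sees the mean only
        x += drift * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
        u.append(x.mean())
    return np.array(u)

# continuous drift that is not Lipschitz at the origin (illustrative)
u = ode_via_mean_field(lambda t, y: np.sqrt(abs(y)) + 1.0, x0=0.0)
print(u[-1])
\end{verbatim}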
\section{Regularity in the initial value}\label{sec:Regularity}
The aim of this section is to study the regularity of a strong solution of mean-field SDE \eqref{lebesgueMFSDE} as a function in its initial condition. More precisely, we investigate under which assumptions on $b$ and $\varphi$ the strong solution $X_t^x$ of \eqref{lebesgueMFSDE} is not just Sobolev differentiable but continuously differentiable as a function in $x$. These results will then be used to develop the Bismut-Elworthy-Li formula \eqref{eq:RegDelta}.
\subsection{Strong Differentiability}
First recall that due to \Cref{cor:SobolevDifferentiableRecall} the unique strong solution $X^x$ of mean-field SDE \eqref{lebesgueMFSDE} is Sobolev differentiable under the assumption that $b$ is measurable, bounded, and Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}, and $\varphi$ is measurable, of at most linear growth \eqref{linearGrowthMF}, and Lipschitz continuous in the third variable \eqref{lipschitzThirdMF}. Our aim is to find sufficient assumptions on $b$ and $\varphi$ such that the unique strong solution $X^x$ of \eqref{lebesgueMFSDE} is continuously differentiable in the initial condition.
\begin{proposition}\label{thm:strongDerivative}
Suppose $b, \varphi \in\mathfrak{C}([0,T]\times \mathbb{R}^d \times \mathbb{R}^d)$. Let $(X_t^{x})_{t\in[0,T]}$ be the unique strong solution of mean-field SDE \eqref{lebesgueMFSDE}. Then for every compact subset $K\subset \mathbb{R}^d$ there exists some constant $C>0$ such that for every $t \in [0,T]$ and $x,y\in K$
\begin{align*}
\Eabs{\partial_x X_t^x - \partial_y X_t^y} \leq C \Vert x-y \Vert.
\end{align*}
In particular, the map $x \mapsto X_t^x$ is continuously differentiable for every $t\in[0,T]$ and for every $1\leq p < \infty$
\begin{align}\label{eq:boundDerivative}
\sup_{t\in [0,T]} \sup_{x\in K} \Ep{\partial_x X_t^x}{p} < \infty.
\end{align}
\end{proposition}
\begin{proof}
Since $X^x$ is Sobolev differentiable by \Cref{cor:SobolevDifferentiableRecall} and
\begin{align*}
\sup_{t\in [0,T]} \esssup_{x\in K} \Ep{\partial_x X_t^x}{p} < \infty,
\end{align*}
by \cite[Lemma 4.13]{Bauer_MultiDim}, it suffices to show that $\partial_x X^x$ is almost surely continuous in $x\in K$. Note that we can choose an element of the equivalence class of weak derivatives $\partial_x X^x$ such that \eqref{eq:boundDerivative} holds. For the remainder of this proof we just consider this particular element and denote it without loss of generality by $\partial_x X^x$. Let $x,y \in K$ and $t\in[0,T]$ be arbitrary. Note that the first variation process $\partial_x X^x$ has the representation
\begin{align*}
\partial_x X_t^x = 1+ \int_0^t \partial_2 b(s,X_s^x, \rho(X_s^x)) \partial_x X_s^x + \partial_3 b(s,X_s^x, \rho(X_s^x)) \partial_x \rho(X_s^x) ds,
\end{align*}
where $\rho(X_t^x) := \int_{\mathbb{R}^d} \varphi(t,X_t^x,z) \mathbb{P}_{X_t^x}(dz).$ Thus, using Minkowski's and Hölder's inequalities we get
\begin{align*}
&\Eabs{\partial_x X_t^x - \partial_y X_t^y} \\
&\quad \leq \int_0^t \Eabs{\partial_2 b(s,X_s^x, \rho(X_s^x)) \partial_x X_s^x - \partial_2 b(s,X_s^y, \rho(X_s^y)) \partial_y X_s^y} \\
&\qquad + \Eabs{\partial_3 b(s,X_s^x, \rho(X_s^x)) \partial_x \rho(X_s^x) - \partial_3 b(s,X_s^y, \rho(X_s^y)) \partial_y \rho(X_s^y)} ds \\
&\quad \lesssim \int_0^t \Ep{\partial_2 b(s,X_s^x, \rho(X_s^x)) - \partial_2 b(s,X_s^y, \rho(X_s^y))}{2} \Ep{\partial_x X_s^x}{2}\\
&\qquad + \Ep{\partial_3 b(s,X_s^x, \rho(X_s^x)) - \partial_3 b(s,X_s^y, \rho(X_s^y))}{2} \Ep{\partial_x \rho(X_s^x)}{2}\\
&\qquad + \Eabs{\partial_x X_s^x - \partial_y X_s^y} + \Eabs{\partial_x \rho(X_s^x) - \partial_y \rho(X_s^y)} ds\\
&\quad \lesssim \int_0^t \left( \Ep{X_s^x - X_s^y}{2} + \Ep{\rho(X_s^x) - \rho(X_s^y)}{2} \right) \\
&\qquad \times \left( \Ep{\partial_x X_s^x}{2} + \Ep{\partial_x \rho(X_s^x)}{2} \right) \\
&\qquad + \Eabs{\partial_x X_s^x - \partial_y X_s^y} + \Eabs{\partial_x \rho(X_s^x) - \partial_y \rho(X_s^y)} ds.
\end{align*}
Using the assumptions on $\varphi$ we get
\begin{align}\label{eq:rhoContinuity}
\Ep{\rho(X_t^x) - \rho(X_t^y)}{2} &= \Ep{ \EW{\varphi(t,z_1,X_t^x) - \varphi(t,z_2,X_t^y)}_{z_1=X_t^x; z_2= X_t^y}}{2} \notag\\
&\lesssim \Ep{ \Eabs{z_1-z_2 \right\Vert + \left\Vert X_t^x - X_t^y}_{z_1=X_t^x; z_2= X_t^y}}{2} \notag\\
&\lesssim \Ep{X_t^x - X_t^y}{2}.
\end{align}
Furthermore, using the chain rule we have that
\begin{align*}
&\Ep{\partial_x \rho(X_t^x)}{2} \\
&\quad = \Ep{\EW{\partial_2 \varphi(t,z,X_t^x)}_{z=X_t^x} \partial_x X_t^x + \EW{\partial_3 \varphi(t,z,X_t^x) \partial_x X_t^x}_{z=X_t^x}}{2} \\
&\quad \lesssim \Ep{\partial_x X_t^x}{2} + \Eabs{\partial_x X_t^x} \lesssim \Ep{\partial_x X_t^x}{2}.
\end{align*}
Analogously, we obtain that
\begin{align*}
&\Eabs{\partial_x \rho(X_t^x) - \partial_y \rho(X_t^y)} \\
&\quad \leq \Eabs{\EW{\partial_2 \varphi(t,z,X_t^x)}_{z=X_t^x} \partial_x X_t^x - \EW{\partial_2 \varphi(t,z,X_t^y)}_{z=X_t^y} \partial_y X_t^y}\\
&\qquad + \Eabs{\EW{\partial_3 \varphi(t,z,X_t^x) \partial_x X_t^x}_{z=X_t^x} - \EW{\partial_3 \varphi(t,z,X_t^y) \partial_y X_t^y}_{z=X_t^y}} \\
&\quad \leq \Eabs{\EW{\partial_2 \varphi(t,z_1,X_t^x) - \partial_2 \varphi(t,z_2,X_t^y)}_{z_1= X_t^x; z_2 =X_t^y} \right\Vert \left\Vert \partial_x X_t^x}\\
&\qquad + \Eabs{\partial_x X_t^x - \partial_y X_t^y\right\Vert \left\Vert \EW{\partial_2 \varphi(t,z,X_t^y)}_{z=X_t^y} }\\
&\qquad + \Eabs{\Eabs{\partial_3 \varphi(t,z_1,X_t^x) - \partial_3 \varphi(t,z_2,X_t^y)\right\Vert \left\Vert \partial_x X_t^x}_{z_1=X_t^x; z_2=X_t^y}} \\
&\qquad + \Eabs{\Eabs{ \partial_x X_t^x - \partial_y X_t^y \right\Vert \left\Vert \partial_3 \varphi(t,z,X_t^y)}_{z=X_t^y}}\\
&\lesssim \Ep{\EW{\partial_2 \varphi(t,z_1,X_t^x) - \partial_2 \varphi(t,z_2,X_t^y)}_{z_1= X_t^x; z_2 =X_t^y}}{2} \Ep{\partial_x X_t^x}{2}\\
&\qquad + \Eabs{\partial_x X_t^x - \partial_y X_t^y} \\
&\qquad + \Eabs{\mathbb{E}\left[\left\Vert \partial_3 \varphi(t,z_1,X_t^x) - \partial_3 \varphi(t,z_2,X_t^y)\right\Vert^2 \right]^{\frac{1}{2}}_{z_1=X_t^x; z_2=X_t^y}} \Ep{\partial_x X_t^x}{2}\\
&\qquad + \Eabs{ \partial_x X_t^x - \partial_y X_t^y}\\
&\quad \lesssim \Ep{\partial_x X_t^x}{2}\Ep{X_t^x - X_t^y}{2} + \Eabs{\partial_x X_t^x - \partial_y X_t^y}.
\end{align*}
Thus, in combination with \eqref{eq:boundDerivative} we get
\begin{align*}
&\Eabs{\partial_x X_t^x - \partial_y X_t^y} \lesssim \int_0^t \Ep{X_s^x - X_s^y}{2} + \Eabs{\partial_x X_s^x - \partial_y X_s^y} ds.
\end{align*}
Using equation \eqref{eq:RegHoelderContinuity}, we get
\begin{align*}
&\Eabs{\partial_x X_t^x - \partial_y X_t^y} \lesssim \Vert x - y \Vert + \int_0^t \Eabs{\partial_x X_s^x - \partial_y X_s^y} ds.
\end{align*}
Finally, since $\Eabs{\partial_x X_s^x - \partial_y X_s^y}$ is integrable over $[0,T]$ and Borel measurable, we can apply Jones' generalization of Grönwall's inequality \cite[Lemma 5]{Jones_FundamentalInequalities} to get
\begin{align*}
\Eabs{\partial_x X_t^x - \partial_y X_t^y} \lesssim \Vert x-y \Vert.
\end{align*}
Thus, $\partial_x X^x$ has an almost surely continuous version in $x\in K$ by Kolmogorov's continuity theorem, and consequently the map $x \mapsto X_t^x$ is continuously differentiable for every $t\in[0,T]$.
\end{proof}
\subsection{Bismut-Elworthy-Li formula}
In this subsection we turn our attention to the Bismut-Elworthy-Li formula \eqref{eq:RegDelta}. With the help of the approximating sequence defined in \eqref{eq:RegApproximatingMFSDEExp} we show in the one-dimensional case, i.e. $d=1$, that $\partial_x \mathbb{E}[\Phi(X_T^x)]$ exists in the strong sense for functionals $\Phi$ merely satisfying some integrability condition, i.e. we show that $\mathbb{E}[\Phi(X_T^x)]$ is continuously differentiable.
\begin{lemma}\label{lem:RegRepDelta2}
Consider $d=1$. Let $(b \diamond \varphi)$ admit a decomposition \eqref{eq:RegFormDrift} and let $b, \varphi \in \mathcal{L}([0,T] \times \mathbb{R} \times \mathbb{R})$. Further, let $(X_t^x)_{t\in [0,T]}$ be the unique strong solution of mean-field SDE \eqref{lebesgueMFSDE} and $\Phi \in \mathcal{C}_b^{1,1}(\mathbb{R})$. Then $\mathbb{E}[\Phi(X_t^x)] \in \mathcal{C}^1(\mathbb{R})$ and
\begin{align}\label{representationDerivative}
\partial_x \mathbb{E} \left[ \Phi(X_t^x) \right] = \mathbb{E} \left[\Phi'(X_t^x) \partial_x X_t^x \right],
\end{align}
where $\Phi'$ denotes the first derivative of $\Phi$ and $\partial_x X_t^x$ is the first variation process of $X_t^x$ as given in \eqref{eq:RegRepresentationFV}.
\end{lemma}
In order to prove \Cref{lem:RegRepDelta2}, we need to define a sequence of mean-field equations similar to \cite{Bauer_StrongSolutionsOfMFSDEs} whose unique strong solutions approximate the unique strong solution of \eqref{lebesgueMFSDE}, where $(b \diamond \varphi)$ fulfills the assumptions of \Cref{lem:RegRepDelta2}. More precisely, by standard approximation arguments there exist sequences
\begin{align}\label{eq:RegApproximatingDrift}
b_n:= \tilde{b}_n + \hat{b}_n, \quad \text{ and } \quad \varphi_n := \tilde{\varphi}_n + \hat{\varphi}_n, \quad n\geq 1,
\end{align}
where $b_n, \varphi_n \in \mathcal{C}_0^{\infty}( [0,T] \times \mathbb{R} \times \mathbb{R})$ with \[\sup_{n\geq 1} \left( \Vert \tilde{b}_n\Vert_{\infty} + \Vert \tilde{\varphi}_n\Vert_{\infty} \right) \leq C < \infty\] and \[\sup_{n\geq 1}\left( \vert \hat{b}_n(t,y,z) \vert + \vert \hat{\varphi}_n(t,y,z) \vert \right) \leq C(1+\vert y \vert + \vert z \vert)\] for every $t\in[0,T]$ and $y,z \in \mathbb{R}$, such that $b_n \to b$ and $\varphi_n \to \varphi$ for a.e. $(t,y,z) \in [0,T] \times \mathbb{R} \times \mathbb{R}$ with respect to the Lebesgue measure, respectively. The original drift coefficients $b$ and $\varphi$ are denoted by $b_0$ and $\varphi_0$, respectively. Furthermore, we can assume that there exists a constant $C>0$ independent of $n\in \mathbb{N}$ such that \[b_n, \varphi_n \in \mathcal{L}([0,T]\times \mathbb{R} \times \mathbb{R}),\] and that $\hat{b}_n$ and $\hat{\varphi}_n$ are Lipschitz continuous in the second variable \eqref{lipschitzSecondMF} for all $n\geq 0$. Under these conditions the corresponding mean-field SDEs, defined by
\begin{align}\label{eq:RegApproximatingMFSDEExp}
dX_t^{n,x} &= b_n\left(t,X_t^{n,x}, \int_{\mathbb{R}} \varphi_n(t,X_t^{n,x}, z) \mathbb{P}_{X_t^{n,x}}(dz) \right) dt + dB_t, ~t\in[0,T],\\
X_0^{n,x} &= x \in \mathbb{R}, \notag
\end{align}
have unique strong solutions which are Malliavin differentiable by \Cref{thm:Malliavin}. Likewise the strong solutions $\lbrace X^{n,x}\rbrace_{n\geq 0}$ are continuously differentiable with respect to the initial condition by \Cref{thm:strongDerivative}. Due to \Cref{cor:L2Convergence} we have that $(X_t^{n,x})_{t\in[0,T]}$ converges to $(X_t^x)_{t\in[0,T]}$ in $L^2(\Omega)$ as $n\to \infty$ and similar to \cite[Lemma 3.10]{Bauer_StrongSolutionsOfMFSDEs} one can show for any compact subset $K\subset \mathbb{R}$ and $p\geq 1$ that
\begin{align}\label{eq:boundDerivativeUniform}
\sup_{n\geq 0} \sup_{t\in[0,T]} \sup_{x\in K} \Ep{\partial_x X_t^{n,x}}{p} < \infty.
\end{align}
\begin{proof}[Proof of \Cref{lem:RegRepDelta2}]
Note first that $\mathbb{E} \left[ \Phi(X_t^x) \right]$ is weakly differentiable by \Cref{cor:BismutFormulaRecall} and equation \eqref{representationDerivative} holds by \cite[Lemma 4.1]{Bauer_StrongSolutionsOfMFSDEs}. Hence it suffices to show that $\partial_x \mathbb{E}[\Phi(X_t^x)]$ is continuous. In order to prove this we show that
\begin{align*}
\mathbb{E}[\Phi(X_t^{n,x})] &\xrightarrow[n\to\infty]{} \mathbb{E}[\Phi(X_t^x)] \quad \forall x\in \mathbb{R}, \text{ and}\\
\mathbb{E} \left[ \Phi'(X_t^{n,x}) \partial_x X_t^{n,x} \right] &\xrightarrow[n\to\infty]{} \mathbb{E} \left[ \Phi'(X_t^x) \partial_x X_t^x \right] \quad \text{uniformly for } x\in K,
\end{align*}
where $\lbrace(X_t^{n,x})_{t\in[0,T]} \rbrace_{n\geq 1}$ is the approximating sequence defined in \eqref{eq:RegApproximatingMFSDEExp} and $K \subset \mathbb{R}$ is a compact subset. Note that $$\partial_x \mathbb{E}[\Phi(X_t^{n,x})]= \mathbb{E} \left[ \Phi'(X_t^{n,x}) \partial_x X_t^{n,x} \right]$$ is continuous in $x$ due to \Cref{thm:strongDerivative}. The first convergence follows directly by \Cref{rem:convergenceRho}. For the uniform convergence let $K\subset \mathbb{R}$ be a compact set and define for $n\geq 0$
\begin{align*}
D_n(s,t,x) &:= \exp \left\lbrace -\int_s^t \int_{\mathbb{R}} b_n(u,y,\varrho_u^{n,x}(y)) L^{B^x}(du,dy) \right\rbrace, \text{ and}\\
E_n(x) &:= \mathcal{E}\left( \int_0^T b_n(s,B_s^x,\varrho_s^{n,x}(B_s^x)) dB_s \right),
\end{align*}
where $\varrho_u^{n,x}(y) := \int_\mathbb{R} \varphi_n(u,y,z) \mathbb{P}_{X_u^{n,x}}(dz)$. In a first approximation we get using $\Vert \Phi' \Vert_{\infty} < \infty$ and representation \eqref{eq:RegRelationDerivatives} that
\begin{align*}
\begin{split}
&\left| \mathbb{E} \left[ \Phi'(X_t^{n,x}) \partial_x X_t^{n,x} - \Phi'(X_t^x) \partial_x X_t^x \right] \right| \\
&\quad \lesssim \mathbb{E} \left[ \left| E_n(x) \left( D_n(0,t,x) + \int_0^t D_n(s,t,x) \partial_x b_n(s,y,\varrho_s^{n,x}(y))\vert_{y=B_s^x} ds \right) \right. \right.\\
&\qquad -\left.\left. E_0(x) \left( D_0(0,t,x) + \int_0^t D_0(s,t,x) \partial_x b(s,y,\varrho_s^{x}(y))\vert_{y=B_s^x} ds \right) \right| \right] \\
&\quad =: A_n(t,x).
\end{split}
\end{align*}
Analogously, using $\Vert \partial_3 \varphi \Vert_\infty < \infty$, we get that for every $t\in [0,T]$ and $y \in \mathbb{R}$
\begin{align*}
\vert \partial_x \varrho_t^{n,x}(y) - \partial_x \varrho_t^x(y) \vert = \left\vert \mathbb{E} \left[ \partial_3 \varphi(t,y,X_t^{n,x}) \partial_x X_t^{n,x} - \partial_3 \varphi(t,y,X_t^x) \partial_x X_t^x \right] \right\vert \lesssim A_n(t,x).
\end{align*}
Note furthermore that by \eqref{eq:boundDerivativeUniform} we have for every $y\in \mathbb{R}$ that
\begin{align}\label{eq:boundPartialXb}
\vert \partial_x b_n(s,y,\varrho_s^{n,x}(y)) \vert &= \vert \partial_3 b_n(s,y,\varrho_s^{n,x}(y)) \partial_x \varrho_s^{n,x}(y) \vert \lesssim \left\vert \mathbb{E} \left[ \partial_3 \varphi(s,y,X_s^{n,x}) \partial_x X_s^{n,x}\right] \right\vert \notag \\
&\leq \mathbb{E}[\vert \partial_x X_s^{n,x} \vert] < \infty,
\end{align}
and for every $p\geq 1$
\begin{align}\label{eq:lipPartialXb}
&\EpE{\partial_x b_n(t,y,\varrho_t^{n,x}(y))\vert_{y=B_t^x} - \partial_x b(t,y,\varrho_t^{x}(y))\vert_{y=B_t^x}}{p} \notag \\
&\quad \lesssim \EpE{\partial_3 b_n(t,B_t^x,\varrho_t^{n,x}(B_t^x))- \partial_3 b(t,B_t^x,\varrho_t^{x}(B_t^x))}{p} \notag\\
&\qquad + \EpE{\partial_x \varrho_t^{n,x}(B_t^x) - \partial_x \varrho_t^x(B_t^x)}{p} \\
&\quad \lesssim \EpE{\partial_3 b_n(t,B_t^x,\varrho_t^{n,x}(B_t^x))- \partial_3 b(t,B_t^x,\varrho_t^{x}(B_t^x))}{p} + A_n(t,x). \notag
\end{align}
Using Hölder's inequality, \eqref{eq:boundPartialXb}, \Cref{lem:RegBoundsSolution}, and \Cref{cor:RegBoundLocalTime} we can decompose $A_n(t,x)$ into
\begin{small}
\begin{align*}
&A_n(t,x) \\
&\lesssim \mathbb{E} \left[ \left| E_n(x) - E_0(x) \right| \left| D_n(0,t,x) + \int_0^t D_n(s,t,x) \partial_x b_n(s,y,\varrho_s^{n,x}(y))\vert_{y=B_s^x} ds \right| \right] \\
&\quad + \mathbb{E} \left[ E_0(x) \left| D_n(0,t,x) - D_0(0,t,x) \right| \right] \\
&\quad + \mathbb{E} \left[ E_0(x) \left| \int_0^t D_n(s,t,x) \partial_x b_n(s,y,\varrho_s^{n,x}(y))\vert_{y=B_s^x} \right.\right.\\
&\qquad \left.\left. - D_0(s,t,x) \partial_x b(s,y,\varrho_s^{x}(y))\vert_{y=B_s^x} ds \right| \right] \\
&\lesssim \EpE{E_n(x) - E_0(x)}{q} + \EpE{D_n(0,t,x) - D_0(0,t,x)}{p} \\
&\quad + \EpE{\int_0^t D_n(s,t,x) \partial_x b_n(s,y,\varrho_s^{n,x}(y))\vert_{y=B_s^x} - D_0(s,t,x) \partial_x b(s,y,\varrho_s^{x}(y))\vert_{y=B_s^x} ds}{p}\\
&=: F_n(x) + G_n(0,t,x) + H_n(t,x),
\end{align*}
\end{small}
where $q:= \frac{2(1+\varepsilon)}{2+\varepsilon}$ and $p:= \frac{1+\varepsilon}{\varepsilon}$. Furthermore, we can bound $H_n(t,x)$ due to \Cref{cor:RegBoundLocalTime}, \eqref{eq:boundPartialXb}, and \eqref{eq:lipPartialXb} by
\begin{small}
\begin{align*}
H_n(t,x) &\leq \int_0^t \EpE{D_n(s,t,x) - D_0(s,t,x) \right|^p \left| \partial_x b_n(s,y,\varrho_s^{n,x}(y))\vert_{y=B_s^x}}{p} ds \\
&\qquad + \int_0^t \EpE{\partial_x b_n(s,y,\varrho_s^{n,x}(y))\vert_{y=B_s^x} - \partial_x b(s,y,\varrho_s^{x}(y))\vert_{y=B_s^x} \right|^p \left| D_0(s,t,x)}{p} ds \\
&\lesssim \int_0^t G_n(s,t,x) ds + \int_0^t \EpE{\partial_3 b_n(s,B_s^x,\varrho_s^{n,x}(B_s^x))- \partial_3 b(s,B_s^x,\varrho_s^{x}(B_s^x))}{2p} ds \\
&\qquad + \int_0^t A_n(s,x) ds \\
&=: \int_0^t G_n(s,t,x) ds + \int_0^t K_n(s,x) ds + \int_0^t A_n(s,x) ds,
\end{align*}
\end{small}
and thus
\begin{align*}
A_n(t,x) \leq C \left( F_n(x) + \sup_{s\in[0,t]} G_n(s,t,x) + \sup_{s\in[0,T]} K_n(s,x) \right) + C \int_0^t A_n(s,x) ds,
\end{align*}
for some constant $C >0$ independent of $t\in[0,T]$, $n\geq 0$, and $x\in K$. Consequently we get by Grönwall's inequality
\begin{align*}
A_n(t,x) \lesssim F_n(x) + G_n(0,t,x) + \sup_{s\in[0,T]} K_n(s,x) + \int_0^t \int_0^t G_n(s,r,x) ds dr.
\end{align*}
$F_n$ converges to $0$ uniformly in $x\in K$ by \Cref{cor:RegUniformConvergenceEcal}. Furthermore, we have that $G_n(s,t,x)$ is integrable over $t$ and $s$ by \Cref{cor:RegBoundLocalTime} and converges to $0$ uniformly in $x\in K$ by \Cref{cor:RegUniformConvergenceLocalTime}. Finally, we get due to $b \in \mathcal{L}([0,T] \times \mathbb{R} \times \mathbb{R})$ that
\begin{align*}
K_n(s,x) &\leq \EpE{\partial_3 b_n(s,B_s^x,\varrho_s^{n,x}(B_s^x)) - \partial_3 b_n(s,B_s^x,\varrho_s^x(B_s^x))}{2p} \\
&\qquad + \EpE{\partial_3 b_n(s,B_s^x,\varrho_s^x(B_s^x)) - \partial_3 b(s,B_s^x,\varrho_s^x(B_s^x))}{2p} \\
&\lesssim \left\vert \varrho_s^{n,x}(B_s^x) - \varrho_s^x(B_s^x) \right\vert \\
&\qquad + \EpE{\partial_3 b_n(s,B_s^x,\varrho_s^x(B_s^x)) - \partial_3 b(s,B_s^x,\varrho_s^x(B_s^x))}{2p}.
\end{align*}
Note first that due to \Cref{rem:convergenceRho} we have that $\left\vert \varrho_s^{n,x}(B_s^x) - \varrho_s^x(B_s^x) \right\vert$ converges uniformly in $s\in[0,T]$ and $x \in K$ to $0$ as $n$ goes to infinity. Moreover,
\begin{align*}
&\EpE{\partial_3 b_n(s,B_s^x,\varrho_s^x(B_s^x)) - \partial_3 b(s,B_s^x,\varrho_s^x(B_s^x))}{2p} \\
&\quad = \left( \int_{\mathbb{R}} \left| \partial_3 b_n\left(s,y,\varrho_s^x(y) \right) - \partial_3 b\left(s,y,\varrho_s^x(y) \right) \right|^{2p} \frac{1}{\sqrt{2\pi s}} e^{-\frac{(y-x)^2}{2s}} dy \right)^{\frac{2}{2p}} \\
&\quad \leq e^{\frac{x^2}{2ps}} \left( \int_{\mathbb{R}} \left| \partial_3 b_n\left(s,y,\varrho_s^x(y) \right) - \partial_3 b\left(s,y,\varrho_s^x(y) \right) \right|^{2p} \frac{1}{\sqrt{2\pi s}} e^{-\frac{y^2}{4s}} dy\right)^{\frac{2}{2p}},
\end{align*}
where we have used $e^{-\frac{(y-x)^2}{2s}} = e^{-\frac{y^2}{4s}} e^{-\frac{(y-2x)^2}{4s}} e^{\frac{x^2}{2s}} \leq e^{-\frac{y^2}{4s}} e^{\frac{x^2}{2s}}$. Furthermore, analogously to \eqref{eq:rhoContinuity}, by \Cref{cor:Hölder} we can find a constant $C>0$ such that for all $s\in[0,T]$, $z \in \mathbb{R}$, and $x,y \in K$
\begin{align*}
\left\vert \varrho_s^{x}(z) - \varrho_s^{y}(z) \right\vert \leq C|x-y|.
\end{align*}
Consequently the function $x \mapsto \varrho_s^{x}(y)$ is continuous uniformly in $s\in [0,T]$. Thus $\Lambda^K := \lbrace \varrho_s^{x}(y) : x\in K \rbrace \subset \mathbb{R}$ is compact as the image of a compact set under a continuous function. Therefore due to the definition of the approximating sequence
\begin{align*}
\sup_{x\in K} \left| \partial_3 b_n(s,y,\varrho_s^{x}(y)) - \partial_3 b(s,y,\varrho_s^{x}(y)) \right| = \sup_{z \in \Lambda^K} \left| \partial_3 b_n(s,y,z) - \partial_3 b(s,y,z) \right| \xrightarrow[n\to\infty]{} 0,
\end{align*}
and hence $K_n(s,x)$ converges to $0$ uniformly in $s\in[0,T]$ and $x\in K$.
\end{proof}
We define the weight function $\omega_T: \mathbb{R} \to \mathbb{R}$ by
\begin{align}\label{eq:RegWeightFunction}
\omega_T(y) := \exp \left\lbrace - \frac{|y|^2}{4T} \right\rbrace, \quad y \in \mathbb{R}.
\end{align}
\begin{theorem}\label{thm:RegMainTheoremDelta}
Consider $d=1$. Let $(b \diamond \varphi)$ admit a decomposition \eqref{eq:RegFormDrift} and let $b, \varphi \in \mathcal{L}([0,T] \times \mathbb{R} \times \mathbb{R})$. Further, let $(X_t^x)_{t\in [0,T]}$ be the unique strong solution of mean-field SDE \eqref{lebesgueMFSDE} and $\Phi \in L^{2p}(\mathbb{R}; \omega_T)$, where $p:= \frac{1+\varepsilon}{\varepsilon}$, $\varepsilon>0$ sufficiently small with respect to \Cref{lem:RegBoundsSolution} and $\omega_T: \mathbb{R} \to \mathbb{R}$ as defined in \eqref{eq:RegWeightFunction}. Then
\begin{align*}
u(x) := \mathbb{E} \left[ \Phi(X_T^x) \right]
\end{align*}
is continuously differentiable in $x \in \mathbb{R}$ and the derivative takes the form
\begin{align}\label{DeltaMF}
u'(x) = \mathbb{E} \left[ \Phi(X_T^x) \left( \int_0^T a(s) \partial_x X_s^x + \partial_x b(s,y,\varrho_s^{x}(y))\vert_{y=B_s^x} \int_0^s a(u) du \, dB_s \right) \right],
\end{align}
where $a: [0,T] \to \mathbb{R}$ is any bounded, measurable function such that
\begin{align*}
\int_0^T a(s) ds = 1.
\end{align*}
\end{theorem}
\begin{proof}
Due to \Cref{lem:RegRepDelta2} we already know that in the case $\Phi \in \mathcal{C}^{1,1}_b(\mathbb{R})$ the functional $\EW{\Phi(X_T^x)}$ is continuously differentiable and analogously to \cite[Theorem 4.2]{Bauer_StrongSolutionsOfMFSDEs} it can be shown that representation \eqref{DeltaMF} holds. Now, using mollification we can approximate $\Phi\in L^{2p}(\mathbb{R}; \omega_T)$ by a sequence of smooth functionals $\lbrace \Phi_n \rbrace_{n\geq 1} \subset C_0^{\infty}(\mathbb{R})$ such that $\Phi_n \to \Phi$ in $L^{2p}(\mathbb{R}; \omega_T)$ as $n\to\infty$. We define
\begin{align*}
u_n(x) &:= \mathbb{E} \left[ \Phi_n(X_T^x) \right]\quad \text{and} \\
\overline{u}(x) &:= \mathbb{E} \left[ \Phi(X_T^x) \left( \int_0^T a(s) \partial_x X_s^x + \partial_x b(s,y,\varrho_s^{x}(y))\vert_{y=B_s^x} \int_0^s a(u) du \, dB_s \right) \right].
\end{align*}
Note first that $\overline{u}$ is well-defined. Indeed, due to \eqref{eq:boundDerivative}, \Cref{lem:RegBoundsSolution}, and \eqref{eq:boundPartialXb} we get
\begin{align}\label{eq:RegWellDefined}
|\overline{u}(x)| &\leq \mathbb{E} \left[ \Phi(X_T^x)^2 \right]^{\frac{1}{2}} \mathbb{E} \left[ \left( \int_0^T a(s) \partial_x X_s^x + \partial_x b(s,y,\varrho_s^{x}(y))\vert_{y=B_s^x} \int_0^s a(u) du \, dB_s \right)^2 \right]^{\frac{1}{2}} \notag \\
&\lesssim \mathbb{E} \left[ \Phi(B_T^x)^2 \mathcal{E}\left( \int_0^T b(u,B_u^x,\varrho_u^x(B_u^x))dB_u \right) \right]^{\frac{1}{2}} \sup_{s\in[0,T]} \mathbb{E} \left[ \left( \partial_x X_s^x \right)^2 \right]^{\frac{1}{2}} \\
&\lesssim \mathbb{E} \left[ \left| \Phi(B_T^x) \right|^{2p} \right]^{\frac{1}{2p}} < \infty. \notag
\end{align}
Due to \Cref{lem:RegRepDelta2}, $u_n$ is continuously differentiable for all $n\geq 1$. Thus it remains to show that $u_n'(x)$ converges to $\overline{u}(x)$ compactly in $x$ as $n\to\infty$, where $u_n'$ denotes the first derivative of $u_n$ with respect to $x$. Exactly in the same way as in equation \eqref{eq:RegWellDefined} we can find for any compact subset $K\subset \mathbb{R}$ a constant $C$ such that for every $x\in K$
\begin{align*}
|u_n'(x) - \overline{u}(x)| &\leq C \mathbb{E} \left[ \left| \Phi_n(B_T^x) - \Phi(B_T^x)\right|^{2p}\right]^{\frac{1}{2p}} \\
&= C \left( \int_{\mathbb{R}} \frac{1}{\sqrt{2 \pi T}} \left| \Phi_n(y) - \Phi(y)\right|^{2p} e^{-\frac{(y-x)^2}{2T}} dy \right)^{\frac{1}{2p}} \\
&\leq C \left( \frac{e^{\frac{x^2}{2T}}}{\sqrt{2 \pi T}} \int_{\mathbb{R}} \left| \Phi_n(y) - \Phi(y)\right|^{2p} e^{-\frac{y^2}{4T}} dy \right)^{\frac{1}{2p}} \\
&= C \left( \frac{e^{\frac{x^2}{2T}}}{\sqrt{2 \pi T}}\right)^{\frac{1}{2p}} \left\Vert \Phi_n - \Phi \right\Vert_{L^{2p}(\mathbb{R};\omega_T)},
\end{align*}
where we have used $e^{-\frac{(y-x)^2}{2T}} = e^{-\frac{y^2}{4T}} e^{-\frac{(y-2x)^2}{4T}} e^{\frac{x^2}{2T}} \leq e^{-\frac{y^2}{4T}} e^{\frac{x^2}{2T}}$. Consequently
\begin{align*}
\lim_{n\to\infty} \sup_{x\in K} |u_n'(x) - \overline{u}(x)| = 0.
\end{align*}
Thus $u'= \overline{u}$ and $u$ is continuously differentiable.
\end{proof}
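For orientation, the following Python sketch implements a Monte Carlo version of representation \eqref{DeltaMF} in the classical special case $\varphi \equiv 0$, where the correction term involving $\partial_x b$ vanishes and the choice $a(s)=1/T$ recovers the standard Bismut-Elworthy-Li estimator. We additionally assume a smooth, time-homogeneous drift so that the first variation process solves $d(\partial_x X_t) = b'(X_t)\,\partial_x X_t\,dt$; the function names, drift, and payoff are arbitrary test inputs:
\begin{verbatim}
import numpy as np

def bel_delta(b, db, phi, x, T=1.0, n_steps=400,
              n_paths=20000, seed=0):
    """Monte Carlo estimate of u'(x) = E[Phi(X_T) (1/T)
    int_0^T (dX_s/dx) dB_s] for dX = b(X) dt + dB, a(s) = 1/T."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(n_paths, float(x))
    J = np.ones(n_paths)       # first variation dX_t/dx
    W = np.zeros(n_paths)      # running int_0^t J_s dB_s
    for _ in range(n_steps):
        dB = np.sqrt(dt) * rng.standard_normal(n_paths)
        W += J * dB            # Ito integral uses the pre-step J
        J += db(X) * J * dt    # dJ = b'(X) J dt
        X += b(X) * dt + dB
    return np.mean(phi(X) * W / T)

# illustrative inputs: smooth drift, merely Lipschitz payoff
b, db = np.tanh, lambda y: 1.0 / np.cosh(y) ** 2
phi = lambda y: np.maximum(y, 0.0)
print(bel_delta(b, db, phi, x=0.3))
\end{verbatim}
Note that \texttt{phi} is Lipschitz but not differentiable at the origin, which is precisely the situation in which \eqref{DeltaMF} is useful: no derivative of $\Phi$ enters the estimator.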
\section*{Nomenclature}
{\renewcommand\arraystretch{1.0}
\noindent
\begin{tabular}{l @{\quad=\quad} l}
$a$ & speed of sound [m/s] \\
$C_D$& drag coefficient \\
$d$ & diameter of probe [m] \\
$E$ & Young's modulus [Pa] \\
$F_D$ & drag force [N] \\
$f$ & factor of safety $(f\geq1)$ \\
$g$ & local gravitational acceleration [m/s\textsuperscript{2}] \\
$h$ & thickness [m] \\
$\Delta h$ & heat of fusion [J/kg] \\
$k$ & conductivity [W/m-°C] \\
$P$ & pressure [Pa] \\
$\Pra$ & Prandtl number \\
$M$ & Mach number \\
$m$ & mass [kg] \\
$\Nu$ & Nusselt number \\
$\dot{Q}$ & heat flow rate [W] \\
$r$ & radius of probe [m] $(r=d/2)$ \\
$R$ & thermal resistance [°C/W] \\
$\Rey$ & Reynolds number \\
$T$ & temperature [°C] \\
$t$ & time [s] \\
$\boldsymbol{v}$ & velocity [m/s] \\
$\mathcal{V}$ & volume [m\textsuperscript{3}] \\
$z$ & altitude [m] \\
$\beta$ & ballistic coefficient [kg/m\textsuperscript{2}] \\
$\nu$ & kinematic viscosity [m\textsuperscript{2}/s] \\
$\eta$ & penalization factor $(\eta\leq1)$\\
$\rho$ & density [kg/m\textsuperscript{3}]\\
\end{tabular}}
\section{Introduction}
\lettrine{V}{ehicles} that explore the lower atmosphere of Venus must be designed to tolerate the high temperature and high pressure environment that reaches 462°C and 92 bar at the surface \cite{seiff1985models}. All past missions have used insulated pressure vessels to protect the payload, providing it with a benign environment for the short time needed to reach the surface and make scientific measurements along the way \cite{huntress2011soviet,bienstock2004pioneer}. This approach has been very successful, but has resulted in relatively large vehicles ranging from 99 kg \cite{bienstock2004pioneer} for the Pioneer Venus (PV) small probe to 716 kg \cite{huntress2011soviet} for the VEGA-2 lander. The purpose of the study reported here is to determine from a thermo-mechanical preliminary design perspective what the lower mass limit is for this kind of short-duration probe that employs the insulated pressure vessel architecture. This miniaturization is motivated by potential future mission opportunities for adding one or more probes as secondary payloads on a spacecraft going to Venus, and clearly smaller probes are more readily accommodated than larger ones. Continuing miniaturization and improvement of science instrumentation also facilitate the use of small but capable probes that carry less payload mass than their PV and Venera-VEGA predecessors.
Small Venus probes have the advantage that less heat flows into the payload because of the reduced surface area. However, the ability to absorb that heat without exceeding an allowable operating temperature is also reduced given the smaller mass available. The challenge for miniaturization is that surface area scales with the radius squared but the thermal absorption mass scales with the radius cubed, and this must place a fundamental limit on how small the probe can be and still maintain a survivable operating temperature all the way to the surface. For example, the VEVA mission proposal \cite{klaasen2003veva} envisioned four small 3.5 kg imaging probes dropped by a balloon \cite{kerzhanovich2003low}. The study of \citet{lorenz1998design} concluded that even smaller probes on the order of 1 kg are possible but only if the payload is ruggedized to tolerate an elevated temperature of 100°C and full Venus pressure of 92 bar. Extremely small probes, on the order of 100 g, are discussed in an ESA microprobe design by \citet{wells2004atmospheric} if the minimum altitude requirement is relaxed to 30 km or 10 km \cite{vandenberg2005esa} rather than to the surface.
In this study, we examine the case where full temperature and pressure protection of the payload is provided all the way to the surface. In addition to traditional bluff-body probe designs, we also determine relationships for how streamlining the probe can further minimize the insulation mass required, at the cost of reducing the time-to-surface for data collection and transmission.
The paper is organized as follows: First we describe a simple drop probe model that captures the relevant physics while enabling extensive trade studies. Second, we perform a parametric study on a variety of probe designs and derive the scaling laws for the thermal and pressure vessel subsystems. Finally, we invert these scaling laws to address the design problem, determining a minimum mass cut-off for probes and discussing the associated trade space.
\section{Methodology \label{sec:methods}}
\subsection{Probe Model}
Venus entry is generally performed with a heat shield that is ejected after use, allowing the spacecraft to shed both mass and absorbed entry heat before plunging into the hot lower atmosphere. All Venus entries have been performed in this manner, with the partial exception of the Pioneer Venus Small Probe, which descended with the heat shield still attached \cite{bienstock2004pioneer}. Our focus for this paper is on probes, not landers, so no attempt will be made to cushion the landing or live on the surface for appreciable time (although one of the PV small probes did survive its landing without being specifically designed to do so \cite{bienstock2004pioneer}).
Our model, for the sake of consistency, assumes that entry into the atmosphere has been completed and the heat shield ejected, leaving only the probe. Accordingly, our model also applies to a probe dropped from a high-altitude Venus aerial platform. This probe, illustrated in Fig. \ref{fig:probeCartoon}, consists of a layered construction of five components, in analogy with the Pioneer Venus probes and the Venera/Vega landers:
\begin{enumerate}
\item A spherical pressure vessel made of titanium alloy Ti-6Al-4V.
\item An optional external cowling (assumed negligible mass), designed to reduce or augment the drag during descent. Such aerodynamic devices are generally only a few percent of the total mass for mature mass-breakdowns \cite{huntress2011soviet}, and different types (plates, tailboxes, or otherwise) have been shown to package into entry systems in tandem with other assets \cite{klaasen2003veva,vandenberg2005esa,huntress2011soviet}, so we make the simplifying assumption to ignore this small mass in this study.
\item An insulation layer, placed inside the pressure vessel similar to the PV probes. Our selected material is a calcium-silicate insulation (ZIRCAL-18 \cite{zrci2018zircal}) suitable for high-temperature application, though the PV probes instead used MLI blankets. Fiberglass filled with xenon gas \cite{hall2000thermal} is another proposed insulator.
\item A phase-change heat sink material, composed of a lithium salt (lithium nitrate trihydrate LiNO\textsubscript{3}-3H\textsubscript{2}O) that melts at $30^\circ$C, technology developed for the Venera/Vega landers \cite{huntress2011soviet}.
\item A payload, which claims the remaining volume and mass. The payload and heat-sink material are assumed to share the spherical volume inside the insulation.
\end{enumerate}
\begin{figure}[t]
\centering
\includegraphics[width=\figwidth]{probeCartoon.eps}
\caption{Descent Probe Schematic \label{fig:probeCartoon}}
\end{figure}
\begin{table*}[tb]
\centering
\begin{tabular}{llcccc}
\hline \hline
Material & Use & $\rho$ [g/cm\textsuperscript{3}]& $E$ [GPa] & $k$ [W/m-°C] & $\Delta h$ [kJ/kg] \\\hline
Titanium 6Al-V4 & Pressure Vessel & 4.43 & $113$ & 7 & --\\
ZIRCAL-18& Insulation & 0.280 & -- & 0.07 & -- \\
LiNO\textsubscript{3}-3H\textsubscript{2}O& Heat Sink & 1.50 & -- & -- & 296\\
\hline
\end{tabular}
\caption{\label{tab:materials} Selected Material Properties of Probe Components}
\end{table*}
The relevant material properties of these components are listed in Table \ref{tab:materials}. Future material science developments will naturally trickle into improved properties, but the current selection provides a reasonable baseline for mission design.
\subsection{Descent Model \label{sub:descent}}
Our descent model begins at $z=65$ km altitude above the surface of Venus, starting from an assumed initial velocity of $||\boldsymbol{v}(0)||=200$ m/s oriented $30^\circ$ below the horizon, similar to the Vega 2 flightpath \cite{huntress2011soviet}. Including a hypersonic entry above 65 km does not affect the results of our simulation, as the heat shield absorbs all the entry heat and then detaches.
As the probe descends, we interpolate the local atmospheric properties $\rho_\text{atm}$, $\nu_\text{atm}$, $a_\text{atm}$, $T_\text{atm}$, $P_\text{atm}$, and $k_\text{atm}$ (density, kinematic viscosity, speed of sound, temperature, pressure, and thermal conductivity) from the equatorial Venus International Reference Atmosphere (VIRA) \cite{seiff1985models}, thereby determining the instantaneous Reynolds ($\Rey$) and Mach numbers ($M$):
\begin{equation}
\label{eq:ReMach}
\Rey(t) = \frac{||\boldsymbol{v}(t)|| \, d}{\nu_\text{atm}(z)}; \,\,\,\,\, M(t) = \frac{||\boldsymbol{v}(t)||}{a_\text{atm}(z)}
\end{equation}
where $d=2r$ is the diameter of the probe. For any body, the drag coefficient $C_D$ is a function of the instantaneous $\Rey$ and $M$, meaning a $C_D$ lookup table can be used to determine the drag force $F_D(t)$:
\begin{align}
C_D(t) &= C_D(\Rey(t),M(t)) \label{eq:Cd}\\
F_D(t) &= \frac{1}{2}\rho_\text{atm}(z)||\boldsymbol{v}(t)||^2 \pi r^2 C_D(t) \label{eq:Fd}
\end{align}
For all of the probes analyzed, the Reynolds number varies little with time and stays well above natural turbulent transition for external boundary layers, generally around $\Rey\approx10^6$ (see Sec \ref{sec:results}.\ref{sub:trajectoryResults}), and most bodies transition earlier \cite{hoerner1965fluid}. In this regime, drag for both bluff and streamlined objects is only weakly dependent on Reynolds number, generally at or less than a scaling of $\Rey^{-1/6}$ \cite{hoerner1965fluid}. This allows for the simplifying assumption in the analysis below that the drag coefficient is independent of Reynolds number.
Accordingly, we can invert the drag problem and consider the \emph{$C_D$ as a design input} that the cowling is assumed to match once subsonic speeds are reached, limited by values plausible within this flow regime ($\Rey>10^6$, $M<0.8$). All landers and probes dropped to the Venus surface to-date have had drag coefficients higher than that of the pressure vessel sphere itself, either through the addition of a drag plate (Venera/Vega), a heat shield (PV-SP), or a large lip with spin vanes (PV-LP). However, a wide range of designed drag coefficients are possible (Fig. \ref{fig:probeCartoon}); for example (see the code sketch after this list):
\begin{enumerate}
\item A drag plate with roughly twice the diameter of the probe, bringing the drag coefficient up to $C_D=4$ normalized to the pressure vessel frontal area. To capture higher Mach numbers, we assume the expression rounded from \citet{hoerner1965fluid} to model the relatively brief supersonic descent phase:
\begin{align}
C_{D,\text{plate}}(M) &= 1.5+2.5(1+0.25M^2) \label{eq:Cdplate}
\end{align}
\item A sphere with mild surface roughness, leading to a streamlined $C_D=0.2$. Above $M=0.8$, we assume a linear drag rise to $C_D(M)=0.9$ \cite{hoerner1965fluid}.
\begin{align}
[M \leq 0.8]: \,\, C_{D,\text{sphere}}(M) &= 0.2 \label{eq:Cdsphere} \\
[0.8 < M < 1]: \,\, C_{D,\text{sphere}}(M) &= 0.2+3.5(M-0.8) \\
[M \geq 1]: \,\, C_{D,\text{sphere}}(M) &= 0.9
\end{align}
\item A tailbox, dropping the drag coefficient to $C_D=0.05$, the approximate minimum configuration reported by \citet{hoerner1965fluid}. Above $M>0.8$, we again assume a linear drag rise to $C_D(M)=0.9$, similar to the sphere.
\begin{align}
[M \leq 0.8]: \,\, C_{D,\text{tailbox}}(M) &= 0.05 \\
[0.8 < M < 1]: \,\, C_{D,\text{tailbox}}(M) &= 0.05+4.25(M-0.8) \\
[M \geq 1]: \,\, C_{D,\text{tailbox}}(M) &= 0.9 \label{eq:Cdtailbox}
\end{align}
\end{enumerate}
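The three drag laws above translate directly into code; the following Python sketch (a vectorized transcription of our own, with function names chosen for illustration) reproduces the piecewise curves used in the descent simulation:
\begin{verbatim}
import numpy as np

def cd_plate(M):
    # drag plate: C_D = 4 at low subsonic Mach, rising with M
    return 1.5 + 2.5 * (1.0 + 0.25 * np.asarray(M, float) ** 2)

def cd_sphere(M):
    # mildly rough sphere with a linear transonic drag rise
    M = np.asarray(M, float)
    return np.where(M <= 0.8, 0.2,
                    np.where(M < 1.0, 0.2 + 3.5 * (M - 0.8), 0.9))

def cd_tailbox(M):
    # streamlined tailbox, Hoerner's approximate minimum C_D
    M = np.asarray(M, float)
    return np.where(M <= 0.8, 0.05,
                    np.where(M < 1.0, 0.05 + 4.25 * (M - 0.8), 0.9))

print(cd_plate(0.0), cd_sphere(0.9), cd_tailbox(0.9))
# 4.0, 0.55, 0.475
\end{verbatim}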
Note that all of these bodies would likely require spin-stabilization for the first few minutes of entry \cite{lorenz2007spinning} during the supersonic phase. The equation of motion of the probe, including both gravity and buoyancy, is then:
\begin{equation}
\label{eq:eqmotion}
m\dot{\boldsymbol{v}} = -\hat{\boldsymbol{v}}(t) \cdot F_D(t) \,\,\, -\hat{\boldsymbol{z}} \cdot m g(z) \,\,\, +\hat{\boldsymbol{z}} \cdot \frac{4}{3} \pi r^3 \rho_\text{atm}(z) g(z)
\end{equation}
where $\hat{\boldsymbol{v}}$ and $\hat{\boldsymbol{z}}$ are the unit vectors in the direction of $\boldsymbol{v}$ and $\boldsymbol{z}$, $m$ is the mass, and $g(z)$ is the local Venus gravitational acceleration from the VIRA model \cite{seiff1985models}:
\begin{equation}
g(z) = G \frac{M_\text{Venus}}{(z+r_\text{Venus})^2}
\end{equation}
assuming a mean Venus radius of $r_\text{Venus} = 6052$ km, mass of $M_\text{Venus}=4.867\times10^{24}$ kg, and universal gravitational constant $G=6.674\times10^{-11}$ m\textsuperscript{3}/(kg$\cdot$s\textsuperscript{2}). All equations of motion are integrated in Matlab, utilizing the ode45 Runge-Kutta solver \cite{dormand1980family}.
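As a self-contained illustration of this integration, the Python sketch below substitutes a toy isothermal exponential atmosphere and a constant drag-plate $C_D$ for the tabulated VIRA profiles and Mach-dependent drag laws actually used; all numerical values here are placeholders, not mission values:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Toy stand-ins for the VIRA lookup tables (illustrative only):
rho_atm = lambda z: 65.0 * np.exp(-z / 15.8e3)   # density [kg/m^3]
g = lambda z: 6.674e-11 * 4.867e24 / (z + 6052e3) ** 2

m, r, CD = 10.0, 0.15, 4.0   # placeholder mass [kg], radius [m], C_D

def rhs(t, y):
    z, vx, vz = y
    v = np.hypot(vx, vz)
    FD = 0.5 * rho_atm(z) * v ** 2 * np.pi * r ** 2 * CD
    buoy = (4.0 / 3.0) * np.pi * r ** 3 * rho_atm(z) * g(z)
    return [vz,
            -FD * vx / (m * v),                   # drag opposes velocity
            -FD * vz / (m * v) - g(z) + buoy / m]

y0 = [65e3, 200 * np.cos(np.radians(30)), -200 * np.sin(np.radians(30))]
surface = lambda t, y: y[0]                       # stop at z = 0
surface.terminal, surface.direction = True, -1
sol = solve_ivp(rhs, [0.0, 2.0e4], y0, events=surface, max_step=1.0)
print(f"time to surface: {sol.t[-1] / 60:.1f} min")
\end{verbatim}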
\subsection{Thermal Model}
The thermal model assumes that a steady-state heat flow rate is obtained at every time instant during the descent, consisting of four steps: (1) convection from the atmosphere to the skin of the probe, (2) conduction through the pressure vessel, (3) conduction through the insulation, and finally (4) absorption into the phase-change heat sink. The thermal network (Fig. \ref{fig:thermalnetwork}) therefore comprises a set of series resistances ($R_\text{convect}$, $R_\text{vessel}$, and $R_\text{insul}$) that determine the net heat flux:
\begin{figure}[t]
\centering
\includegraphics[width=\figwidth]{thermalCartoon.eps}
\caption{Thermal Network Model \label{fig:thermalnetwork}}
\end{figure}
\subsubsection{Convection from the Atmosphere}
We model the convection problem as a simple dropping sphere, assuming the downstream cowling structural attachments are designed to have a negligible heat path compared to the stagnation flow on the probe front. At subsonic speeds, \citet{achenbach1978heat} provides heat convection correlations (Nusselt number $\Nu$ as a function of $\Rey$) for spheres in a gas of Prandtl number $\Pra=0.71$. Also available from the VIRA data, $\Pra(z)=0.71$ is a faithful approximation for the Venus atmosphere due to the majority carbon dioxide constituency.
\begin{align}
\label{eq:achenbach}
[\Rey_\text{film}<2\times 10^5]:& \,\,\,\,
\Nu \approx 2+\left(0.25\Rey_\text{film}+3\times 10^{-4}\Rey_\text{film}^{8/5}\right)^{1/2} \\
[4\times 10^5<\Rey_\text{film}<5\times 10^6]:& \,\,\,\,
\Nu \approx 430+5\times 10^{-3}\Rey_\text{film} + ... \, \text{\small [smaller high-order terms]}
\end{align}
\citet{achenbach1978heat} claims the Reynolds exponent drops back again to $\sfrac{4}{5}$ beyond $\Rey_\text{film}>5\times 10^6$, but does not give another relation for this regime. At the risk of slight overestimation of the convective coefficient and to retain the simplest possible model, we keep his linear correlation beyond $\Rey_\text{film}>5\times 10^6$. In Achenbach's relation, the Reynolds number must also be corrected for the new gas properties of the lower-temperature ``film'' near the probe, where $T_\text{film}$ is generally taken as the average of the atmospheric temperature and probe surface temperature. We assume a CO\textsubscript{2}-like correction for the kinematic viscosity, with properties from \citet{fenghour1998viscosity}, as CO\textsubscript{2} is by far the dominant atmospheric constituent (96.5\% in the VIRA model \cite{seiff1985models}):
\begin{align}
\label{eq:Refilm}
\Rey_\text{film}(t) = \Rey(t) \frac{\nu_\text{CO2}(T_\text{atm})}{\nu_\text{CO2}(T_\text{film})}
\end{align}
Finally, we can use Achenbach's relation to approximate the convective resistance $R_\text{convect}$ from the atmospheric temperature to the pressure vessel skin:
\begin{equation}
\label{eq:CdNu}
R_\text{convect}(t) = \frac{d}{4\pi r^2 k_\text{atm}(z)\, \Nu(\Rey_\text{film})}
\end{equation}
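A direct transcription of these correlations in Python (a sketch only: the regime between $\Rey_\text{film}=2\times10^5$ and $4\times10^5$, for which no relation is given, is bridged by the linear branch, and the smaller high-order terms are dropped, as above):

\begin{verbatim}
import numpy as np

def nusselt_achenbach(Re_film):
    # sphere correlations at Pr = 0.71; the linear branch is
    # retained beyond Re = 5e6, as discussed in the text
    if Re_film < 2e5:
        return 2 + (0.25 * Re_film + 3e-4 * Re_film**1.6)**0.5
    return 430 + 5e-3 * Re_film

def R_convect(d, k_atm, Re_film):
    # convective resistance from the atmosphere to the vessel skin
    return d / (4 * np.pi * (d / 2)**2 * k_atm * nusselt_achenbach(Re_film))
\end{verbatim}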
\subsubsection{Conduction through the Pressure Vessel}
We determine the required pressure vessel thickness $h_\text{vessel}$ from the spherical buckling criterion of \citet{young2002roark} for the maximum atmospheric pressure $P_\text{atm}(0)$, further including a factor-of-safety of $f_\text{vessel} = 1.3$ and a knockdown factor $\eta_\text{T}=0.76$ on the Young's modulus to account for high-temperature weakening \cite{handbook1998mil}.
\begin{equation}
\label{eq:pressureVessel}
\frac{h_\text{vessel}}{r} = \sqrt{\frac{f_\text{vessel} P_\text{atm}(0)}{0.365 \eta_\text{T} E_\text{vessel}}}
\end{equation}
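Numerically, with an assumed Ti-6Al-4V modulus of $E_\text{vessel}\approx114$ GPa and a surface pressure of $P_\text{atm}(0)\approx9.2$ MPa (both assumptions of this sketch, not simulation outputs), the criterion reproduces the vessel thicknesses tabulated later in Table \ref{tab:probes}:

\begin{verbatim}
P_SURF = 9.2e6    # Venus surface pressure [Pa] (assumed)
E_TI   = 114e9    # Ti-6Al-4V Young's modulus [Pa] (assumed)

def vessel_thickness(r, f_vessel=1.3, eta_T=0.76, E=E_TI):
    # spherical buckling criterion solved for the shell thickness
    return r * (f_vessel * P_SURF / (0.365 * eta_T * E))**0.5

print(vessel_thickness(0.25) * 1e3)   # ~4.9 mm, matching Probe A
\end{verbatim}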
The titanium pressure vessel does not provide any practical insulation against the rising external temperature, but its conductive resistance can be easily computed from spherical shell relations:
\begin{equation}
\label{eq:sphereResistance}
R_\text{vessel} = \frac{1}{4\pi k_\text{vessel} } \frac{h_\text{vessel}}{r(r-h_\text{vessel})}
\end{equation}
\subsubsection{Conduction through the Insulation}
The series conductive resistance of the insulation layer $R_\text{insul}$ beneath is similarly computed, with radius $r_\text{insul}=r-h_\text{vessel}$. As a rough model for heat leaks from sensor pass-throughs, we penalize the insulation effectiveness by $\eta_\text{insul} = 0.5$.
\begin{equation}
\label{eq:insulResistance}
R_\text{insul} = \frac{\eta_\text{insul} }{4\pi k_\text{insul} } \frac{h_\text{insul}}{r_\text{insul}(r_\text{insul}-h_\text{insul})}
\end{equation}
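Both conductive resistances share the same spherical-shell form, so a single helper suffices; the conductivities below are illustrative assumptions (e.g. $k_\text{vessel}\approx7$ W/m-K for titanium), not values taken from our simulations:

\begin{verbatim}
import numpy as np

def shell_resistance(k, r_outer, h, eta=1.0):
    # spherical-shell conduction; eta < 1 penalizes the insulation
    # effectiveness to model pass-through heat leaks
    return eta * h / (4 * np.pi * k * r_outer * (r_outer - h))

R_vessel = shell_resistance(7.0, 0.250, 4.9e-3)             # Probe A vessel
R_insul  = shell_resistance(0.03, 0.245, 34e-3, eta=0.5)    # assumed k_insul
\end{verbatim}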
The heat-leak factor $\eta_\text{insul}$ may well scale with probe size and the type of science instruments selected, but any such variation is functionally equivalent to an effective change in the insulation conductivity (discussed in Sec. \ref{sec:results}.\ref{sub:material}).
\subsubsection{Net Heat Flux}
The external heat $\dot{Q}_\text{external}$ conducted through the layers is therefore determined from the series resistances and the heat sink melting temperature of $T_\text{sink}=30^\circ$C.
\begin{gather}
\label{eq:seriesResistance}
\dot{Q}_\text{external} = \frac{T_\text{atm}(z)-T_\text{sink}}{R_\text{convect}+R_\text{vessel}+R_\text{insul}} \\
\label{eq:melting}
\dot{m}_\text{sink,melted} = \frac{\dot{Q}_\text{external}+\dot{Q}_\text{internal}}{\Delta h_\text{sink}}
\end{gather}
where $\Delta h_\text{sink}$ is the specific heat of fusion of the heat sink material, $\dot{m}_\text{sink,melted}$ is the rate of melting in kilograms per second, and $\dot{Q}_\text{internal}$ accounts for heat generated internal to the insulation. We assume $50$ W of internal heat estimated from the PV-SP design: 10 W of instrument power \cite{bienstock2004pioneer}, a 10 W transmitter \cite{bienstock2004pioneer} operating at a typical 25\% efficiency to dissipate 30 W of heat, and a final $10$ W for the computer and power management system. These $50$ watts increase the rate of lithium melting by an extra 10 g/min, so designs with other internal power levels can be scaled accordingly.
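In code, the net heat flux and melt rate reduce to a few lines; the latent heat of $\approx300$ kJ/kg used below is simply the value implied by the quoted 10 g/min per 50 W, not an independently measured property:

\begin{verbatim}
def melt_rate(T_atm, T_sink, R_total, Q_internal=50.0, dh_sink=3.0e5):
    # external heat through the series resistances plus internal
    # dissipation, all absorbed by the phase-change sink [kg/s]
    Q_external = (T_atm - T_sink) / R_total
    return (Q_external + Q_internal) / dh_sink

print(melt_rate(460.0, 30.0, 1.0) * 60e3)   # g/min for an illustrative case
\end{verbatim}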
For all trajectories, we simulate the initial temperature of the probe at $30^\circ$C, the melting point of the lithium salt heat sink; this assumes that the heatshield includes a thermal solution to provide this initial condition after entry. Subsequently, we then assume that all additional heat is absorbed by the phase-change heat sink material, thereby keeping the payload at constant temperature for the entire descent. This is a conservative assumption made for computational simplicity. Inclusion of the non-salt payload's ability to absorb heat will reduce the amount of salt required. Assuming a mostly metallic payload that is allowed to heat by 50$^\circ$C, each gram of salt absorbs roughly 12 times more energy than a gram of titanium (\textasciitilde500 J/kg-$^\circ$C) and 6.6 times more than aluminum (\textasciitilde900 J/kg-$^\circ$C); meaning our thermal capacity can be conservative by up to a factor of two for probes of large payload and little salt (see Section \ref{sec:results}:\ref{sub:params}).
We also, however, include a thermal factor-of-safety $f_\text{sink}=1.3$ on the carried heat sink material, so 30\% extra remains on impact if the predicted uniform heat transfer is realized. This safety factor is meant to cover any ``hot spots'', where some extra reservoirs of the salt may be needed to compensate for trouble areas in a complex interior. We also assume $f_\text{sink}$ will cover uncertainties in the atmospheric properties and flow characteristics around the probe (a conservative estimate given only mild density and temperature fluctuations from PV data \cite{seiff1980measurements} and a largely predictable subsonic descent).
The additional heat capacity of the pressure vessel may be ignored as it resides outside the insulation layer: the convective resistance is much lower than the conductive (see Section \ref{sec:results}.\ref{sub:thermalResults}) so this capacity is quickly saturated by the rapid heat transfer. The Venera/Vega landers had insulation on both sides of the pressure vessel \cite{huntress2011soviet} which allowed use of the vessel heat capacity, but this architecture is generally not used in smaller probes \cite{wells2004atmospheric,klaasen2003veva,bienstock2004pioneer} that are more volume-constrained.
\section{Results \label{sec:results}}
\subsection{Parameterization \label{sub:params}}
The full range of simulated probes is enumerated in Fig. \ref{fig:dataKey}; this figure can be used as a data key for the remainder of this paper. Our parameters are selected to roughly overlap a wide array of probes and landers either already flown to Venus or detailed design concepts (marked with asterisk *). The diameter $d$ of these probes varies from $0.1$ to $1.3$ m, and bulk density $\rho_\text{bulk}$ (including the pressure vessel and everything inside it) varies from $0.6$ to $1.2$ g/cm\textsuperscript{3}. While certainly low compared to metallic materials, the bulk density of probes is usually limited to $\rho_\text{bulk} \leq 1.2$ g/cm\textsuperscript{3} due to volume packing constraints. Mass can be derived from the chosen bulk density and radius as $m= \rho_\text{bulk}\mathcal{V} = \frac{4}{3}\rho_\text{bulk}\pi r^3$. VEGA parameters are taken from \citet{huntress2011soviet}, PV-LP/PV-SP from \citet{bienstock2004pioneer}, VEVA diameter and mass from \citet{klaasen2003veva} and \citet{kerzhanovich2003low} respectively, Lorenz from \citet{lorenz1998design}, and ESA Microprobe from \citet{wells2004atmospheric}.
In Fig. \ref{fig:dataKey} and throughout the remainder of the paper, each icon is scaled in size with the diameter of the selected probe, and its shading is a function of the bulk density. Icon shapes give the probe drag characteristics discussed in Sec. \ref{sec:methods}:\ref{sub:descent}; square symbols represent drag plates, circles represent spheres, and triangles represent streamlined probes. For the majority of the descent these shapes have the subsonic drag coefficients of $C_D=4$, $C_D=0.2$, and $C_D=0.05$ respectively, but during the brief supersonic phase variable drag coefficients are used instead (Eq. \ref{eq:Cdplate} - \ref{eq:Cdtailbox}).
\begin{figure}[t]
\centering
\includegraphics[width=\figwidth]{symbolDefinition.eps}
\includegraphics[width=\figwidth]{datakey.eps}
\caption{Simulated Probe Design Space \label{fig:dataKey}}
\end{figure}
We also specifically call out a few design points (A, B, etc.) enumerated in Table \ref{tab:probes} for detailed analysis later in the manuscript to showcase results. For example, Probe A is spherical with drag coefficient $C_D=0.2$, has diameter $d=0.5$ m, and a bulk density of $\rho_\text{bulk}=0.8$ g/cm\textsuperscript{3}; comparable in diameter and mass to the Pioneer Venus Small Probe pressure vessel, though more streamlined. The smallest probes, such as Probe C, carry very little heat sink material and the 10 g/min melt due to internal heating ($m_\text{sink,internal}$) becomes a significant fraction of their thermal load.
\begin{table*}[tb]
\centering
\begin{tabular}{l | ccc | cc | ccccccc}
\hline \hline
\multirow{ 2}{*}{Id.} & $d$ & $\rho_\text{bulk}$ & $C_D$ & $m$ & $\beta$ & $h_\text{vessel}$ & $h_\text{insul}$ & $m_\text{sink}$ & $m_\text{sink,internal}$ & $m_\text{payload}$ & $\rho_\text{payload}$ & $t_\text{descent}$ \\
& [m] & [g/cm\textsuperscript{3}] & -- & [kg] & [kg/m\textsuperscript{2}] & [mm] & [mm] & [kg] & [\%] & [kg] & [g/cm\textsuperscript{3}] & [min] \\ \hline
A & 0.50 & 0.80 & 0.2 & 52 & 1300 & 4.9 & 34 & 4.8 & 5 & 24 & 0.68 & 23 \\
B & 0.40 & 0.80 & 0.05 & 27 & 4300 & 3.9 & 25 & 2.4 & 5 & 13 & 0.67 & 13 \\
C & 0.25 & 0.70 & 0.05 & 5.7 & 2300 & 2.4 & 37 & 0.91 & 20 & 1.3 & 0.64 & 17 \\
\hline
\end{tabular}
\caption{\label{tab:probes} Selected Example Probe Parameters}
\end{table*}
Again, none of the tested probes include parachutes, similar to the PV Small Probe. In order to make a comparison from our results to temporarily parachuted Venus platforms (such as Venera/Vega and the PV Large Probe), all reported descent times are measured from 50 km rather than the initial 65 km, as parachute jettison is generally performed at that altitude \cite{huntress2011soviet,bienstock2004pioneer}.
\subsection{Trajectory Correlations \label{sub:trajectoryResults}}
We first note that the lower drag coefficients shorten the descent timeline, as compared to the fully separated flow on classic Venus probes. The heaviest streamlined probes computed here reach the surface in on the order of 10 to 20 minutes, rather than 40 to 60 minutes. As is common for falling objects in a given atmosphere and gravity field, the trajectory details depend solely on the ballistic coefficient $\beta$.
\begin{equation}
\label{eq:ballistic}
\beta = \frac{m}{\pi r^2 C_D}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=\figwidth]{trajCorrelate.eps}
\caption{Descent Trajectory Correlations \label{fig:trajCorrelate}}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\figwidth]{flightTrajectory.eps}
\caption{Descent Timeseries (Probe A) \label{fig:flightTrajectory}}
\end{figure}
For a probe at terminal velocity, we can explicitly derive that the impact velocity is proportional to $\beta^{1/2}$ and descent time as $\beta^{-1/2}$, consistent with Fig. \ref{fig:trajCorrelate}:
\begin{align}
v_z|_{z=0} = \hat{\boldsymbol{z}}\cdot\boldsymbol{v}(z)|_{z=0} &= \left. \sqrt{\frac{2mg(z)}{\rho_\text{atm}(z)\pi r^2 C_D}} \, \right|_{z=0} \propto \beta^{1/2} \label{eq:vz} \\
t_\text{descent} &= \int_{0}^{z(t=0)} \frac{1}{v_z(\zeta)} d\zeta \propto \beta^{-1/2}
\end{align}
These relations assume that the effect of buoyancy is relatively small, so are not expected to hold as the bulk density approaches the atmospheric density. Given the streamlining, the ballistic coefficients investigated can be well over 1000 kg/m\textsuperscript{2}.
The tight agreement in Fig. \ref{fig:trajCorrelate} illustrates that the probe quickly reaches terminal velocity for the given altitude, meaning that the impact velocity and descent time are dependent on the probe's ballistic characteristics, rather than the Venus entry parameters. Consequently, if the probe were dropped instead from an aerial platform at much lower initial velocity, the trajectory would be largely equivalent. Furthermore, the tight agreement in Fig. \ref{fig:trajCorrelate} also shows that Mach effects have little influence on the trajectory past 50 km, as only the subsonic drag coefficient is used in our ballistic coefficient in Eq. \ref{eq:ballistic}. For example, if we investigate the specific trajectory from Probe A (Fig. \ref{fig:flightTrajectory}a), we see that terminal velocity is reached and Mach number falls below subsonic within the first minutes of descent (Fig. \ref{fig:flightTrajectory}b). All reasonable initial velocity estimates provide similarly short timescales to terminal velocity and subsonic flight.
\begin{figure}[t]
\centering
\includegraphics[width=\figwidth]{ReCorrelate.eps}
\caption{Vessel Diameter Reynolds Number at two Altitudes \label{fig:ReCorrelate}}
\end{figure}
The Reynolds number ramps up surprisingly slowly once the probe passes 50 km altitude (Fig. \ref{fig:flightTrajectory}b), growing by less than a factor of four for each probe tested and staying above turbulent transition ($\Rey>10^6$ \cite{hoerner1965fluid}) for external flow boundary layers (Fig. \ref{fig:ReCorrelate}). This phenomenon is due to the opposite scaling of atmospheric density and kinematic viscosity with altitude as the probe falls, and furthermore justifies modeling approximations of a single drag coefficient for the descent.
At any given altitude, $\Rey$ scales with the velocity and diameter of the vessel:
\begin{equation}
\label{eq:ReBeta}
\Rey (z) = \frac{d||\boldsymbol{v}(z)||}{\nu_\text{atm}(z)} \propto \frac{\beta^{1/2}d}{\nu_\text{atm}(z)}
\end{equation}
Combining Eq. \ref{eq:vz} and \ref{eq:ReBeta}, we note that in a hypothetical atmosphere where $\nu_\text{atm}(z) \propto [\rho_\text{atm}(z)]^{-0.5}$, an object falling at terminal velocity will stay at a perfectly fixed Reynolds number for the entire descent. The VIRA model \cite{seiff1985models} predicts a relationship closer to $\nu_\text{atm} \propto \rho_\text{atm}^{-0.9}$, which accounts for the slow Reynolds number growth.
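Explicitly, combining the terminal-velocity scaling $v\propto\rho_\text{atm}^{-1/2}$ of Eq. \ref{eq:vz} with Eq. \ref{eq:ReBeta} gives
\begin{equation*}
\Rey(z) \propto \frac{d}{\nu_\text{atm}(z)\sqrt{\rho_\text{atm}(z)}} \propto d\,\rho_\text{atm}(z)^{0.9-0.5} = d\,\rho_\text{atm}(z)^{0.4},
\end{equation*}
so a viscosity exponent of $-0.5$ would cancel the density growth exactly, while the actual $-0.9$ leaves only the weak residual growth seen in Fig. \ref{fig:flightTrajectory}b.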
\subsection{Thermal Correlations \label{sub:thermalResults}}
Choosing a specific drag coefficient clearly affects the thermal design: a faster dropping probe will have stronger external convection, while a slower probe will have a longer timeline for internal conduction to occur through the insulation. Depending on the parameters of the problem, one would expect that dropping faster or slower are both viable strategies to minimize thermal effects.
\begin{figure}[t]
\centering
\includegraphics[width=\figwidth]{thermalPerformance.eps}
\caption{Thermal Resistance in Descent (Probe A) \label{fig:thermalPerformance}}
\end{figure}
However, the particulars of the Venus environment make \emph{dropping faster always more thermally efficient} once entry is performed, as the thermal resistances act in series and the insulation is by far the most limiting, and supersonic heating of the Venus gas is negligible after heat shield jettison. Taking Probe A again as an example in Fig. \ref{fig:thermalPerformance}, we note that the convection and pressure vessel are of similar resistance to the heat flow, while the insulation required to ensure survival is 2.5 orders of magnitude more resistant ($R_\text{insul} \gg R_\text{convect}$ and $R_\text{insul} \gg R_\text{vessel}$). \citet{achenbach1978heat} predicts a roughly linear relationship between the airspeed and convective transfer coefficient, meaning that the probe has to drop orders of magnitude slower before the convective resistance need be considered as anything other than zero. In other words, convection is so strong that the skin temperature rapidly saturates at the local atmospheric temperature, and the insulation then constricts the heat flow from there.
Furthermore, the extremely high convection rate reduces the sensitivity of our analysis to Achenbach's heat transfer correlations. An infinite convection coefficient would act similarly to Achenbach's: the insulation is the only effective resistive bottleneck for the heat flow.
Applying this observation to the design problem, a spectrum of solutions exist in each probe for allocating mass between the insulation layer and the heat sink. Using more heat sink material reduces the required insulation and vice versa, although the heat sink mass rises rapidly with very thin insulation. Figure \ref{fig:thermalOptimum} illustrates the tradeoff between the two components as the insulation thickness is varied in Probe A.
An additional constraint is needed to close the problem, such as a cost function to minimize. Of note, both the minimum mass and minimum volume optimizations (shown in Fig. \ref{fig:thermalOptimum}) can be solved analytically for the dominant conduction effect. Referring first to the minimum mass solution, the total mass of the thermal components can be found by manipulating Eqs. \ref{eq:sphereResistance} - \ref{eq:melting}:
\begin{align}
\label{eq:massInsulAndSink}
m_\text{insul} &= \frac{4}{3}\pi\rho_\text{insul} \left[r_\text{insul}^3-(r_\text{insul}-h_\text{insul})^3\right] \\
m_\text{sink} &= \int_{0}^{t_\text{descent}} \frac{T_\text{atm}(\tau)-T_\text{sink}}{\Delta h_\text{sink} \, R_\text{insul}} d\tau \\
&+ \frac{t_\text{descent}\dot{Q}_\text{internal}}{\Delta h_\text{sink}} \nonumber
\end{align}
Performing the minimization of the net mass including safety factors ($m_\text{insul}+f_\text{sink} m_\text{sink}$) with respect to the insulation thickness, we obtain:
\begin{equation}
\label{eq:calculus}
\frac{\partial m_\text{insul}}{\partial h_\text{insul}} + \frac{\partial m_\text{sink}}{\partial h_\text{insul}} = 0 , \,\,\, \frac{\partial^2 m_\text{insul}}{\partial h^2_\text{insul}} + \frac{\partial^2 m_\text{sink}}{\partial h^2_\text{insul}} > 0
\end{equation}
\begin{equation}
\label{eq:optim}
h_\text{insul,optim} = \frac{1}{2}r_\text{insul}-\frac{1}{2}r_\text{insul}\sqrt{1-4\sqrt{K/r_\text{insul}^2}}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=\figwidth]{thermalDesign.eps}
\caption{Optimized Insulation Thickness (Probe A) \label{fig:thermalOptimum}}
\end{figure}
where quantity $K$ is dependent on the probe geometry and trajectory as:
\begin{equation}
\label{eq:C}
K = \frac{f_\text{sink}k_\text{insul}\int_{0}^{t_\text{descent}} [T_\text{atm}(\tau)-T_\text{sink}] d\tau}{\eta_\text{insul}\Delta h_\text{sink} \rho_\text{insul}}
\end{equation}
In the thin shell limit ($h_\text{insul}/r_\text{insul} \ll 1$), Eq. \ref{eq:optim} reduces to $h_\text{insul,optim} \approx \sqrt{K}$. Replacing $\rho_\text{insul}$ with $\rho_\text{sink}$ in Eq. \ref{eq:C} results in the minimum-volume solution of the two components ($m_\text{insul}/\rho_\text{insul}+m_\text{sink}/\rho_\text{sink}$), rather than minimum-mass.
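As a short verification of the thin-shell claim: with $m_\text{insul}\approx4\pi\rho_\text{insul}r_\text{insul}^2h_\text{insul}$ and $1/R_\text{insul}\approx 4\pi k_\text{insul}r_\text{insul}^2/(\eta_\text{insul}h_\text{insul})$ from Eq. \ref{eq:insulResistance},
\begin{equation*}
f_\text{sink}m_\text{sink} \approx \frac{4\pi\rho_\text{insul}r_\text{insul}^2 K}{h_\text{insul}} + \text{const.}, \qquad
\frac{\partial}{\partial h_\text{insul}}\left(m_\text{insul}+f_\text{sink}m_\text{sink}\right)=0 \implies h_\text{insul,optim}=\sqrt{K},
\end{equation*}
with the second derivative positive, confirming a minimum.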
\begin{figure}[t]
\centering
\includegraphics[width=\figwidth]{thermalCorrelate.eps}
\caption{Insulation Thickness Correlation \label{fig:thermalCorrelate}}
\end{figure}
Figure \ref{fig:thermalCorrelate} illustrates the optimal insulation thickness for each simulated probe, for both the minimum-mass and minimum-volume optima. As the descent time, and therefore the heat load, depends primarily on the ballistic coefficient, we again see a data collapse with respect to $\beta$. For thin shells, Eq. \ref{eq:optim} reduces to an insulation thickness proportional to $\beta^{-1/4}$:
\begin{equation}
\label{eq:b14}
h_\text{insul,optim} \approx \sqrt{K} \propto \sqrt{t_\text{descent}} \propto \beta^{-1/4}
\end{equation}
Most tested probes fall along this $\beta^{-1/4}$ relation, with the exception of the smallest diameters that challenge the thin shell limit. All our tabulated probes (A, B, etc.) use the minimum-mass solution.
\subsection{Payload Mass Fraction \label{sub:payload}}
In our methodology, the payload mass is simply that left over after subtracting the subsystem masses:
\begin{equation}
\label{eq:massPayload}
m_\text{payload} = m - m_\text{insul} - m_\text{sink} - m_\text{vessel}
\end{equation}
Combining Eq. \ref{eq:massInsulAndSink} and \ref{eq:optim}, we can first derive the mass fraction dedicated to the thermal system. Using the thin-shell assumption, $\dot{Q}_\text{internal} \ll \dot{Q}_\text{external}$, and a constant dominating drag coefficient for the thermally significant subsonic descent portion, we reach the following scaling for our simulation inputs and material choices:
\begin{align}
\frac{m_\text{insul} + m_\text{sink}}{m} &\propto \frac{1}{\beta^{5/4}C_D} \left(1-\frac{h_\text{vessel}}{r} \right)^2 \sqrt{\frac{f_\text{sink}}{\eta_\text{insul}}\cdot\frac{\rho_\text{insul} k_\text{insul}}{\Delta h_\text{sink}}} \nonumber \\
&\propto \frac{C_D^{1/4}}{m^{5/12} \rho_\text{bulk}^{5/6}} \propto \frac{C_D^{1/4}}{d^{5/4} \rho_\text{bulk}^{5/4}}
\label{eq:massFracThermal}
\end{align}
where again ${h_\text{vessel}}/{r} = \sqrt{[f_\text{vessel} P_\text{atm}(0)]/[0.365 \eta_\text{T} E_\text{vessel}]}$. Of note, while the insulation thickness is dependent only on the ballistic coefficient and material properties (Eq. \ref{eq:b14}), the total mass of the thermal system depends on a coupling of multiple inputs.
Similarly, inverting the pressure vessel buckling criterion Eq. \ref{eq:pressureVessel} produces a mass fraction correlation for the pressure vessel, again assuming thin-shells:
\begin{equation}
\label{eq:massFracPV}
\frac{m_\text{vessel}}{m} = \frac{3\rho_\text{vessel}}{\rho_\text{bulk}}\frac{h_\text{vessel}}{r} \propto \frac{1}{\rho_\text{bulk}}
\end{equation}
The payload mass fraction of the probe (e.g. the efficiency of the design) is therefore maximized for probes that are:
\begin{enumerate}
\item \emph{Large} - Larger diameters $d$ dilute the mass of the thermal system ($d^{-5/4}$ in Eq. \ref{eq:massFracThermal}), as the surface area to volume ratio decreases. Small probes, by necessity, cannot take advantage of this effect.
\item \emph{Streamlined} - Smaller drag coefficient $C_D$ lowers the thermal load for any given sized probe ($C_D^{1/4}$ in Eq. \ref{eq:massFracThermal}).
\item \emph{Dense} - Larger bulk density $\rho_\text{bulk}$ dilutes the mass of both the pressure vessel and thermal system ($\rho_\text{bulk}^{-5/4}$ and $\rho_\text{bulk}^{-1}$ in Eq. \ref{eq:massFracThermal}-\ref{eq:massFracPV}).
\end{enumerate}
\section{Probe Design}
\subsection{Mass Tradeoffs}
All three of the maximum payload paradigms discussed above (Sec. \ref{sec:results}.\ref{sub:payload}) also act to shorten the probe descent time, thereby creating a strong tradeoff in the system design of Venus atmospheric probes. Ideally, a Venus probe would have both a large payload \emph{and} a long descent time for science collection and transmission - however this necessarily increases the total mass of payload, structure, and thermal protection. The probe trajectory, therefore, must be carefully selected for the given science constraints of the mission. Small payloads may still be delivered at small mission mass.
\begin{figure}[t]
\centering
\includegraphics[width=\figwidth]{payload.eps}
\caption{Mass Costs for a target Payload and Descent Time ($\rho_\text{payload}$ = 0.7 g/cm\textsuperscript{3}) \label{fig:payload}}
\end{figure}
Figure \ref{fig:payload} provides summary data to assist the probe designer with this tradeoff. For a Venus probe of specified mission definition (e.g. payload mass and descent time), we derive the total mass cost and required drag coefficient. All parameters in this figure are solved iteratively using the expressions derived in Sec. \ref{sec:results}.\ref{sub:payload} for a payload density of $\rho_\text{payload}=0.7$ g/cm\textsuperscript{3}.
Figure \ref{fig:payload} includes a data point for the PV small probe based on published mass and descent time data. Unfortunately, the published PV data does not include a direct payload mass estimate that can be compared to our prediction of $\approx23$ kg as shown. However, \citet{bienstock2004pioneer} does quote a value of 5 kg of science instrument mass, which is a reasonable \textasciitilde20\% fraction of our payload mass that must also include structure, power, avionics and communications hardware.
The total mass cost in Figure \ref{fig:payload} can be subdivided into two components: (1) an initial cost for descending to Venus without any spare mass allocated for payload, and (2) a marginal cost for either increasing the payload or the descent time:
\subsubsection{Initial Mass Cost at Zero Payload}
Following the y-axis in Fig. \ref{fig:payload}a, the zero-payload cost increases with larger drag coefficients. Combining Eq. \ref{eq:massPayload}-\ref{eq:massFracPV}, this scaling for a constant $C_D$ can be solved as:
\begin{gather}
m \big|_{m_\text{payload}=0} \propto C_D^{3/5} \left(1-\frac{3\rho_\text{vessel}}{\rho_\text{bulk}\big|_{m_\text{payload}=0}}\frac{h_\text{vessel}}{r}\right)^{-12/5} \times \nonumber \\
\left(\rho_\text{bulk}\big|_{m_\text{payload}=0}\right)^{-2} \left(\frac{f_\text{sink}}{\eta_\text{insul}}\cdot\frac{\rho_\text{insul} k_\text{insul}}{\Delta h_\text{sink}}\right)^{6/5} \left(1-\frac{h_\text{vessel}}{r} \right)^{24/5}
\label{eq:zeroPayloadMass}
\end{gather}
where $\rho_\text{bulk}\big|_{m_\text{payload}=0}$ is the bulk density at zero payload, dependent only on material properties.
Density $\rho_\text{bulk}\big|_{m_\text{payload}=0}$ can be derived by noting that, in the zero-payload case, the thermal system volume is the difference between the total and vessel volumes:
\begin{gather}
\rho_\text{bulk}\big|_{m_\text{payload}=0} = \frac{1}{\mathcal{V}} \left[\mathcal{V}_\text{vessel}\rho_\text{vessel}+(\mathcal{V}-\mathcal{V}_\text{vessel})\rho_\text{thermal}\right] \nonumber\\
= \left(1-3\frac{h_\text{vessel}}{r}\right) \frac{2\rho_\text{sink}\rho_\text{insul}}{\rho_\text{sink}+\rho_\text{insul}}+3\rho_\text{vessel}\frac{h_\text{vessel}}{r}
\end{gather}
At the drag coefficients tested, the zero-payload cutoff occurs at 2.1 kg ($C_D=0.05$), 4.9 kg ($C_D=0.2$), and 29 kg ($C_D=4$). Such masses represent an ultimate minimum for vanishingly small (i.e. grams of circuit board) payloads given our material choice and model.
\subsubsection{Marginal Costs of Payload and Descent Time}
For each kilogram of increased payload, the mass of the probe increases by more than a kilogram, as the added payload cascades into additional system mass. Similarly, increasing the descent time from any design point also increases the total mass. To derive a rule of thumb, we can fit a linear relation to the roughly parallel contour lines in Fig. \ref{fig:payload}:
\begin{equation}
m \approx k_1 (t_\text{descent}-k_0) + k_2 m_\text{payload} \label{eq:linearcost}
\end{equation}
where costs $k_1\approx 0.30$ kg/min, $k_2 \approx 2.4$ kg/kg, and $k_0 \approx 15$ min for the range of payload masses less than 10 kg and our chosen material properties. For any given payload size, the lowest plausible drag coefficient ($C_D=0.05$) creates a cutoff limit for shrinking the total mass and descent time.
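This rule of thumb is trivially evaluated in code (a sketch; the fitted coefficients apply only to our chosen material set and payloads under 10 kg):

\begin{verbatim}
def total_mass(m_payload, t_descent, k0=15.0, k1=0.30, k2=2.4):
    # linear fit to the mass-cost contours; masses in [kg], time in [min]
    return k1 * (t_descent - k0) + k2 * m_payload

print(total_mass(24, 23))   # ~60 kg, vs. the 52 kg of Probe A in Table 1
\end{verbatim}

The $\approx$15\% overshoot for Probe A illustrates the coarseness of the linear fit.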
\subsection{Point Designs}
Carrying on to specific examples, the point designs A, B, and C illustrate the compromises required for shrinking the total mass: both the descent time and payload mass must necessarily drop. Again, Table \ref{tab:probes} enumerates their parameters in detail.
Probe A at diameter $d=50$ cm approximates the Pioneer Venus Small Probe, with the deceleration module removed, hastening its descent to the surface to only 23 minutes. Weighing 52 kg, this probe can carry a little under half its mass as payload given the reduced thermal load ($m_\text{payload}/m = 0.47$). Its payload is slightly larger than the estimate for PV-SP, yet it is delivered with less total mass.
Probe B at $d=40$ cm weighs half of Probe A, so one would expect much less payload could be carried as the effective thermal load increases with the larger surface area to volume ratio. However, by adding the cowling to reduce the drag, we maintain a similar mass efficiency: the payload mass fraction is essentially unchanged ($m_\text{payload}/m = 0.48$).
Probe C represents an extreme design of $d=25$ cm, where the streamlined 5.7 kg probe only carries 1.3 kg of payload ($m_\text{payload}/m = 0.23$). The mass efficiency is necessarily compromised to keep the total mass low, and continued reduction of the total mass will drop the payload to zero.
Intriguingly, the zero-payload mass for a streamlined $C_D=0.05$ probe is within 3U cubesat range ($m<4$ kg). While a Venus descent probe by no means needs to fit within cubesat design constraints, it is indicative that probes of this size are perhaps plausible for Venus. Such an atmospheric probe would carry a kilogram-scale payload, with science collection and transmission performed in approximately 20 minutes.
\subsection{Material Selection \label{sub:material}}
For each of the materials in Table \ref{tab:materials}, improved properties would naturally further miniaturize the design points. The relevant selection criteria can be determined by inspecting our derived scaling laws:
\begin{itemize}
\item \emph{Pressure Vessel Mass}: Scales as $\rho_\text{vessel}/\sqrt{E_\text{vessel}}$ (Eq. \ref{eq:massFracPV}), so lightweight stiff materials are preferred.
\item \emph{Thermal System Mass}: Scales as $\sqrt{\rho_\text{insul} k_\text{insul}/(\eta_\text{insul}\Delta h_\text{sink})}$ (Eq. \ref{eq:massFracThermal}), so lightweight insulators, low leak rates, and high heat of fusion heat sinks are preferred. As the insulation resides inside the pressure vessel in our model, a thicker vessel (lower stiffness $E_\text{vessel}$) also slightly decreases the thermal load by reducing the insulation area, though at the cost of higher payload density.
\item \emph{Zero-Payload Initial Mass}: Scales with a complex interplay of parameters, where denser materials generally lower the mass as $(\rho_\text{bulk}\big|_{m_\text{payload}=0})^{-2}$ to leading order and better thermal materials as $(\rho_\text{insul} k_\text{insul}/(\eta_\text{insul}\Delta h_\text{sink}))^{6/5}$ (Eq. \ref{eq:zeroPayloadMass}).
\end{itemize}
Accordingly, given otherwise equivalent materials of the same functional expression ($\rho/\sqrt{E}$ or $\rho k$, etc.), a denser material will lower the zero-payload initial mass cost without increasing marginal costs. Moreover, insulation effectiveness is an especially strong driver: a hypothetical insulator of half the conductivity would reduce the mass of a zero-payload probe by $56\%$, as well as the mass fraction of the thermal system by $29\%$. Similar sensitivity will be realized from improvements or reductions in the assumed parasitic heat flows due to electrical feedthroughs in the insulation.
This mass sensitivity to the material choice also acts as a proxy for other miniaturized designs of different system architectures. \citet{lorenz1998design} reports probes on the order of $1$ kg by removing the pressure vessel and relying on the thermal capacity of the probe itself. The constraints on such a payload eliminate much of the protection mass included in our model, driving $h_\text{vessel}/r$ to zero and increasing the effective sink performance $\Delta h_\text{sink}$.
In summary, further mass savings can be obtained through a variety of factors: improving the thermal materials, hardening the payload, or streamlining the probe to accelerate the descent. Depending on the mission requirements, different methods should be employed to minimize the mass.
\section{Conclusion}
Venus drop probes are more easily accommodated as secondary mission payloads, either as single probes or in bulk, if the internal components and protection mass can be miniaturized. In this study, we have found that probes of substantially lower mass than Pioneer Venus appear plausible from a thermo-mechanical perspective, provided that streamlining is used to shorten the descent time through the atmosphere. Falling faster mitigates the timeline that heat conduction can occur, thereby reducing the heat load and associated thermal system mass.
However, fulfilling any science objective with a Venus probe requires adequate time to both collect and transmit data. Accordingly, the system design of a probe must balance the descent timeline with the payload size. For example, one of our proposed 5.7 kg probe designs carries 1.3 kg of payload at an accelerated 17 minute descent. Our kilogram-scale cutoff is already consistent with point designs from the literature \cite{huntress2011soviet,bienstock2004pioneer,lorenz1998design,wells2004atmospheric}, most similar in size to the VEVA imaging probes \cite{kerzhanovich2003low}. Future work is needed to match our mass tradeoff analysis with specific payloads, science goals, and transmission rates. More complex cases are also possible: for example, a variable drag or lift device that slows the descent at specific altitudes (such as a guided aerosonde \cite{matthies2014venus}) would incur further thermal penalties, especially in the final 5 km near the surface, but lower the data rate requirement. Alternatively, a probe that is only intended to collect data in the lower clouds between 50 and 45 km faces an easier thermal environment and miniaturization opportunity.
Our model uses simplified conservative approximations to ensure robustness of the results, so there is potential for higher fidelity engineering designs to yield further reductions in probe mass. Specifically, we handicap the insulation effectiveness, rely only on the heat capacity of the phase-change heat sink, and make no attempt to harden any payload for the thermal environment. For our conservative model, we derive analytic expressions for optimizing the mass of insulation and heat sink for any given design point. For vanishingly small payloads, the total mass of the probe is still in the low kilograms simply due to the protection mass required. Such designs represent the ultimate limit for Venus probes of conventional payloads and existing materials.
According to the scaling laws derived in our study, research efforts should prioritize the following areas to obtain future improvements in probe designs:
\begin{itemize}
\item \emph{Streamlined cowlings:} High ballistic coefficients allow small probes of little heat capacity to descend quickly, mitigating the limiting conductive thermal load. To leading order, the mass fraction of the thermal system grows with $C_D^{1/4}$ (Eq. \ref{eq:massFracThermal}).
\item \emph{Highly integrated thermal structural designs:} Compact packing allows denser probes, which further quickens the descent time for a given probe size. To leading order, the mass fraction of the thermal and pressure systems grow with $\rho_\text{bulk}^{-5/4}$ and $\rho_\text{bulk}^{-1}$ respectively (Eq. \ref{eq:massFracThermal}-\ref{eq:massFracPV}).
\item \emph{Small, low-mass payloads:} Each increased kilogram of payload cascades into more than a kilogram of added thermal and pressure compensation. For our selected parameter set, payload mass is inflated by 2.4 times into the total mass (Eq. \ref{eq:linearcost}).
\item \emph{Data transmission and compression:} Advanced data communication allows more science to be obtained for an expected short descent time. Roughly, every three minutes of descent requires an additional kilogram of mass (Eq. \ref{eq:linearcost}).
\item \emph{Improved material properties:} High temperature insulation and low-leak passthroughs are an especially strong design driver. Halving the specific conductivity of the insulation more than halves the probe mass (Sec. \ref{sec:results}:\ref{sub:material}).
\item \emph{High temperature and pressure electronics:} The ability to rely on the heat capacity and pressure rating of internal components replaces unneeded protection mass with components that directly enhance science return (see \citet{lorenz1998design}).
\end{itemize}
Finally, although our analysis assumes a probe dropped from a jettisoned heat shield at 65 km altitude, we expect similar minimum-mass limits to apply for a sonde dropped from a Venus aerial platform flying at or above 50 km. The thermal environment only begins becoming challenging below this altitude, and the sonde reaches terminal velocity so quickly that a slower initial aerial velocity would not appreciably affect the analysis.
\section*{Funding Sources}
The research described in this paper was funded by the Jet Propulsion Laboratory, California Institute of Technology, under contract NNN12AA01C with the National Aeronautics and Space Administration.
\section*{Acknowledgments}
The authors wish to acknowledge the Venus Bridge and Venus Aerial Vehicles study groups for motivating this research and providing thoughtful feedback throughout its development. Specific thanks to James Cutts for providing a leadership role in both study groups, facilitating discussions with the larger Venus community, and lending his system engineering expertise.
\bibliographystyle{new-aiaa}
Given two latin squares of the same order, a latin trade describes the differences between them. Early motivation \cite{DraKep1} for their study arose from considering the differences between the operation tables of a finite group and a latin square of the same order, that is: what is the `distance' between a group and a latin square?
The study of the topological and geometric properties of latin trades has led to significant progress towards understanding such differences, see for example \cite{CavWan, DraHamKal, BlackburnTMcC, TMcC, Szabados}, also see \cite{Cav-surv} for a survey of earlier results.
Given a latin trade it may be the case that the constituent partial latin squares are not `contained' (do not embed) in any group operation table, \cite{CavWan}. Hence, it is desirable to identify those that are. Connected latin bitrades of maximum size, equivalently spherical latin bitrades, provide a family of latin bitrades for which the constituent partial latin squares do embed. We are interested in the `minimal group' that such constituent partial latin squares embed in, and indeed in what groups arise as such minimal groups.
\subsection{Spherical latin bitrades}
A \textit{partial latin square} $P$ is an $\ell\times m$ array, in which the cells either contain an element of a set $S$ of symbols or are empty, such that each row and each column contains each of the symbols of $S$ at most once. Without loss of generality we let $S=\{s_1,s_2,\ldots,s_n\}$ and index the rows and columns by the sets $R=\{r_1,r_2,\ldots,r_\ell\}$ and $C=\{c_1,c_2,\ldots,c_m\}$ respectively (we may assume that each symbol in $S$ occurs at least once in the array and the rows of $R$ and columns of $C$ are all nonempty). As such a partial latin square $P$ can be considered to be a subset of $R\times C\times S$ such that if $(r_1,c_1,s_1)$ and $(r_2,c_2,s_2)$ are distinct triples in $P$, then at most one of $r_1=r_2$, $c_1=c_2$ and $s_1=s_2$ holds.
A \textit{latin bitrade} is an ordered pair, $(W,B)$ say, of non-empty partial latin squares such that for each triple $(r_i,c_j,s_k)\in W$ (respectively $B$) there exists unique $r_{i'}\neq r_i$, $c_{j'}\neq c_j$ and $s_{k'}\neq s_k$ such that
$$\big\{(r_{i'},c_j,s_k),(r_i,c_{j'},s_k),(r_i,c_j,s_{k'})\big\}\subset B\text{ (respectively $W$)}.$$
Note that $(W,B)$ is a latin bitrade if and only if $(B,W)$ is also a latin bitrade. The \textit{size} of such a latin bitrade is $|W|$ (equivalently $|B|$). A latin bitrade $(W,B)$ for which there does not exist any latin bitrade $(W',B')$ such that $W'\subsetneq W$ and $B'\subsetneq B$ is said to be \textit{connected}.
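For example, the smallest latin bitrade is an \textit{intercalate}: taking $R=\{r_1,r_2\}$, $C=\{c_1,c_2\}$ and $S=\{s_1,s_2\}$,
\begin{align*}
W&=\big\{(r_1,c_1,s_1),(r_1,c_2,s_2),(r_2,c_1,s_2),(r_2,c_2,s_1)\big\},\\
B&=\big\{(r_1,c_1,s_2),(r_1,c_2,s_1),(r_2,c_1,s_1),(r_2,c_2,s_2)\big\}
\end{align*}
is a connected latin bitrade of size four; we return to this example below.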
Let $(W,B)$ be a latin bitrade; for each row, $r$ say, of $(W,B)$ a permutation $\rho_r$ of the symbols in row $r$ can be defined by $\rho_r(s)=s'$ if and only if $(r,c,s)\in W$ and $(r,c,s')\in B$ for some $c$ in $C$. A row $r$ for which $\rho_r$ consists of a single cycle is said to be \textit{separated}. Similar definitions hold for separated columns and separated symbols. A latin bitrade in which each row, each column and each symbol is separated is called a \textit{separated latin bitrade}.
Suppose that $(W,B)$ is a latin bitrade which is not separated. Then, replacing each non-separated row $x$ (respectively column, symbol) by a new row (respectively column, symbol) for each of the cycles in $\rho_x$, we obtain a separated latin bitrade. See the survey paper \cite{Cav-surv} for further details and discussion.
A connected latin bitrade $(W,B)$ can be used to construct a face two-coloured triangulation $\mathcal{G}_{W,B}$ of a pseudo-surface $\Sigma$ in which the vertex set is $R\sqcup C\sqcup S$ and there is an edge between a pair of vertices if and only if the vertices occur together in a triple of $W$ (equivalently a triple of $B$). For each triple $(r,c,s)\in W$ a white triangular face with vertices $r,c,s$ is constructed and for each $(r',c',s')\in B$ a black triangular face with vertices $r',c',s'$ is constructed. As $(W,B)$ is a bitrade, the graph underlying $\mathcal{G}_{W,B}$ is simple, and, as $(W,B)$ is connected, $\mathcal{G}_{W,B}$ is also connected. The pseudo-surface $\Sigma$ is a true surface if the rotation at each vertex is a full rotation; this occurs if and only if $(W,B)$ is separated (in which case each row, column or symbol permutation corresponds to the rotation at the corresponding vertex). If $\Sigma$ is not a surface, then replacing each pinch point of multiplicity $t$ with $t$ vertices, one on each of the sheets at the pinch point, corresponds to the above construction taking a non-separated bitrade to a separated one. As the triangulation $\mathcal{G}_{W,B}$ is face two-coloured and the underlying graph is vertex three-coloured it follows, see the proof of Theorem 10.1 in \cite{GrannellGriggs}, that $\mathcal{G}_{W,B}$ is orientable.
The \textit{genus} of a separated connected latin bitrade is the genus of the surface obtained in the above manner; in particular separated connected latin bitrades of genus zero are referred to as \textit{spherical latin bitrades}. Note that for any connected latin bitrade of size $\ell$ we have that $|R|+|C|+|S|\leq \ell+2$, with equality if and only if the bitrade is a spherical latin bitrade, see \cite{BlackburnTMcC}. That is, spherical latin bitrades are the connected latin bitrades of minimal size (with respect to the sum of the number of rows, columns and symbols).
In \cite{CavLis} Cavenagh and Lison\v{e}k prove the following result.
\begin{theorem}[Cavenagh \& Lison\v{e}k, \cite{CavLis}]
\label{thm:CavLis}
Spherical latin bitrades are equivalent to spherical Eulerian triangulations whose underlying graphs are simple.
\end{theorem}
Note that an Eulerian graph that has an embedding in the sphere is necessarily vertex three-colourable \cite{Hea}. It is not hard to generalise Theorem \ref{thm:CavLis} to surfaces of higher genus, however as face two-coloured triangulations of surfaces of higher genus may not be vertex three-colourable, an additional condition is required.
\begin{cor}
Separated connected latin bitrades of genus $g$ are equivalent to vertex three-colourable Eulerian triangulations of genus $g$ whose underlying graphs are simple.
\end{cor}
\subsection{Embeddings of latin bitrades into abelian groups}
Two partial latin squares are said to be \textit{isotopic} if they are equal up to a relabelling of their sets of rows, columns and symbols. A partial latin square $P$, with row set $R$, column set $C$ and symbol set $S$, is said to \textit{embed in an
abelian group $\Gamma$} if there exist injective maps $\phi_1:R\rightarrow \Gamma$, $\phi_2:C\rightarrow \Gamma$ and $\phi_3:S\rightarrow \Gamma$ such that $\phi_1(r)+\phi_2(c)=\phi_3(s)$ for all $(r,c,s)\in P$. In other words $P$ is isotopic to a partial latin square contained in the operation table of $\Gamma$. See Figure \ref{fig:Kyle-example} for an example.
By defining $\phi|_R=\phi_1$, $\phi|_C=\phi_2$, and $\phi|_S=-\phi_3$ it follows, see \cite{BlackburnTMcC}, that $P$ embeds in an abelian group $\Gamma$ if and only if there exists a function $\phi:R\sqcup C\sqcup S\rightarrow \Gamma$ that is injective when restricted to each of $R$, $C$ and $S$ and is such that $\phi(r)+\phi(c)+\phi(s)=0$ for all $(r,c,s)\in P$. The map $\phi$ is called an \textit{embedding} of $P$.
An abelian group $\Gamma$ is said to be a \textit{minimal abelian representation} of a partial latin square $P$ if $P$ embeds in $\Gamma$ and the image of $\phi$ generates $\Gamma$ for all embeddings $\phi$ of $P$ in $\Gamma$.
\begin{figure}[!h]
\begin{center}
\begin{tabular}{ccc}
\begin{tabular}{|ccc|}
\hline
$a$ & $b$ & $c$ \\
$c$ & & $a$ \\
& $a$ & $b$ \\
\hline
\end{tabular}
&$\qquad$ &
\begin{tabular}{c|cccc|}
$+$ & $0$ & $1$ & $2$ & $3$ \\
\hline
$0$ & $0$ & $\mathbf{1}$ & $\mathbf{2}$ & $\mathbf{3}$ \\
$1$ & $1$ & $2$ & $3$ & $0$ \\
$2$ & $2$ & $\mathbf{3}$ & $0$ & $\mathbf{1}$ \\
$3$ & $3$ & $0$ & $\mathbf{1}$ & $\mathbf{2}$ \\
\hline
\end{tabular}
\end{tabular}
\end{center}
\caption{The partial latin square above left embeds into $\mathbb{Z}_4$ as illustrated by the bold faced entries in the operation table of $\mathbb{Z}_4$, above right.}
\label{fig:Kyle-example}
\end{figure}
Two partial latin squares are said to be \textit{conjugate} if they are equal up to permutations of the roles of rows, columns and symbols. Two partial latin squares, say $P$ and $Q$, for which a partial latin square isotopic to $P$ is conjugate to a partial latin square isotopic to $Q$ are said to be
in the same \textit{main class}. Note that if a partial latin square $P$ has an embedding in an abelian group $\Gamma$, every partial latin square in the same main class as $P$ also has an embedding in $\Gamma$.
As we are interested in embeddings (into abelian groups) of partial latin squares (and given that if a partial latin square $P$ embeds in an abelian group $\Gamma$, so does any partial latin square isotopic to $P$) from here on we will assume that the row, column and symbol sets of a partial latin square are pairwise disjoint.
In \cite{CavDra} Cavenagh and Dr\'apal asked the following questions ``Can the individual partial latin squares of a connected separated latin bitrade be embedded into the operation table of an abelian group? If this is not true in general is it true for spherical latin bitrades?''. The case of spherical latin bitrades was solved by Cavenagh and Wanless in \cite{CavWan} and independently by Dr\'apal, H\"am\"al\"ainen and Kala in \cite{DraHamKal}. Cavenagh and Wanless \cite{CavWan} also showed that separated connected latin bitrades of higher genus exist for which the constituent partial latin squares do not embed in any group. Hence our focus on spherical latin bitrades.
Let $P$ be a partial latin square with row set $R$, column set $C$ and symbol set $S$. Let $V=R\cup C\cup S$ and define an abelian group $\mathcal{A}_P$ with generating set $V$ subject to the relations $\{r+c+s=0:(r,c,s)\in P\}$.
Note that, if $P$ and $Q$ are two partial latin squares in the same main class, then $\mathcal{A}_P\cong \mathcal{A}_Q$. Also, note that two partial latin squares, $P$ and $Q$, from different main classes may also satisfy $\mathcal{A}_P\cong \mathcal{A}_Q$ (see Figure 2 in \cite{TMcC}).
The group $\mathcal{A}_P$ has the `universal' property that any minimal abelian representation of $P$ is a quotient of $\mathcal{A}_P$, \cite{DraKep}, also see \cite{BlackburnTMcC}. Moreover $\mathcal{A}_P$ is of the form $\mathbb{Z}\oplus\mathbb{Z}\oplus\mathcal{C}_P$, again see \cite{BlackburnTMcC}. Dr\'apal et al \cite{DraHamKal} and Cavenagh and Wanless \cite{CavWan} proved that $\mathcal{C}_W$ is finite when $(W,B)$ is a spherical latin bitrade. So in this case $\mathcal{C}_W$ is the torsion subgroup of $\mathcal{A}_W$.
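For small examples these groups can be computed directly: $\mathcal{A}_P$ is the cokernel of the relation matrix, read off from its Smith normal form. The following sketch (assuming SymPy's \texttt{smith\_normal\_form} helper), applied to the intercalate trade $W$ given earlier, recovers $\mathbb{Z}\oplus\mathbb{Z}\oplus\mathbb{Z}_2$:

\begin{verbatim}
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def A_P(P, R, C, S):
    # cokernel of the relation matrix {r + c + s = 0 : (r,c,s) in P}
    gens = list(R) + list(C) + list(S)
    M = Matrix([[1 if g in t else 0 for g in gens] for t in P])
    D = smith_normal_form(M, domain=ZZ)
    inv = [D[i, i] for i in range(min(M.shape)) if D[i, i] != 0]
    free_rank = len(gens) - len(inv)
    return free_rank, [d for d in inv if d != 1]   # Z^rank plus torsion

W = [('r1','c1','s1'), ('r1','c2','s2'), ('r2','c1','s2'), ('r2','c2','s1')]
print(A_P(W, ['r1','r2'], ['c1','c2'], ['s1','s2']))   # (2, [2])
\end{verbatim}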
Cavenagh and Wanless conjectured that $\mathcal{C}_W\cong \mathcal{C}_B$ (and hence $\mathcal{A}_W\cong \mathcal{A}_B$), \cite{CavWan}, also see \cite{Kou,BCC}. This is indeed the case.
\begin{theorem}[Blackburn \& McCourt \cite{BlackburnTMcC}]
\label{thm:BlackburnTMcC}
Let $(W,B)$ be a spherical latin bitrade, then $\mathcal{A}_W\cong\mathcal{A}_B\cong\mathbb{Z}\oplus\mathbb{Z}\oplus\mathcal{C}$, where $\mathcal{C}$ is finite.
\end{theorem}
The group $\mathcal{C}$ in Theorem \ref{thm:BlackburnTMcC} is referred to as the \textit{canonical group} of the spherical latin bitrade (see \cite{GruWan, TMcC}).
In \cite{CavWan} Cavenagh and Wanless asked the following question.
\begin{question}
\label{ques:main}
Which abelian groups arise as the canonical group of a spherical latin bitrade?\footnote{Cavenagh and Wanless actually asked this for the finite torsion subgroup of $\mathcal{A}_W$ as Theorem \ref{thm:BlackburnTMcC} was not established at the time.}
\end{question}
It is this question that we address in this paper.
For any cyclic group $\mathbb{Z}_n$ the existence of spherical latin bitrades whose canonical group is isomorphic to $\mathbb{Z}_n$ was established by Cavenagh and Wanless in \cite{CavWan}. They also noted that no spherical latin bitrade exists whose canonical group is isomorphic to $\mathbb{Z}_2\oplus\mathbb{Z}_2$.
Given a face 2-coloured triangulation of the sphere in which the underlying graph is not necessarily simple and leaving the definitions of $\mathcal{A}_W$ and $\mathcal{A}_B$ unchanged it is still the case that $\mathcal{A}_W\cong\mathcal{A}_B\cong\mathbb{Z}\oplus\mathbb{Z}\oplus\mathcal{C}$ where $\mathcal{C}$ is finite \cite{BlackburnTMcC}. In \cite{TMcC} the second author showed that given any finite abelian group $\Gamma$ there exists a face 2-coloured triangulation of the sphere whose canonical group is isomorphic to $\Gamma$. However, unless $\Gamma$ is a cyclic group the triangulations constructed have underlying graphs that are not simple.
In Section \ref{sec:exist} we prove the existence of several, general, infinite families of abelian groups that arise as canonical groups of spherical latin bitrades. Before doing so, we first prove that there exist infinitely many abelian groups that do not arise as the canonical group of any spherical latin bitrade.
\begin{theorem}
\label{thm:non-existence}
There does not exist a spherical latin bitrade whose canonical group is isomorphic to $\mathbb{Z}_2^k$ for any $k\geq 2$.
\end{theorem}
\begin{proof}
In the following we will make repeated use of the fact that for $u,v,w,x,y,z\in \mathbb{Z}_2^k$, if $u+w=y$, $v+w=z$ and $v+x=y$, then $u+x=z$.
Let $k\geq 2$ and suppose that $(W,B)$ is a spherical latin bitrade whose canonical group is isomorphic to $\mathbb{Z}_2^k$. So, by Theorem \ref{thm:BlackburnTMcC}, both $W$ and $B$ embed in $\mathbb{Z}^k_2$.
Recall that we may assume that the row, column and symbol sets of $W$ (and of $B$) are pairwise disjoint; denote them, respectively, by $R=\{r_1,r_2,\ldots, r_\ell\}$, $C=\{c_1,c_2,\ldots, c_m\}$ and $S=\{s_1,s_2,\ldots, s_n\}$. Let $\mathcal{G}_{W,B}$ be the related triangulation and $G$ be the underlying graph of this triangulation. As $\mathcal{G}_{W,B}$ has a proper face 2-colouring, $G$ is Eulerian, and, as $(W,B)$ is a latin bitrade, the minimum degree of $G$ is at least four. Moreover, $\mathcal{G}_{W,B}$ is a triangulation of the sphere, so, by Euler's formula, $G$ contains at least six vertices of degree four.
As spherical latin bitrades in the same main class all have isomorphic canonical groups, without loss of generality, we may assume that the degree of $r_1$ is four, and $(r_1,c_1,s_1),(r_1,c_2,s_2)\in B$ and $(r_1,c_1,s_2), (r_1,c_2,s_1)\in W$ where $c_1\neq c_2$ and $s_1\neq s_2$.
Hence, as $(W,B)$ is a latin bitrade, there exist $x_1,x_2,x_3,x_4\in R\setminus\{r_1\}$ such that $(x_1,c_2,s_1),(x_3,c_1,s_2)\in B$ and $(x_2,c_1,s_1),(x_4,c_2,s_2)\in W$ (see Figure \ref{fig:CaseC} for an illustration of the corresponding faces).
\begin{figure}[!hb]
\begin{center}
\scalebox{0.9}
{\begin{tikzpicture}[fill=gray!50, scale=1, vertex/.style={circle,inner sep=2,fill=black,draw}, dot/.style={circle,inner sep=0.7,fill=black,draw}]
\coordinate (r1) at (2,2);
\coordinate (c1) at (3,3);
\coordinate (s1) at (1,3);
\coordinate (c2) at (1,1);
\coordinate (s2) at (3,1);
\coordinate (x1) at (4,2);
\coordinate (x2) at (2,4);
\coordinate (x3) at (0,2);
\coordinate (x4) at (2,0);
\filldraw[color=gray!50] (r1) -- (c1) -- (s1) -- cycle;
\filldraw[color=gray!50] (r1) -- (c2) -- (s2) -- cycle;
\filldraw[color=gray!50] (x1) -- (c1) -- (s2) -- cycle;
\filldraw[color=gray!50] (x3) -- (c2) -- (s1) -- cycle;
\draw[thick] (x1) -- (x2) -- (x3) -- (x4) -- cycle;
\draw[thick] (c1) -- (s1) -- (c2) -- (s2) -- cycle;
\draw[thick] (s1) -- (s2);
\draw[thick] (c1) -- (c2);
\node at (3.6,3.85) [dot]{};
\node at (3.75,3.75) [dot]{};
\node at (3.85,3.6) [dot]{};
\node at (3.6,0.15) [dot]{};
\node at (3.75,0.25) [dot]{};
\node at (3.85,0.4) [dot]{};
\node at (0.15,3.6) [dot]{};
\node at (0.25,3.75) [dot]{};
\node at (0.4,3.85) [dot]{};
\node at (0.15,0.4) [dot]{};
\node at (0.25,0.25) [dot]{};
\node at (0.4,0.15) [dot]{};
\node at (r1) [vertex,label=east:$r_1$]{};
\node at (c1) [vertex,label=north east:$c_1$]{};
\node at (s1) [vertex,label=north west:$s_1$]{};
\node at (c2) [vertex,label=south west:$c_2$]{};
\node at (s2) [vertex,label=south east:$s_2$]{};
\node at (x1) [vertex,label=east:$x_1$]{};
\node at (x2) [vertex,label=north:$x_2$]{};
\node at (x3) [vertex,label=west:$x_3$]{};
\node at (x4) [vertex,label=south:$x_4$]{};
\end{tikzpicture}}
\end{center}
\caption{The vertex $r_1$ and nearby faces in $\mathcal{G}_{W,B}$.}
\label{fig:CaseC}
\end{figure}
As $W$ embeds in $\mathbb{Z}_2^k$, $x_2=x_4$ and, as $B$ embeds in $\mathbb{Z}_2^k$, $x_1=x_3$. Suppose that $x_1=x_2$. Let
$$W'=\{(r_1,c_1,s_2),(r_1,c_2,s_1),(x_1,c_1,s_1),(x_1,c_2,s_2)\}$$
and
$$B'=\{(r_1,c_1,s_1),(r_1,c_2,s_2),(x_1,c_1,s_2),(x_1,c_2,s_1)\}.$$
Then $(W',B')$ is a spherical latin bitrade such that $W'\subseteq W$ and $B'\subseteq B$. As $(W,B)$ is connected, it must be the case that $W'=W$ and $B'=B$. However, the canonical group of $(W',B')$ is $\mathbb{Z}_2$, a contradiction. So $x_1\neq x_2$, in which case $G$ contains a subgraph $H=(V,E)$ where $V=\{r_1,x_1,x_2,c_1,s_1,s_2\}$ and $E=\{r_1c_1,r_1s_1,r_1s_2, x_1c_1,x_1s_1,x_1s_2, x_2c_1,x_2s_1,x_2s_2\}$. However, $H$ is isomorphic to $K_{3,3}$, which contradicts $\mathcal{G}_{W,B}$ being a spherical embedding.
\end{proof}
\section{Existence results}
\label{sec:exist}
\subsection{Directed Eulerian spherical embeddings}
Let $D$ be a, not necessarily simple, digraph of order $n$ with vertex set $V(D)=\{v_1,v_2,\ldots v_n\}$. The \textit{adjacency matrix} $A=[a_{ij}]$ of $D$ is the $n\times n$ matrix where entry $a_{ij}$ is the number of arcs from vertex $v_i$ to vertex $v_j$. The \textit{asymmetric Laplacian} of $D$ is the $n\times n$ matrix $L(D)=B-A$ where $B$ is the diagonal matrix in which entry $b_{ii}$ is the out-degree of vertex $v_{i}$.
The digraph $D$ is said to be \textit{Eulerian} if, for each $v\in V(D)$, the out-degree at $v$ equals the in-degree at $v$. Hence, in an Eulerian digraph we will simply refer to the degree of a vertex $v$, i.e. $\deg v$.
Let $D$ be a connected Eulerian digraph of order $n$ with vertex set $V(D)=\{v_1,v_2,\ldots v_n\}$. Fix an $i$, where $1\leq i\leq n$, and define $L'(D,i)$ to be the matrix obtained by removing row and column $i$ from $L(D)$. As $D$ is connected and Eulerian, the group $\mathbb{Z}^{n-1}/\mathbb{Z}^{n-1}L'(D,i)$ is independent of the choice of $i$, see \cite[Lemma 4.12]{HolLevMesPerProWil}. Hence, the \textit{abelian sandpile group} of the connected Eulerian digraph $D$ can be defined to be the group $\mathcal{S}(D)=\mathbb{Z}^{n-1}/\mathbb{Z}^{n-1}L'(D,n)$; moreover $\mathcal{S}(D)\cong\mathbb{Z}^{n-1}/\mathbb{Z}^{n-1}L'(D,i)$, for any $1\leq i\leq n$.
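Concretely, $\mathcal{S}(D)$ can be read off from the Smith normal form of the reduced asymmetric Laplacian; a sketch (again assuming SymPy's \texttt{smith\_normal\_form}):

\begin{verbatim}
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def sandpile_group(A):
    # A: adjacency matrix of a connected Eulerian digraph D
    n = A.shape[0]
    L = Matrix.diag(*[sum(A[i, :]) for i in range(n)]) - A   # B - A
    D = smith_normal_form(L[:n-1, :n-1], domain=ZZ)          # L'(D, n)
    return [D[i, i] for i in range(n-1) if D[i, i] not in (0, 1)]

# bidirected triangle: three spanning arborescences, so S(D) = Z_3
print(sandpile_group(Matrix([[0,1,1],[1,0,1],[1,1,0]])))     # [3]
\end{verbatim}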
Consider an embedding $\mathcal{D}$ of a connected Eulerian digraph $D$ in an orientable surface $S$. If each face of the embedding corresponds to a directed cycle in $D$, equivalently the rotation at each vertex alternates between incoming and outgoing arcs, then the embedding is said to be a \textit{directed Eulerian embedding}, see \cite{BonConMorMcK,BonHarSir}. If the embedding is in the sphere we call it a \textit{directed Eulerian spherical embedding}.
Suppose that $\mathcal{G}$ is a face two-coloured triangulation of the sphere. By \cite{Hea}, the underlying graph of $\mathcal{G}$ has a vertex three-colouring with colour classes $R$, $C$ and $S$. Tutte \cite{Tutte} described a construction, from $\mathcal{G}$, of directed Eulerian spherical embeddings $D_I(\mathcal{G})=D_I$ with vertex set $I$, where $I\in\{R,C,S\}$. We give a description of the construction from \cite{TMcC}.
Let $\{I,I_1,I_2\}=\{R,C,S\}$. Consider a vertex $v_i\in I$. Then $v_i$ has even degree, say $d$, and the rotation at $v_i$ is
$(u_1,v_1,u_2,v_2,\ldots, u_{d/2},v_{d/2}),$
where, without loss of generality, $u_j\in I_1$ and $v_j\in I_2$ for all $1\leq j\leq d/2$ and the edge $e_j$ between $u_j$ and $v_j$ in the rotation is contained in a black face.
Then in $D_I$ there are $d/2$ outgoing arcs from vertex $v_i$, say $a_j$, $1\leq j\leq d/2$, one for each black face, and the terminal vertex for arc $a_j$ is the vertex in $I$ contained in the white face containing edge $e_j$. Clearly, $D_I$ inherits a spherical embedding from $\mathcal{G}$ in which the arc rotation at each vertex alternates between incoming and outgoing arcs, so $D_I$ has a directed Eulerian spherical embedding. As the sphere is connected the graph underlying $D_I$ is connected.
Note that given any of $D_R$, $D_C$ or $D_S$ the original face two-coloured triangulation can be obtained by reversing the above construction:
\begin{lemma}[Tutte, \cite{Tutte}]
\label{lem:gobackwards}
Given a directed Eulerian spherical embedding $D$, there exists a face $2$-coloured spherical triangulation $\mathcal{G}$ with a vertex $3$-colouring given by the vertex sets $R$, $C$ and $S$, such that for some $I\in\{R,C,S\}$,
$D_I(\mathcal{G})\cong D.$
\end{lemma}
Tutte's Trinity Theorem \cite{Tutte} states that $|\mathcal{S}(D_R)|=|\mathcal{S}(D_C)|=|\mathcal{S}(D_S)|$. For a spherical latin bitrade $(W,B)$ with corresponding face two-coloured triangulation $\mathcal{G}$, this result was strengthened implicitly in \cite{BlackburnTMcC} and explicitly in \cite{TMcC} to $\mathcal{S}(D_R)\cong\mathcal{S}(D_C)\cong\mathcal{S}(D_S)\cong \mathcal{A}_W\cong\mathcal{A}_B$.
Given an arbitrary directed Eulerian spherical embedding, applying the above construction in reverse yields a face two-coloured triangulation. However, the underlying graph of this triangulation is not necessarily simple. In order to make use of the above equivalences (between abelian sandpile groups and canonical groups of spherical latin bitrades) we make use of the following result.
\begin{proposition}[McCourt, \cite{TMcC}]
\label{prop:simple}
Suppose that $\mathcal{D}$ is a directed Eulerian spherical embedding with underlying digraph $D$. Further suppose that $D$ is connected, has no loops, no cut vertices and its underlying graph has no 2-edge-cuts. Then there exists a spherical latin bitrade whose canonical group is isomorphic to $\mathcal{S}(D)$.
\end{proposition}
Hence, in order to construct a spherical latin bitrade with canonical group $\Gamma$ it suffices to find a directed Eulerian spherical embedding satisfying the connectivity conditions of Proposition \ref{prop:simple} whose abelian sandpile group is isomorphic to $\Gamma$.
\subsection{Arbitrary rank}
In this section we will construct families of spherical latin bitrades whose canonical groups have arbitrary rank. We will make repeated use of the following, elementary lemma.
\begin{lemma}
\label{cl:prime+comps}
Let $2\leq p,a$ and $0\leq x,y,\ell$. Further let $r=p(x+1)+a-x-1$, $s=p(y+1)+a-y-1$ and $t_{i,j}\in\mathbb{Z}$, for $1\leq i\leq m$ and $1 \leq j \leq \ell$. Then the matrix
\begin{center}
\resizebox{\linewidth}{!}{$L=\left[\begin{tabular}{cc|ccc|ccc|c|ccc}
$p$ & $-p+1$ & $0$ & $\cdots$ & $0$ & $0$ & $\cdots$ & $0$ & $-1$ & $0$ & $\cdots$ &$0$\\
$-p$ & $r$ &$-p$& $\cdots$ & $-p$ & $0$ & $\cdots$ & $0$ & $x+1-a$ & $0$ & $\cdots$ &$0$\\
\hline
$0$ & $-p+1$ & \multicolumn{3}{c|}{\multirow{3}{*}{$p\mathbb{I}_x$}}&$0$ & $\cdots$ &$0$&$-1$& $0$ & $\cdots$ &$0$\\
$\vdots$ & $\vdots$ & \multicolumn{3}{c|}{}&$\vdots$ &$\ddots$ &$\vdots$&$\vdots$& $\vdots$ &$\ddots$ &$\vdots$\\
$0$ & $-p+1$ & \multicolumn{3}{c|}{}&$0$ &$\cdots$ &$0$&$-1$& $0$ &$\cdots$ &$0$\\
\hline
$0$ & $-1$ & $0$ &$\cdots$ &$0$&\multicolumn{3}{c|}{\multirow{3}{*}{$p\mathbb{I}_y$}}&$-p+1$&$0$ &$\cdots$ &$0$\\
$\vdots$ & $\vdots$ & $\vdots$ &$\ddots$ &$\vdots$&\multicolumn{3}{c|}{}&$\vdots$&$\vdots$ &$\ddots$ &$\vdots$\\
$0$ & $-1$ & $0$ &$\cdots$ &$0$&\multicolumn{3}{c|}{}&$-p+1$&$0$ &$\cdots$ &$0$\\
\hline
$0$ & $y+1-a$ & $0$ & $\cdots$ & $0$ & $-p$ & $\ldots$ & $-p$ & $s$ &$t_{1,1}$ &$\cdots$ &$t_{1,\ell}$ \\
$0$ & $-1$ & $0$ & $\cdots$ & $0$ & $0$ & $\ldots$ & $0$ & $-p+1$ &$t_{2,1}$ &$\cdots$ &$t_{2,\ell}$\\
\hline
$0$ & $0$ & $0$ & $\cdots$ & $0$ & $0$ & $\cdots$ & $0$ & $0$ &$t_{3,1}$ &$\cdots$ &$t_{3,\ell}$\\
$\vdots$ & $\vdots$ & $\vdots$ & $\cdots$ & $\vdots$ & $\vdots$ & $\cdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\cdots$ & $\vdots$\\
$0$ & $0$ & $0$ & $\cdots$ & $0$ & $0$ & $\cdots$ & $0$ & $0$ &$t_{m,1}$ &$\cdots$ &$t_{m,\ell}$
\end{tabular}\right]$}
\end{center}
reduces (under operations invertible over $\mathbb{Z}$) to
$$\left[\begin{tabular}{cc|ccc|c|ccc}
$1$ & $0$ & $0$ & $\cdots$ & $0$ & $0$ & $0$ & $\cdots$ &$0$ \\
$0$ & $ap$ &$0$& $\cdots$ & $0$ & $0$ & $0$ & $\cdots$ &$0$ \\
\hline
$0$ & $0$ & \multicolumn{3}{c|}{\multirow{3}{*}{$p\mathbb{I}_{x+y}$}}&$0$& $0$ & $\cdots$ &$0$\\
$\vdots$ & $\vdots$ & \multicolumn{3}{c|}{}&$\vdots$& $\vdots$ & $\ddots$ &$\vdots$\\
$0$ & $0$ & \multicolumn{3}{c|}{}&$0$& $0$ & $\cdots$ &$0$\\
\hline
$0$ & $0$ & $0$ & $\cdots$ & $0$ & $p$&$t_{1,1}$ &$\cdots$ &$t_{1,\ell}$ \\
$0$ & $0$ & $0$ & $\cdots$ & $0$ & $-p$&$t_{2,1}$ &$\cdots$ &$t_{2,\ell}$\\
\hline
$0$ & $0$ & $0$ & $\cdots$ & $0$ & $0$&$t_{3,1}$ &$\cdots$ &$t_{3,\ell}$\\
$\vdots$ & $\vdots$ & $\vdots$ & $\ddots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\ddots$ & $\vdots$\\
$0$ & $0$ & $0$ & $\cdots$ & $0$ & $0$&$t_{m,1}$ &$\cdots$ &$t_{m,\ell}$
\end{tabular}\right]$$
\end{lemma}
\begin{proof}
For $1\leq i\leq x$ and $1\leq j\leq y$ add Row $2+i$ and Row $2+x+j$ to Row $x+y+3$ of $L$. Subsequently, for $1\leq i\leq x$, add Column $2+i$ to Column $2$ and, for $1\leq j\leq y$, Column $2+x+j$ to Column $3+x+y$. Next add Column $2$ to Column $1$.
Now add Column $1$ to Column $3+x+y$ and $p-1$ copies of Column $1$ to Column $2$. Row $1$ can now be used to clear all non-zeros from Column $1$. Once this is completed it is easy to see that the remaining non-zeros in Column $2$ can also be cleared.
\end{proof}
The proof of Lemma \ref{lem:composites} is essentially a special case of the proof of Theorem \ref{thm:primes_and_composites}; however, to aid the reader, we detail this simpler case before proving the general result.
\begin{lemma}
\label{lem:composites}
Let $1\leq k$ and let $2\leq m,a_1,a_2, \ldots, a_k$. Then there exists a spherical latin bitrade whose canonical group is isomorphic to $\bigoplus_{i=1}^k \mathbb{Z}_{m a_i}.$
\end{lemma}
\begin{proof}
We begin by defining a digraph $D_{m;a_1,a_2,\ldots,a_k}$ with vertex set $\{\alpha_0, \alpha_1, \alpha_2,\ldots, \alpha_{k},\break\gamma_1,\gamma_2,\ldots, \gamma_k\}$ and
\begin{itemize}[noitemsep]
\item for each $1\leq i\leq k$:
\begin{itemize}[noitemsep]
\item[$\circ$] $m-1$ arcs from $\alpha_i$ to $\gamma_i$ and $m-1$ arcs from $\gamma_i$ to $\alpha_i$;
\item[$\circ$] $a_i-1$ arcs from $\alpha_{i-1}$ to $\gamma_i$ and $a_i-1$ arcs from $\gamma_i$ to $\alpha_{i-1}$;
\item[$\circ$] an arc from $\alpha_{i}$ to $\alpha_{i-1}$;
\end{itemize}
\item for each $1\leq i\leq k -1$: an arc from $\gamma_{i}$ to $\gamma_{i+1}$; and
\item an additional arc from $\alpha_{0}$ to $\gamma_1$ and an additional arc from $\gamma_k$ to $\alpha_k$.
\end{itemize}
The digraph $D_{m;a_1,a_2,\ldots,a_k}$ has a directed Eulerian spherical embedding and satisfies the connectivity conditions of Proposition \ref{prop:simple}, as can be seen from Figure \ref{fig:Dma_embedding} (in this figure $t$ arcs from $u$ to $v$ alternating with $t$ arcs from $v$ to $u$ are represented by a bidirectional edge labelled $t$). Hence, there exists a spherical latin bitrade whose canonical group is isomorphic to $\mathcal{S}(D_{m;a_1,a_2,\ldots,a_k})$.
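(As a check that $D_{m;a_1,a_2,\ldots,a_k}$ is indeed Eulerian, note, for example, that for $1\leq i\leq k-1$ the vertex $\alpha_i$ has $m-1$ out-arcs to $\gamma_i$, $a_{i+1}-1$ out-arcs to $\gamma_{i+1}$ and one out-arc to $\alpha_{i-1}$, while it receives $m-1$ arcs from $\gamma_i$, $a_{i+1}-1$ arcs from $\gamma_{i+1}$ and one arc from $\alpha_{i+1}$; so
$$\deg^+(\alpha_i)~=~\deg^-(\alpha_i)~=~m+a_{i+1}-1.$$
Similar counts hold at $\alpha_0$, $\alpha_k$ and at each $\gamma_i$.)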
\begin{figure}[!h]
\begin{center}
{\begin{tikzpicture}[fill=gray!50, scale=1.1, vertex/.style={circle,inner sep=2,fill=black,draw}, dot/.style={circle,inner sep=0.7,fill=black,draw}]
\coordinate (a0) at (0.5,1);
\coordinate (a1) at (2,0);
\coordinate (a2) at (4,0);
\coordinate (a3) at (6,0);
\coordinate (ak2) at (8.5,0);
\coordinate (ak1) at (10.5,0);
\coordinate (ak) at (12.5,0);
\coordinate (c1) at (2,2);
\coordinate (c2) at (4,2);
\coordinate (c3) at (6,2);
\coordinate (ck2) at (8.5,2);
\coordinate (ck1) at (10.5,2);
\coordinate (ck) at (12.5,2);
\draw[thick,->-=.55, -<-=.45] (a0) -- (c1);
\draw[thick,->-=.55, -<-=.45] (a1) -- (c2);
\draw[thick,->-=.55, -<-=.45] (a2) -- (c3);
\draw[thick,->-=.55, -<-=.45] (ak2) -- (ck1);
\draw[thick,->-=.55, -<-=.45] (ak1) -- (ck);
\draw[thick,->-=.55, -<-=.45] (a1) -- (c1);
\draw[thick,->-=.55, -<-=.45] (a2) -- (c2);
\draw[thick,->-=.55, -<-=.45] (a3) -- (c3);
\draw[thick,->-=.55, -<-=.45] (ak2) -- (ck2);
\draw[thick,->-=.55, -<-=.45] (ak1) -- (ck1);
\draw[thick,->-=.55, -<-=.45] (ak) -- (ck);
\draw[thick,->-=.5] (ak) -- (ak1);
\draw[thick,->-=.5] (ak1) -- (ak2);
\draw[thick,->-=.5] (a3) -- (a2);
\draw[thick,->-=.5] (a2) -- (a1);
\draw[thick,->-=.5] (a1) -- (a0);
\draw[thick,->-=.5] (c1) -- (c2);
\draw[thick,->-=.5] (c2) -- (c3);
\draw[thick,->-=.5] (ck2) -- (ck1);
\draw[thick,->-=.5] (ck1) -- (ck);
\draw[thick] (a3) -- (6.5,0);
\draw[thick] (c3) -- (6.5,2);
\draw[thick] (ak2) -- (8,0);
\draw[thick] (ck2) -- (8,2);
\draw [thick, ->-=.5] (a0) to [bend left=60](c1);
\draw [thick, ->-=.5] (ck) to [bend left=90](ak);
\node at (7,0) [dot]{};
\node at (7.25,0) [dot]{};
\node at (7.5,0) [dot]{};
\node at (7,2) [dot]{};
\node at (7.25,2) [dot]{};
\node at (7.5,2) [dot]{};
\node at (a0) [vertex,label=south:$\alpha_0$]{};
\node at (a1) [vertex,label=south:$\alpha_1$]{};
\node at (a2) [vertex,label=south:$\alpha_2$]{};
\node at (a3) [vertex,label=south:$\alpha_3$]{};
\node at (ak2) [vertex,label=south:$\alpha_{k-2}$]{};
\node at (ak1) [vertex,label=south:$\alpha_{k-1}$]{};
\node at (ak) [vertex,label=south:$\alpha_k$]{};
\node at (c1) [vertex,label=north:$\gamma_1$]{};
\node at (c2) [vertex,label=north:$\gamma_2$]{};
\node at (c3) [vertex,label=north:$\gamma_3$]{};
\node at (ck2) [vertex,label=north:$\gamma_{k-2}$]{};
\node at (ck1) [vertex,label=north:$\gamma_{k-1}$]{};
\node at (ck) [vertex,label=north:$\gamma_k$]{};
\node at (1.6,0.95){\tiny $m-1$};
\node at (3.6,0.7){\tiny $m-1$};
\node at (5.6,0.7){\tiny $m-1$};
\node at (8.1,0.95){\tiny $m-1$};
\node at (10.1,0.7){\tiny $m-1$};
\node at (12.1,0.7){\tiny $m-1$};
\node at (2.8,1.2){\tiny $a_2-1$};
\node at (4.8,1.2){\tiny $a_3-1$};
\node at (9.2,1.2){\tiny $a_{k-1}-1$};
\node at (11.3,1.2){\tiny $a_k-1$};
\node at (1.6,1.35){\tiny $a_1-1$};
\end{tikzpicture}}
\end{center}
\caption{A directed Eulerian spherical embedding of $D_{m;a_1,a_2,\ldots,a_k}$.}
\label{fig:Dma_embedding}
\end{figure}
Suppose that we order the vertices of $D_{m;a_1,a_2,\ldots,a_k}$ by $\alpha_k, \gamma_k, \alpha_{k-1}, \gamma_{k-1}, \ldots, \alpha_2, \gamma_2,\break \alpha_1, \gamma_1, \alpha_0$, and construct the associated asymmetric Laplacian. Then, removing the row and column corresponding to $\alpha_0$ yields the reduced asymmetric Laplacian $\mathcal{L}'(D_{m;a_1,a_2,\ldots,a_k})$.
Let $k\geq 1$ and $m,a_1,a_2,\ldots,a_{k+1}\geq 2$.
Note that
{\footnotesize $\mathcal{L}'(D_{m;a_1})=\begin{bmatrix}
m & -m+1 \\
-m & m+a_1-1
\end{bmatrix}$} reduces to {\footnotesize $\begin{bmatrix}
1 & 0 \\
0 & ma_1
\end{bmatrix}$}; so $\mathcal{S}(D_{m;a_1})\cong \mathbb{Z}_{m a_1}$.
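Indeed, $\det \mathcal{L}'(D_{m;a_1})=m(m+a_1-1)-m(m-1)=ma_1$, and since $\gcd(m,-m+1)=1$ the Smith Normal Form of $\mathcal{L}'(D_{m;a_1})$ is $\mathrm{diag}(1,ma_1)$.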
Assume that $\mathcal{S}(D_{m;a_1,a_2,\ldots,a_{k}})$ is isomorphic to $\bigoplus_{i=1}^{k} \mathbb{Z}_{m a_i}$. Setting $a_i-1=a_i'$ for $1\leq i\leq k$,
the reduced asymmetric Laplacian $\mathcal{L}'_k=\mathcal{L}'(D_{m;a_1,a_2,\ldots,a_{k}})$ is shown below.
\vspace{-5mm}
\begin{center}
\resizebox{\linewidth}{!}{$\mathcal{L}'_k=\begin{bmatrix}
m & -m+1 & -1 & 0 & 0 & 0 & \cdots & 0 & 0\\
-m & m+a_k' & -a_k' & 0 & 0 & 0 & \cdots & 0 & 0\\
0 & -a_k' & m+a_k' & -m+1 & -1 & 0 & \cdots & 0 & 0\\
0 & -1 & -m+1 & m+a_{k-1}' & -a_{k-1}' & 0 & \cdots & 0 & 0\\
0 & 0 & 0 & -a_{k-1}' & m+a_{k-1}' & -m+1 & \cdots & 0 & 0\\
0 & 0 & 0 & -1 & -m+1 & m+a_{k-2}' & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & 0 & 0 & 0 & \cdots & m+a_2' & -m+1\\
0& 0 & 0 & 0 & 0 & 0 & \cdots & -m+1 & m+a_1'
\end{bmatrix}$}
\end{center}
Now, consider the digraph $D_{m;a_1,a_2,\ldots,a_{k+1}}$. Applying Lemma \ref{cl:prime+comps}, with $p=m$ and $x=y=0$, to rows $\alpha_{k+1}, \gamma_{k+1},\alpha_{k}, \gamma_{k}$ we have that $\mathcal{L}'_{k+1}=\mathcal{L}'(D_{m;a_1,a_2,\ldots,a_{k+1}})$ reduces to
$$\left[\begin{tabular}{cc|ccc}
$1$ & $0$ & $0$ & $\cdots$ & $0$\\
$0$ & $m a_{k+1}$ & $0$ & $\cdots$ & $0$\\
\hline
\vspace{-2mm}
$0$ & $0$ & \\
$\vdots$ & $\vdots$ & &$\mathcal{L}'_k$&\\
$0$ & $0$ & &&
\end{tabular}\right].$$
It follows that $\mathcal{S}(D_{m;a_1,a_2,\ldots,a_{k+1}})$ is isomorphic to $\bigoplus_{i=1}^{k+1} \mathbb{Z}_{m a_i}$.
\end{proof}
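For example, when $k=2$ and $m=a_1=a_2=2$, the reduced asymmetric Laplacian of the proof becomes
$$\mathcal{L}'_2=\begin{bmatrix}
2 & -1 & -1 & 0\\
-2 & 3 & -1 & 0\\
0 & -1 & 3 & -1\\
0 & -1 & -1 & 3
\end{bmatrix},$$
which has determinant $16=|\mathbb{Z}_4\oplus\mathbb{Z}_4|$, consistent with $\mathcal{S}(D_{2;2,2})\cong\mathbb{Z}_4\oplus\mathbb{Z}_4$.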
It is now easy to establish the existence of spherical latin bitrades whose canonical groups can be expressed as a direct sum of cyclic groups of composite order.
\begin{theorem}
\label{thm:composites}
Suppose that $\Gamma$ is a group isomorphic to a direct sum of cyclic groups of composite order; i.e. $\Gamma$ is isomorphic to $\oplus^k_{i=1} \mathbb{Z}_{n_i}$, where each $n_i$ is composite. Then there exists a spherical latin bitrade whose canonical group is isomorphic to $\Gamma$.
\end{theorem}
\begin{proof}
Let $n_1, n_2,\ldots, n_k$ be composite integers and consider $\Gamma\cong\oplus^k_{i=1} \mathbb{Z}_{n_i}$.
Recall that if $\gcd(n_u,n_v)=1$, $u\neq v$, then $\oplus^k_{i=1} \mathbb{Z}_{n_i}\cong \mathbb{Z}_{n_1}\oplus \cdots \oplus \mathbb{Z}_{n_{u-1}}\oplus\mathbb{Z}_{n_{u+1}}\oplus \cdots \oplus \mathbb{Z}_{n_{v-1}}\oplus\mathbb{Z}_{n_{v+1}}\oplus \cdots \oplus \mathbb{Z}_{n_k}\oplus \mathbb{Z}_{n_un_v}$. Thus we may assume that $\gcd\{n_1,n_2,\ldots, n_k\}\neq 1$.
Hence there exists a prime, $p$ say, such that $p$ divides $\gcd\{n_1,n_2,\ldots, n_k\}$. Note that, as $n_i$ is composite for all $1\leq i\leq k$, $p\neq n_i$. By setting $m=p$ and applying Lemma \ref{lem:composites} the result follows.
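For example, if $\Gamma\cong \mathbb{Z}_4\oplus\mathbb{Z}_9\oplus\mathbb{Z}_6$ then, combining the coprime pair, $\Gamma\cong\mathbb{Z}_{36}\oplus\mathbb{Z}_6$; now $\gcd\{36,6\}=6$, so we may take $p=2$ and apply Lemma \ref{lem:composites} with $m=2$, $a_1=18$ and $a_2=3$.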
\end{proof}
The next result addresses the existence of spherical latin bitrades whose canonical groups contain components of prime order in their Smith Normal Form.
\begin{theorem}
\label{thm:primes_and_composites}
Let $p$ be a prime and let $2\leq a_1,a_2, \ldots, a_k$. Further let $n\leq 1+2\sum_{i=1}^k (a_i-1)$. Then there exists a spherical latin bitrade whose canonical group is isomorphic to
$$\mathbb{Z}_p^n\oplus\left(\bigoplus_{i=1}^k \mathbb{Z}_{p a_i}\right).$$
\end{theorem}
\begin{proof}
If $n=0$, then this is Lemma \ref{lem:composites}. So for the remainder of the proof assume that $n\geq 1$. As $n\leq 1+2\sum_{i=1}^k (a_i-1)$ there exist a $k'$, $0\leq k'< k$, and a $t$, $0\leq t\leq 2(a_{k'+1}-1)$, such that
$$n=1+2\sum_{i=1}^{k'} (a_i-1)+t.$$
First construct the graph $D_{p;a_1,a_2,\ldots,a_k}$ (from the proof of Lemma \ref{lem:composites}). Next, add the following vertices,
\begin{itemize}[noitemsep]
\item for each $1\leq i\leq k'$: add vertices $\delta_{i,j}$ and $\epsilon_{i,j}$ for all $1\leq j\leq a_i-1$;
\item for each $1\leq j\leq \lceil t/2\rceil$: add vertices $\delta_{k'+1,j}$;
\item for each $1\leq j\leq \lfloor t/2\rfloor$: add vertices $\epsilon_{k'+1,j}$; and
\item the vertex $\epsilon_{1,0}$.
\end{itemize}
Now,
\begin{itemize}[noitemsep]
\item replace an arc from $\alpha_{0}$ to $\gamma_1$ with a single arc from $\epsilon_{1,0}$ to $\gamma_1$, $p-1$ arcs from $\epsilon_{1,0}$ to $\alpha_{0}$ and $p$ arcs from $\alpha_{0}$ to $\epsilon_{1,0}$.
\item for each $1\leq i\leq k'$:
\begin{itemize}[noitemsep]
\item[$\circ$] replace the arcs from $\gamma_i$ to $\alpha_{i-1}$ with single arcs from $\delta_{i,j}$ to $\alpha_{i-1}$, $p-1$ arcs from $\delta_{i,j}$ to $\gamma_i$ and $p$ arcs from $\gamma_i$ to $\delta_{i,j}$, where $1\leq j\leq a_i-1$.
\item[$\circ$] replace the arcs from $\alpha_{i-1}$ to $\gamma_i$ with single arcs from $\epsilon_{i,j}$ to $\gamma_i$, $p-1$ arcs from $\epsilon_{i,j}$ to $\alpha_{i-1}$ and $p$ arcs from $\alpha_{i-1}$ to $\epsilon_{i,j}$, where $1\leq j\leq a_i-1$.
\end{itemize}
\item replace $\lceil t/2\rceil$ arcs from $\gamma_{k'+1}$ to $\alpha_{k'}$ with single arcs from $\delta_{k'+1,j}$ to $\alpha_{k'}$, $p-1$ arcs from $\delta_{k'+1,j}$ to $\gamma_{k'+1}$ and $p$ arcs from $\gamma_{k'+1}$ to $\delta_{k'+1,j}$, where $1\leq j\leq \lceil t/2\rceil$.
\item replace $\lfloor t/2\rfloor$ arcs from $\alpha_{k'}$ to $\gamma_{k'+1}$ with single arcs from $\epsilon_{k'+1,j}$ to $\gamma_{k'+1}$, $p-1$ arcs from $\epsilon_{k'+1,j}$ to $\alpha_{k'}$ and $p$ arcs from $\alpha_{k'}$ to $\epsilon_{k'+1,j}$, where $1\leq j\leq \lfloor t/2\rfloor$.
\end{itemize}
Call the resulting digraph $D^n_{p;a_1,a_2,\ldots,a_k}$, see Figure \ref{fig:Dman_construction} for an illustration of its construction.
Note that $D^n_{p;a_1,a_2,\ldots,a_k}$ has a directed Eulerian spherical embedding, and that it satisfies the connectivity conditions of Proposition \ref{prop:simple}. Therefore, there exists a spherical latin bitrade whose canonical group is isomorphic to $\mathcal{S}(D^n_{p;a_1,a_2,\ldots,a_k})$.
\begin{figure}
For $i\leq k'$:
\begin{center}
\scalebox{0.85}
{\begin{tikzpicture}[fill=gray!50, scale=1.1, vertex/.style={circle,inner sep=2,fill=black,draw}, dot/.style={circle,inner sep=0.7,fill=black,draw}]
\coordinate (ai1) at (0,1);
\coordinate (ai) at (2,1);
\coordinate (ci1) at (0,3);
\coordinate (ci) at (2,3);
\coordinate (bi1) at (6,0);
\coordinate (bi) at (10,0);
\coordinate (di1) at (6,4);
\coordinate (di) at (10,4);
\coordinate (de1) at (6.75,3.25);
\coordinate (def) at (8.5,1.5);
\coordinate (ep1) at (7.5,2.5);
\coordinate (epf) at (9.25,0.75);
\draw[thick,->-=.55, -<-=.45] (ai1) -- (ci1);
\draw[thick,->-=.55, -<-=.45] (ai) -- (ci);
\draw[thick,->-=.55, -<-=.45] (ai1) -- (ci);
\draw[thick,->-=.5] (ai) -- (ai1);
\draw[thick,->-=.5] (ci1) -- (ci);
\draw[thick,->-=.55, -<-=.45] (bi1) -- (di1);
\draw[thick,->-=.55, -<-=.45] (bi) -- (di);
\draw[thick,->-=.5] (bi) to [bend left=20] (bi1);
\draw[thick,->-=.5] (di1) to [bend left=20] (di);
\draw[thick,->-=.5] (de1) -- (bi1);
\draw[thick,->-=.5] (def) -- (bi1);
\draw[thick,->-=.5] (ep1) -- (di);
\draw[thick,->-=.5] (epf) -- (di);
\draw [thick,->-=.55, -<-=.45] (bi1) to [bend right=10](ep1);
\draw [thick,->-=.5] (bi1) to [bend left=10](ep1);
\draw [thick,->-=.55, -<-=.45] (bi1) to [bend right=10](epf);
\draw [thick,->-=.5] (bi1) to [bend left=10](epf);
\draw [thick,->-=.55, -<-=.45] (di) to [bend right=10](de1);
\draw [thick,->-=.5] (di) to [bend left=10](de1);
\draw [thick,->-=.55, -<-=.45] (di) to [bend right=10](def);
\draw [thick,->-=.5] (di) to [bend left=10](def);
\draw [ultra thick, ->] (3,2) -- (4.5,2);
\node at (7.9,2.1) [dot]{};
\node at (8,2) [dot]{};
\node at (8.1,1.9) [dot]{};
\node at (ai1) [vertex,label=south:$\alpha_{i-1}$]{};
\node at (ai) [vertex,label=south:$\alpha_{i}$]{};
\node at (bi1) [vertex,label=south:$\alpha_{i-1}$]{};
\node at (bi) [vertex,label=south:$\alpha_{i}$]{};
\node at (ci1) [vertex,label=north:$\gamma_{i-1}$]{};
\node at (ci) [vertex,label=north:$\gamma_{i}$]{};
\node at (di1) [vertex,label=north:$\gamma_{i-1}$]{};
\node at (di) [vertex,label=north:$\gamma_{i}$]{};
\node at (de1) [vertex]{};
\node at (6.5,3.5) {\small $\delta_{i,1}$};
\node at (def) [vertex]{};
\node at (8.8,1.15) {\small $\delta_{i,a_i-1}$};
\node at (ep1) [vertex]{};
\node at (7.25,2.75) {\small $\epsilon_{i,1}$};
\node at (epf) [vertex]{};
\node at (9.5,0.5) {\small $\epsilon_{i,a_i-1}$};
\node at (0.67,2.1){\tiny $a_i-1$};
\node at (-0.4,1.95){\tiny $p-1$};
\node at (2.4,1.95){\tiny $p-1$};
\node at (8.2,3.95){\tiny $p-1$};
\node at (8.7,2.75){\tiny $p-1$};
\node at (7.8,0.05){\tiny $p-1$};
\node at (7.3,1.25){\tiny $p-1$};
\node at (5.6,1.95){\tiny $p-1$};
\node at (10.4,1.95){\tiny $p-1$};
\end{tikzpicture}}
\end{center}
Let $a=a_{k'+1}-1$; then, if $t=2\ell$:
\begin{center}
\scalebox{0.85}
{\begin{tikzpicture}[fill=gray!50, scale=1.1, vertex/.style={circle,inner sep=2,fill=black,draw}, dot/.style={circle,inner sep=0.7,fill=black,draw}]
\coordinate (ak) at (0,1);
\coordinate (ak1) at (2,1);
\coordinate (ck) at (0,3);
\coordinate (ck1) at (2,3);
\coordinate (bk) at (6,0);
\coordinate (bk1) at (10,0);
\coordinate (dk) at (6,4);
\coordinate (dk1) at (10,4);
\coordinate (de1) at (6.75,3.25);
\coordinate (epl) at (8,2);
\draw[thick,->-=.55, -<-=.45] (ak) -- (ck);
\draw[thick,->-=.55, -<-=.45] (ak1) -- (ck1);
\draw[thick,->-=.55, -<-=.45] (ak) -- (ck1);
\draw[thick,->-=.5] (ak1) -- (ak);
\draw[thick,->-=.5] (ck) -- (ck1);
\draw[thick,->-=.55, -<-=.45] (bk) -- (dk);
\draw[thick,->-=.55, -<-=.45] (bk1) -- (dk1);
\draw[thick,->-=.5] (bk1) to [bend left=20] (bk);
\draw[thick,->-=.5] (dk) to [bend left=20] (dk1);
\draw[thick,->-=.5] (de1) -- (bk);
\draw[thick,->-=.5] (epl) -- (dk1);
\draw [thick,->-=.55, -<-=.45] (bk) to [bend right=10](epl);
\draw [thick,->-=.5] (bk) to [bend left=10](epl);
\draw [thick,->-=.55, -<-=.45] (dk1) to [bend right=10](de1);
\draw [thick,->-=.5] (dk1) to [bend left=10](de1);
\draw [ultra thick, ->] (3,2) -- (4.5,2);
\draw [thick,->-=.55, -<-=.45] (dk1) to [bend left=30](bk);
\node at (7.275,2.725) [dot]{};
\node at (7.375,2.625) [dot]{};
\node at (7.475,2.525) [dot]{};
\node at (ak) [vertex,label=south:$\alpha_{k'}$]{};
\node at (ak1) [vertex,label=south:$\alpha_{k'+1}$]{};
\node at (bk) [vertex,label=south:$\alpha_{k'}$]{};
\node at (bk1) [vertex,label=south:$\alpha_{k'+1}$]{};
\node at (ck) [vertex,label=north:$\gamma_{k'}$]{};
\node at (ck1) [vertex,label=north:$\gamma_{k'+1}$]{};
\node at (dk) [vertex,label=north:$\gamma_{k'}$]{};
\node at (dk1) [vertex,label=north:$\gamma_{k'+1}$]{};
\node at (de1) [vertex]{};
\node at (6.5,3.5) {\small $\delta_{k'+1,1}$};
\node at (epl) [vertex]{};
\node at (7.3,2) {\small $\epsilon_{k'+1,\ell}$};
\node at (0.75,2.1){\tiny $a$};
\node at (-0.4,1.95){\tiny $p-1$};
\node at (2.4,1.95){\tiny $p-1$};
\node at (8.2,3.95){\tiny $p-1$};
\node at (7.4,0.8){\tiny $p-1$};
\node at (5.6,1.95){\tiny $p-1$};
\node at (10.4,1.95){\tiny $p-1$};
\node at (9.125,1.35) {\tiny $a-\ell$};
\end{tikzpicture}}
\end{center}
Again let $a=a_{k'+1}-1$; then, if $t=2\ell +1$:
\begin{center}
\scalebox{0.85}
{\begin{tikzpicture}[fill=gray!50, scale=1.1, vertex/.style={circle,inner sep=2,fill=black,draw}, dot/.style={circle,inner sep=0.7,fill=black,draw}]
\coordinate (ak) at (0,1);
\coordinate (ak1) at (2,1);
\coordinate (ck) at (0,3);
\coordinate (ck1) at (2,3);
\coordinate (bk) at (6,0);
\coordinate (bk1) at (10.5,-0.5);
\coordinate (dk) at (6,4);
\coordinate (dk1) at (10,4);
\coordinate (de1) at (6.75,3.25);
\coordinate (epl) at (7.8,2.2);
\coordinate (del1) at (8.6,1.4);
\draw[thick,->-=.55, -<-=.45] (ak) -- (ck);
\draw[thick,->-=.55, -<-=.45] (ak1) -- (ck1);
\draw[thick,->-=.55, -<-=.45] (ak) -- (ck1);
\draw[thick,->-=.5] (ak1) -- (ak);
\draw[thick,->-=.5] (ck) -- (ck1);
\draw[thick,->-=.55, -<-=.45] (bk) -- (dk);
\draw[thick,->-=.55, -<-=.45] (bk1) to [bend right=20] (dk1);
\draw[thick,->-=.5] (bk1) to [bend left=20] (bk);
\draw[thick,->-=.5] (dk) to [bend left=20] (dk1);
\draw[thick,->-=.5] (de1) -- (bk);
\draw[thick,->-=.5] (epl) -- (dk1);
\draw [thick,->-=.55, -<-=.45] (bk) to [bend right=10](epl);
\draw [thick,->-=.5] (bk) to [bend left=10](epl);
\draw [thick,->-=.55, -<-=.45] (dk1) to [bend right=10](de1);
\draw [thick,->-=.5] (dk1) to [bend left=10](de1);
\draw[thick,->-=.5] (del1) -- (bk);
\draw [thick,->-=.55, -<-=.45] (dk1) to [bend right=10](del1);
\draw [thick,->-=.5] (dk1) to [bend left=10](del1);
\draw [ultra thick, ->] (3,2) -- (4.5,2);
\draw [thick,-<-=.5] (dk1) to [out=275, in=45] (9.5,0.5) to [out=225, in=355] (bk);
\draw [thick,->-=.55, -<-=.45] (dk1) to [out=290, in=45] (9.75,0.25) to [out=225, in=340] (bk);
\node at (7.175,2.825) [dot]{};
\node at (7.275,2.725) [dot]{};
\node at (7.375,2.625) [dot]{};
\node at (ak) [vertex,label=south:$\alpha_{k'}$]{};
\node at (ak1) [vertex,label=south:$\alpha_{k'+1}$]{};
\node at (bk) [vertex,label=south:$\alpha_{k'}$]{};
\node at (bk1) [vertex,label=south:$\alpha_{k'+1}$]{};
\node at (ck) [vertex,label=north:$\gamma_{k'}$]{};
\node at (ck1) [vertex,label=north:$\gamma_{k'+1}$]{};
\node at (dk) [vertex,label=north:$\gamma_{k'}$]{};
\node at (dk1) [vertex,label=north:$\gamma_{k'+1}$]{};
\node at (de1) [vertex]{};
\node at (6.5,3.5) {\small $\delta_{k'+1,1}$};
\node at (del1) [vertex]{};
\node at (9.1,1.05) {\small $\delta_{k'+1,\ell+1}$};
\node at (epl) [vertex]{};
\node at (7.2,2.2) {\small $\epsilon_{k'+1,\ell}$};
\node at (0.75,2.1){\tiny $a$};
\node at (-0.4,1.95){\tiny $p-1$};
\node at (2.4,1.95){\tiny $p-1$};
\node at (8.2,3.95){\tiny $p-1$};
\node at (7.5,1.1){\tiny $p-1$};
\node at (8.7,2.5){\tiny $p-1$};
\node at (5.6,1.95){\tiny $p-1$};
\node at (11.1,1.8){\tiny $p-1$};
\node at (10.1,0) {\tiny $a-\ell-1$};
\end{tikzpicture}}
\end{center}
\caption{Constructing $D^n_{p;a_1,a_2,\ldots,a_k}$.}
\label{fig:Dman_construction}
\end{figure}
For ease of notation, let
$$d_i=\left\{\begin{array}{ll}
a_i-1 & \text{for }1\leq i\leq k'\\
\lceil t/2\rceil & \text{for }i=k'+1\\
0&\text{otherwise}
\end{array}\right.
\quad\text{ and }\quad
e_i=\left\{\begin{array}{ll}
a_i-1 & \text{for }1\leq i\leq k'\\
\lfloor t/2\rfloor & \text{for }i=k'+1\\
0&\text{otherwise}
\end{array}\right..$$
Suppose that we order the vertices of $D^n_{p;a_1,a_2,\ldots,a_k}$ by
$$(\alpha_k,\gamma_k,\delta_{k,d_k},\ldots,\delta_{k,1},\epsilon_{k,e_k}, \ldots, \epsilon_{k,1}), \ldots,
(\alpha_2,\gamma_2,\delta_{2,d_{2}},\ldots,\delta_{2,1},\epsilon_{2,e_{2}}, \ldots, \epsilon_{2,1}),$$
$$(\alpha_1,\gamma_1,\delta_{1,d_{1}},\ldots,\delta_{1,1},\epsilon_{1,e_{1}}, \ldots, \epsilon_{1,1}, \epsilon_{1,0}),\alpha_0$$
and construct the associated asymmetric Laplacian. Then, removing the row and column corresponding to $\alpha_0$ yields the reduced asymmetric Laplacian $\mathcal{L}'(D^n_{p;a_1,a_2,\ldots,a_k})$.
Let $k\geq 1$ and $p,a_1,a_2,\ldots,a_{k+1}\geq 2$ and let $1\leq n\leq 1+2\sum_{i=1}^{k+1} (a_i-1)$. Then, letting $x=d_1$, $y=e_1$ and $r=p(x+1)+a_1-x-1$,
$$\mathcal{L}'\left(D^{\min\{n,1+2(a_1-1)\}}_{p;a_1}\right)=\left[
\begin{tabular}{cc|ccc|ccc|c}
$p$ & $-p+1$ & $0$ & $\cdots$ & $0$ & $0$ & $\ldots$ & $0$ & $0$ \\
$-p$ & $r$ & $-p$ & $\ldots$ & $-p$ & $0$ & $\ldots$ & $0$ & $0$ \\
\hline
$0$ & $-p+1$ & \multicolumn{3}{c|}{\multirow{3}{*}{$p\mathbb{I}_{d_1}$}}&$0$ & $\cdots$ & $0$ & $0$\\
$\vdots$ & $\vdots$ & \multicolumn{3}{c|}{}&$\vdots$ &$\ddots$ &$\vdots$&$\vdots$ \\
$0$ & $-p+1$ & \multicolumn{3}{c|}{}&$0$ & $\cdots$ & $0$ & $0$\\
\hline
$0$ & $-1$ & $0$ & $\cdots$ & $0$ & \multicolumn{3}{c|}{\multirow{3}{*}{$p\mathbb{I}_{e_1}$}} & $0$\\
$\vdots$ & $\vdots$ & $\vdots$ & $\ddots$ & $\vdots$ & \multicolumn{3}{c|}{}
& $0$ \\
$0$ & $-1$ & $0$ & $\ldots$ & $0$ & \multicolumn{3}{c|}{} & $\vdots$\\
\hline
$0$ & $-1$ & $0$ & $\ldots$ & $0$ & $0$ & $\cdots$ & $0$ & $p$\\
\end{tabular}
\right].$$
This reduces, under an argument similar to that used to prove Lemma \ref{cl:prime+comps}, to
$$\left[
\begin{tabular}{cc|ccc}
$1$ & $0$ & $0$ & $\cdots$ & $0$\\
$0$ & $pa_1$ & $0$ & $\cdots$ & $0$ \\
\hline
$0$ & $0$ & \multicolumn{3}{c}{\multirow{3}{*}{$p\mathbb{I}_{d_1+e_1+1}$}}\\
$\vdots$ & $\vdots$ & \multicolumn{3}{c}{}\\
$0$ & $0$ & \multicolumn{3}{c}{}
\end{tabular}
\right].$$
Hence, $\mathcal{S}(D^{\min\{n,1+2(a_1-1)\}}_{p;a_1})\cong\mathbb{Z}_p^{\min\{n,1+2(a_1-1)\}}\oplus\mathbb{Z}_{p a_1}$.
Assume that $\mathcal{S}\left(D^{\min\{n,1+2\sum_{i=1}^k(a_i-1)\}}_{p;a_1,a_2,\ldots,a_k}\right)\cong\mathbb{Z}_p^{\min\{n,1+2\sum_{i=1}^k(a_i-1)\}}\oplus\left(\bigoplus_{i=1}^k \mathbb{Z}_{p a_i}\right)$. Denote $\mathcal{L}'\left(D^{\min\{n,1+2\sum_{i=1}^k(a_i-1)\}}_{p;a_1,a_2,\ldots,a_k}\right)$ by $\mathcal{L}_k'$ and
consider $\mathcal{L}'\left(D^n_{p;a_1,a_2,\ldots,a_{k+1}}\right)$. Applying Lemma \ref{cl:prime+comps}, with
$$x=\left\lceil \frac{1}{2}\max\left\{n-1-2\sum_{i=1}^k(a_i-1),0\right\}\right\rceil$$
and
$$y=\left\lfloor \frac{1}{2}\max\left\{n-1-2\sum_{i=1}^k(a_i-1),0\right\}\right\rfloor$$
to rows $\alpha_{k+1},\gamma_{k+1},\delta_{k+1,x},\ldots,\delta_{k+1,1},\epsilon_{k+1,y}, \ldots, \epsilon_{k+1,1},\alpha_k,\gamma_k$ of $\mathcal{L}'_{k+1}=\mathcal{L}'(D^n_{p;a_1,a_2,\ldots,a_{k+1}})$ reduces it to
$$\left[\begin{tabular}{cc|ccc|ccc}
$1$ & $0$ & $0$ & $\cdots$ & $0$ & $0$ & $\cdots$ & $0$\\
$0$ & $p a_{k+1}$ & $0$ & $\cdots$ & $0$ & $0$ & $\cdots$ & $0$\\
\hline
$0$ & $0$ &\multicolumn{3}{c|}{\multirow{3}{*}{{$p\mathbb{I}_{x+y}$}}}& $0$ & $\cdots$ & $0$\\
$\vdots$ & $\vdots$ & \multicolumn{3}{c|}{} & $\vdots$ & $\ddots$ & $\vdots$\\
$0$ & $0$ & \multicolumn{3}{c|}{} & $0$ & $\cdots$ & $0$\\
\hline
$0$ & $0$ & $0$ & $\cdots$ & $0$ & \multicolumn{3}{c}{\multirow{3}{*}{{$\mathcal{L}'_k$}}}\\
$\vdots$ & $\vdots$ & $\vdots$ & $\ddots$ & $\vdots$& \multicolumn{3}{c}{}\\
$0$ & $0$ & $0$ & $\cdots$ & $0$& \multicolumn{3}{c}{}
\end{tabular}\right].$$
Therefore $\mathcal{S}(D^n_{p;a_1,a_2,\ldots,a_{k+1}})\cong\mathbb{Z}_p^n\oplus\left(\bigoplus_{i=1}^{k+1} \mathbb{Z}_{p a_i}\right)$.
\end{proof}
\subsection{Canonical groups of rank two}
In this section we will restrict our attention to canonical groups of rank two. We show that, with one exception and a further three possible exceptions, any finite abelian group of rank two is isomorphic to the canonical group of some spherical latin bitrade.
We will make use of the following elementary lemma.
\begin{lemma}
\label{lem:121-reduction}
Let $2\leq d$, $1\leq x$, $2\leq y$ and $t_{i,j}\in\mathbb{Z}$ for $1\leq i\leq x$ and $1\leq j\leq y$. Further let $M$ be the $d-1$ by $d$ matrix where
$$m_{ij}=\left\{\begin{array}{ll}
2&\text{if } i=j\\
-1&\text{if } j=i+1\text{ or }j=i-1\\
0&\text{otherwise}
\end{array}\right..
$$
Then the $d+x-1$ by $d+y-2$ matrix
$$\left[\begin{tabular}{ccc|ccccc}
\multicolumn{5}{c|}{\multirow{4}{*}{{\large$M$}}}& $0$ & $\cdots$ & $0$ \\
\multicolumn{5}{c|}{}& $0$ & $\cdots$ & $0$ \\
\multicolumn{5}{c|}{}&$\vdots$ &$\ddots$ &$\vdots$ \\
\multicolumn{5}{c|}{}& $0$ & $\cdots$ & $0$ \\
\hline
$0$ & $\cdots$ & $0$ & $t_{1,1}$ & $t_{1,2}$& $t_{1,3}$ & $\cdots$ & $t_{1,y}$\\
$\vdots$ & $\ddots$ & $\vdots$ & $\vdots$ & $\vdots$& $\vdots$ & $\ddots$ & $\vdots$\\
$0$ & $\cdots$ & $0$ & $t_{x,1}$ & $t_{x,2}$& $t_{x,3}$ & $\cdots$ & $t_{x,y}$
\end{tabular}\right]$$
reduces (under operations invertible over $\mathbb{Z}$) to
$$\left[\begin{tabular}{ccc|ccccc}
\multicolumn{3}{c|}{\multirow{3}{*}{{\large$\mathbb{I}_{d-2}$}}}& $0$& \multicolumn{1}{c|}{$0$} & $0$ & $\cdots$ & $0$ \\
\multicolumn{3}{c|}{}&$\vdots$&\multicolumn{1}{c|}{$\vdots$}&$\vdots$ &$\ddots$ &$\vdots$ \\
\multicolumn{3}{c|}{}& $0$ & \multicolumn{1}{c|}{$0$}& $0$ & $\cdots$ & $0$ \\
\cline{1-5}
$0$ & $\cdots$ & $0$& $d$ & \multicolumn{1}{c|}{$1-d$} & $0$& $\cdots$ & $0$ \\
\hline
$0$ & $\cdots$ & $0$ & $t_{1,1}$ & $t_{1,2}$& $t_{1,3}$& $\cdots$ & $t_{1,y}$\\
$\vdots$ & $\ddots$ & $\vdots$ & $\vdots$ & $\vdots$& $\vdots$ & $\ddots$ & $\vdots$\\
$0$ & $\cdots$ & $0$ & $t_{x,1}$ & $t_{x,2}$&$t_{x,3}$& $\cdots$ & $t_{x,y}$
\end{tabular}\right].$$
\end{lemma}
\begin{proof}
When $d=2$, the result is trivial.
Assume that the statement holds for $d=k$, and let $L(k+1)$ denote the matrix of the lemma with $d=k+1$. Then $L(k+1)$ reduces to
$$\left[\begin{tabular}{ccc|cccccc}
\multicolumn{3}{c|}{\multirow{3}{*}{{\large$\mathbb{I}_{k-2}$}}}& $0$ & $0$& $0$& $0$ & $\cdots$ & $0$ \\
\multicolumn{3}{c|}{}&$\vdots$&$\vdots$&$\vdots$&$\vdots$ &$\ddots$ &$\vdots$ \\
\multicolumn{3}{c|}{}& $0$ & $0$& $0$& $0$ & $\cdots$ & $0$ \\
\hline
$0$ & $\cdots$ & $0$& $k$ & $1-k$ & $0$& $0$& $\cdots$ & $0$ \\
$0$ & $\cdots$ & $0$& $-1$ & $2$ & $-1$& $0$& $\cdots$ & $0$ \\
$0$ & $\cdots$ & $0$& $0$ & $t_{1,1}$ & $t_{1,2}$& $t_{1,3}$& $\cdots$ & $t_{1,y}$\\
$\vdots$ & $\ddots$ & $\vdots$& $\vdots$ & $\vdots$ & $\vdots$ & $\ddots$ & $\vdots$\\
$0$ & $\cdots$ & $0$ & $0$& $t_{x,1}$ & $t_{x,2}$&$t_{x,3}$& $\cdots$ & $t_{x,y}$
\end{tabular}\right]$$
Adding $k-1$ copies of Row $k$ to Row $k-1$ followed by adding one copy of the updated Row $k-1$ to Row $k$ yields a $1$ in entry $(k-1,k-1)$ and this is now the only non-zero in Column $k-1$. The result follows.
\end{proof}
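For example, when $d=3$, $x=1$ and $y=2$ the matrix of the lemma is
$$\begin{bmatrix}
2 & -1 & 0\\
-1 & 2 & -1\\
0 & t_{1,1} & t_{1,2}
\end{bmatrix};$$
adding Row $2$ to Row $1$, then the new Row $1$ to Row $2$, and finally clearing the off-diagonal entries of Row $1$ by column operations (which leaves the $t_{1,j}$ unchanged, as Column $1$ is zero below Row $1$) yields
$$\begin{bmatrix}
1 & 0 & 0\\
0 & 3 & -2\\
0 & t_{1,1} & t_{1,2}
\end{bmatrix},$$
in agreement with the statement.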
\begin{lemma}
\label{lem:ab+bc+ac+1}
Suppose that $1\leq a,b,c$. Then there exists a spherical latin bitrade whose canonical group is isomorphic to
$\mathbb{Z}_{ab+bc+ac+1}\oplus\mathbb{Z}_{ab+bc+ac+1}.$
\end{lemma}
\begin{proof}
Without loss of generality we may assume that $1\leq a\leq b\leq c$. Define $D_{a,b,c}$ to be the digraph of order $a+b+c+1$ with vertex set $\{\alpha_1,\alpha_2,\ldots,\alpha_a,\beta_1,\beta_2,\ldots,\beta_b,\gamma_1,\gamma_2,\break\ldots,\gamma_c,\delta\}$ and
\begin{itemize}[noitemsep]
\item for $1\leq i\leq a-1$ an arc from $\alpha_i$ to $\alpha_{i+1}$ and an arc from $\alpha_{i+1}$ to $\alpha_{i}$;
\item for $1\leq i\leq b-1$ an arc from $\beta_i$ to $\beta_{i+1}$ and an arc from $\beta_{i+1}$ to $\beta_{i}$;
\item for $1\leq i\leq c-1$ an arc from $\gamma_i$ to $\gamma_{i+1}$ and an arc from $\gamma_{i+1}$ to $\gamma_{i}$;
\item for each $\iota\in\{\alpha,\beta,\gamma\}$ an arc from $\delta$ to $\iota_1$ and from $\iota_1$ to $\delta$; and
\item $a$ arcs from $\beta_b$ to $\gamma_c$ and from $\gamma_c$ to $\beta_b$; $b$ arcs from $\alpha_a$ to $\gamma_c$ and from $\gamma_c$ to $\alpha_a$; and $c$ arcs from $\alpha_a$ to $\beta_b$ and from $\beta_b$ to $\alpha_a$.
\end{itemize}
Note that $D_{a,b,c}$ has a directed Eulerian spherical embedding, see Figure \ref{fig:rank2}, and that $D_{a,b,c}$ satisfies the connectivity conditions of Proposition \ref{prop:simple}. Hence, there exists a spherical latin bitrade whose canonical group is isomorphic to $\mathcal{S}(D_{a,b,c})$.
\begin{figure}[!h]
\begin{center}
\scalebox{0.9}
{\begin{tikzpicture}[fill=gray!50, scale=1.1, vertex/.style={circle,inner sep=2,fill=black,draw}, dot/.style={circle,inner sep=0.7,fill=black,draw}]
\coordinate (aa) at (4,6);
\coordinate (aa1) at (4,5);
\coordinate (a1) at (4,3);
\coordinate (bb) at (0,0);
\coordinate (bb1) at (1,0.5);
\coordinate (b1) at (3,1.5);
\coordinate (cc) at (8,0);
\coordinate (cc1) at (7,0.5);
\coordinate (c1) at (5,1.5);
\coordinate (d) at (4,2);
\draw [thick,->-=.6, -<-=.4] (bb) -- (bb1);
\draw [thick,->-=.6, -<-=.4] (d) -- (b1);
\draw [thick,->-=.6, -<-=.4] (aa) -- (aa1);
\draw [thick,->-=.6, -<-=.4] (d) -- (a1);
\draw [thick,->-=.6, -<-=.4] (cc) -- (cc1);
\draw [thick,->-=.6, -<-=.4] (d) -- (c1);
\draw [thick,->-=.6, -<-=.4] (bb) -- (cc);
\draw [thick,->-=.6, -<-=.4] (aa) to [bend right=20](bb);
\draw [thick,->-=.6, -<-=.4] (cc) to [bend right=20](aa);
\draw [thick] (aa1) -- (4,4.5);
\draw [thick] (a1) -- (4,3.5);
\draw [thick] (bb1) -- (1.5,0.75);
\draw [thick] (b1) -- (2.5,1.25);
\draw [thick] (cc1) -- (6.5,0.75);
\draw [thick] (c1) -- (5.5,1.25);
\node at (aa) [vertex,label=north:$\alpha_{a}$]{};
\node at (aa1) [vertex,label=east:$\alpha_{a-1}$]{};
\node at (a1) [vertex,label=east:$\alpha_{1}$]{};
\node at (bb) [vertex,label=south west :$\beta_{b}$]{};
\node at (bb1) [vertex,label=north:$\beta_{b-1}$]{};
\node at (b1) [vertex,label=north:$\beta_{1}$]{};
\node at (cc) [vertex,label=south east:$\gamma_{c}$]{};
\node at (cc1) [vertex,label=north:$\gamma_{c-1}$]{};
\node at (c1) [vertex,label=north:$\gamma_{1}$]{};
\node at (d) [vertex,label=north east:$\delta$]{};
\node at (4,4.15) [dot]{};
\node at (4,4) [dot]{};
\node at (4,3.85) [dot]{};
\node at (6,1) [dot]{};
\node at (6.2,0.9) [dot]{};
\node at (5.8,1.1) [dot]{};
\node at (2,1) [dot]{};
\node at (1.8,0.9) [dot]{};
\node at (2.2,1.1) [dot]{};
\node at (4,-0.25) {$a$};
\node at (7,3.5) {$b$};
\node at (1,3.5) {$c$};
\end{tikzpicture}}
\end{center}
\caption{A directed Eulerian spherical embedding of $D_{a,b,c}$.}
\label{fig:rank2}
\end{figure}
Suppose that we order the vertices of $D_{a,b,c}$ by
$$\gamma_1,\gamma_2,\ldots,\gamma_{c-2},\gamma_{c-1},\gamma_{c}, \beta_1,\beta_2,\ldots,\beta_{b-2},\beta_{b-1},\beta_b,\alpha_1,\alpha_2,\ldots,\alpha_{a-2},\alpha_{a-1}, \alpha_a,\delta.$$
Let $\mathcal{L}'(D_{a,b,c})$ be the reduced asymmetric Laplacian for $D_{a,b,c}$ obtained by removing the row and column corresponding to $\delta$.
When $a=b=c=1$, $\mathcal{L}'(D_{1,1,1})=${\footnotesize$\begin{bmatrix}
3&-1&-1\\
-1&3&-1\\
-1&-1&3
\end{bmatrix}$}, which reduces to {\footnotesize$\begin{bmatrix}
1&0&0\\
0&4&0\\
0&0&4
\end{bmatrix}$}.
Suppose that $2\leq a,b,c$. Consider $\mathcal{L}'(D_{a,b,c})$; via three applications of Lemma \ref{lem:121-reduction}, and setting $a+b+c+1=t$, this reduces to
$$\left[\begin{tabular}{ccc|cc|ccc|cc|ccc|cc}
\multicolumn{3}{c|}{\multirow{3}{*}{{\large$\mathbb{I}_{c-2}$}}}& $0$& $0$ & $0$ & $\cdots$ & $0$ & $0$& $0$& $0$ & $\cdots$ & $0$ & $0$& $0$\\\multicolumn{3}{c|}{}&$\vdots$&$\vdots$&$\vdots$ &$\ddots$ &$\vdots$&$\vdots$&$\vdots$&$\vdots$&$\ddots$&$\vdots$&$\vdots$&$\vdots$\\
\multicolumn{3}{c|}{}& $0$ & $0$& $0$ & $\cdots$ & $0$ &$0$&$0$&$0$& $\cdots$ & $0$ & $0$& $0$\\
\hline
$0$&$\cdots$&$0$&$c$&$1-c$&$0$&$\cdots$&$0$&$0$&$0$&$0$& $\cdots$ & $0$ & $0$& $0$\\
$0$&$\cdots$&$0$&$-1$&$t-c$&$0$&$\cdots$&$0$&$0$&$-a$& $0$ & $\cdots$ & $0$& $0$&$-b$\\
\hline
$0$&$\cdots$&$0$&$0$&$0$&\multicolumn{3}{c|}{\multirow{3}{*}{{\large$\mathbb{I}_{b-2}$}}}& $0$& $0$ & $0$& $\cdots$ & $0$ & $0$& $0$ \\
$\vdots$&$\ddots$&$\vdots$&$\vdots$&$\vdots$&\multicolumn{3}{c|}{}&$\vdots$&$\vdots$&$\vdots$& $\ddots$ & $\vdots$ & $\vdots$& $\vdots$\\
$0$&$\cdots$&$0$&$0$&$0$&\multicolumn{3}{c|}{}& $0$ & $0$& $0$& $\cdots$ & $0$ & $0$& $0$ \\
\hline
$0$&$\cdots$&$0$&$0$&$0$&$0$&$\cdots$&$0$&$b$&$1-b$& $0$ & $\cdots$ & $0$ &$0$&$0$\\
$0$&$\cdots$&$0$&$0$&$-a$&$0$&$\cdots$&$0$&$-1$&$t-b$& $0$ & $\cdots$ & $0$ &$0$&$-c$\\
\hline
$0$&$\cdots$&$0$&$0$&$0$& $0$& $\cdots$ & $0$ & $0$& $0$ & \multicolumn{3}{c|}{\multirow{3}{*}{{\large$\mathbb{I}_{a-2}$}}} & $0$& $0$ \\
$\vdots$&$\ddots$&$\vdots$&$\vdots$&$\vdots$& $\vdots$& $\ddots$ & $\vdots$ &$\vdots$&$\vdots$& \multicolumn{3}{c|}{} & $\vdots$& $\vdots$\\
$0$&$\cdots$&$0$&$0$&$0$& $0$& $\cdots$ & $0$ & $0$ & $0$& \multicolumn{3}{c|}{} & $0$& $0$ \\
\hline
$0$&$\cdots$&$0$&$0$&$0$&$0$&$\cdots$&$0$&$0$&$0$& $0$& $\cdots$ & $0$ & $a$ &$ 1-a$\\
$0$&$\cdots$&$0$&$0$&$-b$&$0$&$\cdots$&$0$&$0$&$-c$&$0$&$\cdots$&$0$&$-1$&$t-a$\\
\end{tabular}\right].$$
Computing the Smith Normal Form of
$$\begin{bmatrix}
c&1-c&0&0&0&0\\
-1&t-c&0&-a&0&-b\\
0&0&b&1-b&0&0\\
0&-a&-1&t-b&0&-c\\
0&0&0&0&a&1-a\\
0&-b&0&-c&-1&t-a
\end{bmatrix}$$
we have that $\mathcal{S}(D_{a,b,c})\cong \mathbb{Z}_{ab+bc+ac+1}\oplus\mathbb{Z}_{ab+bc+ac+1}$.
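For example, when $a=b=c=2$ (so that $t=7$ and $ab+bc+ac+1=13$), the displayed matrix has determinant $169=13^2=(ab+bc+ac+1)^2$, consistent with $\mathcal{S}(D_{2,2,2})\cong\mathbb{Z}_{13}\oplus\mathbb{Z}_{13}$.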
The cases where $1=a<b\leq c$ and $1=a=b< c$ follow similarly.
\end{proof}
\begin{theorem}
\label{thm:rank2}
For $n, m\geq 2$, with one exception and a further three possible exceptions, there exists a spherical latin bitrade whose canonical group is isomorphic to $\mathbb{Z}_{n}\oplus\mathbb{Z}_{m}$.
The exceptions are as follows. There does not exist a spherical latin bitrade with canonical group isomorphic to $\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}$.
There may or may not exist a spherical latin bitrade with canonical group isomorphic to $\mathbb{Z}_{3}\oplus\mathbb{Z}_{3}$ or $\mathbb{Z}_{5}\oplus\mathbb{Z}_{5}$ or $\mathbb{Z}_{r}\oplus\mathbb{Z}_{r}$ for some $r$ greater than $10^{11}$.
Finally, if we assume the Generalised Riemann Hypothesis, then there does exist a spherical latin bitrade with canonical group isomorphic to $\mathbb{Z}_{r}\oplus\mathbb{Z}_{r}$ for this final value of $r$.
\end{theorem}
\begin{proof}
If $n$ and $m$ are coprime, then $\mathbb{Z}_n\oplus\mathbb{Z}_m\cong\mathbb{Z}_{nm}$ and the result follows from \cite{CavWan} (it also follows from Lemma \ref{lem:composites} with $k=1$). So assume that $n$ and $m$ are not coprime, that is we are in the rank 2 case.
Suppose that $n\neq m$. If $n$ and $m$ are both composite, then the result follows from Theorem \ref{thm:composites}. So suppose that $n$ is prime and $m$ is composite. Then as $n$ and $m$ are not coprime $m=kn$ for some $k>1$ and the result follows from Theorem \ref{thm:primes_and_composites}.
So, suppose that $n=m$. If there exist $a,b,c\geq 1$ such that $ab+ac+bc+1=n$, then by Lemma \ref{lem:ab+bc+ac+1} there exists a spherical latin bitrade whose canonical group is isomorphic to $\mathbb{Z}_n\oplus\mathbb{Z}_n$. In \cite{BorCho} Borwein and Choi proved that there are at most nineteen integers that are not of the form $ab+ac+bc+1$ where $a,b,c\geq 1$. The first eighteen are: $2$, $3$, $5$, $7$, $11$, $19$, $23$, $31$, $43$, $59$, $71$, $79$, $103$, $131$, $191$, $211$, $331$ and $463$. The nineteenth is greater than $10^{11}$ and is not an exception if the Generalised Riemann Hypothesis is assumed. For $n\in\{7,11,19,23,31,43,59,71,79,103,131,191,211,331,463\}$ directed Eulerian spherical embeddings whose underlying digraphs satisfy the connectivity conditions of Proposition \ref{prop:simple} and with abelian sandpile groups isomorphic to $\mathbb{Z}_n\oplus\mathbb{Z}_n$ are given in Figures \ref{fig:6m+5} and \ref{fig:6m+1}.\footnote{The families indicated in Figures \ref{fig:6m+5} and \ref{fig:6m+1} generalise to give abelian sandpile groups isomorphic to $\mathbb{Z}_{6m+5}\oplus\mathbb{Z}_{6m+5}$, for all $m\geq 1$, and $\mathbb{Z}_{3m+1}\oplus\mathbb{Z}_{3m+1}$, for all $m\geq 1$, respectively. However, we do not require these more general results to prove Theorem \ref{thm:rank2}.}
\end{proof}
\begin{figure}
\begin{center}
\scalebox{0.9}
{\begin{tikzpicture}[fill=gray!50, scale=1, vertex/.style={circle,inner sep=2,fill=black,draw}, dot/.style={circle,inner sep=0.7,fill=black,draw}]
\coordinate (a1) at (2,1.5);
\coordinate (am1) at (4,1.5);
\coordinate (am) at (5,1.5);
\coordinate (b) at (0,2.5);
\coordinate (c) at (2,3);
\coordinate (d) at (0,0.5);
\coordinate (e) at (2,0);
\coordinate (f) at (1,1.5);
\draw [thick,->-=.6, -<-=.4] (f) -- (c);
\draw [thick,->-=.6, -<-=.4] (b) -- (c);
\draw [thick,->-=.6, -<-=.4] (b) -- (d);
\draw [thick,->-=.6, -<-=.4] (b) -- (f);
\draw [thick,->-=.6, -<-=.4] (d) -- (f);
\draw [thick,->-=.6, -<-=.4] (d) -- (e);
\draw [thick,->-=.6, -<-=.4] (e) -- (f);
\draw [thick,->-=.6, -<-=.4] (a1) -- (f);
\draw [thick,->-=.6, -<-=.4] (am1) -- (am);
\draw [thick,->-=.6, -<-=.4] (c) to [bend left=20](am);
\draw [thick,->-=.6, -<-=.4] (e) to [bend right=20](am);
\draw [thick] (a1) -- (2.5,1.5);
\draw [thick] (am1) -- (3.5,1.5);
\node at (a1) [vertex,label=north:$\alpha_{1}$]{};
\node at (am1) [vertex,label=north:$\alpha_{m-1}$]{};
\node at (am) [vertex,label=east:$\alpha_{m}$]{};
\node at (b) [vertex]{};
\node at (c) [vertex]{};
\node at (d) [vertex]{};
\node at (e) [vertex]{};
\node at (f) [vertex]{};
\node at (2.85,1.5) [dot]{};
\node at (3,1.5) [dot]{};
\node at (3.15,1.5) [dot]{};
\node at (-0.5,1.5){$m$};
\end{tikzpicture}}
\end{center}
\caption{
Directed Eulerian spherical embedding of a digraph with abelian sandpile group isomorphic to $\mathbb{Z}_{6m+5}\oplus\mathbb{Z}_{6m+5}$, when $m\in\{1,3,9,11,21,31\}$.
}
\label{fig:6m+5}
\end{figure}
\begin{figure}
\begin{center}
\scalebox{0.9}
{\begin{tikzpicture}[fill=gray!50, scale=1, vertex/.style={circle,inner sep=2,fill=black,draw}, dot/.style={circle,inner sep=0.7,fill=black,draw}]
\coordinate (a) at (1.5,0.75);
\coordinate (b) at (3,0);
\coordinate (c) at (1.5,2);
\coordinate (d) at (0,0);
\draw [thick,->-=.6, -<-=.4] (a) -- (b);
\draw [thick,->-=.6, -<-=.4] (a) -- (c);
\draw [thick,->-=.6, -<-=.4] (a) -- (d);
\draw [thick,->-=.6, -<-=.4] (b) -- (c);
\draw [thick,->-=.6, -<-=.4] (d) -- (c);
\draw [thick,->-=.6, -<-=.4] (d) -- (b);
\node at (a) [vertex]{};
\node at (b) [vertex]{};
\node at (c) [vertex]{};
\node at (d) [vertex]{};
\node at (1.5,-0.25){$m$};
\node at (2.6,1){$m$};
\node at (0.4,1){$m$};
\end{tikzpicture}}
\end{center}
\caption{Directed Eulerian spherical embedding of a digraph with abelian sandpile group isomorphic to $\mathbb{Z}_{3m+1}\oplus\mathbb{Z}_{3m+1}$, when $m\in\{2,6,10,14,26,34,70,110,154\}$.
}
\label{fig:6m+1}
\end{figure}
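The family in Figure \ref{fig:6m+1} is easily verified: reading the arc multiplicities from the figure and removing the row and column corresponding to the central vertex (the vertex joined to each of the other three by a single pair of opposing arcs), the reduced asymmetric Laplacian is
$$(3m+1)\,\mathbb{I}_3 - m\,J_3 ~=~ \begin{bmatrix}
2m+1 & -m & -m\\
-m & 2m+1 & -m\\
-m & -m & 2m+1
\end{bmatrix},$$
where $J_3$ denotes the $3\times 3$ all-ones matrix. Its determinant is $(3m+1)^2$, the gcd of its entries is $1$, and every $2\times 2$ minor is divisible by $3m+1$ (with both $m(3m+1)$ and $(m+1)(3m+1)$ occurring as minors), so its Smith Normal Form is $\mathrm{diag}(1,3m+1,3m+1)$, giving $\mathbb{Z}_{3m+1}\oplus\mathbb{Z}_{3m+1}$ for every $m\geq 1$.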
\subsection{Questions}
We conclude with three questions for future consideration. The first two address the remaining cases to be considered in order to resolve Question \ref{ques:main}.
\begin{question}
Let $p\neq 2$ be a prime, $n\geq 3$ if $p>7$ and $n\geq 2$ if $p=3$ or $5$; does there exist a spherical latin bitrade with canonical group isomorphic to $\mathbb{Z}_p^n$?
\end{question}
\begin{question}
Let $p$ be a prime and let $2\leq a_1,a_2,\ldots,a_k$. If $n>1+2\sum_{i=1}^k(a_i-1)$, does there exist a spherical latin bitrade with canonical group isomorphic to $$\mathbb{Z}_p^n\oplus\left(\bigoplus_{i=1}^k\mathbb{Z}_{pa_i}\right)?$$
\end{question}
Our final question arises naturally in response to the non-existence result Theorem \ref{thm:non-existence}. For a separated, connected latin bitrade $(W,B)$ of genus greater than zero, the group $\mathcal{A}_W$ is isomorphic to $\mathbb{Z}\oplus\mathbb{Z}\oplus \mathcal{C}$, but the minimal abelian representation (if one exists) is now a quotient of $\mathcal{C}$, \cite[Theorem 6]{BlackburnTMcC}. Hence, we ask the following.
\begin{question}
Does there exist a family of separated, connected latin bitrades for which the minimum abelian representation of one (or both) of the partial latin squares is isomorphic to $\mathbb{Z}_2^k$ for arbitrary $k$? If so, does such a family exist for a fixed genus?
\end{question}
\subsection*{Acknowledgements}
The authors express their thanks to the London Mathematical Society for a grant which enabled this research to be undertaken.
Understanding the properties of the dark sector represents one of the
great experimental and theoretical challenges facing physics today.
Indeed, we even lack insight into such fundamental questions as
whether the dark sector
is minimal ({\it e.g.}\/, consisting of only one or a few dark particle species)
or non-minimal ({\it e.g.}\/, consisting of many particle species).
A pressing phenomenological question, therefore,
is to determine how --- and
to what degree it is even possible ---
to experimentally distinguish non-minimal dark sectors
from their more traditional, minimal counterparts.
This is especially true for scenarios within the
Dynamical Dark Matter (DDM)~\cite{DDM1,DDM2} framework --- a framework for
dark-matter physics in which the dark-matter ``candidate'' is an ensemble
consisting of a potentially vast number of individual constituent particle
species exhibiting a variety of masses, decay widths, and cosmological
abundances.
Such DDM dark sectors give rise to
collective phenomena that transcend
expectations based on traditional dark-matter frameworks.
For example, the phenomenological viability of such a DDM ensemble as
a representation of the dark sector
rests not on the stability of each of these species
individually, but rather on a subtle balancing between decay
widths and cosmological abundances across the ensemble as a whole.
In many DDM scenarios, the ensemble constituents share the same or similar quantum numbers.
In such cases, the detection channels through which one might hope to
find evidence of such an ensemble are essentially identical to those in which one
would seek evidence of a traditional dark-matter candidate with the
identical quantum numbers.
However, even if the ensemble constituents share similar quantum numbers, they
generically differ in their masses and couplings.
As a result,
it is often possible
to distinguish DDM ensembles and other non-minimal dark sectors experimentally
by analyzing the distributions of relevant kinematic variables.
At direct-detection experiments, for example, the relevant distribution
is the recoil-energy spectrum of the recoiling nucleus~\cite{DDMDD}.
Likewise, at indirect-detection experiments,
the relevant kinematic distributions are the differential flux spectra
of the SM particles which can be produced via dark-matter annihilation or decay~\cite{DDMAMS}.
Finally, at colliders,
the relevant distributions are those corresponding to
a number of well-chosen kinematic variables formed from the momenta of Standard-Model (SM)
particles produced alongside the dark-matter particles.
The information contained in the full shapes of these distributions can be used to
distinguish DDM from traditional dark-matter scenarios~\cite{DDMLHC1,DDMLHC2},
and indeed can be used to
distinguish a variety of other non-minimal dark-matter scenarios as well~\cite{doojin1,doojin2,doojin3,doojin4}.
A variety of cosmic-ray particles --- among them electrons, positrons, photons,
antiprotons, neutrinos, and antideuterons --- can potentially yield information about
the nature of the dark matter. For example, it has been shown that DDM ensembles can give
rise to characteristic signatures~\cite{DDMAMS} in the flux spectra of electrons and
positrons which can account for the positron excess observed by
PAMELA~\cite{PAMELA}, AMS-02~\cite{AMS}, and a
host of other experiments --- most notably without predicting an abrupt downturn in the positron
fraction at high energies. However, of all cosmic-ray particles whose
flux spectra we have the ability to measure, photons are the particles that afford the greatest
potential for probing the structure of the dark sector.
This is true primarily for two reasons.
First, the spectrum of
photons injected by a particular source is deformed
far less by interactions with the interstellar medium (ISM) than are the spectra associated
with most other cosmic-ray particles. Thus, features imprinted on the photon spectrum at
injection --- features which might be indicative of dark-sector non-minimality --- are
not washed out as a result
of their propagation through this medium. Second, unlike neutrinos (which are also largely
unaffected by propagation through the ISM), photons are easy to detect and their energies
and directions can be measured with great precision.
It nevertheless remains true
that identifying such signatures at indirect-detection
experiments is generally more challenging for DDM scenarios
than for other, more traditional dark-matter scenarios. This is because
the injection spectra of photons and other cosmic-ray particles from
dark-matter annihilation or decay within DDM scenarios are subject to an additional ``smearing''
effect due to the partitioning of the total dark-matter abundance across an
ensemble of constituent particles with a range of masses. Thus, the characteristic
imprints which these ensembles leave in the corresponding flux spectra
typically take the form of continuum features rather than sharp peaks or lines.
This is especially true for cases in which the splittings between the masses of ensemble constituents are
small. Disentangling continuum features from astrophysical backgrounds is
generally significantly more challenging than disentangling sharp peaks or lines.
Moreover, even in situations in which
such features can be robustly identified, it is often impossible to conclusively determine
whether dark matter or some more mundane astrophysical process is responsible.
In this paper, we identify an unambiguous signature of DDM (and of non-minimal dark
sectors more generally) which
can serve to overcome these issues
and potentially be observed at gamma-ray telescopes
sensitive to photons with energies in the $\mathcal{O}(1 - 100)~{\rm MeV}$ range.
This signature arises in cases in
which each of the ensemble constituents annihilates or decays predominately into
a primary photon and a neutral pion~\cite{Boddy:2015efa}, the latter subsequently decaying
into a pair of secondary photons.
In general, the primary photons give rise to a line-like feature, while the
secondary photons give rise to a characteristic box-like feature whose width
is related to the energy (or boost) of the decaying pion.
(We review the kinematics of these processes in the Appendix.)
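In brief, for a two-body process with total center-of-mass energy $\sqrt{s}$, the primary photon is monochromatic with energy
\begin{equation}
E_\gamma^{\rm line} ~=~ \frac{s - m_{\pi^0}^2}{2\sqrt{s}}~,
\end{equation}
while the two secondary photons from the decay of the boosted pion populate a flat box spanning
\begin{equation}
\frac{m_{\pi^0}^2}{2\sqrt{s}} ~\leq~ E_\gamma ~\leq~ \frac{\sqrt{s}}{2}~,
\end{equation}
so that the width of the box is precisely the energy of the line.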
In the case of a single dark-matter species,
this combination of a line-like feature and a box-like feature
is notable and distinctive.
Such features have previously been studied,
{\it e.g.}\/, in Refs.~\cite{Ibarra:2012dw, Boddy:2015efa, Garcia-Cely:2016pse}.
In the case of a DDM ensemble, by contrast,
the primary photons give rise to a
{\it set}\/ of line-like features
while the secondary photons give rise to a {\it set}\/ of box-like features.
In this paper, we are particularly interested in the regime in which the splitting
between constituent masses is small compared to the energy resolution of the telescope.
In such cases, the set of line-like features will appear as a single effective {\it continuum}\/ spectral feature.
Likewise, the pion energies will also form an effective continuum which then produces a continuum of box-like features.
Note that in this context the pion energies will form an effective continuum because these pions are produced via
the direct annihilation or decay
of the different DDM ensemble components which themselves exhibit an effective continuum of masses.
This is therefore somewhat different than the continuous pion spectra
that might emerge through multiple sequential decays,
as in Refs.~\cite{Kim:2015usa,Kim:2015gka},
or via $n$-body decays with $n\geq 3$.
Taken in isolation, each of these two spectral features
reveals information about the properties of the DDM ensemble.
However, what makes this
signature particularly advantageous from the perspective of distinguishing
between minimal and non-minimal DDM dark sectors is that the spectral shapes of these two
features are {\it correlated}\/. Thus, a comparison between the information independently extracted
from these two continuum features can provide a powerful consistency check that
they indeed have a common underlying origin in terms of a DDM ensemble.
Indeed, it was shown in Ref.~\cite{Boddy:2015efa} that for a single-particle dark-matter candidate
which decays into this same final state, correlations between the properties of
the line and box features in the gamma-ray spectrum could be used to reconstruct the
mass of the dark-matter particle. By contrast, in a DDM context, we shall see that the correlations
between the corresponding continuum features can be used to reconstruct the fundamental
relations which describe how the masses, abundances, and lifetimes of the
ensemble constituents scale across the ensemble as a whole.
This paper is organized as follows. In Sect.~\ref{sec:model},
we discuss the circumstances under which the constituents of a DDM ensemble
annihilate or decay predominately to a $\gamma \pi^0$ final state.
We also establish the conventions we shall use for parametrizing such an ensemble.
In Sect.~\ref{sec:spectrum}, we then calculate the contribution to the differential
photon flux which arises from dark-matter annihilation or decay in such scenarios.
We also discuss the two distinctive features which arise in the flux spectrum and
examine how the spectral shapes of these features, and the degree to which they
overlap, vary as a function of the parameters which characterize the ensemble.
In Sect.~\ref{sec:prospects}, we investigate the prospects of identifying these spectral features
in the diffuse galactic gamma-ray spectrum and in the gamma-ray spectra
of dwarf spheroidal galaxies at the next generation of gamma-ray telescopes,
and in Sect.~\ref{sec:measurement} we examine
the degree to which the underlying parameters which characterize the DDM ensemble
can be extracted from the spectral shapes of these features.
Finally, in Sect.~\ref{sec:conclusions},
we summarize our conclusions
and provide an outlook for future work,
while in the Appendix we review the kinematics
leading to line-like and box-like features in the photon spectrum.
\section{DDM Ensembles and Their Decays to Photons and Pions\label{sec:model}}
Within the context of the DDM framework~\cite{DDM1,DDM2}, the dark sector comprises a potentially vast ensemble
of individual particle species $\phi_n$ whose cosmological abundances $\Omega_n$
are balanced against their decay widths $\Gamma_n$ in such a way as to ensure
consistency with observational data.
It turns out that DDM ensembles arise naturally in a variety
of well-motivated extensions of the SM; these include
scenarios which involve extra
spacetime dimensions~\cite{DDM1,DDM2,DDMAxion},
large spontaneously-broken symmetry groups~\cite{RandomMatrixDDM},
confining hidden-sector gauge groups~\cite{HagedornDDM},
or bulk physics in open string theories~\cite{HagedornDDM,bhupal}.
In what follows, we adopt the convention
that the index $n= 0, 1, 2, \ldots, N$ labels the
particles in order of increasing mass.
Our principal aim in this paper is to study
the astrophysical gamma-ray signatures associated with DDM ensembles in which the
ensemble constituents annihilate or decay predominately into a $\gamma \pi^0$
final state (with a subsequent pion decay $\pi^0\to \gamma\gamma$),
and to determine the degree to which information about
the ensemble can be extracted from these signatures.
Such final states can arise in
DDM scenarios in which the $\phi_n$ couple directly to quarks via an effective contact
operator~\cite{Boddy:2015efa}. The structure of this operator can be inferred from the
fact that the final state $\gamma \pi^0$ is odd under charge-conjugation.
Under the assumption that the $SU(2)$ weak interaction can be neglected and that the fundamental
interactions between the ensemble constituents and the SM fields are $C$-invariant, the
initial state must therefore be $C$-odd as well. One possible operator
structure which possesses the appropriate symmetry properties is
\begin{equation}
\mathcal{O}_n ~=~ c_n \, B^\mu_n\, \bar q \gamma_\mu q~
\end{equation}
where $c_n$ is an operator coefficient and where $B^\mu_n$ is a $C$-odd
quantity involving the $\phi_n$ fields alone. One situation in which
an operator of this sort arises is that in which the $\phi_n$ are spin-1 fields $\phi_n^\mu$
and corresponds to the case in which $B^\mu_n$ is identified with the field $\phi_n^\mu$
itself. In this case, the operator gives rise to decay processes of the form
$\phi_n \rightarrow \gamma \pi^0$. Another situation in which such an operator
arises is that in which $B^\mu_n = \mathcal{J}_n^\mu / \Lambda^2$, where
$\mathcal{J}_n^\mu$ is an approximately conserved current associated with the
particle number of the ensemble constituent $\phi_n$. In this case,
the operator gives rise to annihilation processes of the form
$\phi_n^\dagger \phi_n \rightarrow \gamma \pi^0$. In both of these cases,
the fundamental interaction between the
dark ensemble constituents $\phi_n$ and SM quarks gives rise to an
effective operator of the form~\cite{Boddy:2015efa}
\begin{equation}
\mathcal{O}_{n,\textrm{eff}} ~=~ \frac{e \,c_n}{16 \pi^2 f_\pi }
B_{\mu,n}\, F_{\nu\rho}\,
(\partial_{\sigma}\pi^0)\, \epsilon^{\mu\nu\rho\sigma}~
\end{equation}
in the low-energy confined phase of the theory.
We have shown that there exists a self-consistent mechanism through which the
constituents of a DDM ensemble can be coupled to the photon and neutral-pion fields.
However, whether or not processes resulting in a $\gamma \pi^0$ final state dominate
the decay width or annihilation cross-section of a given $\phi_n$ also depends
on the center-of-mass (CM) energy $\sqrt{s_n}$ associated with those processes. Since a
number of considerations imply that the velocities of dark-matter particles
within the halos of galaxies are non-relativistic, the CM energy for the
annihilation or decay of an ensemble constituent with mass $m_n$ is well
approximated by
\begin{equation}
\sqrt{s_n} ~\approx~ \begin{cases}
2m_n & \mbox{for annihilation}\\
m_n & \mbox{for decay}~.
\end{cases}
\label{sndef}
\end{equation}
Moreover, the assumption that the dark matter is non-relativistic also implies
that the CM frame for annihilating/decaying dark-matter particles is effectively
equivalent to the rest frame of the instrument which detects the annihilation/decay
products.
In the regime in which $\sqrt{s_n} < m_{\pi^0}$,
annihilation/decay to a photon and an on-shell $\pi^0$ is kinematically forbidden.
Annihilation/decay to a three-photon final state can still proceed in this regime
via an off-shell $\pi^0$, but processes of this sort do not give rise to the
same characteristic features in the photon spectrum. On the other hand, in the
regime in which $\sqrt{s_n} > 2m_{\pi^\pm}$, the annihilation/decay of $\phi_n$
to $\pi^+\pi^-$ is kinematically allowed, but the photons produced as
final-state radiation in conjunction with charged-pion production can contribute significantly to
the photon flux and overwhelm the contribution from $\gamma \pi^0$. Thus,
the range of CM energies for which
the $\gamma \pi^0$ channel provides the dominant contribution to the photon flux is given by
\begin{equation}
m_{\pi^0} < \sqrt{s_n} < 2 m_{\pi^\pm}~,
\label{snrange}
\end{equation}
corresponding to the dark-matter mass ranges
\begin{equation}
\begin{cases}
{\textstyle{1\over 2}} m_{\pi^0} < m_n < m_{\pi^\pm} & \mbox{for annihilation}\\
m_{\pi^0} < m_n < 2m_{\pi^\pm} & \mbox{for decay}~.
\end{cases}
\end{equation}
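For reference, inserting the measured pion masses
$m_{\pi^0}\approx 135.0$~MeV and $m_{\pi^\pm}\approx 139.6$~MeV,
these windows correspond numerically to
\begin{equation}
\begin{cases}
~\,67.5~{\rm MeV} ~\lesssim~ m_n ~\lesssim~ 139.6~{\rm MeV} & \mbox{for annihilation}\\
135.0~{\rm MeV} ~\lesssim~ m_n ~\lesssim~ 279.1~{\rm MeV} & \mbox{for decay}~.
\end{cases}
\end{equation}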
For simplicity, in what follows we shall focus on DDM ensembles in which the masses
of all of the ensemble constituents lie within this range.
We are therefore interested
in DDM ensembles in which the mass scale of the $\phi_n$ is of order
$m_n \sim \mathcal{O}(100)~{\rm MeV}$.~
Indeed, the collective contribution to the photon flux from the annihilation/decay of
any lighter constituents in the DDM ensemble is typically negligible unless the density of such states is enormous.
For an ensemble constituent within our chosen mass range, the $\gamma \pi^0$
channel generically yields the dominant contribution to the photon flux. The only other
two-body final states which are consistent with the symmetries
of the theory and kinematically accessible within the range in Eq.~(\ref{snrange})
are $\bar{\nu}\nu$, $e^+e^-$, and $\mu^+\mu^-$. The first of these is
irrelevant for photoproduction, while the contributions to the photon flux from
the other two are necessarily suppressed by additional factors of either $\alpha$ or
$s_n G_F$, where $G_F$ is the Fermi constant.
Consequently these processes will be comparatively insignificant
whenever the $\gamma \pi^0$ state is accessible. The contributions associated with
final states involving three or more SM particles are likewise suppressed.
In general, the underlying mass spectrum of our DDM ensemble depends on
the type of ensemble under study, and as such it can be arbitrary.
For concreteness, however, we shall focus on the case in
which the mass spectrum of our DDM ensemble
takes the generic form
\begin{equation}
m_n ~=~ m_0 + n^{\delta}\Delta m~
\label{mnspectrum}
\end{equation}
where $m_0$ is the mass of $\phi_0$ (the lightest of the $\phi_n$) and where the
mass splitting $\Delta m$ and scaling exponent $\delta$ are free parameters describing our
underlying DDM ensemble.
Indeed, many realistic DDM ensembles have mass spectra which follow exactly this generic
form.
Thus, the spectrum of corresponding CM energies takes the form
\begin{equation}
\sqrt{s_n} ~=~ \sqrt{s_0} + n^{\delta}\Delta(\sqrt{s})~
\label{snspectrum}
\end{equation}
where $\Delta(\sqrt{s}) = \Delta m$ for decay
and $\Delta(\sqrt{s}) = 2\Delta m$ for annihilation.
The splittings $\Delta(\sqrt{s_n}) \equiv \sqrt{s_{n+1}}- \sqrt{s_n}$ between the
CM energies for the annihilation/decay of adjacent ensemble states
are therefore given by
\begin{equation}
\Delta(\sqrt{s_n})= \left[(n+1)^{\delta}-n^{\delta} \right]\Delta (\sqrt{s})~.
\end{equation}
The case with $\delta=1$ is particularly interesting, occurring
when the $\phi_n$ are the modes of
a Kaluza-Klein tower. We shall therefore focus on this case in what follows.
For this value of $\delta$, the mass splitting
$m_{n+1}-m_n$ is uniform across the ensemble,
and $\Delta(\sqrt{s_n}) \equiv \Delta(\sqrt{s})$ for all $n$.
\section{Gamma-Ray Spectrum from DDM Annihilations/Decays\label{sec:spectrum}}
In this section, we examine the signal contribution to the differential photon flux
$d\Phi/dE_\gamma$ which arises in DDM scenarios in which the ensemble constituents
annihilate/decay to a $\gamma \pi^0$ final state (thereby producing a single
``primary'' photon), followed by a subsequent
decay $\pi^0\to \gamma\gamma$ (thereby producing two ``secondary'' photons). We begin with a
derivation of the general expression for this signal contribution, followed by
a discussion of the distinctive qualitative features in the flux spectrum which
arise in these scenarios.
Note that the kinematics of the $\phi_n\to \gamma \pi^0\to \gamma\gamma\gamma$ process
is reviewed in the Appendix.
\subsection{Differential photon flux: Quantitative results}
In order to derive an expression for the total differential photon flux $d\Phi/dE_\gamma$ coming
from annihilation and/or decay of the DDM ensemble, we begin by deriving
an expression for the photon flux $\Phi_n$ coming from each individual ensemble constituent.
This is not particularly difficult, as there are only two primary ingredients that enter
into such a calculation.
The first is the integrated
energy density $\rho_n$
(or squared energy density $\rho_n^2$)
of the $\phi_n$ component
along the line of sight:
\begin{equation}
J_n ~\equiv~ \int d\Omega \int_{\textrm{LOS}}d\ell \times\begin{cases}
\begin{array}{ll}
\rho_n^2 & \textrm{ for annihilation} \\
\rho_n & \textrm{ for decay~,}
\end{array} \end{cases}
\end{equation}
where the solid-angle integral $\int d\Omega$ extends over our region of interest on the sky.
The second ingredient, by contrast, is the
annihilation/decay rate $R_n$ of this component into photons:
the decay rate for the $\phi_n$ component is nothing but $\Gamma_n$, while
the annihilation rate is given by
$\langle \sigma_n v \rangle/4m_n$
where
$\langle \sigma_n v \rangle$
is the thermally-averaged cross section
for the annihilation process $\phi^\dagger_n\phi_n \rightarrow \gamma \pi^0$.
Putting the pieces together, the resulting photon flux is then given by
$\Phi_n = {\cal N}_n (J_n/4\pi) (R_n/m_n)$, where
${\cal N}_n\equiv {\cal N}_n^{(p)} + {\cal N}_n^{(s)}=3$
is the total number of primary plus secondary photons produced via the annihilation/decay
of each $\phi_n$.
{\it A priori}\/,
it is difficult to determine the individual line-of-sight
integrals $J_n$.
However,
it is natural to suppose that the energy densities $\rho_n$ of the
individual $\phi_n$ within the galactic halo and within the halos of other
galaxies are proportional to their overall cosmological abundances.
In other words, we shall assume that
$\rho_n/\rho_{\mathrm{tot}} = \Omega_n/\Omega_{\mathrm{tot}}$,
where $\rho_{\mathrm{tot}} = \sum_{n=0}^N \rho_n$.
Under this assumption, we can then
define an overall $n$-independent ``$J$-factor'' which represents the {\it total}\/ energy density
integrated along the line of sight,
\begin{equation}
J ~\equiv~ \int d\Omega \int_{\textrm{LOS}}d\ell \times\begin{cases}
\begin{array}{ll}
\rho_{\rm tot}^2 & \textrm{ for annihilation} \\
\rho_{\rm tot} & \textrm{ for decay~,}
\end{array} \end{cases}
\label{Jdef}
\end{equation}
whereupon our resulting photon flux $\Phi_n$ takes the general form
\begin{equation}
\Phi_n ~=~ {\cal N}_n \, \frac{J}{4\pi} \, \frac{\Omega_n}{\Omega_{\mathrm{tot}}}\, \frac{\lambda_n}{m_n}~
\label{eq:PhintotExp}
\end{equation}
with
\begin{equation}
\lambda_n ~\equiv ~ \begin{cases} \displaystyle
\frac{\Omega_n}{\Omega_\textrm{tot}}
\frac{\langle \sigma_n v \rangle}{4m_n} & \textrm{ for annihilation} \\
\rule[0pt]{0pt}{14pt}\Gamma_n & \textrm{ for decay~.}
\end{cases}
\end{equation}
For simplicity, we assume that the cross-section for {\it co}\/-annihilation processes
of the form $\phi^\dagger_m\phi_n \rightarrow \gamma \pi^0$ with $m \neq n$
is negligible.
Given the result for the individual flux $\Phi_n$ in Eq.~(\ref{eq:PhintotExp}),
we can now derive the collective contribution to the {\it differential}\/ photon flux from the
annihilation/decay of the {\it entire}\/ DDM ensemble. Indeed, this is nothing but
the sum over the individual
contributions $d\Phi_n/dE_\gamma$ from each of the $\phi_n$:
\begin{eqnarray}
\frac{d\Phi}{dE_\gamma} ~=~ \sum_{n=0}^N \frac{d\Phi_n}{dE_\gamma} &=&
\sum_{n=0}^N \frac{J}{4\pi} \frac{\Omega_n}{\Omega_\textrm{tot}}
\frac{\lambda_n}{m_n} \frac{d\mathcal{N}_{n}}{dE_\gamma}\nonumber\\
&=& \frac{\Phi_0}{3} \, \sum_{n=0}^N \frac{\Omega_n}{\Omega_0}
\frac{m_0}{m_n}
\frac{\lambda_n}{\lambda_0}
\frac{d\mathcal{N}_{n}}{dE_\gamma}~,~~~~~~
\label{eq:DiffPhotonFlux}
\end{eqnarray}
where
\begin{equation}
\frac{d\mathcal{N}_{n}}{dE_\gamma} ~=~
\frac{d\mathcal{N}_{n}^{({p})}}{dE_\gamma}
+\frac{d\mathcal{N}_{n}^{({s})}}{dE_\gamma}~
\end{equation}
represents the differential number of photons
per unit $E_\gamma$ produced via a single annihilation/decay event involving
the constituent $\phi_n$.
Given the expression in Eq.~(\ref{eq:DiffPhotonFlux}), our next step
is to understand how $\Omega_n$, $\lambda_n$, and $m_n$ depend on $n$.
For an arbitrary collection of dark-sector species,
these quantities might not exhibit any regular behavior as functions of $n$.
In a DDM ensemble, however, the abundances, decay widths, and cross-sections of the
different components
all exhibit specific scaling relations
as functions of $m_n$
across the DDM ensemble.
Indeed, such scaling relations (whether exact or approximate)
tend to emerge naturally as a result of the various theoretical
structures underlying these ensembles.
Of course,
since a gamma-ray telescope is at best only capable of measuring the differential photon
flux $d\Phi/dE_\gamma$, we see
from Eq.~(\ref{eq:DiffPhotonFlux})
that such an instrument is not sensitive
to the individual scaling behaviors
of these different quantities; rather, it is only sensitive to
the scaling behavior of the particular combination $\Phi_n\propto \Omega_n \lambda_n/m_n$.
Accordingly,
for concreteness,
we shall assume that
the fluxes $\Phi_n$ scale with $m_n$ according to a single power law of the form
\begin{equation}
\Phi_n ~=~ \Phi_0 \left(\frac{m_n}{m_0}\right)^\xi ~=~
\Phi_0 \left(\frac{\sqrt{s_n}}{\sqrt{s_0}}\right)^\xi~
\label{eq:flux}
\end{equation}
where the masses/CM energies follow Eqs.~(\ref{mnspectrum}) and (\ref{snspectrum})
and where the exponent $\xi$ is taken to be a free parameter.
Indeed, this is tantamount to assuming that
\begin{equation}
\frac{\Omega_n}{\Omega_0}
\frac{\lambda_n}{\lambda_0} ~=~ \left( \frac{m_n}{m_0}\right)^{\xi+1}~=~
\left( \frac{\sqrt{s_n}}{\sqrt{s_0}}\right)^{\xi+1}~.
\end{equation}
As such, the exponent $\xi$ reflects the internal theoretical structure of
the DDM ensemble under study.
Note that this parametrization
is applicable to both annihilation and decay,
although in general we expect the actual value of $\xi$ for the case of annihilation
to differ from that for decay.
This parametrization allows us to recast
our expression for the differential photon flux in
Eq.~(\ref{eq:DiffPhotonFlux})
into the relatively simpler form
\begin{equation}
\frac{d\Phi}{dE_\gamma} ~=~ \frac{\Phi_0}{3} \sum_{n=0}^N
\left(\frac{\sqrt{s_n}}{\sqrt{s_0}}\right)^\xi
\frac{d\mathcal{N}_{n}}{dE_\gamma}~.
\label{eq:totalflux}
\end{equation}
Moreover, as discussed in the Introduction, we are primarily interested in
the regime for which $\Delta m \ll \Delta E_\gamma$ over the energy range
of interest, where $\Delta E_\gamma$ is the energy resolution of the detector.
Thus, since we expect $\Delta E_\gamma \lesssim E_\gamma \leq \sqrt{s_N}$, we
shall focus on the case in which $\Delta m \ll m_0$ and the sum over $n$
in Eq.~(\ref{eq:totalflux}) is well approximated by an integral over the
continuous variable $\sqrt{s}$.
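Indeed, for the uniformly spaced $\delta=1$ spectrum,
the sum over ensemble constituents becomes a Riemann sum: for any slowly varying
function $f$ we have
\begin{equation}
\sum_{n=0}^{N} f(\sqrt{s_n}) ~\approx~ \frac{1}{\Delta(\sqrt{s})}
\int_{\sqrt{s_0}}^{\sqrt{s_N}} d\sqrt{s}\, f(\sqrt{s})~.
\end{equation}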
We then obtain
\begin{equation}
\frac{d\Phi}{dE_\gamma} ~\approx~ \frac{\Phi_0}{3\Delta(\sqrt{s})}
\int_{\sqrt{s_0}}^{\sqrt{s_N}} d\sqrt{s}
\left(\frac{\sqrt{s}}{\sqrt{s_0}}\right)^\xi
\frac{d\mathcal{N}}{dE_\gamma}~,
\label{eq:masterflux}
\end{equation}
where $\Delta(\sqrt{s})$ is defined in Eq.~(\ref{snspectrum})
and where $d\mathcal{N}/dE_\gamma$ is the differential number of photons per
unit $E_\gamma$
resulting from an ensemble constituent annihilating or decaying with CM energy $\sqrt{s}$
into a $\gamma \pi^0 $ final state,
followed by a subsequent decay $\pi^0\to \gamma\gamma$.
Note that the integral in Eq.~(\ref{eq:masterflux}) continues to represent a sum
over ensemble constituents, with the contribution from any $\sqrt{s}$ representing
the contribution from that ensemble constituent which annihilates or decays with CM energy $\sqrt{s}$.
Proceeding further requires knowledge of
$d\mathcal{N}/dE_\gamma$. However, this quantity includes contributions from both primary and secondary photons,
and these two classes of photons
have very different kinematic features.
We shall therefore consider each of these classes separately.
As discussed in the Appendix,
the primary photons are all monochromatic, occupying a ``line'' with energy
\begin{equation}
E_{\rm line} ~=~ \frac{ s-{m_{\pi^0}^2}}{2 \sqrt{s}}~.
\label{Eline}
\end{equation}
There is also only one such photon per constituent decay/annihilation.
Thus the primary photon contribution to $d\mathcal{N}/dE_\gamma$
is simply
\begin{equation}
\frac{d\mathcal{N}^{(p)}}{dE_\gamma} ~=~ \delta(E_\gamma-E_{\rm line})~,
\end{equation}
whereupon the corresponding contribution to the flux in Eq.~(\ref{eq:masterflux}) is given by
\begin{eqnarray}
\frac{d\Phi^{(p)}}{dE_\gamma} &\approx& \frac{\Phi_0}{3\Delta(\sqrt{s})}\,
\left(\frac{\sqrt{s_*}}{\sqrt{s_0}}\right)^\xi
\frac{2s_\ast}{s_\ast+m_{\pi^0}^2} \nonumber \\
&& ~~~~\times \,
\Theta(\sqrt{s_\ast}-\sqrt{s_0})\,
\Theta(\sqrt{s_N} -\sqrt{s_\ast})~~~~~~~~~~
\label{eq:PhiP}
\end{eqnarray}
where $\Theta(x)$ is the Heaviside function and where
\begin{equation}
\sqrt{s_\ast} ~\equiv~ \sqrt{E_\gamma^2 + m_{\pi^0}^2} + E_\gamma~.
\label{eq:sqrtsn}
\end{equation}
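Indeed, Eq.~(\ref{eq:sqrtsn}) is nothing but the inversion of Eq.~(\ref{Eline}):
writing $2E_\gamma\sqrt{s_\ast} = s_\ast - m_{\pi^0}^2$ and solving the resulting
quadratic in $\sqrt{s_\ast}$ yields
\begin{equation}
\sqrt{s_\ast} ~=~ E_\gamma \pm \sqrt{E_\gamma^2+m_{\pi^0}^2}~,
\end{equation}
of which only the positive root is physical.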
Physically, this means that there is only one DM constituent whose decay or annihilation
contributes to the primary photon flux at any energy $E_\gamma$: this is the constituent
whose decay or annihilation
occurs with the CM energy $\sqrt{s_\ast}$ given in Eq.~(\ref{eq:sqrtsn}).
The secondary photons have a different kinematics, however.
As discussed in the Appendix, the secondary photons
emerging from an ensemble constituent decaying or annihilating with CM energy $\sqrt{s}$
have energies which uniformly populate a ``box''
whose lower and upper limits are respectively given by
\begin{equation}
E_{\rm box}^{-} ~=~ \frac{{m_{\pi^0}^2}}{2\sqrt{s}} ~,~~~~~~~
E_{\rm box}^{+} ~=~ \frac{\sqrt{s}}{2} ~.~
\end{equation}
Moreover, there are two secondary photons from each such event.
Thus, the normalized contribution from the secondary photons to the differential photon number per
unit $E_\gamma$
is given by
\begin{eqnarray}
\frac{d\mathcal{N}^{(s)}}{dE_\gamma}
&=& 2 \,
\frac
{\Theta(E_\gamma - E_{\rm box}^-) \,
\Theta(E_{\rm box}^+ - E_\gamma )}
{ E_{\rm box}^+ - E_{\rm box}^- } ~\nonumber\\
&=& \frac{4 \sqrt{s}}{s-{m_{\pi^0}^2}}\,
{\Theta(E_\gamma - E_{\rm box}^-) \,
\Theta(E_{\rm box}^+ - E_\gamma )}~,\nonumber\\
\label{twothetas}
\end{eqnarray}
whereupon the corresponding secondary photon flux becomes
\begin{equation}
\frac{d\Phi^{(s)}}{dE_\gamma} ~\approx~
\frac{4\Phi_0}{3\Delta(\sqrt{s})}\,
\int_{\sqrt{s_{\rm min}}}^{\sqrt{s_{N}}} d\sqrt{s}
\left(\frac{\sqrt{s}}{\sqrt{s_0}}\right)^\xi
\frac{\sqrt{s}}{s-{m_{\pi^0}^2}}
\label{secondint}
\end{equation}
with
\begin{equation}
\sqrt{s_{\rm min}} ~\equiv~ {\rm min} \left[ \sqrt{s_N}, \,
{\rm max}\left( \sqrt{s_0}, \, 2E_\gamma, \, \frac{{m_{\pi^0}^2}}{2E_\gamma}\right)\right]~.
\end{equation}
Indeed, for any given value of $E_\gamma$, the Heaviside
theta-functions in Eq.~(\ref{twothetas}) restrict the
values of $\sqrt{s}$ which contribute in Eq.~(\ref{secondint})
to those
which are compatible not only with our original
constraints $\sqrt{s_0}\leq \sqrt{s}\leq \sqrt{s_N}$ but also
with the simultaneous constraints
$E_\gamma < E_{\rm box}^+$ (which requires $\sqrt{s} > 2E_\gamma$)
and $E_\gamma > E_{\rm box}^-$ (which requires $\sqrt{s} > {m_{\pi^0}^2}/2E_\gamma$).
The result in Eq.~(\ref{secondint})
can then be integrated in closed form, yielding
\begin{eqnarray}
\frac{d\Phi^{(s)}}{dE_\gamma}
&\approx& \frac{2\Phi_0}{3\Delta(\sqrt{s})}
\left( \frac{{m_{\pi^0}}}{\sqrt{s_0}} \right)^\xi \, \times\nonumber\\
&& ~~~ \left\lbrack
B_{z_1} (-\xi/2,0)
-B_{z_2} (-\xi/2,0)
\right\rbrack~~~~~
\label{eq:PhiS}
\end{eqnarray}
where $B_z(a,b)$ is the incomplete Euler beta-function,
with $z_1\equiv {m_{\pi^0}^2}/s_{\rm min}$ and $z_2\equiv {m_{\pi^0}^2}/s_N$.
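As a numerical cross-check of Eqs.~(\ref{eq:PhiP}) and~(\ref{eq:PhiS}),
both contributions can also be evaluated by direct quadrature.
The following minimal Python sketch (a pedagogical illustration only;
the function names and default arguments are ours and not part of any
standard package) implements the two fluxes for arbitrary
$(\sqrt{s_0},\sqrt{s_N},\xi)$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

M_PI0 = 134.977   # neutral-pion mass in MeV

def dphi_primary(E, rs0, rsN, xi, Phi0=1.0, d_rs=2.0):
    # Eq. (eq:PhiP): only the constituent with CM energy
    # sqrt(s_*) = E + sqrt(E^2 + m^2) contributes at energy E
    rs_star = E + np.hypot(E, M_PI0)
    if not (rs0 <= rs_star <= rsN):
        return 0.0
    s_star = rs_star**2
    return (Phi0/(3.0*d_rs))*(rs_star/rs0)**xi \
        * 2.0*s_star/(s_star + M_PI0**2)

def dphi_secondary(E, rs0, rsN, xi, Phi0=1.0, d_rs=2.0):
    # Eq. (secondint), integrated numerically over sqrt(s);
    # assumes rs0 > M_PI0 so that the integrand remains finite
    rs_min = min(rsN, max(rs0, 2.0*E, M_PI0**2/(2.0*E)))
    if rs_min >= rsN:
        return 0.0
    f = lambda rs: (rs/rs0)**xi * rs/(rs**2 - M_PI0**2)
    val, _ = quad(f, rs_min, rsN)
    return 4.0*Phi0/(3.0*d_rs)*val
\end{verbatim}
Evaluating the $\sqrt{s}$ integral numerically in this way sidesteps the
need for incomplete beta-functions with negative first argument.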
In summary, the overall signal contribution to the differential photon flux in
DDM scenarios of this sort is the sum of the two individual contributions from
primary and secondary photons given in Eqs.~(\ref{eq:PhiP}) and~(\ref{eq:PhiS}),
respectively.
\subsection{Differential photon flux: Qualitative features\label{sec:case-studies}}
The spectral feature associated with primary photons, which is described by
Eq.~(\ref{eq:PhiP}), extends between
$E_{\rm line}(\sqrt{s_0})$ and $E_{\rm line}(\sqrt{s_N})$.
The shape
of this feature is in large part dictated by the value of the index $\xi$.
However, since we are focusing on ensembles in which the CM energy
for the annihilation/decay of each of the $\phi_n$ falls
within the range $m_{\pi^0} < \sqrt{s_n} < 2m_{\pi^\pm}$, this feature typically
appears reasonably flat (unless the value of $\xi$ is extreme) and exhibits
a sharp cutoff at $E_\gamma = E_{\rm line}(\sqrt{s_N})$.
By contrast, the spectral feature associated with secondary photons, which is
described by Eq.~(\ref{eq:PhiS}), has a markedly different shape.
As discussed above, the individual contribution to $d\Phi^{(\mathrm{s})}/dE_\gamma$
from each $\phi_n$ consists of a flat, box-like feature centered at $E_\gamma = m_{\pi^0}/2$ on a logarithmic scale. Thus, the total contribution to the secondary photon
flux consists of a ``tower'' of such boxes centered at this same value of
$E_\gamma$. Since the width of each box is given by $(s_n-{m_{\pi^0}^2})/2\sqrt{s_n}$,
the narrowest box is associated with the lightest ensemble constituent participating
in the relevant annihilation/decay process, and has a
width $(s_0-m_{\pi^0}^2)/2\sqrt{s_0}$ if $\phi_0$ is indeed that constituent.
This implies that in cases in
which $\sqrt{s_0} \approx m_{\pi^0}$, a sharp peak or spike appears at the center of
the tower~\cite{Stecker,Agashe:2012bn}. By contrast, in cases in which the difference between $\sqrt{s_0}$
and $m_{\pi^0}$ is larger --- even by a few MeV --- the top of
the tower appears flat and forms a plateau~\cite{Stecker,Agashe:2012bn,Kim:2015gka,Chen:2014oha}.
We thus have
\begin{equation}
\begin{cases}
\sqrt{s_0} \approx {m_{\pi^0}} & {\rm spike} \\
\sqrt{s_0} > {m_{\pi^0}} & {\rm plateau}~.
\end{cases}
\label{peakcases}
\end{equation}
Another important consideration is whether and to what extent the spectral features
associated with primary and secondary photons in this scenario overlap. Indeed,
as we shall see in Sect.~\ref{sec:measurement}, the degree of overlap between these
spectral features determines the fitting procedure which must be used in extracting
information about the fundamental parameters governing the DDM ensemble.
In particular, in cases in which the two features are well separated, a parametric
fit can be performed for each in isolation. By contrast, in cases
in which the overlap between the two features is significant, a single fit must be
performed on the combined spectrum in order to extract the underlying
parameters governing the DDM ensemble. In either case, however, we shall find
that it is often possible to measure most of the underlying
parameters which characterize the DDM ensemble with reasonable precision.
In order to assess the degree of overlap between the primary and secondary photon
spectra for any particular choice of parameters, we compare the maximum possible
energy for a primary photon to the minimum possible energy for a secondary photon.
The former is given by
$E_{\rm line}(\sqrt{s_N})$ while
the latter is given by
$E_{\rm box}^-(\sqrt{s_N})$.
The spectral features associated with the primary and secondary photons
will thus overlap only if
$E_{\rm box}^-(\sqrt{s_N}) < E_{\rm line}(\sqrt{s_N})$,
or equivalently if
$\sqrt{s_N} > \sqrt{2}{m_{\pi^0}}$.
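Explicitly,
\begin{equation}
\frac{m_{\pi^0}^2}{2\sqrt{s_N}} ~<~ \frac{s_N-m_{\pi^0}^2}{2\sqrt{s_N}}
~~\Longleftrightarrow~~ s_N ~>~ 2m_{\pi^0}^2~.
\end{equation}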
We thus have
\begin{equation}
\begin{cases}
\sqrt{s_N} < \sqrt{2} {m_{\pi^0}} & {\rm no~overlap} \\
\sqrt{s_N} > \sqrt{2} {m_{\pi^0}} & {\rm overlap}~.
\end{cases}
\label{overlapcases}
\end{equation}
\begin{table*}[t]
\centering
\begin{tabular}{||c||c|c|c|c|c||}
\hline
\hline
~Benchmark~ & ~$
\rule[-5pt]{0pt}{16pt}
\sqrt{s_0}$~(MeV)~ & ~$\sqrt{s_N}$~(MeV)~ & ~$N$~ &
~Behavior at \mbox{$E_\gamma=m_{\pi^0}/2$} ~ & ~Spectral overlap~ \\
\hline
A & 135 & 181 & 23 & spike & negligible \\
B & 135 & 231 & ~~48~~ & spike & significant \\
C & 164 & 180 & 8 & plateau & negligible \\
D & 164 & 230 & 33 & plateau & significant \\
\hline\hline
\end{tabular}
\caption{Four benchmark DDM ensembles --- each corresponding to a different
choice of the parameters $\sqrt{s_0}$ and $\sqrt{s_N}$ --- which illustrate the range of spectral
signatures which arise in this scenario. For each of these benchmarks, we have taken
$\Delta (\sqrt{s}) = 2$~MeV.~ The resulting features (spike versus plateau at $E_\gamma = {m_{\pi^0}}/2$
and the degree of spectral overlap) are governed by the criteria in Eqs.~(\ref{peakcases}) and (\ref{overlapcases}).
\label{tab:case}}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.4\textwidth, keepaspectratio]{case1_gamma1_new}\hspace{1.2cm}
\includegraphics[width=0.4\textwidth, keepaspectratio]{case2_gamma1_new}\\ \vskip 0.4cm
\includegraphics[width=0.4\textwidth, keepaspectratio]{case3_gamma1_new}\hspace{1.2cm}
\includegraphics[width=0.4\textwidth, keepaspectratio]{case4_gamma1_new}
\caption{The differential photon energy spectra associated
with the four benchmark parameter choices $A$ through $D$ defined in Table~\protect\ref{tab:case},
where we have taken $\xi=1$.
The black curve in each panel
represents the analytic result obtained by superposing the contributions to
the photon spectrum given in Eqs.~(\ref{eq:PhiP}) and~(\ref{eq:PhiS}), while
the blue histogram represents the results of a simulated data set smeared according
to the Gaussian smearing function in Eq.~(\protect\ref{eq:smearing}).}
\label{fig:thcurve}
\end{figure*}
In order to illustrate the range of different combinations of spectral shapes which
can arise in scenarios of this sort, we introduce a set of benchmark
parameter choices which exemplify four qualitatively different kinds of spectra.
The values of $\sqrt{s_0}$ and $\sqrt{s_N}$ for these benchmarks
are given in Table~\ref{tab:case}.~
For each benchmark we have taken
$\Delta \sqrt{s}=2$~MeV;
in this connection we recall that ${m_{\pi^0}}\approx 135$~MeV,
whereupon $\sqrt{2}{m_{\pi^0}}\approx 191$~MeV.~
Note that when discussing fluxes,
we shall describe our DDM ensembles in terms of the CM energies $\sqrt{s_n}$
characterizing the annihilations/decays of their constituents
rather than in terms of their corresponding masses $m_n$.
We do this in recognition of the fact that under
the assumption given in Eq.~(\ref{eq:flux}),
the photon fluxes
that result from such annihilations or decays
depend on these CM energies rather than on the underlying masses.
In particular, by describing our ensembles in terms of CM energies rather than masses,
we retain maximal generality and need not specify whether our ensemble constituents
are annihilating or decaying.
Indeed, this information cannot be gleaned from photon fluxes
alone, and it is only in mapping our CM energies $\sqrt{s_n}$ back to underlying masses $m_n$
that this information would be required.
The gamma-ray spectra corresponding to the benchmarks in Table~\ref{tab:case}
are displayed in Fig.~\ref{fig:thcurve}, where we have further assumed $\xi=1$.
Note that these plots include the contributions from both primary and secondary photons.
The spectra
shown in the figure have been normalized so that they all share
a common total flux when integrated over all energies $E_\gamma$.
The black curve in each
panel represents the spectrum obtained by superposing the analytic expressions
given in Eqs.~(\ref{eq:PhiP}) and~(\ref{eq:PhiS}). By contrast, the blue histogram
represents the results of a Monte-Carlo simulation of the corresponding
gamma-ray spectrum as it might be observed by a physical detector. We account
for the non-zero energy resolution of the detector by smearing the initial
photon energies obtained in the simulation with a Gaussian smearing function.
In particular, we take the probability $R_{\epsilon}$ for the detector to register
an energy $E_\gamma$, given an actual incoming photon energy $E_\gamma'$, to be
\begin{equation}
R_{\epsilon}(E_\gamma-E'_\gamma) ~=~ \frac{1}{\sqrt{2\pi}\epsilon E'_\gamma}
\exp\left[-\frac{(E_\gamma-E'_\gamma)^2}{2(\epsilon E'_\gamma)^2} \right]~,
\label{eq:smearing}
\end{equation}
where $\epsilon$ is a dimensionless parameter which sets the overall scale
of the $E'_\gamma$-dependent standard deviation
$\sigma_E(E_\gamma') = \epsilon {E_\gamma'}$ of the Gaussian. The results
in Fig.~\ref{fig:thcurve} correspond to a 1\% Gaussian smearing --- {\it i.e.}\/, to
the choice $\epsilon =0.01$.
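In practice, this smearing amounts to drawing each detected energy from a
Gaussian centered on the true photon energy.
A minimal sketch of this step (ours, purely for illustration) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=0)

def smear(E_true, eps=0.01):
    # Gaussian smearing of Eq. (eq:smearing): each true photon
    # energy E' is smeared with standard deviation eps * E'
    E_true = np.asarray(E_true, dtype=float)
    return rng.normal(loc=E_true, scale=eps*E_true)
\end{verbatim}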
Benchmark~A (top left panel of Fig.~\ref{fig:thcurve}) is representative of the regime
in which $\sqrt{s_0} \approx m_{\pi^0}$ and $\sqrt{s_N} < \sqrt{2}{m_{\pi^0}}$. In
this regime, there is no overlap between the features associated with the
contributions from primary and secondary photons, while the feature associated
with the secondary photons appears as a spike or peak rather than a plateau.
By contrast, Benchmark~B (top right panel) is representative of the regime in which
$\sqrt{s_0} \approx m_{\pi^0}$ and $\sqrt{s_N} > \sqrt{2}{m_{\pi^0}}$:
the feature associated with the secondary photons likewise appears as a spike, but
there is a significant overlap between this feature and the feature
associated with the primary photons. Benchmark~C (bottom left panel) is
representative of the regime in which $\sqrt{s_0}$ is significantly larger than
$m_{\pi^0}$ and $\sqrt{s_N} < \sqrt{2}{m_{\pi^0}}$: in this regime the features associated
with primary and secondary photons do not overlap, but the feature from
secondary photons exhibits a plateau rather than a spike. Finally, Benchmark~D
(bottom right panel) is representative of the regime in which $\sqrt{s_0}$ is
significantly larger than $m_{\pi^0}$ and $\sqrt{s_N} > \sqrt{2} {m_{\pi^0}}$: in this
regime the feature associated with the secondary photons likewise appears as a
plateau, but there is a significant overlap between this feature and the
feature associated with the primary photons.
\section{Discovery Reach of Future Experiments\label{sec:prospects}}
We now turn to examine the projected sensitivity of future gamma-ray experiments
to DDM ensembles which annihilate/decay primarily to $\gamma\pi^0$, followed by a subsequent
decay $\pi^0\to \gamma\gamma$. Indeed, a
variety of proposals have recently been advanced for experiments that would significantly improve
the sensitivity to photon signals in the relevant energy range.
These include the Advanced Compton Telescope (ACT)~\cite{ACT}, the Advanced Pair
Telescope (APT)~\cite{APT}, the Gamma-Ray Imaging, Polarimetry and Spectroscopy
(GRIPS) detector~\cite{GRIPS}, the Advanced Energetic Pair Telescope (AdEPT)~\cite{AdEPT},
the Pair-Production Gamma-Ray Unit (PANGU)~\cite{PANGU}, the Compton Spectrometer
and Imager (COSI)~\cite{COSI}, and the ASTROGAM detector~\cite{ASTROGAM}.
In our analysis, for concreteness, we consider a hypothetical space-based
detector with attributes similar to those of ASTROGAM.~ In particular, we
assume that our detector is sensitive in the energy range
$0.3~\text{MeV} \lesssim E_\gamma \lesssim 3000~\text{MeV}$, and we account for the
energy resolution of the detector using a Gaussian
smearing function of the form given in Eq.~\eqref{eq:smearing}. For simplicity,
we take the energy resolution to be 1\% ({\it i.e.}\/, we take $\epsilon =0.01$) and we take
the effective area to be $500~\mathrm{cm^2}$ throughout this entire $E_\gamma$
range.
These assumptions represent optimistic projections from the ASTROGAM design
specifications, and the actual detector response will be different.
In particular, since ASTROGAM will utilize two detector technologies
in order to cover different portions of this same $E_\gamma$ range, its energy
resolution and effective area will depend non-trivially on $E_\gamma$.
Our goal is to assess the discovery reach of our hypothetical detector
as a function of the parameters governing our underlying DDM model.
We shall assess this discovery reach as follows.
First, we define the quantity
\begin{equation}
\frac{\tilde \Phi}{J} ~=~ \frac{4\pi}{3J} \sum_n \Phi_n ~=~
\sum_n \frac{\Omega_n}{\Omega_{\rm tot}}
\frac{\lambda_n}{m_n} ~\equiv ~ \langle \lambda/m \rangle
\end{equation}
where $\tilde \Phi \equiv (4\pi/3)\Phi$ is the normalized total flux that we would
expect to see from a given DDM model.
In some sense
this quantity represents the ``particle-physics'' contribution to the total flux, with the
astrophysical factor $J$ divided out.
In order to assess the reach of our hypothetical detector, we therefore seek the
critical (minimal) value of $\tilde \Phi/J$
for which an excess might become apparent
after one year of continuous observation.
Or, phrased conversely, we
seek to determine the maximum value of $\tilde\Phi/J$
for which {\it no}\/ appreciable signal can be resolved after one year of continuous
observation.
If this maximum value of $\tilde \Phi/J$ is relatively small for a given
set of underlying DDM parameters,
our telescope is extremely sensitive to the corresponding DDM photon flux
and our discovery reach is enhanced.
By contrast, if this maximum value of $\tilde \Phi/J$ is relatively large,
the corresponding discovery reach of our hypothetical telescope is suppressed.
In our analysis, we shall consider two different regions of interest
on the sky
which correspond to two of the most promising search strategies for gamma-ray
signals of dark-matter annihilation/decay: searches in dwarf spheroidal galaxies
and searches in the diffuse galactic gamma-ray spectrum. We do not consider
signals from the Galactic Center, as the astrophysical backgrounds in this
region are not well understood and systematic uncertainties are therefore
expected to be large.
\subsection{Dwarf-spheroidal search\label{sec:DwarfSearch}}
Dwarf spheroidal galaxies provide a particularly auspicious environment
in which to search for signals of annihilating/decaying dark matter.
Observations of stellar kinematics within these galaxies suggest that they
are highly dark-matter dominated~\cite{MateoDwarfs,McConnachieDwarfs}.
In addition, since the solid angle on the
sky subtended by many of these galaxies is small, reasonably reliable empirical
estimates of the astrophysical foregrounds and backgrounds can be obtained from
measurements of the differential gamma-ray flux in the surrounding region.
Moreover, since most known dwarf spheroidals lie at significant distances from
the galactic plane of the Milky Way, these astrophysical foregrounds are small.
For concreteness, we focus our analysis on one particular dwarf galaxy,
Draco, which subtends a solid angle of approximately $1.6\times 10^{-3}$~sr
on the sky. For a region of interest defined by this solid angle,
an empirical reconstruction of the dark-matter halo profile of
this galaxy from stellar-kinematic data~\cite{GeringerSameth:2014yza} yields
a $J$-factor
$\log_{10} (J / \mathrm{GeV}^2\mathrm{cm}^{-5}) = 19.05^{+0.22}_{-0.21}$
for annihilation and
$\log_{10} (J / \mathrm{GeV}\mathrm{cm}^{-2} ) = 18.97^{+0.17}_{-0.24}$
for decay. For simplicity, we assume that the main source of
foreground/background photons is diffuse emission and assume that
contributions from nearby extragalactic sources are negligible.
We model the diffuse
contribution to the differential gamma-ray flux using a single power law,
with a normalization coefficient and scaling index derived from a
fit to COMPTEL~\cite{Weidenspointner:1999thesis} and
EGRET~\cite{Strong:2004ry} data:
\begin{equation}
\frac{d^2\Phi_b}{dE\, d\Omega} ~=~ 2.74 \times 10^{-3}
\left(\frac{E}{\mathrm{MeV}}\right)^{-2.0}
\mathrm{cm}^{-2} \mathrm{s}^{-1} \mathrm{sr}^{-1} \mathrm{MeV}^{-1}.
\label{eq:diffuse-bkg}
\end{equation}
In general, the DDM discovery reach of our hypothetical detector
depends on the underlying DDM parameters $\sqrt{s_0}$, $\sqrt{s_N}$, and $\xi$.
(As usual, we are assuming $\delta=1$ and $\Delta \sqrt{s} = 2$~MeV.)~
For each choice of parameters,
our results in Eqs.~(\ref{eq:PhiP}) and~(\ref{eq:PhiS})
make a prediction concerning the signal differential fluxes
$d\Phi^{(s,p)}/dE_\gamma$
of primary and secondary photons,
respectively. In particular, for any given values of
$(\sqrt{s_0},\sqrt{s_N})$, these signal fluxes
stretch over only a finite range of energies $E_\gamma$.
Thus, for any given $(\sqrt{s_0},\sqrt{s_N})$, we shall restrict our
analysis to those energy bins lying within this range.
The choice of
$(\sqrt{s_0},\sqrt{s_N},\xi)$ determines the overall {\it shape}\/
of the signal differential flux as a function of photon energy $E_\gamma$,
while the overall magnitude of this differential flux is determined by the
normalization $\Phi_0$.
Thus, for any given choice of
$(\sqrt{s_0},\sqrt{s_N},\xi)$,
we then seek to find the critical (minimal) value of $\Phi_0$
for which an excess signal just becomes observable.
Equivalently, we seek the largest value of $\Phi_0$ for which {\it no}\/ signal can be
discerned.
This largest value of $\Phi_0$ then leads to a corresponding largest value of $\tilde\Phi/J$,
where the numerical value of $J$ is given above.
In general, there are two different paths we might follow in order to determine
this critical value of $\Phi_0$.
One possible procedure is to find the critical value of
$\Phi_0$
for which an excess {\it in any single bin}\/ just becomes observable
(or equivalently, the largest value of $\Phi_0$ for which no signal can be
discerned {\it in any single bin}\/).
Within each bin, observability would be assessed as follows.
In general, the expected number of events within a given bin
includes a signal contribution from DDM annihilation/decay
within the halo of Draco as well as a background contribution given
by Eq.~\eqref{eq:diffuse-bkg}.
We would then seek the maximum value of $\Phi_0$ for which this observed
number of events in every bin is consistent
with the contribution from background alone within 95\%~C.L., assuming
Poisson statistics.
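One simple numerical implementation of this single-bin criterion is a
one-sided Poisson upper limit, as in the following sketch (an illustration
under the stated assumptions; the bisection bounds and tolerance are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def max_signal(n_bkg, cl=0.95):
    # Largest expected signal s for which an observation at the
    # background-only level remains consistent with mean n_bkg + s
    # at confidence level cl, assuming Poisson statistics
    n_obs = np.floor(n_bkg)
    s_lo, s_hi = 0.0, 10.0*np.sqrt(n_bkg) + 10.0
    for _ in range(60):   # bisect the monotonically falling CDF
        s = 0.5*(s_lo + s_hi)
        if poisson.cdf(n_obs, n_bkg + s) >= 1.0 - cl:
            s_lo = s
        else:
            s_hi = s
    return 0.5*(s_lo + s_hi)
\end{verbatim}
The critical value of $\Phi_0$ would then be the largest value for which the
predicted signal count in every bin lies below the corresponding output of
this function.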
The above procedure describes a ``binned'' approach to determining the critical value
of $\tilde \Phi/J$ which is sensitive
to the overall {\it shape}\/ of the differential flux --- {\it i.e.}\/,
an approach which is based on an analysis of the
counts within individual energy bins.
However, an alternative path is to simply focus instead on the total integrated flux across all energy bins, and
to determine the critical value of $\tilde \Phi/J$ for which
this integrated flux exceeds the integrated contribution from background
alone within 95\%~C.L., assuming
Poisson statistics.
In order to assess the greatest possible discovery reach, we shall employ
whichever method (binned or integrated)
yields the smaller value of $\tilde\Phi/J$.
It turns out that for $\sqrt{s_0}\approx {m_{\pi^0}}$ the binned analysis
yields the greater discovery reach: in this regime the primary photon spectrum
extends down to very low photon energies where the diffuse background is quite
large, and incorporating these high-background bins into a total (integrated)
counting analysis significantly weakens the estimate of the discovery reach.
For larger values of $\sqrt{s_0}$, by contrast, an analysis based on the total
integrated flux is superior.
\subsection{Diffuse-background search\label{sec:DiffuseSearch}}
The total diffuse gamma-ray background consists of a contribution from unresolved
astrophysical sources as well as both a galactic and an extragalactic contribution
from dark-matter annihilation/decay. The extragalactic dark-matter contribution
is assumed to be isotropic, while the galactic contribution depends (through the
$J$-factor) on the dark-matter halo profile of the Milky Way.
However, this latter contribution is not particularly
sensitive to the form of the inner halo profile in situations in which the region of interest
includes only areas of the sky far from the Galactic Center. Moreover, the
diffuse extragalactic contribution to the photon flux from any particular
location on the sky is typically subleading in comparison with the diffuse
galactic contribution, except for cases in which that location lies near either of
the galactic poles (where the latter contribution is presumably at its minimum).
Accordingly, we adopt as our region of interest the region in which the galactic
latitude $b$ lies within the range $20^\circ < |b| < 60^\circ$.
In the following, we calculate the $J$-factors from their differential forms for an NFW profile,
for which numerical evaluations are given in Ref.~\cite{Jfactor882360}.
Disentangling the dark-matter contribution to the diffuse gamma-ray flux
from the astrophysical background requires detailed knowledge of that
background. However, the astrophysical contribution to the diffuse gamma-ray
flux is not well measured or understood. Given this uncertainty,
we evaluate the discovery reach for this diffuse search using two different
methods. The first of these involves no assumptions about the astrophysical
background and yields a more conservative estimate of the discovery reach,
while the second assumes a particular functional form for the background
and thereby yields a more optimistic estimate.
In deriving our more conservative estimate of the discovery reach, we compare the gamma-ray flux
spectrum observed by our hypothetical detector to the expected signal contribution
from dark-matter annihilation/decay alone. More specifically, we compare the
number of events observed in each energy bin to the corresponding number of
expected events, given a particular choice of DDM model parameters. Under the
assumption that the observed number of events in each bin is given by the
background spectrum in Eq.~\eqref{eq:diffuse-bkg}, we derive an upper limit
on $\tilde \Phi/J$ for which this observed number of events
$\mathcal{N}_i^{\mathrm{obs}}$ in each bin is consistent with the theoretical expectation
$\mathcal{N}_i^{\mathrm{exp}}$ to within $2\sigma_i$, where the index $i$ labels the bin
and where $\sigma_i$ denotes the corresponding uncertainty. In particular, $\sigma_i$
is dominated by systematic uncertainty in the expression for the differential
flux in Eq.~\eqref{eq:diffuse-bkg}, which we take to be 15\% of the flux itself.
In deriving our more optimistic estimate of the discovery reach, we follow
a procedure which is similar to that followed for the dwarf-spheroidal search. However,
rather than neglecting the background contribution to the expected
number of events in each bin, in this case we assume that this background contribution is
given by Eq.~\eqref{eq:diffuse-bkg}. Once again, we derive an upper limit
on $\tilde \Phi/J$ by assuming that the observed
number of events in each bin is likewise given by the background spectrum
in Eq.~\eqref{eq:diffuse-bkg} and requiring consistency between
$\mathcal{N}_i^{\mathrm{obs}}$ and $\mathcal{N}_i^{\mathrm{exp}}$ to within $2\sigma_i$ in each bin.
\subsection{Results}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth, keepaspectratio]{exclusion-sqrtSN}
\caption{The projected discovery reach for a representative next-generation
MeV-range gamma-ray telescope, plotted as functions of $\sqrt{s_N}$ for
different values of $\sqrt{s_0}$ and $\xi$, with $\Delta(\sqrt{s}) = 2$~MeV.~
The results are shown as an upper limit on the quantity $\tilde \Phi/J$ for
which a statistically significant signal is not observed within one year of
continuous observation.
Panels in the top, middle, and bottom rows correspond to $\xi= -1,0,+1$, respectively,
while those in the left and right columns correspond respectively to annihilating and decaying
dark-matter scenarios.
Within each panel, four benchmark choices of $\sqrt{s_0}$ are shown: $\sqrt{s_0}=135~\text{MeV}$ (red curves),
$149~\text{MeV}$ (green curves), $\sqrt{s_0}=191~\text{MeV}$ (blue curves), and
$\sqrt{s_0}= 230~\text{MeV}$ (orange curves).
In each case we then show results for $\sqrt{s_N}$
within the range $\sqrt{s_0}+ 10\Delta\sqrt{s} \leq \sqrt{s_N} \leq 2 m_{\pi^\pm}$.
The solid bands shown in each panel
correspond to the results of the dwarf-spheroidal search, as outlined in
Sect.~\protect\ref{sec:DwarfSearch},
with the results for $\sqrt{s_0}=135$~MeV obtained
through a binned approach and the others obtained through
an approach based on the total integrated flux.
The width of each band reflects a
$1\sigma$ uncertainty in the $J$-factor for the dwarf. The dashed and
dot-dashed lines correspond to the results of a diffuse-background search
using the optimistic and conservative analysis methods outlined in
Sect.~\protect\ref{sec:DiffuseSearch}, respectively.
\label{fig:exclusion-sqrtSN}}
\end{figure*}
The discovery reaches for both the dwarf-spheroidal search and the diffuse-background
search are shown in Fig.~\ref{fig:exclusion-sqrtSN}. In this
figure, the bounds on $\tilde \Phi/J$ from each search are shown as a
function of the parameter $\sqrt{s_N}$ for the four different reference values of
$\sqrt{s_0}$ labelled within each panel, with
$\Delta\sqrt{s} = 2$~MeV
and $\sqrt{s_0}+ 10\Delta\sqrt{s} \leq \sqrt{s_N} \leq 2 m_{\pi^\pm}$.
This lower bound on $\sqrt{s_N}$ ensures that we are including the contributions
of at least 10 ensemble constituents $\phi_n$
in addition to $\phi_0$ for each chosen value of $\sqrt{s_0}$, while
the upper bound
ensures that we do not exceed the threshold
$2m_{\pi^\pm}$ for charged-pion pair-production, beyond which
additional flux contributions must be included.
Results for $\xi= -1, 0, +1$ are shown along the top, middle, and bottom rows of Fig.~\ref{fig:exclusion-sqrtSN},
while the panels within the left and right columns of Fig.~\ref{fig:exclusion-sqrtSN} show the results for
annihilating and decaying dark-matter scenarios respectively.
The solid colored bands indicate the results of the dwarf-spheroidal search, with
the width of each band
reflecting a $1\sigma$ uncertainty in the $J$-factor for the dwarf. By contrast,
the dashed and dot-dashed lines correspond to the results of a diffuse-background
search using the optimistic and conservative analysis methods outlined in
Sect.~\protect\ref{sec:DiffuseSearch}, respectively.
For the dwarf-spheroidal search,
the results shown in Fig.~\ref{fig:exclusion-sqrtSN} indicate that the
discovery reach for our hypothetical telescope
tends to be relatively insensitive to $\sqrt{s_N}$ for large $\sqrt{s_0}$,
but more sensitive to $\sqrt{s_N}$ for smaller $\sqrt{s_0}$.
When scanned over possible values of $\sqrt{s_0}$, however,
the discovery reach tends to be relatively insensitive to $\sqrt{s_N}$:
cases with large $\sqrt{s_0}$
provide the greatest reach
when $\sqrt{s_N}$ is large but
cases with smaller $\sqrt{s_0}$ provide the greatest reach when $\sqrt{s_N}$ is smaller.
It is also noteworthy that when $\sqrt{s_0}\approx {m_{\pi^0}}$,
it is the binned analysis which provides the greater discovery reach;
the opposite is true when $\sqrt{s_0}$ is larger.
This behavior can be understood as follows.
When $\sqrt{s_0} \approx m_{\pi^0}$, the primary photon spectrum extends down to
very low gamma-ray energies where the diffuse background is quite large.
Incorporating these high-background bins into an analysis based on total counts
then significantly weakens the estimate of the discovery reach.
However, this feature does not affect the individually-binned analysis where the
effects from such low-energy bins no longer dominate the analysis.
Thus, in such cases, the results of the binned analysis
are stronger than those of an integrated analysis.
Indeed, this is ultimately why the overall discovery reach
for small $(\sqrt{s_0},\sqrt{s_N})$ remains competitive
with that for larger values
of $(\sqrt{s_0},\sqrt{s_N})$, as shown in Fig.~\ref{fig:exclusion-sqrtSN}.
Moreover, we see from Fig.~\ref{fig:exclusion-sqrtSN}
that this remains true for all of the values of $\xi$ surveyed.
For the diffuse-background search, by contrast,
the discovery reach depends
more strikingly on both $\sqrt{s_0}$ and $\sqrt{s_N}$.
Moreover, the reach is sensitive to
the spectral shapes of both the primary and secondary photon contributions to
the gamma-ray spectrum. Overall, the secondary photon contribution has a
more significant impact on the discovery potential. The reason is that
in the regime in which
$\sqrt{s_N}$ is reasonably small and
$\sqrt{s_0} \approx m_{\pi^0}$,
the secondary photon spectrum is sharply peaked around
$E_\gamma = m_{\pi^0}/2$. As a result, the potential for observing an excess
in the corresponding energy bin has a profound positive effect on the overall
discovery reach. Indeed, it is evident from Fig.~\ref{fig:exclusion-sqrtSN}
that the reach is greatest in the regime in which
$\sqrt{s_N}$ is relatively small and
$\sqrt{s_0} \approx m_{\pi^0}$.
As $\sqrt{s_0}$ increases away from $m_{\pi^0}$ and the peak
becomes a plateau, the potential for observing an excess in this bin decreases.
Increasing $\sqrt{s_N}$ for fixed $\sqrt{s_0}$ has the effect of broadening
the secondary photon spectrum. On the one hand, this broadening reduces the
significance of the peak at $E_\gamma = m_{\pi^0}/2$; on the other hand,
it also extends the upper edge of the secondary photon spectrum to higher
$E_\gamma$, where the astrophysical background is smaller and a signal is
more readily observable. As a result of the interplay between these two
effects, the discovery reach initially falls with increasing $\sqrt{s_N}$ because the
energy bin corresponding to the peak provides the best prospects for observing
an excess in signal events. However, as $\sqrt{s_N}$ increases further and the
higher-energy bins become the most relevant for observing an excess, the
discovery potential stabilizes.
While the role played by the primary photon spectrum
in determining the discovery reach for the diffuse-background search
is less pronounced than that played
by the secondary photon spectrum,
the primary photon spectrum still has a demonstrable effect on the
discovery reach. In particular, as $\sqrt{s_0}$ increases, the primary photon
spectrum is shifted to higher values of $E_\gamma$ where astrophysical
backgrounds are small. For sufficiently large $\sqrt{s_0}$, this effect
more than compensates for the corresponding broadening of the secondary photon
spectrum and yields an overall increase in the discovery reach.
Comparing the cases of dark-matter annihilation and decay, we see that
the dwarf search has an order-of-magnitude greater discovery reach than
the diffuse search for annihilation, while both searches have comparable
discovery reaches for decay. Since the $J$-factor in Eq.~(\ref{Jdef})
depends on $\rho^2$ for annihilation, we expect the dense environment of the dwarf to be a more
advantageous system in which to search for annihilating dark matter
than the diffuse background. For decay, however, the $J$-factor involves only
a single power of $\rho$, and thus the dwarf search does not possess the
same upper hand as it has for annihilation.
\section{Extraction of Dark-Sector Parameters\label{sec:measurement}}
As discussed in the Introduction, our primary motivation for studying
DDM ensembles whose constituents annihilate/decay primarily into a $\gamma\pi^0$
final state, followed by the subsequent decay $\pi^0\to \gamma\gamma$,
is that the shapes of the spectral features associated with
primary and secondary photons are correlated. A comparison between the information
extracted from these two features can therefore provide a powerful consistency check
on the DDM interpretation of such a gamma-ray excess.
However, we are not merely
interested in the prospects for {\it observing}\/ a signal of a DDM ensemble with this
annihilation/decay phenomenology, as in Sect.~\ref{sec:prospects};
we are also interested in determining the degree to which we might then
{\it extract}\/ the values of the underlying parameters which characterize the DDM ensemble.
This is the subject to which we now turn.
Towards this end, we shall focus on the four benchmarks outlined in
Table~\ref{tab:case} and illustrated in Fig.~\ref{fig:thcurve} with $\xi = 1$.
For each benchmark, we shall
investigate the prospects for extracting the corresponding underlying DDM model parameters
$(\sqrt{s_0},\sqrt{s_N},\xi)$
by generating and then analyzing corresponding sets of simulated detector data.
We begin our discussion by outlining how these data sets are
generated and analyzed.
We then discuss the extent
to which our underlying DDM parameters can be meaningfully extracted in each case.
Specifically, using the simulated detector data for each benchmark, we shall focus
on two critical but somewhat distinct questions:
\begin{itemize}
\item To what extent can we extract {\it evidence}\/ of a correlation between
primary and secondary photon flux spectra?
\item To what extent does the {\it assumption}\/ of such a correlation
{\it enhance}\/ our ability to extract the corresponding
underlying DDM model parameters?
\end{itemize}
Note that a positive outcome to the first question
implicitly strengthens our interpretation of a measured photon flux
as resulting from annihilating/decaying dark matter (as opposed to, say,
other astrophysical sources).
By contrast, once we are assured that such a photon flux has a dark-matter origin,
such a correlation between the primary and secondary photon fluxes is {\it automatic}\/.
It is then the second question above which becomes critical for extracting the underlying
physics of the dark sector.
\subsection{Generating and analyzing simulated data sets}
In order to generate our simulated data sets,
we begin by determining the total expected number $N_B$ of background events observed
by our hypothetical detector within our region of interest during one year
of continuous observation.
This number $N_B$ is therefore evaluated across the entire energy range $0.3~\text{MeV} < E_\gamma < 3000~\text{MeV}$
to which the detector is sensitive,
yielding the result $N_B\approx 2.32\times 10^5$.
Likewise, we determine a number $N_S$ of signal events by assuming the
minimum necessary in order to claim a $5\sigma$ discovery based on a simple counting
analysis in which the statistical significance is estimated using $N_S/\sqrt{N_B}$.
This yields $N_S\approx 5 \sqrt{N_B} \approx 2.41\times 10^3$.
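As a rough cross-check of these numbers (assuming the Draco region of interest
adopted below, the $500~\mathrm{cm}^2$ effective area quoted above, and the
background model of Eq.~(\ref{eq:diffuse-bkg})):
\begin{verbatim}
import math

A_EFF   = 500.0     # effective area in cm^2
T_OBS   = 3.156e7   # one year of continuous observation, in s
D_OMEGA = 1.6e-3    # Draco region of interest, in sr

# integral of 2.74e-3 (E/MeV)^-2 over 0.3 MeV < E < 3000 MeV,
# in photons/(cm^2 s sr)
flux_b = 2.74e-3*(1.0/0.3 - 1.0/3000.0)

N_B = flux_b*D_OMEGA*A_EFF*T_OBS   # ~2.3e5
N_S = 5.0*math.sqrt(N_B)           # ~2.4e3
\end{verbatim}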
In principle, one might argue that the values of $N_B$ and $N_S$ should be
evaluated only over the energy range in which the particular benchmark can be
expected to produce signal events, since it is only over this range that the
analysis is sensitive to the background.
However, since there are relatively few background events in the high-energy regime,
it turns out that the above values of $N_B$ and $N_S$,
as calculated for our hypothetical detector as a whole,
are not significantly different from those that would correspond to Benchmark~B, which has the largest energy range.
In the following, we will take the above values of $N_B$ and $N_S$ to be fixed across all benchmarks.
This allows us to make
a meaningful comparison across benchmarks by considering our fixed quantity to be the
number of signal events itself (rather than, say, a corresponding statistical significance).
This procedure for calculating signals and backgrounds across the entire energy range
to which our hypothetical detector is sensitive also reflects
what one would actually do when faced with an experimental signal ---
namely, analyze this signal over the entire energy range available, without any
foreknowledge or assumptions regarding the particular underlying spectral features involved.
Given the above values of $N_B$ and $N_S$,
the generation of our simulated data set for each benchmark proceeds as follows.
The signal contribution associated with each ensemble constituent is determined by
partitioning the $N_S$ signal events among the $\phi_n$ in proportion to the contribution $\Phi_n$
that each makes toward the total photon flux $\Phi$. Photon energies for background
events are generated randomly from the relevant probability distribution function
over the entire range mentioned above.
Photon energies for the set of signal events associated with a given $\phi_n$ are
also generated randomly, with one third of the events assigned
the primary photon energy $E_{\rm line}$ given in Eq.~(\ref{Eline}) and the
other two thirds distributed according to a normalized probability distribution
function derived from Eq.~(\ref{twothetas}).
Finally, the raw $E_\gamma$ values for both signal and background events are
smeared according to Eq.~(\ref{eq:smearing}) with $\epsilon = 0.01$ in
order to account for the energy resolution of the detector.
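Schematically, this per-constituent event generation can be sketched as
follows (an illustrative fragment; \texttt{M\_PI0} and \texttt{rng} are as in
the earlier snippets):
\begin{verbatim}
import numpy as np

M_PI0 = 134.977
rng = np.random.default_rng(seed=0)

def sample_photon_energies(rs, n_events):
    # For n_events annihilations/decays at CM energy rs = sqrt(s):
    # one primary photon at E_line plus two secondary photons
    # drawn uniformly from the box [E_box^-, E_box^+]
    s = rs*rs
    E_line = (s - M_PI0**2)/(2.0*rs)
    E_lo, E_hi = M_PI0**2/(2.0*rs), rs/2.0
    primaries = np.full(n_events, E_line)
    secondaries = rng.uniform(E_lo, E_hi, size=2*n_events)
    return np.concatenate([primaries, secondaries])
\end{verbatim}
The raw energies returned by such a function would then be passed through the
Gaussian smearing step described above.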
The net result of this procedure is a set of four simulated energy spectra
that might emerge from the decays/annihilations of our four DDM ``benchmark'' ensembles.
Our analysis of these data sets then proceeds as follows.
First, recognizing that these data sets represent the total ``observed''
differential photon fluxes,
we begin by disentangling our ``signal'' contribution
from astrophysical backgrounds. For this reason, we focus exclusively on
dwarf-spheroidal searches, as the corresponding backgrounds can be estimated directly
from measurements. For concreteness, we consider the same region of interest
which characterized the dwarf-spheroidal search in Sect.~\ref{sec:DwarfSearch}
and adopt the same set of parameters for our hypothetical detector. To
isolate the signal contribution, we employ a minimal background-subtraction
procedure in which an expected number of background events
${\mathcal{N}}_i^{\mathrm{BG}}$ in each energy bin is derived using
the background model in Eq.~\eqref{eq:diffuse-bkg}
and is subtracted from the corresponding total number of observed events
${\mathcal{N}}_i^{\mathrm{Data}}$.
Again, we emphasize that we can follow this procedure because experimentalists will actually be able to measure
the background, unlike the situation in the case of a diffuse search.
The resulting number of events
\begin{equation}
\mathcal{N}_i^{\mathrm{Sig}} ~\equiv ~ \mathcal{N}_i^{\mathrm{Data}}
- {\mathcal{N}}_i^{\mathrm{BG}}~
\end{equation}
is thus our ``signal'' contribution, to be interpreted as
coming from the decays/annihilations of the constituents of the DDM ensemble.
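A minimal sketch of this subtraction step, assuming that the expected background counts per bin have already been evaluated from the background model, might read:
\begin{verbatim}
import numpy as np

def signal_residuals(E_gamma, N_bg_expected, bin_edges):
    # N_i^Data: histogram of observed (smeared) photon energies
    N_data, _ = np.histogram(E_gamma, bins=bin_edges)
    # N_i^Sig = N_i^Data - N_i^BG
    N_sig = N_data - N_bg_expected
    # Poisson uncertainty, dominated by the observed counts
    sigma = np.sqrt(N_data)
    return N_sig, sigma
\end{verbatim}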
Given this signal contribution, we determine the corresponding values
of the underlying DDM shape parameters
$(\sqrt{s_0}, \sqrt{s_N}, \xi)$ by fitting the template functions
in Eqs.~(\ref{eq:PhiP}) and~(\ref{eq:PhiS}) to this residual spectrum.
However, the specific fit we perform will depend on which of the fundamental
questions itemized above we are attempting to answer.
To address the first question, we perform {\it independent}\/ fits
of the primary and secondary flux spectra, extracting independent best-fit
values $(\sqrt{s_{0,p}},\sqrt{s_{N,p}},\xi_p)$ for the primary flux spectra
and $(\sqrt{s_{0,s}},\sqrt{s_{N,s}},\xi_s)$ for the secondary flux spectra.
Comparing these sets of parameters with each other thus provides a test of our
purported correlations between these two spectra.
Likewise, comparing each independent set of parameters against our corresponding
original benchmark values provides a measure of our ability
to extract our underlying DDM parameters {\it without}\/ assuming a correlation between the two spectra.
By contrast, to address the second question,
we perform a constrained fit of {\it both}\/ spectra simultaneously
with only a single set of free parameters
$(\sqrt{s_0},\sqrt{s_N},\xi)$.
Comparing the results thus obtained with those previously obtained with
independent fits for each spectrum then provides a measure
of the extent to which the existence of a correlation between
the two spectra enhances our ability to extract the underlying
DDM model parameters.
In practice,
it is important to recognize that there is actually another variable
beyond the shape variables $(\sqrt{s_0},\sqrt{s_N},\xi)$
which must also be fit when extracting our
underlying DDM parameters: this is the overall normalization
factor $\Phi_0$.
In fact, strictly speaking, the overall normalization factor for both the primary photon
spectrum in Eq.~(\ref{eq:PhiP}) and
the secondary photon spectrum
in Eq.~(\ref{eq:PhiS})
is not $\Phi_0$ alone, but rather the
parameter combination
\begin{equation}
\Psi ~\equiv~ \Xi \, (\sqrt{s_0})^{-\xi}~,
\label{Psidef}
\end{equation}
where
\begin{equation}
\Xi ~\equiv~ \frac{\Phi_0}{\Delta(\sqrt{s})}~.
\label{Xidef}
\end{equation}
We shall therefore fit the
aggregate quantity $\Psi$ directly, and only subsequently extract a value for $\Xi$
using the results of our overall fits for $\sqrt{s_0}$ and $\xi$.
Unfortunately, without {\it a priori}\/ knowledge of $\Delta(\sqrt{s})$, we see that
the parameter combination $\Xi$ cannot be disentangled further and thus
represents the irreducible limit of our ability to extract
the underlying DDM flux normalization using these methods.
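In code, this last step is a one-line inversion of Eq.~(\ref{Psidef}); the numerical values in the sketch below are purely hypothetical and serve only to illustrate the conversion.
\begin{verbatim}
# Hypothetical best-fit values, for illustration only
Psi, sqrt_s0, xi = 5.0e-11, 150.0, 1.0   # MeV-based units

# Invert Eq. (Psidef): Xi = Psi * (sqrt(s0))**xi
Xi = Psi * sqrt_s0**xi                   # cm^-2 s^-1 MeV^-1
\end{verbatim}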
As briefly discussed in Sect.~\ref{sec:case-studies},
the procedure that we shall use in performing these parametric fits to the signal
spectrum depends on the degree of overlap between the spectral features
associated with primary and secondary photons. In the regime in which
$\sqrt{s_N} < \sqrt{2} {m_{\pi^0}}$, these two features are well separated
and a fit can be performed for each feature independently.
Indeed, this will be our procedure for Benchmarks~A and C.~
By contrast, in the regime in which $\sqrt{s_N} > \sqrt{2} {m_{\pi^0}}$,
the overlap is significant
and a single fit must be performed for both features simultaneously.
This will be our procedure for Benchmarks~B and D.
Thus, summarizing, the specific types of fits we shall perform
depend not only on which of the questions itemized above
we are seeking to address, but also on which benchmark we
are studying.
To address the first question for Benchmark~A,
we shall perform two independent four-parameter fits, extracting
independent values
$(\sqrt{s_{0,p}},\sqrt{s_{N,p}},\xi_p,\Psi_p)$
and
$(\sqrt{s_{0,s}},\sqrt{s_{N,s}},\xi_s,\Psi_s)$
using our data sets for
the primary and secondary spectra respectively.
We also follow an identical procedure in order to address the first question
for Benchmark~C.~
Indeed, it is only because these two spectra are non-overlapping for Benchmarks~A and C
that we allow each fit to have its own independent normalization in these cases.
By contrast, in order to address the first question for Benchmark~B
or Benchmark~D, we perform a single seven-parameter fit to the parameters
$(\sqrt{s_{0,p}},\sqrt{s_{N,p}},\xi_p,
\sqrt{s_{0,s}},\sqrt{s_{N,s}},\xi_s,\Psi)$.
Indeed, in these cases, the overlapping nature of the primary and secondary photon
spectra requires that we impose a common normalization $\Psi$ during the fitting
process.
Of course, the results of this fit then yield independent values for
$\Xi_{p} = \Psi (\sqrt{s_{0,p}})^{\xi_p}$
and
$\Xi_{s} = \Psi (\sqrt{s_{0,s}})^{\xi_s}$.
Finally, in order to address the second question for each benchmark, we compare
the above results with those obtained through a single four-parameter fit
to the underlying DDM parameters
$(\sqrt{s_0},\sqrt{s_N},\xi,\Psi)$.
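As a concrete sketch of such a constrained four-parameter fit, the snippet below passes a single shared parameter set to a standard least-squares routine. We stress that the template used here is a schematic stand-in (a smoothed power-law plateau for the primary feature and a log-flat secondary bump with log-symmetric endpoints) rather than the actual flux templates of Eqs.~(\ref{eq:PhiP}) and~(\ref{eq:PhiS}), and that the primary line energy is taken from standard two-body kinematics.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

M_PI0 = 134.977                               # MeV

def e_line(sqrt_s):
    # Two-body kinematics for X -> gamma pi0 (assumed form)
    return (sqrt_s**2 - M_PI0**2) / (2.0 * sqrt_s)

def box(E, lo, hi, w=1.0):
    # Smoothed top-hat between lo and hi
    return 1.0 / (1.0 + np.exp(-(E - lo) / w)) \
         / (1.0 + np.exp((E - hi) / w))

def template(E, sqrt_s0, sqrt_sN, xi, Psi):
    # Schematic stand-in for Eqs. (eq:PhiP) + (eq:PhiS) with a
    # single shared set of shape parameters and normalization
    lo, hi = e_line(sqrt_s0), e_line(sqrt_sN)
    primary = box(E, lo, hi) * (E / lo)**(-xi)
    b = sqrt_sN / 2.0                         # secondary endpoint
    a = M_PI0**2 / (4.0 * b)                  # log-symmetric partner
    secondary = box(E, a, b) / E
    return Psi * (primary + secondary)

# Toy data standing in for the residual spectrum N_i^Sig
E_c = np.linspace(20.0, 100.0, 41)            # bin centres (MeV)
y = template(E_c, 150.0, 210.0, 1.0, 1.0e-10)
sigma = 0.05 * y + 0.05 * y.max()
popt, pcov = curve_fit(template, E_c, y, sigma=sigma,
                       p0=[145.0, 205.0, 0.8, 1.2e-10],
                       absolute_sigma=True)
sqrt_s0, sqrt_sN, xi, Psi = popt
\end{verbatim}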
Note that this analysis applies equally well for either annihilation or decay,
as the only difference between
these two cases lies not in the extracted values of the $\sqrt{s_n}$ parameters but
rather in the subsequent mapping between these parameters and the
original DDM mass variables $m_n$,
as already discussed in Sects.~\ref{sec:model}
and \ref{sec:spectrum} [especially Eq.~(\ref{sndef})]
and at the end of the Appendix.
\subsection{Results}
\begin{figure*}[t]
\includegraphics[width=0.425\textwidth, keepaspectratio]{case1_fit} \hskip 1.0cm
\includegraphics[width=0.425\textwidth, keepaspectratio]{case2_fit} \vskip 0.4cm
\includegraphics[width=0.425\textwidth, keepaspectratio]{case3_fit} \hskip 1.0cm
\includegraphics[width=0.425\textwidth, keepaspectratio]{case4_fit}
\caption{Sample photon-energy spectra (black dots with corresponding statistical error bars)
for Benchmarks~A (upper left panel), B (upper right panel), C (lower left panel),
and D (lower right panel) after background subtraction, along with the corresponding
best fits for the primary and secondary photon spectra (solid red curves).
For each benchmark, the numbers of background and signal events are taken to be
$N_B = 2.32\times10^5$ and $N_S = 2.41\times10^3$, as discussed in the text.
Note that we plot the quotient $(N_{\rm Data}-N_{\rm BG})/\Delta E_{\rm bin}$ on the vertical axis
(where the numerator tabulates the signal counts within each bin and the denominator indicates the corresponding
bin size), as this quotient is invariant under changes in the specific choice of bin size
when the bin size is sufficiently small.
The corresponding error bars, by contrast, depend on bin size,
and we have chosen $\Delta E_{\rm bin}=2~{\rm MeV}$ for the curves in these plots.
The best-fit parameters are also indicated within each panel,
along with the corresponding goodness-of-fit
$\chi^2$ per degree of freedom, while the upper and lower uncertainties quoted for each best-fit parameter
indicate the limits of the corresponding range within which $\chi^2$ varies by less than one unit.
Note that the fits performed here are {\it unconstrained}\/, in the sense
that the primary and secondary photon spectra are fit independently. These fits thus provide a test of the extent to which the correlations between these two spectra can be discerned from data.}
\label{fig:fitBMA}
\end{figure*}
The results of our analysis are as follows.
For each of the benchmarks listed in Table~\ref{tab:case},
our corresponding simulated data set is shown in Fig.~\ref{fig:fitBMA} (black dots with error bars).
Specifically, these dots represent the residual
populations of events $\mathcal{N}_i^{\mathrm{Sig}}$ in the relevant energy bins,
with error bars corresponding to statistical uncertainties.
Also superimposed on these data sets are the results of parametric fits
to the spectral features associated with primary and secondary photons
(solid red lines).
Recall that in these plots, the spectral features associated with the
primary and secondary flux spectra are fit independently.
As discussed above, these are the fits which are designed to address the first question itemized above.
The results for Benchmark~A are shown in the upper left panel of Fig.~\ref{fig:fitBMA}.
For Benchmark~A, our value of $N_S$ translates into the result
\begin{equation}
\Xi ~=~ 5.4 \times 10^{-9}~ {\rm cm}^{-2} \,{\rm s}^{-1}\, {\rm MeV}^{-1}~,
\label{inputval}
\end{equation}
which we take as our input value for this benchmark.
We perform our fit to the primary photon spectrum for Benchmark~A within
the energy range $20~\text{MeV} \leq E_\gamma \leq 45~\text{MeV}$ --- indeed, the region $E_{\gamma}<20~\text{MeV}$ is background-dominated, rendering the corresponding bin counts unreliable given our signal statistics.
We find that the best-fit values for $\xi_p$ and $\sqrt{s_{N,p}}$
are those indicated in the upper left panel of Fig.~\ref{fig:fitBMA}.
It is immediately evident that these extracted values are consistent with the corresponding input values
to within $1\sigma$.
Note that no meaningful
information can be extracted for $\sqrt{s_{0,p}}$, as
large uncertainties in the
event counts in bins with $E_\gamma \lesssim 20~\text{MeV}$ completely obscure the low-energy cutoff in the primary photon spectrum.
Thus, a best-fit value for $\Xi$ is not available from the primary photon spectrum, as this would require
the value for $\sqrt{s_{0,p}}$.
By contrast, a fit to the secondary photon spectrum for Benchmark~A provides far more reliable
information about the properties of the underlying DDM ensemble. Performing such a fit
within the energy range $50~\text{MeV} \leq E_\gamma \leq 90~\text{MeV}$ where the residual bin counts are greater than $\sim 10$, we find the results shown in the upper left panel of Fig.~\ref{fig:fitBMA}.
Once again, each of these extracted values is in good agreement with the corresponding input value to within $1\sigma$.
Since we are able to meaningfully extract $\sqrt{s_0}$ for the secondary photon spectrum, we are also able to report
the best-fit value for $\Xi$ in this case.
We find that the best-fit value for the normalization parameter is
\begin{equation}
\Psi_s ~=~ 4.8^{+275.9}_{-4.8} \times 10^{-11} ~ {\rm cm}^{-2} \, {\rm s}^{-1} \, {\rm MeV}^{-1-\xi}~,
\end{equation}
from which we obtain the value of $\Xi$ for the secondary photon spectrum:
\begin{equation}
\Xi_s ~=~ 3.0^{+169.0}_{-12.3}\times 10^{-9} ~ {\rm cm}^{-2}\, {\rm s}^{-1}\, {\rm MeV}^{-1}~.
\end{equation}
Although this extracted value is consistent with the input value in Eq.~(\ref{inputval})
to within $1\sigma$, the corresponding uncertainty is too large to infer any useful information.
Thus, for Benchmark~A, we conclude that it is difficult to obtain meaningful information
concerning the normalization parameter $\Xi$ from either the primary or secondary photon spectrum.
By contrast, we see that reasonable estimates of the parameters which govern the
{\it shapes}\/ of the primary and secondary photon spectra
can indeed potentially be obtained from future gamma-ray detectors.
Moreover,
the fact that the values of DDM parameters such as $\sqrt{s_N}$ and $\xi$
extracted from the primary photon spectrum
match those extracted from the secondary photon spectrum
implies that we can indeed perform a successful test of the underlying correlations between these two spectra,
and indicates that our primary and secondary spectra together contain
consistent information regarding the underlying DDM model.
We now turn to Benchmark~B, for which results are shown in the
upper right panel of Fig.~\ref{fig:fitBMA}.
For this benchmark, the signal events are produced within the energy range $135~\text{MeV} < \sqrt{s} < 231~\text{MeV}$,
requiring us to take
\begin{equation}
\Xi ~=~ 2.3\times 10^{-9} ~{\rm cm}^{-2} \, {\rm s}^{-1} \, {\rm MeV}^{-1}~
\end{equation}
as an input value.
Since the primary and secondary photon spectra overlap significantly for this
benchmark, we perform a combined fit to both features in the manner discussed
above, taking our fitting range to be $20~\text{MeV} \leq E_{\gamma} \leq 100~\text{MeV}$. We then obtain
the best-fit values for the shape parameters
given in Fig.~\ref{fig:fitBMA}.
Once again, we observe
that the parameters $\xi$ and $\sqrt{s_N}$ extracted for both spectra
agree reasonably well with each other, thus providing a rough test of their correlation.
Moreover, each of the extracted shape parameters listed in the upper right panel of Fig.~\ref{fig:fitBMA}
is consistent with the corresponding input value to within $(1-2)\sigma$.
Thus, for Benchmark~B, we conclude that our fitting procedure yields reasonable estimates for the shape parameters which characterize the photon spectrum associated with our DDM ensemble.
The best-fit value of $\Psi$, by contrast, comes with large uncertainties.
Indeed, we shall find that this is a characteristic of all of the benchmarks we shall be examining.
We shall therefore refrain from
quoting further best-fit values for $\Psi$ and $\Xi$ in what follows.
However, we stress that in all cases this is strictly only an artifact of the parametrization
and does not represent a corresponding uncertainty
in the actual signal flux or in the number of signal events (the uncertainty in which is indeed small).
Note that Benchmark~B provides a better handle for measuring the DDM scaling parameter
$\xi$ accurately, especially when
compared to $\xi_p$ in Benchmark~A.~
This is ultimately because a larger fraction of the signal events in Benchmark~B populate the higher-energy regime, where the background contribution is relatively small.
Therefore, even after background subtraction, the residual spectrum for Benchmark~B better preserves the original shape information than it does for Benchmark~A.
For the remaining two benchmarks, even the primary photon spectrum has reasonable sensitivity to $\sqrt{s_0}$, because this spectrum begins at $E_{\gamma}>20~\text{MeV}$, where the uncertainties in the binned event counts are comparatively small.
Our results for Benchmark~C are shown in the lower left panel of Fig.~\ref{fig:fitBMA}.
The signal events are generated with $164~\text{MeV} < \sqrt{s} < 180~\text{MeV}$, from which we find
that
\begin{equation}
\Xi ~=~ 1.6\times 10^{-8} ~{\rm cm}^{-2} \, {\rm s}^{-1}\, {\rm MeV}^{-1}~.
\end{equation}
As with Benchmark~A, the two photon spectra are well separated, and thus two individual fits are possible.
We adopt the same energy ranges as for Benchmark~A, namely
$20~\text{MeV} \leq E_{\gamma} \leq 45~\text{MeV}$ and $50~\text{MeV} \leq E_{\gamma} \leq 90~\text{MeV}$, respectively,
for our fits to the primary and secondary photon spectra,
and obtain the best-fit results
for the shape parameters
as shown in the figure.
The parameters for the primary and secondary photon spectra are generally consistent with each other,
thus indicating the possibility of testing correlations between them,
and they are also in a good agreement with the corresponding input values to within $(1-2)\sigma$.
It turns out that the overall shape of the secondary photon spectrum does not change much for this benchmark,
even with substantial variations of the scaling parameter.
\begin{figure*}[t]
\includegraphics[width=0.425\textwidth, keepaspectratio]{case1_fit_Constrained} \hskip 1.0cm
\includegraphics[width=0.425\textwidth, keepaspectratio]{case2_fit_Constrained} \vskip 0.4cm
\includegraphics[width=0.425\textwidth, keepaspectratio]{case3_fit_Constrained} \hskip 1.0cm
\includegraphics[width=0.425\textwidth, keepaspectratio]{case4_fit_Constrained}
\caption{Same as Fig.~\protect\ref{fig:fitBMA}, except that we now perform {\it constrained}\/ fits in which the
primary and secondary photon spectra are assumed to be correlated.
A comparison with the results of Fig.~\protect\ref{fig:fitBMA} demonstrates
that the assumption of such correlations can significantly enhance our ability
to accurately extract the underlying DDM parameters governing the dark sector. }
\label{fig:finalfignew}
\end{figure*}
Results for Benchmark~D are shown in the lower right panel of Fig.~\ref{fig:fitBMA}.
The signal events are generated with $164~\text{MeV} < \sqrt{s} < 230~\text{MeV}$, from which we find
\begin{equation}
\Xi ~=~ 3.7\times 10^{-9} ~{\rm cm}^{-2}\, {\rm s}^{-1}\, {\rm MeV}^{-1}~.
\end{equation}
We then perform a single combined fit to both photon spectra,
again adopting the same fitting range $20~\text{MeV} \leq E_{\gamma} \leq 100~\text{MeV}$
as for Benchmark~B.~
The best-fit values for all shape parameters are listed in Fig.~\ref{fig:fitBMA}.
We can easily see that the parameters measured from both spectral features are
consistent with each other, as expected.
The extracted values also all agree with the corresponding input values to within $(1-2)\sigma$.
In general, scanning the results in Fig.~\ref{fig:fitBMA} for all four benchmarks
simultaneously, we see that our best-fit results for $\sqrt{s_0}$ and $\sqrt{s_N}$
are generally quite accurate. Unfortunately, we also observe that these fits
generally do a poor job of extracting the true underlying values of the
DDM scaling parameter $\xi$.
While certain benchmarks (such as Benchmark~B) lead to
relatively accurate best-fit values for $\xi$, particularly for the primary photon spectrum,
these predictions become significantly worse for those benchmarks (such as
Benchmarks~A and C) in which the spectral features associated with
the primary and secondary photons are relatively well-separated in energy,
with minimal overlap.
The case of Benchmark~C is particularly poor, with {\it negative}\/ central values of $\xi$
extracted from both the primary and secondary spectra!
Indeed, the negative central value for $\xi_p$ is reflected in the negative slope of the red best-fit line
along the primary plateau in the lower left panel of Fig.~\ref{fig:fitBMA}.
All of the fits performed thus far treat our primary and secondary photon
spectra independently. As discussed above, they are therefore suitable for addressing
the first bulleted question at the beginning of this section
concerning the extent to which correlations between the two photon fluxes might
be discernible in realistic data samples.
However, in order to address
the second of our bulleted questions, we need to assume the existence of such correlations and
perform constrained fits to both spectra simultaneously.
Indeed, it is only by performing such constrained fits and comparing the results thus obtained
with those of the unconstrained fits we have already performed that we can determine the extent to which
these correlations enhance our ability to extract the underlying DDM parameters from data.
The results of such constrained fits are shown in Fig.~\ref{fig:finalfignew}.
Upon comparison with the corresponding results in Fig.~\ref{fig:fitBMA},
we immediately see that while our extracted best-fit values of $\sqrt{s_0}$ and $\sqrt{s_N}$ continue
to be as accurate as they were before,
our extracted best-fit values for the DDM scaling parameter $\xi$ are significantly improved.
Indeed, in all cases the true value
$\xi=1$ is within the errors quoted.
The case of Benchmark~C is particularly noteworthy.
Where previously our unconstrained fits had yielded negative values for both $\xi_p$ and $\xi_s$,
the simple act of changing to a constrained fit has pushed the corresponding best-fit
result to a central value $\xi= 1.01$, which is remarkably close to the true value!
In general, we see that it is Benchmarks~A and C --- {\it i.e.}\/, benchmarks in which our two spectral features are
well separated in energy --- for which the switch from
an unconstrained fit to a constrained fit
produces the greatest improvement.
It is thus these benchmarks for which the assumption of a correlation between the primary
and secondary photon spectra is of greatest value.
Indeed, as evident from Fig.~\ref{fig:finalfignew},
the assumption of a correlation between the primary and secondary flux spectra
leads to a significant improvement in our ability to extract the underlying DDM parameters
regardless of the particular benchmark under study.
Of course, our comparison between the fits in Fig.~\ref{fig:fitBMA} and
those in Fig.~\ref{fig:finalfignew}
amounts to analyzing the results of only a single pseudo-experiment. In principle, one could rerun this
experiment with many different random data sets, and repeat this analysis in each case.
However, we shall refrain from this exercise because
the main points that we have aimed to demonstrate are already evident.
Indeed, the results illustrated in
Figs.~\ref{fig:fitBMA} and \ref{fig:finalfignew}
prove to be both typical and robust.
We conclude, then, that it will indeed be possible
to extract evidence of a correlation
between primary and secondary photon
spectra at future gamma-ray facilities.
Moreover, we see that
the assumption of such a correlation
will indeed significantly sharpen our ability to extract
the corresponding underlying dark-sector parameters.
Thus, through this correlation, we see that our ability to indirectly probe
the physics of the dark sector through emitted
gamma-rays can be greatly enhanced.
\section{Conclusions and Outlook\label{sec:conclusions}}
In this paper, we have identified an unambiguous indirect-detection signature of
Dynamical Dark Matter which arises in cases in which the constituents of the
DDM ensemble annihilate or decay primarily into a final state involving a primary
photon and a neutral pion, the latter subsequently decaying into a pair of secondary photons.
When the mass gap between DDM constituents is sufficiently small that particle detectors
are unable to resolve the contributions of individual constituents in the photon energy spectra,
this signature involves a pair of characteristic {\it continuum}\/ features in the
gamma-ray spectrum in the $\mathcal{O}(1 - 100)~{\rm MeV}$ range --- one feature associated
with the primary photons, and the other feature associated with the secondary photons.
Since the spectral shapes
of these two features are correlated, a comparison between the information extracted
from the two continuum features provides a powerful consistency check that
they indeed have a common origin in terms of an underlying DDM ensemble.
We have examined the prospects for observing a signal of this sort at the
next generation of MeV-range gamma-ray telescopes and investigated the extent
to which the parameters which govern the DDM ensemble can be extracted from
spectral data once such a signal is unambiguously identified.
As we have seen, it should be possible not only to extract evidence of this correlation
in future photon spectral data, but also to exploit this correlation in
order to significantly enhance our ability to extract these underlying DDM parameters.
A few comments are in order. First, in order to maintain maximum generality,
we emphasize that
we have estimated both the discovery
reach and the potential for measuring the DDM model parameters at the next generation
of gamma-ray detectors by defining a simplified, hypothetical detector whose attributes
have been chosen not to be identical to those of any particular such instrument,
but rather to be representative of this class of experiments in general.
However, for a realistic
detector, the corresponding analysis would typically involve additional subtleties and
complications. For example, the energy resolution for such a detector is typically
not described by a Gaussian smearing function with a constant value of $\epsilon$.
Moreover, the effective area for a realistic detector is typically not
independent of photon energy throughout the range of $E_\gamma$ to which the
instrument is sensitive.
In addition to these experimental simplifications, there are also a number of theoretical
approximations which we have employed in our analysis.
For example, we have taken the branching fraction for the
annihilation/decay of all ensemble constituents to the $\gamma \pi^0$
final state to be effectively unity. However, there are situations in which this is
not necessarily true for the lightest ensemble constituents. The reason is that
a fundamental interaction between the $\phi_n$ and SM quarks of the sort
which leads to dark-matter annihilation/decay to $\gamma \pi^0$ also
generically leads to annihilation/decay to $e^+ e^-$ and/or $\mu^+ \mu^-$,
via loop-level processes involving a virtual photon. The branching fraction into
such leptonic final states is typically negligible for most of the $\phi_n$.
However, it can become significant for processes in which the CM
energy is only slightly above the kinematic threshold $\sqrt{s_n} \approx m_{\pi^0}$
for $\gamma \pi^0$ production. As a result, the sharpness of the peak in the
secondary photon spectrum at $E_\gamma \approx m_{\pi^0}/2$ depends both on
$\sqrt{s_0}$ and on the energy resolution of the detector. Incorporating these
considerations into a more detailed analysis would inevitably lead to a modification
of our quantitative results in DDM scenarios of this sort.
On a related note, we remark that our focus
in this paper
has been on the case in which the dominant signal contribution
to the photon flux arises from ensemble constituents whose
CM energy for annihilation/decay lies within
the range $m_{\pi^0} \leq \sqrt{s} \leq 2m_{\pi^\pm}$. However, it
is also useful to consider how our results would be affected if
non-trivial contributions to the photon flux were also to arise from
constituents with $\sqrt{s}$ outside this range, and thus from
photoproduction processes with different kinematics. For example,
it is important to examine whether such contributions might obscure the
spectral features which we have discussed in this paper.
We begin by considering the contribution from constituents with
$\sqrt{s}$ slightly above the $2m_{\pi^\pm}$ threshold, for which the dominant
$C$-odd final state will be $\pi^+ \pi^-$. The principal contribution to the
photon flux in this case arises from final-state radiation. Photons produced in
this way tend to be quite soft, and as a result, any contamination of our
signal spectrum from such photons would primarily affect the region where
$E_\gamma$ is low and statistical power is already poor. By contrast,
for the $\gamma \pi^0$ final state which has been the focus of our paper,
at least one of the two salient spectral features always appears at a relatively high
energy. For constituents with even larger $\sqrt{s}$, for which final states involving
three or more pions are accessible, the shape of the resulting photon spectrum
becomes highly model-dependent. However, one generally expects these spectra to
be relatively smooth and featureless over the range of $E_\gamma$ relevant for
our analysis.
Now let us turn to the contribution from constituents with $\sqrt{s} < m_{\pi^0}$.
For $\sqrt{s}$ in this regime, the dominant contribution to the photon flux arises
from the final state $3\gamma$, and from final-state radiation produced in conjunction with
the final state $e^+ e^-$. The former contribution is associated with processes
involving an off-shell $\pi^0$, while the latter is associated with processes involving an off-shell
photon attached to a quark loop. Photons produced in conjunction with the $e^+ e^-$
final state will once again be quite soft and consequently have little impact on our
results. By contrast, the contribution from the $3\gamma$ final state could potentially
distort the shape of the secondary-photon spectrum at energies slightly below its peak.
In this paper, we have applied our analysis to a DDM ensemble in which the photon flux scales
with the center-of-mass energy as a power law. Examples of explicit DDM models in which
such behavior is exhibited include those in Refs.~\cite{DDM1,DDM2}.
Indeed, as noted above Eq.~(\ref{eq:flux}), scaling relations
of this form
tend to emerge naturally for a variety of
theoretical structures
underlying these ensembles.
However, there do exist DDM constructions in which such scaling relations are given not by simple power
laws but by other functional forms~\cite{RandomMatrixDDM,HagedornDDM}.
These situations can nevertheless be addressed in a manner similar to that which we have
employed in this paper.
In general, the photon flux is determined by the scaling of the abundance and
annihilation/decay rate as in Eq.~(\ref{eq:PhintotExp}). Thus, for any other DDM construction,
one can similarly determine the primary and secondary photon fluxes. In fact,
it is not even necessary for the dark sector to constitute a DDM ensemble at all.
Even if the dark sector
consists of multiple particles whose lifetimes and abundances are not determined by any
unified organizing principle, those lifetimes and annihilation/decay rates completely determine the primary and
secondary differential photon fluxes.
The linchpin of this paper has been the correlation between the spectral shapes of
the primary and secondary photon fluxes.
Fortunately, this correlation is robust and survives even if the
dark sector lacks a unified organizing principle.
To see this most directly, we recall that
each dark-sector constituent
of a given mass makes only a single monochromatic contribution to the primary photon flux.
Thus, the relation between the primary flux and the underlying
dark-sector component masses is easily invertible: if the primary flux is known,
then one can easily determine the spectrum of particles and annihilation/decay
rates which generated that primary spectrum. This in turn then provides a prediction for the secondary photon flux.
Using the primary photon flux to predict the secondary photon flux is a strategy that
is likely to be most useful in the case where $\sqrt{s_N} < \sqrt{2} {m_{\pi^0}}$, for which the primary and secondary photon features can be cleanly separated. Indeed, after subtracting the
estimated background from the data in the region of the primary feature, the residuals constitute
a measurement of the primary photon flux, up to statistical fluctuations and the smearing due to the
energy resolution. One could then use this primary photon flux to generate a prediction for the secondary
flux, and test the goodness of fit for this prediction to the actual data in the region of the secondary
feature.
However, since the determination of the primary
photon flux is distorted by the statistical fluctuations and the effects of a finite energy resolution,
the implementation of this strategy is likely to be non-trivial.
This would therefore be an interesting direction for future study.
In cases for which $\sqrt{s_N} > \sqrt{2} {m_{\pi^0}}$, by contrast, the primary and secondary photon features are expected to overlap significantly. As a result, it may be more problematic to cleanly separate them.
Despite this fact, we have already seen that these two features remain correlated and in the case of a DDM ensemble we have seen that this correlation can significantly enhance our ability to extract the underlying DDM parameters --- even
when these features overlap significantly.
In general, however, performing an {\it a priori}\/ separation of the primary and secondary photon features
will undoubtedly be a more complicated task in the cases where these features overlap.
One useful tool in this regard may be to exploit the so-called ``log-symmetry'' of the secondary photon flux --- {\it i.e.}\/, the invariance of this flux under the energy-inversion symmetry $E\to {m_{\pi^0}^2}/4E$, as discussed in the Appendix.
Any contributions to the total flux which violate this symmetry are necessarily those from the primary photons.
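This symmetry lends itself to a simple numerical test. A minimal sketch, in which a binned flux is interpolated in log-energy and compared with its image under the inversion, might look as follows (the handling of energies mapped outside the measured range is glossed over here):
\begin{verbatim}
import numpy as np

M_PI0 = 134.977                               # MeV

def log_symmetry_residual(E, F):
    # Image of each energy under E -> m_pi0^2 / (4 E)
    E_inv = M_PI0**2 / (4.0 * E)
    # Evaluate F at the inverted energies (E must be ascending)
    F_inv = np.interp(np.log(E_inv), np.log(E), F)
    # A residual consistent with zero indicates purely secondary
    # photons; significant deviations flag a primary contribution
    return F - F_inv
\end{verbatim}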
Finally, in closing, we remark that correlations between continuum features
which arise in the gamma-ray spectra of annihilating/decaying DDM ensembles
arise not only for the $\gamma \pi^0$ final state
which has been the focus of this paper, but for other final states as well.
For example, in DDM scenarios in which each of the ensemble constituents
can annihilate into both $\gamma\gamma$ and $\gamma Z$,
similar correlations between the shapes of the two resulting
spectral features can likewise be exploited in order to corroborate the DDM
origin of the excess and to extract information about the parameters
governing the underlying ensemble. Thus, such correlations could likewise be used
in order to extract information about this alternative class of DDM ensembles.
\begin{acknowledgments}
We would like to thank Kaustubh Agashe and Graciela Gelmini for useful discussions.
We would also like to acknowledge the hospitality
of the Center for Theoretical Underground Physics and Related Areas (CETUP$^\ast$),
where this work was initiated
during the 2015 Summer Program;
DK, JK, JCP, and BT would also like to thank CETUP$^\ast$ for partial support during this program.
KB and JK are supported in part by the National Science Foundation under CAREER Grant PHY-1250573,
while KRD is supported in part by the Department of Energy under Grant DE-FG02-13ER41976
and by the National Science Foundation through its employee IR/D program.
DK is supported in part by the U.S.\ Department of Energy under Grant DE-SC0010296,
and JCP is supported in part by the Basic Science Research Program through the
National Research Foundation of Korea funded by the Ministry of Education (NRF-2013R1A1A2061561).
BT is supported in part by an internal research award from Reed College.
The opinions and conclusions expressed herein are those of the authors, and do not
necessarily represent those of any funding agency.
\end{acknowledgments}
\section{Introduction}
\label{sec:Introduction}
Low- to intermediate-mass stars approach the end of their evolution through the asymptotic giant branch (AGB) phase, during which they develop large-amplitude variability characterized by periodicities of tens to hundreds of days. These objects are known as long-period variables (LPVs), a rather generic term encompassing distinct evolutionary phases with a variety of observed features and physical properties.
Mira variables, the most spectacular among the LPVs, are AGB stars thought to pulsate in the fundamental radial mode, and follow a well-known period-luminosity (PL) relation \citep[e.g.][]{Feast_1984,Feast_etal_1989}. Several other PL relations of LPVs have been discovered in the Large Magellanic Cloud (LMC) using data from the MACHO survey \citep{Wood_etal_1999,Wood_2000}, and are interpreted as resulting from pulsation in overtone modes \citep[e.g.][]{Wood_2015,Trabucchi_etal_2017} or ellipsoidal light variations in binary systems \citep{Soszynski_etal_2004_ellipsoidal}. One sequence is formed by periods systematically longer than those of Miras, and cannot be reconciled with the same kind of stellar oscillation invoked to explain other sequences. These periods are called long secondary periods (LSPs), and their nature is currently unknown \citep[e.g.][and references therein]{Nicholls_etal_2009,Takayama_Ita_2020}.
The PL relations of LPVs have recently been drawing attention due to their potential as distance indicators \citep{Whitelock_2013,Rau_etal_2019,Yuan_etal_2017M33,Yuan_etal_2017LMC,Yuan_etal_2018,Huang_etal_2018,Huang_etal_2020}. Indeed, Mira variables are easy to find and identify: they are intrinsically bright, have large photometric amplitudes, and are common stars, found in rather diverse astrophysical environments. For similar reasons, LPVs are often used to characterize stellar populations over a wide range of ages and metallicities \citep[e.g.][]{Lebzelter_etal_2018}, and have the potential to be used as age indicators \citep[e.g.][]{Feast_Whitelock_2000,Feast_etal_2006,Grady_etal_2019}.
The strong dependence of pulsation properties upon stellar structure makes pulsation an excellent tool for inferring otherwise elusive stellar parameters such as radii and current masses. This is especially true in the case of multi-periodicity, a common feature of LPVs, which offers a way to obtain stringent constraints on modelling by simultaneously satisfying several observed variability requirements. In this respect, many LPVs (especially at RGB luminosities) show properties
that are somewhat akin to those of solar-like oscillating red giants \citep[e.g.][]{Dziembowski_Soszynski_2010,Mosser_etal_2013,Stello_etal_2014}, suggesting the possibility of applying the successful investigative tools of asteroseismology to these more evolved stars.
Pulsation is of special interest in the context of AGB stars, as it plays a crucial role in the mass-loss process that leads to their death \citep[see e.g. the review by][]{Hofner_Olofsson_2018}. In brief, large-amplitude pulsation drives energetic shock waves through the atmosphere, generating favourable physical conditions for the condensation of dust grains. Stellar radiation then transfers momentum to dust (and, via dynamical coupling, to the gas), driving intense outflows from the surface. The correlation of pulsation in different modes with mass loss and dust formation has been explored, e.g., by \citet{Lebzelter_etal_2006}, who linked the change of dominant dust species of AGB stars in 47 Tuc with the switch from overtone to fundamental pulsation, and more recently by \citet{McDonald_Trabucchi_2019}, who examined the signature of the transition between distinct mass-loss rate regimes in the PL relations of LPVs in the LMC.
The importance of a good understanding of mass loss on the AGB can hardly be overstated. On one side, it determines AGB lifetimes, a knowledge of which is necessary to accurately quantify the important contribution of these bright stars
to the integrated light of galaxies and their spectral energy distribution \citep[e.g.][]{Maraston_2005}. On the other side, it can be an important contributor in the chemical evolution of galaxies, as it pollutes the interstellar medium with ejecta that have been enriched in nucleosynthesis products by repeated third dredge-up events, and by CNO-cycle burning at the bottom of the convective envelope in the most massive AGB stars \citep{Kobayashi_etal_2011,Karakas_Lattanzio_2014}.
LPVs are thus critical ingredients in several fields of astrophysical research, and efforts are being made to establish solid connections between their pulsation features and their physical and evolutionary properties, in order to fully exploit their potential. This is especially important in view of the large volume of stellar variability data expected from current and upcoming large-scale surveys, such as \textit{Gaia}\ \citep{GaiaCollaboration_2018} and the Rubin Observatory Legacy Survey of Space and Time \citep[LSST, ][]{Ivezic_LSST_2109}. Some recent results include evidence for the Mira PL relation being independent of metallicity \citep{Goldman_etal_2019}, and the exploitation of the \textit{Gaia}\ Data Release 2 (DR2) catalog of LPV candidates \citep{Mowlavi_etal_2018} to characterize the mass dependence of PL relations of LPVs in the Magellanic Clouds \citep{Lebzelter_etal_2019}. On the theoretical side, effort has been devoted to the implementation of descriptions of time-dependent convection to model its coupling with pulsation in one-dimensional models \citep[e.g.][and references therein]{Olivier_Wood_2005,Xiong_etal_2018}, while the innovative three-dimensional ``star-in-a-box'' models of AGB stars by \citet{Freytag_etal_2017} naturally develop pulsation with periods compatible with observations.
In the previous work of this series \citep[][hereafter \citetalias{Trabucchi_etal_2019}]{Trabucchi_etal_2019}, we addressed the theoretical study of LPVs by computing a large grid of linear 1D models of non-adiabatic radial pulsation. This allowed us both to derive easy-to-use tools and prescriptions for predicting variability features as a function of global stellar parameters, and to combine them with synthetic stellar population simulations to test current pulsation models against the benchmark of resolved stellar populations in the Magellanic Clouds. This approach confirmed the validity of linear predictions for overtone mode pulsation \citep{Trabucchi_etal_2017}, while showing that they systematically overestimate the period of fundamental mode pulsation. In the latter, the amplitude of radial displacement of the stellar layers during a pulsation cycle becomes very large, hence the breakdown of the linear approximation. Miras and related fundamental mode pulsators are possibly the type of LPVs with the most promising applications, but most of the available prescriptions used to describe their variability are based on linear models, which are intrinsically inadequate for this task \citep{Trabucchi_etal_2020_ViennaVarStar}.
Nonlinear hydrodynamic simulations are naturally better suited to describing large-amplitude pulsation \citep{YaAri_Tuchman_1996,Lebzelter_Wood_2005,Ireland_Scholz_Wood_2008,Kamath_etal_2010}, and have been employed in several works to study LPVs [see e.g. \citet{Olivier_Wood_2005}]. Yet all such studies are limited to a small number of models, and, to date, there exists no systematic investigation of nonlinear pulsation in luminous red giants as a function of global stellar parameters. As a result, it is still unclear how different the predictions of linear and nonlinear models actually are. In the present work, we carry out such a study with the help of a one-dimensional hydrodynamic code that includes a time-dependent treatment of convection. This also gives us the opportunity to investigate in detail how pulsation is affected by the dissipation of kinetic energy due to turbulent viscosity, another open issue.
This paper is structured as follows. In Sect.~\ref{sec:ModelsAndParameters} we introduce the main features of our methodology and the range of stellar parameters covered in this study. In Sect.~\ref{sec:StaticEnvelopesAndLinearStability} we describe the calculation and properties of the set of reference linear pulsation models. The effect of turbulent viscosity on linear pulsation is also discussed. The calculation and processing of nonlinear models are described in Sect.~\ref{sec:NonlinearModels}. Sect.~\ref{sec:Results} is dedicated to the discussion and interpretation of the results, and to the modelling of the period-mass-radius relation of nonlinear fundamental mode pulsation. We compare our models with observations in Sect.~\ref{sec:ComparisonWithObservations}, while Sect.~\ref{sec:Conclusions} is dedicated to conclusions.
\section{Models and parameters}
\label{sec:ModelsAndParameters}
Firstly, we compute a set of static envelope models representative of M-type (O-rich) AGB stars, and examine their linear and nonlinear radial pulsation. The set is constructed by varying the stellar parameters mass $M$ (generally different from the initial mass $M_{\rm i}$) and luminosity $L$, while keeping the chemical composition fixed (metallicity $Z$, hydrogen mass fraction $X$, and number ratio ${\rm C}/{\rm O}$ of carbon-to-oxygen atoms at the surface). The chosen metallicity ($Z=0.006$) is roughly representative of the young and intermediate-age populations in the Small and Large Magellanic Clouds (SMC and LMC), our reference targets for comparison with observations. The mixing length parameter $\alpha_{\rm ML}$ (mixing length in units of pressure scale height) is also varied, so as to obtain a few different values of effective temperature $T_{\rm eff}$ (i.e. of radius $R$) at fixed mass and luminosity. The mass of the core is artificially increased with luminosity through an analytic relation derived from evolutionary models (see Sect.~\ref{ssec:StaticModels}).
The relevant parameters in the models and corresponding values are summarized in Table~\ref{tab:ModelParameters}, and cover a subset of the grid of \citetalias{Trabucchi_etal_2019}. The only exception is that we explore the effect of turbulent viscosity, described by the free parameter $\alpha_{\nu}$, which was fixed to $\alpha_{\nu}=0$ in our previous work\footnote{
Test calculations suggest that a nonzero value of $\alpha_{\nu}$ favours convergence of the hydrodynamic code, hence the choice $\alpha_{\nu}=10^{-4}$. This should be considered effectively equivalent to choosing $\alpha_{\nu}=0$ (no turbulent viscosity), there being negligible differences between the respective results.
}. Increasing $\alpha_{\nu}$ makes the dissipation of pulsational kinetic energy by convective turbulence more efficient, and is expected to stabilize pulsation (in both the linear and nonlinear regimes) and to reduce the amplitude of pulsation in nonlinear models. The present work represents the first systematic study of these effects.
It is worth clarifying the meaning of amplitude adopted throughout this paper. In each mass zone of nonlinear models, all relevant physical properties display some degree of oscillation, the amplitude of which can be defined as the difference between maximum and minimum values during the pulsation cycle. Here, we will usually consider the peak-to-peak amplitude of radial displacement of the optical surface (where the Rosseland mean optical depth is $\sim2/3$). While this is correlated with the variation of bolometric luminosity, the latter does not necessarily have the same amplitude as that in observed finite pass bands such as VIJHK. These amplitudes are determined by radiative processes in the atmospheric layers that are not dealt with in the present models.
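For definiteness, the amplitude measure adopted here reduces, in code, to a minimal sketch of the following form, where the radius of the $\tau_{\rm R}\simeq2/3$ layer is assumed to be available as a time series and initial transients are excluded by hand:
\begin{verbatim}
import numpy as np

def peak_to_peak(time, r_phot, t_start=0.0):
    # Peak-to-peak radial displacement of the optical surface:
    # max(R) - min(R) after initial transients have decayed
    time = np.asarray(time)
    r = np.asarray(r_phot)[time >= t_start]
    return r.max() - r.min()
\end{verbatim}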
\begin{table}
\centering
\caption{
Parameters varied in the computation of pulsation models, and corresponding values. The exact boundaries of the luminosity range depend on other parameters.
}\label{tab:ModelParameters}
\begin{tabular}{c|c}
Parameter & Values \\
\hline
$Z$ & 0.006 \\
$X$ & 0.7 \\
${\rm C}/{\rm O}$ & 0.55 \\
$\alpha_{\rm ML}$ & 1.5, 2.0, 2.5 \\
$\alpha_{\nu}$ & $10^{-4}$, 0.05, 0.1, 0.2 \\
$M/{\rm M}_{\odot}$ & 0.6, 1.0, 1.6, 2.6, 4.4, 7.0 \\
$\log(L/{\rm L}_{\odot})$ & [2.5, 5.0], step: 0.01 \\
\end{tabular}
\end{table}
\section{Linear pulsation models}
\label{sec:StaticEnvelopesAndLinearStability}
We used the codes described in \citet[][and references therein]{Wood_Olivier_2014} to compute static envelope models and to examine their linear pulsation properties. We briefly summarise the procedure (we refer to \citetalias{Trabucchi_etal_2019} for more details) and examine the results, which will be used as reference for the analysis of nonlinear models.
\subsection{Static models}
\label{ssec:StaticModels}
Spherically symmetric static models are obtained by integrating the envelope structure from a layer sufficiently high in the atmosphere down to a rigid\footnote{
Due to the strong density contrast between stellar core and envelope, the amplitude of pulsation becomes negligible near the bottom of the envelope, i.e. the core and envelope are dynamically decoupled. This justifies ignoring the core region, a common approach in modelling stellar pulsation.
} core of very small radius ($0.15\,{\rm R}_{\odot}$) for which mass and output luminosity are provided as boundary conditions. This innermost region, not modelled, would encompass the actual CO core produced during core He-burning stages, as well as the nuclear-burning shells. The whole stellar luminosity is assumed to be generated within the core and to be constant throughout the envelope. To ensure that static models are representative of AGB envelopes, a core mass-luminosity relation (CMLR) based on stellar evolutionary models is assumed, namely the mean value between the two CMLRs presented in \citetalias{Trabucchi_etal_2019} (Eq.~5). Additional boundary conditions are the total mass and the chemical composition, which is assumed to be homogeneous due to efficient convective mixing. Convection is treated by means of the usual mixing length theory, and the mixing length parameter $\alpha_{\rm ML}$ is used as a control parameter to change the effective temperature. Radiative Rosseland mean opacities as a function of density and temperature and fully consistent with the envelope metal mixture [based on a solar-scaled mixture derived from \citet{Caffau_etal_2011}, but with additional changes in CNO abundances as exemplified in Sect.~\ref{ssec:EffectOfChemicalComposition}] are supplied through external tables, including up-to-date molecular contributions \citep{Marigo_Aringer_2009}.
It is worth pointing out that, given the assumptions made, our static models are not totally accurate descriptions of the envelopes of the most massive AGB stars, which are expected to undergo hot bottom burning (HBB). These stars are overluminous with respect to expectations from the CMLR. In particular, for a given mass and luminosity, a smaller core mass should be assumed to describe a HBB star. In \citetalias{Trabucchi_etal_2019}, linear pulsation models were computed with two different CMLRs in order to account, at least partially, for such situations, as well as for the differences associated with the occurrence of thermal pulses. It was found that varying the core mass and radius has little impact on linear pulsation properties. Here, we assume this to be the case for nonlinear models as well. A detailed study of the role of core parameters in nonlinear pulsation, especially in high-mass AGB stars, as well as of the possible impact of HBB, is nonetheless desirable.
\subsection{Stability}
The linear stability analysis involves solving the linearized equations of non-adiabatic, radial oscillations about an equilibrium configuration, which is described by static models. Relevant physical properties are assumed to depend on time in the form $\exp(\omega t)$ with a complex frequency $\omega=\omega_{\rm R}+\mathbf{i}\omega_{\rm I}$, and time-dependent convection is treated as described in \citet{Fox_Wood_1982}. The code searches for solutions (eigenfrequencies and wave functions) by exploring the complex frequency plane starting from a user-defined region. For each envelope model, we compute the five lowest-order solutions, or pulsation modes, described by their period $P_n=2\uppi/\omega_{{\rm I},n}$ and growth rate $GR_n=\exp(2\uppi\omega_{{\rm R},n}/\omega_{{\rm I},n})-1$. The radial order $n=0$ corresponds to the fundamental mode (FM), while $n=1$ is the first overtone mode (1OM), and so on. The growth rate represents the fractional increase in the amplitude of surface radial displacement per pulsation cycle, and is an indication of the degree to which a mode is stable or excited. It is assumed that modes with a positive value of the growth rate are (linearly) excited, while a stable mode has negative growth rate. The mode with the largest growth rate for a given envelope model is identified with the ``dominant'' mode, which is expected to have the strongest signature in the observed light curve of a pulsating star.
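Once the complex eigenfrequencies are in hand, periods and growth rates follow directly from the definitions above; a minimal sketch:
\begin{verbatim}
import numpy as np

def periods_and_growth_rates(omega):
    # omega: complex eigenfrequencies, lowest radial order first
    omega = np.atleast_1d(omega)
    P = 2.0 * np.pi / omega.imag              # P_n = 2 pi / omega_I
    GR = np.expm1(2.0 * np.pi * omega.real / omega.imag)
    return P, GR

# The dominant mode is the one with the largest growth rate:
# n_dom = np.argmax(GR)
\end{verbatim}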
\subsection{Model sequences}
To ensure convergence, it is necessary to define a suitable region in the complex frequency plane where the code will begin to search for solutions. This is less of an issue at relatively low luminosity, where pulsation is almost adiabatic ($|\omega_{\rm R}|\simeq0$), but is crucial in bright models that undergo strongly non-adiabatic pulsation. It is thus convenient to organize the computation of envelopes in ``model sequences'', or ``luminosity sequences''. These are one-parameter families of models in which mass and composition are fixed (as well as $\alpha_{\rm ML}$ and $\alpha_{\nu}$), while the luminosity is increased by small steps. After the first few steps, eigenvalues are searched for around the complex frequency extrapolated from previous models. Convergence issues might still arise, especially towards the high-luminosity end of model sequences. In such cases a few iterations are attempted by expanding the search region, and if convergence is not reached the model is skipped and computation moves towards the next one along the sequence. This can lead to small gaps in the sequences (cf. Sect.~\ref{ssec:TurbulentViscosityLinearPulsation}), that do not affect the results of our study.
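Schematically, the bookkeeping of such a sequence amounts to extrapolating the converged eigenfrequencies in order to seed the next search, and skipping models for which convergence fails; in the sketch below, the eigenvalue solver itself is represented by a placeholder callable:
\begin{verbatim}
import numpy as np

def run_sequence(logL_grid, solve_modes):
    # solve_modes(logL, guess) stands in for the pulsation code;
    # it returns the converged complex eigenfrequencies, or None
    # if convergence fails (the model is then skipped)
    history = []                    # (logL, omega-array) pairs
    for logL in logL_grid:
        if len(history) >= 2:       # linear extrapolation
            (x1, w1), (x2, w2) = history[-2], history[-1]
            guess = w2 + (w2 - w1) * (logL - x2) / (x2 - x1)
        elif history:
            guess = history[-1][1]
        else:
            guess = None            # user-defined starting region
        omega = solve_modes(logL, guess)
        if omega is not None:
            history.append((logL, np.asarray(omega)))
    return history
\end{verbatim}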
Note that the effective temperature changes along a sequence, and so does the core mass (through the CMLR), while at the same time the envelope mass decreases. As long as the latter is not too small, a luminosity sequence is representative of the Hayashi line for given mass, composition, and model parameters. This facilitates the analysis of results as, to some extent, luminosity sequences mimic the average evolution of a star along the AGB. Nonetheless, luminosity sequences should not be confused with evolutionary tracks, as they lack a description of many crucial processes (nuclear reactions, mass-loss, stellar winds, thermal pulses and dredge-up events). Even though model sequences are constructed by varying luminosity, in the context of pulsation they are more conveniently parametrized by the value of the surface radius. In fact, varying any parameter of Table~\ref{tab:ModelParameters} (except for mass) affects pulsation periods indirectly mainly by changing the radius of the model \citepalias[see the discussion in ][]{Trabucchi_etal_2019}. In other words, the relation between period, mass, and radius is independent of other stellar and model parameters to a high degree of approximation. For this reason, throughout this paper, the evolution of pulsation along model sequences is usually examined as a function of radius.
\begin{figure}
\includegraphics[width=.99\columnwidth]{{figures/linear_anueffects_example}.png}
\caption{
Linear periods and growth rates as a function of radius for selected sequences, colour coded according to pulsation modes, showing the effect of varying $\alpha_{\rm ML}$ and $\alpha_{\nu}$. Sequences with $M=1.6\,{\rm M}_{\odot}$ are shown in the top three panels, while the bottom panel shows the case $M=4.4\,{\rm M}_{\odot}$. In the top panel, the line styles indicate the value of $\alpha_{\rm ML}$, while darker tones indicate larger values of $\alpha_{\nu}$ (visible only in the inset panel, which shows an enlarged view of the fundamental mode sequence). The other panels show the effect on growth rates of varying either $\alpha_{\rm ML}$ or $\alpha_{\nu}$, with distinct line styles corresponding to distinct values as indicated in the legend of each panel. The values of fixed parameters are also indicated in each panel.
}
\label{fig:linear_anueffects_example}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{{figures/linearDomPatches_ML}.png}
\caption{
Each panel shows the regions where, in the mass-luminosity plane, distinct pulsation modes are dominant according to linear predictions. Red patches indicate regions where the fundamental mode is dominant, and similarly for the overtone modes, using the same colour code as in Fig.~\ref{fig:linear_anueffects_example} (summarized in the top-left panel). Grey patches represent regions where all modes are stable. The mixing length parameter $\alpha_{\rm ML}$ increases for panels from left to right, while the turbulent viscosity parameter $\alpha_{\nu}$ increases for panels from top to bottom. Note that the scale is logarithmic along the horizontal axis.
}
\label{fig:linearDomPatches_ML}
\end{figure*}
\subsection{Turbulent viscosity and linear pulsation}
\label{ssec:TurbulentViscosityLinearPulsation}
We consider first the effect of turbulent viscosity on linear periods. Since it is found to be similar for all explored values of mass, we show only the case $M=1.6\,{\rm M}_{\odot}$ (top panel of Fig.~\ref{fig:linear_anueffects_example}). There, the period-radius (PR) relation for different combinations of $\alpha_{\rm ML}$ and $\alpha_{\nu}$ is displayed (in this paper, $R$ is the radius where the Rosseland mean optical depth is $\tau_{\rm R}\simeq2/3$). Varying either parameter leads to small changes. Variations of $\alpha_{\rm ML}$ change the radius at which models deviate from their Hayashi line \citepalias[cf. fig.~16 of][]{Trabucchi_etal_2019}, hence the period differences at large radii. In comparison, the effect of changing $\alpha_{\nu}$ over the range of likely values is small, and can be appreciated only in the highly enlarged inset panel of Fig.~\ref{fig:linear_anueffects_example}. We conclude that linear periods are almost independent of turbulent viscosity.
Linear stability is more sensitive to the values of $\alpha_{\rm ML}$ and $\alpha_{\nu}$. Let us consider first the effect of turbulent viscosity on growth rates for the $1.6\,{\rm M}_{\odot}$ sequences, as displayed in the second panel from the top of Fig.~\ref{fig:linear_anueffects_example} (having fixed $\alpha_{\rm ML}=2.0$ for clarity). One can recognize the pattern described in \citetalias{Trabucchi_etal_2019}, with overtone growth rates gradually increasing with radius until their periods reach the acoustic cut-off, at which point they rapidly become stable. The FM growth rate shows only a mild increase at first, and it even decreases for a while, bringing the FM to a temporary stabilization, before growing very rapidly. Qualitatively, this pattern is not altered by varying $\alpha_{\nu}$: the growth rates of all modes are reduced, but the effect is stronger for the overtones, especially at the smallest radii. At $\log(R/{\rm R}_{\odot})\lesssim1.9$, increasing $\alpha_{\nu}$ from $10^{-4}$ to 0.2 leads to a decrease of about 0.02 in the FM growth rate. To achieve the same change for overtone modes it is sufficient to increase $\alpha_{\nu}$ to 0.05. Therefore, increasing turbulent viscosity leads to efficient suppression of overtone modes and favours the dominance of the FM.
Increasing $\alpha_{\rm ML}$ has the effect of decreasing the growth rate at a given $R$ for the 3OM and 4OM, but actually increases the growth rate of the 1OM and FM, while having mixed effects on the 2OM (third panel from the top in Fig.~\ref{fig:linear_anueffects_example}). For large enough values of $\alpha_{\rm ML}$, the temporary stabilization of the FM is lifted, so that its growth rate is essentially flat until it reaches the point where it increases very steeply. The effect is more pronounced in the large-mass models, with FM growth rates increasing monotonically with radius, as exemplified in the bottom panel of Fig.~\ref{fig:linear_anueffects_example}. This panel also shows the relatively small effect on the FM growth rate of an increase of $\alpha_{\nu}$.
A more general picture is given in Fig.~\ref{fig:linearDomPatches_ML} which shows the regions where distinct modes are dominant in the mass-luminosity plane for different combinations of $\alpha_{\rm ML}$ and $\alpha_{\nu}$. Narrow white strips in the FM-dominated areas, at large $M/L$, correspond to the few models skipped due to convergence issues while computing luminosity sequences. The fact that overtone modes are suppressed for large values of $\alpha_{\rm ML}$ and $\alpha_{\nu}$ is very evident. Since overtone modes (at least up to the 3OM) are observed at AGB luminosities \citep{Trabucchi_etal_2017,Yu_etal_2020}, this suggests that relatively low values of $\alpha_{\nu}$ and $\alpha_{\rm ML}$ are appropriate, at least at low $L/M$. At higher luminosities, no combination of $\alpha_{\rm ML}$ and $\alpha_{\nu}$ can be ruled out, as in all the explored cases the fundamental mode eventually becomes dominant. It should also be noted that there is no guarantee that these parameters should have constant values, and there is actually some evidence to the contrary. For instance, \citet{Lebzelter_Wood_2016} used an earlier version of the linear pulsation code employed here to model the AGB stars in the LMC cluster NGC 1846, and found that $\alpha_{\rm ML}$ has to increase with luminosity in order to simultaneously reproduce the observed photometry and variability. \citet{Ireland_Scholz_Wood_2011} modelled the variability of a few nearby Miras using values of $\alpha_{\nu}$ in the range 0.25-0.32, which, in view of our results, suggests that $\alpha_{\nu}$ should also increase with luminosity.
The mild sensitivity of the FM growth rate to turbulent viscosity allows the FM to become dominant, even though weakly excited, at the low-luminosity end of the model sequences when overtone modes are suppressed. Hence linear models predict that the FM can be dominant at low $L/M$, with small growth rates, and again at high luminosities with very large growth rates, while overtone modes are dominant between these two regimes. If the increase in $\alpha_{\nu}$ is such that the 1OM and 2OM become stable, and if the fundamental mode undergoes temporary stabilization, no pulsation occurs in between.
\section{Nonlinear pulsation models}
\label{sec:NonlinearModels}
\subsection{Computation and general features}
\label{ssec:ComputationAndGeneralFeatures}
To probe the nonlinear pulsation of each envelope model along the luminosity sequences we use the 1D hydrodynamic code described in \citet{Wood_1974}, with updates described in \citet{Keller_Wood_2006}. Briefly, the code solves the equations of nonlinear, radial stellar pulsation incorporating a time-dependent mixing length theory of convection for energy transport and damping of pulsation by turbulent viscosity. Pulsation is treated as an initial value problem, and the equations in difference form are solved at a time $t_n$ given a model at time $t_{n-1}=t_n-\Delta t_n$. A static envelope model is used at $t_0$ as initial condition.
At low $L/M$, models tend to be stable. As $L/M$ increases, the models become unstable to pulsation: nonlinear pulsation grows from numerical noise, and increases in amplitude with time until it reaches the limit cycle (full-amplitude regime). Then, normally, pulsation becomes fairly regular. An example of this behaviour is displayed in Fig.~\ref{fig:nlSTS_example1}, showing the time-dependence of several properties computed from the $\log(L/{\rm L}_{\odot})=3.85$ model of the sequence with $M=1.6\,{\rm M}_{\odot}$, $\alpha_{\rm ML}=2.0$ and $\alpha_{\nu}=10^{-4}$. The model reaches full amplitude after about 10 yr, when kinetic energy stops increasing and settles to a more or less constant value (except for the fluctuation over individual pulsation cycles).
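This growth from infinitesimal perturbations towards a stable limit cycle is the generic behaviour of self-excited oscillators. As a purely conceptual illustration (a toy analogue of ours, not the stellar pulsation equations actually solved by the code), the following Python snippet integrates a van der Pol oscillator as an initial value problem: an oscillation seeded by a noise-sized displacement grows and settles onto the limit cycle, qualitatively mirroring the behaviour described above.
\begin{verbatim}
import numpy as np

def integrate_vdp(mu=0.5, x0=1.0e-6, v0=0.0, dt=0.01, n=20000):
    # Self-excited for |x| < 1 and damped for |x| > 1, so the
    # amplitude grows from the tiny initial perturbation and
    # saturates on the limit cycle.
    x, v = x0, v0
    xs = np.empty(n)
    for i in range(n):
        a = mu * (1.0 - x * x) * v - x
        v += a * dt
        x += v * dt
        xs[i] = x
    return xs

xs = integrate_vdp()
print("limit-cycle amplitude:", np.abs(xs[-2000:]).max())
\end{verbatim}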
\begin{landscape}
\begin{figure}
\centering
\includegraphics[height=.42\textheight]{{figures/nlSTS_good_small}.png}
\caption{
Properties of the nonlinear time series at $\log(L/{\rm L}_{\odot})=3.85$ from the sequence with $M=1.6\,{\rm M}_{\odot}$, $\alpha_{\rm ML}=2.0$, $\alpha_{\nu}=10^{-4}$. Panels from top to bottom show the time evolution of kinetic energy, total bolometric luminosity, effective temperature, surface velocity, and radius. In the bottom panel, the radius of several mass zones is shown, as well as the radius of the optical surface (where the Rosseland mean optical depth is $\tau_{\rm R}=2/3$, red line). The shift of the optical surface between different mass zones during a pulsation cycle is evident. Panels on the right column show an enlarged view of the last five pulsation cycles.
}
\label{fig:nlSTS_example1}
\includegraphics[height=.42\textheight]{{figures/nlSTS_flag_small}.png}
\caption{
Same as Fig.~\ref{fig:nlSTS_example1}, but for $\log(L/{\rm L}_{\odot})=4.18$, representing an example of ``flagged'' time series, whose estimated dominant period might be unreliable.
}
\label{fig:nlSTS_example2}
\end{figure}
\end{landscape}
For the first few years, the model in Fig.~\ref{fig:nlSTS_example1} is dominated by 1OM pulsation, in agreement with linear predictions. However, the FM soon becomes unstable giving rise to a multiperiodic behaviour seen as a modulation of the light curve around $15\,{\rm yr}\lesssim t\lesssim 30\,{\rm yr}$. At $t\gtrsim30$ yr the FM is dominant, in contrast with linear results that predict it to be stable. In other words, in this and similar models where the linear results predict stability or even weak instability, dominant FM pulsation (DFMP) in nonlinear models appears earlier (at smaller radii and lower luminosities) than in linear models. This is a common feature in our model sequences (except at the largest masses, where the FM is almost always dominant, see Sect.~\ref{ssec:NonlinearPeriodsAmplitudes}).
In general, time series can be roughly divided in two phases, i.e. linear growth and full-amplitude pulsation. When pulsation involves the FM, models go through an additional ``relaxation phase'' before attaining full amplitude \citep[cf.][their fig.~7]{Lebzelter_Wood_2005}.
This is associated with a readjustment of the envelope structure, as discussed in Sect.~\ref{ssec:EffectsOfLargeAmplitudePulsationOnStellarStructure}, and corresponds to $35\,{\rm yr}\lesssim t\lesssim 45\,{\rm yr}$ in Fig.~\ref{fig:nlSTS_example1}, where it is characterized by a ``bump'' in the trend of kinetic energy. During the relaxation phase, periods and amplitudes are systematically larger than at full amplitude.
We emphasize that the stages of amplitude growth and relaxation are not expected to be observed. These features result from the nonlinear series being initiated from a model in perfect hydrostatic equilibrium, which is not the case for pulsation in real AGB stars.
In total, we computed about 11500 nonlinear time series, corresponding to roughly 120 days of cumulative computer time using 3.6 GHz CPUs.
About 65 per cent (7439) of the time series were found to be stable against nonlinear pulsation, and were discarded in the present study. Note, however, that test calculations suggest that at least some of such stable models can actually pulsate and reach limit amplitude if a small velocity perturbation is applied to the initial model, rather than having pulsation grow from numerical noise. In other words, for a given luminosity sequence (i.e., for fixed stellar and model parameters), instability could emerge earlier (at lower $L$), provided the perturbation is large enough. The results presented here should thus be considered a ``pessimistic'' view, i.e. they provide the largest possible luminosity for the onset of pulsational instability.
While the perturbation approach would probably be more realistic, we chose not to follow it. Indeed, we are not especially interested in the emergence of instability (the transition from a static configuration to a pulsating one), but rather in the transition towards dominant fundamental mode pulsation, which is usually preceded by dominant pulsation in the first overtone mode.
Finally, we point out that the fact that the present models predict a range of parameters within which pulsation is stable does not mean that oscillations are absent, as they might exist by means of a different excitation mechanism. In particular, there is increasing evidence that, at luminosities typical of the upper red giant branch, LPVs undergo oscillations due to stochastic driving, similar to what happens in solar-like oscillators \citep[e.g.][]{Mosser_etal_2013}.
\subsection{Processing of time series}
\label{ssec:ProcessingOfTimeSeries}
We visually examine all time series to identify and discard stable models. For each time series, we also identify the full-amplitude portion, to which we restrict our analysis. We then estimate the mean value and amplitude of variation of relevant global properties (luminosity, surface temperature, radius) and compute their Fourier power spectrum, whose main peak we identify as the dominant period. The radial order of the dominant mode is then assessed by comparison with linear results. Pulsation in nonlinear models is never perfectly sinusoidal, hence all time series display several harmonic signals at frequencies that are integer multiples of the dominant one. Moreover, along all luminosity sequences we find models showing additional peaks corresponding to one or two pulsation modes other than the dominant one. In general, this multiperiodicity is most evident when the dominant mode is about to shift.
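As an illustration of the period-extraction step, the following Python sketch locates the dominant period as the highest peak of a brute-force Fourier power spectrum. It is a minimal example of the technique, with our own choices of function names and frequency grid, and not the actual pipeline used in this work.
\begin{verbatim}
import numpy as np

def dominant_period(time, signal, n_freq=5000):
    # Discrete Fourier power on a trial frequency grid; the
    # brute-force sums also handle unevenly sampled series.
    t = time - time.min()
    f_min = 1.0 / t.max()
    f_max = 0.5 / np.median(np.diff(np.sort(t)))
    freqs = np.linspace(f_min, f_max, n_freq)
    y = signal - signal.mean()
    arg = 2.0 * np.pi * np.outer(freqs, t)
    power = (np.cos(arg) @ y) ** 2 + (np.sin(arg) @ y) ** 2
    return 1.0 / freqs[np.argmax(power)]
\end{verbatim}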
After processing all time series, we perform a quality check to assess the reliability of results. The main issue we encounter concerns time series that are truncated early due to convergence problems. This is problematic when the time series is interrupted before attaining full amplitude, or if it covers too few full-amplitude cycles. This problem is most common in bright models dominated by FM pulsation, and is partially alleviated by retaining part of the relaxation phase of the time series in the processing step. This conservative approach is driven both by the intent of extending as much as possible the time series, and by the difficulty of distinguishing between relaxation and full-amplitude phase. The downside of this approach is that the relaxation phase is predominant in some models, hence the derived period and amplitude will overestimate the full-amplitude values.
During this quality assessment step we identified a subset of time series that we flagged as unreliable. Some of these time series are truncated well before attaining full amplitude, without any clear sign of periodicity. Other cases are more subtle, as they display some degree of periodicity, but the estimated period is inconsistent with that of neighbouring time series of similar mean luminosity along the model sequences. Usually, this is either due to the relaxation phase being predominant, leading to an overestimate of the pulsation period, or to some erratic behaviour that makes regular variability less evident. An example of the latter behaviour is displayed in Fig.~\ref{fig:nlSTS_example2}, showing a time series in which the outer mass zones are pushed to large distances from the nominal surface and follow long ballistic trajectories before falling back.
Overall, such flagged models represent only about 6 per cent of our set, but are in most cases found at large luminosities. They are therefore retained as indicators of upper limits of period and amplitude in order to better characterize this regime.
\begin{figure*}
\includegraphics[width=.99\textwidth]{{figures/nonlinPAmp_M1.6_aMLvar_anuvar}.png}
\caption{
Panels on the left show nonlinear periods (top) and the peak-to-peak amplitude of surface displacement $\Delta R$ (bottom) as a function of surface radius, for different values of $\alpha_{\nu}$ (different symbols) and fixed $\alpha_{\rm ML}$. The same quantities are shown in the panels on the right, except $\alpha_{\nu}$ is fixed and symbols indicate different values of $\alpha_{\rm ML}$. Linear periods are shown as solid lines in the top panels for reference. Radial orders of pulsation are colour-coded as in Fig.~\ref{fig:linear_anueffects_example}. Arrows mark the radius at which fundamental mode pulsation becomes dominant, depending on the value of $\alpha_{\nu}$ (left panels) or $\alpha_{\rm ML}$ (right panels). Vertical lines indicate approximately the radius at which nonlinear fundamental mode periods deviate from linear predictions. Dashed and dotted lines in the bottom panels correspond to $\Delta R=R$ and $\Delta R=0.5\,R$, respectively.
}
\label{fig:nonlinearPAmp_var_aML_anu}
\end{figure*}
\begin{figure*}
\includegraphics[width=.99\textwidth]{{figures/nlP0_onset}.png}
\caption{
Radius $R_{\rm dom,0}$ (left panel) and luminosity $L_{\rm dom,0}$ (right panel) at which the fundamental mode becomes dominant in the present nonlinear calculations, as a function of mass. Different symbols are used to indicate distinct values of the mixing length parameter (downward pointing triangles: $\alpha_{\rm ML}=1.5$; circles: $\alpha_{\rm ML}=2.0$; upward pointing triangles: $\alpha_{\rm ML}=2.5$), while the value of the turbulent viscosity parameter $\alpha_{\nu}$ is colour-coded. The solid red line in the left panel indicates the approximate relation we adopted to describe long-period variability in synthetic stellar population models (see text). Note that the scale is logarithmic along the horizontal axis. For comparison, the red line in the right panel indicates the onset of dominant fundamental mode pulsation according to linear predictions, obtained from Eq.~10 of \citet{Trabucchi_etal_2017} (strictly valid only for $M\lesssim3\,{\rm M}_{\odot}$, and corresponding to $\alpha_{\nu}=0$).
}
\label{fig:nlP0_onset}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{{figures/nonlinearDomPatches_ML_noFlag_stable}.png}
\caption{
Similar to Fig.~\ref{fig:linearDomPatches_ML}, but for the results of nonlinear models.
}
\label{fig:nonlinearDomPatches_ML_noFlag_stable}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{{figures/nlSTS_adjust3}.png}
\caption{
Pulsation in the envelope interior of four models at different stages along the sequence with $M=1.6\,{\rm M}_{\odot}$, $\alpha_{\rm ML}=2.0$, $\alpha_{\nu}=10^{-4}$. Rows from top to bottom show nonlinear results corresponding to static models with $\log(L/{\rm L}_{\odot})=3.36$, 3.77, 3.92, 4.03. Panels in the left column show the radial displacement as a function of time of a few selected mass zones within the nonlinear models. Light solid lines follow each pulsation cycle, while pulsation has been smoothed out in the dark solid lines; the dashed line shows the position of the optical surface. Large ticks along the vertical axes mark the radii of the same mass zones in the corresponding hydrostatic model. Vertical lines indicate the time at which the model has approximately reached (or is relaxing to) full-amplitude pulsation. Panels in the right column show the time-averaged interior mass distribution of nonlinear models (circles) in comparison with the mass profile of the corresponding static envelope models. Empty circles mark the edge of the rigid core. Mean values of luminosity and radius are indicated in the panels on the right.
}
\label{fig:nlSTS_adjust}
\end{figure*}
\section{Results and discussion}
\label{sec:Results}
\subsection{Nonlinear periods and amplitudes}
\label{ssec:NonlinearPeriodsAmplitudes}
The general trend of nonlinear periods $P$ and amplitudes $\Delta R$ as a function of mean stellar radius is illustrated in Fig.~\ref{fig:nonlinearPAmp_var_aML_anu}, which displays the values determined for the $M=1.6\,{\rm M}_{\odot}$ sequences. The stellar radius $R$ is defined by the location where the Rosseland mean optical depth is $\tau_{\rm R}=2/3$, and $\Delta R$ is its total variation in the limit cycle model. As in the linear case, periods depend only weakly (and only at large radii) upon $\alpha_{\rm ML}$, while they are essentially insensitive to the value of $\alpha_{\nu}$. The only exception is the low-luminosity regime of the most massive models ($M\geq4.4\,{\rm M}_{\odot}$), in which the FM period shows a slight dependence upon $\alpha_{\nu}$.
The most striking feature, however, is the stark deviation from linear predictions at $\log(R/{\rm R}_{\odot})\gtrsim2.4$, beyond which the fundamental mode PR relation becomes significantly less steep. In contrast, linear and nonlinear FM periods are in good agreement at smaller radii. In the present nonlinear models, the period of dominant overtone mode pulsation is always found to be in excellent agreement with linear results.
The deviation of the nonlinear FM period from the linear FM period is due to the rearrangement of the stellar envelope structure that occurs for large-amplitude FM pulsation \citep[see Sect.~\ref{ssec:EffectsOfLargeAmplitudePulsationOnStellarStructure}]{YaAri_Tuchman_1996,Lebzelter_Wood_2005,Kamath_etal_2010}. As seen in Fig.~\ref{fig:nonlinearPAmp_var_aML_anu}, the amplitude $\Delta R$ of surface displacement increases monotonically with stellar radius, and the nonlinear periods diverge from linear predictions when $\Delta R\sim0.5\,R$. The increase of amplitude with radius slows down as it reaches $\Delta R\sim R$, which appears to be a limiting value. This will be discussed in more detail in Sect.~\ref{sssec:ChangeOfSlope}.
The effect of varying model parameters $\alpha_{\rm ML}$ and $\alpha_{\nu}$ upon amplitude is not obvious, mostly because the values of $\Delta R$ displayed in the bottom panels of Fig.~\ref{fig:nonlinearPAmp_var_aML_anu} include pulsation from all active modes, and not only the dominant. In general, amplitudes become smaller when $\alpha_{\nu}$ is increased, i.e. when turbulent viscous dissipation is larger. However, this seems to be the case only for the FM, while overtone modes appear to be largely unaffected. A rough quantitative estimate suggests that $\Delta R$ is reduced by 30-40 per cent when $\alpha_{\nu}$ is increased from 0 to 0.2, which is similar to the effect obtained by decreasing $\alpha_{\rm ML}$ from 2.5 to 1.5.
An additional effect of varying mixing length and turbulent viscosity is to shift the stability regime of all modes, in particular the value of radius $R_{\rm dom,0}$ (or luminosity $L_{\rm dom,0}$) at which the FM becomes dominant. This radius is indicated by arrows in Fig.~\ref{fig:nonlinearPAmp_var_aML_anu}, and is shifted towards larger values by increasing $\alpha_{\nu}$. In contrast, $R_{\rm dom,0}$ is little affected by changing $\alpha_{\rm ML}$, at least at small masses. Fig.~\ref{fig:nlP0_onset} gives a more general picture, showing the dependence of $R_{\rm dom,0}$ and $L_{\rm dom,0}$ upon mass, $\alpha_{\nu}$, and $\alpha_{\rm ML}$, and a comparison with the linear prescription for the onset of DFMP from \citetalias{Trabucchi_etal_2019}. At the larger masses, the pattern is complicated by the suppression of overtone modes, which favours a much earlier onset of DFMP.
It is instructive to examine the regions where nonlinear pulsation is dominant in the mass-luminosity plane, which is displayed in Fig.~\ref{fig:nonlinearDomPatches_ML_noFlag_stable} (the nonlinear equivalent of Fig.~\ref{fig:linearDomPatches_ML}). Note that model sequences at each mass are less extended in luminosity than in the case of linear calculations, due to the difficulty of converging nonlinear time series at large luminosities. These issues and the occurrence of flagged models result in the white stripes visible in Fig.~\ref{fig:nonlinearDomPatches_ML_noFlag_stable}. With respect to Fig.~\ref{fig:linearDomPatches_ML}, these stripes are further emphasized by the fact that the mean luminosity of nonlinear time series is slightly different from that of the initial static model, resulting in an uneven luminosity sampling. White gaps should thus not be interpreted as regions of pulsational stability.
There are two main differences with respect to Fig.~\ref{fig:linearDomPatches_ML}.
Firstly, the low-luminosity portion of each sequence is much less populated (grey patches). This may be because the nonlinear models were not run long enough for pulsation to grow from noise. Recalling the remarks made in Sect.~\ref{ssec:ComputationAndGeneralFeatures}, that a small imposed perturbation can lead to continuing pulsation in seemingly stable models, the grey patches in Fig.~\ref{fig:nonlinearDomPatches_ML_noFlag_stable} should not immediately be interpreted as evidence for pulsational stability. Nonetheless, the results in Fig.~\ref{fig:nonlinearDomPatches_ML_noFlag_stable} do suggest that nonlinear pulsation hardly occurs in overtones higher than the second, and that the fundamental mode does not show two dominance regimes in the nonlinear case. Secondly, and most importantly, the transition to DFMP occurs at lower luminosities than in the linear case, and the same is true for overtone modes (when a comparison is possible). This is especially evident at small values of $\alpha_{\rm ML}$ and $\alpha_{\nu}$, with DFMP arising as much as $\sim0.3$ dex earlier in luminosity, an amount that is reduced to $\sim0.05$ dex when $\alpha_{\rm ML}=2.5$ and $\alpha_{\nu}=0.2$.
\subsection{Effects of large-amplitude pulsation on stellar structure}
\label{ssec:EffectsOfLargeAmplitudePulsationOnStellarStructure}
As discussed above, the onset of large-amplitude FM pulsation causes the nonlinear PR relation to deviate from linear predictions. To explain this, we examine how the average envelope structure of nonlinear models is affected by pulsation. We consider four time series at different stages along the luminosity sequence with $M=1.6\,{\rm M}_{\odot}$, $\alpha_{\rm ML}=2.0$, and $\alpha_{\nu}=10^{-4}$. The interior pulsation of these models is displayed in Fig.~\ref{fig:nlSTS_adjust} in terms of the radial motion of selected mass zones. Panel (a.1) displays a relatively small-radius model ($\simeq103\,{\rm R}_{\odot}$) whose pulsation is dominated by the 2OM. Pulsation has low amplitude and is limited to the outermost layers. In comparison, pulsation is significant throughout the outer half of the envelope of the $\sim196\,{\rm R}_{\odot}$ model displayed in panel (b.1). This model is dominated by FM pulsation, but is still in the linear regime. While the amplitude in its interior is much smaller than at the surface, it is large enough to cause a slight readjustment of the envelope structure, which shows a tendency to contract (except for the outermost layers). The structural readjustment can be better appreciated in terms of the mass profile $m(r)$, as displayed in panels (a.2) and (b.2) where the static structure (solid line) is compared with the time-averaged structure of the nonlinear model (circles).
The model shown in panels (c.1) and (c.2) has $R\simeq264\,{\rm R}_{\odot}$ and is well beyond the linear regime. Pulsation occurs with significant amplitude in the outer 60 per cent of the envelope by radius, causing a substantial readjustment of the envelope. In the $R\simeq356\,{\rm R}_{\odot}$ model [panels (d.1) and (d.2)] mass zones are displaced by as much as 20-25 per cent with respect to the static structure.
We conclude that large-amplitude pulsation causes the stellar envelope to develop a steeper density stratification than it would have in hydrostatic equilibrium. In turn, this affects pulsation periods, which become shorter. This behaviour of nonlinear models was first noted by \citet{YaAri_Tuchman_1996}, and can be understood, on a qualitative level, in terms of the period-mean density relation, $P\propto\overline{\rho}^{-1/2}$ \citep[e.g.][]{Cox_TSP_1980}, even though it strictly applies only in hydrostatic equilibrium. The majority of the envelope layers that undergo pulsation have higher density with respect to the static case, and therefore pulsate with a shorter period.
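As a rough quantitative illustration (our own estimate based on this scaling, not a result of the models), the implied period change for a fractional increase of the mean density of the pulsating layers is
\begin{equation*}
\frac{P_{\rm nl}}{P_{\rm lin}} \simeq \left( \frac{\overline{\rho}_{\rm nl}}{\overline{\rho}_{\rm st}} \right)^{-1/2} ,
\end{equation*}
so that, for instance, a $\sim40$ per cent density increase would shorten the period by $\sim15$ per cent ($1.4^{-1/2}\simeq0.85$).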
\begin{figure*}
\includegraphics[width=\textwidth]{{figures/nonlinearPMRfit_M_alpha}.png}
\caption{
Period-radius relations from nonlinear models at fixed mass but varying $\alpha_{\rm ML}$ (panels in the top and middle rows), and at fixed $\alpha_{\rm ML}$ but varying mass (panels in the bottom row). Light gray lines in the background indicate predictions of linear models. No distinction is made between models with different $\alpha_{\nu}$. Cross symbols indicate flagged models. Coloured solid lines are the best fits of Eq.~\ref{eq:nlPMR} for each combination of mass and $\alpha_{\rm ML}$ (colour coding is indicated in each panel).
}
\label{fig:nonlinearPMRfit_M_alpha}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{{figures/fitParam_M_alpha}.png}
\caption{
Best-fit parameters for the period-radius relation (Eq.~\ref{eq:nlPMR}) as a function of mass and mixing length parameter $\alpha_{\rm ML}$ (colour-coded). Note that the scale is logarithmic along the horizontal axis.
}
\label{fig:fitParam_M_alpha}
\end{figure*}
\subsection{Nonlinear period-mass-radius relation of the fundamental mode}
\label{ssec:NonlinearPeriodMassRadiusRelationRelationOfTheFundamentalMode}
\subsubsection{Best-fit PMR}
\label{sssec:bestfitPMR}
One of the key objectives of this study is to deliver an easy-to-use tool to make predictions of pulsation properties as a function of global stellar parameters. A convenient way to do so is by deriving analytical approximations of the period-mass-radius (PMR) relationship followed by pulsation models. These are displayed in Fig.~\ref{fig:nonlinearPMRfit_M_alpha}. Each panel there corresponds to a fixed mass, except for panels in the bottom row in which $\alpha_{\rm ML}$ is fixed instead. Linear results are also shown for reference. It is evident that linear and nonlinear overtone mode periods are essentially equal at all masses. Hence we focus on the fundamental mode, for which we derive a nonlinear PMR in two steps: (1) we examine the PR relation at fixed mass, and identify an adequate best-fit analytic function to represent it; (2) we model the dependence of the best fitting parameters upon mass.
Fig.~\ref{fig:nonlinearPMRfit_M_alpha} clearly shows the two features of the PR relation introduced in Sect.~\ref{ssec:NonlinearPeriodsAmplitudes}, i.e. the deviation from the linear trend and the tendency to approach a constant period at large radii. We model these features by adopting the following functional form:
\begin{equation}\label{eq:nlPMR}
\log(P_0) =
\left\{
\begin{array}{ll}
\log(P_{\rm b}) + \alpha\log(R/R_{\rm b}) & \mbox{if } R<R_{\rm b} \\
\log(P_{\rm b}) + \beta\log(R/R_{\rm b}) & \mbox{if } R_{\rm b}\leq R<R_{\rm s} \\
\log(P_{\rm s}) & \mbox{if } R_{\rm s}\leq R \,,
\end{array}
\right.
\end{equation}
where radii are in solar units and periods in days. In the $\log(P)$~-~$\log(R)$ plane, Eq.~\ref{eq:nlPMR} represents a broken line whose slope changes from $\alpha$ to $\beta=\log(P_{\rm b}/P_{\rm s}) / \log(R_{\rm b}/R_{\rm s})$ at the breaking point ($R_{\rm b}$, $P_{\rm b}$), and becomes zero after the ``saturation'' point ($R_{\rm s}$, $P_{\rm s}$). In other words we describe the period-radius relation as a power law broken at two points, with a saturation at large radii (where period becomes independent of radius).
We constrain the values of the five free parameters $R_{\rm b}$, $P_{\rm b}$, $R_{\rm s}$, $P_{\rm s}$, and $\alpha$ by computing a least-squares fit to the data points obtained from models, and the result is displayed by solid lines in Fig.~\ref{fig:nonlinearPMRfit_M_alpha}. This is done independently for each combination of mass and $\alpha_{\rm ML}$, while no distinction is made between models computed with different $\alpha_{\nu}$. Moreover, we exclude data points corresponding to flagged models, i.e. to time series for which the estimated period is not considered reliable (see Sect.~\ref{ssec:ProcessingOfTimeSeries}). Best-fit parameters as a function of mass and $\alpha_{\rm ML}$ are shown in Fig.~\ref{fig:fitParam_M_alpha}.
Even though there is some dependence upon $\alpha_{\rm ML}$, we do not model it (see Sect.~\ref{sssec:remarks}). The dependence of each of the parameters upon mass can be described once again by a broken power law. Given the coarse sampling, the breaking mass $M_{\rm b}$ cannot be determined accurately by least-squares fitting, but is close to $M_{\rm b}^k=1.0\,{\rm M}_{\odot}$ (for $R_{\rm s}$ and $P_{\rm s}$) and $M_{\rm b}^k=2.6\,{\rm M}_{\odot}$ (for $R_{\rm b}$, $P_{\rm b}$, and $\alpha$). It is convenient to adopt these values. This leaves us with two free parameters to be determined in the adopted functional form:
\begin{equation}\label{eq:nlMdep}
K =
\left\{
\begin{array}{ll}
\log(k_{\rm b}) + \gamma_1^k\log(M/M_{\rm b}^k) & \mbox{if } M<M_{\rm b}^k \\
\log(k_{\rm b}) + \gamma_2^k\log(M/M_{\rm b}^k) & \mbox{if } M_{\rm b}^k\leq M \,,
\end{array}
\right.
\end{equation}
where $K=\log(R_{\rm b})$, $\log(P_{\rm b})$, $\log(R_{\rm s})$, $\log(P_{\rm s})$, or $\alpha$. The hyper-parameters $k_{\rm b}$ and $M_{\rm b}^k$ represent coordinates of breaking points past which the slope changes from $\gamma_1^k$ to $\gamma_2^k$. The best-fit curves are displayed in Fig.~\ref{fig:fitParam_M_alpha}, while the best-fit values of the twenty hyper-parameters derived this way are summarized in Table~\ref{tab:hyperparameters}.
To test our best-fit PMR, we use Eqs.~\ref{eq:nlPMR} and~\ref{eq:nlMdep} to estimate periods from mass and radius for the original data set (once again, excluding flagged models). The resulting periods are within 10 per cent of the original value in more than 80 per cent of cases, as displayed in Fig.~\ref{fig:fitErrors}. The relative error is not found to correlate significantly with any relevant stellar or model parameter.
\begin{table}
\centering
\caption{
Best-fit values of the hyper-parameters of Eq.~\ref{eq:nlMdep}, which describe the dependence upon mass of the parameters of Eq.~\ref{eq:nlPMR}.
}\label{tab:hyperparameters}
\begin{tabular}{c|c|c|c|c}
Parameter & $k_{\rm b}$ & $M_{\rm b}^k$ & $\gamma_1^k$ & $\gamma_2^k$ \\
\hline
$\log(R_{\rm b})$ & $421\,{\rm R}_{\odot}$ & $2.6\,{\rm M}_{\odot}$ & 0.952 & 0.114 \\
$\log(P_{\rm b})$ & $440\,{\rm days}$ & $2.6\,{\rm M}_{\odot}$ & 0.976 & -0.264 \\
$\log(R_{\rm s})$ & $311\,{\rm R}_{\odot}$ & $1.0\,{\rm M}_{\odot}$ & 1.590 & 0.654 \\
$\log(P_{\rm s})$ & $388\,{\rm days}$ & $1.0\,{\rm M}_{\odot}$ & 1.808 & 0.502 \\
$\alpha$ & $49.7$ & $2.6\,{\rm M}_{\odot}$ & -0.279 & 0.544
\end{tabular}
\end{table}
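For convenience, a minimal Python sketch of the resulting PMR relation follows; it simply transcribes Eqs.~\ref{eq:nlPMR} and~\ref{eq:nlMdep} with the hyper-parameters of Table~\ref{tab:hyperparameters} (function and variable names are our own, and the snippet is illustrative rather than part of any released tool).
\begin{verbatim}
import numpy as np

# (k_b, M_b [Msun], gamma_1, gamma_2) for each parameter K,
# transcribed from the table of hyper-parameters
HYPER = {
    "logRb": (421.0, 2.6, 0.952, 0.114),
    "logPb": (440.0, 2.6, 0.976, -0.264),
    "logRs": (311.0, 1.0, 1.590, 0.654),
    "logPs": (388.0, 1.0, 1.808, 0.502),
    "alpha": (49.7, 2.6, -0.279, 0.544),
}

def mass_param(key, mass):
    # K(M): broken power law in mass (the second fitting relation)
    kb, Mb, g1, g2 = HYPER[key]
    g = g1 if mass < Mb else g2
    return np.log10(kb) + g * np.log10(mass / Mb)

def log_P0(mass, radius):
    # piecewise period-radius relation (the first fitting relation):
    # returns log of the FM period (days), M in Msun, R in Rsun
    logRb, logPb = mass_param("logRb", mass), mass_param("logPb", mass)
    logRs, logPs = mass_param("logRs", mass), mass_param("logPs", mass)
    alpha = mass_param("alpha", mass)
    beta = (logPb - logPs) / (logRb - logRs)
    logR = np.log10(radius)
    if logR < logRb:
        return logPb + alpha * (logR - logRb)
    if logR < logRs:
        return logPb + beta * (logR - logRb)
    return logPs  # saturation: period independent of radius
\end{verbatim}
For instance, \texttt{log\_P0(1.6, 300.0)} evaluates the middle branch of Eq.~\ref{eq:nlPMR} and returns $\log P_0\simeq2.58$, i.e. $P_0\simeq380$~d.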
\subsubsection{The change of slope}
\label{sssec:ChangeOfSlope}
An unfortunate consequence of the change in slope of the PR relation is that the nonlinear PMR relation cannot be uniquely inverted to estimate mass and radius from a given period. This can be appreciated in the bottom-row panels of Fig.~\ref{fig:nonlinearPMRfit_M_alpha}, where the period-radius relations of different masses are seen to cross. This makes it somewhat difficult to constrain the global parameters of LPVs from their observed periods. It is reasonable to think that this degeneracy could be lifted by using information on pulsation amplitudes, a detailed understanding of which is thus desirable.
\begin{figure}
\includegraphics[width=\columnwidth]{{figures/fitErrors}.png}
\caption{
Relative error made when computing the fundamental mode period from Eq.~\ref{eq:nlPMR} with respect to the actual values determined from nonlinear time series. Predictions are within a 10 per cent error in more than 80 per cent of the cases.
}
\label{fig:fitErrors}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{{figures/DeathLine}.png}
\caption{
Region of avoidance of nonlinear pulsation (shaded area) in the PR diagram. Circles have the same meaning as in the bottom-row panels of Fig.~\ref{fig:nonlinearPMRfit_M_alpha}. Flagged models are not shown. The dashed line is defined by Eq.~\ref{eq:DeathLine}.
}
\label{fig:DeathLine}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{{figures/nonlinearDR_aML}.png}
\caption{
Similar to the bottom row of Fig.~\ref{fig:nonlinearPMRfit_M_alpha}, but showing the amplitude $\Delta R$ of displacement of the stellar surface during pulsation as a function of mean stellar radius. Dashed lines correspond to $\Delta R = R$. Each panel corresponds to a different value of $\alpha_{\rm ML}$, while mass is colour-coded and no distinction is made for different values of $\alpha_{\nu}$.
}
\label{fig:nonlinearDR_aML}
\end{figure*}
On the other hand, the bending and flattening of the PR relation results in the existence of a region of the PR diagram where nonlinear pulsation is never found, regardless of mass, in contrast with linear results. In other words, nonlinear pulsation periods cannot grow beyond a ``death line'' (the dashed line in Fig.~\ref{fig:DeathLine}) approximately described by
\begin{equation}\label{eq:DeathLine}
\log(P_{\rm 0,max}) = 0.096 + 1.022 \log(R) \,,
\end{equation}
which represents the maximum FM period at a given radius. Given the observed period of an LPV known to pulsate in the FM, Eq.~\ref{eq:DeathLine} can be used to estimate a lower limit to its surface radius.
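As a worked example (our own illustration), inverting Eq.~\ref{eq:DeathLine} gives
\begin{equation*}
\log(R_{\rm min}) = \frac{\log(P) - 0.096}{1.022} \,,
\end{equation*}
so that an observed FM pulsator with $P=500$~d must have $R\gtrsim350\,{\rm R}_{\odot}$.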
It is worth noting that the amplitude $\Delta R$ of surface displacement shows a behaviour somewhat similar to that of periods. As noted in Sect.~\ref{ssec:NonlinearPeriodsAmplitudes}, its growth is limited to values smaller than or approximately equal to the current surface radius. Once this limit is reached, the amplitude grows only at the rate allowed by the condition $\Delta R\simeq R$. This property is shared by all our sequences of nonlinear models, and appears to be independent of the model parameters $\alpha_{\rm ML}$ and $\alpha_{\nu}$, as displayed in Fig.~\ref{fig:nonlinearDR_aML}. The existence of some dissipative process preventing an arbitrarily large growth of pulsation amplitude would not be surprising; however, the reason behind the behaviour displayed by the models, and in particular why the amplitude should be limited to the value of the surface radius, is not clear, and will not be further investigated here.
\subsubsection{Remarks}
\label{sssec:remarks}
Some assumptions made in deriving the analytic PMR deserve discussion. Since flagged models were excluded from the fit, the choice of a flattening PR relation may appear questionable. At large masses, in particular, the large-radii regime is poorly constrained, and the saturation point is essentially determined by the longest non-flagged period in the data set. The sequences with $M=7.0\,{\rm M}_{\odot}$ and $\alpha_{\rm ML}=1.5$ represent an extreme case in which the saturation parameters cannot be constrained at all. Nonetheless, there is evidence in support of our choice. Let us consider, for instance, the $M=1.0\,{\rm M}_{\odot}$ sequence (central panel in the top row of Fig.~\ref{fig:nonlinearPMRfit_M_alpha}). Most of the flagged models are distributed irregularly in the top-right corner of the diagram, but a significant fraction are well consistent with the best-fit curve, even though they were excluded from its derivation. The situation is similar for other values of mass, except the largest ones. One could doubt that the parameters $R_{\rm b}$ and $P_{\rm b}$ obtained for $M\geq4.4\,{\rm M}_{\odot}$ are realistic, given the scarcity of data points to fit, but panels (c) and (d) of Fig.~\ref{fig:fitParam_M_alpha} suggest otherwise. Indeed, the best-fit values at large masses are well consistent with the trend observed at small masses, where these parameters are better constrained. A dedicated study of nonlinear pulsation in massive AGB models is desirable to better understand this.
It is also worth spending a few words on the role of $\alpha_{\rm ML}$ in the PMR relation, which we completely neglected. This approximation is rather good and justified for all parameters of Eq.~\ref{eq:nlPMR}, except the breaking period $P_{\rm b}$. This is immediately clear by looking at the top-left panel of Fig.~\ref{fig:nonlinearPMRfit_M_alpha}: changing $\alpha_{\rm ML}$ from 1.5 to 2.5 leads to a 60 per cent difference in $P_{\rm b}$ for the $1.0\,{\rm M}_{\odot}$ models. The difference is somewhat smaller at larger masses [see also panel (d) of Fig.~\ref{fig:fitParam_M_alpha}], but still significant. The main problem is that the value of $\alpha_{\rm ML}$ is meaningful only in the context of the codes used in this study, hence its inclusion in the PMR relation would only lead to confusion. In principle, one could examine the dependence of the PMR on some other stellar parameter related to $\alpha_{\rm ML}$, such as effective temperature. We explored this possibility and found no description simple enough to justify its inclusion in the best-fit PMR. Disregarding the dependence upon $\alpha_{\rm ML}$ is therefore justified as the best trade-off between accuracy and simplicity of the analytic PMR relation.
\subsection{Effect of chemical composition}
\label{ssec:EffectOfChemicalComposition}
The effects on linear pulsation of independent changes in metallicity and ${\rm C}/{\rm O}$, at fixed mass and radius, have been examined in \citetalias{Trabucchi_etal_2019}. It was shown that, as far as overtone modes are concerned, increasing the metallicity of an O-rich model from $Z=0.006$ to $Z=0.017$ causes linear periods to shorten by less than 1 per cent. In contrast, the same change can decrease the linear FM period by up to $\sim10$ per cent. A similar change can be achieved by varying ${\rm C}/{\rm O}$ from $\sim0.55$ ($\simeq{\rm C}/{\rm O}_{\odot}$) to 3. Note, however, that the latter result was obtained in \citetalias[][]{Trabucchi_etal_2019} (as well as in the present work) by increasing the mass fraction $X_{\rm C}$ of carbon at the expense of helium, while keeping fixed the abundance of oxygen $X_{\rm O}$. While this is qualitatively consistent with evolutionary chemical enrichment, it also means that we cannot completely disentangle the effects of pure ${\rm C}/{\rm O}$ variations from those associated with increased metallicity.
In terms of stability, increasing the metallicity in the linear models was found to lead to a decrease in growth rates of both the FM and 1OM, causing a delay in the onset of dominant pulsation in these modes, while leaving higher overtones almost unaffected. Growth rates decrease with increasing ${\rm C}/{\rm O}$ (except they have a maximum around ${\rm C}/{\rm O}\simeq1$), therefore, in general, the onset of pulsation is delayed in C-rich models with respect to O-rich ones.
\begin{figure}
\includegraphics[width=\columnwidth]{{figures/nonlinPamp_Crich_Zsun}.png}
\caption{
Periods (top panel) and amplitudes (bottom panel) of an O-rich model sequence (blue circles) and for the corresponding C-rich (red circles) and solar-metallicity (green circles) test sequences. All sequences are computed with $M=1.6\,{\rm M}_{\odot}$, $\alpha_{\rm ML}=2.0$, and $\alpha_{\nu}=10^{-4}$. Linear periods are shown in the top panel for reference, using the same colour code. As in Fig.~\ref{fig:nonlinearPAmp_var_aML_anu}, vertical solid lines mark approximately the radius at which nonlinear fundamental mode periods deviate from linear predictions, while the dashed and dotted lines in the bottom panel represent $\Delta R=R$ and $\Delta R=0.5\,R$, respectively.}
\label{fig:nonlinPamp_Crich_Zsun}
\end{figure}
Given the substantial chemical evolution that AGB stars are subject to, and the variety of astrophysical environments they populate, a systematic analysis of the impact of varying chemical composition upon nonlinear pulsation is highly desirable, but is beyond the scope of this work. Here, we probe such possible effects by considering only two test cases corresponding, respectively, to a C-rich composition and to solar metallicity. We compute luminosity sequences for these two cases setting the other parameters to $M=1.6\,{\rm M}_{\odot}$, $\alpha_{\rm ML}=2.0$, and $\alpha_{\nu}=10^{-4}$. The results are displayed in Fig.~\ref{fig:nonlinPamp_Crich_Zsun}.
Qualitatively, we find the same trends identified in the linear case. In the regime where the FM is dominant, C-rich and solar-metallicity models have nonlinear periods about $10-15$ per cent shorter than the reference models (O-rich and with $Z=0.006$). Moreover, the onset of DFMP is delayed (it occurs at larger radii and higher luminosities), and pulsation has a smaller amplitude at a given radius, both features being consistent with the stability predictions from linear models. Another effect of composition variations is the delay of the amplitude ``saturation'', which occurs at $R\gtrsim350\,{\rm R}_{\odot}$ in the reference O-rich sequence, and at $R\gtrsim450\,{\rm R}_{\odot}$ in the C-rich and solar-metallicity sequences. According to the discussion of Sect.~\ref{ssec:NonlinearPeriodMassRadiusRelationRelationOfTheFundamentalMode}, this suggests that the change of slope of the PR relation could also be delayed. However, our test cases do not provide enough evidence to support this possibility. Further investigation is needed to understand this and to make predictions concerning possible observable effects.
\section{Comparison with observations}
\label{sec:ComparisonWithObservations}
\subsection{Simulated variability}
In order to compare our results with observations of variability in LPVs, we follow the same general procedure described in \citet{Trabucchi_etal_2017}. As reference observations we adopt data from the third phase of the Optical Gravitational Lensing Experiment \citep{Udalski_etal_1992}, namely the OGLE-III Catalog of LPVs in the Magellanic Clouds \citep{Soszynski_etal_2009_LMC,Soszynski_etal_2011_SMC}, whose sources we cross-match with near-infrared photometry from the Two Micron All Sky Survey \citep[2MASS, ][]{Cutri_etal_2003,Skrutskie_etal_2006}. We complement this data set with variability information and photometry from the second data release of the \textit{Gaia}\ mission \citep[\textit{Gaia}\ DR2,][]{GaiaCollaboration_2018}, namely the catalog of LPVs by \citet{Mowlavi_etal_2018}. From the latter, we select stars belonging to the LMC and SMC using the criteria of \citet{Mowlavi_etal_2019} (their table~1).
We use the \textsc{trilegal}\ code \citep{Girardi_etal_2005} to produce synthetic models of the AGB population of the Magellanic Clouds. Stellar isochrones used in \textsc{trilegal}\ \citep{Marigo_etal_2017} are obtained from \textsc{parsec}\ evolutionary tracks \citep{Bressan_etal_2012}, and from TP-AGB tracks computed with the \textsc{colibri}\ code \citep{Marigo_etal_2013}. In particular, we use the TP-AGB tracks from the S\_35 set by \citet{Pastorelli_etal_2019} and from the S\_37 set by \citet{Pastorelli_etal_2020}. We refer to these works for further details concerning the setup of the simulations.
We filter the simulation to exclude evolutionary stages other than the AGB, except that we retain simulated stars experiencing late core-He burning (CHeB), representative of red supergiants (RSGs). This choice is motivated by the presence, in the \textit{Gaia}\ DR2 LPV catalogue, of RSG semi-regular variables that display a similar behaviour to AGB LPVs, and for which a comparison between models and observations is interesting. Strictly speaking, the pulsation models presented here are not appropriate to describe pulsation in RSGs, due to the chosen core mass-luminosity relation, which does not reflect the properties of CHeB stars. Keeping that in mind, we assume that the core mass and size have little effect on pulsation properties.
We further restrict the simulation to O-rich stars only, i.e. with ${\rm C}/{\rm O}<1$. Observational data are filtered accordingly following the approach of \citet{Lebzelter_etal_2018} based on the \textit{Gaia}-2MASS diagram. We compute linear periods and growth rates for each simulated star by interpolating in the grid of models from \citetalias{Trabucchi_etal_2019}. Variability properties are computed this way for all pulsation modes from the fundamental to the fourth overtone. However, we focus our analysis on FM pulsation, as nonlinear overtone periods are entirely compatible with linear predictions and in agreement with observations \citep{Trabucchi_etal_2017}. Finally, we estimate nonlinear FM periods for simulated stars by applying the best-fit PMR derived in Sect.~\ref{sssec:bestfitPMR}.
Pulsation periods alone are not enough to compare models with observations, as it is necessary to assess which simulated stars are undergoing DFMP. In linear models, this is easily determined using growth rates. In the nonlinear case, normally, the onset of DFMP corresponds to the radius $R_{\rm dom,0}$ at which dominant pulsation shifts from 1OM to FM. This parameter has already been determined for our model sequences, and in principle one should derive some analytic approximation of its dependence upon stellar and model parameters (similar to Eq.~10 of \citealt{Trabucchi_etal_2017}). Instead, we only describe the mass dependence as
\begin{equation}\label{eq:onset_dom0}
\log(R_{\rm dom,0}) = 2.130 + 1.150 \log(M/{\rm M}_{\odot}) - 0.496 \log(M/{\rm M}_{\odot})^2 \,,
\end{equation}
which is depicted as a red line in Fig.~\ref{fig:nlP0_onset}. There are three main motivations for this choice. Firstly, our aim is that of assessing the accuracy of nonlinear periods predictions, rather than investigating the onset of DFMP. Secondly, since $R_{\rm dom,0}$ depends upon turbulent viscosity, an intermediate step would be necessary involving the calibration of $\alpha_{\nu}$, which is not trivial. Finally, linear models suggest that the onset of DFMP is sensitive to metallicity, a parameter not explored in the present study. These arguments justify the crude approximation given by Eq.~\ref{eq:onset_dom0}. Simulated stars whose current radius is larger than $R_{\rm dom,0}$ are assumed to undergo DFMP.
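For illustration (our own evaluation of Eq.~\ref{eq:onset_dom0}), at $M=1.6\,{\rm M}_{\odot}$ one has $\log(R_{\rm dom,0}) = 2.130 + 1.150\times0.204 - 0.496\times0.204^2 \simeq 2.34$, so that simulated stars of this mass are assumed to undergo DFMP once their radius exceeds $R_{\rm dom,0}\simeq220\,{\rm R}_{\odot}$.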
\begin{figure*}
\includegraphics[width=.9\textwidth]{{figures/comparisonPLD_WJK_smc}.png}
\includegraphics[width=.9\textwidth]{{figures/comparisonPLD_WRP_smc}.png}
\caption{
Period-luminosity diagrams of LPVs in the SMC using the Wesenheit indices $W_{\scaleto{\rm J,K}{4.5pt}}$ (top row) and $W_{\scaleto{\rm B,R}{4.5pt}}$ (bottom row). Small dots are observations from OGLE-III (light grey dots) and \textit{Gaia}\ DR2 (dark grey dots). The approximate location of PL sequences A$^{\prime}$, A, B, C$^{\prime}$, and C is indicated by solid lines, while dashed lines correspond to the ``death line'' displayed in Fig.~\ref{fig:DeathLine}. Simulated fundamental mode LPVs based on linear predictions from \citetalias{Trabucchi_etal_2019} or on nonlinear predictions from this work are shown, respectively, in panels on the central and right column. In both cases, they are colour-coded by current mass.
}
\label{fig:comparisonPLD_SMC}
\end{figure*}
\subsection{Period-luminosity diagrams}
To construct period-luminosity diagrams (PLD), we make use of the Wesenheit indices \citep[see e.g.][]{Madore_1982,Lebzelter_etal_2019}
\begin{equation}\label{eq:wjk}
W_{\scaleto{\rm J,K}{4.5pt}} = K_{\rm s} - 0.686 (J-K_{\rm s}) \,,
\end{equation}
\begin{equation}\label{eq:wrp}
W_{\scaleto{\rm B,R}{4.5pt}} = G_{\scaleto{\rm RP}{4.5pt}} - 1.3 (G_{\scaleto{\rm BP}{4.5pt}} - G_{\scaleto{\rm RP}{4.5pt}}) \,,
\end{equation}
which are obtained with 2MASS photometry in the $J$, $K_{\rm s}$ filters and \textit{Gaia}\ photometry in the $G_{\scaleto{\rm BP}{4.5pt}}$, $G_{\scaleto{\rm RP}{4.5pt}}$ filters. The PLDs are shown in Figs.~\ref{fig:comparisonPLD_SMC} and~\ref{fig:comparisonPLD_LMC} for the SMC and LMC, respectively. Synthetic photometry for the simulation is obtained by means of tabulated bolometric corrections as explained in \citet{Pastorelli_etal_2019}.
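Since these indices enter the comparison repeatedly, we note that they are trivial to evaluate; a minimal Python sketch (ours, purely for illustration) is:
\begin{verbatim}
def wesenheit_jk(J, Ks):
    # reddening-free index from 2MASS J and Ks magnitudes
    return Ks - 0.686 * (J - Ks)

def wesenheit_bprp(G_BP, G_RP):
    # reddening-free index from Gaia BP and RP magnitudes
    return G_RP - 1.3 * (G_BP - G_RP)
\end{verbatim}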
Periods obtained from the newly derived fundamental mode PMR are in good agreement with observations and show a substantial improvement with respect to linear predictions. Owing to the bending of the nonlinear PR relation, FM periods cannot increase arbitrarily past the observed PL sequence C, in contrast with what happens when linear models are employed. Despite being a crude approximation, the criterion determined from nonlinear models for the onset of DFMP also improves upon the approach based on linear growth rates. By capturing the earlier onset at low masses, it results in a more populated faint tail of the PL sequence, in better agreement with observations. The onset of DFMP is responsible for the left edge of PL sequence C at $W_{\scaleto{\rm J,K}{4.5pt}}\gtrsim8$, which we find to be fairly well reproduced by models.
The long-period edge of sequence C, on the other hand, is determined by the bending and saturation of the nonlinear PR relation. Eq.~\ref{eq:DeathLine}, describing the ``death line'' in the PR diagram, can be converted into a relation describing the maximum period of the FM at given $W_{\scaleto{\rm J,K}{4.5pt}}$. This is depicted by dashed lines in Figs.~\ref{fig:comparisonPLD_SMC} and~\ref{fig:comparisonPLD_LMC}, and reproduces remarkably well the slope and the right edge of PL sequence C. In this respect, PL sequence C is intrinsically different from other PL sequences, whose long-period edge is determined either by the shift of dominant pulsation to a lower-order mode, or by overtone pulsation becoming stable as the oscillation frequency becomes equal to the acoustic cut-off frequency at the surface \citepalias[][]{Trabucchi_etal_2019}. Note that stars shown in Figs.~\ref{fig:comparisonPLD_SMC} and~\ref{fig:comparisonPLD_LMC} are optically visible. Stars in the final high mass-loss rate superwind phase are detected as infrared sources only, and their periods can fall to the long-period side of the optical sequence C \citep[e.g.][]{Wood_2015}. These stars probably coincide with those we tried to model here but for which no stable limit cycle could be found due to the extremely large amplitude of pulsation (the models that were flagged in Fig.~\ref{fig:nonlinearPMRfit_M_alpha}).
\begin{figure*}
\includegraphics[width=.9\textwidth]{{figures/comparisonPLD_WJK_lmc}.png}
\includegraphics[width=.9\textwidth]{{figures/comparisonPLD_WRP_lmc}.png}
\caption{
Similar to Fig.~\ref{fig:comparisonPLD_SMC}, but for the LMC.
}
\label{fig:comparisonPLD_LMC}
\end{figure*}
Based on theoretical arguments, \citet{Wood_2015} showed how stellar mass decreases towards longer periods across PL sequences, at fixed luminosity. This effect, reproduced by our PMR relation, was first observed by \citet{Feast_etal_1989} and \citet{Hughes_Wood_1990} for Miras on the PL sequence C, and recently investigated in more detail by \citet{Lebzelter_etal_2019}. At $W_{\scaleto{\rm J,K}{4.5pt}}\simeq8.5$, mass decreases from $\sim4\,{\rm M}_{\odot}$ to $\sim2\,{\rm M}_{\odot}$ across PL sequence C. At fainter magnitudes this is less evident, as stars with $M\simeq0.8\,{\rm M}_{\odot}$ tend to be brought back towards the left edge of the PL sequence (see Sect.~\ref{ssec:evotracks}).
The offset is especially large for RSGs, predicted at $W_{\scaleto{\rm J,K}{4.5pt}}\simeq7.5$ in the region between PL sequences C$^{\prime}$ and C, in agreement with observations \citep{Lebzelter_etal_2019}. The location of RSGs is also determined by an earlier onset of DFMP towards large masses (see Fig.~\ref{fig:nlP0_onset}).
\subsection{Evolutionary tracks}
\label{ssec:evotracks}
To better understand the implications of nonlinear PMR relation, it is instructive to follow the evolutionary path of FM pulsation in the period-luminosity diagram. To do so, we use Eq.~\ref{eq:nlPMR} to compute FM periods along AGB evolutionary tracks computed with the \textsc{colibri}\ code. Bolometric luminosity is converted to magnitudes in the 2MASS passbands using \textsc{trilegal}, and the result is displayed on top of observations in Fig.~\ref{fig:evoPLD_WJK_LMC}. We show tracks corresponding to initial metallicity $Z_{\rm i}=0.006$\footnote{
Envelope metallicity increases during the TP-AGB, but it remains safely below $Z\simeq0.006$ during the O-rich phase for all evolutionary tracks used here, being thus consistent with the value used for pulsation models.
} and to a few different values of initial mass, $M_{\rm i}=1.0, 2.0, 3.4$, and $5.4\,{\rm M}_{\odot}$. At the beginning of each track, pulsation is linear, and the path followed by the FM period is less steep than the observed PL sequences, hence it crosses them. The PR relation bends after reaching the breaking radius $R_{\rm b}$, which is mirrored by a change of slope rather evident in the $5.4\,{\rm M}_{\odot}$ evolutionary track. At this point, the path in the PLD becomes steeper than the PL sequence. Note that a star can go back to the linear regime if it contracts after a thermal pulse.
\begin{figure}
\includegraphics[width=\columnwidth]{{figures/evoPLD_WJK_LMC}.png}
\caption{
Period-luminosity diagram of LPVs in the LMC in the form ($W_{\scaleto{\rm J,K}{4.5pt}}$, $P$). Curves are obtained from \textsc{colibri}\ AGB evolutionary tracks, and represent the evolutionary path of the fundamental mode in the PLD for different initial masses (as labelled). Circles indicate stages of quiescent H-shell burning. Black lines and white circles correspond to stages at which the fundamental mode is not dominant. After it becomes dominant, lines and circles are coloured according to surface composition (blue: O-rich; red: C-rich). All tracks have initial metallicity $Z_{\rm i}=0.006$.
}
\label{fig:evoPLD_WJK_LMC}
\end{figure}
If the tracks reach the saturation radius, the path becomes vertical, as seen in the brightest portion of the $2.0\,{\rm M}_{\odot}$ track. Note that the latter is representative of a star that becomes C-rich, hence the predicted periods are not necessarily realistic, even though they appear to be rather compatible with observations. This track shows rather well the fact that, due to the bending and saturation of the PR relation, a star that has crossed the PL sequence C and reached its long-period edge can be brought back near the left edge at brighter magnitudes.
\section{Summary and conclusions}
\label{sec:Conclusions}
We presented the results from the computation of a set of nonlinear pulsation models of O-rich AGB stars, widely covering the relevant range of stellar masses and luminosities. We found that nonlinear fundamental mode periods at large amplitudes are systematically shorter than in linear calculations, and are in good agreement with observations. The overtone mode periods were found to be entirely compatible with linear predictions. As long as the amplitude of pulsation (in terms of radial displacement of the optical surface) is not larger than $\sim50$ per cent of the mean radius, the nonlinear fundamental mode period is also consistent with predictions from linear models. At larger amplitudes, the linear approximation breaks down and the slope of the period-radius relation decreases abruptly with respect to the linear case. This is explained in terms of a substantial structural readjustment of the stellar envelope induced by large-amplitude pulsation. With respect to corresponding static models, the mean density increases in the pulsating layers, causing the period to decrease. At the largest radii, the fundamental mode period becomes independent of radius.
We modelled our results by means of an analytic period-mass-radius relation, which we tested against observations of long-period variables in the Magellanic Clouds using appropriate synthetic stellar population models. By capturing the earlier onset of dominant fundamental mode pulsation and its shallower dependence on stellar radius, nonlinear predictions result in much better agreement with observations than linear models. In particular, they are able to reproduce the observed PL sequence C, where Mira variables are found. The bending and saturation of the nonlinear period-radius relation is the origin of the long-period edge of PL sequence C, which linear models fail to describe.
By delivering, for the first time, a way to accurately predict the period of fundamental mode long-period variables, this work represents an important achievement in the study of luminous red giant stars. The present models will help to fill the gap between theory and observation that has affected several subjects, from the study of mass loss on the AGB to the exploitation of Miras as standard candles. Our results will be especially important in view of the large wealth of stellar variability data expected from ongoing and future surveys such as \textit{Gaia}, LSST, PLATO, and JWST.
A number of questions remain open, some of which were only briefly addressed in this work. In particular, our study suggests that chemical composition might have a significant impact on pulsation properties, and requires systematic investigation. Pulsational stability as a function of global stellar parameters also needs to be better constrained, a task that would necessarily involve the investigation of multiperiodicity in nonlinear models. We explored the role of turbulent viscosity in linear and nonlinear pulsation, and found it to affect periods to a negligible level in most cases. In contrast, as expected, the amplitude and onset of pulsation are rather sensitive to the value employed for the turbulent viscosity parameter, a calibration of which is highly desirable. This is especially important as it represents a key step for the modelling of the light curves and photometric amplitudes of long-period variables. The ability to accurately predict such features would require great effort due to the necessity of modelling the complicated hydrodynamics of the atmospheric layers, including radiative transfer, chemistry and dust formation. At the same time, it would be highly rewarding, as it would provide additional tools to characterize in detail the properties of long-period variables and their hosting stellar populations.
\section*{Data availability}
The nonlinear time series generated in this research will be made available on \url{http://starkey.astro.unipd.it/} and on VizieR. The catalogues of observed LPVs underlying this article are available from \citet{Soszynski_etal_2009_LMC,Soszynski_etal_2011_SMC} at \url{http://www.astrouw.edu.pl/ogle/ogle3/OIII-CVS/}, and from \citet{Mowlavi_etal_2018} at \url{https://gea.esac.esa.int/archive/}.
\section*{Acknowledgements}
M.T. and N.M. acknowledge the support provided by the Swiss National Science Foundation through grant Nr. 188697.
We acknowledge the support from the ERC Consolidator Grant funding scheme ({\em project STARKEY}, G.A. n. 615604).
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
This publication makes use of data from the \mbox{OGLE-III} Catalog of Variable Stars.
This research made use of \textsc{NumPy} \citep{numpy2020}, \textsc{SciPy} \citep{SciPy}, \textsc{matplotlib}, a Python library for publication quality graphics \citep{matplotlib}, and \textsc{Astropy}, a community-developed core Python package for Astronomy \citep{astropy2018}.
\bibliographystyle{mnras}
\section{}
\begin{abstract}
The recent $x> 1$ (e,e') and correlation experiments at
momentum transfer $Q^2 \ge 2 \,\mbox{GeV}^2$ confirm the presence of short-range correlations (SRCs) in nuclei, which are mostly built of nucleons.
Recently we evaluated, in a model-independent way, the dominant photon contribution to the nuclear structure. Taking this effect into account and using a definition of x consistent with the exact kinematics of eA scattering (with exact sum rules) results in a significant reduction of the
$R_A(x,Q^2)=F_{2A}(x,Q^2)/F_{2N}(x,Q^2)$ ratio, which explains $\sim $ 50\% of the EMC effect for $x\le$ 0.55 where Fermi motion effects are small. The remaining part of the EMC effect at $x\ge 0.5$ is consistent with the dominance of the contribution of SRCs. Implications for the extraction of the $F_{2n}/F_{2p}$ ratio are discussed. The smallness of the non-nucleonic degrees of freedom in nuclei matches well the recent observation of a two-solar-mass neutron star, while large pn SRCs lead to an enhancement of the neutron star cooling rate for kT$\le$ 0.01 MeV.
\end{abstract}
\maketitle
\section{Introduction}
To resolve the microscopic structure of nuclei one needs to use high-energy, high-momentum-transfer probes. Otherwise the high-frequency components of the nuclear wave function enter only as renormalization/cutoff parameters in descriptions of low-energy phenomena, as, for example, in chiral effective field theory.
The key questions which can be addressed by using high-energy processes, and which are relevant for the description of high-density cold nuclear matter at neutron star densities, are: (i) can nucleons be good quasiparticles for the description of high-energy processes off nuclei; (ii) does the notion of momentum distributions in nuclei make sense for $k\ge m_{\pi}$; (iii) what are the probability and structure of the short-range/high-momentum correlations in nuclei; (iv) what are the most important non-nucleonic degrees of freedom in nuclei; and (v) what is the microscopic origin of intermediate- and short-range nuclear forces.
Below we summarize the recent progress in studies of hard nuclear processes, which allows us to address several of these questions.
\section{Recent progress in the studies of the SRCs in nuclei}
The singular nature of the $NN$ interaction at large momenta/small internucleon distances leads to a universal structure of SRCs and to the prediction of the scaling of the ratios of the cross sections of $x > 1$ scattering at sufficiently large $Q^2 \ge 2 \,\mbox{GeV}^2$ \cite{Frankfurt:1981mk}. In particular, for $1+k_F/m_N < x < 2$:
\begin{equation}
R_A(x,Q^2) = \frac{2\sigma(eA \to e + X)}{A\sigma(e\,{}^2{\rm H} \to e + X)} = a_2(A).
\end{equation}
Here $a_2(A)$ has the meaning of the relative probability of the two nucleon SRCs per nucleon in a nucleus and in the deuteron.
The first evidence for such scaling of the ratios was reported in \cite{Frankfurt:1988nt}. The extensive studies were performed using various data taken at SLAC in \cite{Frankfurt:1993sp}.
The experiments performed at Jlab allowed the exploration of the scaling of the ratios within a single experiment.
In \cite{Egiyan:2003vg,Egiyan:2005hs} the scaling relative to $^3$He was established. Very recently the results of the extensive study of the nucleus/deuteron ratios were reported in \cite{Fomin:2011ng} allowing a high precision determination of the relative probability of the two nucleon SRCs in nuclei and the deuteron.
The results of \cite{Fomin:2011ng} are in a good agreement with the early analysis of \cite{Frankfurt:1993sp}, see Fig.~1.
Several theoretical observations are important for the interpretation of the scaling ratios: (a) The invariant energy of the produced system for the interaction off the deuteron is small -- $W-m_{^2H} \le \mbox{250 MeV}$ -- so production of inelastic final states is strongly suppressed. Correspondingly, scattering off exotic configurations, like hidden-color configurations which decay into excited baryon states, $\Delta$'s, etc., is strongly suppressed in the discussed kinematics. (b) Closure is valid for the final state interaction of the nucleons of the SRC and the residual nucleus system. Only the f.s.i.\ between the nucleons of the SRC contributes to the total (e,e') cross section \cite{Frankfurt:1993sp,Frankfurt:2008zv}. Since this interaction is the same for light and heavy nuclei, it does not modify the scaling of the ratios. (c) In the limit of large $Q^2$ the cross section is expressed through the light-cone projection of the nuclear density matrix, $\rho_A^N(\alpha)$, that is, the integral over all components of the interacting nucleon four-momentum except $\alpha\equiv p_-/(m_A/A)$, where $p_- =p_0-(\vec{p}\cdot \vec{q})/ |\vec{q}|$. The ratio of the cross sections reaches a plateau at $x(Q^2)$ corresponding to the scattering off a nucleon with minimal momentum $\sim k_F$, indicating that the dominance of the two nucleon SRC sets in just above the Fermi surface. A further confirmation of the dominance of two nucleon correlations comes from the observation \cite{Frankfurt:1993sp} of precocious scaling of the ratios plotted as a function of the minimal $\alpha$ for the scattering off a two nucleon SRC at rest (the Fermi motion of the pair practically cancels out in such a ratio) \cite{Frankfurt:1988nt}: $\alpha_{tn}= 2 -\frac{q_0-q_3}{2 m_N}\left(1+ \frac{\sqrt{W^2-4m_N^2}}{W}\right)$, where $W^2=4m_N^2+4 q_0m_N-Q^2$.
The precocious $\alpha_{tn} $ scaling indicates that $R_A$ is equal to the ratio of the light cone density matrices of the deuteron and nucleus. It also strongly indicates that SRCs of baryon charge two are predominantly built of two nucleons rather than some exotic states.
\begin{figure}
\includegraphics[height=.3\textheight]{a2plot}
\caption{Comparison of the first determination of $a_2(A)$ based on the analysis of the SLAC data \cite{Frankfurt:1993sp} with the most recent Jlab measurements \cite{Fomin:2011ng}. }
\end{figure}
To probe directly the structure of the SRCs it is advantageous to study the decay of an SRC after one nucleon of the SRC is removed, which is described by the nuclear decay function \cite{Frankfurt:1981mk,Frankfurt:1988nt}. In the two nucleon SRC approximation the decay function is simply expressed through the density matrix, as the removal of one of the nucleons of the correlation results in the release of the second nucleon with probability one. A series of experiments was performed at BNL and Jlab which studied (p,2p) and (e,e'p) reactions in kinematics where a fast proton of the nucleus is knocked out; see the review and references in \cite{Subedi:2008zz,Frankfurt:2008zv}. In spite of very different kinematics -- removal of a forward moving nucleon in the $^{12}$C(p,2p) case and a backward moving proton in the $^{12}$C(e,e'p) case, different probes, and different momentum transfers $-t \approx \mbox{5 GeV}^2 $ and $Q^2=\mbox{2 GeV}^2$ -- the same neutron emission pattern is observed: the neutron is emitted with a probability $\sim 90\%$ in the direction approximately opposite to the initial proton direction, with the correlation setting in very close to $k_F(C) \sim $ 220 MeV/c. The Jlab experiment observed in the same kinematics both proton and neutron emission in coincidence
with $e'p $ and found the probability of the proton emission to be about 1/9 of the neutron probability. Hence the data confirm our theoretical expectation that removal of a fast nucleon is practically always associated
with the emission of a nucleon in the opposite direction, with the SRC contribution providing the dominant component of the nuclear wave function starting close to the Fermi momentum. The large pn/pp ratio also confirms the standard expectation of nuclear physics that short-range interactions are much stronger in the isospin-zero channel than in the isospin-one channel. Saturation of the probability provides an independent confirmation of the conclusion that, at least up to momenta $ \sim 500 \div 600 $ MeV/c, SRCs predominantly consist of two nucleons.
\section{New developments in the studies of the EMC effect}
The deep inelastic scattering off nuclei can be described in the impulse approximation as the convolution of the LC density matrix and the elementary cross section:
\begin{equation}
F_{2A}(x, Q^2) = \int_0^A {d\alpha\over \alpha} \rho_A^N(\alpha) F_{2N}({x\over \alpha}, Q^2),
\label{conv}
\end{equation}
where $x=AQ^2/2q_0m_A$ and
$\rho_A^N(\alpha)$ satisfies the baryon charge conservation sum rule:
$\int_0^A {d\alpha\over \alpha} \rho_A^N(\alpha)=A$. If the nucleus in the fast frame consists only of nucleons, $\rho_A^N(\alpha)$ also satisfies the momentum sum rule:
$\int_0^A \alpha{d\alpha\over \alpha} \rho_A^N(\alpha)=A$. Together these sum rules imply that in the many nucleon approximation the EMC ratio $R_A(x,Q^2)= F_{2A}(x,Q^2) /F_{2N}(x,Q^2)$ should be slightly below one for a range of x below $x_0=2/(1+n)$, where $F_{2N}(x)\propto (1-x)^n$, and exceed one for $x> x_0$. A significant deviation of the EMC ratio from these expectations clearly indicates the presence of non-nucleonic degrees of freedom in nuclei.
We have demonstrated recently \cite{Frankfurt:2010cb} that two effects should be taken into account before considering modifications of the many nucleon approximation for the nuclear wave function: the presence of the Coulomb field of a fast nucleus and the difference between the definition of the Bjorken variable in the theoretical expression (Eq.~\ref{conv}), $x=AQ^2/2q_0m_A$, and the one used in the experimental papers,
$x_p=Q^2/2q_0m_p$.
Atomic nuclei carry electric charge. Therefore the Coulomb field of a nucleus is a
fundamental property of the nucleus in its rest frame. Under the Lorentz transformation
to the frame where the nucleus has a large momentum, the rest frame nucleus Coulomb field is
transformed into the field of equivalent photons. This phenomenon is well known as
Fermi - Weizsacker - Williams approximation for the wave function of a rapid projectile with
nonzero electric charge. Application of this technique allows one to evaluate the role of photon degrees of freedom
in the partonic nucleus structure. In particular we find for an additional
(to the case of the system of free nucleons)
light-cone (LC) fraction of the nucleus momentum carried by photons \cite{Frankfurt:2010cb} \footnote{This formula corrects the corresponding expression of Ref.~\cite{Frankfurt:2010cb}, where the numerical coefficient was overestimated by a factor $\sim $ 2.6.}
\begin{equation}
\lambda_{\gamma}=\int_0^1\, dx xP_{\gamma}(x,Q^2)={\alpha_{em}} {2\over \sqrt{3\pi}} {Z(Z-1)\over A}{1 \over m_NR_A}.
\label{ww7}
\end{equation}
The leading effect $\propto Z^2$ is due to the coherent
emission by the nucleus as a whole, and the term $\propto Z$ is due to the subtraction of the incoherent emission of photons by individual protons. Numerically, $\lambda_{\gamma}(^{12}{\rm C}) = 0.11\%$; $\lambda_{\gamma}(^{56}{\rm Fe}) = 0.35\%$; $\lambda_{\gamma}(^{197}{\rm Au}) = 0.65\%$.
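As a rough numerical cross-check of Eq.~\ref{ww7}, the short Python sketch below evaluates $\lambda_{\gamma}$ for the three nuclei quoted above. The radius convention $R_A \approx 1.1\, A^{1/3}$ fm and the constants are our assumptions rather than inputs taken from the analysis, so the sketch reproduces the quoted values only up to the radius-convention uncertainty (tens of percent for the heaviest nuclei):
\begin{verbatim}
# Hedged numerical estimate of lambda_gamma (Eq. ww7); the radius
# convention R_A = 1.1 A^(1/3) fm is an assumption made here.
alpha_em, hbarc, m_N = 1/137.036, 197.327, 938.9   # (MeV fm, MeV)

def lambda_gamma(Z, A):
    R_A = 1.1 * A**(1/3)                           # fm (assumed)
    return (alpha_em * 2/(3*3.141592653589793)**0.5
            * Z*(Z-1)/A * hbarc/(m_N*R_A))

for name, Z, A in [("12C", 6, 12), ("56Fe", 26, 56), ("197Au", 79, 197)]:
    print(name, "{:.2f}%".format(100*lambda_gamma(Z, A)))
# -> roughly 0.10%, 0.28%, 0.49%
\end{verbatim}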
The presence of the dynamic photon field modifies the parton momentum sum rule:
\begin{equation}
\int_0^A \left[(1/A)(xV_{A}(x, Q^2) +xS_{A}(x, Q^2) +xG_{A}(x,Q^2))\right]dx=1- \lambda_{\gamma}.
\label{sumrule}
\end{equation}
This effect has to be taken into account in analyses of the nuclear pdfs. In particular, it leads to a $\sim 1.3 \% $ reduction of the momentum fraction carried by gluons in heavy nuclei, since the latter is determined using Eq.~\ref{sumrule} and the $F_{2A}/F_{2^2H}$ data.
The depletion of the LC fraction carried by nucleons leads to a significant EMC effect for $A\ge 50$:
\begin{equation}
R_A(x,Q^2) -1 = - \lambda_{\gamma}xF_{N}^{\prime}(x,Q^2)/ F_{N}(x,Q^2).
\label{RA}
\end{equation}
\begin{figure}
\includegraphics[height=.35\textheight]{emcplotnew4}
\caption{Solid lines are results of calculations taking into account the Coulomb effect and the effect of the proper definition of x. In Fig.~2b the dashed line is the contribution of the hadronic EMC effect due to SRCs, normalized for large A, with the A-dependence given by $a_2(A)$ from \cite{Fomin:2011ng}. The dashed lines in Fig.~2a show the result of adding the effect of Fermi motion.
The data are from the SLAC and Jlab experiments \cite{Gomez:1993ri,Seely:2009gt}. }
\end{figure}
Correcting for the difference between $x_p$, used in the experimental papers, and $x$, which enters the convolution expression Eq.~\ref{conv} \cite{Frankfurt:1981mk}, also leads to an EMC-like effect for $R_A$.
It can be taken into account by the substitution
$\lambda_{\gamma} \to \lambda_{\gamma} + (\epsilon_A- \epsilon_{^2H} - (m_n-m_p) (N-Z)/A)/m_p$
in Eq.~\ref{RA}.
The Coulomb field effect is much smaller than the x-rescaling for $A\le 12$, while for $A\sim 200$ it is as large as the x-rescaling effect. Combined, these two effects lead to the solid curves in Fig.~2. One can see that these two {\it model independent} effects explain $\approx 50\%$ of the EMC effect for $x\le 0.5$ where Fermi motion effects are small. For $x > 0.5$, where the Fermi motion contribution becomes large, an additional effect of modification of the hadronic component of the nucleus wave function is necessary, mostly to compensate the Fermi motion effect. The ``hadronic EMC effect'' is $\approx$ 4\% for $A\ge 50$ at x=0.5 and grows rapidly with a further increase of x: $\sim $ 15\% for x=0.6, $\sim$ 25\% for x=0.7. This steep $x$-dependence
is consistent
with the expectation of the color screening model that the maximal suppression $\sim 20\%$ occurs for very large $x$, where point-like configurations dominate in $F_{2N}$ \cite{Frankfurt:1985cv,Ciofi}.
It was demonstrated in \cite{Frankfurt:1985cv,Ciofi} that the deformation of a bound nucleon is proportional to the nucleon's kinetic energy (the nucleon off-shellness). Hence the EMC effect is proportional to the average nucleon kinetic energy, which is dominated by the contribution of the SRCs.
In Fig.~2b we plot $1-R_A(x=0.5)$, since Fermi motion does not contribute for this x \cite{Frankfurt:1981mk}. One can see that the A-dependence of the ``extra'' EMC effect for $F_{2A}/F_{2^2H}$ is indeed roughly consistent with the measured A-dependence of $a_2(A)-1$ (the same is true for x=0.6, 0.7).
Our analysis indicates that the non-nucleonic components contribute significantly only in nucleons with $x\ge 0.5$ quarks. Such configurations occur with a very small probability $\sim 2\%$. Hence we conclude that the probability of the exotic component
relevant for the large-x EMC effect is $\sim 0.2\%$. Since the residual effect for smaller $x$ is $\le 1\div 2 \%$, we conclude that overall the probability of the exotic component in nuclei is $\le 2\%$. This is consistent with the results of the analysis described in Sect.~2 that SRCs are dominated by the nucleonic degrees of freedom.
In the case of scattering off the deuteron the Coulomb and x-rescaling effects are practically negligible and only the hadronic effect is present. Since the hadronic EMC effect is proportional to the average nucleon kinetic energy (average virtuality), it is expected to be approximately a factor of 4 smaller for the deuteron than for medium and heavy nuclei \cite{Frankfurt:1985cv}, \cite{Ciofi}.
As a result, the EMC effect for the deuteron ($R_D(x,Q^2)=F_{2D}/F_{2N}(x,Q^2)$), say for $x=0.5$, is approximately 1/4 of the difference between the dashed and solid curves in Fig.~2b for $A\ge 50$ -- that is,
$1 - R_D(0.5,Q^2) \approx 0.01$ (a factor of $\sim 2$ smaller than if one assumes that all of the EMC effect is due to scattering off SRCs), leading to a reduction of the extracted $F_{2n}/F_{2p}$ ratio at large $x$.
\section{Some implications for neutron stars}
The small probability of non-nucleonic degrees of freedom in nuclei, including SRCs, which follows from the studies of hard nuclear phenomena, fits well with the recent observation \cite{Demorest:2010bx} of a heavy neutron star of about two solar masses -- models where non-nucleonic degrees of freedom are easily excited do not allow the existence of such heavy neutron stars.
Our focus is on the outer core, where the nucleon density is close to the nuclear one, $\rho \sim (2\div 3) \rho_0$ with $\rho_0 \approx$ 0.16 nucleon/fm$^3$, and the ratio of the proton and neutron densities is $x\sim 1/10$, corresponding to
\begin{equation}
k_F(p)/k_F(n) = (N_p/N_n)^{1/3}\equiv x^{1/3} \ll 1.
\end{equation}
The probability of pn SRCs grows with the neutron density, which is a factor of $4 \div 6$ higher for $\rho \sim (2\div 3) \rho_0$ than in ordinary nuclei. As a result,
the neutron gas ``heats'' the proton gas, leading to the practical disappearance of the proton Fermi surface \cite{Frankfurt:2008zv}.
The high-momentum tails of the proton and neutron distributions are directly calculable. To leading order in $k_F^2/k^2$, the occupation numbers for protons and neutrons with momenta above the Fermi surface are
\begin{eqnarray}
f_{n}(k,T=0) \approx \rho_{n}^2\left[\left(\frac{V_{nn}(k)}{k^2/m_N}\right)^2+ 2x\left(\frac{V_{pn}(k)}{k^2/m_N}\right)^2\right], \\ \nonumber
\, f_{p}(k,T=0) \approx \rho_{n}^2\left[x^2 \left(\frac{V_{pp}(k)}{k^2/m_N}\right)^2 +
2x \left(\frac{V_{pn}(k)}{k^2/m_N}\right)^2\right].
\end{eqnarray}
Since there are equal numbers of protons and neutrons above the Fermi surface, but $x\ll 1$, the
effect is much larger for protons than for neutrons.
As a result, the internucleon interaction tends to equilibrate the momenta of protons and neutrons -- a strong departure from the ideal gas approximation. The Migdal jump in the proton momentum distribution almost disappears in this limit. The suppression of the proton Fermi surface leads to the suppression of proton superconductivity. At the same time, superfluidity of neutrons and of proton-neutron pairs is not excluded.
Another effect is a large enhancement of the neutrino cooling of neutron stars at finite temperatures \cite{Frankfurt:2008zv}.
The enhancement (a factor of R as compared to the URCA process) is due to the presence of proton holes in the proton Fermi sea. For example, taking $x =0.1$ and the neutron density $\sim \rho_0$, we find for temperatures kT $\ll$ 1 MeV:
\begin{equation}
R \approx 0.1 (MeV/kT)^{3/2},
\end{equation}
and a much larger enhancement for $x \ll 0.1$, where the URCA process is not effective. For example, at kT = 0.01 MeV the estimate above gives $R \approx 100$.
Since the temperature of an isolated neutron star drops below 0.01 MeV after one year, the discussed mechanism leads to a large enhancement
of the cooling.
\section{Conclusions}
The impressive experimental progress of the last few years -- the discovery of strong short-range correlations in nuclei with a strong dominance of I=0 SRCs -- confirmed a series of our predictions from the 80's and has proven the validity of the general strategy of using hard nuclear reactions for probing microscopic nuclear structure. It provides a solid basis for further studies. Several experiments are under way and several are already a part of the planned 12 GeV Jlab research. The hadronic EMC effect is a factor $\sim 2$ smaller for $x \le 0.5$ than was thought previously,
but it kicks in rapidly at $x > 0.5$, implying that tagged structure function studies should observe a transition from a nearly free nucleon-like $F_{2N}$ for $x\le 0.45$ to a strongly deformed $F_{2N}$ at $x\sim 0.6$. Nucleons remain practically undeformed up to local densities comparable to the neutron star densities, which is consistent with a stiff equation of state for neutron stars.
A direct observation of 3N SRCs and of non-nucleonic degrees of freedom in nuclei ($\Delta$-isobar-like configurations, etc.), which are of direct
relevance for the dynamics of neutron star cores, is at the top of the agenda for future research. Observation of these effects will be one of the aims of our data mining program at Jlab, as well as of a number of experiments at 12 GeV.
Complementary experiments with hadron beams (FAIR, J-PARC) are highly desirable.
\section{Introduction}
Anti-coordination games form some of the basic payoff structures in game theory.
Such games are ubiquitous: miners deciding which land to drill for resources,
company employees trying to learn diverse skills, and airplanes selecting flight
paths all need to mutually anti-coordinate their strategies in order to maximize
their profits or even avoid catastrophe.
Two-player anti-coordination is simple and well understood. In its barest form,
the players have two actions, and payoffs are symmetric for the players, paying
off $1$ if the players choose different actions and $0$ otherwise. This game
has two strict pure-strategy equilibria, paying off $1$ to each player, as well
as a non-strict mixed-strategy equilibrium paying off an expected $1/2$ to each
player.
In the real world, however, coordination and anti-coordination games are more
complex than the simple two-player game. People, companies, and even countries
play such multi-party games simultaneously with one another. One straightforward
way to model this is with a graph, whose vertices correspond to agents and whose
edges capture their pairwise interactions. A vertex then chooses one of $k$
strategies, trying to anti-coordinate with all its neighbors simultaneously.
The payoff of a vertex is the sum of the payoffs of its games with its neighbors
-- namely the number of neighbors with which it has successfully
anti-coordinated. It is easy to see that this model naturally captures many
applications. For example countries may choose commodities to produce, and
their value will depend on how many trading partners do not produce that
commodity.
In this paper we focus on finding \text{pure strategies} in equilibrium, as well
as their associated social welfare and price of anarchy, concepts we shall
presently define. We look at both strict and non-strict pure strategy
equilibria, as well as games on directed and undirected graphs. Directed graphs
characterize the case where only one of the vertices is trying to
anti-coordinate with another. The directed case turns out to not only
generalize the symmetric undirected case, but also captures coordination in
addition to anti-coordination.
These problems also have nice interpretations as certain natural graph coloring
and partition problems, variants of which have been extensively studied. For
instance, a pure strategy equilibrium in an undirected graph corresponds to what
we call a stable $k$-coloring of the graph, in which no vertex can have fewer
neighbors of any color different than its own. For $k=2$ colors this is
equivalent to the well-studied \emph{unfriendly partition} or
\emph{co-satisfactory partition} problem. The strict equilibrium version of
this problem (which corresponds to what we call a strictly stable $k$-coloring)
generalizes the \emph{strictly unfriendly partition problem}. We establish both
the NP-hardness of the decision problem for strictly unfriendly partitions and
NP-hardness for higher $k$.
\subsection{Previous work}
In an early work on what can be seen as a coloring game, Naor and
Stockmeyer~\cite{NaorS93} define a \emph{weak $k$-coloring} of a graph to be one
in which each vertex has a differently colored neighbor. They give a locally
distributed algorithm that, under certain conditions, weakly $2$-colors a graph
in constant time.
Then, in an influential experimental study of anti-coordination in networks,
Kearns~et~al.~\cite{KearnsSM06} propose a true graph coloring game, in which
each participant controlled the color of a vertex, with the goal of coloring a
graph in a distributed fashion. The players receive a reward only when a proper
coloring of the graph is found. The theoretical properties of this game are
further studied by Chaudhuri~et~al.~\cite{ChaudhuriGJ08} who prove that in a
graph of maximum degree $d$, if players have $d + 2$ colors available they will
w.h.p.\ converge to a proper coloring rapidly using a greedy local algorithm.
Our work is also largely motivated by the work of Kearns~et~al., but for a
somewhat relaxed version of proper coloring.
Bramoull\'{e}~et~al.~\cite{BramoulleLGV04} also study a general
anti-coordination game played on networks. In their formulation, vertices can
choose to form links, and the payoffs of two anti-coordinated strategies may not
be identical. They go on to characterize the strict equilibria of such games,
as well as the effect of network structure on the behavior of individual agents.
We, on the other hand, consider an arbitrary number of strategies but with a
simpler payoff structure.
The game we study is related to the MAX-$k$-CUT game, in which each player
(vertex) chooses its place in a partition so as to maximize the number of
neighbors in other partitions. Hoefer~\cite{Hoefer2007}, Monnot \&
Gourv\`es~\cite{G09} study Nash equilibria and coalitions in this context.
Our Propositions~\ref{propn:alg} and~\ref{obs:poa} generalize known facts proved
there, and we include them for completeness.
This paper also has a strong relationship to \emph{unfriendly partitions} in
graph theory. An unfriendly partition of a graph is one in which each vertex
has at least as many neighbors in other partitions as in its own. This topic
has been extensively studied, especially in the combinatorics
community~\cite{AharoniMP90,BruhnDGS10,CowanE,ShelahM90}. While locally finite
graphs admit $2$-unfriendly partitions, uncountable graphs may
not~\cite{ShelahM90}.
Friendly (the natural counterpart) and unfriendly partitions are also studied
under the names \emph{max satisfactory} and \emph{min co-satisfactory
partitions} by Bazgan~et~al.~\cite{BazganTV10}, who focus on partitions of size
greater than $2$. They characterize the complexity of determining whether a
graph has a $k$-friendly partition and ask about characterizing $k$-unfriendly
partitions for $k > 2$. Our notion of stable colorings captures unfriendly
partitions, and we also solve the $k>2$ case.
A natural strengthening of the notion above yields \emph{strictly unfriendly
partitions}, defined by Shafique and Dutton~\cite{ShafiqueD09}. A strictly
unfriendly partition requires each vertex to have strictly more neighbors
outside its partition than inside it. Shafique and Dutton characterize a weaker
notion, called \emph{alliance-free partition}, but leave characterizing strictly
unfriendly partitions open. Our notion of strictly stable coloring captures
strictly unfriendly partitions, giving some of the first results on this
problem. Cao and Yang~\cite{CaoY12a} also study a related problem originating
from sociology, called the \emph{matching pennies game}, where some vertices try
to coordinate and others try to anti-coordinate. They prove that deciding
whether such a game has a pure strategy equilibrium is NP-Hard. Our work on the
directed case generalizes their notion (which they suggested for future work).
Among our results we give a simpler proof of their hardness result for $k=2$ and
also tackle $k >2$, settling one of their open questions.
There are a few related games on graphs that involve coloring, but they instead
focus on finding good proper colorings. In~\cite{PS08} Panagopoulou and Spirakis
define a coloring game in which the payoff for a vertex is either zero if it
shares a color with a neighbor, and otherwise the number of vertices in the
graph with which it shares a color. They prove pure Nash equilibria always exist
and can be efficiently computed, and provide nice bounds on the number of colors
used. Chatzigiannakis, et al.~\cite{CKPS10} extend this line of work by
analyzing distributed algorithms for this game, and Escoffier, et
al.~\cite{EGM12} improve their bounds.
\subsection{Results}
We provide proofs of the following, the last two being our main results.
\begin{enumerate}
\item \emph{For all $k \ge 2$, every undirected graph has a stable
$k$-coloring, and such a coloring can be found in polynomial time.} \\ Our
notion of stable $k$-colorings is a strengthening of the notion of
$k$-unfriendly partitions of Bazgan~et~al.~\cite{BazganTV10}, solving their
open problem number 15.
\item \emph{For undirected graphs, the price of anarchy for stable $k$-colorings is
bounded by $\frac{k}{k-1}$, and this bound is tight.}
\item \emph{In undirected graphs, for all $k \ge 2$, determining whether a
graph has a strictly stable $k$-coloring is NP-hard.} \\ For $k=2$, this
notion is equivalent to the notion that is defined by Shafique and
Dutton~\cite{ShafiqueD09}, but left unsolved.
\item \emph{For all $k \ge 2$, determining whether a directed graph has even a
non-strictly stable $k$-coloring is NP-hard.}\\ Because directed graphs also
capture coordination, this solves two open problems of Cao and
Yang~\cite{CaoY12a}, namely generalizing the matching pennies game to more than
two strategies and considering the directed case.
\end{enumerate}
\section{Preliminaries}
For an unweighted undirected graph $G=(V,E)$, let $C = \{f | f: V
\to \{1, \ldots ,k \}\}.$ We call a function $c \in C$ a \textbf{coloring}.
We study the following anti-coordination game played on a graph $G=(V,E)$. In
the game, all vertices simultaneously choose a color, which induces a coloring
$c \in C$ of the graph. In a given coloring $c$, an agent $v$'s
\textbf{payoff}, $\mu_c(v)$, is the number of neighbors choosing colors
different from $v$'s, namely
\[
\mu_c(v) := \sum_{\{v,w\} \in E} \ind{c(v) \neq c(w)}.
\]
Note that in this game higher degree vertices have higher potential payoffs.
We also have a natural generalization to directed graphs. That is, if $G =
(V,E)$ is a directed graph and $c$ is a coloring of $V$, we can define the
payoff $\mu_c(v)$ of a vertex $v \in V$ analogously as the sum over outgoing
edges:
\[
\mu_c(v) := \sum_{(v,w) \in E} \ind{c(v) \neq c(w)}
\]
Here a directed edge from $v$ to $w$ is interpreted as ``$v$ cares about $w$.''
We can then define the social welfare and price of anarchy for directed graphs
identically using this payoff function.
Given a graph $G$, we define the \textbf{social welfare} of a coloring $c$ to
be
\[
W(G,c) := \sum_{v \in V} \mu_c(v).
\]
We say a coloring $c$ is \textbf{stable}, or in {equilibrium}, if no vertex can
improve its payoff by changing its color from $c(v)$ to another color. We define
$Q$ to be the set of stable colorings.
We call a coloring function $c$ \textbf{strictly stable}, or in {strict
equilibrium}, if every vertex would decrease its payoff by changing its color
from $c(v)$ to another color. If a coloring function is stable and at least one
vertex can change its color without decreasing its payoff, then the coloring is
\textbf{non-strict}.
We define the \textbf{price of anarchy} for a graph $G$ to be
\[
\mbox{PoA}(G) := \frac{\max_{c' \in C}W(G,c')}
{\min_{c \in Q}W(G,c)}.
\]
This concept was originally introduced by Koutsoupias and Papadimitriou
in~\cite{KP99}, where they consider the ratio of social payoffs in the best and
worst-case Nash equilibria. Much work has since focused on the price of anarchy,
e.g.~\cite{FKKMS02,RT02}.\\
\noindent \textbf{Mixed and pure strategies}\ \
It is natural to consider both pure and mixed strategies for the players in our
network anti-coordination game. A pure strategy equilibrium does not in general
exist for every 2-player game, while a mixed strategy equilibrium always does.
However, in this coloring game not only does a pure strategy equilibrium always
exist, but for any mixed strategy equilibrium there is a pure strategy
equilibrium which achieves a social welfare at least as high, and in which each
player's payoff equals its expected payoff under the mixed strategy.\\
\noindent \textbf{Strict and non-strict stability}\ \
It is worthwhile to note that a strictly stable coloring $c$ need not provide
the maximum social welfare. In fact, it is not difficult to construct a graph
for which a strictly stable coloring exists yet the maximum social welfare is
achieved by a non-strictly stable coloring, as shown in
Figure~\ref{fig:weakstrongwelfare}.
\begin{figure}[t]
\centering
\scalebox{0.35}{\includegraphics{strong-weak-welfare2s.png}}
\caption{The strictly stable 2-coloring on the left attains a social welfare of
40 while the non-strictly stable coloring on the right attains
42, the maximum for this graph.}
\label{fig:weakstrongwelfare}
\end{figure}
\section{Stable colorings}
First we consider the problem of finding stable colorings in graphs. For the
case $k=2$, this is equivalent to the solved unfriendly partition problem. For
this case our algorithm is equivalent to the well-studied local algorithm for
MAX-CUT~\cite{ElsasserT11,MonienT10}. Our argument is a variant of a standard
approximation algorithm for MAX-CUT, generalized to work with partitions of size
$k \ge 2$.
\begin{propn}\label{propn:alg}
For all $k \ge 2,$ every finite graph $G=(V,E)$ admits a stable $k$-coloring.
Moreover, a stable $k$-coloring can be found in polynomial time.
\end{propn}
\begin{proof}
Given a coloring $c$ of a graph, define $\Phi(c)$ to be the number of
properly-colored edges. It is clear that this function is bounded and that
social welfare is $2 \Phi(c)$. Moreover, the change in a vertex's utility by
switching colors is exactly the change in $\Phi$, realizing this as an exact
potential game~\cite{M96}. In a given coloring, we call a vertex $v$
\emph{unhappy} if $v$ has more neighbors of its color than of some other color.
We now run the following process: while any unhappy vertex exists, change its
color to the color
\begin{equation}\label{eq:greedy}
c'(u) = \argmin_{m \in \{1, \ldots, k\}} \sum_{ v \in N(u)}\ind{c(v) = m}.
\end{equation}
As we only modify the colors of unhappy vertices, such an amendment to a
coloring increases the value of $\Phi$ by at least 1. After at most $|E|$ such
modifications, no vertex will be unhappy, which by definition means the coloring
is stable. \hfill $\square$
\end{proof}
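To make the procedure in the proof concrete, here is a minimal Python sketch of the local-improvement algorithm (the function and variable names are ours); each recoloring follows Equation~\ref{eq:greedy} and raises the potential $\Phi$ by at least one:
\begin{verbatim}
import random
from collections import Counter

def stable_k_coloring(adj, k):
    # adj: dict mapping each vertex to an iterable of its neighbors
    colors = {v: random.randrange(k) for v in adj}
    changed = True
    while changed:
        changed = False
        for v in adj:
            counts = Counter(colors[u] for u in adj[v])
            best = min(range(k), key=lambda m: counts[m])  # Eq. (1)
            if counts[best] < counts[colors[v]]:  # v is unhappy
                colors[v] = best    # raises Phi by at least 1
                changed = True
    return colors
\end{verbatim}
Since $\Phi \le |E|$ and each recoloring strictly increases it, the loop performs at most $|E|$ recolorings, matching the bound in the proof.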
We note that, in the case of $k=2$, maximizing the social welfare of a
stable coloring is equivalent to finding the MAX-CUT of the same graph, which is
known to be NP-hard~\cite{GareyJ79}; hence we cannot hope to efficiently find a
global optimum for the potential function. However, we can ask about the price of anarchy, for
which we obtain a tight bound. The following result also appears, using a
different construction, in~\cite{Hoefer2007}, but we include it herein for
completeness.
\begin{propn}\label{obs:poa}
The price of anarchy of the $k$-coloring anti-coordination game is at most
$\frac{k}{k-1}$, and this bound is tight.
\end{propn}
\begin{proof}
By the pigeonhole principle, each vertex can always achieve a $\frac{k-1}{k}$
fraction of its maximum payoff by choosing its color according to
Equation~\ref{eq:greedy}. Hence, if some vertex does not achieve this payoff
then the coloring is not stable. This implies that the price of anarchy is at
most $\frac{k}{k-1}$.
To see that this bound is tight, take two copies of $K_k$ on vertices $v_1,
\dots, v_k$ and $v_{k+1}, \dots, v_{2k}$ respectively. Add an edge joining $v_i$
with $v_{i+k}$ for $i\in \{1,\dots,k\}$. If each vertex $v_i$ and $v_{i+k}$ is
given color $i$ this gives a stable $k$-coloring of the graph, as each vertex
has one neighbor of each of the $k$ colors attaining the social welfare lower
bound of $2(\frac{k-1}{k})|E|$. If, however, the vertices $v_{i+k}$ take color
$i+1$ for $i\in\{1,\dots,k-1\}$ and $v_{2k}$ takes color 1, the graph achieves
the maximum social welfare of $2|E|$. This is illustrated for $k=5$ in
Figure~\ref{fig:k5copies}.
\hfill
$\square$\end{proof}
\begin{figure}[htb]
\centering
\scalebox{0.40}{\includegraphics{k5construction.png}}
\caption{A graph achieving a PoA of $\frac{5}{4}$, for $k=5$.}
\label{fig:k5copies}
\end{figure}
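The construction in the proof is easy to verify mechanically. The following Python sketch (all names are ours) builds the two joined copies of $K_k$, evaluates the social welfare of the stable coloring and of the optimal coloring described above, and confirms the ratio $\frac{k}{k-1}$:
\begin{verbatim}
def two_cliques(k):
    # two copies of K_k joined by the matching v_i -- v_{i+k}
    adj = {v: set() for v in range(2*k)}
    for block in (range(k), range(k, 2*k)):
        for u in block:
            adj[u] |= set(block) - {u}
    for i in range(k):
        adj[i].add(i + k); adj[i + k].add(i)
    return adj

def welfare(adj, col):
    return sum(col[v] != col[w] for v in adj for w in adj[v])

k = 5
adj = two_cliques(k)
worst = {v: v % k for v in adj}                   # stable coloring
best = {v: v if v < k else (v - k + 1) % k for v in adj}
print(welfare(adj, best) / welfare(adj, worst))   # -> 1.25 = k/(k-1)
\end{verbatim}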
\section{Strictly Stable Colorings}
In this section we show that the problem of finding a strictly stable
equilibrium with any fixed number $k \geq 2$ of colors is NP-complete. We give
NP-hardness reductions first for $k \geq 3$ and then for $k=2$. The $k=2$ case
is equivalent to the strictly unfriendly $2$-partition
problem~\cite{ShafiqueD09}, whose complexity we settle.
\begin{theorem}
For all $k \geq 2$, determining whether a graph has a strictly stable
$k$-coloring is NP-complete.
\end{theorem}
\begin{proof}
This problem is clearly in NP. We now analyze the hardness in two cases.
\noindent \emph{1)} $k\ge3$:
For this case we reduce from classical $k$-coloring. Given a graph $G$, we
produce a graph $G'$ as follows.
Start with $G' = G$, and then for each edge $e = \{ u,v \}$ in $G$ add a copy
$H_e$ of $K_{k-2}$ to $G'$ and enough edges s.t.\ the
induced subgraph of $G'$ on $V(H_e) \cup \left \{ u,v \right \}$
is the complete graph on $k$ vertices. Figure~\ref{fig:edgegadget} illustrates
this construction.
\begin{figure}[htb]
\centering
\scalebox{0.4}{\includegraphics{arbitrary-k-reduction-gadget-small.png}}
\caption{The gadget added for each edge in $G$.}
\label{fig:edgegadget}
\end{figure}
Now supposing that $G$ is $k$-colorable, we construct a strictly stable
equilibrium in $G'$ as follows. Fix any proper $k$-coloring $\varphi$ of $G$.
Color each vertex in $G'$ which came from $G$ (which is not in any $H_e$) using
$\varphi$. For each edge $e = \{u,v\}$ we can trivially assign the remaining
$k-2$ colors among the vertices of $H_e$ to put the corresponding copy of $K_k$
in a strict equilibrium. Doing this for every such edge results in a strictly
stable coloring. Indeed, this is a proper $k$-coloring of $G'$ in which every
vertex is adjacent to vertices of all other $k-1$ colors.
Conversely, suppose $G'$ has a strictly stable equilibrium with $k$ colors.
Then no edge $e$ originally coming from $G$ can be monochromatic. If it were,
then there would be $k-1$ remaining colors to assign among the remaining $k-2$
vertices of $H_e$. No matter the choice, some color is unused and any vertex of
$H_e$ could change its color without penalty, contradicting that $G'$ is in a
strict equilibrium.
The only issue is if $G$ originally has an isolated vertex. In this case, $G'$
would have an isolated vertex, and hence will not have a strict equilibrium
because the isolated vertex may switch colors arbitrarily without decreasing its
payoff. In this case, augment the reduction to attach a copy of $K_{k-1}$ to
the isolated vertex, and the proof remains the same.
\noindent \emph{2)} $k =2$:
We reduce from 3-SAT. Let $\varphi
= C_1 \wedge \dots \wedge C_m$ be a boolean formula in 3-CNF form. We construct
a graph $G$ by piecing together gadgets as follows.
For each clause $C_i$ construct an isomorphic copy of the graph shown in
Figure~\ref{fig:clausegadget}. We call this the \emph{clause gadget} for $C_i$.
In Figure~\ref{fig:clausegadget}, we label certain vertices to show how the
construction corresponds to a clause. We call the two vertices labeled by the
same literal in a clause gadget a \emph{literal gadget.} In particular,
Figure~\ref{fig:clausegadget} would correspond to the clause $(x \vee y \vee
\bar{z})$, and a literal assumes a value of true when the literal gadget is
monochromatic. Later in the proof we will force literals to be consistent across
all clause gadgets, but presently we focus on the following key property of a
clause gadget.
\begin{figure}[t]
\centering
\scalebox{0.37}{\includegraphics{2-coloring-clause-gadget-small.png}}
\caption{The clause gadget for $(x \vee y \vee \bar{z})$. Each literal
corresponds to a pair of vertices, and a literal being satisfied corresponds
to both vertices having the same color.}
\label{fig:clausegadget}
\end{figure}
\begin{lemma}
\label{lemma:clausegadget}
Any strictly stable 2-coloring of a clause gadget has a monochromatic literal
gadget. Moreover, any coloring of the literal gadgets which includes a
monochromatic literal extends to a strictly stable coloring of the clause
gadget (excluding the literal gadgets).
\end{lemma}
\begin{proof}
The parenthetical note will be resolved later by the high degree of the vertices
in the literal gadgets. Up to symmetries of the clause gadget (as a graph) and
up to swapping colors, the proof of Lemma~\ref{lemma:clausegadget} is
illustrated in Figure~\ref{fig:clauselemmaproof}. The first five graphs show the
cases where one or more literal gadgets are monochromatic, and the sixth shows
how no strict equilibrium can exist otherwise. Using the labels in
Figure~\ref{fig:clauselemmaproof}, whatever the choice of color for the vertex
$v_1$, its two uncolored neighbors must have the same color (or else $v_1$ is
not in strict equilibrium). Call this color $a$. For $v_2, v_3$, use the same
argument and call the corresponding colors $b, c$, respectively. Since there are
only two colors, one pair of $a,b,c$ must agree. WLOG suppose $a=b$. But then
the two vertices labeled by $a$ and $b$ which are adjacent are not in strict
equilibrium. \hfill $\square$
\end{proof}
\begin{figure}[h]
\centering
\scalebox{0.5}{\includegraphics{clause-gadget-lemma-small.png}}
\caption{The first five figures show
that a coloring with a monochromatic literal gadget can be extended to a strict
equilibrium. The sixth (bottom right) shows that no strict equilibrium can
exist if all the literals are not monochromatic.}
\label{fig:clauselemmaproof}
\end{figure}
Using Lemma~\ref{lemma:clausegadget}, we complete the proof of the theorem. We
must enforce that any two identical literal gadgets in different clause gadgets
agree (they are both monochromatic or both not monochromatic), and that any
negated literals disagree. We introduce two more simple gadgets for each
purpose.
The first is for literals which must agree across two clause gadgets, and we
call this the \emph{literal persistence gadget}. It is shown in
Figure~\ref{fig:connectiongadgets}. The choice of colors for the literals on one
side determines the choice of colors on the other, provided the coloring is
strictly stable. In particular, this follows from the central connecting vertex
having degree 2. A nearly identical argument applies to the second gadget, which
forces negated literals to assume opposite truth values. We call this the
\emph{literal negation gadget}, and it is shown in
Figure~\ref{fig:connectiongadgets}. We do not connect all matching literals
pairwise by such gadgets but rather choose one reference literal $x'$ per
variable and connect all literals for $x, \overline{x}$ to $x'$ by the needed
gadget.
\begin{figure}[htb]
\centering
\scalebox{0.45}{\includegraphics{connection-gadgets-small.png}}
\caption{The literal persistence gadget (left) and literal negation gadget
(right) connecting two clause gadgets $C_i$ and $C_j$. The vertices labeled $x$
on the left are part of the clause gadget for $C_i$, and the vertices labeled
$x$ on the right are in the gadget for $C_j$.}
\label{fig:connectiongadgets}
\end{figure}
The reduction is proved in a straightforward way. If $\varphi$ is satisfiable,
then monochromatically color all satisfied literal gadgets in $G$. We can extend
this to a stable 2-coloring: all connection gadgets and unsatisfied literal
gadgets are forced, and by Lemma~\ref{lemma:clausegadget} each clause gadget can
be extended to an equilibrium. By attaching two additional degree-one
vertices to each vertex in a literal gadget, we can ensure that the literal
gadgets themselves are in strict equilibrium and this does not affect any of the
forcing arguments in the rest of the construction.
Conversely, if $G$ has a strictly stable 2-coloring, then each clause gadget has
a monochromatic literal gadget which gives a satisfying assignment of $\varphi$.
All of the gadgets have a constant number of vertices so the construction is
polynomial in the size of $\varphi$. This completes the reduction and proves the
theorem. \hfill $\square$
\end{proof}
\section{Stable colorings in directed graphs}
In this section we turn to directed graphs. The directed case clearly
generalizes the undirected as each undirected edge can be replaced by two
directed edges. Moreover, directed graphs can capture coordination. For two
colors, if vertex $u$ wants to coordinate with vertex $v$, then instead of
adding an edge $(u,v)$ we can add a proxy vertex $u'$ and edges $(u,u')$ and
$(u',v)$. To be in equilibrium, the proxy has no choice but to disagree with
$v$, and so $u$ will be more inclined to agree with $v$. For $k$ colors we can
achieve the same effect by adding an undirected copy of $K_{k-1}$, appropriately
orienting the edges, and adding edges $(u,x), (x,v)$ for each $x \in K_{k-1}$.
Hence, this model is quite general.
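As a quick sanity check of the proxy gadget, the Python sketch below (our code) enumerates all 2-colorings of the triple $u \to u' \to v$ and confirms that in every stable coloring $u$ ends up agreeing with $v$:
\begin{verbatim}
from itertools import product

def stable(cu, cp, cv):
    # u's only out-neighbor is the proxy u'; u' points to v;
    # v has no out-edges, so it is always stable
    return (cu != cp) and (cp != cv)   # neither u nor u' can improve

assert all(cu == cv
           for cu, cp, cv in product((0, 1), repeat=3)
           if stable(cu, cp, cv))
\end{verbatim}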
Unlike in the undirected graph case, a vertex updating its color according to
Equation~\ref{eq:greedy} does not necessarily improve the overall social
welfare. In fact, we cannot guarantee that a pure strategy equilibrium even
exists -- e.g.\ a directed $3$-cycle has no stable 2-coloring, a fact that we
will use in this section.
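The 3-cycle claim is small enough to check exhaustively; the brute-force Python sketch below (again our code) confirms that none of the eight 2-colorings of a directed 3-cycle is stable:
\begin{verbatim}
from itertools import product

def stable(col):
    # vertex v has the single out-neighbor (v + 1) % 3
    for v in range(3):
        if col[v] == col[(v + 1) % 3]:  # payoff 0; switching gains 1
            return False
    return True

print(any(stable(c) for c in product((0, 1), repeat=3)))  # False
\end{verbatim}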
We now turn to the problem of determining if a directed graph has an equilibrium
with $k$ colors and prove it is NP-hard. Indeed, for strictly stable colorings
the answer is immediate by reduction from the undirected case. Interestingly
enough, it is also NP-hard for non-strict $k$-colorings for any $k \geq 2$.
\begin{theorem}
For all $k \geq 2$, determining whether a directed graph has a
stable $k$-coloring is NP-complete.
\end{theorem}
\begin{proof}
This problem is clearly in NP. We again separate the hardness analysis into two
parts: $k=2$ and $k \geq 3$.
\noindent \emph{1)} $k=2$:
We reduce from the balanced unfriendly partition problem. A balanced 2-partition
of an undirected graph is called unfriendly if each vertex has at least as many
neighbors outside its part as within. Bazgan et al. proved that the decision
problem for balanced unfriendly partitions is NP-complete~\cite{BazganTV10}.
Given an undirected graph $G$ as an instance of balanced unfriendly partition,
we construct a directed graph $G'$ as follows.
Start by giving $G'$ the same vertex set as $G$, and replace each undirected
edge of $G$ with a pair of directed edges in $G'$. Add two vertices $u,v$ to
$G'$, each with edges to the other and to all other vertices in $G'$. Add an
additional vertex $w$ with an edge $(w,v)$, and connect one vertex of a directed
3-cycle to $u$ and to $w$, as shown in Figure~\ref{fig:weaktwocolornphard}.
\begin{figure}[htb]
\centering
\scalebox{0.4}{\includegraphics{weak-twocolor-small.png}}
\caption{The construction from balanced unfriendly partition to directed stable
2-coloring. Here $u$ and $v$ ``stabilize'' the 3-cycle. A bold arrow denotes a
complete incidence from the source to the target.}
\label{fig:weaktwocolornphard}
\end{figure}
A balanced unfriendly partition of $G$ corresponds to a two-coloring of $G$
in which the colors occur equally often. Partially coloring $G'$ in this way, we
can achieve stability by coloring $u,v$ opposite colors, coloring $w$ the same
color as $u$, and using this to stabilize the 3-cycle, as shown in
Figure~\ref{fig:weaktwocolornphard}. Conversely, suppose $G$ does not have a
balanced unfriendly partition and fix a stable 2-coloring of $G'$. WLOG suppose
$G$ has an even number of vertices and suppose color 1 occurs more often among
the vertices coming from $G$. Then $u,v$ must both have color 2, and hence $w$
has color 1. Since $u,w$ have different colors, the 3-cycle will not be stable.
This completes the reduction.
\noindent \emph{2)} $k\ge3$:
We reduce from the case of $k=2$. The idea is to augment the construction $G'$
above by disallowing all but two colors to be used in the $G'$ part. We call the
larger construction $G''$.
We start with $G'' = G'$ and add two new vertices $x,y$ to $G''$ which are adjacent
to each other. In a stable coloring, $x$ and $y$ will necessarily have different
colors (in our construction they will not be the tail of any other edges). We
call these colors 1 and 2, and will force them to be used in coloring $G'$.
Specifically, let $n$ be the number of vertices of $G'$, and construct $n^3$
copies of $K_{k-2}$. For each vertex $v$ in any copy of $K_{k-2}$, add the edges
$(v,x), (v,y)$. Finally, add all edges $(a,b)$ where $a \in G'$ and $b$ comes
from a copy of $K_{k-2}$. Figure~\ref{weakkcolornphard} shows this construction.
\begin{figure}[htb]
\centering
\scalebox{0.4}{\includegraphics{weak-ktotwo-small.png}}
\caption{Reducing $k$ colors to two colors. A bold arrow indicates complete
incidence from the source subgraph to the target subgraph.}
\label{weakkcolornphard}
\end{figure}
Now in a stable coloring any vertex from a copy of $K_{k-2}$ must use a
different color than both $x,y$, and the vertex set of a copy of $K_{k-2}$ must
use all possible remaining $k-2$ colors. By being connected to $n^3$ copies of
$K_{k-2}$, each $a \in G'$ will have exactly $n^3$ neighbors of each of the
$k-2$ colors. Even if $a$ were connected to all other vertices in $G'$ and they
all use color 1, it is still better to use color 1 than to use any of the colors
in $\left \{ 3, \dots, k \right \}$. The same holds for color 2, and hence we
force the vertices of $G'$ to use only colors 1 and 2. \hfill $\square$
\end{proof}
\section{Discussion and open problems}
In this paper we defined new notions of graph coloring. Our results elucidated
anti-coordination behavior, and solved some open problems in related areas.
Many interesting questions remain. For instance, one can consider alternative
payoff functions. For players choosing colors $i$ and $j$, the payoff $|i-j|$ is
related to the \emph{channel assignment problem}~\cite{vandenHeuvel98}. In the
cases when the coloring problem is hard, as in our problem and the example
above, we can find classes of graphs in which it is feasible, or study random
graphs in which we conjecture colorings should be possible to find. Another
variant is to study weighted graphs, perhaps with weights interpreted as
distances satisfying a Euclidean metric.
\subsubsection*{Acknowledgements}
We thank Gy\"{o}rgy Tur\'{a}n for helpful discussions.
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
In the age of cloud computing, cloud servers with adequate resources help their clients
by performing huge amounts of computation or by storing large amounts of data (say, of the order of terabytes)
on their behalf. In this setting, a client only has to download the result of the computation or has to read
(or update) the required portion of the outsourced data.
Several storage service providers like Amazon Simple Storage Service (S3), Dropbox, Google Drive and Microsoft Azure
provide storage outsourcing facilities to their clients (data owners).
However, a cloud storage server can be malicious and delete some (less frequently accessed) part of the client's data in
order to save space. Secure cloud storage protocols (two-party protocols between the client and the server)
provide a cryptographic solution to this problem by ensuring that the client's data are stored untampered in
the cloud server.
In a secure cloud storage scheme, a client can remotely check the integrity of her data file outsourced
to the cloud server. A possible way to do this is to divide the data file into blocks and attach
an authenticator (or tag) to each of these blocks before the initial outsourcing. When the client wants
to check the integrity of her data, she downloads all the blocks of the file along with their tags from the server
and verifies them individually. However, this process is highly inefficient due to the large communication
bandwidth required between the client and the cloud server.
In order to resolve the issue mentioned above, a notion called \textit{proofs-of-storage} comes
into play where the client can audit her data file stored in the server without accessing
the whole file, and still be able to detect an unwanted modification of the file
done by the malicious server.
The concept of \textit{provable data possession} (PDP) is introduced by Ateniese et al.~\cite{Ateniese_CCS}
where the client computes an authentication tag for each block of her data file
and uploads the file along with these authentication tags as stated earlier.
Later, the client can execute an \textit{audit} protocol and verify the integrity
of the data file by checking only a predefined number of randomly sampled blocks of the file.
Although efficient PDP schemes~\cite{Ateniese_CCS,Erway_TISSEC,Wang_TPDS,Wang_TC}
are available in the literature, they only provide the guarantee of retrievability
of \textit{almost all} blocks of the data file. We briefly mention some situations
where this guarantee is not sufficient. The data file may contain some sensitive
information (e.g., accounting information) any part of which the client does not want to
lose. On the other hand, a corruption in the compression table of an outsourced compressed file might
make the whole file unavailable. In a searchable symmetric encryption scheme~\cite{CurtmolaGKO06},
the client encrypts a database using a symmetric encryption scheme to form an index
(metadata for that database)~\cite{Goh03} and outsources the encrypted documents along with
the index to the server. The size of this index is very small compared to the encrypted
database itself. However, loss of the index completely defeats the purpose of the searchable
encryption scheme. In such circumstances, the client wants a stronger notion than PDP
which would guarantee that the server has stored the \textit{entire} file properly and
the client can retrieve \textit{all} of her data blocks at any point of time.
To address the issue mentioned above, Juels and Kaliski~\cite{JK_CCS} introduce
\textit{proofs of retrievability} (POR) where the data file outsourced to the server can be
retrieved in its entirety by the client. The underlying idea~\cite{SW_ACR} of
a POR scheme is to encode the original data file with an error-correcting code, authenticate
the blocks of the encoded file with tags and upload them on the storage server.
As in PDP schemes, the client can execute an audit protocol to check the integrity
of the outsourced data file. The use of error-correcting codes ensures that \textit{all}
data blocks of the file are retrievable.
Depending on the nature of the outsourced data, POR schemes are classified as:
POR schemes for \textit{static} data (static POR) and \textit{dynamic} data
(dynamic POR). For static data, the client cannot change her data after the initial outsourcing
(suitable mostly for backup or archival data). Dynamic data are more generic in that the client
can modify her data as often as needed.
The POR schemes are \textit{publicly verifiable} if audits can be performed by any third party auditor (TPA)
with the knowledge of public parameters only; they are \textit{privately verifiable} if only the client (data owner)
with some secret information can perform audits.
Designing an \textit{efficient} and publicly verifiable dynamic POR scheme is an important research problem due to its practical applications.
There are various efficiency parameters where the performance of a dynamic POR scheme might be improved.
Some of them include the communication bandwidth required to read or write a data block (or to perform
an audit), the client's storage, and the computational cost at the client's (or the server's) end.
On the other hand, the client often prefers to delegate the auditing task to a third party auditor (TPA)
who performs audits on the client's data and informs the client in case of any anomaly detected in
the server's behavior.
Shi et al.~\cite{Stefanov_CCS} propose two efficient dynamic POR schemes: one with
private verifiability and another with public verifiability.
In this work, we provide a construction of a publicly verifiable dynamic POR scheme that is more efficient (in terms of write and audit costs)
than the publicly verifiable scheme proposed by Shi et al.~\cite{Stefanov_CCS}.
Moreover, the public parameters used in the latter scheme need to be changed for \textit{each} write
operation performed on the client's data, which is a clear overhead since these public parameters must then
be validated and certified for every write.
\medskip
\noindent{\bf Our Contribution}\q
We summarize our contributions in this paper as follows.
\begin{itemize}
\item We construct a \textit{dynamic} proofs-of-retrievability (POR) scheme where the client outsources
her data file to the cloud server and she can update the content of the file later.
Our construction is based on the \textit{homomorphic hashing} technique.\smallskip
\item Our dynamic POR scheme offers \textit{public verifiability}, that is, the client can delegate the auditing task
to a third party auditor who performs audits on the client's behalf. \smallskip
\item We show that our scheme is secure in the sense that the client gets an assurance
that her data file stored by the server is authentic and up-to-date, and \textit{all}
the data blocks can be retrieved by the client as often as needed. \smallskip
\item We analyze the performance of our scheme and compare it with other existing dynamic POR
schemes (having private or public verifiability). \smallskip
\item Our publicly verifiable dynamic POR scheme enjoys more efficiency (in terms of communication bandwidths
required for a write and an audit) than the ``state-of-the-art'' publicly verifiable dynamic POR scheme~\cite{Stefanov_CCS}.
Moreover, unlike the latter scheme, there is no need to validate or certify the public parameters in our scheme
for every write operation as they are fixed since the initial setup phase.
\end{itemize}
\medskip
The rest of the paper is organized as follows.
Section~\ref{prelims} describes the preliminaries and background related to our work.
In Section~\ref{scs}, we survey the existing literature on secure cloud storage schemes.
Section~\ref{scheme} provides a detailed construction of our publicly verifiable dynamic POR scheme.
We analyze the security of our scheme in Section~\ref{security}.
Finally, in Section~\ref{performance_ana}, we discuss the performance
of our scheme and compare our scheme with other existing dynamic POR schemes
based on different parameters (shown in Table~\ref{tab:comparison_POR}). We also show that our scheme is more efficient
than the publicly verifiable dynamic POR scheme proposed by Shi et al.~\cite{Stefanov_CCS}.
In the concluding Section~\ref{sec:conclusion}, we summarize the work done in this paper.
\section{Preliminaries and Background}
\label{prelims}
\subsection{Notation}
We take $\lambda$ to be the security parameter.
An algorithm denoted by $\mathcal{A}(1^\lambda)$ is a probabilistic polynomial-time
algorithm when its running time is polynomial in $\lambda$ and its output $y$
is a random variable which depends on the internal coin tosses of $\mathcal{A}$.
If $\mathcal{A}$ is given access to an oracle $\mathcal{O}$, we denote $\mathcal{A}$
by $\mathcal{A}^{\mathcal{O}}$ also.
An element $a$ chosen uniformly at random from a set $S$
is denoted as $a\xleftarrow{R}S$.
A function $f:\N\rightarrow\R$ is called negligible in $\lambda$ if for
all positive integers $c$ and for all sufficiently large $\lambda$,
we have $f(\lambda)<\frac{1}{\lambda^c}$.
\subsection{Erasure Code}
\label{erasure_code}
A $(\tilde{m},\tilde{n},d)_\Sigma$-erasure code~\cite{ErasureCode_FAST05_Tutorial,ErasureCode_FAST12} is an error-correcting code~\cite{MWSloane77}
that comprises an encoding algorithm Enc: $\Sigma^{\tilde{n}}\rightarrow\Sigma^{\tilde{m}}$
(encodes a message consisting of $\tilde{n}$ symbols into a longer codeword consisting of $\tilde{m}$ symbols) and
a decoding algorithm Dec: $\Sigma^{\tilde{m}}\rightarrow\Sigma^{\tilde{n}}$ (decodes a codeword to a message),
where $\Sigma$ is a finite alphabet and $d$ is the minimum distance
(Hamming distance between any two codewords is at least $d$) of the code.
The quantity $\frac{\tilde{n}}{\tilde{m}}$ is called the rate of the code.
A $(\tilde{m},\tilde{n},d)_\Sigma$-erasure code can tolerate up to $d-1$ erasures.
If $d=\tilde{m}-\tilde{n}+1$, we call the code a maximum distance separable (MDS) code.
For an MDS code, the original message can be reconstructed from any $\tilde{n}$
out of $\tilde{m}$ symbols of the codeword. Reed-Solomon codes~\cite{RSCode}
and their extensions are examples of non-trivial MDS codes.
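As an illustration, the following minimal Python sketch implements a toy Reed--Solomon-style
MDS code over a prime field by polynomial evaluation and Lagrange interpolation.
The parameters ($p=257$, $\tilde{n}=3$, $\tilde{m}=6$) are hypothetical toy values chosen only
to make the MDS property visible: any $\tilde{n}$ of the $\tilde{m}$ codeword symbols suffice
to recover the message.
\begin{verbatim}
# Toy Reed-Solomon-style MDS erasure code over GF(p); p must exceed m.
p = 257

def encode(msg, m):
    # n message symbols = coefficients of a degree-(n-1) polynomial;
    # the codeword is its evaluations at the points 0, 1, ..., m-1.
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
            for x in range(m)]

def decode(points, n):
    # Lagrange interpolation from any n surviving (x, y) pairs.
    xs, ys = zip(*points[:n])
    coeffs = [0] * n
    for j in range(n):
        num, denom = [1], 1     # l_j(X) = prod_{k!=j} (X-x_k)/(x_j-x_k)
        for k in range(n):
            if k == j:
                continue
            num = [(a - xs[k] * b) % p
                   for a, b in zip([0] + num, num + [0])]
            denom = denom * (xs[j] - xs[k]) % p
        inv = pow(denom, p - 2, p)     # modular inverse, p prime
        for i in range(n):
            coeffs[i] = (coeffs[i] + ys[j] * num[i] * inv) % p
    return coeffs

msg = [42, 7, 99]                            # n = 3 message symbols
cw = encode(msg, 6)                          # m = 6 codeword symbols
assert decode([(4, cw[4]), (0, cw[0]), (5, cw[5])], 3) == msg
\end{verbatim}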
\subsection{Merkle Hash Tree}
\label{MHT}
A Merkle hash tree~\cite{Merkle_CR} is a binary tree where each leaf-node stores a data item.
The label of each leaf-node is the data item stored in the node itself.
A collision-resistant hash function $h_{CR}$ is used to label the intermediate nodes of the tree.
The output of $h_{CR}$ on any input is a binary string of length $O(\lambda)$.
The label of an intermediate node $v$ is the output of $h_{CR}$ computed on the labels of the child
nodes of $v$.
A Merkle hash tree is used as a standard tool for efficient memory-checking.
\begin{figure}[bp]
\scriptsize
\centering
\begin{tikzpicture}[level distance=1.2cm,
level 1/.style={sibling distance=4.2cm},
level 2/.style={sibling distance=1.8cm},
level 3/.style={sibling distance=1cm},
every text node part/.style={align=center}]
\node {A \\ $h_{CR}(h_{CR}(h_{CR}(d_1, d_2), h_{CR}(d_3, d_4)), h_{CR}(h_{CR}(d_5, d_6), h_{CR}(d_7, d_8)))$}
child {node {B \\ $h_{CR}(h_{CR}(d_1, d_2), h_{CR}(d_3, d_4))$}
child {node {D \\ $h_{CR}(d_1, d_2)$}
child {node {H \\ $d_1$}}
child {node {I \\ $d_2$ }}
}
child {node {E \\ $h_{CR}(d_3, d_4)$}
child {node {J \\ $d_3$}}
child {node {K \\ $d_4$}}
}
}
child {node {C \\ $h_{CR}(h_{CR}(d_5, d_6), h_{CR}(d_7, d_8))$}
child {node {F \\ $h_{CR}(d_5, d_6)$}
child {node {L \\ $d_5$}}
child {node {M \\ $d_6$}}
}
child {node {G \\ $h_{CR}(d_7, d_8)$}
child {node {N \\ $d_7$}}
child {node {O \\ $d_8$}}
}
};
\end{tikzpicture}
\caption{A Merkle hash tree containing data items $\{d_1,d_2,\ldots ,d_8\}$.}
\label{fig:MHT}
\end{figure}
Fig.~\ref{fig:MHT} shows a Merkle hash tree containing the data items $\{d_1,d_2,\ldots,d_8\}$
stored at the leaf-nodes. Consequently, the labels of the intermediate nodes are computed
using the hash function $h_{CR}$. The hash value of the node $A$ is the \textit{root-digest}.
The proof showing that a data item $d$ is present in the tree consists of the data item $d$ and
the labels of the siblings of the nodes along the \textit{associated path} (the path from the leaf-node
containing the data item $d$ to the root). For example, a proof showing that $d_3$ is present in the tree
consists of $\{d_3,(d_4,l_D,l_C)\}$, where $d_4,l_D$ and $l_C$ are the labels of the nodes
$K,D$ and $C$, respectively. Given such a proof, a verifier computes the hash value of the root.
The verifier outputs \texttt{accept} if the computed hash value matches with the root-digest;
it outputs \texttt{reject}, otherwise. The size of a proof is logarithmic in the number of data items
stored in the leaf-nodes of the tree.
Due to the collision-resistance property of $h_{CR}$, it is infeasible (except with some
probability negligible in the security parameter $\lambda$) to add or modify a data item
in the Merkle hash tree without changing its root-digest.
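The following minimal Python sketch illustrates this verification procedure, assuming SHA-256
as the collision-resistant hash $h_{CR}$; the tree layout and the proof for $d_3$ mirror
Fig.~\ref{fig:MHT}.
\begin{verbatim}
import hashlib

def h_cr(left, right):
    return hashlib.sha256(left + right).digest()

def root_digest(items):
    # the label of a leaf-node is the data item itself
    level = list(items)
    while len(level) > 1:
        level = [h_cr(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def verify(item, index, siblings, digest):
    # 'siblings' lists sibling labels along the leaf-to-root path;
    # the parity of 'index' at each level tells left from right.
    label = item
    for sib in siblings:
        label = h_cr(sib, label) if index % 2 else h_cr(label, sib)
        index //= 2
    return label == digest

items = [bytes([i]) for i in range(1, 9)]     # d_1, ..., d_8
dig = root_digest(items)
# proof for d_3 (0-based index 2): labels of K, D and C
proof = [items[3],
         h_cr(items[0], items[1]),
         root_digest(items[4:])]
assert verify(items[2], 2, proof, dig)
\end{verbatim}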
\subsection{Digital Signature Scheme}
\label{dig_sig}
Diffie and Hellman introduce the public-key cryptography and the notion
of digital signatures in their seminal paper ``New Directions in
Cryptography''~\cite{DH_ITIT}. Rivest et al.~propose the first digital signature scheme
based on the RSA assumption~\cite{RSA_CACM}.
Boneh et al.~\cite{BLS_JOC} introduce the first signature scheme
where the signatures are short (e.g., such a signature of size 160 bits provides the security comparable to
that of a 1024-bit RSA signature).
The DSA (Digital Signature Algorithm)~\cite{DSA} and ECDSA (Elliptic Curve Digital Signature Algorithm)~\cite{ECDSA}
signature schemes (variants of the ElGamal signature scheme~\cite{ElGamal_CR}) are widely used in practice.
A digital signature scheme
consists of the following polynomial-time algorithms:
a key generation algorithm KeyGen, a signing algorithm Sign
and a verification algorithm Verify. KeyGen takes as input the security parameter
$\lambda$ and outputs a pair of keys $(psk,ssk)$, where $ssk$ is the
secret key and $psk$ is the corresponding public verification key. Algorithm
Sign takes a message $m$ from the message space $\mathcal{M}$
and the secret key $ssk$ as input
and outputs a signature $\sigma$.
Algorithm Verify takes as input the public key $psk$, a message $m$
and a signature $\sigma$, and outputs \texttt{accept} or \texttt{reject}
depending upon whether the signature is valid or not.
Any of these algorithms can be probabilistic in nature.
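As a concrete illustration of this interface, the following minimal Python sketch
instantiates (KeyGen, Sign, Verify) with ECDSA, assuming the pyca/cryptography package
is available; it is a sketch of the interface only, not part of the scheme itself.
\begin{verbatim}
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

ssk = ec.generate_private_key(ec.SECP256R1())  # KeyGen: secret key ssk
psk = ssk.public_key()                         #         public key psk

m = b"message to be authenticated"
sigma = ssk.sign(m, ec.ECDSA(hashes.SHA256())) # Sign(ssk, m) -> sigma

try:                                           # Verify(psk, m, sigma)
    psk.verify(sigma, m, ec.ECDSA(hashes.SHA256()))
    print("accept")
except InvalidSignature:
    print("reject")
\end{verbatim}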
The correctness and security (existential unforgeability under adaptive
chosen message attacks~\cite{GMR_ACM}) of a digital signature scheme are described as follows.
\begin{enumerate}
\item \textit{Correctness}:\q Algorithm Verify always accepts a signature generated by an honest signer, that is,
\begin{align*}
\Pr[\text{Verify}_{psk}(m,\text{Sign}(ssk,m))=\texttt{accept}]=1.
\end{align*}
\item \textit{Security}:\q Let Sign$_{ssk}(\cdot)$ be the signing oracle and $\mathcal{A}$
be any probabilistic polynomial-time adversary with an oracle access to Sign$_{ssk}(\cdot)$.
The adversary $\mathcal{A}$ adaptively makes a polynomial
number of sign queries to Sign$_{ssk}(\cdot)$ for different messages and gets back the signatures
on those messages. The signature scheme is secure if $\mathcal{A}$ cannot produce,
except with some probability negligible in $\lambda$,
a valid signature on a message not queried previously, that is, for
any probabilistic polynomial-time adversary $\mathcal{A}^{\text{Sign}_{ssk}(\cdot)}$,
the following probability
\beas
\Pr[(m,\sigma)\leftarrow\mathcal{A}^{\text{Sign}_{ssk}(\cdot)}(psk):
{m\not\in Q_{s}} \wedge \text{Verify}_{psk}(m,\sigma)=\texttt{accept}]
\eeas
is negligible in $\lambda$, where $Q_{s}$ is the set of sign queries made by
$\mathcal{A}$ to $\text{Sign}_{ssk}(\cdot)$.
The probability is taken over the internal coin tosses of $\mathcal{A}$.
\end{enumerate}
\subsection{Discrete Logarithm Assumption}
\label{disLog}
The discrete logarithm problem~\cite{Dlog_McCurley,Dlog_Bellare} over a multiplicative group $G_q=\langle g \rangle$ of prime order $q$ and generated by $g$
is defined as follows.
\begin{definition}[Discrete Logarithm Problem]
Given $y\in G_q$, the discrete logarithm problem over $G_q$ is to compute $x\in\Z_q$
such that $y=g^x$.
\end{definition}
The discrete logarithm assumption over $G_q$ says that, for any probabilistic polynomial-time
adversary $\mathcal{A}(1^\lambda)$, the probability
\beas
\Pr_{x\xleftarrow{R}\Z_q}[x\leftarrow\mathcal{A}(y): y=g^x]
\eeas
is negligible in $\lambda$, where the probability is taken over the internal
coin tosses of $\mathcal{A}$ and the random choice of $x$.
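The following toy Python sketch illustrates the problem; the parameters are hypothetical
and far too small for security, which is precisely why the exhaustive search below succeeds.
With $q$ of cryptographic size, such a search is infeasible.
\begin{verbatim}
p, q = 23, 11       # q divides p - 1
g = 4               # 4 generates the order-11 subgroup of Z_23^*
y = pow(g, 7, p)    # instance: y = g^x with secret x = 7

# O(q) brute force; infeasible once |q| = 2*lambda + 1 bits
x = next(e for e in range(q) if pow(g, e, p) == y)
assert x == 7
\end{verbatim}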
\begin{figure}[t]
\centering
\includegraphics[width=.4755\textwidth]{DPOR.eps}
\caption{The entities involved in a dynamic POR scheme.
The client (data owner) processes the data file $F$ to form another file $F'$ and outsources it to the cloud storage server.
She can later read or write the outsourced data. For a privately verifiable scheme,
the client performs audits on her data.
For a publicly verifiable scheme, she can delegate the auditing task to a third party auditor who performs audits
on behalf of the client.}
\label{fig:dpor}
\end{figure}
\subsection{Dynamic Proofs of Retrievability}
\label{dpor_def}
We define a proofs-of-retrievability scheme for \textit{dynamic} data as follows~\cite{Wichs_ORAM,Stefanov_CCS}.
\begin{definition}[Dynamic POR]
A dynamic POR scheme consists of the following protocols between two stateful parties: a client (data owner) and a server.
\begin{itemize}
\item \emph{Init($1^\lambda,n,\beta,F$):} This protocol associates a random file-identifier \emph{\texttt{fid}}
to the data file $F$ consisting of $n$ data blocks each of $\beta$ bits, and it outputs the client
state $state_C$ and another file $F'$ to be stored by the server.
\item \emph{Read($i,F',state_C,\texttt{fid}$):} This protocol outputs the data block at the $i$-th location of
the current state of the file or \emph{\texttt{abort}}.
\item \emph{Write($i,\texttt{updtype},B,F',state_C,\texttt{fid}$):} This protocol inserts the block $B$ after the $i$-th block of the file,
sets the $i$-th block of the file to $B$, or deletes the $i$-th block of the file ($B$ is \emph{\texttt{null}}
in this case), depending on the value of the variable \emph{\texttt{updtype}}.
It outputs updated $(\tilde{F}',\widetilde{state}_C)$ or \emph{\texttt{abort}}.
\item \emph{Audit($F',state_C,\texttt{fid}$):} This protocol checks memory contents of the current state of the data file
and outputs 1 or 0.
\end{itemize}
\end{definition}
A dynamic POR scheme is \textit{privately verifiable} if only the client with some secret information can perform an audit,
that is, $state_C$ is secret. Otherwise, it is \textit{publicly verifiable}.
For a publicly verifiable dynamic POR scheme, a third party auditor (TPA) can audit
the client's data on behalf of the client who delegates her auditing task to the TPA. In this case, we use the term ``verifier''
to denote an auditor who can be the TPA or the client herself.
Fig.~\ref{fig:dpor} shows the entities involved in a dynamic POR scheme.
Security of a dynamic POR scheme is described
in Section~\ref{security}.
\subsection{Homomorphic Hash Function}
\label{hom_hash}
A homomorphic hash function $h: \F^m\rightarrow G_q$
(for a finite field $\F$ and a multiplicative group $G_q$ of prime order $q$)
is a hash function satisfying the following two properties:
1) for vectors $\hbox{u},\hbox{v}\in\F^m$ and scalars $\alpha,\beta\in\F$,
it holds that $h(\alpha\hbox{u} + \beta\hbox{v}) = h(\hbox{u})^{\alpha}\cdot h(\hbox{v})^{\beta}$
(the \textit{homomorphic} property), and 2) it is computationally hard to find vectors $\hbox{u},\hbox{v}\in\F^m$ ($\hbox{u}\not=\hbox{v}$)
such that $h(\hbox{u})=h(\hbox{v})$ (the \textit{collision-resistance} property).
Krohn et al.~\cite{Krohn_SP} construct a homomorphic hash function in the context of content distribution.
The construction is similar to that proposed in incremental hashing schemes~\cite{IncCrypto_CR}.
Let $G_q$ be a multiplicative group of prime order $q$.
Let $m$ elements (generators) $g_1,g_2,\ldots,g_m$ be selected randomly from $G_q$.
Then, the homomorphic hash of a vector $\hbox{u}=[u_{1},u_{2},\ldots,u_{m}]\in \Z_q^m$ is defined as
$h(\hbox{u})=\prod_{i=1}^{m}{g_i}^{u_i}$.
The hash function thus constructed is homomorphic, and
the collision-resistance property is derived from the discrete logarithm assumption over $G_q$.
We use this construction in our dynamic POR scheme to generate
authentication tags for data blocks (see Section~\ref{tag_gen}).
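The following minimal Python sketch implements this construction with toy (insecure)
parameters and checks the homomorphic property; real deployments use groups of
cryptographic size, as discussed in Section~\ref{tag_gen}.
\begin{verbatim}
import random

p, q, m = 23, 11, 4          # toy sizes; in practice |q| = 2*lambda + 1
gens = []
while len(gens) < m:         # random non-identity elements of order q
    g = pow(random.randrange(2, p), (p - 1) // q, p)
    if g != 1:
        gens.append(g)

def h(u):
    # h(u) = prod_i g_i^{u_i} mod p  for a vector u in Z_q^m
    out = 1
    for g_i, u_i in zip(gens, u):
        out = out * pow(g_i, u_i, p) % p
    return out

u = [random.randrange(q) for _ in range(m)]
v = [random.randrange(q) for _ in range(m)]
a, b = 3, 5
lin = [(a * x + b * y) % q for x, y in zip(u, v)]
# homomorphic property: h(a*u + b*v) = h(u)^a * h(v)^b
assert h(lin) == pow(h(u), a, p) * pow(h(v), b, p) % p
\end{verbatim}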
\section{Related Work}
\label{scs}
Ateniese et al.~\cite{Ateniese_CCS} introduce the notion of \textit{provable data possession} (PDP).
In a PDP scheme, the client computes an authentication tag (e.g., message authentication code~\cite{CBC_MAC}) for each block
of her data file and uploads the file along with the authentication tags. During an audit protocol,
the client samples a predefined number of random block-indices and sends them to the server
(\textit{challenge} phase). The cardinality of the challenge set is typically taken to be $O(\lambda)$,
where $\lambda$ is the security parameter. Depending upon the challenge, the server does some computations
over the stored data and sends a proof to the client (\textit{response} phase). Finally, the client checks
the integrity of her data based on this proof (\textit{verification} phase).
\textit{Almost all} data blocks can be retrieved from a (possibly malicious) server
passing an audit with a non-negligible probability.
Other PDP schemes include~\cite{Ateniese_SCOM,Erway_TISSEC,Wang_TPDS,Wang_TC,Curtmola_ICDCS}.
Juels and Kaliski~\cite{JK_CCS} introduce \textit{proofs of retrievability} (POR) for static data
(Naor and Rothblum~\cite{NR_JACM} give a similar idea for sublinear authenticators).
According to Shacham and Waters~\cite{SW_ACR}, the retrievability guarantee for \textit{all}
data blocks of the outsourced file can be achieved by encoding the original file with an
erasure code (see Section~\ref{erasure_code}) before authenticating (and uploading) the blocks
of the encoded file. Due to the redundancy added to the data blocks, the server has to delete
or modify a considerable number of blocks to actually delete or modify a data block which
makes it difficult for the server to pass an audit.
Following the work by Juels and Kaliski,
several POR schemes have been proposed for static data (\textit{static POR}) and dynamic data (\textit{dynamic POR}).
Shacham and Waters~\cite{SW_ACR} propose two POR schemes for static data (one with private verifiability and another with
public verifiability) where the response from the server is short.
Bowers et al.~\cite{Bowers_CCSW} propose a theoretical framework for designing POR schemes
and provide a prototype implementation of a variant of~\cite{JK_CCS}. In another work,
Bowers et al.~\cite{Bowers_HAIL} introduce HAIL (High-Availability and Integrity Layer),
a distributed POR setting where the client's data are disseminated across multiple servers.
Dodis et al.~\cite{Wichs_HA} introduce a notion called ``POR codes'' and show how POR schemes can be
instantiated based on these POR codes. They explore the connection between POR codes and the notion
of hardness amplification~\cite{Hard_Amp}.
Xu and Chang~\cite{Xu_ASIACCS} improve the privately verifiable scheme of~\cite{SW_ACR}
by making the communication bandwidth required for an audit to be $O(\lambda)$.
Armknecht et al.~\cite{Outpor_CCS} propose a POR scheme where any entity among the client (data owner),
the auditor and the cloud server can be malicious, and any two of them can collude as well.
The auditor performs two audits: one for the auditor itself and another on behalf
of the client. The challenge sets are generated using a public randomized algorithm derived
from the hash value of the latest block added to the Bitcoin block chain~\cite{Nakamoto08}.
A few dynamic POR schemes exist in the literature.
Stefanov et al.~\cite{IRIS} propose an authenticated file system called ``Iris''
that is highly scalable and resilient against a malicious cloud server.
Cash et al.~\cite{Wichs_ORAM} encode a small number of data blocks \textit{locally}
and hide the access pattern of the related (belonging to the same codeword) blocks
from the server using oblivious RAM (ORAM)~\cite{ORAM_JACM}. Due to the use of
expensive ORAM primitives, this scheme is inefficient.
Shi et al.~\cite{Stefanov_CCS} propose two practical dynamic POR schemes which reduce
the cost of computation as well as the communication bandwidth required to execute
the protocols involved.
Chandran et al.~\cite{Bhavana_TCC} introduce the notion of ``locally updatable and
locally decodable codes'' and propose an efficient dynamic POR scheme by
applying the techniques used in the construction of such a code.
Ren et al.~\cite{Ren_TSC15} propose a dynamic POR scheme for multiple storage servers
where the data file is split into data blocks and each of these data blocks is encoded
using intra-server (erasure coding) and inter-server (network coding) redundancy.
An update in a block requires changing only a few codeword symbols.
Moreover, the inter-server redundancy achieved using network coding reduces
the repair bandwidth required in case any of the servers fails.
The POR scheme by Guan et al.~\cite{Guan_ESORICS} exploits
the privately verifiable scheme of~\cite{SW_ACR} and gives
a publicly verifiable scheme with the help of indistinguishability obfuscation
($i\mathcal{O}$)~\cite{BarakIO_CR,GargIO_FOCS}.
\section{Dynamic POR Scheme with Public Verifiability}
\label{scheme}
In this section, we describe our publicly verifiable dynamic POR scheme with efficient
writes and audits.
Like the existing dynamic POR schemes~\cite{Wichs_ORAM,Stefanov_CCS}, our
construction also relies on the hierarchical structure provided by
oblivious RAM~\cite{ORAM_JACM}. Specifically, we follow a storage
structure similar to the one proposed by Shi et al.~\cite{Stefanov_CCS}.
However, our construction is more efficient than their scheme in terms
of the cost of a write operation and the cost of an audit.
Our construction is based on \textit{collision-resistant homomorphic hashing}
technique~\cite{Krohn_SP,IncCrypto_CR} along with a digital signature scheme.
To the best of our knowledge, the homomorphic hashing technique has not been used
before in the context of POR schemes.
\subsection{Storage Structure for Data Blocks}
\label{storage_blocks}
Our scheme relies on a storage structure similar to that proposed by Shi et al.~\cite{Stefanov_CCS}.
Let the client (data owner) associate a random file-identifier \texttt{fid} of $\lambda$ bit-size to the data file she
wants to outsource to the cloud server. We assume that the data file is divided into blocks of size
$\beta$ bits, and read (and write) operations are performed on these blocks.
The value of $\beta$ is taken to be $\flr{\log\tilde{p}}$ for a large prime $\tilde{p}$. The way this prime
$\tilde{p}$ is selected is discussed in Section~\ref{buff_H}. For static data, a standard way
to guarantee retrievability of the file is to encode the file using an erasure code~\cite{SW_ACR}.
The main drawback of using erasure codes in \textit{dynamic} POR is that an update in a single block
in a codeword (say, C) is followed by updates on other $O(n)$ blocks in C,
where $n$ is the number of blocks being encoded to form C.
The underlying idea to overcome this drawback is not to update the encoded copy (C)
for every write operation (insertion, deletion or modification). Instead, it is updated (or rebuilt) only when
sufficient updates are done on the data file. Thus, the amortized cost for writes is
reduced dramatically. However, this encoded copy stores stale data between two such rebuilds.
Therefore, a hierarchical log structure is maintained which temporarily
stores values for the intermediate writes between two successive rebuilds of C.
Each level of this hierarchical log
is also encoded using an erasure code.
We adopt the storage structure and code construction mentioned above in our scheme.
However, we use collision-resistant homomorphic hashing to construct another hierarchical
storage (discussed in Section~\ref{storage_tags}) in order to reduce the cost of a write
and an audit for the client.
Our scheme involves the following three data structures in order to store the data blocks
of the client's file:
\begin{itemize}
\item an \textit{unencoded} buffer U containing all the up-to-date data blocks of the file
(that is, U is updated after every write operation is performed),
\item an \textit{encoded} buffer C which is updated after every $n$ writes
(that is, C is rebuilt afresh by encoding the latest U after every $n$ write operations), and
\item an
\textit{encoded} hierarchical (multiple levels of buffers) log structure H which accommodates
all intermediate writes between two successive rebuilds of C (H is made empty after every $n$ writes).
\end{itemize}
We note that \textit{all} of these data structures are stored on the cloud server.
The server also stores two additional data structures, $\tilde{\text{H}}$ and $\tilde{\text{C}}$
(similar to H and C, respectively), for authentication tags described in Section~\ref{storage_tags}.
\subsubsection{Structure of Buffer U}
\label{buff_U}
The buffer U contains an up-to-date copy of the data file. Reads and writes are
performed directly on the required locations of U. A Merkle hash tree is maintained
over the data blocks of U to check the authenticity of the read block (see Section~\ref{MHT} for the description
of a Merkle hash tree). The Merkle proof sent by the server along with the read block
is verified with respect to the up-to-date root-digest (say, $digMHT$) of the Merkle hash tree.
One can also use other authenticated data structures
like rank-based authenticated skip lists~\cite{Erway_TISSEC} instead of a Merkle hash tree.
Let $n$ be the number of blocks the client outsources to the cloud server initially. So the height
of the Merkle tree built on U is $\ceil{\log n}$.
Read and write operations on the buffer U are described in detail in Section~\ref{operations}.
\subsubsection{Structure of Hierarchical Log H}
\label{buff_H}
A hierarchical log structure H is maintained that consists of $(k+1)$ levels
$\text{H}_0,\text{H}_1,\ldots,\text{H}_k$, where $k=\flr{\log n}$.
The log structure H stores the intermediate writes temporarily. For each $0\le l\le k$,
the $l$-th level H$_l=(X_l,Y_l)$ consists of an encoded copy of $2^l$ data blocks using
a $(2^{l+1},2^l,2^l)$-erasure code, where $X_l$ and $Y_l$ contain $2^l$ blocks each.
The original data blocks encoded in H$_l$ arrive at time $t,t+1,\ldots,t+2^l-1\mod n$,
where $t$ is a multiple of $2^l$. We describe the encoding procedure as follows.
Let $\tilde{p}$ be a large prime such that $\tilde{p}=\alpha\cdot(2n)+1$ for some $\alpha\in\N$ and
the bit-size of a block $\beta=\flr{\log\tilde{p}}$, where $\beta\gg\lambda$. Let $\tilde{g}$ denote a generator of
$\Z_{\tilde{p}}^*$. Then, $\omega=\tilde{g}^\alpha\Mod\tilde{p}$ is a $2n$-th primitive root of unity modulo $\tilde{p}$.
When a block $B$ is written to H, it is inserted in the topmost level ($l=0$) if H$_0$ is empty.
That is, $X_0$ is set to $B$. In addition, $Y_0$ is set to $B\cdot\omega^{\psi(t)}$ for the $t$-th ($\Mod n$)
write, where $\omega$ is the $2n$-th primitive root of unity modulo $\tilde{p}$.
Here, $\psi(\cdot)$ is the bit reversal function, where $\psi(t)$ is the value of the binary string
obtained by reversing the binary representation of $t$.
If the top $i$ levels $\text{H}_0,\text{H}_1,\ldots,\text{H}_{i-1}$ are already full,
a \textit{rebuild} is performed to accommodate all the blocks in these levels
as well as the new block in $\text{H}_{i}$ (and to make all the levels
up to $\text{H}_{i-1}$ empty). Shi et al.~\cite{Stefanov_CCS} employ a fast incrementally constructible code based on
Fast Fourier Transform (FFT)~\cite{FFT}\footnote{We can use any linear-time encodable and
decodable error-correcting code~\cite{Spielman} instead of the FFT-based code. However, as we
compare the performance of our scheme with that of the ``state-of-the-art'' publicly verifiable
scheme of~\cite{Stefanov_CCS} in Section~\ref{performance_ana}, we use similar code and parameters
for the ease of comparison.
}.
Fig.~\ref{fig:rebuildX} describes the algorithm
for rebuild of $X_l$ that in turn uses the algorithm \texttt{mix} shown in Fig.~\ref{fig:mixX}.
Although the algorithm deals with $X_l$, the same algorithm can be used for rebuilding $Y_l$
if we replace the $X$ arrays by the corresponding $Y$ arrays and the incoming block $B$ by
$B\cdot\omega^{\psi(t)}$. We refer to~\cite{Stefanov_CCS} for the form of the $(2^l\times 2^{l+1})$
generator matrix $G_l$ for the $l$-th level code. Let $\tilde{\hbox{x}}_l$ be the vector
containing $2^l$ data blocks most recently inserted in H (after applying a permutation).
Then, the output of the algorithm \texttt{mix} for H$_l$ is the same as that when
$\tilde{\hbox{x}}_l$ is multiplied by $G_l$.
Any $(2^l\times 2^l)$ submatrix of the generator matrix $G_l$ is full rank, and thus,
the code achieves the maximum distance separable (MDS) property.
As a concrete example, the rebuild of $X_3$ is demonstrated in Fig.~\ref{fig:rebuildH}.
The other part of H$_3$ (that is, $Y_3$) is rebuilt in a similar fashion.
We observe that, by using this code, the rebuild cost of H$_l$ is $O(\beta\cdot 2^l)$
(i.e., linear in the length of H$_l$) since the algorithm \texttt{mix} populates H$_l$ by combining two arrays
$\text{H}_{l-1},\text{H}'_{l-1}\in\Z_{\tilde{p}}^{2^{l-1}}$ (see Fig.~\ref{fig:mixX} and Fig.~\ref{fig:rebuildH}).
The $l$-th level is rebuilt after $2^l$ writes.
Therefore, the amortized cost for rebuilding is $O(\beta\log n)$ per write operation.
Each rebuild of the buffer C (discussed in Section~\ref{buff_C})
is followed by making all levels of H empty.
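For concreteness, the following minimal Python sketch implements the algorithm \texttt{mix}
of Fig.~\ref{fig:mixX} and the rebuild loop of Fig.~\ref{fig:rebuildX} with toy parameters
$n=4$, $\tilde{p}=17$ and $\omega=9$ (an $8$-th primitive root of unity modulo $17$); the
bit-reversal function $\psi$ is included for completeness.
\begin{verbatim}
n, p, omega = 4, 17, 9        # p = alpha*(2n) + 1 with alpha = 2

def psi(t, bits):
    # bit reversal of the 'bits'-bit representation of t
    return int(format(t, '0{}b'.format(bits))[::-1], 2)

def mix(a0, a1, l):
    # combine two arrays of length 2^l into one of length 2^{l+1}
    wl = pow(omega, (2 * n) // 2 ** (l + 1), p)  # 2^{l+1}-th root
    out = [0] * 2 ** (l + 1)
    for i in range(2 ** l):
        t = pow(wl, i, p) * a1[i]
        out[i] = (a0[i] + t) % p                 # Eqn. 1
        out[i + 2 ** l] = (a0[i] - t) % p        # Eqn. 2
    return out

def rebuild(levels, block, l):
    # levels[0..l-1] are full; empty them and fill levels[l]
    cur = mix(levels[0], [block], 0)
    for i in range(1, l):
        cur = mix(levels[i], cur, i)
    for i in range(l):
        levels[i] = None
    levels[l] = cur
    return levels

levels = [[5], [3, 7], None]      # H_0, H_1 full; H_2 empty
levels = rebuild(levels, 11, 2)   # block B = 11 triggers a rebuild
assert len(levels[2]) == 4 and levels[0] is None
\end{verbatim}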
\begin{figure}[t]
\fbox{
\begin{minipage}[t]{.975\textwidth}
\renewcommand{\labelitemi}{$\bullet$}
\begin{center}
\textbf{Rebuild algorithm for $X_l$ to accommodate $B$ in H}
\end{center}
\textbf{Input}: Already full levels $X_0,X_1,\ldots,X_{l-1}$ and empty $X_l$.\newline
\textbf{Output}: Empty levels $X_0,X_1,\ldots,X_{l-1}$ and rebuilt $X_l$.
\begin{itemize}
\item $X'_1\leftarrow\texttt{mix}(X_0,B,0)$
\item \textbf{For} $i=1$ to $l-1$ \textbf{do}\newline
\q$X'_{i+1}\leftarrow\texttt{mix}(X_i,X'_i,i)$
\item Make $X_0,X_1,\ldots,X_{l-1}$ empty and output $X_l=X'_l$
\end{itemize}
\end{minipage}}
\caption{Rebuild algorithm for $X_l$.}\label{fig:rebuildX}
\end{figure}
\begin{figure}[t]
\fbox{
\begin{minipage}[t]{.975\textwidth}
\renewcommand{\labelitemi}{$\bullet$}
\begin{center}
\textbf{Algorithm} $\texttt{mix}(A_0,A_1,l)$
\end{center}
\textbf{Input}: Two arrays $A_0,A_1\in\Z_{\tilde{p}}^{2^l}$.\newline
\textbf{Output}: Array $A$ of length $2^{l+1}$.
\begin{itemize}
\item Let $\omega_l=\omega^{2n/2^{l+1}}$ be the $2^{l+1}$-th primitive root of unity modulo $\tilde{p}$
\item \textbf{For} $i=0$ to $2^l-1$ \textbf{do}
\begin{align}
A[i] & \leftarrow A_0[i]+\omega_l^iA_1[i]\mod {\tilde{p}}\\
A[i+2^l] & \leftarrow A_0[i]-\omega_l^iA_1[i]\mod {\tilde{p}}
\end{align}
\item Output $A$
\end{itemize}
\end{minipage}}
\caption{Algorithm \texttt{mix} for two arrays $A_0$ and $A_1$.}\label{fig:mixX}
\end{figure}
\begin{figure*}[htbp]
\centering
\fbox{\includegraphics[width=.46\textwidth]{H2.eps}}
\qquad
\fbox{\includegraphics[width=.46\textwidth]{H3.eps}}
\caption{Rebuild process for $X_3$. (a) Initially, the levels $X_0,X_1$ and $X_2$ are already full, and $X_3$ is empty.
The rebuild algorithm starts after a new block $B$ arrives. It forms temporary levels $X'_1,X'_2$ and $X'_3$
using the algorithm \texttt{mix}. Linear mixing of blocks is shown using black arrows (Eqn.~1) and red arrows (Eqn.~2).
(b) Finally, the levels $X_0,X_1$ and $X_2$ are made empty, and $X_3=X'_3$ is the newly rebuilt level.}\label{fig:rebuildH}
\end{figure*}
\subsubsection{Structure of Buffer C}
\label{buff_C}
Unlike the buffer U (and H), no read or write operations are performed directly on the buffer C.
After $n$ write operations, the buffer U is encoded using an erasure code to form a new copy of C,
and the existing copy of C is replaced by this new one. The rebuild of C can be done using the same
FFT-based code discussed in Section~\ref{buff_H} which costs $O(\beta n\log n)$ both in time and bandwidth.
As C is rebuilt after every $n$ write operations, the cost incurred per write is $O(\beta\log n)$.
We note that C contains stale data between two successive rebuilds. However, the intermediate writes
are accommodated in H with appropriate encoding. Thus, these blocks written between two successive
rebuilds of C are also retrievable at any point of time.
\subsection{Storage Structure for Authentication Tags Corresponding to Data Blocks}
\label{storage_tags}
We note that each data block in U, H and C
is of size $\beta=\flr{\log\tilde{p}}$ bits for a large prime $\tilde{p}=\alpha\cdot(2n)+1$ for some $\alpha\in\N$.
Thus, the size $\beta$ of a data block satisfies $\beta\gg\lambda$, where $\lambda$ is the security parameter.
For example, $\beta$ is taken to be 64 KB and $\lambda$ is taken to be 128 bits in our scheme (see Table~\ref{tab:parameters} in Section~\ref{performance_ana}).
In addition to the log structure H and the buffer C, two similar
structures $\tilde{\text{H}}$ and $\tilde{\text{C}}$ for authentication tags corresponding
to the blocks in H and C (respectively) are stored on the cloud server. Thus, the server
stores U, H, C, $\tilde{\text{H}}$ and $\tilde{\text{C}}$. The benefits of storing
$\tilde{\text{H}}$ and $\tilde{\text{C}}$ on the server are as follows.
Let us assume that the authentication tags for data blocks have the following properties.
\begin{enumerate}
\item The size of a tag is much less than that of a block.
\item The tags are homomorphic in the sense that, given the tags of two blocks $B_1$ and $B_2$,
the tag on any linear combination of $B_1$ and $B_2$ can be generated.
\end{enumerate}
We note that the fundamental operation for a write (or rebuild) on H and C is to encode data blocks, that is,
to compute a linear combination of those data blocks (see Eqn.~1 and Eqn.~2 in Fig.~\ref{fig:mixX}).
Due to the \textit{second} property mentioned above, while the server itself performs write operations on H and C,
the client (data owner) can perform similar operations on $\tilde{\text{H}}$ and $\tilde{\text{C}}$.
On the other hand, due to the \textit{first} property, the bandwidth required between the client and
the server for a write operation decreases significantly as the client now has to download much
smaller tags instead of large data blocks. Storage is relatively cheap nowadays, whereas bandwidth is
more expensive and often limited. Therefore, it is reasonable to trade off storage in order to reduce
the communication bandwidth between the client and the server required for a write (or rebuild).
Indeed, the authentication tags (described in the following section) we use in our dynamic POR scheme satisfy the following properties.
\begin{itemize}
\item The size of a tag is $O(\lambda)$ and is independent of the size of a data block
$\beta$, where $\lambda\ll\beta$.
\item The homomorphic property is achieved by using a collision-resistant homomorphic hash function.
\end{itemize}
Apart from efficient write operations, the cost of an audit in our \textit{publicly verifiable dynamic} POR scheme is comparable
to that in the privately verifiable scheme of~\cite{Stefanov_CCS}, and it is \textit{much less} than that
in the publicly verifiable scheme discussed in the same work.
\subsubsection{Generation of Authentication Tags}
\label{tag_gen}
\smallskip
\noindent
\textbf{Setup}\q
For the data file identified by \texttt{fid}, the client runs an algorithm Setup($1^\lambda$)
to set parameters for generating authentication tags.
The algorithm Setup selects two random primes $p$ and $q$ such that $|q|=\lambda_q=2\lambda+1$,
$|p|=\lambda_p=O(\lambda)$ and $q|(p-1)$. Now, it divides each block $B$ of the data file into segments
of size $(\lambda_q-1)$ bits each. This ensures that each segment is less than $q$ and can
therefore be represented as an element of $\Z_q$. Thus, $m=\ceil{\beta/(\lambda_q-1)}$
is the number of segments in a block, where a block is $\beta=\flr{\log\tilde{p}}$ bits long.
In this setting, each block $B$ can be represented as a vector $[b_{1},b_{2},\ldots,b_{m}]\in \Z_q^m$.
Let $G_q$ be the (unique) subgroup of $\Z_p^*$ of order $q$. That is, $G_q$ consists of the
elements of $\Z_p^*$ whose order divides $q$. Then, $m$ random elements $g_1,g_2,\ldots,g_m\xleftarrow{R}G_q$ are selected.
Let $\mathcal{S}=(\text{KeyGen}, \text{Sign}, \text{Verify})$ be a digital signature scheme (see Section~\ref{dig_sig})
where the algorithm Sign takes messages in $\{0,1\}^*$ as input and outputs signatures
of size $O(\lambda)$ bits each. Let the pair of signing and verification keys for
$\mathcal{S}$ be $(ssk,psk)$.
The Setup algorithm outputs the primes $p$ and $q$, the secret key $SK=ssk$,
the public parameters $PK=(g_1,g_2,\ldots,g_m,\texttt{fid},psk)$, and the descriptions of $G_q$ and $\mathcal{S}$.
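The following minimal Python sketch illustrates the segmentation step of Setup with a toy
prime ($q=11$, so segments are $3$ bits long); in the scheme itself, $|q|=2\lambda+1$.
\begin{verbatim}
q = 11
seg_bits = q.bit_length() - 1   # segments of (lambda_q - 1) bits

def to_vector(block_bytes):
    # represent a block as a vector [b_1, ..., b_m] in Z_q^m,
    # where m = ceil(beta / (lambda_q - 1))
    bits = ''.join(format(byte, '08b') for byte in block_bytes)
    m = -(-len(bits) // seg_bits)      # ceiling division
    bits = bits.ljust(m * seg_bits, '0')
    return [int(bits[i * seg_bits:(i + 1) * seg_bits], 2)
            for i in range(m)]

B = to_vector(b"\xab\xcd")      # a toy 16-bit block (beta = 16)
assert all(b < q for b in B) and len(B) == 6   # m = ceil(16/3)
\end{verbatim}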
\medskip
\noindent
\textbf{Format of a Tag}\q
The client computes the homomorphic hash~\cite{Krohn_SP,IncCrypto_CR} on a block $B=[b_{1},b_{2},\ldots,b_{m}]\in \Z_q^m$ as
\begin{align}\label{eqn:tag}
h(B)=\prod\limits_{i=1}^{m}{g_i}^{b_i} \Mod p.
\end{align}
Using the signature scheme $\mathcal{S}$, the client generates
the final authentication tag for the block $B$ as
\begin{align*}
\tilde{h}(B)=(h(B),\text{Sign}_{ssk}(h(B),\texttt{fid},\texttt{addr},t)),
\end{align*}
where \texttt{addr} is the physical address $B$ is written to (at time $t$)
and \texttt{fid} is the file-identifier of the data file the block $B$ belongs to.
\medskip
\noindent
\textbf{Collision-resistance and Homomorphic Properties}\q
As shown in~\cite{IncCrypto_CR,Krohn_SP}, given that the discrete logarithm assumption (see Section~\ref{disLog}) holds in $G_q$,
it is computationally hard to find two blocks $B_1$ and $B_2$ such that $B_1\not=B_2$ and $h(B_1)=h(B_2)$
(\textit{collision-resistance} property).
On the other hand, given $B_1=[b_{11},b_{12},\ldots,b_{1m}]\in \Z_q^m$ and
$B_2=[b_{21},b_{22},\ldots,b_{2m}]\in \Z_q^m$, any linear combination of $B_1$ and $B_2$ can be written as
$B=\alpha_1 B_1+\alpha_2 B_2=[\alpha_1 b_{11}+\alpha_2 b_{21},\alpha_1 b_{12}+\alpha_2 b_{22},\ldots,\alpha_1 b_{1m}+\alpha_2 b_{2m}]\in \Z_q^m$.
Therefore, $h(B)$ can be computed as
$\prod_{i=1}^{m}{g_i}^{\alpha_1 b_{1i}+\alpha_2 b_{2i}} \Mod p=h(B_1)^{\alpha_1}\cdot h(B_2)^{\alpha_2}$
(\textit{homomorphic} property).
\medskip
\noindent
\textbf{Size of a Tag}\q
The size of an authentication tag $\tilde{h}(B)$ is the sum of $|h(B)|$ (which is $\lambda_p=O(\lambda)$ bits)
and the size of a signature in $\mathcal{S}$.
If we use the standard ECDSA signature scheme~\cite{ECDSA}
as $\mathcal{S}$, then a signature is $4\lambda$ bits long\footnote{To reduce the size of
a tag, we can use short signatures of size $2\lambda$ bits~\cite{BLS_JOC}. However, the verification of a signature
is more expensive due to computation of bilinear pairings~\cite{Galbraith_DAM}.}.
Thus, $|\tilde{h}(B)|$ is also $O(\lambda)$ bits.
For the values of different parameters considered in our scheme
(see Table~\ref{tab:parameters} in Section~\ref{performance_ana}),
the size of a tag is only 192 bytes which is very small compared to the size of a block (64 KB).
\medskip
\noindent
\textbf{Improvement in Cost of Tag Computation}\q
To compute the homomorphic hash $h(B)$ on a block $B$ using Eqn.~\ref{eqn:tag}, it requires $m$ exponentiations and $(m-1)$
multiplications modulo $p$. We can reduce this computational complexity in the following way
at the cost of the client storing $m$ elements of $\Z_q^*$ which is essentially equivalent to just one block.
The client chooses $g\xleftarrow{R}G_q$ and $\gamma_1,\gamma_2,\ldots,\gamma_m\xleftarrow{R}\Z_q^*$.
The client sets $g_i=g^{\gamma_i} \Mod p$ for each $i\in[1,m]$. The client includes the vector
$\Gamma=[\gamma_1,\gamma_2,\ldots,\gamma_m]$ and $g$ in her secret key $SK$ and makes $g_1,g_2,\ldots,g_m$ public as before.
Now, the homomorphic hash $h(B)$ on a block $B$ is computed as
\begin{align}\label{eqn:Imp}
h(B)& = \prod\limits_{i=1}^{m}{g}^{\gamma_i b_i} \Mod p\notag\\
& = g^{\sum_{i=1}^{m}{\gamma_i b_i}} \Mod p
\end{align}
which requires only \textit{one} exponentiation modulo $p$
along with $m$ multiplications and $(m-1)$ additions modulo $q$. This is a substantial improvement
in the cost of computing an authentication tag. On the other hand, the storage overhead at the client's side
is $|\Gamma|$, which is the same as the size of a single block $B$. Considering the fact that the client outsources
millions of blocks to the cloud server, this amount of client storage is reasonable for all practical
purposes.
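The following minimal Python sketch contrasts the two ways of computing $h(B)$ with toy
parameters, checking that Eqn.~\ref{eqn:Imp} agrees with Eqn.~\ref{eqn:tag} while using
a single exponentiation modulo $p$.
\begin{verbatim}
import random

p, q, m = 23, 11, 4
g = 2                                  # an element of order q in Z_23^*
gamma = [random.randrange(1, q) for _ in range(m)]   # secret Gamma
gens = [pow(g, gi, p) for gi in gamma]               # public g_i

B = [random.randrange(q) for _ in range(m)]

slow = 1                               # naive: m exponentiations mod p
for g_i, b_i in zip(gens, B):
    slow = slow * pow(g_i, b_i, p) % p

e = sum(gi * bi for gi, bi in zip(gamma, B)) % q
fast = pow(g, e, p)                    # improved: one exponentiation
assert slow == fast
\end{verbatim}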
\subsubsection{Storage Structure for $\tilde{\text{H}}$ and $\tilde{\text{C}}$}
\label{buff_tags}
The storage structures for $\tilde{\text{H}}$ and $\tilde{\text{C}}$ are exactly the same
as those for H and C, respectively, except that the authentication tags
(instead of data blocks) are stored in $\tilde{\text{H}}$ and $\tilde{\text{C}}$
(see Section~\ref{storage_blocks} for structures of H and C).
\subsection{Operations}
\label{operations}
There are three types of operations involved in a dynamic POR scheme. The client can read, write and audit
her data stored on the cloud server. The read and write operations are \textit{authenticated} in that the client
can verify the authenticity of these operations. We note that though the client herself performs reads and writes on her data,
she can delegate the auditing task to a third party auditor (TPA) for a publicly verifiable scheme. As our scheme
is \textit{publicly verifiable}, we use the term \textit{verifier} to denote an auditor who can be a TPA or the client herself.
Fig.~\ref{fig:flow} gives an overview of the communication flow between the client and the server during these
operations.
\begin{figure*}[t]
\centering
\fbox{\includegraphics[width=.46\textwidth]{read.eps}}
\qquad
\fbox{\includegraphics[width=.46\textwidth]{write.eps}}
\caption{Communication flow between the client and the server for different operations described in Section~\ref{operations}.
In the setup phase, the client sets parameters for the scheme and outsources the preprocessed file to the server.
Initially, the client uploads U, C and $\tilde{\text{C}}$.
The server stores them along with H and $\tilde{\text{H}}$ that are initialized to be empty.
Then, the client can perform reads, writes and audits on her data in an \textit{interleaved} fashion.
We note that, during a write, the server itself rebuilds H and C (if necessary). On the other hand, the client
rebuilds $\tilde{\text{H}}$ and $\tilde{\text{C}}$ by downloading some of the authentication tags from them,
computing the tag on the new block (using homomorphic property of tags) and sending the new tag
to the server. As our scheme is publicly verifiable, audits can be performed by a third party
auditor (TPA) as well.}\label{fig:flow}
\end{figure*}
\subsubsection{Read}
\label{subsec:read}
Reads are performed directly from the unencoded buffer U. The authenticity and freshness of the block
read can be guaranteed by using a Merkle hash tree~\cite{Merkle_CR} (or a similar data structure like
rank-based authenticated skip list~\cite{Erway_TISSEC}) over the blocks of U.
That is, the blocks of U constitute the leaf-nodes of the Merkle hash tree
(see Section~\ref{MHT} for a brief description of a Merkle hash tree).
The server sends the Merkle proof $\Pi_{read}$ containing the requested block and the labels of the nodes along
the associated path of the tree to the client. The client maintains the up-to-date value of the root-digest
of the Merkle hash tree $digMHT$ and verifies the proof $\Pi_{read}$ with respect to this root-digest.
We note that the size of the root-digest of the Merkle hash tree is $O(\lambda)$ bits.
\subsubsection{Write}
\label{subsec:write}
Let \texttt{updtype} denote the type of a write operation which can be insertion of a new block after the $i$-th block,
deletion of the $i$-th block or modification of the $i$-th block. A write operation affects the buffers in the following way.
\begin{itemize}
\item \textbf{Write on U}\q As the buffer U is unencoded, a write operation on U can be performed
in a similar way as done on the data blocks in a dynamic provable data possession (PDP) scheme~\cite{Erway_TISSEC}.
We briefly describe the procedure as follows.
Let $digMHT$ be the root-digest of the current Merkle hash tree which is stored at the client's side.
The client performs an \textit{authenticated} read on the $i$-th data block of U (as described above).
If the associated Merkle proof $\Pi_{read}$ does not match with $digMHT$, the client aborts.
Otherwise, she computes, from $\Pi_{read}$, the value that would be the new root-digest (say, $digMHT_{new}$)
if the write operation is correctly performed on U stored at the server.
The client stores $digMHT_{new}$ temporarily at her end and asks the server to perform the write on U.
The server performs this write operation on U, computes the root-digest $digMHT_{server}$ of the Merkle hash tree
and sends $digMHT_{server}$ to the client. The client verifies whether
\begin{align}\label{eqn:MHT}
digMHT_{new}\stackrel{?}{=}digMHT_{server}.
\end{align}
If they are not equal, the client aborts. Otherwise, the client sets $digMHT=digMHT_{new}$ at her end.
\item \textbf{Write on H and $\tilde{\text{H}}$}\q We assume that deletion of a block in U
corresponds to insertion of a block (with \texttt{null} content) in the hierarchical log H.
Therefore, for a write of any \texttt{updtype} (i.e., insertion, deletion or modification),
only insertions take place in H. The way a (possibly encoded) block $B$ is inserted in H
is discussed in details in Section~\ref{buff_H}. The cloud server itself performs this operation on H.
An insertion in $\tilde{\text{H}}$ is performed by the client herself as this procedure requires
the knowledge of secret information held by the client. The client computes the authentication tag
on the (possibly encoded) block and inserts it in $\tilde{\text{H}}$.
The underlying basic operation of the rebuild phase of H is to compute a linear combination
$B$ (e.g., $\alpha_1 B_1+\alpha_2 B_2)$ of two blocks $B_1$ and $B_2$ (see Eqn.~1 and~2 in Fig.~\ref{fig:mixX}).
Similarly, the corresponding operation for the rebuild of $\tilde{\text{H}}$ is to compute
$\tilde{h}(B)$ given $\tilde{h}(B_1)$ and $\tilde{h}(B_2)$. For $i=1,2$, the client first
downloads $\tilde{h}(B_i)$ and verifies the signature on $h(B_i)$ by checking whether
\begin{align*}
\text{Verify}_{psk}((h(B_i),\texttt{fid},\texttt{addr}_i,t_i),\tilde{h}(B_i))\stackrel{?}=\texttt{accept},
\end{align*}
where $psk$ is the verification key for the signature scheme $\mathcal{S}$.
We note that \texttt{addr}$_1$ (or \texttt{addr}$_2$) is the physical address of the block $B_1$ (or $B_2$)
written at time $t_1$ (or $t_2$), and \texttt{fid} is the file-identifier of the data file the block $B$ belongs to.
For any block in H and C, the time when the block was written most recently can be easily computed
from the current time itself. If the verification passes,
the client computes $h(B)=h(B_1)^{\alpha_1}\cdot h(B_2)^{\alpha_2}$ and $\tilde{h}(B)$ subsequently
(a sketch of this tag recombination is given after this list).
This requires two exponentiations and one multiplication modulo $p$ along with one Sign and two Verify operations.
\item \textbf{Write on C and $\tilde{\text{C}}$}\q As mentioned in Section~\ref{buff_C}, C
($\tilde{\text{C}}$ for authentication tags) is rebuilt after every $n$ writes.
The server performs a rebuild on C, whereas a rebuild on $\tilde{\text{C}}$ is performed by the client.
Basic operations involved in rebuilds of C and $\tilde{\text{C}}$ are the same
as those for rebuilds of H and $\tilde{\text{H}}$, respectively, and thus are omitted here.
\end{itemize}
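The following minimal Python sketch shows this tag-recombination step with toy parameters;
the signature checks on $h(B_1)$ and $h(B_2)$ are elided, and the small generators are
hypothetical values chosen from the order-$11$ subgroup of $\Z_{23}^*$.
\begin{verbatim}
p, q = 23, 11
gens = [2, 4, 8, 16]        # toy order-11 elements of Z_23^*

def h(B):
    out = 1
    for g_i, b_i in zip(gens, B):
        out = out * pow(g_i, b_i, p) % p
    return out

def combine_tags(hB1, hB2, a1, a2):
    # h(a1*B1 + a2*B2) = h(B1)^a1 * h(B2)^a2 mod p; the client never
    # downloads the (large) blocks B1 and B2 themselves
    return pow(hB1, a1, p) * pow(hB2, a2, p) % p

B1, B2, a1, a2 = [1, 2, 3, 4], [5, 6, 7, 8], 3, 5
B = [(a1 * x + a2 * y) % q for x, y in zip(B1, B2)]
assert h(B) == combine_tags(h(B1), h(B2), a1, a2)
\end{verbatim}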
\subsubsection{Audit}
\label{subsubsec:audit}
In the \textit{challenge} phase, the verifier chooses $r=O(\lambda\log n)$ random locations $\{\texttt{addr}_i\}_{1\le i\le r}$
from all levels (where $O(\lambda)$ random locations are selected from each level) of H and C.
Then, she sends a challenge set $Q=\{(\nu_i,\texttt{addr}_i)\}_{1\le i\le r}$
to the cloud server, where $\nu_1,\nu_2,\ldots,\nu_r\xleftarrow{R}\Z_q^*$ are random coefficients.
In the \textit{response} phase, the server sends to the verifier a proof containing $B^*=\sum_{1\le i\le r}{\nu_i}B_{\texttt{addr}_i}$
and $\{\tilde{h}(B_{\texttt{addr}_i})\}_{1\le i\le r}$. Upon receiving the proof from
the server, the verifier verifies each of the signatures on $\{h(B_{\texttt{addr}_i})\}_{1\le i\le r}$.
Then, she computes $h^*=\prod_{1\le i\le r}{h(B_{\texttt{addr}_i})}^{\nu_i}$ and $h(B^*)$ using Eqn.~\ref{eqn:tag}.
Finally, the verifier checks whether
\begin{align}\label{eqn:verAudit}
h(B^*)\stackrel{?}=h^*
\end{align}
and outputs 0 if any of the verifications fails; she outputs 1, otherwise.
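The following minimal Python sketch runs the challenge, response and verification phases
end to end with toy parameters; the signature verification on the individual tags is
elided for brevity.
\begin{verbatim}
import random

p, q, m, r = 23, 11, 4, 3
gens = [2, 4, 8, 16]                   # toy order-11 elements

def h(B):
    out = 1
    for g_i, b_i in zip(gens, B):
        out = out * pow(g_i, b_i, p) % p
    return out

# server-side storage: blocks and their (unsigned) hash tags
store = {a: [random.randrange(q) for _ in range(m)] for a in range(8)}
tags = {a: h(B) for a, B in store.items()}

# challenge: r random (coefficient, address) pairs
Q = [(random.randrange(1, q), a)
     for a in random.sample(sorted(store), r)]

# response: aggregated block B* and the individual tags
B_star = [sum(nu * store[a][j] for nu, a in Q) % q for j in range(m)]

# verification: h(B*) =? prod_i h(B_addr_i)^{nu_i}
h_star = 1
for nu, a in Q:
    h_star = h_star * pow(tags[a], nu, p) % p
assert h(B_star) == h_star
\end{verbatim}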
\section{Security}
\label{security}
We define security of a dynamic POR scheme~\cite{Stefanov_CCS,Wichs_ORAM} and show that our scheme described in Section~\ref{scheme}
is secure according to this definition. We also show that the server cannot pass an audit without
storing \textit{all} data blocks properly, except with some probability negligible in $\lambda$.
\subsection{Overview of Security of a Dynamic POR Scheme}
\label{security_overview}
A dynamic POR scheme must satisfy the following properties~\cite{Stefanov_CCS}.
The formal security definition is given in Section~\ref{security_model}.
\begin{enumerate}
\item \textbf{Authenticity and Freshness}\q The authenticity property requires that the cloud server cannot
produce valid proofs during
audits without storing
the corresponding blocks and their respective authentication information untampered, except with
a probability negligible in $\lambda$.
For dynamic data,
the client can modify an existing data block. However, a malicious
cloud server may discard this change and keep an old copy of the block. As the old copy
of the block and its corresponding tag constitute a valid pair, the client has no way to detect
if the cloud server is storing the \textit{fresh} (latest) copy. Thus, the client must be convinced
that the server has stored the up-to-date blocks.\smallskip
\item \textbf{Retrievability}\q Retrievability of data requires that, given a probabilistic
polynomial-time adversary $\mathcal{A}$ that can respond correctly to a challenge $Q$
with some non-negligible probability,
there exists a polynomial-time extractor algorithm $\mathcal{E}$ that can extract \textit{all}
data blocks of the file (except with negligible probability) by challenging $\mathcal{A}$ for a polynomial
(in $\lambda$) number of times and verifying the responses sent by $\mathcal{A}$.
The algorithm $\mathcal{E}$ has a black-box rewinding access to $\mathcal{A}$.
Authenticity and freshness of data restrict the adversary $\mathcal{A}$
to produce valid responses (without storing the data in an authentic and up-to-date fashion) during these interactions
only with some probability negligible in the security parameter $\lambda$.
\end{enumerate}
\subsection{Security Model}
\label{security_model}
We first describe the following security game between the challenger (acting as the client)
and the adversary (acting as the cloud server).
\begin{itemize}
\item The adversary selects a file $F$ associated with a file-identifier \texttt{fid} to store.
The challenger processes the file to form another file $F'$ and returns $F'$ to the adversary.
The challenger stores only some metadata for verification purpose.
\item The adversary adaptively chooses a sequence of operations defined by
$\{\texttt{op}_i\}_{1\le i\le q_1}$ ($q_1$ is polynomial in
the security parameter $\lambda$), where $\texttt{op}_i$ is a read, a write or an audit.
The challenger executes these operations on the file stored by the adversary.
For each operation, the challenger verifies the response sent by the adversary
and updates the metadata at her end only if the response passes the verification.
\item Let $F^*$ be the final state of the file after $q_1$ operations.
The challenger has the latest metadata for the file $F^*$. Now, she executes an audit
protocol with the adversary. The challenger sends a random challenge set $Q$ to the adversary,
and the adversary returns a cryptographic proof to the challenger.
The adversary wins the game if it passes the verification.
\end{itemize}
\begin{definition}[\textbf{Security of a Dynamic POR Scheme}]\label{def:security_dpor}
A dynamic POR scheme is secure if, given any probabilistic polynomial-time
adversary $\mathcal{A}$ who can win the security game mentioned above with some non-negligible
probability, there exists a polynomial-time extractor algorithm $\mathcal{E}$ that can extract
all data blocks of the file by interacting (via challenge-response) with $\mathcal{A}$
polynomially many times.
\end{definition}
\subsection{Security Analysis of Our Scheme}
\label{security_analysis}
We state and prove the following theorem in order to analyze the security of our dynamic POR scheme.
\begin{theorem}\label{theorem_DPOR}
Given that the discrete logarithm assumption holds in $G_q$ and the underlying digital signature scheme is secure,
the dynamic POR scheme described in Section~\ref{scheme} is secure according to Definition~\ref{def:security_dpor}.
\end{theorem}
\begin{proofTheorem}
We use the following claim in order to prove Theorem~\ref{theorem_DPOR}.
\begin{claim}\label{claim_authenticity}
Given that the discrete logarithm assumption holds in $G_q$ and the underlying digital signature scheme is secure,
authenticity and freshness of the challenged blocks in H and C are guaranteed.
\end{claim}
\begin{proof}
We prove the above claim for the log structure H. The proof for C follows in a similar way.
In our scheme, every block $B$ (of the file identified by \texttt{fid}) in H corresponds to an authentication tag
$\tilde{h}(B)=(h(B),\text{Sign}_{ssk}(h(B),\texttt{fid},\texttt{addr},t))$ present in $\tilde{\text{H}}$,
where the signing algorithm Sign uses the secret key $ssk$ of the client and $t$ is the last write-time of the block $B$.
Let $B$ be the correct block that was actually written by the client to the address \texttt{addr} at time $t$.
Suppose this block in \texttt{addr} is challenged during an audit.
We note that the last write-time $t$ of the block is computable from \texttt{addr} and the current time.
So, the values of \texttt{fid}, \texttt{addr} and $t$ are known to the challenger.
Therefore, in order to break the authenticity of the scheme,
the probabilistic polynomial-time (PPT) adversary $\mathcal{A}$ has to find a block $B'\not=B$ and its tag $\tilde{h}(B')$ such that one of the following conditions holds:
\begin{itemize}
\item Case I: $h(B')\not=h(B)$ and $\tilde{h}(B')=(h(B'),\text{Sign}_{ssk}(h(B'),\texttt{fid},\texttt{addr},t))$,
\item Case II: $h(B')=h(B)$.
\end{itemize}
\paragraph{Case I}\quad
We show that, if the adversary $\mathcal{A}$ can find a block $B'\not=B$ and its authentication tag $\tilde{h}(B')$ such that
$h(B')\not=h(B)$ and $\tilde{h}(B')=(h(B'),\text{Sign}_{ssk}(h(B'),\texttt{fid},\texttt{addr},t))$,
then it can break the security of the underlying signature scheme
(the security of a digital signature scheme is discussed in Section~\ref{dig_sig}).
Let the adversary $\mathcal{A}$ be provided with a set of
polynomially many
authentication tags $\{\tilde{h}(B_i)=(h(B_i),\text{Sign}_{ssk}(h(B_i),\texttt{fid},\texttt{addr}_i,t_i))\}_{i\in I}$
for $\{(B_i,\texttt{addr}_i,t_i)\}_{i\in I}$
of $\mathcal{A}$'s choice
(where $I=[1,k]$ for some $k$ polynomial in $\lambda$).
Let us assume that the adversary $\mathcal{A}$ is able to find another block $B'$ and its tag
$\tilde{h}(B')=(h(B'),\text{Sign}_{ssk}(h(B'),\texttt{fid},\texttt{addr}_j,t_j))$,
such that $j\in I$, $B'\not=B_j$ and $h(B')\not=h(B_j)$.
Then, we can construct another probabilistic polynomial-time (PPT)
algorithm $\mathcal{B}^{\mathcal{O}_{ssk}(\cdot)}$ that, given the public key $psk$ and an access to the signing oracle $\mathcal{O}_{ssk}(\cdot)$,
executes $\mathcal{A}$ as a subroutine.
Initially, $\mathcal{B}$ provides the
public parameters $(g_1,g_2,\ldots,g_m,\texttt{fid},psk)$ and
the description of $G_q$ to $\mathcal{A}$.
With the help of $\mathcal{O}_{ssk}(\cdot)$, $\mathcal{B}$
responds to $\mathcal{A}$'s queries with $\{\tilde{h}(B_i)=(h(B_i),\text{Sign}_{ssk}(h(B_i),\texttt{fid},\texttt{addr}_i,t_i))\}_{i\in I}$.
Now, if $\mathcal{A}$ finds another block $B'$ and its tag
$\tilde{h}(B')=(h(B'),\text{Sign}_{ssk}(h(B'),\texttt{fid},\texttt{addr}_j,t_j))$
as described above with probability $\epsilon_{\mathcal{A}}$ in (polynomial) time $t'_{\mathcal{A}}$,
then $\mathcal{B}$ also finds a forged signature $\text{Sign}_{ssk}(h(B'),\texttt{fid},\texttt{addr}_j,t_j)$
(that was not queried to the signing oracle before)
with probability $\epsilon_{\mathcal{B}}=\epsilon_{\mathcal{A}}$ in time $t'_{\mathcal{B}}\approx t'_{\mathcal{A}}$.
\paragraph{Case II}\quad
We show that, if the adversary $\mathcal{A}$ can find a block $B'\not=B$ and
its authentication tag $\tilde{h}(B')=(h(B'),\text{Sign}_{ssk}(h(B'),\texttt{fid},\texttt{addr},t))$
such that $h(B')=h(B)$,
then it can solve the discrete logarithm problem over $G_q$
(we refer to~\cite{IncCrypto_CR,Krohn_SP} for the detailed proof showing that the collision-resistance property holds for $h$).
The idea of the proof is as follows. Let us assume that the adversary $\mathcal{A}$,
given the description of the multiplicative group $G_q=\langle g \rangle$ and
$m$ random elements $g_1,g_2,\ldots,g_m$ of $G_q$,
is able to find two blocks
$B,B'\in \Z_q^m$
such that $B\not=B'$ and $h(B)=h(B')$. Then, we can construct another PPT
algorithm $\mathcal{B}$ that, given the description of $G_q$
and $y\in G_q$, executes $\mathcal{A}$ as a subroutine to find a collision and uses this collision to compute $x\in\Z_q$ such that $y=g^x$.
In order to do that, $\mathcal{B}$ selects $z_1,z_2,\ldots,z_m\xleftarrow{R}\{0,1\}$
and $u_1,u_2,\ldots,u_m\xleftarrow{R}\Z_q$. For each $i\in [1,m]$, $\mathcal{B}$ sets
$g_i=g^{u_i}$ if $z_i=0$; it sets $g_i=y^{u_i}$ if $z_i=1$.
Then, $\mathcal{B}$ provides $\mathcal{A}$ with the description of $G_q=\langle g \rangle$ and
the elements $g_1,g_2,\ldots,g_m\in G_q$ computed in the previous step.
Now, suppose $\mathcal{A}$ finds two blocks
$B=[b_1,b_2,\ldots,b_m]\in \Z_q^m$ and $B'=[b'_1,b'_2,\ldots,b'_m]\in \Z_q^m$
with probability $\epsilon_{\mathcal{A}}$ in (polynomial) time $t'_{\mathcal{A}}$,
such that $B\not=B'$ and $h(B)=h(B')$.
Then, $\mathcal{B}$ sets $a=\sum_{z_i=1}u_i(b_i-b'_i) \Mod q$ and computes $a'=a^{-1}\Mod q$
($a$ is non-zero with probability at least $\frac{1}{2}$, since the random bits $z_i$ are hidden from $\mathcal{A}$'s view).
Since $h(B)=h(B')$, we have
\begin{align*}
& \prod_{i=1}^{m}{g_i^{b_i}} = \prod_{i=1}^{m}{g_i^{b'_i}}\\
\implies & \prod_{z_i=1}{y^{u_i(b_i-b'_i)}} = \prod_{z_i=0}{g^{u_i(b'_i-b_i)}}\\
\implies & y^a = \prod_{z_i=0}{g^{u_i(b'_i-b_i)}}\\
\implies & y^{aa'} = \prod_{z_i=0}{g^{a'u_i(b'_i-b_i)}}\\
\implies & y = g^x,
\end{align*}
where $x=\sum_{z_i=0}{a'u_i(b'_i-b_i)} \Mod q$ is the discrete logarithm of $y$ in $G_q$.
Thus, the algorithm $\mathcal{B}$ solves the discrete logarithm problem over $G_q$
with probability $\epsilon_{\mathcal{B}}\ge\frac{\epsilon_{\mathcal{A}}}{2}$
in (polynomial) time $t'_{\mathcal{B}}=t'_{\mathcal{A}}+O(m{\lambda}^3)$.
The overhead term $O(m{\lambda}^3)$ is attributed to some arithmetic operations (including $m$ exponentiation operations)
that $\mathcal{B}$ has to perform.
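The Case II reduction is mechanical enough to be stated algorithmically. The following Python sketch works over a toy prime-order subgroup; the group parameters and the collision-finding subroutine (standing in for the adversary $\mathcal{A}$) are illustrative assumptions rather than part of the scheme:
\begin{verbatim}
# Sketch of the Case II reduction (Python 3.8+): a collision for
# the hash h(B) = prod_i g_i^{b_i} mod p yields the discrete log
# x of y = g^x in the order-q subgroup G_q. Toy parameters only:
# q divides p - 1 and g has order q.
import random

p, q, g = 23, 11, 4   # illustrative toy group (4 has order 11 mod 23)

def reduction(y, m, find_collision):
    """Use a collision finder for h (the adversary A) to compute
    x with y = g^x mod p; succeeds whenever a != 0 mod q."""
    z = [random.randint(0, 1) for _ in range(m)]
    u = [random.randrange(1, q) for _ in range(m)]
    # Embed the challenge: g_i = g^{u_i} if z_i = 0, else y^{u_i}.
    gs = [pow(g if zi == 0 else y, ui, p) for zi, ui in zip(z, u)]
    B, Bp = find_collision(gs)        # B != B' with h(B) = h(B')
    a = sum(u[i] * (B[i] - Bp[i]) for i in range(m) if z[i] == 1) % q
    if a == 0:                        # probability at most 1/2
        return None
    ai = pow(a, -1, q)                # a' = a^{-1} mod q
    x = sum(ai * u[i] * (Bp[i] - B[i])
            for i in range(m) if z[i] == 0) % q
    assert pow(g, x, p) == y
    return x
\end{verbatim}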
Given an address \texttt{addr}, let $B$ be the latest block that was actually written by the client to \texttt{addr} at time $t$.
Let the challenger challenge the block in \texttt{addr} during an audit.
In order to retain an older block $B'\not=B$ (written to the same address \texttt{addr} at time $t'<t$) and still pass the audit,
the adversary $\mathcal{A}$ has to produce its authentication tag for time $t$ (we note that the tag for $B'$ for time $t'$
is available to $\mathcal{A}$) such that one of the conditions mentioned above (Case I and Case II) holds.
As we have seen earlier, no PPT adversary can find such a block $B'$, except with a probability negligible in $\lambda$.
Thus, the adversary must store each of the challenged blocks with its latest content to pass the audit.
\end{proof}
We define a polynomial-time extractor algorithm $\mathcal{E}$ that can extract
all blocks from each of the levels of H and C (except with negligible probability) by interacting with an adversary $\mathcal{A}$
that wins the security game described in Section~\ref{security_model} with some non-negligible probability.
As our dynamic POR scheme satisfies the \textit{authenticity}
and \textit{freshness} properties mentioned above,
the adversary $\mathcal{A}$ cannot produce a valid proof $(B^*=\sum_{1\le i\le r}{\nu_i}B_{\texttt{addr}_i},\{\tilde{h}(B_{\texttt{addr}_i})\}_{1\le i\le r})$
for a given challenge set $Q=\{(\nu_i,\texttt{addr}_i)\}_{1\le i\le r}$ without storing
the challenged blocks and their corresponding tags properly,
except with some negligible probability (see Section~\ref{subsubsec:audit} and Claim~\ref{claim_authenticity}).
This means that if the verifier outputs 1 during the extraction phase,
$B^*$ in the proof is indeed the linear combination of the untampered blocks $\{B_{\texttt{addr}_i}\}_{1\le i\le r}$
using coefficients $\{\nu_i\}_{1\le i\le r}$.
Suppose that the extractor $\mathcal{E}$ wants to extract
$r$ blocks indexed by $J$. It challenges $\mathcal{A}$ with a challenge set $Q=\{(\nu_i,\texttt{addr}_i)\}_{i\in J}$.
If the proof is valid (that is, the verifier outputs 1), $\mathcal{E}$ initializes a matrix
$M_\mathcal{E}$ as $[\nu_{1i}]_{i\in J}$, where $\nu_{1i}=\nu_{i}$ for each $i\in J$.
The extractor challenges $\mathcal{A}$ for the same $J$ but with different random coefficients.
If the verifier outputs 1 and the vector of coefficients is linearly independent to
the existing rows of $M_\mathcal{E}$, then $\mathcal{E}$ appends this vector to $M_\mathcal{E}$ as a row.
The extractor $\mathcal{E}$ runs this procedure until the matrix $M_\mathcal{E}$ has $r$ linearly independent rows.
So, the final form of the full-rank matrix $M_\mathcal{E}$ is $[\nu_{ji}]_{j\in[1,r], i\in J}$.
Consequently, the challenged blocks can be extracted using Gaussian elimination.
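For concreteness, the final extraction step can be sketched as follows in Python, assuming the $r$ verified proofs with linearly independent coefficient vectors have already been collected; all names are illustrative:
\begin{verbatim}
# Sketch of the extraction step: recover the r challenged blocks
# from r verified proofs. Row j of M holds the coefficients
# (nu_{j1}, ..., nu_{jr}); rhs[j] is the aggregated block B* of
# proof j, a vector of m segments over Z_q (q prime).

def extract_blocks(M, rhs, q):
    """Solve M X = rhs (mod q) by Gauss-Jordan elimination; M is
    full rank by construction of the extractor."""
    r, m = len(M), len(rhs[0])
    M = [row[:] for row in M]         # work on copies
    X = [row[:] for row in rhs]
    for c in range(r):
        piv = next(j for j in range(c, r) if M[j][c] % q != 0)
        M[c], M[piv] = M[piv], M[c]
        X[c], X[piv] = X[piv], X[c]
        inv = pow(M[c][c], -1, q)     # invertible since q is prime
        M[c] = [v * inv % q for v in M[c]]
        X[c] = [v * inv % q for v in X[c]]
        for j in range(r):
            if j != c and M[j][c] % q:
                f = M[j][c]
                M[j] = [(a - f * b) % q for a, b in zip(M[j], M[c])]
                X[j] = [(a - f * b) % q for a, b in zip(X[j], X[c])]
    return X                          # X[i] is block B_{addr_i}
\end{verbatim}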
Following the way mentioned above, the extractor algorithm $\mathcal{E}$ can interact with $\mathcal{A}$
(polynomially many times) in order to extract a $\rho$-fraction of the blocks
(for some $\rho$) of each level of H and C by setting the index set $J$ appropriately.
Use of a $\rho$-rate erasure code ensures retrievability of all blocks
of C (i.e., all the encoded blocks of U up to the last rebuild of C) and H (i.e., all the encoded blocks
of U written after the last rebuild of C).
For each level $l$ of H (or C), the FFT-based code used in our scheme is a $(2^{l+1},2^l,2^l)$-erasure code;
thus, $\rho=\frac{1}{2}$.
This completes the proof of Theorem~\ref{theorem_DPOR}.
\end{proofTheorem}
\subsection{Probabilistic Guarantees}
As we mention in Section~\ref{operations}, each of the levels of H and the buffer C is audited with $O(\lambda)$ random locations.
Due to the use of a $(2^{l+1},2^l,2^l)$-erasure code for each level $0\le l\le \flr{\log n}$,
the server has to delete at least half of the encoded blocks in a level in order to make even a single block of that level unrecoverable.
Thus, if the server corrupts half of the blocks in any level, then
it passes an audit with probability $p_{cheat}=(1-\frac{1}{2})^{O(\lambda)}=2^{-O(\lambda)}$
that is negligible in $\lambda$.
\section{Performance Analysis}
\label{performance_ana}
We analyze the performance of the following types of operations (described in Section~\ref{operations})
involved in our publicly verifiable dynamic POR scheme.
\begin{itemize}
\item \textbf{Read}\q
For an authenticated read on the data block $B$ present in U, the server sends the corresponding Merkle proof $\Pi_{read}$
which consists of the block $B$, the data block in the sibling leaf-node of $B$ and the hash values along the associated path of the Merkle
hash tree (see Section~\ref{MHT}).
Thus, a read operation takes $2\beta+O(\lambda\log n)$ communication bandwidth between the client and the server.
To reduce this cost, the client can generate authentication tags on the data blocks of U (as discussed in Section~\ref{tag_gen})
and construct a Merkle tree over these tags instead of the data blocks. In this setting, $\Pi_{read}$ consists of $\tilde{h}(B)$,
the authentication tag in its sibling leaf-node and the hash values along the associated path.
This reduces the communication bandwidth between the client and the server for a read to $\beta+O(\lambda\log n)$.
\item \textbf{Write}\q
A write operation incurs the following costs.
\begin{itemize}
\item \textit{Write on U}:\q A write operation on U involves an authenticated read operation followed by
the verification of Eqn.~\ref{eqn:MHT}. Thus, each write operation requires $\beta+O(\lambda\log n)$ bandwidth
between the client and the server (for communicating $\Pi_{read}$ and $digMHT_{server}$).
\item \textit{Write on H and $\tilde{\text{H}}$}:\q The cost of a write on H is $O(\beta\log n)$ (see Section~\ref{buff_H}).
Similarly, the cost of a write on $\tilde{\text{H}}$ is $O(\lambda\log n)$ as the blocks are replaced
by their authentication tags in $\tilde{\text{H}}$ and
the size of a tag is $O(\lambda)$ bits.
\item \textit{Write on C and $\tilde{\text{C}}$}:\q C (or $\tilde{\text{C}}$) is rebuilt after every $n$ writes.
As mentioned in Section~\ref{buff_C}, a write operation on C costs $O(\beta\log n)$ both in time and bandwidth.
Similarly, the cost of a write on $\tilde{\text{C}}$ is $O(\lambda\log n)$.
\end{itemize}
\item \textbf{Audit}\q
For a challenge set $Q$ containing $r=O(\lambda\log n)$ random locations $\{\texttt{addr}_i\}_{1\le i\le r}$
and random coefficients $\nu_1,\nu_2,\ldots,\nu_r\in\Z_q^*$, the server computes a proof containing
$B^*=\sum_{1\le i\le r}{\nu_i}B_{\texttt{addr}_i}$ and $\{\tilde{h}(B_{\texttt{addr}_i})\}_{1\le i\le r}$
and sends the proof to the verifier.
Thus, the bandwidth required for an audit is given by
$\beta+O(\lambda^2\log n)$ (a numerical illustration of these costs follows this list).
\end{itemize}
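To make these asymptotic costs concrete, the following Python sketch evaluates them with the typical parameter values of Table~\ref{tab:parameters}; setting the hidden constants to $1$ and the file size to $n=2^{20}$ blocks are assumptions made purely for illustration:
\begin{verbatim}
# Back-of-the-envelope bandwidths using the typical parameter
# values; hidden constants are assumed to be 1 and n = 2^20 is
# an assumed number of outsourced blocks.
from math import ceil, log2

lam  = 128            # security parameter (bits)
beta = 64 * 1024 * 8  # block size: 64 KB in bits
n    = 2**20          # number of blocks (assumed)

read  = beta + lam * ceil(log2(n))        # beta + O(lam log n)
write = beta + lam * ceil(log2(n))        # write on U
audit = beta + lam**2 * ceil(log2(n))     # beta + O(lam^2 log n)

for name, bits in [("read", read), ("write", write), ("audit", audit)]:
    print(f"{name:>5}: ~{bits / 8 / 1024:.1f} KB")
\end{verbatim}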
\medskip
\noindent
\textbf{Comparison among Dynamic POR Schemes}\q
We compare our scheme with other existing dynamic proofs-of-retrievability (POR) schemes; the comparison is summarized in Table~\ref{tab:comparison_POR}.
The comparison is based on the asymptotic complexity for different parameters.
Some of the figures mentioned in Table~\ref{tab:comparison_POR} are taken from~\cite{Stefanov_CCS}.
Table~\ref{tab:parameters} lists typical values of the parameters used in our scheme~\cite{Krohn_SP}.
\begin{table*}[tbp]
\small
\centering
\caption{Comparison among dynamic POR schemes based on different parameters (asymptotic complexity)}\label{tab:comparison_POR}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Dynamic & \multirow{2}{*}{Client} & \multicolumn{2}{c|}{Cost of a write operation} & \multicolumn{2}{c|}{Cost of an audit operation} & {\multirow{3}{*}{Verifiability}} \\
\cline{3-6}
POR & \multirow{2}{*}{storage} & Server & \multirow{2}{*}{Bandwidth} & Server & \multirow{2}{*}{Bandwidth} & \\
schemes & & computation & & computation & & \\
\hline
\hline
Iris~\cite{IRIS} & $O(\beta\sqrt{n})$ & $O(\beta)$ & $O(\beta)$ & $O(\beta\lambda\sqrt{n})$ & $O(\beta\lambda\sqrt{n})$ & Private\\
\hline
Cash et al.~\cite{Wichs_ORAM} & $O(\beta)$ & $O(\beta\lambda(\log n)^2)$ & $O(\beta\lambda(\log n)^2)$ & $O(\beta\lambda(\log n)^2)$ & $O(\beta\lambda(\log n)^2)$ & Private\\
\hline
Chandran et al.~\cite{Bhavana_TCC} & $O(\beta)$ & $O(\beta(\log n)^2)$ & $O(\beta(\log n)^2)$ & $O(\beta\lambda\log n)$ & $O(\beta\lambda\log n)$ & Private\\
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Shi et al.~\cite{Stefanov_CCS}}} & $O(\beta)$ & $O(\beta\log n)$ & $\beta+O(\lambda\log n)$ & $O(\beta\lambda\log n)$ & $\beta+O(\lambda^2\log n)$ & Private\\
\cline{2-7}
\multicolumn{1}{|c|}{} & $O(\beta\lambda)$ & $O(\beta\log n)$ & $\beta(1+\epsilon)+O(\lambda\log n)^\dag$ & $O(\beta\lambda\log n)$ & $O(\beta\lambda\log n)$ & Public\\
\hline
Our scheme & $O(\beta)$ & $O(\beta\log n)$ & $\beta+O(\lambda\log n)$ & $O(\beta\lambda\log n)$ & $\beta+O(\lambda^2\log n)$ & Public\\
\hline
\end{tabular}
\vspace{0.12in}
\begin{tablenotes}
\item[] We take $\lambda$ as the security parameter and $n$ as the number of blocks (each $\beta$-bits long)
of the data file to be outsourced to the server. For all of the schemes mentioned above, the storage
on the server side is $O(\beta n)$, where $\beta\gg\lambda$. The cost of an authenticated read operation
is $\beta+O(\lambda\log n)$ if a Merkle hash tree is maintained over the unencoded data blocks
for checking authenticity and freshness. \smallskip
\item[] $\dag$ $\epsilon$ is a constant such that $\epsilon>0$.
\end{tablenotes}
\end{table*}
From Table~\ref{tab:comparison_POR}, we note that, in our publicly verifiable dynamic POR scheme,
bandwidths required for a write and an audit are given by
$\beta+O(\lambda\log n)$ and $\beta+O(\lambda^2\log n)$, respectively. These figures are asymptotically
the same as those in the \textit{privately verifiable} scheme of~\cite{Stefanov_CCS}. On the other hand,
this is a significant improvement over the
\textit{publicly verifiable} scheme of~\cite{Stefanov_CCS}
where bandwidths required for a write and an audit are $\beta(1+\epsilon)+O(\lambda\log n)$ and $O(\beta\lambda\log n)$,
respectively, for a constant $\epsilon>0$ and $\beta\gg\lambda$.
Additionally, one drawback of the publicly verifiable scheme proposed by Shi et al.~\cite{Stefanov_CCS}
is due to the fact that one or more Merkle hash trees (\textit{separate} from
the Merkle hash tree\footnote{We note that, in our scheme as well as in~\cite{Stefanov_CCS},
a Merkle hash tree is maintained for the unencoded buffer U. However, as U is not audited
(its authenticity is checked only by the client during a read or write) by a third party auditor, the client keeps
the root-digest of this Merkle hash tree (for U) private avoiding frequent updates in the public parameters.} maintained for U)
are maintained to ensure the integrity of the blocks
in the \textit{hierarchical log} H (one for the entire log or one for each of its levels). To enable a third party
auditor (TPA) to audit the data blocks residing at different levels of this log, the root-digests of
these trees need to be made public. However, some of these root-digests are changed as often as new data blocks are inserted
in the hierarchical log structure, thus resulting in
a change in the public parameters for \textit{each} write. This incurs an additional (non-trivial) overhead for
validation and certification of the public parameters for every write operation. On the other hand, the public
parameters in our publicly verifiable dynamic POR scheme
are fixed throughout the execution of the protocols involved.
\begin{table}[t]
\small
\centering
\caption{Typical values of the parameters used}
\label{tab:parameters}
\begin{tabular}{|c|c|c|}
\hline
Parameter & Description of parameter & Value \\
\hline
\hline
$\lambda$ & Security parameter (in bits) & 128 \\
\hline
$\lambda_p$ & Size of prime $p$ (in bits) & 1024 \\
\hline
$\lambda_q$ & Size of prime $q$ (in bits) & 257 \\
\hline
$\beta$ & Size of a data block (in KB) & 64 \\
\hline
{\multirow{2}{*}{$m$}} & $\ceil{\beta/(\lambda_q-1)}$ & {\multirow{2}{*}{128}} \\
& $=$ number of segments in a block & \\
\hline
\end{tabular}
\end{table}
Apart from the schemes listed in Table~\ref{tab:comparison_POR}, we mention some POR schemes
proposed recently that handle data dynamics as follows.
The dynamic POR scheme proposed by Guan et al.~\cite{Guan_ESORICS}
uses the notion of indistinguishability obfuscation ($i\mathcal{O}$)~\cite{BarakIO_CR,GargIO_FOCS}
to construct a publicly verifiable POR scheme from the privately verifiable scheme of Shacham and Waters~\cite{SW_ACR}.
It also handles dynamic data using a ``modified B+ Merkle tree''.
However, the $i\mathcal{O}$ candidates available in the literature are not currently practical.
Ren et al.~\cite{Ren_TSC15} propose a dynamic POR scheme where the data file is encoded
using erasure coding (intra-server encoding) and network coding (inter-server encoding).
The encoded blocks are then disseminated among multiple storage servers.
Use of network coding reduces the communication bandwidth required for a repair in case of a node (server) failure.
For the intra-server encoding, each block is divided into some sub-blocks (using an erasure code),
and a ``range-based 2-3 tree'' (rb23Tree) is built upon these sub-blocks for each server.
This ensures the authenticity and freshness properties of the blocks within a server.
We note that, for the intra-server encoding, each block is encoded (locally) into a small number of sub-blocks.
Therefore, an update in a block (or in any of its sub-blocks) requires updating only a few sub-blocks corresponding to that block.
This makes an update in this scheme efficient. On the other hand, a malicious server needs to delete only a few sub-blocks
to actually make a block unavailable.
Thus, the dynamic POR scheme proposed by Ren et al.~\cite{Ren_TSC15} differs from our scheme on the basis of the granularity of data the client needs.
\section{Conclusion}
\label{sec:conclusion}
In this work, we have proposed a dynamic POR scheme where the client can update her data file
after the initial outsourcing of the file to the cloud server and retrieve all of her data at
any point in time. Our scheme is publicly verifiable, that is, anyone with knowledge of
the public parameters of the scheme can perform an audit on the client's behalf, and
it offers security guarantees of a dynamic POR scheme. This scheme
is more efficient (in terms of the cost of a write or an audit) than other
practical and publicly verifiable dynamic POR schemes with a similar data granularity.
\label{Sec:Introduction}
Data-driven modeling using linear operators has the potential to transform the estimation and control of strongly nonlinear systems.
Linear operators, such as Koopman and Perron-Frobenius operators, provide a principled linear embedding of nonlinear dynamics, extending the application of standard linear methods to nonlinear systems, and thus, significantly simplifying the control design and reducing the computational burden.
More broadly, data-driven discovery of dynamical systems is undergoing rapid development, driven by the lack of simple, or often entirely unknown, equations and the increasing abundance of high-fidelity data.
There have been recent successes in the discovery of functional representations of nonlinear dynamical systems, e.g.\ using evolutionary optimization techniques~\cite{Bongard2007pnas,Schmidt2009science} and sparse optimization~\cite{Brunton2016pnas}.
However, control based on nonlinear equations is particularly challenging, becoming infeasible for higher-dimensional problems, lacking guarantees, and often requiring problem-tailored formulations.
In contrast, the emerging field of linear operators in dynamical systems seeks to embed nonlinear dynamics in a globally linear representation, providing a compelling mathematical framework for the linear estimation, prediction, and control of strongly nonlinear systems.
The rise of advanced data science and machine learning algorithms, vastly expanded computational resources, and advanced sensor technologies make this a fertile ground for the rapid development of data-driven approximations to these linear operators for control.
In this chapter, we review key innovations, discuss major challenges and promising future directions, and provide a high-level and unified perspective on data-driven approximations of transfer operators for control.
Data-driven system identification has reached a high degree of maturity.
There exist a plethora of techniques that identify linear and nonlinear systems~\cite{Nelles2013book} based on data, including state-space modeling via the eigensystem realization algorithm (ERA)~\cite{ERA:1985} and other subspace identification methods, Volterra
series~\cite{Brockett1976automatica,maner1994nonlinear}, linear and nonlinear autoregressive models~\cite{Akaike1969annals} (e.g., ARX, ARMA, NARX, and NARMAX), and neural network models~\cite{lippmann1987introduction,draeger1995model,wang2016combined}, to name only a few.
The reader is referred to~\cite{ljung2010arc,ljung2010ccc} for a compact overview of identification techniques for linear and nonlinear systems.
In the machine learning community, manifold learning, e.g. locally linear embedding and self-organizing maps, and non-parametric modeling, e.g.\ Gaussian processes, have been proven to be useful for identifying nonlinear systems~\cite{principe1998ieee,ko2007ieee,kocijan2004acc}.
Most of these models are considered data-driven~\cite{sjoberg1995automatica}, as they do not impose a specific model structure based on the governing equations.
However, there is an increasing shift from black-box modeling to inferring unknown physics and constraining models with known prior information.
For instance, the recent sparse identification of nonlinear dynamics (SINDy)~\cite{Brunton2016pnas}, which has been extended to incorporate the effect of control~\cite{Brunton2016nolcos, kaiser2017arxiv_b}, is able to take into account known expert knowledge such as symmetries and conservation laws~\cite{Loiseau2018jfm}.
Learning accurate nonlinear models is particularly challenging as small deviations in the parameters may produce fundamentally different system behavior.
Importantly, it is possible to control many nonlinear systems using linear models. Examples include weakly nonlinear systems for which models are obtained based on a local linearization of the nonlinear dynamics at a reference point.
However, the pronounced nonlinearities present in many applications generally require nonlinear control design.
The lack of robustness and stability guarantees except for special cases, together with the increased computational burden during the on-line phase, which becomes prohibitive for high-dimensional systems, restricts the application of nonlinear control to low-dimensional systems and requires specialized design techniques.
Encoding nonlinear dynamics in linear models through operator-theoretic approaches provides a new opportunity for the control of previously intractable systems.
Linear embedding theory for nonlinear systems goes back to seminal works by B.O. Koopman in 1931~\cite{Koopman1931pnas} and T. Carleman in 1932~\cite{carleman1932am}.
Koopman showed that Hamiltonian dynamics can be described through an infinite-dimensional linear operator acting on the Hilbert space of all possible observables that can be measured from the underlying state.
Closely related, Carleman demonstrated that systems of ordinary differential equations with polynomial nonlinearities can be represented as an infinite-dimensional system of linear differential equations.
Since the introduction of the so-called Carleman linearization, it has been applied to a wide range of problems~\cite{bellman1963qam,brockett1976ieee,kowalski1991nonlinear}
including for the Lyapunov exponent calculation~\cite{andrade1982jmp} and for finding first integrals~\cite{kus1983jpa}.
The emerging Koopman operator perspective provides an alternative direction and generalizes beyond polynomial systems.
The ultimate goal is to learn a globally linear embedding of the nonlinear dynamics such that powerful linear methods become immediately applicable and are useful in a larger domain.
Of particular interest are the spectral properties of these operators that encode global information and can be related to geometrical properties of the underlying dynamical system.
The potential to discover global properties of dynamical systems through operator-theoretic methods for diagnostic purposes, i.e. improving understanding of the underlying dynamics, has driven continued efforts to develop improved algorithms.
The confluence of big data, advances in machine learning, and new sensor technologies has further fueled the progress in data-driven methods, facilitating equation-free approximations of these operators.
These efforts will be discussed below particularly in the context of system identification for control.
We introduce the Koopman and Perron-Frobenius operators by considering the following autonomous nonlinear dynamical system
\begin{equation}\label{Eqn:Dynamics}
\frac{d}{dt} \mathbf{x}(t) = \mathbf{F}(\mathbf{x}(t)),\quad \mathbf{x}(0) = \mathbf{x}_0
\end{equation}
with $\mathbf{x}\in\mathbb{X}\subset\mathbb{R}^n$, initial condition $\mathbf{x}_0\in\mathbb{X}$, and the flow is denoted by $\mathbf{S}^t$ so that $\mathbf{x}(t)=\mathbf{S}^t(\mathbf{x}_0)$.
The nonlinear system~\eqref{Eqn:Dynamics} can be equivalently described by infinite-dimensional, linear operators acting on observable or density functions.
The linearity of these operators is appealing; however, their infinite dimensionality poses issues for representation and computation, and current research therefore aims to approximate the evolution on a finite-dimensional subspace, facilitating a finite-dimensional matrix representation~\cite{Brunton2016plosone}.
Let $(\mathbb{X}, \mathfrak{B},\mu)$ be a measure space with state space $\mathbb{X}$, $\sigma$-algebra $\mathfrak{B}$, and measure $\mu$.
The Koopman operator~\cite{Koopman1931pnas,Koopman1932pnas,Mezic2005nd,Mezic2013arfm} is an infinite-dimensional linear operator that advances measurement functions $f\in L^{\infty}(\mathbb{X})$:
\begin{equation}\label{Eqn:KoopmanOperator}
f(t,\mathbf{x}_0) = U^t f(\mathbf{x}_0)= f(\mathbf{S}^t(\mathbf{x}_0)).
\end{equation}
Any set of eigenfunctions of the Koopman operator spans an invariant subspace, where the dynamics evolve linearly along these basis directions.
Thus, the spectral analysis of the Koopman operator is of particular interest and spectral properties have been shown to be related to intrinsic time scales, geometrical properties, and long-term behavior of the underlying dynamical system~\cite{Mezic2017book}.
Koopman eigenfunctions $\phi(\mathbf{x})$ associated with a particular eigenvalue $\lambda$ are special or intrinsic observables, which evolve linearly according to
\begin{equation}\label{Eqn:KoopmanEigenfunctions}
U^t \phi(\mathbf{x}_0)= \phi(\mathbf{S}^t(\mathbf{x}_0)) = e^{\lambda t} \phi(\mathbf{x}_0),
\end{equation}
and which can generally be discontinuous~\cite{mezic1999chaos,mezic2003cdc,Mezic2004physicad}.
It can be shown~\cite{lasota2013book} that, for continuously differentiable functions $f$ with compact support, the function ${f}(t,\mathbf{x}) := U^t[f](\mathbf{x})$ satisfies the first-order partial differential equation (PDE):
\begin{equation}
\frac{\partial}{\partial t}{f}(t,\mathbf{x}) = {\bf F}(\mathbf{x})\cdot\nabla {f}(t,\mathbf{x}) = L_U {f}(\mathbf{x}),\quad {f}_0:={f}(0,\mathbf{x}),
\end{equation}
where $L_U$ is the infinitesimal generator of the semigroup of Koopman operators $\{U^t\}_{t\geq 0}$, for which an exponential representation $U^t = e^{L_U t}$ exists.
Smooth Koopman eigenfunctions can then be interpreted as the eigenfunctions of the generator and satisfy
\begin{equation}
\frac{d}{dt}\phi(\mathbf{x}) = {\bf F}(\mathbf{x})\cdot\nabla \phi(\mathbf{x})= \lambda \phi(\mathbf{x}).
\end{equation}
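As a simple illustration (a standard textbook example, not specific to this chapter), consider the scalar linear system $\dot{x} = \mu x$. The monomials $\phi_k(x) = x^k$ satisfy $\mathbf{F}(x)\,\partial_x\phi_k = \mu x \cdot k x^{k-1} = k\mu\, x^k$, so each $\phi_k$ is an eigenfunction of the generator with eigenvalue $\lambda_k = k\mu$; even a one-dimensional linear system thus induces a lattice of Koopman eigenvalues.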
As the Koopman operator evolves measurement functions, this perspective is particularly amenable to data-driven approaches.
The Koopman operator is dual to the Perron-Frobenius operator~\cite{nicolis1995book,lasota2013book,ChaosBook2012,bollt2013book}, i.e. $\langle P^t\rho,f\rangle = \langle \rho,U^t f\rangle$ for any $\rho\in L^1$ and $f\in L^{\infty}$, and as a consequence these share certain properties. For instance, for measure-preserving systems these operators are unitary and exhibit the same point spectrum~\cite{Mezic2004physicad}.
The Perron-Frobenius operator propagates densities $\rho\in L^1(\mathbb{X})$ and is defined as
\begin{equation}
\int_{B}\, P^t\rho(\mathbf{x}) \mu(d\mathbf{x}) = \int_{\mathbf{S}^{-t}(B)}\rho(\mathbf{x})\,\mu(d\mathbf{x})\quad\forall B\in\mathfrak{B}.
\end{equation}
Further,
\begin{equation}
\rho(t,\mathbf{x}) = P^t\rho(\mathbf{x})= \int_{\mathbb{X}} \delta(\mathbf{x}-\mathbf{S}^t(\mathbf{x}_0))\rho_0(\mathbf{x}_0) d\mathbf{x}_0,
\end{equation}
where $\delta(\mathbf{x}-\mathbf{S}^t(\mathbf{x}_0))$ represents the deterministic kernel.
For an invertible system, this becomes $P^t\rho(\mathbf{x})= J^{-t}(\mathbf{x}) \rho(\mathbf{S}^{-t}(\mathbf{x}))$,
where $J^{-t}(\mathbf{x}):=\det(d\mathbf{S}^{-t}(\mathbf{x})/d\mathbf{x})$ is the determinant of the Jacobian of $\mathbf{S}^{-t}(\mathbf{x})$;
thus, the density varies inversely with the infinitesimal volume occupied by the trajectories.
For invertible and conservative systems we have $P^t\rho(\mathbf{x})= \rho(\mathbf{S}^{-t}(\mathbf{x}))$ with volume preservation $\mathrm{div} \,\mathbf{F} = 0$.
Eigenfunctions of the Perron-Frobenius operator satisfy
\begin{equation}
P^t \nu(\mathbf{x}) = J^{-t}(\mathbf{x})\, \nu(\mathbf{S}^{-t}(\mathbf{x})) = e^{\lambda t}\nu(\mathbf{x}).
\end{equation}
Of particular interest is the
{\it physical} invariant measure $\mu^{*}(B) = \mu^{*}(\mathbf{S}^{-t}(B))$ for all sets $B\in\mathfrak{B}$, which is stationary under the evolution of the flow.
The associated invariant density
$P^t\rho^{*}(\mathbf{x}) = \rho^{*}(\mathbf{x})$ for all $t\geq 0$
corresponds to an eigenfunction at eigenvalue $1$, which describes the asymptotic behavior of the underlying dynamics. Control is often designed to alter the observed invariant measure or density.
Spectral properties of the Perron-Frobenius operator are related, e.g., to almost-invariant sets, meta-stable states, mixing properties, decay of correlations~\cite{Gaspard1995pre,dellnitz1999jna,dellnitz2000nonl,Froyland2009pd}.
The infinitesimal generator of the semigroup of Perron-Frobenius operators $\{ P^t \}_{t\geq 0}$ is given by the Liouville operator $L_P$~\cite{liouville1838jmpa,gaspard2005chaos}:
\begin{equation}\label{Eqn:LiouvilleEquation}
\frac{\partial }{\partial t} {\rho}(t,\mathbf{x}) = - \nabla \cdot\left({\bf F}(\mathbf{x})\,{\rho}(t,\mathbf{x})\right) = L_P[{\rho}](\mathbf{x}),\quad {\rho}_0:={\rho}(0,\mathbf{x}),
\end{equation}
for continuously differentiable $\rho$ with compact support and appropriate boundary conditions.
The Liouville equation~\eqref{Eqn:LiouvilleEquation} describes how the flow transports densities in phase space and has a very intuitive interpretation as the conservation of probability or mass in terms of trajectories in the phase space. Alternatively, the evolution of the density may be interpreted as the propagated uncertainty of an initial state.
This is a first-order PDE that can be solved with the method of characteristics~\cite{zwillinger1989handbook,dutta2011jgcd}.
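Concretely, expanding the divergence in~\eqref{Eqn:LiouvilleEquation} gives $\partial_t\rho = -\mathbf{F}\cdot\nabla\rho - (\nabla\cdot\mathbf{F})\,\rho$, so along a trajectory $\mathbf{x}(t)=\mathbf{S}^t(\mathbf{x}_0)$ the density obeys the characteristic ODE
\begin{equation*}
\frac{d}{dt}\rho(t,\mathbf{x}(t)) = \frac{\partial \rho}{\partial t} + \mathbf{F}\cdot\nabla\rho = -\left(\nabla\cdot\mathbf{F}\right)\rho,
\end{equation*}
i.e. the density is transported along trajectories while being compressed or expanded at the local rate $-\nabla\cdot\mathbf{F}$.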
The invariant density satisfies $L_P[{\rho}^{*}](\mathbf{x})=0$ and is an eigenfunction at eigenvalue $0$.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{Dynamics_v2}
\caption{Poincar\'e's local phase space characterization versus the global operator-theoretic perspective.}
\label{Fig:Geometry_vs_Operator}
\end{figure}
The aim of this chapter is to provide an overview of major developments and advances in data-driven control using operator-theoretic methods.
The chapter is organized as follows:
In Sec.~\ref{Sec:ProblemFormulation}, the general control problem for a nonlinear system is formulated from an optimal control perspective.
Control-dependent Koopman and Perron-Frobenius operators are introduced in Sec.~\ref{Sec:TransferOperators}.
The main objective of the operator-theoretic approach is to find a linear representation of the underlying dynamical system. Data-driven system identification methods for the controlled nonlinear system are summarized in Sec.~\ref{Sec:SystemIdentification}.
Important aspects in control theory such as observability, controllability, state estimation, and control design in the operator-theoretic framework are discussed in Sec.~\ref{Sec:Control}.
The chapter is concluded in Sec.~\ref{Sec:Conclusions} with a discussion on major challenges, open problems, and possible future directions of transfer operators approximations and their application for control.
\section{Control problem formulation}
\label{Sec:ProblemFormulation}
Reformulating strongly nonlinear dynamics in a linear framework via transfer operators is appealing as it enables the application of powerful optimal and robust estimation and control techniques available for linear systems~\cite{sp:book,dp:book,stengel2012book}.
The formulation of an optimal control problem~\cite{stengel2012book} appears in many applications, such as trajectory control in robotics or boundary layer stabilization in fluids, and has been considered widely in the context of Koopman operators for control~\cite{Brunton2016plosone,Korda2016arxiv,Kaiser2017arxiv,KaKuBr2018arxiv}.
Optimization-based approaches for control provide a highly suitable framework for the control of nonlinear systems, e.g. including constraints and flexible cost formulations. The optimal control problem can be stated as follows:
\begin{subequations}\label{Eqn:CostFunction}
\begin{align}
\min\limits_{\mathbf{u}\in\mathbb{U}} & \int\limits_{t_0}^{t_f}\, l[\mathbf{y}(t),\mathbf{u}(t),t] \, d t +l_f[\mathbf{y}(t_f),t_f]
\end{align}
\end{subequations}
subject to
\begin{subequations}\label{Eqn:NonlinearSystemWithControl}
\begin{align}
\frac{d}{dt} \mathbf{x}(t) &= {\mathbf{F}}(\mathbf{x},\mathbf{u}),\quad \mathbf{x}(0) = \mathbf{x}_0\\
\mathbf{y}(t) &= \mathbf{H} (\mathbf{x},\mathbf{u})
\end{align}
\end{subequations}
and possibly additional constraints on states, measurements, control inputs or time.
We consider the nonlinear system~\eqref{Eqn:Dynamics} extended to include an external input ${\mathbf{u}\in\mathbb{U}\subset\mathbb{R}^{q}}$, where $\mathbb{U}$ is the space of admissible control functions (or sequences in discrete time), and assume continuously differentiable state dynamics
${\bf F}:\mathbb{X}\times\mathbb{U}\rightarrow \mathbb{X}$.
The state $\mathbf{x}$ may not be fully accessible and instead a limited set of output measurements $\mathbf{y}\in\mathbb{Y}\subset\mathbb{R}^p$ may be collected, which are prescribed by the measurement function $\mathbf{H}:\mathbb{X}\times \mathbb{U}\rightarrow\mathbb{Y}$.
The nature of the solution of the optimization problem is determined by the choice of the terminal state cost $l_f[\cdot]$ and the running cost $l[\cdot]$.
The objective is to determine a control law or policy that minimizes the cost functional~\eqref{Eqn:CostFunction}.
A nonlinear optimal control formulation for the system in~\eqref{Eqn:NonlinearSystemWithControl} can be established using dynamic programming~\cite{Bertsekas2005book}.
This relies on Bellman's principle of optimality~\cite{bellman1957book}
and leads to the Hamilton-Jacobi-Bellman (HJB) equation~\cite{bellman1964book}, a nonlinear PDE for the globally optimal solution.
Solving this nonlinear PDE is computationally challenging and as a result, a variational argument and Pontryagin's maximum principle~\cite{Pontryagin2062interscience} is often used instead, leading to a set of coupled ordinary differential equations (ODEs), the Euler-Lagrange equations.
While this two-point boundary value problem is solvable for higher-dimensional problems in contrast to the HJB equation, it is still too computationally demanding in real-time applications for many nonlinear systems, e.g. in autonomous flight~\cite{tang2018ar}.
Moreover, for high-dimensional systems, such as fluid flows, expensive direct and adjoint simulations render this approach infeasible, instead motivating the use of reduced-order models.
Controlling nonlinear, and possibly high-dimensional, systems is generally computationally demanding and performance and robustness guarantees exist only for certain classes of dynamical systems.
However, the control problem above simplifies considerably for quadratic cost functions and linear dynamics of the form
\begin{equation}\label{Eqn:LinearSystem}
\frac{\mathrm d}{\mathrm dt}\mathbf{x} = \mathbf{A} \mathbf{x} + \mathbf{B}\mathbf{u},\quad \mathbf{x}(0) = \mathbf{x}_0,
\end{equation}
where the matrix $\mathbf{A}$ may be obtained by a suitable linearization of the nonlinear dynamics~\eqref{Eqn:NonlinearSystemWithControl} around an equilibrium point or operating condition.
Considering linear dynamics, the optimization problem can be simplified tremendously and becomes solvable even for high-dimensional systems.
In the simplest case, without any constraints, this reduces to solving an algebraic Riccati equation (ARE) and yields the linear quadratic regulator (LQR)~\cite{stengel2012book}.
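As a minimal sketch of this special case (the toy system and weights below are assumptions chosen purely for illustration):
\begin{verbatim}
# Minimal LQR sketch: solve the continuous-time ARE for the gain
# K of u = -K x minimizing the quadratic cost for dx/dt = Ax + Bu.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, 0.1]])  # toy system (assumed)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                             # state weight
R = np.array([[1.0]])                     # input weight

P = solve_continuous_are(A, B, Q, R)      # algebraic Riccati eq.
K = np.linalg.solve(R, B.T @ P)           # optimal gain
print("LQR gain K =", K)
\end{verbatim}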
Many problems may not permit a fully linear representation or the approximation may only be valid close to the linearization point.
Moreover, models estimated from data may quickly become invalid, either due to the application of control or changing system parameters and conditions.
For these cases, there exist adaptive and nonlinear variants of the formulation above.
For instance, control-affine, nonlinear systems, whose governing equations can be factored into a linear-like structure, permit a state-dependent transition matrix $\mathbf{A}(\mathbf{x})$ and actuation matrix $\mathbf{B}(\mathbf{x})$ so that the ARE may be solved point-wise as a state-dependent Riccati equation (SDRE)~\cite{pearson1962ije}. The SDRE generalizes LQR for nonlinear systems, retaining a simple implementation and often yielding near optimal controllers~\cite{clautier1996proc}.
Alternatively, multiple model systems can be considered, which usually consist of a large set of locally valid models, for which control is then determined either using a specific model selected based on some metric or by averaging/interpolating the control action from the model set~\cite{murray1997book}.
A widely adopted adaptive variant is model predictive control (MPC)~\cite{garcia1989model,morari1999model,allgower2004nonlinear}, which solves the optimal control problem over a receding horizon subject to the modeled dynamics and system constraints.
The receding horizon problem is formulated as an open-loop optimization over a finite time horizon. At each time step and given the current measurement, a sequence of future control inputs minimizing the cost $J$ over the time horizon is determined.
The first control value of this sequence is then applied, and the optimization is re-initialized and repeated at each subsequent time step. This
results in an implicit feedback control law $\mathbf{C}(\mathbf{x}_j) := \mathbf{u}_{j+1}(\mathbf{x}_j)$, where $\mathbf{u}_{j+1}$ is the first input in the optimized actuation sequence starting at the initial condition $\mathbf{x}_j:=\mathbf{x}(t_j)$.
MPC is particularly ubiquitous in industrial applications including process industries~\cite{mayne2014automatica} and aerospace~\cite{eren2017jgcd}, as it enables more general formulations of control objectives and the control of strongly nonlinear systems with constraints, which are difficult to handle using traditional linear control approaches.
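The receding-horizon loop itself is compact; the following Python sketch assumes a linear model with a quadratic stage cost, and the horizon, input bounds, and generic solver are illustrative choices:
\begin{verbatim}
# Receding-horizon (MPC) sketch for a linear model z+ = A z + B u
# with quadratic stage cost; only the first optimized input is
# applied at each step.
import numpy as np
from scipy.optimize import minimize

def mpc_step(z0, A, B, Q, R, N=10, u_max=1.0):
    """Return the first input of the optimal sequence (length N)."""
    q = B.shape[1]
    def cost(u_flat):
        u = u_flat.reshape(N, q)
        z, J = z0, 0.0
        for k in range(N):
            J += z @ Q @ z + u[k] @ R @ u[k]
            z = A @ z + B @ u[k]
        return J + z @ Q @ z           # terminal penalty
    bounds = [(-u_max, u_max)] * (N * q)
    res = minimize(cost, np.zeros(N * q), bounds=bounds)
    return res.x[:q]                   # implicit feedback C(z0)
\end{verbatim}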
Linear models arising as approximations of the Koopman or Perron-Frobenius operators may be combined with any of these approaches. The potential benefit is the increased accuracy and validity as compared with linear models based on a local linearization and a reduced computational burden compared with schemes based on nonlinear models. Moreover, many data-driven approximation algorithms for these operators are easy to implement and efficient to compute as these are founded on standard linear algebra techniques.
\section{Control-oriented Transfer Operators}
\label{Sec:TransferOperators}
Controlled dynamical systems have been increasingly considered in the operator-theoretic framework, particularly for state estimation, to disambiguate dynamics from the effect of control, and for related problems such as optimized sensor and actuator placement.
While the Liouville equation has been examined for control purposes for a longer time~\cite{brockett2007ams,brockett2012chapter,roy2017jsc}, and interestingly also in the context of artificial intelligence~\cite{kwee2001ab}, the control-oriented formulations of the Koopman and Perron-Frobenius operators have gained traction only recently~\cite{Proctor2016arxiv,williams2016ifac,froyland2016siam,das2018arxiv}.
\subsection{Non-affine control systems}
We consider a non-affine control dynamical system and assume access to the full state $\mathbf{y}=\mathbf{x}$:
\begin{equation}\label{Eqn:NonaffineControlSystem}
\frac{d}{dt} \mathbf{x}(t) = {\mathbf{F}}(\mathbf{x},\mathbf{u}),\quad \mathbf{x}(0) = \mathbf{x}_0.
\end{equation}
The flow associated with~\eqref{Eqn:NonaffineControlSystem} is referred to as the {\it control flow} and is denoted by $\mathbf{S}^t(\mathbf{x},\mathbf{u})$. Skew-product flows, such as $\mathbf{S}^t(\mathbf{x},\mathbf{u})$, arise in topological dynamics to study non-autonomous systems, e.g.\ with explicit time dependency.
Thus, we consider the action of an operator in the extended state space $\mathbb{X}\times\mathbb{U}$.
The Koopman operator is defined as acting on the extended state:
\begin{equation}\label{NonaffineKoopmanControl}
U^t f(\mathbf{x},\mathbf{u}) = f(\mathbf{S}^t(\mathbf{x},\mathbf{u}),\mathbf{u}).
\end{equation}
Here the inputs $\mathbf{u}$ evolve dynamically, e.g.\ by a prescribed exogenous behavior or a given state-dependent feedback control law $\mathbf{u} = \mathbf{K}(\mathbf{x})$.
If the inputs themselves are not evolving dynamically, this would reduce to $U^t f(\mathbf{x},\mathbf{u}) = f(\mathbf{S}^t(\mathbf{x},\mathbf{u}),0)$,
in which case the inputs parameterize the dynamics~\cite{Proctor2016arxiv}.
The spectral properties of $U$ should then contain information about the unforced system with $\mathbf{u}={\bf 0}$.
Koopman eigenfunctions associated with~\eqref{NonaffineKoopmanControl} satisfy
\begin{equation}
U^t \phi(\mathbf{x},\mathbf{u}) = e^{\lambda t} \phi(\mathbf{x},\mathbf{u}).
\end{equation}
Assuming smooth dynamics and observables, the generator equation for the Koopman family is:
\begin{equation}
\frac{\partial}{\partial t} f(t,\mathbf{x},\mathbf{u}) = \mathbf{F}(\mathbf{x},\mathbf{u}) \cdot\nabla_{\mathbf{x}} f(t,\mathbf{x},\mathbf{u}) + \dot{\mathbf{u}} \cdot\nabla_{\mathbf{u}} f(t,\mathbf{x},\mathbf{u}).
\end{equation}
As above, smooth Koopman eigenfunctions can be considered eigenfunctions of the infinitesimal generator satisfying
\begin{equation}
\frac{d}{dt} \phi(\mathbf{x},\mathbf{u}) = \mathbf{F}(\mathbf{x},\mathbf{u}) \cdot\nabla_{\mathbf{x}} \phi(\mathbf{x},\mathbf{u}) + \dot{\mathbf{u}} \cdot\nabla_{\mathbf{u}} \phi(\mathbf{x},\mathbf{u})= \lambda \phi(\mathbf{x},\mathbf{u}).
\end{equation}
Representing the system in terms of a finite set of observables or eigenfunctions generally requires a reformulation of the cost functional~\eqref{Eqn:CostFunction} so that
\begin{equation}\label{Eqn:CostFunctionKO}
J = \int\limits_{t_0}^{t_f}\, l^K[\mathbf{z}(t),\mathbf{u}(t),t] \, d t +l_f^K[\mathbf{z}(t_f),t_f],
\end{equation}
where $\mathbf{z}(t)= {\bf f}(\mathbf{x}(t),\mathbf{u}(t))$ describes the nonlinear transformation of the state $\mathbf{x}$ through a vector-valued observable ${\bf f}(\mathbf{x},\mathbf{u})=[f_1(\mathbf{x},\mathbf{u}),\ldots,f_d(\mathbf{x},\mathbf{u})]^T$.
It may be of interest to directly control specific observables, e.g. Koopman eigenfunctions, that are associated with a particular physical behavior.
If the state $\mathbf{x}$ is included as an observable, it is possible to transform the cost functional~\eqref{Eqn:CostFunctionKO} into~\eqref{Eqn:CostFunction} by modifying $l^K$ and $l_f^K$ accordingly.
However, for a system exhibiting multiple fixed points, limit cycles, or more complicated structures, there does not exist a finite-dimensional Koopman-invariant subspace that explicitly includes the state and is topologically conjugate to the original system~\cite{Brunton2016plosone}.
Nevertheless, it may be possible to obtain a linearization in the entire basin of attraction for a single fixed point or periodic orbit~\cite{Williams2015jnls,Lan2013physd}.
Further, Koopman eigenfunctions may be considered as observables and the state may be recovered via inversion, e.g. approximated from data using multidimensional scaling~\cite{kawahara2016nips}.
Analogously, we can consider the Perron-Frobenius operator acting on densities
\begin{equation}
P^t \rho(\mathbf{x},\mathbf{u}) = J^{-t}(\mathbf{x})\,\rho(\mathbf{S}^{-t}(\mathbf{x},\mathbf{u}),\mathbf{u}),
\end{equation}
for which eigenfunctions satisfy
\begin{equation}
P^t \nu(\mathbf{x},\mathbf{u}) = e^{\lambda t} \nu(\mathbf{x},\mathbf{u}).
\end{equation}
The control-oriented Liouville equation describing the transport of densities under the influence of an exogenous input is given by:
\begin{equation}
\frac{\partial }{\partial t} \rho(t,\mathbf{x},\mathbf{u}) = - \nabla_{\mathbf{x}}\cdot \left( \mathbf{F}(\mathbf{x},\mathbf{u}) \rho(t,\mathbf{x},\mathbf{u})\right) - \nabla_{\mathbf{u}}\cdot \left( \dot{\mathbf{u}} \rho(t,\mathbf{x},\mathbf{u})\right),\quad \rho_0(\mathbf{x},\mathbf{u}) :=\rho(0,\mathbf{x},\mathbf{u}),
\end{equation}
where the initial condition is, e.g., $\rho_0(\mathbf{x},\mathbf{u}) = \delta(\mathbf{x}'-\mathbf{x})\delta(\mathbf{u}'-\mathbf{u})$, a point-mass at $\mathbf{x}'$ and $\mathbf{u}'$ for deterministic dynamics.
This assumes the conservation of probability in the state-action space. The first term on the right-hand side describes the local change in density due to the flow, while the second term describes the local change due to exogenous inputs.
We are often interested in changing the long-term density $\rho(t,\mathbf{x})$.
The objective is then to determine inputs $\mathbf{u}$ so that $\rho(t,\mathbf{x})$ becomes close to a desired target density $\rho^{T}(\mathbf{x})$ over some time horizon $[0,t_f]$ or in the limit $t_f\rightarrow\infty$, which can be interpreted as steering a density of initial conditions or particles governed by the underlying dynamical system~\eqref{Eqn:NonaffineControlSystem}.
Assuming further that the inputs do not evolve dynamically, i.e.\ $\dot{\bf u} = 0$, we have:
\begin{equation}
\frac{\partial }{\partial t} \rho(t,\mathbf{x}) = - \nabla_{\mathbf{x}}\cdot \left( \mathbf{F}(\mathbf{x},\mathbf{u}) \rho(t,\mathbf{x})\right),
\end{equation}
which is considered in the seminal works of R. Brockett~\cite{brockett2007ams,brockett2012chapter}, particularly in the context of ensemble control.
The finite-time control objective may be formulated as
\begin{equation}\label{Eqn:CostFunctionPFO}
J = \int_{t_0}^{t_f}\,\int_{\mathbb{X}}\left[\rho(t,\mathbf{x}) - \rho^{T}(\mathbf{x})\right]l^P_{x}(\mathbf{x}) d\mathbf{x} + l^P_{u}[\mathbf{u}(t),t]dt
\end{equation}
measuring the weighted deviation from the desired density and penalizing control input and time.
Ideally, the controlled system has a unique equilibrium density $\rho^{T}(\mathbf{x})$, corresponding to the sole eigenfunction of the Perron-Frobenius operator with eigenvalue $1$, with all others decaying exponentially.
In~\cite{brockett2007ams}, a general cost functional of the following form is proposed:
\begin{equation}\label{Eqn:CostFunctionPFO-1}
J = \int_{t_0}^{t_f}\int_{\mathbb{X}}\rho(t,\mathbf{x})l(\mathbf{x},\mathbf{u})d\mathbf{x} dt+ \int_{\mathbb{X}} \left( \frac{\partial \mathbf{u}}{\partial \mathbf{x}}\right)^2d\mathbf{x} + \int_{t_0}^{t_f}\left(\frac{\partial \mathbf{u}}{\partial t}\right)^2 dt,
\end{equation}
where the first term evaluates the average performance of the system at the current time, the second term penalizes high-gain feedback, and the third term penalizes rapid temporal variation of the control law.
In general, different control perspectives can be assumed: (1) actively modifying the observable or density function, or (2) directly controlling the trajectory informed by the evolution dynamics of the observable or density function, e.g. ensemble control in a drift field.
An illustrative schematic is provided in Fig.~\ref{Fig:ControlPerspectives} using a particle in a field analogy.
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{ActivePassiveParticleControl_v2}
\caption{Controlling actively the observable or density function versus the trajectory in the operator-theoretic framework.}
\label{Fig:ControlPerspectives}
\end{figure}
\subsection{Affine control systems}
A control-affine dynamical system is given by
\begin{equation}\label{Eqn:NonlinearSystemWithControlAffine}
\frac{d}{dt} \mathbf{x}(t) = {\mathbf{F}}(\mathbf{x})+{\mathbf{G}}(\mathbf{x})\mathbf{u},\quad \mathbf{x}(0) = \mathbf{x}_0,
\end{equation}
where ${\mathbf{G}}(\mathbf{x})\mathbf{u}:=\sum_i g_i({\bf x})u_i$ represents mixed terms in ${\bf x}$ and ${\bf u}$.
Above we sought a representation on the extended state $(\mathbf{x},\mathbf{u})$, in which case the spectral properties of the associated operators will contain knowledge on the unforced system with $\mathbf{u}={\bf 0}$. Alternatively, we can consider operators associated with the unforced dynamics $\mathbf{F}$ and how these are affected by the external control input, rendering the representation non-autonomous.
Then, additional terms will arise and inform how control affects observables, densities, and eigenfunctions:
\begin{equation}
\frac{\partial}{\partial t} f(t,\mathbf{x}) = (\mathbf{F}(\mathbf{x})+\mathbf{G}(\mathbf{x})\mathbf{u})\cdot\nabla f(t,\mathbf{x}) =\mathbf{F}(\mathbf{x})\cdot\nabla f(t,\mathbf{x}) + \mathbf{G}(\mathbf{x})\mathbf{u}\cdot\nabla f(t,\mathbf{x}),
\end{equation}
where $\nabla:=\nabla_{\mathbf{x}}$.
We assume Koopman eigenfunctions are associated with the unforced dynamics $\mathbf{F}(\mathbf{x})$.
Then, the smooth Koopman eigenfunctions satisfy
\begin{equation}
\frac{d}{dt} \phi(\mathbf{x}) = \lambda \phi(\mathbf{x}) + \nabla \phi(\mathbf{x})\cdot \mathbf{G}(\mathbf{x})\mathbf{u},
\end{equation}
which renders the dynamics governing the eigenfunctions generally nonlinear.
Only in the special case where $\mathbf{G}(\mathbf{x})$ is a linear function of the state $\mathbf{x}$ does this remain linear. The equation becomes bilinear for quadratic $\mathbf{G}(\mathbf{x})$, for which a rich control literature exists~\cite{elliott2009bilinear}.
The Liouville equation for the density becomes:
\begin{equation}\label{Eqn:AffineSystem:ControlledLiouville}
\frac{\partial }{\partial t} \rho(t,\mathbf{x}) = - \nabla\cdot \left( (\mathbf{F}(\mathbf{x})+\mathbf{G}(\mathbf{x})\mathbf{u}) \rho(t,\mathbf{x})\right)
= - \nabla\cdot \left( \mathbf{F}(\mathbf{x})\, \rho(t,\mathbf{x})\right) - \nabla\cdot \left(\mathbf{G}(\mathbf{x})\rho(t,\mathbf{x})\right) \mathbf{u}
\end{equation}
and correspondingly eigenfunctions satisfy:
\begin{equation}
\frac{d}{dt} \nu(\mathbf{x}) = \lambda \nu(\mathbf{x}) - \nabla\cdot(\mathbf{G}(\mathbf{x})\nu(\mathbf{x}))\mathbf{u}.
\end{equation}
The controlled Liouville equation has also been examined in other contexts, where a scalar observable is advected with the flow, e.g.\ concentration of chemical components or temperature distribution, in cases where diffusion is assumed negligible.
It may be possible that the controller directly injects or removes content to modify the density, e.g.
for carbon removal strategies in the atmosphere or oil spill mitigation in the ocean.
Then, \eqref{Eqn:AffineSystem:ControlledLiouville} is modified to
\begin{equation}
\frac{\partial }{\partial t} \rho(t,\mathbf{x}) = - \nabla\cdot \left( \mathbf{F}(\mathbf{x})\, \rho(t,\mathbf{x})\right) + \sum_{i=1}^{p}\, g_i(\mathbf{x}) u_i,
\end{equation}
where the $g_i(\mathbf{x})$ are functions prescribing the spatial impact of the actuators and the vector field $\mathbf{F}$ remains constant in time.
\section{System identification}
\label{Sec:SystemIdentification}
In this section, we outline data-driven strategies to learn control-oriented representations of dynamical systems in the operator-theoretic framework.
Most approaches rely on dynamic mode decomposition for the Koopman operator and the classical Ulam method for the Perron-Frobenius operator.
Emerging techniques such as machine learning and feature engineering aim to provide generalizations for systems of higher complexity, while exploiting and promoting sparsity often targets interpretability and efficient computation.
A general overview of algorithms for data-driven approximation of the Koopman and the Perron-Frobenius operators has been provided in~\cite{klus2017data} and will not be repeated here.
Here, we discuss approaches that specifically include the effect of control or external parameters.
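For reference, the classical Ulam method mentioned above admits a very compact implementation; in the following Python sketch, the map, the number of boxes, and the sample counts are illustrative assumptions:
\begin{verbatim}
# Ulam's method sketch: approximate the Perron-Frobenius operator
# of a 1D map S on [0,1] by transition frequencies between boxes.
import numpy as np

S = lambda x: 4.0 * x * (1.0 - x)   # logistic map (illustrative)
k = 50                               # number of partition boxes
n_samples = 1000                     # samples per box

P = np.zeros((k, k))
for i in range(k):
    # Sample box i uniformly and record where the points land.
    x = (i + np.random.rand(n_samples)) / k
    j = np.minimum((S(x) * k).astype(int), k - 1)
    np.add.at(P, (i, j), 1.0)
P /= n_samples                       # row-stochastic Ulam matrix

# Leading left eigenvector ~ invariant density (eigenvalue 1).
w, V = np.linalg.eig(P.T)
rho = np.abs(np.real(V[:, np.argmax(np.real(w))]))
rho = rho / rho.sum() * k            # piecewise-constant density
\end{verbatim}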
\subsection{Koopman operator}
\label{Sec:SystemIdentification_KO}
System identification for the Koopman operator can be broadly divided into three main directions: (1) linear models as approximations to the Koopman operator describing the evolution of a, usually large, set of observable functions, (2) (bi)linear models for Koopman eigenfunctions, which constitute a Koopman-invariant subspace, and (3) linear models parametrized by the control input.
We assume a collection of data tuples $(\mathbf{x}_k,\mathbf{x}_k',\mathbf{u}_k)$, where the state $\mathbf{x}_k:=\mathbf{x}(t_k)\in\mathbb{R}^n$, the time-shifted state $\mathbf{x}_k':=\mathbf{x}(t_k+\Delta t)\in\mathbb{R}^n$, and inputs $\mathbf{u}_k:=\mathbf{u}(t_k)\in\mathbb{R}^q$ are sampled at discrete times $t_k$, $k=1,\ldots,m$.
We define a vector-valued observable function ${\bf f}(\mathbf{x}) = [f_1(\mathbf{x}),f_2(\mathbf{x}),\ldots,f_d(\mathbf{x})]^T$ with ${\bf f}:\mathbb{R}^n\rightarrow\mathbb{R}^d$ evaluated on data, which is generally a nonlinear transformation of the state so that $\mathbf{z} = {\bf f}(\mathbf{x})$ embeds the state in a feature space of possibly higher dimension, i.e. $d\gg n$.
\newline
\textbf{Dynamic mode decomposition with control:}
Building on DMD, the control-oriented extension {\it DMD with control} (DMDc) was proposed to disambiguate the effect of control from the natural dynamics in high-dimensional systems~\cite{Proctor2016siads}.
Assuming the availability of full-state measurements, observables are taken as linear measurements of the state, i.e. ${\bf f} = \mathbf{x}$.
Then, the dynamical behavior of the coordinates $\mathbf{z}$ is modeled as $\dot{\mathbf{z}}(t) = \mathbf{A}\mathbf{z}(t) + \mathbf{B}\mathbf{u}(t)$ or
\begin{equation}
{\mathbf{z}}_{k+1} = \mathbf{A}\mathbf{z}_{k} + \mathbf{B}\mathbf{u}_{k}
\end{equation}
in discrete time, which makes standard linear control techniques readily applicable.
The dimensionality of the problem is determined by the dimension of the state. However, many systems evolve on a low-dimensional attractor so that the singular value decomposition (SVD) can be used to solve the problem efficiently. Further, DMDc has been combined with compressed DMD~\cite{Brunton2015jcd} for compressive system identification~\cite{bai2017aiaa}.
While DMDc is feasible for high-dimensional systems with $n\ll m$, such as fluids that may have millions or billions of degrees of freedom, linear measurements are known to not be sufficiently rich to capture strongly nonlinear behavior, motivating the use of nonlinear observables.
Nevertheless, recent work~\cite{Kaiser2018prsa} indicates that even models with relatively little predictive power may still be sufficient for model predictive control schemes (see also Sec.~\ref{Sec:ControlExamples}).
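In its basic form, DMDc reduces to a single least-squares problem; the following Python sketch uses synthetic snapshot data purely for illustration:
\begin{verbatim}
# DMDc sketch: fit z_{k+1} = A z_k + B u_k from snapshot matrices.
import numpy as np

def dmdc(Z, Zp, U):
    """Z, Zp: n x m snapshots (Zp time-shifted); U: q x m inputs.
    Returns A (n x n) and B (n x q) via one least-squares solve."""
    Omega = np.vstack([Z, U])          # stacked data matrix
    G = Zp @ np.linalg.pinv(Omega)     # [A B] = Z' Omega^+
    n = Z.shape[0]
    return G[:, :n], G[:, n:]

# Illustration: data from a known linear system recovers (A, B).
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])
Z = rng.standard_normal((2, 200))
U = rng.standard_normal((1, 200))
Zp = A_true @ Z + B_true @ U
A, B = dmdc(Z, Zp, U)                  # close to A_true, B_true
\end{verbatim}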
\newline
\textbf{Extended DMD with control:} Extended DMD (eDMD) has been proposed as a generalization of DMD employing a dictionary of nonlinear functions as observables~\cite{Williams2015jnls}. DMD is then equivalent to eDMD with the special set of observables corresponding to linear monomials, ${\bf f}(\mathbf{x}) = [x_1,x_2,\ldots,x_n]^T$.
In domains where such a local linearization would be insufficient, more complex basis functions, such as high-order monomials, Hermite polynomials, or radial basis functions, can be employed.
The number of observables typically exceeds the state dimension, i.e.\ $d\gg n$; however, the algorithm can be reformulated to scale with the number of samples instead of the number of features/states.
eDMD has also been extended for nonlinear system identification, which we refer to as {\it eDMD with control} (eDMDc)~\cite{williams2016ifac}, and been combined with MPC~\cite{Korda2016arxiv,Korda2018arxiv}.
By transforming the system so that the dynamics evolve linearly, linear MPC methods become readily applicable, which require solving a convex quadratic program instead of a non-convex program as in nonlinear MPC. In~\cite{Korda2016arxiv} the MPC optimization is further reformulated to scale with the dimension of the state $n$ and not with the number of observables $d$, which is advantageous if $d \gg n$.
However, the quality of the resulting model and its prediction is highly sensitive to the choice of function basis.
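In a sketch, eDMDc changes the DMDc regression only by first lifting the state through a dictionary; the monomial basis below is an illustrative choice, not a recommendation:
\begin{verbatim}
# eDMDc sketch: lift the state with a nonlinear dictionary, then
# perform the same least-squares regression as in DMDc.
import numpy as np

def lift(X):
    """Monomials up to degree 2 of a 2D state (illustrative)."""
    x1, x2 = X
    return np.vstack([x1, x2, x1**2, x1 * x2, x2**2])

def edmdc(X, Xp, U):
    Z, Zp = lift(X), lift(Xp)          # observables z = f(x)
    Omega = np.vstack([Z, U])
    G = Zp @ np.linalg.pinv(Omega)     # z_{k+1} ~ A z_k + B u_k
    d = Z.shape[0]
    return G[:, :d], G[:, d:]
\end{verbatim}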
\newline
\textbf{Time-delay coordinates and DMDc:} Often we do not have access to the full state, but instead can only measure a single or few variables. Then time-delay coordinates, i.e. ${\bf f}(\mathbf{x}(t)) = [x(t),x(t-1\tau),x(t-2\tau),\ldots,x(t-(d-1)\tau)]$ with time delay $\tau$, have been proven to be useful for analyzing and modeling dynamics.
Takens' embedding theorem~\cite{Takens1981lnm}
provides conditions under which the original system can be faithfully reconstructed in time-delay coordinates, providing a diffeomorphism between the two systems.
Recently, time-delay coordinates have been shown to provide a Koopman-invariant subspace and used to model unforced systems with bimodal or bursting behavior as linear systems closed by an intrinsic forcing term~\cite{Brunton2017natcomm}.
This formulation is achieved by performing DMD on the delay coordinates arranged in a Hankel matrix~\cite{Brunton2017natcomm,Arbabi2016arxiv}.
Hankel matrix based system identification has been used for decades in the eigensystem realization algorithm (ERA)~\cite{ERA:1985}.
Assuming sample pairs $(x_k,u_k)$ from a single long trajectory $x(t)$ generated by a continuous system with a single control input $u(t)$,
the vector-valued observable is given by $\mathbf{z}_k = {\bf f}(\mathbf{x}_k) = [x_k, x_{k-1}, x_{k-2},\ldots, x_{k-d_1+1}]$ where $f_{i}(x_{k})=x_{k-(i-1)}$ and $\mathbf{v}_k = [u_k, u_{k-1}, u_{k-2},\ldots, u_{k-d_2+1}]$.
The embedding dimensions $d_1=d_2=d$ are assumed equal for simplicity.
The system is then given by
\begin{subequations}\label{Eqn:KM:TDC_MISO}
\begin{align}
{\mathbf{z}}_{k+1} &= \mathbf{A}{\mathbf{z}}_k + \mathbf{B} \mathbf{v}_k\\
\mathbf{x}_{k} &= \begin{bmatrix}
1 & 0 & \ldots & 0
\end{bmatrix}{\mathbf{z}}_k
\end{align}
\end{subequations}
where the state is recovered from the first component of $\mathbf{z}$.
The control matrix $\mathbf{B}$ must have a lower-triangular structure so as not to violate causality.
If the number of time-delay coordinates is large, an SVD can be applied and the model is built on the first $r$ eigen-time delay coordinates.
It can be advantageous to embed the full state $\mathbf{x}$ if available instead of just a single component, as more information is used to improve prediction accuracy.
In Eq.~\eqref{Eqn:KM:TDC_MISO} the control history appears as a separate vector of control inputs; it can also be useful to augment the state with the past control values so that the current actuation value appears as single control input:
\begin{subequations}\label{Eqn:KM:TDC_SISO}
\begin{align}
\hat{\mathbf{z}}_{k+1} =
\begin{bmatrix}
{\mathbf{z}}_k\\
u_{k-1}\\
\vdots\\
u_{k-d+1}
\end{bmatrix}_{k+1} &=
\left[\begin{array}{c|c}
\mathbf{A} & \mathbf{B}_{[2,d]} \\
\hline
{\bf 0} & \begin{bmatrix}
0 & {\bf 0}\\
{\bf 0} & {\bf I}
\end{bmatrix}
\end{array}\right]
\begin{bmatrix}
{\mathbf{z}}_k\\
u_{k-1}\\
\vdots\\
u_{k-d+1}
\end{bmatrix}_{k} +
\begin{bmatrix}
\mathbf{B}_{[1]} \\ 1 \\ 0 \\ \vdots \\0
\end{bmatrix}
u_k\\
\mathbf{x}_{k} &= \begin{bmatrix}
1 & 0 & \ldots & 0
\end{bmatrix}
\hat{\mathbf{z}}_k,
\end{align}
\end{subequations}
where ${\bf I}$ denotes the $(d-2)\times (d-2)$ identity matrix and $\mathbf{B}_{[a,b]}$ denotes columns $a$ through $b$ of $\mathbf{B}$.
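A minimal sketch of the delay-embedding step is given below, assuming scalar measurement and input series and reusing the regression sketched earlier; the helper name is illustrative:
\begin{verbatim}
import numpy as np

def delay_embed(x, d):
    """Stack d delays of a scalar series into columns [x_k,...,x_{k-d+1}]."""
    m = len(x) - d + 1
    return np.vstack([x[d - 1 - i : d - 1 - i + m] for i in range(d)])

# Z  = delay_embed(x, d)[:, :-1]    # z_k
# Zp = delay_embed(x, d)[:, 1:]     # z_{k+1}
# V  = delay_embed(u, d)[:, :-1]    # v_k
# A, B = dmdc(Z, Zp, V)             # regression as before
\end{verbatim}
An SVD of the resulting Hankel matrix then yields the eigen-time-delay coordinates on which a reduced model can be built, as described above.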
\newline
\textbf{Control in eigenfunction coordinates:} Eigenfunctions of the Koopman operator provide a natural set of intrinsic observables as these themselves behave linearly in time and correspond to global properties of the system.
Particularly, control affine systems Eq.~\eqref{Eqn:NonlinearSystemWithControlAffine} have been studied in this context.
The general approach relies on estimating eigenfunctions associated with the autonomous system and then identifying how these are affected by the control input.
The observables are then given by a set of eigenfunctions $\mathbf{z} = {\bf f}(\mathbf{x}) = [\phi_1(\mathbf{x}), \ldots, \phi_r(\mathbf{x})]$. Their continuous-time dynamics are given by
\begin{equation}\label{Eqn:KM:KRONIC}
\dot{\mathbf{z}} = (\mathbf{F}(\mathbf{x})+\mathbf{G}(\mathbf{x})\mathbf{u})\cdot\nabla_{\mathbf{x}}{\bf f}(\mathbf{x}) = \mathbf{A} \mathbf{z} + \mathbf{B}(\mathbf{z}) \mathbf{u},
\end{equation}
where $\mathbf{A}\mathbf{z} := \mathbf{F}(\mathbf{x})\cdot\nabla_{\mathbf{x}}{\bf f}(\mathbf{x})$ with $\mathbf{A} = \boldsymbol{\Lambda} := \mathrm{diag}(\lambda_1,\ldots,\lambda_r)$
and $\mathbf{B}(\mathbf{z})\mathbf{u} := (\mathbf{G}(\mathbf{x})\mathbf{u})\cdot\nabla_{\mathbf{x}}{\bf f}(\mathbf{x})$.
For $\mathbf{u} = {\bf 0}$, this constitutes a Koopman-invariant subspace for the unforced system; otherwise, the control term $\mathbf{B}(\mathbf{z})$ prescribes how these eigenfunctions are modified by $\mathbf{u}$.
The advantage of this formulation is that the dimension of the system scales with the number of eigenfunctions, and we are often interested in a few dominant eigenfunctions associated with persistent dynamics.
This formulation has been examined for state estimation and observer design~\cite{Surana2016cdc,surana2017cdc} and data-driven control~\cite{Kaiser2017arxiv}.
In general, the eigenfunctions can be identified using eDMD, kernel-DMD or other variants, and the model~\eqref{Eqn:KM:KRONIC} may be well represented as long as their span contains $\mathbf{F}(\mathbf{x})$, $\mathbf{G}(\mathbf{x})$, and $\mathbf{x}$~\cite{surana2017cdc}.
However, the eigenfunctions obtained from DMD/eDMD may be spurious, i.e.\ they do not behave linearly.
For instance, noise can produce corrupt, physically irrelevant but energetically important, unstable eigenfunctions.
An alternative~\cite{Kaiser2017arxiv} seeks a functional representation of the eigenfunctions in a dictionary of basis functions $\boldsymbol{\Theta}(\mathbf{x}) = [\theta_1(\mathbf{x}),\theta_2(\mathbf{x}),\ldots,\theta_p(\mathbf{x})]$ so that $\phi(\mathbf{x})\approx \sum_{i=1}^p\theta_i(\mathbf{x})\xi_i = \boldsymbol{\Theta}(\mathbf{x})\boldsymbol{\xi}$.
A sparsity constraint on the vector of coefficients $\boldsymbol{\xi}\in\mathbb{R}^p$ can then be used to identify an analytical expression of the eigenfunction. This approach is restricted to smooth eigenfunctions in the point spectrum of the Koopman operator and results in an optimization problem that seeks, for a given eigenvalue $\lambda$, a sparse vector $\boldsymbol{\xi}$ satisfying $\left(\dot{\mathbf{x}}\cdot\nabla_{\mathbf{x}} \boldsymbol{\Theta}(\mathbf{x}) - \lambda \boldsymbol{\Theta}(\mathbf{x})\right)\boldsymbol{\xi} = {\bf 0}$, i.e.\ in the nullspace of this matrix.
This requires an accurate estimation of the time derivative of $\mathbf{x}$ and is sensitive to noise. The control term $\mathbf{B}(\mathbf{z})$ can either be determined from the gradient of the identified eigenfunctions for given $\mathbf{G}(\mathbf{x})$ or identified directly~\cite{KaKuBr2018arxiv} for unknown $\mathbf{G}(\mathbf{x})$ by similarly expanding it in terms of a dictionary, $\mathbf{G}(\mathbf{x}) \approx \sum_{i=1}^q \psi_i(\mathbf{x}) \eta_i$.
The approach can be further extended to nonaffine control systems, for which control-dependent eigenfunctions may be considered, using a library on the extended state, i.e.\ $\boldsymbol{ \Theta}(\mathbf{x},\mathbf{u})$.
This data-driven characterization has been combined with SDRE~\cite{Kaiser2017arxiv} and MPC~\cite{KaKuBr2018arxiv} for controlling the underlying dynamical system.
The cost functional generally needs to be formulated in eigenfunction coordinates as discussed in the previous section.
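A simplified sketch of the eigenfunction identification step is given below; it computes an approximate nullspace vector by SVD and omits the sparsity-promoting iterations of~\cite{Kaiser2017arxiv}, so it should be read as illustrative only:
\begin{verbatim}
import numpy as np

def eigfun_coeffs(Theta, dTheta_dt, lam):
    """Approximate xi with (dTheta/dt - lam Theta) xi = 0.

    Theta     : (m, p) dictionary evaluated at m samples (m >= p assumed)
    dTheta_dt : (m, p) rows of (dx/dt . grad Theta)(x_k)
    lam       : candidate eigenvalue
    """
    M = dTheta_dt - lam * Theta
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    xi = Vt[-1]        # right singular vector of smallest singular value
    return xi / np.max(np.abs(xi))
\end{verbatim}
A subsequent thresholding of small coefficients would then recover the sparse, analytical representation discussed above.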
\newline
\textbf{Parameterized linear models: } Another class of linear parameter-varying models has been considered to disambiguate the approximation of the Koopman operator of the unforced system from the effect of exogenous forcing~\cite{williams2016ifac} and for model predictive control of switched systems~\cite{Peitz2017arxiv}.
In particular, a set of linear models parametrized by the control input is considered as an approximation to the non-autonomous Koopman operator:
\begin{equation}\label{Eqn:KM:ParametrizedKoopman}
{\mathbf{z}}_{k+1} = \mathbf{A} (\mathbf{u}) \mathbf{z}_k.
\end{equation}
For discrete-valued inputs $\mathbf{u}_i\in\{\mathbf{u}_1,\ldots,\mathbf{u}_q \}$, a finite set of matrices defined as $\{\mathbf{A}_i:=\mathbf{A}(\mathbf{u}_i)\}_{i=1}^q$ may be estimated, in the simplest case with DMD or a variant by determining these for each discrete control input separately.
This replaces the non-autonomous Koopman operator with a set of autonomous Koopman operators, each element associated with a constant control input, as considered in~\cite{Proctor2017siads}.
A more robust way that generalizes to continuous input data has been proposed in~\cite{williams2016ifac}, where $\mathbf{A}(\mathbf{u})$ is expanded in terms of basis functions $\psi_i:\mathbb{U}\rightarrow\mathbb{R}$, as in eDMD for the state variable, so that $\mathbf{A} (\mathbf{u}) = \sum_{k=1}^{d} \psi_k(\mathbf{u}) \mathbf{A}_k$. Then, for a given value of $\mathbf{u}$, the matrix $\mathbf{A}(\mathbf{u})$ is constructed from the coefficient matrices $\mathbf{A}_k$ and its spectral properties can be examined. Since eDMD may be prone to overfitting, the estimation problem can be regularized using a group sparsity penalty~\cite{simon2012standardization}.
Determining an optimal control sequence for Eq.~\eqref{Eqn:KM:ParametrizedKoopman} is a combinatorial problem, which can be efficiently solved for low-dimensional problems using iterative algorithms from dynamic programming~\cite{Bertsekas2005book}.
In~\cite{Peitz2017arxiv}, the MPC problem for the sequence of control inputs is transformed into a time switching optimization problem assuming a fixed sequence of consecutive discrete control inputs. The model can further be modified as a bilinear continuous, piecewise-smooth variant with constant system matrices, which does not suffer from the curse of dimensionality and allows for continuous control inputs via linear interpolation~\cite{peitz2018feedback}.
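For discrete-valued inputs, the simplest estimation strategy mentioned above can be sketched as one DMD regression per input value; this is a minimal sketch, not the regularized estimator of~\cite{williams2016ifac}:
\begin{verbatim}
import numpy as np

def switched_dmd(X, Xp, U, inputs):
    """Fit one model z_{k+1} = A_i z_k per discrete input value u_i.

    X, Xp  : (n, m) current and shifted snapshots
    U      : (m,) input applied at each step
    inputs : admissible discrete input values
    """
    return {ui: Xp[:, U == ui] @ np.linalg.pinv(X[:, U == ui])
            for ui in inputs}
\end{verbatim}
Each returned matrix can then be analyzed spectrally or used within the switching-time optimization discussed above.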
\subsection{Perron-Frobenius operator}
\label{Sec:SystemIdentification_PFO}
A classical method to approximate the Perron-Frobenius operator is the Ulam-Galerkin method, a particular Galerkin method where the test functions are characteristic functions. Given an equipartition of the phase space into disjoint sets $\{B_1,\ldots,B_d\}$ and data, the action of the operator can be approximated as
\begin{equation}\label{Eqn:TransitionMatrix}
\mathbf{P}_{ij}^{\tau} = \frac{\mathrm{card}\left(\{ \mathbf{x}_k \,\vert\, \mathbf{x}_k \in B_j \wedge \mathbf{F}^{\tau}(\mathbf{x}_k) \in B_i \}\right)}{\mathrm{card}\left(\{ \mathbf{x}_k \,\vert\, \mathbf{x}_k \in B_j \}\right)},
\end{equation}
where $\mathbf{F}^{\tau}$ is the flow map and $\tau$ is the time duration.
The $\tau$-step stochastic transition matrix acts on a discrete probability vector $\mathbf{p}=[p_1,\ldots,p_d]^T$ and satisfies the properties $\sum_{i=1}^{d}P_{ij}=1$ and $0\leq P_{ij}\leq 1$. A widely used software package based on set-oriented methods is GAIO~\cite{dellnitz2001gaio}.
Each element $B_i$ can be associated with a distinct symbol. The dynamics are then propagated on the partition and can be analyzed in terms of the sequence of symbols.
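A minimal sketch of the Ulam--Galerkin estimate is given below for a one-dimensional state; all samples are assumed to fall inside the partition, and the binning is our illustrative choice:
\begin{verbatim}
import numpy as np

def ulam_matrix(X, Xp, edges):
    """Estimate the transition matrix on a 1D partition from sample pairs.

    X, Xp : (m,) samples x_k and their images F(x_k)
    edges : bin edges defining the partition {B_1, ..., B_d}
    """
    j = np.digitize(X, edges) - 1        # source boxes
    i = np.digitize(Xp, edges) - 1       # target boxes
    d = len(edges) - 1
    P = np.zeros((d, d))
    np.add.at(P, (i, j), 1.0)            # count transitions B_j -> B_i
    col = P.sum(axis=0)
    P[:, col > 0] /= col[col > 0]        # column-stochastic normalization
    return P
\end{verbatim}
In higher dimensions, set-oriented partitions such as those provided by GAIO replace the simple binning used here.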
The approach has been generalized to PDEs~\cite{vaidya2009cdc,Kaiser2014jfm}, i.e. where the evolution of a pdf $p(\mathbf{u},t)$ of a spatiotemporal vector field $\mathbf{u}(\mathbf{x},t)$ is of interest. After expanding the vector field $\mathbf{u}(\mathbf{x},t) = \sum_i a_i(t)\mathbf{u}_i(\mathbf{x})$, e.g. using POD, Ulam's method can be applied to the $a_i$'s.
Other data-driven methods to approximate the Perron-Frobenius operator include blind source separation, the variational approach for conformation dynamics (VAC)~\cite{noe2013variational}
and DMD-based variants exploiting the duality between Koopman and Perron-Frobenius operators, e.g. naturally structured DMD (NSDMD)~\cite{huang2017arxiv} and constrained Ulam dynamic mode decomposition~\cite{goswami2018csl}.
Data-driven control based on the Perron-Frobenius operator appears more challenging as it relies on good statistical estimates requiring large amounts of data.
In practice, methods often use a Monte-Carlo sampling to estimate the transition probabilities, which suffers from the curse of dimensionality.
However, it is particularly appealing as it provides a natural framework to incorporate uncertainties and transients, allows the control of ensembles and mixing properties of the underlying dynamical system, and yields a more global controller.
\newline
\textbf{Parametrized models: }
A probabilistic model that disambiguates the unforced dynamics from the effect of actuation is commonly realized by a parametrized representation.
Incorporating the effect of control can be achieved via a simple extension of Ulam's method by considering a discretized control input that parametrizes the transition matrix in~\eqref{Eqn:TransitionMatrix}:
\begin{equation}\label{Eqn:ControlTransitionMatrix}
\mathbf{P}_{ij}^{u_l} = \frac{\mathrm{card}\left(\{ \mathbf{x}_k \,\vert\, \mathbf{x}_k \in B_j \wedge \mathbf{F}^{\tau}(\mathbf{x}_k,\mathbf{u}_l) \in B_i \}\right)}{\mathrm{card}\left(\{ \mathbf{x}_k \,\vert\, \mathbf{x}_k \in B_j \}\right)},
\end{equation}
where each $\mathbf{P}(\mathbf{u}_l):=\mathbf{P}_{ij}^{u_l}$ satisfies the normalization and positivity properties.
The dynamics of the probability vector can then be expressed as
\begin{equation}\label{Eqn:DSDT-MarkovModel}
\mathbf{p}_{k+1} = \mathbf{P}(\mathbf{u}_l) \mathbf{p}_k.
\end{equation}
Approximations for~\eqref{Eqn:ControlTransitionMatrix} have been obtained employing set-oriented methods~\cite{das2017acc} and from NSDMD using Gaussian radial basis functions~\cite{das2018arxiv}; these are then used to derive optimal feedback control strategies.
The model~\eqref{Eqn:DSDT-MarkovModel} can be derived from a discretization of the Liouville equation in space, time, and control~\cite{kaiser2017tcfd} and represents a control-dependent Markov chain.
The corresponding control problem can be associated with a Markov decision process (MDP) and appears in discrete optimal control~\cite{Bertsekas2005book}.
Therein, it is used as a model to predict the expectation of the value function in dynamic programming, without reference to the Liouville equation.
Building on these ideas, cluster-dependent feedback control has been developed by examining the spectral properties of the control-dependent transition matrix family~\cite{kaiser2017tcfd}, where state discretization of a fluid from high-dimensional snapshot data is achieved using POD and k-means clustering.
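To make the connection to Markov decision processes concrete, a minimal dynamic-programming sketch for the control-dependent transition matrices is given below (a generic value iteration, not the specific algorithm of the cited works; the discount factor is our assumption):
\begin{verbatim}
import numpy as np

def value_iteration(P, cost, gamma=0.95, tol=1e-8):
    """Value iteration for a control-dependent Markov chain p_{k+1}=P(u)p_k.

    P    : dict u -> (d, d) column-stochastic transition matrix
    cost : dict u -> (d,) stage cost per discrete state
    """
    d = next(iter(P.values())).shape[0]
    V = np.zeros(d)
    while True:
        Q = {u: cost[u] + gamma * P[u].T @ V for u in P}   # cost-to-go
        Vn = np.min(np.stack(list(Q.values())), axis=0)
        if np.max(np.abs(Vn - V)) < tol:
            break
        V = Vn
    policy = {s: min(P, key=lambda u: Q[u][s]) for s in range(d)}
    return V, policy
\end{verbatim}
The resulting lookup-table policy assigns one discrete input to each partition element, i.e.\ a cluster-dependent feedback law of the type discussed above.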
\newline
\textbf{Multiplicative models: }
Alternatively, the effect of control may be encoded in a separate stochastic matrix $\mathbf{P}^u$ approximating the action of a stochastic kernel:
\begin{equation}
\mathbf{p}_{k+1} = \mathbf{P} \mathbf{P}^u\mathbf{p}_k,
\end{equation}
which recovers the unforced dynamics when $\mathbf{P}^u = {\bf I}$ is the identity matrix.
This model appears, e.g., in the context of optimizing mixing properties~\cite{froyland2016siam,froyland2016arxiv}, where $\mathbf{P}$ represents advection via the Perron-Frobenius operator and $\mathbf{P}^u$ represents the discretized diffusion kernel to be optimized.
In~\cite{mehta2008tac}, limitations of nonlinear stabilization are examined when $\mathbf{P}^u = \mathbf{P}^{1+K}$ encodes a state feedback law.
\newline
\textbf{Additive models: }It may also be possible to formulate additive models, which exhibit some similarity to traditional state-space models, of the form
\begin{equation}\label{Eqn:PFO_additive}
\mathbf{p}_{k+1} = (\mathbf{P} + \mathbf{P}^u)\mathbf{p}_{k},
\end{equation}
where additional constraints are necessary, so that $\mathbf{P} + \mathbf{P}^u$ satisfies the positivity and conservation constraints from probability. The matrix $\mathbf{P}^u$ may be interpreted as a disturbance to the unforced transition probabilities $\mathbf{P}$.
It may also be possible to represent $\mathbf{P}^u$ as $\mathbf{P}^c\mathbf{u}_k$, where $\mathbf{P}^c$ is a constant matrix and $\mathbf{u}_k$ is the control input.
A similar formulation to~\eqref{Eqn:PFO_additive} incorporating control appears from a state-discretization of the controlled Liouville equation as outlined in~\cite{Kaiser2014jfm}, resulting in a discrete-state, continuous-time Markov model.
Variations of this type of model have been used to study optimal response of Markov chains~\cite{antown2018arxiv} and to control chaos~\cite{bollt2000ijbc}.
\newline
\textbf{Remark: }
Until recently, the focus on control in the Perron-Frobenius and Liouville operator framework has been mostly analytic and theoretical. Further, most problems studied have been low-dimensional.
On the other hand, there exists an extensive literature on probabilistic models for control,
which can benefit the operator-theoretic framework but which have not been fully explored yet.
Examples include Hidden Markov models, see~\cite{alpcan2006cdc,yu2007cdc} in the context of the Perron-Frobenius and Fokker-Planck operators, and partial-observation Markov decision processes (POMDP)~\cite{aastrom1965jmaa}.
\subsection{Numerical example}
\label{Sec:ControlExamples}
We compare the effectiveness of model predictive control for different DMDc-based models, which differ in their choice of dictionary functions.
The van der Pol oscillator is considered as an illustrative example:
\begin{equation}
\frac{d^2 x}{d t^2} - \mu(1-x^2)\frac{d x}{d t} + x = u
\end{equation}
with $\mu = 0.2$ and where $u$ is the external control input.
In the following, the state vector is defined as $\mathbf{x}:=[x_1,x_2]^T=[x,dx/dt]^T$.
The training data consists of 200 trajectories in the box $[-6,\, 6]\times [-6,\, 6]$ integrated until $T=1$ with timestep $\Delta t=0.05$, i.e. 4000 sample points in total. The forcing $u(t) = 5 \sin(|\omega_1| t)\sin(|\omega_2| t)$, where $\omega_i\sim\mathcal{N}(0,10)$, is applied for the identification task.
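For reproducibility, a sketch of this data-generation protocol is given below; the random seed and the interpretation of $\mathcal{N}(0,10)$ as a standard deviation of 10 are our assumptions:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

mu, dt, T = 0.2, 0.05, 1.0
rng = np.random.default_rng(0)

def vdp(t, x, w1, w2):
    u = 5 * np.sin(abs(w1) * t) * np.sin(abs(w2) * t)   # chirp-like forcing
    return [x[1], mu * (1 - x[0] ** 2) * x[1] - x[0] + u]

trajectories = []
for _ in range(200):
    x0 = rng.uniform(-6, 6, size=2)         # initial condition in the box
    w1, w2 = rng.normal(0, 10, size=2)      # random forcing frequencies
    sol = solve_ivp(vdp, (0, T), x0, args=(w1, w2),
                    t_eval=np.arange(0, T + dt / 2, dt), rtol=1e-8)
    trajectories.append(sol.y)              # 21 snapshots per trajectory
\end{verbatim}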
The models under consideration include DMDc on the state $\mathbf{x}$, eDMDc using monomials up to order $5$, and delayDMDc with embedding dimension $d=5$, as compared in Fig.~\ref{Fig:MPC-Example}.
For the control optimization, the weight matrix $Q = (\begin{smallmatrix}
1 & 0\\ 0 &1
\end{smallmatrix})$, scalar weights $R_u = 0.1$ and $R_{\Delta u}=0.1$, and input constraints $-5<u<5$, $-50<\Delta u<50$ are employed. The prediction/control horizon is $T = 0.75 = 15\,\Delta t$.
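The corresponding linear MPC step on a lifted model can be sketched as a convex quadratic program; the following uses cvxpy for readability and is an illustrative formulation under the stated weights, not the solver setup of the cited works:
\begin{verbatim}
import cvxpy as cp
import numpy as np

def mpc_step(A, B, C, z0, xref, uprev, N=15,
             Q=np.eye(2), Ru=0.1, Rdu=0.1, umax=5.0, dumax=50.0):
    """One MPC step for z_{k+1} = A z_k + B u_k, x_k = C z_k."""
    z = cp.Variable((A.shape[0], N + 1))
    u = cp.Variable(N)
    cost, cons = 0, [z[:, 0] == z0]
    for k in range(N):
        du = u[k] - (uprev if k == 0 else u[k - 1])
        cost += cp.quad_form(C @ z[:, k + 1] - xref, Q) \
                + Ru * cp.square(u[k]) + Rdu * cp.square(du)
        cons += [z[:, k + 1] == A @ z[:, k] + u[k] * B[:, 0],
                 cp.abs(u[k]) <= umax, cp.abs(du) <= dumax]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[0]   # apply first input, then re-solve (receding horizon)
\end{verbatim}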
The models are evaluated on a test dataset that is different from the training set, visualized for a subset of trajectories in Fig.~\ref{Fig:MPC-Example}(a).
The smallest error is achieved by eDMDc, followed by delayDMDc and DMDc, with the latter exhibiting a large deviation from the actual evolution.
MPC control results are displayed for $\mu = 0.2$ in Fig.~\ref{Fig:MPC-Example}(b)-(c).
Three main observations can be made:
(1) Despite the prediction inaccuracies, all three models are able to successfully control the system;
(2) the best performance is achieved with eDMDc and delayDMDc, with delayDMDc generally performing slightly better;
(3) the farther away the initial condition is from the fixed point to be stabilized, the worse the performance for DMDc (compare results in Fig.~\ref{Fig:MPC-Example}(d)).
It is important to point out that the specific control performance is highly sensitive to the chosen prediction horizon, weights, and the forcing for the training data; however, delayDMDc appears to be most robust.
As observed here and in previous work~\cite{Kaiser2018prsa}, superior prediction performance may be overrated in combination with predictive control schemes: the overall performance rests more on the robustness of MPC than on the specific model choice, due to the repeated initialization with the updated measurement vector and the sufficiently short prediction horizon.
\begin{figure}[tb]
\centering
\begin{overpic}[width=\linewidth]{EX_VANDERPOL_02_PredictionTraining}
\put(-2,4){(a)}
\put(5.5,5){\colorbox{white}{Validation data}}
\put(33,24){\colorbox{white}{DMDc}}
\put(57,24){\colorbox{white}{eDMDc}}
\put(80,24){\colorbox{white}{delayDMDc}}
\end{overpic}\\
\begin{overpic}[width=\linewidth]{EX_VANDERPOL_02_ControlResultsEnsemble}
\put(-2,4){(b)}
\put(8,5){\colorbox{white}{Unforced}}
\put(32,5){\colorbox{white}{Controlled}}
\end{overpic}\\
\begin{overpic}[width=\linewidth]{EX_VANDERPOL_02_ControlResults}
\put(-2,4){(c)}
\end{overpic}
\\
\begin{overpic}[width=\linewidth]{EX_VANDERPOL_02_ControlResultsDOA}
\put(-2,4){(d)}
\end{overpic}
\caption{Van der Pol oscillator for $\mu = 0.2$:
(a) validation data and prediction (colored by control input),
(b) unforced and forced phase plots with MPC for initial conditions of the validation dataset (colored by control input),
(c) cumulative cost of subset of the trajectories and a specific example trajectory, and
(d) color-coded initial conditions with respect to converged control cost.
}
\label{Fig:MPC-Example}
\end{figure}
\section{Control theory in the operator-theoretic framework}
\label{Sec:Control}
In this section, we review control-theoretic advances using transfer operators and their connections to classical concepts in control such as stability analysis and observability.
\subsection{Stability Analysis}\label{Sec:StabilityAnalysis}
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{Stability_v3}
\caption{Classical stability analysis and its counterpart within the operator-theoretic framework. The duality between the Lyapunov function and Lyapunov density~\cite{rantzer2001scl} can be derived from the duality of the adjoint transport operators. Operators $U^c$ and $P^c$ are restricted to the corresponding function space with support on $\mathbb{X}\backslash \mathbb{A}$, where $\mathbb{A}$ is the attractor.}
\label{Fig:StabilityAnalysis}
\end{figure}
Analyzing the stability of dynamical systems is an important aspect of control theory, e.g. in order to develop stabilizing controllers.
In particular, the Lyapunov function plays an essential role in the global stability analysis and control design of nonlinear systems.
For stable, linear systems, the Lyapunov function is the positive solution of a corresponding Lyapunov equation.
In contrast, finding and constructing a Lyapunov function poses a severe challenge for general nonlinear systems.
There has been important progress within the operator-theoretic context for nonlinear stability analysis as spectral properties of the operators can be associated with geometric properties such as Lyapunov functions and measures, contracting metrics, and isostables.
Some of this work relates back to a weaker notion of stability in terms of a density function as introduced by A. Rantzer~\cite{rantzer2001scl}.
In particular, he introduces the concept of asymptotic stability in an almost everywhere (a.e.) sense, which is guaranteed through the existence of a density function.
Around the same time, it was shown that Lyapunov functions can be constructed using a suitable choice of polynomials, resulting in a linear problem formulation~\cite{Parrilo2000phd}.
These ideas were later combined in a convex, linear formulation based on the discretization of the density function via polynomials, to jointly synthesize the density function and state feedback.
The crucial connection of this line of work to the Perron-Frobenius operator was made by U. Vaidya and P. Mehta~\cite{vaidya2006ccc}, who demonstrated that these ideas of duality and linearity can be expressed using methods from ergodic theory and that the Lyapunov measure represents the dual of the Lyapunov function, including its relationship to the Koopman operator.
Spectral properties, specifically invariant sets, are then used to determine stability properties. The Lyapunov measure is closely related to the density function introduced in~\cite{rantzer2001scl} and captures the weaker a.e. notion of stability.
Analogously to the Lyapunov equation, the solution to the Lyapunov measure equation provides necessary and sufficient conditions for a.e. stability of nonlinear systems.
The Lyapunov equation and Lyapunov measure have been connected to the Koopman and Perron-Frobenius operator, respectively, e.g. providing an explicit formula for the Lyapunov measure in terms of the Perron-Frobenius operator, which generalizes stability analysis to invariant sets of nonlinear systems~\cite{vaidya2007acc}.
Set-oriented methods have been used for solving the Lyapunov measure equation leading to a linear program~\cite{vaidya2008tac}.
A weaker notion of {\it coarse stability} has been further introduced to characterize the stability properties obtained from the finite-dimensional approximation.
A formulation for the infinitesimal generator of the Perron-Frobenius operator, the Liouville operator, has also been proposed and can be solved numerically using the finite-element method~\cite{rajaram2010jmaa}.
In particular, for a stable fixed point $\mathbf{x}^{*}\in\mathbb{X}$, a Lyapunov measure (or density) $\mu$ satisfies a.e.
\begin{equation}
L_P [\mu] (\mathbf{x})<0\quad\forall \mathbf{x}{\not =} \mathbf{x}^{*},
\end{equation}
which corresponds to $\nabla\cdot (\mathbf{F}(\mathbf{x})\, \mu(\mathbf{x}))>0$ as originally introduced in~\cite{rantzer2001scl}.
Thus, the Lyapunov measure (density) decays a.e. under the action of the Liouville operator and is related to its spectral properties.
These ideas have further been extended to stochastic systems~\cite{vaidya2015acc,vaidya2015cdc}, used to compute the domain of attraction~\cite{wang2010cdc}, and employed for control~\cite{vaidya2010ieee,raghunathan2014ieee}.
The Koopman operator has also been used to study nonlinear stability.
Certain sets of points in phase space that exhibit the same asymptotic behavior are referred to as isostables and play an important role in characterizing transient behavior.
Isostables correspond to level sets of a particular Koopman eigenfunction and can be computed in the entire basin of attraction using, e.g., Laplace averages~\cite{mauroy2013pd}.
Building on~\cite{vaidya2006ccc}, it has been further shown~\cite{mauroy2013pd,mauroy2013cdc} that special Lyapunov functions from classical stability theory can be determined from Koopman eigenfunctions and a sufficient condition for global stability is the existence of stable eigenfunctions.
As with the Lyapunov measure, it can be shown that for a stable fixed point there exists
a special, nonnegative observable $V(\mathbf{x})$ that satisfies:
\begin{equation}
L_U [V](\mathbf{x})<0\quad\forall \mathbf{x}{\not =} \mathbf{x}^{*}.
\end{equation}
Thus, the Lyapunov function decays everywhere under the action of the adjoint Liouville operator and is related to its properties and those of the Koopman operator family.
In~\cite{mauroy2013cdc}, it has been shown that global stability of a fixed point can be established through the existence of a set of $C^1$ eigenfunctions of the Koopman operator associated with the eigenvalues of the Jacobian of the vector field, and that Koopman eigenfunctions can be used to define a Lyapunov function and contracting metrics.
A numerical scheme based on a Taylor expansion is proposed to compute stability properties including the domain of attraction~\cite{mauroy2013cdc}.
These ideas have been further extended to examine global stability properties of hyperbolic fixed points and limit cycles and also for non-analytical eigenfunctions~\cite{Mauroy2016ieee}.
\subsection{Observability and controllability}
Characterizing a system's observability and controllability is crucial for designing sensor-based estimation and control strategies.
Linear observability and controllability criteria have been extended to characterize nonlinear systems~\cite{Hermann1977ieeetac,khalil1996book}, e.g. local observability can be tested by constructing an observability matrix using Lie derivatives.
Balanced model reduction has also been extended to nonlinear systems~\cite{scherpen1993scl,lall2002ijrnc,zhang2002automatica}.
Operator-theoretic approaches provide new opportunities for determining observability and controllability properties of high-dimensional nonlinear systems from data using linear techniques.
There has been considerable work on nonlinear observability analysis based on the Perron-Frobenius operator formulation for the Lyapunov measure equation.
Analogously to the observability Gramian for linear systems, which can be obtained as the solution to a Lyapunov equation, the nonlinear equivalent can be obtained from the Lyapunov measure equation.
In~\cite{vaidya2007cdc}, set-oriented methods have been used to compute the observability Gramian for nonlinear systems with output measurements, based on partitioning the phase space into slow and fast time regimes using the Lyapunov measure.
The linear observability Gramian appears here as a special case of the transfer operator based approach.
The degree of observability of a set can further be related to the residence time.
These ideas have been extended to determine sensor and actuator placement for the controlled PDE associated with the infinitesimal generator of the transfer operator~\cite{vaidya2012jmaa}.
In particular, the observability and controllability are characterized in terms of the support of the observability and controllability matrices, respectively, and the finite-time and infinite-time Gramians are formulated in terms of the transfer operators.
The infinite-time controllability is only well defined when the vector field $\mathbf{F}(\mathbf{x})$ is a.e. uniformly stable; thus, an alternative approach to compute the infinite-time controllability set is based on an exponential discounting of the density, which does not rely on the stability property~\cite{sinha2013ecc}.
The terms \emph{coarse observability} and \emph{coarse controllability} have been introduced to characterize the weaker notion obtained from the finite-dimensional approximation~\cite{sinha2016jmaa}.
The Koopman operator perspective may also be used to assess nonlinear observability and controllability.
Generally, two states are indistinguishable if their future output sequences are identical. Thus, a nonlinear system is considered nonlinearly observable if any pair of states are distinguishable~\cite{HermannandKrener(1977)}.
In~\cite{Surana2016nolcos}, nonlinear observability is evaluated in the proposed Koopman-based linear observer design framework, which provides a linear representation of the underlying system in terms of the Koopman eigenfunctions (see Sec.~\ref{Sec:StateEstimation}). In particular, the nonlinear system is nonlinearly observable if the pair $(\mathbf{A},\mathbf{C}^H)$ is observable, which can be determined via the rank condition of the corresponding observability matrix.
These ideas have been applied to study pedestrian crowd flow~\cite{benosman2017ifac}, extended further to input-output nonlinear systems~\cite{Surana2016cdc} resulting in bilinear or Lipschitz formulations, and used to compute controllability and reachability~\cite{goswami2017cdc}.
The observability and controllability Gramians, which are used to examine the degree of observability and controllability, can be computed in the lifted observable space given by a dictionary of functions~\cite{yeung2018acc}.
In this case, the observability/controllability of the underlying system is related to the observability/controllability of the observables.
The underlying assumption is that the state, input, and output are representable in terms of a few Koopman eigenfunctions, which relies on a suitable choice of observable functions to retain the relevant information.
\subsection{Observer synthesis for state estimation }\label{Sec:StateEstimation}
Observer design is critical for sensor-based estimation of high-dimensional systems.
The observer is a dynamical system that produces an estimate $\hat{\mathbf{x}}_k$ of the current state from the history of output measurements of the system.
The Extended Kalman Filter (EKF) is perhaps the most widely used nonlinear observer, benefiting from high performance and a simple formulation based on linearization of the nonlinear state equations.
However, EKFs are sensitive to noise, do not have convergence guarantees, have limited performance for strongly nonlinear systems, and assume Gaussian noise and disturbances~\cite{reif2000ieee,chaves2002ejc}.
Observer design in the Koopman operator framework involves constructing a state estimator for the system
$\mathbf{x}_{k+1} = \mathbf{F} (\mathbf{x}_{k}, \mathbf{u}_{k}),\quad \mathbf{y}_k= \mathbf{H}(\mathbf{x}_{k})$
using a nonlinear transformation of states $\mathbf{x}$ and outputs $\mathbf{y}$ (and inputs for systems with actuation). This results in the following estimator dynamical system
\begin{equation}\label{Eqn:KoopmanObserver}
{\mathbf{z}}_{k+1} = \mathbf{A} {\mathbf{z}}_{k},\quad
{\mathbf{y}}_k = \mathbf{C}^y(\mathbf{z}_{k}),\quad
\mathbf{x}_k = \mathbf{C}^{x}(\mathbf{z}_k),
\end{equation}
where $\mathbf{C}^x$ and $\mathbf{C}^y$ are formed from the Koopman modes and the observer Koopman modes, respectively.
System~\eqref{Eqn:KoopmanObserver} can be obtained from a spectral decomposition of the Koopman operator, where $\mathbf{z}$ represent Koopman eigenfunction coordinates, which has been referred to as Koopman observer form~\cite{Surana2016nolcos} (see also Sec.~\ref{Sec:SystemIdentification_KO}).
The resulting system requires that the state and output lie in the span of a subset of eigenfunctions.
Luenberger or Kalman observers can then be synthesized based on~\eqref{Eqn:KoopmanObserver}, where convergence can be shown under the observability condition based on the Koopman spectral properties, and shown to exhibit increased performance over the extended Kalman filter~\cite{Surana2016nolcos}.
This work has been extended to input-output systems yielding Lipschitz or bilinear observers, resulting from the representation of the unforced system in eigenfunction coordinates~\cite{Surana2016cdc}, and for constrained state estimation based on a moving horizon analogous to model predictive control~\cite{surana2017cdc}.
Building on kernel DMD and the Koopman observer form, a Luenberger Koopman observer has been applied to pedestrian crowd flow to estimate the full state from limited measurements~\cite{benosman2017ifac}.
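A sketch of such a Luenberger observer for the lifted linear system is given below, assuming a linear output map $\mathbf{C}^y$, an observable pair, and distinct desired observer poles; all names are illustrative:
\begin{verbatim}
import numpy as np
from scipy.signal import place_poles

def observer_gain(A, Cy, poles):
    """Gain L so that A - L Cy has the desired (distinct) eigenvalues."""
    return place_poles(A.T, Cy.T, poles).gain_matrix.T

def run_observer(A, Cy, Cx, L, Y, z0):
    """z^_{k+1} = A z^_k + L (y_k - Cy z^_k); state estimate x^_k = Cx z^_k."""
    z, xs = z0.copy(), []
    for y in Y.T:                       # columns of Y are measurements y_k
        xs.append(Cx @ z)
        z = A @ z + L @ (y - Cy @ z)    # linear update in observable space
    return np.array(xs).T
\end{verbatim}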
The probabilistic perspective based on the Perron-Frobenius operator allows one to incorporate uncertainty into observer synthesis.
Particle filters based on Monte Carlo sampling are commonly used for forecasting and estimation under uncertainty by approximating the PDF with a finite number of samples.
However, this approach may suffer from the curse of dimensionality~\cite{snyder2008mwr}, particle degeneracy~\cite{arulampalam2002tsp}, and loss of diversity~\cite{ristic2003book}.
A particular challenge is the accurate estimation of the filter weights.
The Perron-Frobenius and Liouville operator-based approaches have been shown to be advantageous over Monte-Carlo methods~\cite{daum2006non,runolfsson2009acc}.
More recently~\cite{dutta2011jgcd}, a Bayesian formalism involving the Perron-Frobenius operator has been proposed,
which is shown to achieve superior performance over a generic particle filter and bootstrap filter.
This work has been extended to stochastic systems using a Karhunen-Loeve decomposition of the process noise for uncertainty quantification and state estimation~\cite{dutta2012cca,dutta2013acc,dutta2015ieee}.
\subsection{Control design}
Linear models that capture the dynamics of a nonlinear system provide tremendous opportunities for model-based control, making linear techniques readily applicable to nonlinear systems.
In particular, Koopman-based frameworks are amenable to the application of standard linear control theory, in contrast to the probabilistic formulations based on the Perron-Frobenius operator, which require special care.
Koopman models (see Sec.~\ref{Sec:SystemIdentification_KO}) have been increasingly used in combination with optimal control for trajectory steering, e.g. with LQR~\cite{Brunton2016plosone}, SDRE~\cite{Kaiser2017arxiv}, and MPC~\cite{Korda2016arxiv, Peitz2017arxiv,arbabi2018arxiv,KaKuBr2018arxiv}.
Besides theoretical developments, Koopman-based MPC has been increasingly applied in realistic problems, such as power grids~\cite{Korda2018arxiv}, high-dimensional fluid flows~\cite{hanke2018arxiv}, and experimental robotic systems~\cite{abraham2017arxiv,NxR2017video}.
Although the predictive power of Koopman models may be sensitive to the particular choice of observables and the training data, MPC provides a robust control framework that systematically compensates for model uncertainty by taking into account new measurements (see also Sec.~\ref{Sec:ControlExamples}).
Optimal control formulations have also been considered for switching problems~\cite{sootla2016ifac,sootla2017arxiv,Peitz2017arxiv}.
Based on a global bilinearization, the underlying dynamical system can be stabilized using feedback linearization~\cite{goswami2017cdc}.
A different strategy aims to shape Koopman eigenfunctions directly, referred to as eigenstructure assignment, which has been examined for fluid flow problems~\cite{hemati2017aiaa}.
In contrast to pole placement, which aims to design the eigenvalues of the closed-loop system, eigenstructure assignment aims to additionally modify the structure of the eigenfunctions in a desirable way, which requires additional inputs.
More recently, feedback stabilization for nonlinear systems has been achieved via a control Lyapunov function-based approach based on the bilinear Koopman system in eigenfunction coordinates~\cite{Surana2016cdc}.
In particular, a convex optimization problem is formulated to search for the control Lyapunov function for the bilinear Koopman system~\cite{huang2019arxiv}.
In many applications it is of interest to shape the distribution of some quantity, e.g. swarms of UAVs or satellites, spin dynamics in nuclear magnetic resonance (NMR) spectroscopy and imaging~\cite{li2009tac}, dispersions in process industries~\cite{wang2001timc}, oil spill in the ocean, search mission design~\cite{phelps2014automatica}, and beam dynamics~\cite{propoi1994problems,ovsyannikov2006beam}.
Of particular interest is the shaping of the asymptotic pdf~\cite{forbes2004jpc,guo2005automatica,zhu2012pem}, so that the stationary pdf of the closed-loop system coincides with a desired distribution.
This relies on the idea that temporal averages can be replaced by spatial analogues under Birkhoff's ergodic theorem~\cite{lasota2013book}, after which the mean or variance of a quantity can be controlled by modifying the stationary pdf of the controlled system.
Optimal control approaches have also been proposed to modify the pdf in the presence of uncertainty in initial conditions or parameters~\cite{ross2016acc,phelps2016sjco}.
The process of driving one distribution to another one is further intimately related to Monge-Kantorovich optimal transport theory~\cite{monge1781memoire,kantorovich1942dan,elamvazhuthi2016arxiv,halder2014aac}.
In~\cite{halder2014aac}, optimal transport theory has been used to solve a finite-horizon control problem to achieve a desired distribution, where the optimal control vector field is estimated by solving the associated Liouville equation over the finite horizon. Set-oriented and graph-based methods have also been used to study controllability and optimal transport~\cite{elamvazhuthi2016optimal}.
Set-oriented representations of the dynamics (see Sec.~\ref{Sec:SystemIdentification_PFO}) are amenable to optimal control approaches based on dynamic programming~\cite{Bertsekas2005book} for discrete-time, discrete-state, and discrete-action problems. The discretized control problem can be interpreted as a Markov decision problem (MDP).
The optimization problem for the MDP can then be posed as a linear program, which has been demonstrated for communication networks~\cite{alpcan2006cdc}.
An optimal control formulation in terms of an MDP has been formulated for the discretized Liouville equation and demonstrated for navigation in a maze~\cite{kwee2001ab}.
Similar ideas have been used for controlling communication networks~\cite{alpcan2006cdc} and optimizing the asymptotic pdf over the velocity field in high-dimensional fluid flows~\cite{kaiser2017tcfd}.
The inverse Frobenius-Perron problem (IFPP)~\cite{bollt2000ijbc} formulates the control problem for the stationary pdf as a perturbation problem, which aims to find a perturbed transition matrix and the associated perturbed dynamics that give rise to the target density.
The Markov transition matrix approximating the Perron-Frobenius operator can also be interpreted as a graph, where the nodes correspond to the symbols associated with the discrete states and the edge weights are the transition probabilities, facilitating the application of graph-theoretic methods for optimal or minimal-path searches, e.g. in the context of IFPP~\cite{bollt2001ijbc}.
Stabilization problems have been successfully solved via the Lyapunov measure equation~\cite{vaidya2010ieee,raghunathan2014ieee}, where the problem can be formulated as a linear program, extended for stochastic systems~\cite{das2017acc}, and applied to high-dimensional fluid flows~\cite{vaidya2009cdc}.
A different perspective is assumed in~\cite{froyland2016siam}, where a convex optimization problem is formulated to determine local perturbations in the context of optimal mixing.
In particular, the stochastic kernel is designed as a perturbation to the deterministic kernel of the Perron-Frobenius operator, which controls the diffusion and thus the mixing properties of the underlying system. A similar problem has been studied based on the infinitesimal generator~\cite{froyland2016arxiv}, but resulting in a convex quadratic program.
More recent work~\cite{antown2018arxiv} considers the perturbation of the stochastic part of the dynamical system and provides a solution in closed form.
The success of MPC has also driven its use for controlling probability distributions.
A nonlinear MPC problem using set-oriented representations~\cite{ohsumi2010ifac} suffers, however, from the curse of dimensionality.
Thus, the work has been extended using Monte-Carlo and particle filter methods to estimate the pdf, which is referred to as particle MPC~\cite{ohsumi2011ifac}. In particle MPC the Liouville equation enters the MPC formulation as a constraint to control the mean and variance of the state.
It is worth noting that data-driven PDF prediction using neural nets is increasingly explored due to the faster evaluation of the PDF compared with numerically integrating sample points. The loss metric, i.e.\ the error between the true PDF and the neural-net estimate, can be computed using automatic differentiation~\cite{Nakamura2018siam}, building on physics-informed neural nets via PDE constraints~\cite{raissi2017arxiv_a}, e.g. the Liouville equation.
\subsection{Sensor and actuator placement}
Determining suitable locations of sensors and actuators for data collection and decision-making is a crucial task in any real-world application. Generally, sensor and actuator placement may refer to selecting spatial locations or specific variables, which are used synonymously here.
Operator-theoretic approaches provide a significant opportunity for placement, as they capture global properties, such as meta-stable states associated with persistent dynamics, and they generalize to nonlinear systems, e.g. for estimating nonlinear observability and controllability.
Selecting optimal sensor and actuator locations amounts to a combinatorially hard problem.
Compressed sensing~\cite{candes2006ieee,Donoho2006ieeetit}, sparsity-promoting algorithms employing an $l_1$-penalty term, and sparse sampling techniques such as gappy POD~\cite{everson1995karhunen} have played an increasingly important role in the context of sensor/actuator selection~\cite{Manohar2017csm}.
In particular, the underlying low-rank structure of the data can be exploited for efficient sensing.
Thus, compressed sensing and sparsity-promoting algorithms have also been increasingly combined with POD~\cite{Bai2014aiaa} and DMD~\cite{Jovanovic2014pof,Proctor2014epj,Brunton2015jcd,bai2017aiaa}.
Cluster-based reduced-order models (CROM)~\cite{Kaiser2014jfm}, which provide a coarse approximation of the Perron-Frobenius operator for PDEs, have been combined with the sparse sensor optimization for classification (SSPOC)~\cite{brunton2016siam} algorithm and with pivot locations from the QR factorization (see e.g. discrete empirical interpolation methods (DEIM)~\cite{drmac2016siam}) applied to the dominant POD modes, in order to learn optimized sensor locations tailored to a specific model~\cite{kaiser2018jcp}. Probabilistic dynamics are then preserved if the resulting sensing matrix satisfies the restricted isometry property from compressed sensing.
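The QR-pivot step itself admits a compact sketch; given a matrix of dominant modes, the pivoted QR factorization returns a greedy ranking of measurement locations (illustrative, following the DEIM-type strategy referenced above):
\begin{verbatim}
import numpy as np
from scipy.linalg import qr

def qr_sensors(Psi, n_sensors):
    """Greedy sensor rows from column-pivoted QR of the mode matrix.

    Psi : (n, r) dominant POD/DMD modes; returns row indices to sample.
    """
    _, _, piv = qr(Psi.T, mode='economic', pivoting=True)
    return piv[:n_sensors]
\end{verbatim}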
Recently~\cite{manohar2017arxiv}, sensor placement has been explored for multi-scale systems using multi-resolution DMD and DEIM, where sensor locations are tailored to intermittent or transient mode activity.
The control oriented framework DMD with control~\cite{Proctor2016arxiv} has been unified with compressive DMD~\cite{Brunton2015jcd} for compressive system identification~\cite{bai2017aiaa} to identify low-order models from limited input-output data and enable reconstruction of the high-dimensional, interpretable state. This framework, and similarly eDMDc, provide opportunities for optimized placement of sensors and actuators using linear systems techniques.
A greedy submodular approach was proposed in~\cite{Surana2016slides} based on the Koopman observer form and a mutual information or observability based criterion.
Sensor placement and actuator placement can also be performed by leveraging the generator PDEs of the Perron-Frobenius and Koopman families.
Controllability and observability Gramians can be generalized for nonlinear systems based on the (controlled) Liouville and adjoint Liouville equation and subsequently used for sensor and actuator placement by maximizing the support (or the $l_2$ norm) of the finite-time Gramians~\cite{vaidya2012jmaa,sinha2013ecc,sinha2016jmaa}.
The approach utilizes set-oriented methods, where sensors/actuators are placed in certain cells, and the location can be optimized by solving a convex optimization problem~\cite{sinha2013ecc}.
A greedy heuristic approach based on these ideas using set-oriented methods is proposed in~\cite{fontanini2016be}, which further investigates different criteria such as maximizing the sensing volume (sensor coverage), response time and accuracy (relative measure transported to the sensor in finite time) and incorporating spatial constraints.
The framework has been further extended to incorporate uncertainty~\cite{sharma2018arxiv}.
Balanced truncation has recently been used for efficient placement of sensors and actuators to simultaneously maximize the controllability and observability Gramians~\cite{Manohar2018arxivB}, which may be promising for Koopman-based Gramians.
\section{Conclusions \& Future directions}
\label{Sec:Conclusions}
In this chapter, we have explored a number of applications of the Koopman and Perron-Frobenius operators for the control of nonlinear systems.
Although operator-theoretic analysis has a long history in dynamical systems and control, there has been considerable renewed interest in the past decade.
Much of this recent progress has been driven by the increased availability of data, along with advanced machine learning and optimization algorithms.
These data-driven approaches are providing approximations to the Koopman and Perron-Frobenius operators that are then used for control.
We have introduced a general control formulation and discussed how this may benefit from a coordinate transformation where nonlinear dynamics become approximately linear.
Considerations such as stability, controllability and observability, controller and observer design, and sensor/actuator placement were discussed in the context of operator-theoretic approaches, highlighting recent theoretical and applied advances.
We also discussed a number of important embedding strategies, such as extended DMD, delay coordinates, and eigenfunctions, which may all be used for these control objectives.
Finally, we provided an example that leverages these various embeddings for model predictive control, showing that MPC is remarkably robust to model uncertainty.
Despite the tremendous recent progress in operator-based control, there are a number of limitations that must be addressed in the future.
Obtaining accurate representations of the Koopman operator from data is a key enabler of model-based control.
However, identifying a coordinate system that is closed under the Koopman operator is notoriously difficult, as the eigenfunctions may have arbitrarily complex representations.
Fortunately, emerging techniques in deep learning are providing powerful representations for these linearizing transformations~\cite{mardt2017arxiv,otto2017arxiv,takeishi2017anips,lusch2017arxiv,yeung2017arxiv,Li2017chaos,wehmeyer2018arxiv}.
Another key challenge is the existence of a continuous eigenvalue spectrum in chaotic dynamical systems, which complicates matters considerably.
Recent methods in delay coordinates~\cite{Susuki2015cdc,Brunton2017natcomm,Arbabi2016arxiv,susuki2017arxiv} and tailored neural network architectures~\cite{lusch2017arxiv} are making headway, although this is far from resolved.
However, at this point, these advanced embedding techniques have not been adapted for control design, which is an exciting avenue of ongoing work.
Although much of the focus in data-driven Koopman and Perron-Frobenius theory has been on obtaining increasingly accurate approximate embeddings, it may be that improved models have marginal benefit in control design.
Certainly improved models are useful for prediction and estimation, but advanced control techniques, such as model predictive control, are remarkably robust to lackluster models.
In many cases, it may be that a simple DMDc model works nearly as well as a sophisticated model based on the latest embedding techniques.
However, future operator-based controllers will need to be certified, requiring guarantees on the model fidelity and a rigorous quantification of errors and uncertainties.
Importantly, the model uncertainty is intimately related to the quality and quantity of the training data, the richness of the dynamics sampled, and the basis used for regression.
Future efforts will undoubtedly continue to incorporate known or partially known dynamics, structure, and constraints in the data-driven regressions, improving both the models and the uncertainty bounds.
The current state of the art in operator-theoretic control, with a compelling set of success stories and a long list of future work, makes it likely that this field will continue to grow and develop for decades to come.
It is inevitable that these efforts will increasingly intersect with the fields of machine learning, optimization, and control.
Advances in machine learning and sparse optimization will continue to drive innovations, both for model discovery and for sensor placement, which are closely related to Koopman and Perron-Frobenius analysis.
Likewise, concrete successes in the control of nonlinear systems will continue to motivate these advanced operator-based approaches.
\begin{acknowledgement}
EK gratefully acknowledges support by the ``Washington Research Foundation Fund for Innovation in Data-Intensive Discovery" and a Data Science Environments project award from the Gordon and Betty Moore Foundation (Award \#2013-10-29) and the Alfred P. Sloan Foundation (Award \#3835) to the University of Washington eScience Institute, and funding through the Mistletoe Foundation.
SLB and JNK acknowledge support from the Defense Advanced Research Projects Agency (DARPA contract PA-18-01-FP-125). SLB acknowledges support from the Army Research Office (W911NF-17-1-0306 and W911NF-17-1-0422). JNK acknowledges support from the Air Force Office of Scientific Research (FA9550-19-1-0011).
\end{acknowledgement}
\bibliographystyle{plain}
\section{Introduction}
The integrated spectra of galaxies are notoriously difficult to interpret
in terms of their overall metallicity and age (cf. the excellent review by
\cite{CWB96}). Every part of the spectral energy distribution (SED) of a
stellar system yields a different clue to a complex puzzle. In the
space-ultraviolet part of these SEDs we can unambiguously study the
contribution of the hottest stars in stellar populations. The far-UV ($\rm
\lambda \lambda 1250 - 2000 \AA$) and mid-UV ($\rm \lambda \lambda 2000-3300
\AA$) can most easily be studied in galaxies at high--redshift. Certainly, a
prerequisite towards understanding the UV SEDs of galaxies at high redshift is
to understand what we see in the UV SEDs of nearby stellar systems.
Towards the goal of understanding the UV SEDs of stellar populations,
several of us have used the International Ultraviolet Explorer (IUE) to
assemble a medium resolution (6$\rm \AA$) far-UV and mid-UV library
of the spectral energy distributions of stars (\cite{FOBW90}; hereafter FOBW;
\cite{FOBW92}; \cite{lietal98}). In Fanellli et al. (1990, 1992) we found
that the mid-UV color and absorption line indices have dependences on
temperature and metallicity which are strong and distinct from those derived
from spectra longward of 3300$\rm \AA$. In particular, Fanelli et al. (1992)
found that absorption line measures using the usual index method (cf.
\cite{O73}; \cite{F73}) yield mid-UV indices which are primarily sensitive to
stellar temperature but insensitive to metallicity. One index in particular
(MgII at 2800$\rm \AA$) is found to be {\it inversely} metallicity-sensitive
for F--G dwarf stars, in that it gets weaker with increasing metallicity (cf.
FOBW; \cite{smithetal}). In contrast, the mid-UV colors, which measure
mean line blanketing, are found to be the most sensitive to metallicity
differences among stellar populations (FOBW).
Several groups have studied the far-UV and mid-UV SEDs of early--type
galaxies and spiral galaxy bulges (e.g., \cite{BBBFL88}; \cite{BFD95}), as
well as of M31 globular clusters (e.g. \cite{CCBFK}; \cite{CB88};
\cite{CKCF90}). From these studies we have learned that early-type galaxies
contain a hot stellar population (T $>$ 20000K) whose amplitude, relative to
the optical, is well correlated with stellar population absorption line
strengths. Various studies indicate that this hot stellar population is
important in galaxy spectra below $\rm \sim 3200 \AA$; consists mainly of hot,
metal rich, extreme horizontal branch stars and their post-HB descendents;
and is very sensitive to the age and abundance of their parent population
(e.g. \cite{GR90}, \cite{BFD95}, \cite{dorman95}, \cite{TCBF96}, and
\cite{yi97}).
The integrated light of Galactic globular clusters can usually be
interpreted in terms of combinations of spectra of well understood field star
populations near the Sun (cf. \cite{christensen}; \cite{rose94};
\cite{rosedeng98}). However, integrated optical line indices of M31 globular
clusters exhibit anomalies which seem to distinguish them both from local
field stars and Galactic globular clusters (cf. \cite{vdB69}; \cite{BFGK84};
\cite{trip89}; \cite{BH90}), especially concerning CNO-related indices (such
as CN).
In the present paper we report on Hubble Space Telescope/Faint Object
Spectrograph (HST/FOS) observations with small ($\le 1\arcsec$) apertures of
the $\rm 2300 - 4800 \AA$ SEDs of four M31 globular clusters and six bright
elliptical galaxies. The galaxies chosen have old stellar populations that span
the range known from optical observations (cf. \cite{FWBDDLBT},
\cite{gonzalez}). The M31 globular clusters were chosen to span a range of CN
line strengths; all four have had their color-magnitude diagrams determined by
HST imaging (cf. \cite{fusipec96}). We combine these data with IUE-obtained
mid-UV spectra of Galactic globular clusters and the dwarf elliptical galaxy
M32. In \S 2 we discuss the FOS observations and present the observed spectra.
In \S 3 we present a summary of the Galactic globular clusters and the M32 IUE
data, which are detailed elsewhere. \S 4 details how the relationship between
mid-UV line indices and the 2600-3000 color compare for the galaxies,
M31 globular clusters, IUE data for Galactic stars and IUE data for
Galactic globular clusters. \S 5 makes the analogous comparisons for the
near--UV line indices. The various differences and similarities seen among
the line stength-color relations for these stellar populations are discussed
in \S 6. The main results of this paper are summarized in \S 7, and an
Appendix discusses how the HST/FOS acquired each object for its science
aperture.
\section{Observations and Reductions}
\subsection{Target Selection}
Of the four M31 globular clusters chosen for study here, three (MII = K1,
MIV = K219, and K280) had been previously studied in integrated light by
Burstein et al. (1984) while a fourth (K58) comes from the integrated light
study of Brodie \& Huchra (1990). [The ``K'' names of the globular clusters
are taken from the listing of \cite{SKHV77}; ``M'' names from \cite{ME53};
``Bol'' names from \cite{batt80} and \cite{batt87}.]
These globular clusters span the range from the most metal-poor (MIV) to the
most metal-rich (K58) M31 clusters known.
The six galaxies were chosen to span a wide range of known stellar populations
among old stellar systems as inferred by Gonzalez (1993). Gonzalez bases his
estimates of galaxy ages on analyzing the strengths of $\rm H\beta$, mean Fe
indices and the optical $\rm Mg_2$ index with the galaxy population models of
Worthey (1992). (Here, we explicitly state ``the optical $\rm Mg_2$ index,''
so as to distinguish from the UV-based Mg-indices.) In his reanalysis of
these data using the more up-to-date models of Charlot et al. (1996), Trager
(1996) finds the rank-ordering of galaxy ages to be the same as Gonzalez. We
note that while both Gonzalez and Trager provide independent estimates of age
and metallicity for their galaxies, in our samples age and metallicity (at
least, as traced by strong absorption lines) are correlated (stronger lines $\propto$
older ages).
The galaxies (in order of decreasing optical Mg$_2$ strength) are: NGC~7619
(Mg$_2$ = 0.358), NGC~3608 (0.329), NGC~6127 (0.322), NGC~5831 (0.303),
NGC~3605 (0.241), NGC~5018 (0.210) and M32 (0.198). NGC~5018 is added to this
sample, although not specifically observed by Gonzalez, as it has UV/optical
colors and optical line strengths similar to that of M32, yet has the
luminosity of a giant elliptical (\cite{BBB93}). Of this sample, only M31 and
M32 have far--UV SEDs available (both from IUE; \cite{BBBFL88}). On the scale
defined by the Worthey models, ages for these objects range from above 12 Gyr
at the oldest, to 3--5 Gyr at the youngest, or Mg$_2$ = 0.35 at the
strongest-lined and 0.20 at the weakest-lined. We also note that none of these
objects is expected to have a significant stellar population component of age
less than 1 Gyr (cf. \cite{gonzalez}; \cite{Trager1}).
\subsection{Observations}
FOS observations for the globular clusters were originally scheduled under
HST/GO program P2298 for Cycle 1, but were obtained in Cycles 1, 2 and 4. FOS
observations for the galaxies were taken under HST/GO program P6585 in Cycle~6.
Table~1 contains the log of observations, with the HST archival name for each
spectrum given for easy reference. Also given in Table~1 are J2000 positions
for the objects observed, as well as their optical Mg$_2$ values and
the radial velocities we used for them. The Mg$_2$
measures come from Burstein et al. (1984) for MII, MIV and K280. The Mg$_2$
value for K58 is transformed from that listed by Brodie \& Huchra (1990) (by
applying a net offset of +0.026 mag to bring their measures into accord with
those of Burstein et al.). The Mg$_2$ measures for the galaxies come from
Trager et al. (1998). All Lick Mg$_2$ indices were measured through a
1.4$''$ $\times$ 4$''$ slit (cf. \cite{Trager2}), so sample the clusters and
galaxies at angular resolutions comparable to the HST observations.
The radial velocities for the M31 globular clusters come from Huchra, Brodie
\& Kent (1991), while those for the galaxies come from de Vaucouleurs et al.
(1991).
Given the diffuse nature of the objects studied, the FOS aperture for each
observation had to be centered via the FOS ACQ/PEAK-UP procedure to obtain
the required positional accuracy within the nominal 1$''$ circular aperture
(for Cycles 1 and 2, pre-COSTAR, which becomes 0.86$''$ for Cycles 4 and 6,
post-COSTAR). The manner and accuracy of target acquisition are detailed
in Appendix~A.
The M31 globular clusters were observed with three FOS gratings: G130H, G270H
and G400H. The G130H grating was used, despite its limited wavelength
coverage, as it was suggested by previous IUE observations (e.g. \cite{CCBFK})
that some of the M31 globular clusters might contain a substantial hot stellar
population. Although we took long integrations with the G130H grating, no
significant signal in the full 1300--1600$\rm \AA$ bandpass was measured for
any of the M31 clusters (see discussion below). Thus, we do not find any very
hot stellar population in these clusters that falls within our aperture (of
diameter 2.9 pc at the distance of M31). The combination of the G270H and G400H
gratings gives us continuous wavelength coverage from 2300$\rm \AA$ to 4800$\rm
\AA$, with essentially no observational overhead.
As the galaxy spectra were obtained after the M31 globular cluster spectra, we
knew that we would not likely get any scientific information from the G130H
grating in a reasonable number (i.e., $< 20$) of orbits, so it was decided to
concentrate only on getting accurate spectra with the G270H and G400H
gratings. It typically took 6 orbits to obtain the spectra we show here for
the galaxies, including 3.5 orbits to do the ACQ/PK procedure. Obtaining
accurate UV spectra of diffuse objects with HST/FOS was expensive in time.
\subsection{Data Reduction}
Our data reduction procedures followed those recommended by the HST/FOS Data
Products Guide (\cite{DPG}; hereafter DPG). The data sent to us were
processed by Space Telescope Science Institute with the Routine Science Data
Processing software, which produces files of fluxes, wavelengths, errors, data
qualities as well as raw data. As a check, we ran the raw data through our
local IRAF/STSDAS programs and found no differences between the HST-provided
data and our re-reduced data.
Flux and wavelength files were combined as recommended by the DPG, including
resampling and Hanning smoothing of the data. Since the observed spectral
resolution is finer than the spectral resolution permitted by the nominal
1$''$ aperture, the spectra were further smoothed by a boxcar window of
sufficient size to produce spectral resolution of $\rm 5.5 \AA$ for the G270H
grating and $\rm 8.8 \AA$ for the G400H grating. The G270H spectra of two
galaxies (NGC~6127, NGC~5018) are noisier than for the other objects in this
sample. Examination of the error spectrum for NGC~6127 shows that the
apparent ``emission'' line at the position of the Mg 2800 line is an artifact
resulting from low signal-to-noise and a diode in the FOS that had just gone
bad.
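As an illustration of the smoothing step only, a minimal sketch in Python is
given below; the 11-pixel window width and the input file name are
hypothetical, and in practice the width would be chosen to yield the
$\rm 5.5 \AA$ and $\rm 8.8 \AA$ resolutions quoted above.
\begin{verbatim}
import numpy as np

def boxcar_smooth(flux, width_pix):
    # Top-hat smoothing: width_pix is an odd window size in pixels,
    # chosen so the output resolution matches the aperture-limited
    # resolution (5.5 A for G270H, 8.8 A for G400H).
    kernel = np.ones(width_pix) / width_pix
    return np.convolve(flux, kernel, mode="same")

# Illustrative only: degrade a spectrum sampled on a fine pixel grid
flux = np.loadtxt("g270h_flux.txt")   # hypothetical file name
smoothed = boxcar_smooth(flux, 11)    # illustrative window width
\end{verbatim}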
As stated previously, we only were able to obtain upper limits for the
observations of the M31 globular clusters obtained with the FOS G130H grating.
Integrating the flux observed with this grating from 1300 to 1600$\rm \AA$ we
obtain: $\rm 9.3 \pm 19 \times 10^{-17}$ ergs s$^{-1}$ cm$^{-2}$ for MII,
$\rm 0.8 \pm 19 \times 10^{-17}$ ergs s$^{-1}$ cm$^{-2}$ for K58 and $\rm 1.3
\pm 18 \times 10^{-17}$ ergs s$^{-1}$ cm$^{-2}$ for MIV. The FOS shutter did
not open for the observation of K280, so we use this observation to give us
the background rate.
All spectra are dereddened, using the values of E(B--V) given in Table~1 and
the reddening law of Cardelli et al. (1989), updated by O'Donnell (1994) for
the near-UV. E(B--V) values are taken from Burstein \& Heiles (1984) for the
galaxies and for M31 (assuming the clusters are reddened only by foreground
Milky Way extinction). Spectra of all objects were then Doppler-shifted to
their rest wavelengths using the radial velocities given in Table~1. The
de-reddened fluxes are given at rest wavelengths, and will be made available
for each object in our sample in ASCII format via various electronic means,
including: anonymous ftp; electronic preprint, and, eventually, the
Astronomical Data Center.
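A short sketch (Python) of the two corrections just described follows; the
function supplying $A(\lambda)/E(B-V)$ for the Cardelli et al./O'Donnell
reddening law is treated as given, since its coefficients are not reproduced
here, and all names are illustrative.
\begin{verbatim}
import numpy as np

def deredden(wave, flux, ebv, a_over_ebv):
    # a_over_ebv: callable returning A(lambda)/E(B-V) for the
    # adopted reddening law (treated here as a supplied function).
    a_lambda = a_over_ebv(wave) * ebv       # extinction in magnitudes
    return flux * 10.0 ** (0.4 * a_lambda)  # undo the dimming

def to_rest_frame(wave_obs, v_radial_km_s):
    # Non-relativistic Doppler shift to rest wavelengths.
    c = 2.99792458e5                        # speed of light, km/s
    return wave_obs / (1.0 + v_radial_km_s / c)
\end{verbatim}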
The resulting spectra are shown in linear flux units in Figures~1a and 1b for
the globular clusters and in Figures 2a and 2b for the galaxies, at their rest
wavelengths. Figures 3, 4a and 4b show the full HST/FOS spectra of the
clusters and the galaxies (plus the IUE mid-UV spectrum of M32) in log flux
units. Spectra from the G270H grating and the G400H grating are shown
separately in parts a and b of Figures 1 and 2 for clarity. A test of the
zero point accuracy of the relative fluxes obtained from the two spectra can
be derived from the 70$\rm \AA$ overlap in the region 3200 -- 3270 $\rm \AA$.
As shown in Figures 3 and 4, the relative fluxes agree within the nominal
value of 5\% we are led to expect by the FOS Handbook.
\subsection{Line Indices}
Our spectra cover a range of UV and optical wavelengths for which no single
unified set of absorption line and continuum indices has yet been defined.
Rather, our choice of indices and colors come from three separate studies: a)
The range 2300 to 3100 $\rm \AA$ has the mid-UV indices defined by FOBW. b)
The range 3100 to 3800 $\rm \AA$ has several indices defined by Davidge \&
Clark (1994), including the very useful NH molecular band at 3360$\rm \AA$. c)
The range 3800 to 4800 $\rm \AA$ is covered by Lick Observatory photometric
systems (\cite{worthey94}). A list of the definitions of the line strength
indices measured for our spectra, and the source of each index, is given in
Table~2.
To determine the index values, we use the definitions of spectral
indices given by Gonzalez (1993):
\begin{equation}
\rm I_{a} = \int_{\lambda_{c_{1}}}^{\lambda_{c_{2}}}
(1-\frac{S(\lambda)}{C(\lambda)})\ d\lambda
\end{equation}
\begin{equation}
\rm I_{m} = -2.5 \log_{10} \left[ \frac{1}{\lambda_{c_{2}}-\lambda_{c_{1}}}
\int_{\lambda_{c_{1}}}^{\lambda_{c_{2}}}
\frac{S(\lambda)}{C(\lambda)} \ d\lambda \right]
\end{equation}
\noindent where $S(\lambda)$ represents the object flux at each wavelength
within the central bandpass, $C(\lambda)$ represents the pseudo-continuum flux
for $S(\lambda)$ within the central bandpass, and $\lambda_{c_{1}}$ and
$\lambda_{c_{2}}$ are the limits of the central bandpass. All fluxes in this
paper are computed per unit wavelength. Equivalent width indices computed
from Equation 1 are measured in angstroms; indices computed from Equation 2
are measured in magnitudes. Equation 2 is essentially identical to the line
index definition of FOBW, differing negligibly ($\ll 0.001$ mag) in practice.
The pseudo-continuum within the central bandpass is determined by linear
interpolation between the red and blue pseudo-continuum sidebands defined for
each absorption feature. The wavelength ranges of the red and blue sidebands
and the central bandpasses for the absorption features measured in this paper
are given in Table~2. The spectral break indices and the intermediate band
color index 2600-3000 are defined as the ratio of the mean fluxes within the
appropriate bands converted to magnitudes. To account for uncertainties in
the radial velocities and in the wavelength scales used, each spectral line
measurement was adjusted interactively. We adjusted the continuum and feature
passband positions (typically by less than 2 pixels) by comparing them with
their placement given in the papers that define these line indices. The
spectral indices and colors for all observations are given in Tables~3a,b.
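In code, the measurement defined by Equations 1 and 2 together with the linear
pseudo-continuum amounts to the following sketch (Python); the bandpass tuples
would come from Table~2, and this is an illustration rather than the actual
reduction software used.
\begin{verbatim}
import numpy as np

def mean_flux(wave, flux, band):
    m = (wave >= band[0]) & (wave <= band[1])
    return flux[m].mean()

def line_index(wave, flux, blue, center, red, magnitudes=False):
    # Pseudo-continuum: straight line through the mean fluxes of the
    # blue and red sidebands, anchored at the sideband midpoints.
    cb, cr = mean_flux(wave, flux, blue), mean_flux(wave, flux, red)
    lb, lr = 0.5 * sum(blue), 0.5 * sum(red)
    m = (wave >= center[0]) & (wave <= center[1])
    cont = cb + (cr - cb) * (wave[m] - lb) / (lr - lb)
    ratio = flux[m] / cont
    if magnitudes:                                   # Equation 2
        width = center[1] - center[0]
        return -2.5 * np.log10(np.trapz(ratio, wave[m]) / width)
    return np.trapz(1.0 - ratio, wave[m])            # Equation 1
\end{verbatim}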
Errors were determined from the error spectra provided by the pipeline
reduction process. These error spectra are computed by propagating the errors
at each point with the assumption that the dominant noise in the raw data is
photon noise. We then determine the mean error based on photon statistics
($\rm \delta S(\lambda)$ or $\rm \delta C(\lambda)$ for each bandpass and
propagate those errors through the previous equations, with the results:
\begin{equation}
\rm \delta I_a = \frac{(\lambda_{c_{2}} - \lambda_{c_{1}})}{C(\lambda)}
\sqrt{\left[\delta S(\lambda)\right]^2 +
\left[\frac{\delta C(\lambda) \times S(\lambda)}{C(\lambda)}\right]^2 } \, ,
\end{equation}
\begin{equation}
\rm \delta I_{mag} = \frac{2.5}{\ln(10)} \sqrt{\left[\frac{\delta
S(\lambda)}{S(\lambda)}\right]^{2} + \left[\frac{\delta
C(\lambda)}{C(\lambda)}\right]^{2}} \, , \;\; \mbox{and}
\end{equation}
\begin{equation}
\rm \delta Break = \frac{2.5}{\ln(10)} \; \frac{\delta S(\lambda)}{S(\lambda)} \, .
\end{equation}
\noindent The errors for our measured indices are given in Table~3 in terms
of either magnitudes (for broad features and breaks) or equivalent widths
(in $\rm \AA$).
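A sketch of the corresponding error propagation (Python), using band-averaged
values $S$, $\delta S$, $C$, $\delta C$ as in Equations 3--5; names and
structure are illustrative only.
\begin{verbatim}
import numpy as np

def index_errors(S, dS, C, dC, width):
    # width: central bandpass width in Angstroms.
    dIa = (width / C) * np.hypot(dS, dC * S / C)         # Eq. 3
    dImag = 2.5 / np.log(10) * np.hypot(dS / S, dC / C)  # Eq. 4
    dBreak = 2.5 / np.log(10) * (dS / S)                 # Eq. 5
    return dIa, dImag, dBreak
\end{verbatim}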
\section{Supplemental IUE Data}
To supplement the HST observations, we have also included IUE-obtained data
for the galaxy M32 (from \cite{Buson90}) and for 24 Galactic globular
clusters. The IUE globular cluster data come from the study of Rose \& Deng
(1998). Rose and Deng surveyed the IUE database and identified 24 Galactic
globular clusters which have been observed by IUE in their core regions, and
which have good quality mid-UV IUE low-resolution spectra. In the case of
M32, we have derived all of the mid-UV line indices and 2600--3000 color
(listed in Table~3a) from the data of Buson et al. In the case of the Galactic
globular clusters, Rose and Deng derived only four of the mid-UV line indices
(Mg I, Mg II, the 2609/2660 and 2828/2921 line breaks) plus the 2600--3000
color. The data for M32 appear in a subset of the line strength-color
relations we compare to those of stars, while the Galactic globular cluster
data appear only when we intercompare the line strengths of galaxies, M31
globular clusters and Galactic globular clusters. The IUE observations act as
a good benchmark against which to differentially compare the HST/FOS
observations of M31 globular clusters and galaxies.
Details of the data reduction of IUE data are given in Rose \& Deng (1998). In
summary, most of the Galactic clusters have only a single IUE observation, but
for the ones with multiple observations, the coadded spectra were used. We
list the 24 globular clusters in Table~4, together with their IUE-based line
strength measures from Rose \& Deng, estimates of [Fe/H] taken from Harris
(1997), and $\rm Mg_2$ measurements taken from Burstein et al. (1984). As the
IUE aperture of $10\arcsec \times 20\arcsec$ covers a fairly large region of
the cores of most clusters, we believe the spectra are representative of the
integrated mid-UV light of these cores. All of the IUE spectra were
individually corrected for interstellar extinction by Rose and Deng using the
reddening curve of Cardelli et al. (1989), as modified by O'Donnell (1994),
with E(B-V) values from Peterson (1993).
\section{Stellar Populations in the Mid-UV}
\subsection{Choosing an Appropriate Mid-UV Color}
If we are to correctly interpret the absorption line measurements in the
mid-UV, it is important to compare these measurements to a UV measure of
temperature. As shown in FOBW, the 2600--3000 color is a reasonable
compromise, in that: a) it can be self-consistently defined for mid-UV
spectra, as well as reliably measured; and b) it is sensitive to the stellar
population mix in the mid-UV. Within that stellar population mix we cannot
ignore the contribution of the hot component which dominates the far-UV in
elliptical galaxies. As shown in Burstein et al. (1988), this ``UVX''
component can contribute substantial flux at wavelengths as long as 3200 $\rm
\AA$ in some cases. UVX effects are clearly present in our galaxy sample.
They are seen, for instance, in the anticorrelation between the 2600--3000
colors and the optical Mg$_2$ indices for these galaxies (cf. the
data in Table~1 with those in Table~3a).
Tables~3a,b give reliable 2600--3000 colors for our program
objects. Similarly, the stellar library from IUE (FOBW) has directly
measured values of 2600--3000. Of the two other stellar libraries used here,
it is profitable to estimate 2600--3000 colors only for the line indices of
Davidge \& Clark (1994), as these are closest in wavelength to the mid-UV. We
list the measurements for additional indices from the Lick system in
Tables~3a,b for reference only. We display our results in the form of
(logarithmic) index-index diagrams below.
There are two main issues to be addressed here. First, do the galaxies and the
two globular cluster systems differ significantly from each other in any index
comparisons? We will see that they do. Second, is it possible to synthesize
the composite systems (clusters or galaxies) by suitable combinations of the
Galactic stellar library objects? We do not attempt actual spectral synthesis
modeling in this paper, so it is important to understand where potential
combinations would appear in the diagrams. A given index formed from any
combination of stars is the weighted sum of the indices of its components,
where the weight $w_i$ for component $i$ is $0 \leq w_i \leq 1.0$ and $\Sigma
w_i = 1.0$. However, the weights will change with wavelength. This means that
unless the two indices involve a common normalizing wavelength, a combination
formed from two components will not fall exactly on the straight line
connecting them in a ratio-ratio diagram, and their relationship will be
further distorted in the logarithmic index diagrams shown here.
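A toy numerical example (with invented fluxes, for illustration only) makes the
point: in a two-component mix the flux weights differ between the two features,
so the composite departs from the straight line joining the components.
\begin{verbatim}
import numpy as np

# Continuum fluxes of two hypothetical components at two features
C = np.array([[1.0, 4.0],     # component 1
              [3.0, 1.0]])    # component 2
r = np.array([[0.9, 0.5],     # S/C within each feature bandpass
              [0.6, 0.8]])

for w in np.linspace(0.0, 1.0, 5):
    wts = np.array([w, 1.0 - w])
    mix = (wts @ (C * r)) / (wts @ C)  # flux-weighted composite
    straight = wts @ r                 # naive linear interpolation
    print(w, mix, straight)            # mix != straight in general
\end{verbatim}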
As discussed above, most of the galaxies have significant UVX contamination at
wavelengths below $\rm 3200 \AA$ but the M31 globular clusters do not. This
means that separations between clusters and galaxies in the index-index
diagrams could be the result of feature dilution or bluing caused by the UVX
component (which has a smooth energy distribution similar to that of an early
B-type star, with a 2600--3000 color of $\sim -0.5$). However, such
separations are not necessarily only the result of UVX contamination, and we
will try to distinguish between these two cases. On the other hand, those
diagnostic diagrams in which the galaxies and M31 clusters are superposed but
separated from the locus of Galactic library stars are indicative of other
explanations for the separation: e.g. differences in chemical abundances.
\subsection{Elliptical Galaxies and Globular Clusters in The Mid-UV}
For clarity, we compare the line indices and spectrum breaks determined in our
FOS G270H spectra and in the IUE LW spectrum for M32 to those for IUE-observed
stars in Figures 5a-g. At the end of this section we will compare these
values with the IUE values for Galactic globular clusters. Those indices with
center wavelengths below 2550$\rm \AA$ were not used, as the IUE data are
generally too noisy shortward of this wavelength. The data for galaxies are
plotted as open symbols; those for the M31 globular clusters are plotted as
closed symbols; the data for Galactic stars are plotted as small, subdued
symbols. All error bars are smaller than the plotting symbol for the
2600--3000 color, save for NGC~5018, and are not shown. Errors in the line
indices are explicitly shown in Figures~5a-g.
among the galaxies, M31 globular clusters, Galactic globular clusters, and
Galactic stellar libraries is of interest.
Figure 5a: 2609/2660 break. The M31 clusters follow the locus of Galactic
library stars well, but the galaxies other than M32 fall systematically
below, with 2609/2660 values at a given 2600-3000 color up to 0.4 mag
smaller. This is probably a result of UVX contamination in the galaxies; the
implied contribution of the hot component in the galaxies at 2660 $\rm \AA$
is about 60\% in the most extreme case (NGC~7619; cf. \cite{BBBFL88}).
Figure 5b: 2828/2921 break. The values for the M31 globular clusters and the
galaxies agree well, and both lie at somewhat stronger index measures compared
to the stellar library at the same color. The offsets from the Galactic
library locus here cannot be explained by UVX contamination in the galaxies.
The two weakest clusters (MII and MIV) have 2828/2921 values lower than those
of any galaxy.
Figure 5c: Mg II 2800 index. This is the line index with the paradoxical
behavior with metallicity: it gets weaker as metallicity increases. Moreover,
it is double-valued with 2600--3000 color (as is the 2609/2660 break), peaking
near $2600-3000 = 1.5$ (owing to chromospheric emission becoming stronger in
later-type stars; cf. FOBW, \cite{smithetal}).
clusters show well-defined relationships between line index and color. The
M31 clusters agree well with the locus of Galactic library stars. However,
the galaxies have indices which are systematically $\sim 0.3$ mag smaller
than the clusters and the stars. This could be another instance of UVX
contamination; however, for at least 3 of the galaxies the Mg I 2852 index
(see below) does not show a similar offset despite immediate proximity in
wavelength. This argues against UVX contamination as the main cause of the
Mg II offset in the galaxies.
Figure 5d: Mg I 2852 index. Despite involving the same element, lying only
50$\rm \AA$ from Mg II, and sharing a continuum sideband with the Mg II index,
the M31 clusters and galaxies behave very differently in this Mg I index than
in Mg II. Indices for three of the clusters are significantly stronger than
in the Galactic library stars, and three of the galaxies are coincident with
these clusters. The other five objects fall close to the stellar locus.
Figure 5e: Fe I 3000 index. As we have shown previously (FOBW), the Fe I 3000
index shows little direct correlation with metallicity among Galactic stars,
but does show a strong temperature dependence. Here, five of the galaxies
and two of the clusters have indices significantly higher than the Galactic
library locus and the remaining four objects. This is the only UV index in
which the galaxies have conspicuously stronger index values than the M31
globular clusters (note that for most optical-band metallic features, gE
nuclei have stronger indices than do globular clusters).
Figure 5f: Blend 3096 index. This index is a combination of Al I (at 3092.7
$\rm \AA$) and Fe I lines. It was found in FOBW to have the clearest
luminosity segregation for late-type Galactic stars among the mid-UV indices.
Here, all of the M31 clusters and three of the galaxies have indices
significantly stronger than the Galactic library locus. In contrast to Fe I
3000, the clusters form the upper envelope for the galaxies. This is the
first index we have discussed where it is reasonably clear that the Galactic
library cannot be used to synthesize most of the enhanced cluster or
galaxy indices and the anomaly cannot be attributed to UVX contamination
affecting the line index itself. On the other hand, it is curious that the
three galaxies with the bluest measures of 2600--3000 (and, by inference, the
strongest UVX components) define Mg I 2852 and Blend 3096 relations with
2600--3000 colors that are {\it more similar} to those of the M31 globular
clusters than those defined by the other 4 galaxies.
Figure 5g: Mg Wide index. We calculate this index as it is most likely the
index researchers will use to investigate the stellar populations of galaxies
at high redshift, when signal-to-noise and bandpass widths are limited. The
relationships defined by M31 globular clusters, galaxies and Galactic stars
are essentially coincident in this figure.
Comparison of the four Mid-UV line indices measured for Galactic globular
clusters by Rose \& Deng (1998) to those measured here for elliptical galaxies
and M31 globular clusters is shown in Figures 6a-d. Error bars for the
HST-acquired data are suppressed in these figures for clarity. In all four diagrams,
the stellar populations of the Galactic globular clusters substantially overlap
those of the M31 globular clusters. The offsets between M31 globular clusters
and galaxies in the 2609/2660 and Mg II 2800 indices are also seen between the
Galactic globular clusters and the galaxies. The Galactic globular clusters
define a range in Mg I 2852 vs. 2600--3000 that encompasses both M31 globular
clusters and all of the galaxies. Note that two UV measures of M32
(the galaxy lying at 2600--3000 = 1.50), namely the 2600--3000 color and the
Mg II 2800 index, differ markedly from those of the Galactic globular cluster
47 Tuc, to which it is often compared. This reinforces the smaller amplitude distinctions at
optical wavelengths discussed by Rose (1994).
\newpage
\section{The Near-UV and Blue Indices}
Line indices for features in integrated spectra in the near-UV, from 3100$\rm
\AA$ to 3600$\rm \AA$, just below the Balmer jump, are the current ``orphans''
of stellar population work. In our perusal of the literature, we have only
found three studies that have explored this spectral line region in any depth
(\cite{DC94}, \cite{Carb82} (and associated references 1982-85), and
\cite{BRV88}). Of these, the line indices defined by Davidge \& Clark
are closest in spirit to the methodology employed here, while the
measurements of Carbon et al. and Boulade et al. differ in method and are
more specific to their own stellar population measurements.
As with the mid-UV indices, we would like to intercompare the near-UV
measurements of the elliptical galaxies and M31 globular clusters with those
of Galactic stars. The current stellar data for all but the CN4170 feature
are effectively limited to the 16 stars observed by Davidge \& Clark. The
spectra of stars observed by Boulade et al. were not reduced to fluxes, and
hence their measures could not be used here without separate calibrations,
which are not available. The CN4170 observations of Davidge \& Clark use the
Lick (\cite{worthey94}) bandpass definition for the ``CN1'' index. 74 stars
with Lick CN1 data were observed by FOBW, making this the only feature with a
large stellar sample of both near-UV feature data and mid-UV 2600--3000
colors. The 2600--3000 colors for the Davidge \& Clark stars are given in
Table~5a, while Table~5b lists the Worthey et al. (1994) CN4170 measures and
their 2600-3000 colors from Li et al (1998).
Four of the stars from Davidge \& Clark have published 2600--3000 colors in
FOBW, and three others have mid-UV colors from other IUE observations (as
maintained in our IUE Stellar Library; \cite{lietal98}, in preparation). For
the remaining Davidge \& Clark stars we estimate their values of 2600--3000
from those given for the average stellar groups in Fanelli et al. (1992),
using the colors given in Table~2 of that paper. (We note that owing to a
transcription error, the 2600--V and 3000--V colors for the group means listed
in Table 7B of \cite{FOBW92} are based on narrow-band rather than
intermediate-band fluxes, though the other tabulated quantities are correct.)
These 2600--3000 colors should be accurate to 0.10 mag for the non-IUE stars,
based on making similar estimates for the Davidge \& Clark stars with measured
IUE colors.
Figures 7a-f compare the near-UV line indices for galaxies, M31 globular
clusters and Galactic stars.
Figure 7a: NH 3360 index. Given previous measurements of strong CN features
in the M31 globular clusters and early-type galaxies, measurement of this
index is central to the issue of possible nitrogen enhancement in these
systems. The NH 3360 values for the Davidge \& Clark main sequence stars
earlier than K2~V are generally less than 3.00$\rm \AA$. As shown in this
figure, the NH 3360 index is obviously far stronger in the M31 globular
clusters than in Galactic stars of similar 2600--3000 color. While the
formal values of NH 3360 for the galaxies are stronger than for the Galactic
stars, the errors on these measures indicate that only the NH 3360 index
for NGC~7619 is significantly above those for the stars, but weaker than those
for the M31 globular clusters. These differences must be due to the strength
of the NH 3360 index itself and cannot be due to UVX contamination.
Figure 7b: CN 3883. The distribution of M31 globular clusters, galaxies and
stars is similar to that found for Bl 3580 and Fe I 3000, rather than as
found for NH 3360, in that this index seems to reach a maximum value. Since
this index is known to become saturated when the CN molecule becomes too
abundant (cf. \cite{langer}), it is not surprising that we see little
difference between the globular clusters and galaxies here, when we see more
of a difference between them in NH 3360.
Figure 7c: Ca HK. This index is mildly enhanced in the more metal-rich
globular clusters and the galaxies, being of similar line strength for
both kinds of stellar populations at similar 2600--3000 colors. The
degree of enhancement is small enough that it is plausible one might
be able to fit these measurements with a library of Galactic stars.
Figure 7d: Bl 3580. The CN feature at 3580$\rm \AA$ lies within this
passband (cf. \cite{Carb82}), but the feature seen is also a blend of many
lines due to mostly Fe-peak elements. As such, the relative distribution of
values for the M31 globular clusters, galaxies and Galactic stars is very
similar to that found for the Fe I 3000 index in the mid-UV (Fig.~5e). The
difference here is that most of the galaxies and globular clusters have
stronger Bl 3580, relative to the stars, than Fe I 3000. That both galaxies
and globular clusters share this difference rules out the UVX component as
its main source. The fact that the strength of this blend is not one-to-one
with that of CN4170 (cf. Table~3A) for M31 globular clusters suggests that
CN is not the principal absorption source for Bl 3580.
Figure 7e: CN 4170. The galaxies and M31 globular clusters define different
relationships between 2600--3000 and this CN index than they do with CN 3883
or NH 3360. The globular clusters and three of the galaxies lie on a
well-defined locus offset by $\sim$0.15 mag to stronger CN from the Galactic
stars. Three galaxies have yet stronger CN at a given 2600--3000, but if
their colors were corrected for the UVX contribution, it is possible they
would lie on an extension of the M31 cluster locus. The CN excesses for the
clusters and galaxies were expected on the basis of earlier studies
(e.g., \cite{BFGK84}; \cite{trip89}).
Figure 7f: Break 4000. This break correlates reasonably well with 2600--3000
color, but shows little difference among M31 globular clusters, galaxies and
Galactic stars. As such, this index is as disappointing as the Mg-wide index
(Fig.~5g), in that it does not distinguish differences in stellar populations
despite differences known in other Mg-- and N--related line indices.
Note that, in Figs. 7a-f, the line indices for the three galaxies with blue
2600--3000 colors (NGC~7619, 3608, 5831) show a relationship to those for the
redder 2600--3000 galaxies that is partly defined by this color difference.
If we had used a color such as B--V for the horizontal axis of these graphs,
the line index--color relations would be continuous for the galaxies in these
diagrams. However, the general relationship of the M31 globular clusters
relative to the galaxies would not be very different.
\section{Discussion}
It has been previously suggested on the basis of high precision optical and
infrared photometry or spectra (cf. \cite{frogel80}, \cite{O83},
\cite{BFGK84}, \cite{rose85}, \cite{Burstein87}, \cite{rose94}) that the cores
of elliptical galaxies, M31 globular clusters, and Galactic globular clusters
constitute three distinct types of stellar populations. The mid-- and near--UV
observations discussed here substantially strengthen this view. The more
subtle distinctions which are present in the optical bands are found to be
amplified in the mid-- and near--UV. However, in the variety of index-color
behaviors exhibited, our data also emphasize the complexity of the situation.
In this section we discuss these issues.
\subsection{2600--3000 Color vs. Optical Mg$_2$}
While the 1550--V color employed by Burstein et al. (1988) cleanly separates
out very hot stars in a stellar population, the 2600--3000 color depends on
the detailed mix of hot and warm stars. It is clear that for globular clusters
in our Galaxy and in M31, the dominant contribution of flux to the 2600--3000
spectral region comes from the brightest main sequence stars --- typically
main sequence turn-off (MSTO) stars. The situation is more complicated for
galaxies, as the UVX component can contribute significant flux throughout this
wavelength region in the spectra of most giant ellipticals.
As shown in many studies (cf. \cite{frogel80}; \cite{BFGK84}) optical and
infrared broad-band colors of galaxies, M31 globular clusters and Galactic
globular clusters tend to track each other very well. This is also true for
some optical absorption line indices, such as Mg$_2$, but less so for others
(such as the Lick iron line indices Fe 5270 and Fe 5335). In the relationship
of UV/optical colors with optical line indices, Burstein et al. (1988) showed
that 1550--V color correlates well with Mg$_2$, albeit in the opposite sense
to optical colors (1550--V becomes bluer with stronger $\rm Mg_2$).
Figure~8 shows how $\rm Mg_2$ is related to 2600--3000 color for galaxies, M31
globular clusters and Galactic globular clusters. To the data in the present
paper we have added the 2600--3000 colors for those Galactic globular clusters
with $\rm Mg_2$ measures by Burstein et al. (1984), as well as 2600--3000 colors
for representative galaxies from the study of Burstein et al. (1988), combined
with the new Trager et al. (1998) $\rm Mg_2$ measures. We note that the
aperture size difference between IUE ($\sim 8''$ effective diameter; cf.
Burstein et al.) and the Lick optical Mg$_2$ measures ($\sim 2''$ diameter)
did not significantly affect the 1550--V vs. Mg$_2$ analysis of Burstein et al.
Hence, we do not expect using IUE 2600--3000 colors to significantly affect the
comparison we make with optical Mg$_2$ here.
It is evident that the 2600--3000 relation to Mg$_2$ for these stellar
populations is not monotonic. M31 globular clusters and Galactic globular
clusters show a general increase in the strength of Mg$_2$ with redder
2600--3000 color but with the M31 clusters generally having stronger Mg$_2$
indices than Galactic clusters of similar 2600--3000 color {\it at all
colors}. One galaxy in this sample, the S0 NGC~5102 (like NGC~205, not
shown), has colors similar to the bluest clusters, but this is a well-known
star-forming system in which the blue colors result from upper main sequence
stars (cf. discussion in Burstein et al. 1988); it is not representative of
the other objects in the sample.
The other galaxies appear to fall into two loosely-defined groups. One group
of 4 (NGC~4382, NGC~5018, NGC~3605 and M32) is coincident with the reddest and
most metal-rich clusters. A larger group has significantly higher Mg$_2$
showing relatively small scatter in the index but large 2600--3000 scatter.
This is similar to what is seen when Mg$_2$ is plotted versus the 1550--V
color of Burstein et al. (1988), in that the line index varies little while
the 1550--V color varies over nearly 2 magnitudes. This strongly suggests that
the UVX component that gives rise to the 1550--V range in color among
ellipticals is also important in determining the 2600--3000 range in color.
We can estimate how much UVX contamination would be necessary to produce the
blueward scatter of the strong-lined objects. An extrapolation of the
relatively shallow slope of the line-color correlation in Figure~8 for
the clusters and the redder galaxy group predicts a 2600--3000 color
near 2.0 for objects with Mg$_2$ values comparable to those in the
strong-lined group. Adopting a 2600--3000 color for the UVX objects
of $\sim -0.5$, we find that the UVX would have to contribute over
65\% of the light at 2600 $\rm \AA$ in order to shift the strong-lined
objects to 2600--3000 colors of $<$ 1.0. This is reasonably consistent with
our estimate (\S 4) of over 50\% contamination by the UVX component in the 2600
bandpass when the UVX component is strong.
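The arithmetic behind this estimate is a simple two-component flux mix; a
short sketch (Python, using the colors quoted above) recovers the $\sim$65\%
figure. The function and variable names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def mixed_color(x, c_old=2.0, c_uvx=-0.5):
    # 2600-3000 color when the UVX supplies a fraction x of the
    # flux at 2600 A; a, b are the f(2600)/f(3000) flux ratios.
    a = 10.0 ** (-c_old / 2.5)
    b = 10.0 ** (-c_uvx / 2.5)
    return 2.5 * np.log10((1.0 - x) / a + x / b)

x = brentq(lambda x: mixed_color(x) - 1.0, 0.0, 1.0)
print(round(x, 2))   # ~0.67: over 65% of the 2600 A light
\end{verbatim}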
Obviously, a larger sample of UV data is needed to explore this correlation
further. On the basis of the present sample alone, it appears that the
galaxies themselves may fall into several discrete groups. Galaxies in several
of the other line index-color diagrams (notably Bl 3096, Fig.~5f) also have a
clump-like distribution. Hints of similar discreteness in the comparison of
fluxes at 1500\AA, 2500\AA, and the V-band with Mg$_2$ were pointed out by
Dorman et al. (1995). This would imply that metallicity, as measured by
Mg$_2$, is not the only driver of the amplitude of the UVX component. A
similar conclusion is reached by Ohl et al. (1998) on the basis of UV color
and line strength gradients inside ellipticals.
As a separate issue, it is not clear why the M31 globular clusters should have
stronger $\rm Mg_2$ indices than Galactic globular clusters at a given
2600--3000 color. As we are comparing HST FOS data with IUE data, it is
possible that systematic errors could play a role. While reliable errors for
the IUE cluster data are difficult to determine, it is hard to see how the
errors could be as large as the 0.3--0.5 magnitude 2600--3000 color
difference we do see.
\subsection{Nitrogen-Dependent Indices}
Burstein et al. (1981; 1984) first pointed out that CN line indices were
significantly stronger in M31 globular clusters than Galactic globular
clusters of the same B--V color. This discovery was later confirmed by
Tripicco (1989). As shown in Figure~7, the three near-UV line indices
involving nitrogen (CN 4170, CN 3883 and NH 3360) show marked enhancements
relative to Galactic stars for both galaxies and M31 globular clusters.
Whereas interpreting the strength of CN in terms of nitrogen abundance is
complicated by questions of carbon abundances and the mixture of main sequence
and giant-branch light (molecular abundances are very sensitive to surface
gravity), NH is notably cleaner because only the one metal is involved and the
light at 3300\AA\ is dominated by main sequence stars.
The fact that the M31 globular clusters are over-enhanced the {\it most} in NH
3360 relative to Galactic stars is a clear indication that the M31 globular
cluster stars are overabundant in nitrogen. How overabundant? Consider the
spectrum of MIV, the globular cluster in M31 with colors and optical
line strengths similar to those of M92 (NGC~6341), one of the most metal-weak
globular clusters in our own Galaxy. Carbon et al. (1982) studied the temperature, gravity and
abundance dependence of NH 3360 in the spectra of the giant stars of M92. They
show that, in this low abundance range, the NH 3360 feature depends little
on gravity, and primarily on temperature and abundance. In
particular, NH 3360 all but disappears in stars with effective temperatures
hotter than 5500 K. The MSTO stars in M92 are hotter than this temperature,
and it is these stars that contribute the bulk of the flux at 3360$\rm \AA$ in
this metal--poor cluster (cf. discussion by \cite{rose94}), and, by
association, in MIV.
Moreover, it seems that the NH feature reaches something approaching a
``saturation level'' in the more metal-rich M31 globular clusters, reaching a
near-constant value of $\rm \sim 7.5 \AA$. Such an effect is also seen in the
CN~3883 index where it is known to be a saturation effect (\cite{langer}). The
strong enhancement of both NH~3360 and CN~4170 in M31 globular clusters
relative to Galactic stars means that the enhancement of CN~4170 in M31
globular clusters relative to Galactic globular clusters found by Burstein et
al. (1984) is also due to a nitrogen overenhancement. Significantly, the
NH~3360 index for these galaxies is also enhanced relative to Galactic stars,
but is generally lower than for the M31 globular clusters. (Interestingly,
Boulade et al. (1993) found the NH~3360 strength in M32 to be reasonably fit with a
Galactic stellar population.) Thus, we are left with another unanswered
question: Why are many old stellar populations outside our Galaxy enhanced in
nitrogen abundance relative to the disk and halo stars near us and to the
stars in Galactic globular clusters?
\subsection{Mg-related Indices}
It has already been shown by Gonzalez (1993) and Trager (1996) that the
optical $\rm Mg_2$ index is overly strong in the stellar populations of
early-type galaxies, compared to optical Fe-based indices and stellar
population models made from Galactic stars. The M31 clusters also appear to
be significantly stronger in Mg$_2$ than Galactic clusters, compared to their
2600--3000 colors, though both are significantly weaker in Mg$_2$ than most
galaxies (Fig.~8; see, however, the Mg$_2$ - B--V relations for these same
clusters in \cite{BFGK84}).
Paradoxically, these relationships fail to hold for the UV Mg I and Mg II
indices, relative to the 2600--3000 color. Behavior of the Mg I 2852 index
(Fig.~6d) for the two sets of clusters may parallel that of Mg$_2$ in Fig.~8,
but the two cluster samples appear to overlap in the Mg II 2800 index
(Fig.~6c). Furthermore, the galaxies are no stronger in Mg I 2852 than the
clusters and are actually significantly weaker in Mg II.
Given the optical-band Mg$_2$ results, the galaxies are clearly not
underabundant in magnesium itself. There are at least three possible effects
which may operate to produce the UV Mg anomalies in galaxies. First is UVX
contamination, which could dilute the Mg II strengths, though as noted in
Sec.~4.2 this does not seem capable of explaining the entire Mg II anomaly.
Second is the fact that the Mg II index actually declines in more metal-rich
stars, apparently owing to large line blanketing in the sidebands (FOBW,
\cite{FOBW92}). The third possible effect is Mg II line reversal by
chromospheric emission. This is clearly detectable in individual stars
(\cite{smithetal}). However it is only present in stars younger than 1--2 Gyr
(according to conventional Ca II chromospheric emission dating scales). While
there is good evidence for the presence of intermediate age populations in the
range 4--8 Gyr in some of our sample elliptical galaxies (Gonzalez 1993,
Trager 1996), components with ages in the 1--2 Gyr range can usually
be excluded because of their gross effect on the continuum energy distribution.
However, it may be that the chromospheric and galaxy spectral age scales are
not yet mutually consistent.
As a separate consideration, it is clear from the distinct behavior of the
elliptical galaxies and star clusters in the Mg indices that a metal-poor
(cluster-like) stellar population is {\it not} likely to be dominant at
far--UV wavelengths in the galaxies, as had been claimed by Park \& Lee
(1997), among others.
\subsection{The Other UV Indices}
\subsubsection{Breaks 2609/2660 and 2828/2921}
The 2609/2660 index is the most affected by flux from the far-UV strong
stellar population in galaxies among all of the indices measured in this
paper. The galaxies as a group are offset from the globular clusters as a
group by $\approx 0.3$ mag. As we know the ellipticals have a strong UVX
component in their UV spectra that is missing in the globular clusters, it is
likely that the index offset is explained mainly by UVX contamination. Rose \&
Deng (1998) also show that the offset between 47 Tuc and M32 (the reddest
galaxy in Fig.~6a) can be explained by the weak UVX component (\cite{BBBFL88})
in M32 that is not present in 47 Tuc.
The 2828/2921 index shows reasonable agreement among all of the globular
clusters and galaxies, with perhaps an intrinsic spread of about 0.1 mag.
Given this general agreement, it is likely that this index can be fit in all
of the stellar populations with the normal Galactic library. UVX
contamination is evidently considerably less at 2900\AA\ than at 2600\AA.
\subsubsection{Fe I 3000 and Bl 3580}
Both of these indices are dominated by a series of strong Fe I lines,
although the BL 3580 index is also somewhat affected by CN. The apparent
excess line strengths in Figs. 5e and 7d may be explainable by compositeness
effects --- i.e., the combination of cool and warm stars from the normal
Galactic library, together with the presence of hot UVX stars in some of the
galaxies. Detailed modeling would be required to confirm that, however.
\subsubsection{Blend 3096}
This is the first time this feature has been reliably measured in external
stellar populations. Examination of the solar spectrum (\cite{moore66}) shows
the strongest lines in the central bandpass of 3086--3106$\rm \AA$ are due to
Al I (at 3092.7$\rm \AA$) and Mg I (at 3096.9$\rm \AA$), combined with weaker
lines of Fe I, Ti II, Ni I and OH. To our surprise, this index is so strong
in the strongest-lined globular clusters and in NGCs~3608, 5831 and 7619
that it appears no combination of Galactic stars can fit it.
A close-up examination of the observed spectra of the M31 globular clusters
(where internal velocity dispersion does not erase fine detail) shows this
feature to be double-peaked in absorption, with a separation
between the two peaks of about 6$\rm \AA$. One peak is around 3092$\rm \AA$, and
can be reasonably associated with the Al I line. The other peak is around
3098$\rm \AA$, and is likely a combination of the strong Mg I feature plus
the relatively strong Fe I lines between 3099--3101$\rm \AA$.
The strength of this feature in the M31 globular cluster spectra and in
a subset of the galaxies is such that it is not just due to the known
Mg I enhancement, but also must involve an enhancement in aluminum as well.
This is the first time, to our knowledge, that an enhancement in Al has
been found in extragalactic stellar systems.
\subsubsection{Ca HK, 4000$\rm \AA$ Break, Mg Wide}
These three low-resolution line indices measure some of the most prominent
features in the integrated spectra of old stellar systems: strong breaks in
the spectra around the Ca H\&K features near 4000$\rm \AA$ and near the Mg I
2852 and Mg II 2800 features. These are the line strengths workers have used
to try to measure evolution in stellar populations at high redshift (e.g.
\cite{wind94}). It is therefore disappointing that, whereas we can identify
three separate stellar populations among M31 globular clusters, elliptical
galaxies and Galactic stars/globular clusters in other line indices, these
three indices are insensitive to these distinctions. As Rose (1994) and others
have pointed out, lower resolution measures of old stellar populations can
suggest a deceptive uniformity.
\subsection{The M31 Globular Cluster HST Color-Mag Diagrams}
Fusi-Pecci et al. (1996) have published color-magnitude diagrams (V vs. B--V
and I vs. V--I) for the four M31 globular clusters in our study. While the
metallicities estimated from the giant branches for MIV, MII and K280 are
reasonably in agreement with the metalliticies we would estimate from our FOS
observations, K58 stands out as a notable exception. Whereas Fusi-Pecci et
al. would have K58 be only slightly more metal-rich than MII, both the Brodie
\& Huchra measurement of its optical Mg$_2$ index, as well as our FOS spectra,
indicate this globular cluster likely has somewhat above solar metallicity.
Some confusion in ranking the clusters via the CM diagrams might be the result
of using V--I for some and B--V for others (e.g., K58). A V--I diagram for
K58 would be interesting, as it is possibly metal-rich enough to have its I
band flux saturated via an extreme nitrogen overabundance, much like the Frogel
et al. result for the bulge of our Galaxy (\cite{frogel87}). Clearly, accurate
integrated spectra of these and other M31 globular clusters, extending into the
near-infrared, would be of interest.
\section{Conclusions}
The same two elemental ``suspects'' come up time and again when intrinsic
differences among stellar populations are discussed: nitrogen and magnesium.
To this list, we now can perhaps add aluminum based on the behavior of the
Bl~3096 index. One important contribution of our observations is that variations
in nitrogen abundance have been cleanly identified. It is clear from the
enhanced NH~3360 features, especially in the M31 globular clusters,
that as a group these four clusters have stars that are substantially
enhanced in nitrogen compared to Galactic stars. Based on the models of Carbon
et al., it is possible that the overabundance reaches factors of 10 -- 50 in
the M31 clusters. Nitrogen abundance estimates based on the CN indices
longward of 3800\AA\ are made ambiguous because of the involvement of carbon
and because the indices are more sensitive to the mixture of giant and dwarf
light. The value of the NH~3360 feature in old stellar population spectra is
that it isolates nitrogen and that dwarf light dominates at this wavelength.
The inferred large overabundance of nitrogen in the M31 globular clusters does
not conflict with the known agreement of broad-band colors between M31 and
Galactic globular clusters. Nor does it conflict with the fact that the
HST-derived color-mag diagrams of these same globular clusters (cf.
\cite{fusipec96}) yield giant branch slopes generally consistent with their
derived broad-band colors. As Carbon et al. show, stars with very different
atmospheric abundances of nitrogen exist side-by-side in the color-mag diagram
of M92. Moreover, theoretical studies by VandenBerg (e.g. \cite{vdbS}) show
that increasing only the oxygen abundance does not change the structure of the
HR diagram, but rather makes the cluster look older than it really is. As the
same is likely to be true for nitrogen, HR diagrams alone will not divulge
nitrogen abundance differences among different stellar populations.
The combination of enhancements in N, Mg and perhaps Al is unusual, as each
is thought to come from a different kind of star: N from low mass stars; Mg
directly from C-- and O--burning; Al from a combination of C-- and Ne--
burning, but also tied directly to O abundance. Laird (1985) and Boulade et
al. (1988) both found low metallicity stars with large (5--50) overabundances
in nitrogen. Norris \& Smith (1983) point to a possible connection between
overenhancements of N, Na and Al in Galactic globular cluster giant stars.
It would obviously be useful to attempt to measure Na abundances in
elliptical galaxies from prominent features like the Na I D lines at high
enough spectral resolution that contamination from interstellar absorption can
be eliminated.
Our observations have also emphasized the importance of the effects of the hot
UVX component at wavelengths as long as 3000\AA. This is a potentially
serious complication for the age-dating of high redshift galaxies based on
restframe mid--UV spectra. At present, we do not understand the UVX component
well enough to predict its strength from first principles; only direct
observations of the restframe far--UV spectra suffice to determine its
amplitude. This is particularly true if it is not simply related to a single
parameter like metallicity, as suggested by the apparent discreteness of the
galaxy distribution in some of our correlations.
Other implications of these results for the study of stellar populations of
early-type galaxies, both nearby and at high redshift, will be discussed
further in Ponder et al. (1998).
\section{Acknowledgements}
DB, CCW, RWO, JAF and JAR all wish to thank and acknowledge the support
of NASA Grants GO-2298-87A and GO-6585-95A for this research, and the referee
for helpful comments. DB wishes to thank the hospitality of the Space
Telescope Science Institute during his visits and the assistance of Mr. Yong
Li. CCW specifically wishes to acknowledge that this project is supported by
the research contract GO-06585.04-95A awarded to the Computer Sciences
Corporation by Space Telescope Science Institute which is operated by the
Association of Universities for Research in Astronomy, Inc., for NASA under
contract NAS5-26555. RWO was also supported in part by NASA grant NAG5-6403
to the University of Virginia.
\section{Introduction}
A circle packing in the plane $\c$ is simply a union of circles (here a line
is regarded as a circle of infinite radius). As we allow circles to intersect each other,
our definition of a circle packing is more general than the conventional one.
For a given circle packing $\P$ in the plane, we are interested in the counting and distribution
of small circles in $\P$. The natural size of a circle is measured by its radius; we will instead use
the curvature of a circle, that is, the reciprocal of its radius.
We suppose that $\mathcal P$ is locally finite in the sense that for any $T>1$, there
are only finitely many circles in $\P$ of curvature at most $T$ in any fixed bounded
subset of $\c$. Geometrically, $\P$ is locally finite if there
is no infinite sequence of circles in $\P$ converging to a fixed circle.
For instance, if the circles of $\P$ have disjoint interiors as in Fig.~\ref{f3}, $\P$ is locally finite.
For a bounded subset $E$ of $\c$ and $T>1$, we set
$$N_T(\P, E):=\# \{C\in \P: C\cap
E\ne\emptyset ,\;\;\operatorname{Curv}(C)<T\} $$
where $\operatorname{Curv}(C)$ denotes the curvature of a circle $C$.
The local finiteness assumption on $\P$ implies that $N_T(\P, E)<\infty$.
Our question is then if there exists a Borel measure $\omega_\P$ on $\c$
such that for all nice Borel subsets $E_1, E_2\subset \c$,
$$\frac{N_T(\P, E_1)}{N_T(\P, E_2)}\sim_{T\to \infty} \frac{\omega_\P(E_1)}{\omega_\P(E_2)},$$
assuming $N_T(\P, E_2)>0$ and $\omega_\P(E_2)>0$.
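As an elementary illustration (not part of any argument in this paper), the
counting function for a finite truncation of a packing, with circles stored as
center--radius triples and $E$ a coordinate box, can be computed as follows;
all names are hypothetical.
\begin{verbatim}
import numpy as np

def circle_meets_box(cx, cy, r, x0, x1, y0, y1):
    # The circle (not the disk) meets the closed box iff the box
    # contains points both at distance <= r and >= r from the center.
    d_near = np.hypot(max(x0 - cx, 0.0, cx - x1),
                      max(y0 - cy, 0.0, cy - y1))
    d_far = np.hypot(max(cx - x0, x1 - cx), max(cy - y0, y1 - cy))
    return d_near <= r <= d_far

def N_T(circles, box, T):
    return sum(1 for cx, cy, r in circles
               if 1.0 / r < T and circle_meets_box(cx, cy, r, *box))
\end{verbatim}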
Our main theorem applies to a very general packing
$\P$, provided $\P$ is invariant under a non-elementary (i.e., not virtually abelian)
Kleinian group satisfying certain finiteness conditions.
Recall that a Kleinian group is a discrete subgroup of $G:=\operatorname{PSL}_2(\c)$
and $G$ acts on the extended complex plane $\hat \c=\c\cup\{\infty\}$ by M\"obius transformations:
$$\begin{pmatrix} a & b\\ c& d\end{pmatrix} z=\frac{az+b}{cz+d} $$
where $a,b,c,d\in \c$ with $ad-bc=1$ and $z\in \hat \c$.
A M\"obius transformation maps a circle to a circle and, by the Poincar\'e extension,
$G$ can be identified with the group of all orientation preserving isometries of $\mathbb H^3$.
Considering the upper-half space model $\mathbb H^3=\{(z, r): z\in \c, r>0\}$,
the geometric boundary $\partial_\infty(\mathbb H^3)$ is naturally identified with
$\hat \c$.
For a Kleinian group $\Gamma$,
we denote by $\Lambda(\Gamma)\subset\hat \c$ its limit set,
that is, the set
of accumulation points of an orbit of $\Gamma$ in $\hat \c$, and
by $0\le \delta_\Gamma\le 2 $ its critical exponent. For $\Gamma$ non-elementary, it is known that
$\delta_\Gamma >0$.
Let $\{\nu_x:x\in \mathbb H^3\}$ be a $\Gamma$-invariant conformal density of
dimension $\delta_\Gamma$ on $\Lambda(\Gamma)$, which exists by the work
of Patterson \cite{Patterson1976} and Sullivan \cite{Sullivan1979}.
\begin{figure}
\begin{center}
\includegraphics[width=5cm]{applane.pdf}
\includegraphics[width=5cm]{splane.pdf}
\caption{Apollonian circle packing and Sierpinski curve (by C. McMullen)}
\label{f3}
\end{center}
\end{figure}
In order to present our main theorem on the asymptotics of $N_T(\P, E)$,
we introduce two invariants associated to $\Gamma$ and $\P$.
The first one is a Borel measure on $\c$ depending only on $\Gamma$.
\begin{Def}\rm Define
a Borel measure $\omega_\Gamma$ on $\c$: for $\psi\in C_c(\c)$
$$\omega_\Gamma(\psi)=\int_{z\in \c} \psi (z) e^{\delta_\Gamma \beta_z(x,(z,1)) }\; d\nu_{x}(z) $$
where $x\in \mathbb H^3$ and
$\beta_z(x_1,x_2)$ is the signed distance between the horospheres based at $z\in \c$ and passing
through $x_1, x_2\in \mathbb H^3$.
By the conformal property of $\{\nu_{x}\}$,
$\omega_{\Gamma}$ is well-defined, independent of the choice of $x\in \mathbb H^3$.
\end{Def}
We have a simple formula: for $j=(0,1)\in \mathbb H^3$,
$$d\omega_{\Gamma} =(|z|^2+1)^{\delta_\Gamma} d\nu_{j} .$$
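One way to verify this formula (a sketch, using the standard normalization of
the Busemann cocycle in the upper half-space model,
$e^{\beta_z(x,y)}=\frac{|x-z|^2/h_x}{|y-z|^2/h_y}$ for $z\in\c$, where $h_x$
denotes the Euclidean height of $x$): taking $x=j=(0,1)$ and $y=(z,1)$,
$$e^{\beta_z(j,(z,1))}=\frac{(|z|^2+1)/1}{1/1}=|z|^2+1,$$
so that $e^{\delta_\Gamma \beta_z(j,(z,1))}=(|z|^2+1)^{\delta_\Gamma}$, as claimed.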
For a vector $u$ in the unit tangent bundle $\op{T}^1(\mathbb H^3)$, denote by $u^{+}\in \hat \c$
(resp. $u^-\in \hat \c$) the
forward (resp. backward) end point of the geodesic determined by $u$.
On the contracting horosphere $H^-_\infty(j)\subset \op{T}^1(\mathbb H^3)$ consisting of
upward unit normal vectors on the horizontal plane $\{(z,1):z\in\c\}$,
the normal vector based at $(z,1)$ is mapped to $z$ via
the map $u\mapsto u^-$.
Under this correspondence, the measure $\omega_\Gamma$ on $\c$ is equal to the density of
the Burger-Roblin measure $\tilde m^{\operatorname{BR}}$ (see Def. \ref{BR})
on $H^-_\infty(j)$.
The second invariant is a number in $[0,\infty]$ measuring a certain size of $\P$.
\begin{Def}[The $\Gamma$-skinning size of $\P$] {\rm
For a circle packing $\P$ invariant under $\Gamma$,
define $0\le \operatorname{sk}_\Gamma(\P)\le \infty$ as follows:
$$\operatorname{sk}_\Gamma(\P):=\sum_{i\in I} \int_{s\in \operatorname{Stab}_{\Gamma} (C_i^\dagger)\backslash C_i^\dagger} e^{\delta_\Gamma
\beta_{s^+}(x,\pi(s))}d\nu_{x}(s^+)$$
where $x\in \mathbb H^3$, $\pi: \op{T}^1(\mathbb H^3)\to \mathbb H^3$ is the canonical projection,
$\{C_i:i\in I\}$ is a set of representatives of $\Gamma$-orbits in $\P$,
$C_i^\dagger\subset \op{T}^1(\mathbb H^3)$ is the set of unit normal vectors to
the convex hull $\hat C_i$ of $C_i$, and $\operatorname{Stab}_{\Gamma} (C_i^\dagger)$ denotes the set-wise
stabilizer of $C_i^\dagger$ in $\Gamma$.
Again by the conformal property of $\{\nu_x\}$, the definition of
$\operatorname{sk}_{\Gamma}(\P)$ is independent of the choice of $x$ and the choice of representatives $\{C_i\}$.
}\end{Def}
We remark that the value of $\operatorname}\newcommand{\supp}{\operatorname{supp}{sk}_\Gamma(\P)$ can be zero or infinite in general
and we do not assume any condition on $\operatorname}\newcommand{\supp}{\operatorname{supp}{Stab}_{\Gamma} (C_i^\dagger)$'s (they may be trivial).
We denote by
$m^{\operatorname{BMS}}_\Gamma$ the Bowen-Margulis-Sullivan measure on the unit tangent bundle
$\op{T}^1(\Gamma\backslash \mathbb H^3)$ associated to the density $\{\nu_x\}$
(Def. \ref{bms}).
When $\Gamma$ is geometrically finite, i.e.,
$\Gamma$ admits a finite sided fundamental domain in $\mathbb H^3$,
Sullivan showed that $|m^{\operatorname{BMS}}_\Gamma|<\infty$ \cite{Sullivan1984}
and that $\delta_\Gamma $ is equal to
the Hausdorff dimension of the limit set $\Lambda(\Gamma)$ \cite{Sullivan1979}.
A point in $\Lambda(\Gamma)$ is called a parabolic fixed point of $\Gamma$ if it is fixed
by a parabolic element of $\Gamma$.
\begin{Def}\rm
By an \emph{infinite bouquet of tangent circles glued at a point $\xi\in \c$},
we mean a union of two collections, each consisting of
infinitely many pairwise internally tangent circles with the common tangent point $\xi$ and their radii tending to $0$,
such that the circles in each collection are externally tangent to the circles in the other at $\xi$ (see Fig.~\ref{bouqqq}).
\end{Def}
\begin{figure}
\begin{center}
\includegraphics[width=5cm]{bouquet1.pdf}
\caption{Infinite bouquet of tangent circles}
\label{bouqqq}
\end{center}
\end{figure}
\begin{Thm}\label{m1gf}
Let $\mathcal P$ be a
locally finite circle packing in $\c$ invariant under
a non-elementary geometrically finite group $\Gamma$ and with finitely many
$\Gamma$-orbits.
If $\delta_\Gamma\le 1$, we further assume that $\P$ does not contain an infinite
bouquet of tangent circles glued at a parabolic fixed point of $\Gamma$.
Then $\operatorname{sk}_{\Gamma}(\P)<\infty$ and
for any bounded Borel subset $E$ of $\c$ with
$\omega_\Gamma(\partial(E))=0$,
$$\lim_{T\to \infty} \frac{N_T(\P, E)}{T^{\delta_\Gamma}}=
\frac{ \operatorname{sk}_{\Gamma}(\P) }{\delta_\Gamma \cdot |m^{\operatorname{BMS}}_\Gamma|} \cdot
\omega_\Gamma(E) . $$
If $\P$ has infinitely many circles, then $\operatorname{sk}_{\Gamma}(\P)>0$.
\end{Thm}
\begin{Rmk}\rm
\begin{enumerate}
\item Given a finite collection
$\{C_1, \cdots, C_m\}$ of circles in the plane $\c$ and a non-elementary
geometrically finite group $\Gamma <\operatorname{PSL}_2(\c)$,
Theorem \ref{m1gf} applies to $\P:=\cup_{i=1}^m \Gamma(C_i)$, provided
$\P$ contains neither infinitely many circles converging to a fixed circle
nor any infinite bouquet of tangent circles.
\item In the case when $\delta_\Gamma\le 1$ and
$\P$ contains an infinite bouquet of tangent circles glued at a parabolic fixed point of $\Gamma$, we have
$\operatorname{sk}_{\Gamma}(\P)=\infty$ \cite{OhShahGFH}.
In that case if the interior of $E$ intersects $\Lambda(\Gamma)$
non-trivially, the growth order of $N_T(\mathcal P, E)$
is $T\log T$ if $\delta_\Gamma=1$, and it is
$T$ if $\delta_\Gamma<1$ \cite{OS}.
\item We note that the asymptotic of $N_T(\P, E)$
depends only on $\Gamma$, except for the $\Gamma$-skinning size of $\P$.
This is rather surprising in view of the fact
that there are circle packings with completely different
configurations but invariant under the same group $\Gamma$.
\item Theorem \ref{m1gf} implies that the asymptotic
distribution of small circles in $\P$ is completely determined by
the measure $\omega_\Gamma$: for any bounded Borel
sets $E_1, E_2$ with $\omega_\Gamma(E_2)>0$ and
$\omega_\Gamma(\partial(E_i))=0$, $i=1,2$, as $T\to \infty$,
$$\frac{N_T(\P, E_1)}{N_T(\P, E_2)} \sim \frac{\omega_\Gamma(E_1)}{\omega_\Gamma(E_2)} .$$
\item Suppose that all circles in $\P$ can be oriented so that they have
disjoint interiors whose union is equal to
the domain of discontinuity $\Omega(\Gamma):=\hat \c-\Lambda(\Gamma)$. If
either $\P$ is bounded or $\infty$ is a parabolic fixed point
for $\Gamma$, then
$\delta_\Gamma$ is equal to the circle packing exponent $e_{\mathcal P}$ defined as:
$$e_{\mathcal P}=\inf\{s: \sum_{C\in \P} \operatorname{Curv}(C)^{-s}<\infty\}=\sup\{s:\sum_{C\in \P}\operatorname{Curv}(C)^{-s}=\infty\}. $$
This was proved by Parker~\cite{Parker1995} extending the earlier works of Boyd \cite{Boyd1973} and Sullivan \cite{Sullivan1984} on bounded
Apollonian circle packings.
\end{enumerate}
\end{Rmk}
In the proof of Theorem \ref{m1gf}, the geometric finiteness assumption on $\Gamma$ is used only to
ensure the finiteness of the Bowen-Margulis-Sullivan measure $m^{\operatorname{BMS}}_\Gamma$.
We have the following more general theorem:
\begin{Thm}\label{m1}
Let $\mathcal P$ be a locally finite
circle packing invariant under
a non-elementary Kleinian group $\Gamma$ and with finitely many $\Gamma$-orbits.
Suppose that
$$|m_{\Gamma}^{\operatorname{BMS}}|<\infty\quad\text{and}\quad \operatorname{sk}_\Gamma(\P)<\infty .$$
Then for any bounded Borel subset $E$ of $\c$ with
$\omega_\Gamma(\partial(E))=0$,
$$\lim_{T\to \infty}\frac{N_T(\P, E)}{T^{\delta_\Gamma}}
=\frac{ \operatorname{sk}_{\Gamma}(\P) }{\delta_\Gamma \cdot |m^{\operatorname{BMS}}_\Gamma|} \cdot
\omega_\Gamma(E) . $$
If $\P$ is infinite, then $\operatorname{sk}_{\Gamma}(\P)>0$.
\end{Thm}
Since there is a large class of geometrically infinite
groups with $|m^{\operatorname{BMS}}_\Gamma|<\infty$ \cite{Peigne2003}, Theorem \ref{m1} is not subsumed
by Theorem \ref{m1gf}.
We remark that the condition on the finiteness of
$m_\Gamma^{\operatorname{BMS}}$ implies that the density $\{\nu_x\}$ is determined uniquely up to homothety
(see \cite[Coro. 1.8]{Roblin2003}).
\begin{Rmk}{\rm
\begin{enumerate}
\item
The assumption of $|m_\Gamma^{\operatorname{BMS}}|<\infty$ implies that
$\nu_x$ (and hence $\omega_\Gamma$) is atom-free \cite[Sec. 1.5]{Roblin2003}, and hence
the above theorem works for any bounded Borel subset $E$
intersecting $\Lambda(\Gamma)$ only at finitely many points.
\item It is not hard to show that
$\Gamma$ is Zariski dense in $\operatorname{PSL}_2(\c)$ considered as a real algebraic group if and only
if $\Lambda(\Gamma)$ is not contained in a circle in $\hat \c$.
In such a case,
any proper real subvariety of $\hat \c$ has zero $\nu_x$-measure.
This is shown in \cite[Cor.1.4]{FlaminioSpatzier} for $\Gamma$ geometrically finite but
its proof works equally well if $\nu_x$ is $\Gamma$-ergodic, which is the case when $|m_\Gamma^{\operatorname{BMS}}|<\infty$.
Hence Theorem~\ref{m1} holds for any Borel subset $E$ whose boundary is contained in a countable union of real algebraic curves.
\end{enumerate}} \end{Rmk}
We now describe some concrete applications of Theorem~\ref{m1gf}.
\bigskip
\subsection{Apollonian gasket}
Three mutually tangent circles in the plane determine a curvilinear triangle, say, $\mathcal T$.
By a theorem of Apollonius of Perga (c. 200 BC), one can inscribe
a unique circle into the triangle $\mathcal T$, tangent to all of the
three circles. This produces three more curvilinear triangles inside $\mathcal T$ and
we inscribe a unique circle into each triangle. By continuing to add circles in this way,
we obtain an infinite circle packing of $\mathcal T$, called the Apollonian gasket for $\mathcal T$,
say, $\mathcal{A}$ (see Fig.~\ref{f4}).
\begin{figure}
\begin{center}
\includegraphics[width=4cm]{CurviLin}
\caption{Apollonian gasket}\label{f4}
\end{center}
\end{figure}
By adding {\it all} the circles tangent to three of the given ones, not only those within $\mathcal T$,
one obtains an Apollonian circle packing $\mathcal P:=\mathcal P(\mathcal T)$, which may be bounded or unbounded (cf. \cite{GrahamLagariasMallowsWilksYanI}, \cite{GrahamLagariasMallowsWilksYanI-n}, \cite{SarnakMAA}, \cite{SarnakToLagarias}, \cite{KontorovichOh}).
\begin{figure}\begin{center}
\includegraphics [width=2in]{newDual}
\caption{Dual circles}
\label{picDual}\end{center}
\end{figure}
Fixing four mutually tangent circles in $\P$,
consider the four dual circles determined by the six intersection points (see Fig. \ref{picDual}
where the dotted circles are dual circles to the solid ones),
and
denote by $\Gamma_\P$ the intersection of $\operatorname{PSL}_2(\c)$ and the group
generated by the inversions with respect to those dual circles.
Then $\Gamma_\P$ is a geometrically finite Zariski dense subgroup
of the real algebraic group $\operatorname{PSL}_2(\c)$ preserving $\P$, and
its limit set in $\hat \c$ coincides with the residual set of $\P$ (cf. \cite{KontorovichOh}).
We denote by $\alpha$
the Hausdorff dimension of the residual set of $\P$, which
is known to be approximately $1.30568$ according to McMullen \cite{McMullen1998}.
\begin{Cor}\label{Apo} Let $\mathcal{T}$ be a curvilinear triangle
determined by three mutually tangent circles and $\mathcal{A}$ the Apollonian gasket for
$\mathcal{T}$.
Then for any Borel subset $E\subset \mathcal T$
whose boundary is contained in a countable union of real algebraic curves,
$$\lim_{T\to \infty}\frac{N_T(E)}{ T^{\alpha} }
= \frac{ \operatorname{sk}_{\Gamma_{\P}}(\P) }{\alpha \cdot |m^{\operatorname{BMS}}_{\Gamma_{\P}}|} \cdot
\omega_{\Gamma_{\P}}(E)$$
where $N_T(E):=\#\{C\in \mathcal{A}: C\cap E\ne \emptyset, \;\; \operatorname{Curv}(C)<T\}$ and
$\P=\P(\mathcal T)$.
\end{Cor}
Either when $\P$ is bounded
and $E$ is the disk enclosed by the largest circle of $\P$, or when $\P$ lies between two parallel lines
and $E$ is the whole period,
it was proved in \cite{KontorovichOh} that
$N_T(\P, E)\sim c \cdot T^\alpha$ for some $c>0$. This implies that
{\footnote{$\asymp$ means that the ratio of the two sides is between two uniform constants}}
$N_T(\mathcal T)\asymp T^\alpha$.
The approach in \cite{KontorovichOh} was based on
the Descartes circle theorem in parameterizing quadruples of circles of curvature at most $T$ as
vectors of maximum norm at most $T$
in the cone defined by the Descartes quadratic equation.
We remark that the fact that $\alpha$ is strictly bigger than $1$
was crucial in the proof of \cite{KontorovichOh}, which was based on the $L^2$-spectral theory
of $\Gamma_{\mathcal P}\backslash \mathbb H^3$.
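As a purely numerical aside (it plays no role in any proof, and the threshold values below are arbitrary), the Descartes reflections $k\mapsto 2(\text{sum of the other three curvatures})-k$ give a convenient way to enumerate the curvatures of a bounded Apollonian packing and to observe the growth $N_T\asymp T^\alpha$ empirically. The following Python sketch does this for the root quadruple $(-1,2,2,3)$; the pruning uses the standard facts that each circle beyond the initial four arises from a unique reduced word in the four reflections and that curvatures increase along reduced words.
\begin{verbatim}
from collections import deque

def count_circles(root, T):
    """Count circles of curvature < T in the bounded Apollonian
    packing generated by a root Descartes quadruple: each reduced
    word in the four reflections contributes one new circle."""
    total = sum(1 for k in root if k < T)
    queue = deque([(root, -1)])        # (quadruple, index swapped last)
    while queue:
        quad, last = queue.popleft()
        for i in range(4):
            if i == last:              # never undo the previous reflection
                continue
            new = 2 * (sum(quad) - quad[i]) - quad[i]
            if new < T:
                total += 1
                queue.append((quad[:i] + (new,) + quad[i + 1:], i))
    return total

for T in (100, 1000, 10000):
    N = count_circles((-1, 2, 2, 3), T)
    print(T, N, N / T ** 1.30568)      # last column roughly stabilizes
\end{verbatim}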
\begin{figure}
\begin{center}
\includegraphics [height=9cm]{pi1}
\includegraphics [height=8cm]{pi2}\end{center}
\caption{Regions whose boundary intersects $\Lambda(\Gamma)$ at finitely many points (background pictures are reproduced with permission from Indra's Pearls, by
D.Mumford, C. Series and D. Wright, copyright Cambridge University Press 2002)}
\label{f2}
\end{figure}
\bigskip
\subsection{Counting circles in the limit set $\Lambda(\Gamma)$}
If $X$ is a finite volume hyperbolic $3$-manifold with totally geodesic boundary, then
its fundamental group $\Gamma:=\pi_1(X)$ is geometrically finite and $X$ is homeomorphic to
$\Gamma \backslash \mathbb H^3\cup \Omega(\Gamma)$ where
$\Omega(\Gamma):= \hat \c -\Lambda(\Gamma)$ is the domain
of discontinuity \cite{Kojima1992}.
The set $\Omega(\Gamma)$
is a union of countably many disjoint open disks in this case and has finitely many
$\Gamma$-orbits by the Ahlfors finiteness theorem \cite{Ahlfors1964}.
Hence Theorem \ref{m1gf} applies to counting
these open disks in $\Omega(\Gamma)$ with respect to the curvature.
For example, for the group $\Gamma$ generated by reflections in the sides of the unique
regular tetrahedron whose convex core is bounded by four $\frac{\pi}4$ triangles and
four right hexagons, $\Omega(\Gamma)$ is illustrated
in the second picture in Fig. \ref{f3} (see \cite[P.9]{McMullennotes275} for details).
This circle packing is called a {\it Sierpinski curve}, being homeomorphic to
the well-known Sierpinski carpet \cite{Claytor1934}.
Two pictures in Fig. \ref{f2} can be found
in the beautiful book {\it Indra's pearls} by Mumford, Series and Wright
(see P. 269 and P. 297
of \cite{MumfordSeriesWright})
where one can find many more circle packings to which our theorem applies.
The book presents explicit geometrically finite Schottky groups $\Gamma$ whose limit sets
are illustrated in Fig. \ref{f2}.
The boundaries of the shaded regions meet $\Lambda(\Gamma)$ only at finitely many points.
Hence our theorem applies to counting circles in these shaded regions.
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{fractal}
\caption{Schottky dance (reproduced with permission from Indra's Pearls, by
D.Mumford, C. Series and D. Wright, copyright Cambridge University Press 2002)} \label{f7}
\end{center}
\end{figure}
\subsection{Schottky dance}
Another class of examples is obtained by considering the images of
Schottky disks under Schottky groups.
Take $k\ge 1$ pairs of mutually disjoint closed disks
$\{(D_i, D_i') : 1\le i\le k\}$ in $\c$ and for each $1\le i\le k$, choose
a M\"obius transformation $\gamma_i$ which maps the exterior of
$D_i$ onto the interior of $D_i'$ (so that $\gamma_i^{-1}$ maps the exterior of
$D_i'$ onto the interior of $D_i$). The group, say, $\Gamma$,
generated by $\{\gamma_i:1\le i\le k\}$ is called a Schottky group of genus $k$ (cf. \cite[Sec. 2.7]{MardenOutercircles}).
The $\Gamma$-orbits of the disks $D_i$ and $D_i'$ nest
down onto the limit set $\Lambda(\Gamma)$, which is totally disconnected.
If we denote by $\P$ the union $\cup_{1\le i\le k} \Gamma(C_i)\cup \Gamma(C_i')$
where $C_i$ and $C_i'$ are the boundaries of $D_i$ and $D_i'$ respectively,
then $\P$ is locally finite, as the nesting disks become smaller and smaller.
The common exterior of hemispheres above the initial disks $D_i$ and $D_i'$
is a fundamental domain for $\Gamma$ in the upper half-space $\mathbb H^3$, and hence
$\Gamma$ is geometrically finite.
Since $\P$ contains no infinite bouquet of tangent circles,
Theorem \ref{m1gf} applies to $\P$; for instance,
we can count circles in the picture in Fig. \ref{f7} (\cite[Fig. 4.11]{MumfordSeriesWright}).
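As a numerical illustration of this nesting (the disks below are chosen ad hoc and are unrelated to Fig. \ref{f7}), one can realize the pairing maps explicitly: the M\"obius transformation $z\mapsto c'+rr'/(z-c)$ maps the exterior of the disk $D(c,r)$ onto the interior of $D(c',r')$. The following Python sketch applies a reduced word in two such generators to one of the initial circles and prints the rapidly shrinking radii, in accordance with the local finiteness of $\P$.
\begin{verbatim}
import cmath

def mobius(M, z):
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)

def pairing(c1, r1, c2, r2):
    """Matrix of z -> c2 + r1*r2/(z - c1), which maps the exterior
    of the disk D(c1, r1) onto the interior of D(c2, r2)."""
    return ((c2, r1 * r2 - c1 * c2), (1, -c1))

def circumcenter(p, q, r):
    num = abs(p)**2 * (q - r) + abs(q)**2 * (r - p) + abs(r)**2 * (p - q)
    den = (p.conjugate() * (q - r) + q.conjugate() * (r - p)
           + r.conjugate() * (p - q))
    return num / den

def image_circle(M, center, radius):
    """Image of a circle under a Mobius map, recovered from three
    boundary points (valid when the image is a bounded circle)."""
    pts = [mobius(M, center + radius * cmath.exp(2j * cmath.pi * k / 3))
           for k in range(3)]
    c = circumcenter(*pts)
    return c, abs(c - pts[0])

A = pairing(2, 1, -2, 1)      # pairs D_1 = D(2,1) with D_1' = D(-2,1)
B = pairing(2j, 1, -2j, 1)    # pairs D_2 = D(2i,1) with D_2' = D(-2i,1)
c, r = 2 + 0j, 1.0            # the circle C_1 bounding D_1
for M in (B, A, B, A, B):     # apply a reduced word letter by letter
    c, r = image_circle(M, c, r)
    print(f"center {c:.5f}, radius {r:.3e}")
\end{verbatim}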
\subsection*{On the structure of the proof}
In \cite{KontorovichOh}, the counting problem for a bounded Apollonian circle packing
was related to the equidistribution of expanding closed horospheres
on the hyperbolic $3$-manifold $\Gamma\backslash \mathbb H^3$. For a general circle packing,
there is no analogue of the Descartes circle theorem which made such a relation possible.
The main idea in our paper
is to relate the counting problem for a general circle packing $\P$ invariant
under $\Gamma$ with
the equidistribution of orthogonal translates of a closed totally geodesic surface
in $\op{T}^1(\Gamma\backslash \mathbb H^3)$.
Let $C_0$ denote the unit circle centered at the origin and $H$ the stabilizer of $C_0$ in $\operatorname{PSL}_2(\c)$.
Thus $H\backslash G$ may be considered as the space of totally geodesic planes of $\mathbb H^3$.
The important starting point is to describe a certain subset $B_T(E)$ in
$H\backslash G$ so that
the number of circles in the packing $\P:=\Gamma(C_0)$ of curvature at most $T$ intersecting $E$
can be interpreted as the number of
points in $B_T(E)$ of a discrete $\Gamma$-orbit on $H\backslash G$.
We then describe the weighted limiting distribution of orthogonal
translates of an $H$-period $(H\cap \Gamma)\backslash H$ (which
corresponds to a properly immersed hyperbolic surface which may be of infinite area)
along these sets $B_T(E)$ in terms of the Burger-Roblin measure (Theorem \ref{mtt}) using the main result in \cite{OhShahGFH}
(see Thm. \ref{os}).
To translate the weighted limiting distribution result into the asymptotic
for $N_T(\P, E)$, we
relate the density of the Burger-Roblin measure of the contracting horosphere $H_\infty^-(j)$
with the measure $\omega_\Gamma$.
A version of Theorem~\ref{m1gf} in a weaker form, and some of its applications
stated above were announced in \cite{OhICM}. We remark that the
methods of this paper can be easily generalized to prove a similar result
for a sphere packing in the $n$-dimensional Euclidean space invariant
under a non-elementary discrete subgroup of $\operatorname{Isom}(\mathbb H^{n+1})$.
\subsection*{Acknowledgment} We would like to thank Curt McMullen
for inspiring discussions. The applicability of our other paper \cite{OhShahGFH}
to the question addressed in this paper came
up in a conversation
of the first-named author with him during her one-month visit to Harvard in October 2009.
She thanks the Harvard mathematics department for the hospitality.
We would also like to thank Yves Benoist for helpful discussions.
\section{Expansion of a hyperbolic surface by orthogonal geodesic flow}
\label{sectionh}
We use the following coordinates for the upper half space model of $\mathbb H^3$:
$$\mathbb H^3=\{z+rj=(z,r) :z\in \c, r>0\}$$
where $j=(0,1)$.
The isometric action of $G=\operatorname{PSL}_2(\c)$, via the Poincar\'e extension of
the linear fractional transformations, is given explicitly as follows (cf. \cite{ElstrodtGrunewaldMennickebook}):
\begin{equation}\label{EGM}
\begin{pmatrix} a & b\\ c& d\end{pmatrix} (z+rj)=\frac{(az+b)(\bar c\bar z +\bar d)+a\bar c r^2}{|cz+d|^2+|c|^2r^2}
+\frac{r}{|cz+d|^2+|c|^2r^2} \;j .\end{equation}
In particular, the stabilizer of $j$ is the following maximal compact subgroup of $G$:
$$K:=\operatorname{PSU}(2)=\{\begin{pmatrix} a & b\\ -\bar b & \bar a \end{pmatrix}: |a|^2+|b|^2=1 \} .$$
We set
$$ A:=\{a_t:=\begin{pmatrix} e^{t/2} & 0\\ 0& e^{-t/2}\end{pmatrix}:t\in \mathbb{R}\},\quad M:=\{ \begin{pmatrix} e^{i\theta} & 0\\ 0& e^{-i\theta}\end{pmatrix} :\theta\in \mathbb{R} \} $$
and $$
N:=\{n_z :=\begin{pmatrix} 1 & z\\ 0& 1\end{pmatrix}:z\in \c\} ,\quad
N^-=\{n_{z}^-:=\begin{pmatrix} 1 & 0\\ z& 1\end{pmatrix}:z\in \c\} .$$
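As a quick numerical check of \eqref{EGM} (an illustration only, with arbitrarily chosen parameters), the following Python sketch verifies that elements of $K$ fix $j=(0,1)$ and that $a_t$ translates $j$ along the vertical geodesic to $(0,e^t)$.
\begin{verbatim}
import math

def act(M, z, r):
    """Action of PSL_2(C) on H^3 = {(z, r)} via the displayed formula."""
    (a, b), (c, d) = M
    den = abs(c * z + d)**2 + abs(c)**2 * r**2
    w = ((a * z + b) * (c * z + d).conjugate()
         + a * c.conjugate() * r**2) / den
    return w, r / den

th, t = 0.7, 1.3                            # arbitrary test parameters
k = ((math.cos(th), math.sin(th)),          # an element of PSU(2)
     (-math.sin(th), math.cos(th)))
a_t = ((math.exp(t / 2), 0.0), (0.0, math.exp(-t / 2)))

print(act(k, 0j, 1.0))      # (0, 1):   j is fixed by K
print(act(a_t, 0j, 1.0))    # (0, e^t): a_t translates j to e^t j
\end{verbatim}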
We can identify $\mathbb H^3$ with $G/K$ via the map $g(j)\mapsto gK$.
Denoting by $X_0\in \op{T}^1(\mathbb H^3)$ the upward unit normal vector based at $j$,
we can also identify the unit tangent bundle $\op{T}^1(\mathbb H^3)$ with $G.X_0=G/M$:
here $g.X_0$ is given by $d\lambda(g) (X_0)$ where $\lambda(g): G/K\to G/K$ is the left translation $\lambda(g)(g'K)=
gg'K$ and $d\lambda(g)$ is its derivative at $j$.
The geodesic flow $\{g^t\}$ on
$\op{T}^1(\mathbb H^3)$ corresponds to the right translation by $a_t$ on $G/M$:
$$g^t(gM)=ga_tM .$$
For a circle $C$ in $\c$, denote by $\hat C$ its convex hull, which is the northern hemisphere above $C$.
Set $C_0$ to be the unit circle in $\c$ centered at the origin.
The set-wise stabilizer of $\hat C_0$ in $G$ is given by
$$H=\operatorname{PSU}(1,1) \cup \begin{pmatrix} 0&1 \\ -1 & 0\end{pmatrix}\operatorname{PSU}(1,1)$$
where
$$\operatorname{PSU}
(1,1)=\{ \begin{pmatrix} a&b \\ \bar b &\bar a\end{pmatrix} : |a|^2-|b|^2=1 \} .$$
Note that $H$ is equal to the stabilizer of $C_0$ in $G$ and hence
$\hat C_0$ can be identified with $H/H\cap K$.
We have the following generalized Cartan decomposition (cf. \cite{Schlichtkrull1984}):
for $A^+=\{a_t: t\ge 0\}$,
$$G=HA^+K $$
in the sense that every element $g\in G$ can be written as $g=hak$, $h\in H, a\in A^+, k\in K$
and $h_1a_1k_1=h_2a_2k_2$ implies that $a_1=a_2$, $h_1= h_2 m$ and $k_1= m^{-1} k_2$ for
some $m\in H\cap K\cap Z_G(A)=M$.
As $X_0$ is orthogonal to the tangent space $\op{T}_{j}(\hat C_0)$,
$H.X_0=H/M$ corresponds to the set of unit normal vectors
to $\hat C_0$, which we will denote by $C_0^\dagger$. Note that $C_0^\dagger$
has two connected
components, depending on their directions.
For $t\in \mathbb{R}$, the set $g^t( C_0^\dagger)=(H/M)a_t=(Ha_tM)/M$
corresponds to a union of two surfaces consisting of the orthogonal translates of $\hat C_0$ by distance $|t|$ in each direction, both having the same boundary $C_0$.
\bigskip
Let $\Gamma<G$ be a non-elementary discrete subgroup.
As in the introduction, let
$\{\nu_x:x\in \mathbb H^3 \}$ be a $\Gamma$-invariant conformal density on $\hat \c$ of dimension $\delta_\Gamma$,
that is, each $\nu_x$ is a finite measure on $\hat \c$
satisfying that
for any $x,y\in \mathbb H^3$, $z\in \hat \c$ and $\gamma\in
\Gamma$,
$$\gamma_*\nu_x=\nu_{\gamma x};\quad\text{and}\quad
\frac{d\nu_y}{d\nu_x}(z)=e^{-\delta_\Gamma \beta_{z} (y,x)}. $$
Here $\gamma_*\nu_x(R)=\nu_x(\gamma^{-1}(R))$ for a Borel subset $R\subset \hat{\c}$
and the Busemann function $\beta_z(y_1, y_2)$ is given by
$\lim_{t\to\infty} d(y_1, \xi_t)-d(y_2,\xi_t)$ for a geodesic ray
$\xi_t$ toward $z$.
For $u\in \op{T}^1(\mathbb H^3)$, we define $u^{+}\in \hat \c$
(resp. $u^-\in \hat \c$) to be the
forward (resp. backward) end point of the geodesic determined by $u$ and $\pi(u)\in \mathbb H^3$ to be the basepoint. Fixing $o\in \mathbb H^3$, the map $u\mapsto (u^+, u^-,
t:=\beta_{u^-}(\pi(u), o))$ is
a homeomorphism between $\op{T}^1(\mathbb H^3)$ and $(\hat \c\times \hat \c - \{(\xi,\xi):\xi\in
\hat \c\}) \times \mathbb{R}$.
\begin{Def}\label{bms} \rm The Bowen-Margulis-Sullivan measure $m^{\operatorname{BMS}}_\Gamma$ associated to $\{\nu_x\}$
(\cite{Bowen1971}, \cite{Margulisthesis}, \cite{Sullivan1984}) is the measure on $\op{T}^1(\Gamma\backslash \mathbb H^3)$
induced by the following
$\Gamma$-invariant measure on $\op{T}^1(\mathbb H^3)$: for $x\in \mathbb H^3$,
$$d \tilde m^{\operatorname{BMS}}(u)=e^{\delta_\Gamma \beta_{u^+}(x, \pi(u))}\;
e^{\delta_\Gamma \beta_{u^-}(x,\pi(u)) }\;
d\nu_{x}(u^+) d\nu_{x}(u^-) dt .$$
\end{Def}
By the conformal properties of $\{\nu_x\}$,
this definition is independent of the choice of $x\in \mathbb H^3$.
We also denote by $\{m_x:x\in \mathbb H^3\}$ a
$G$-invariant conformal
density of dimension $2$, which is unique up to homothety:
each $m_x$ is a finite measure on $\hat\c$ which is invariant under
$\operatorname{Stab}_G(x)$ and $d m_x(z) =e^{-2 \beta_z(y,x)} dm_y(z)$ for any $x,y\in \mathbb H^3$ and $z\in \hat \c$.
\begin{Def}\label{BR} {\rm The Burger-Roblin measure $m^{\operatorname{BR}}_\Gamma$
associated to $\{\nu_x\}$ and $\{m_x\}$ (\cite{Burger1990}, \cite{Roblin2003})
is the measure on $\op{T}^1(\Gamma\backslash \mathbb H^3)$ induced by the following $\Gamma$-invariant measure on $\op{T}^1(\mathbb H^3)$:
$$d \tilde m^{\operatorname{BR}}(u)=e^{2\beta_{u^+}(x, \pi(u))}\;
e^{\delta_\Gamma \beta_{u^-}(x,\pi(u)) }\;
dm_x(u^+) d\nu_x(u^-) dt $$
for $x\in \mathbb H^3$.
By the conformal properties of $\{\nu_x\}$ and $\{m_x\}$,
this definition is independent of the choice of $x\in \mathbb H}\newcommand{\LG}{\Lambda_\G^3$.
} \end{Def}
For any circle $C$, let
\[
H_C=\{g\in G:gC=C\}=\{g\in G: gC^\dag=C^\dag\}.
\]
We consider the following two measures on $C^\dag$: Fix any $x\in \mathbb H^3$, and let
\begin{equation}\label{hars}
d\mu^{\operatorname{Leb}}_{C^\dag} (s):=e^{2\beta_{s^+}(x,\pi(s))}dm_x(s) \text{ and }
d\mu^{\operatorname{PS}}_{C^\dag} (s):=e^{\delta_\Gamma \beta_{s^+}(x,\pi(s))} d\nu_x(s^+) .
\end{equation}
These definitions are independent of the choice of $x$ and
$\mu^{\operatorname{Leb}}_{C^\dag}$ (resp. $\mu^{\operatorname{PS}}_{C^\dag}$) is left-invariant by $H_C$ (resp. $H_C\cap \Gamma$).
Hence we may consider the measures
$\mu^{\operatorname{Leb}}_{C^\dag}$ and $\mu^{\operatorname{PS}}_{C^\dag}$ on the quotient $(H_C\cap \Gamma)\backslash C^\dag$.
We denote by $\operatorname{sk}_\Gamma(C)$ the total mass of $\mu^{\operatorname{PS}}_{C^\dag}$; that is,
$$\operatorname{sk}_\Gamma(C):=\int_{s\in (\Gamma\cap H_C)\backslash C^\dag} e^{\delta_\Gamma \beta_{s^+}(x,\pi(s))} d\nu_x(s^+) .$$
In general, $\operatorname{sk}_\Gamma(C)$ may be zero or infinite.
\begin{Thm}[{\cite[Theorem~1.9]{OhShahGFH}}] \label{os}
Suppose
that the natural projection map $\Gamma\cap H_C \backslash \hat C \to \Gamma\backslash \mathbb H^3$ is proper.
Assume that
$|m_\Gamma^{\operatorname{BMS}}|<\infty$ and
$\operatorname{sk}_\Gamma(C)<\infty$. Then for any $\psi\in C_c(\Gamma\backslash G/M)$,
as $t\to \infty$,
$$e^{(2-\delta_\Gamma)t}\int_{s\in (\Gamma\cap H_C)\backslash C^\dagger}\psi (sa_t) d\mu^{\operatorname{Leb}}_{C^\dag} (s)
\sim\frac{\operatorname{sk}_\Gamma(C) }{|m^{\operatorname{BMS}}_\Gamma|}
m^{\operatorname{BR}}_\Gamma(\psi) . $$
Moreover $\operatorname{sk}_\Gamma(C) >0$ if $[\Gamma: H_C\cap \Gamma]=\infty$.
\end{Thm}
Note that if
$|m_\Gamma^{\operatorname{BMS}}|<\infty$, then $\Gamma$ is of divergence type; that is, the Poincar\'e series
of $\Gamma$ diverges at $\delta_\Gamma$. When $\Gamma$ is of divergence type,
the $\Gamma$-invariant conformal density $\{\nu_x\}$
of dimension $\delta_\Gamma$ is unique up to homothety
(see \cite[Remark following Corollary 1.8]{Roblin2003}):
explicitly $\nu_x$ can be taken as the weak-limit as $s\to \delta_\Gamma^+$
of the family of measures
$$\nu_{x}(s):=\frac{1}{\sum_{\gamma\in \Gamma} e^{-sd(j, \gamma j)}}
\sum_{\gamma\in\Gamma} e^{-sd(x, \gamma j)} \delta_{\gamma j} .$$
Recall that a non-trivial element $g\in \operatorname{PSL}_2(\c)$ is parabolic
if and only if $g$ has a unique fixed point in $\hat \c$.
\begin{Thm}[{\cite[Theorem~5.2]{OhShahGFH}}] \label{bou} Let $\Gamma$ be geometrically finite.
Suppose
that the natural projection map $\Gamma\cap H_C \backslash \hat C \to \Gamma\backslash \mathbb H^3$ is proper.
Then $\operatorname{sk}_\Gamma(C)<\infty$ if and only if either $\delta_\Gamma > 1$ or
any parabolic fixed point of $\Gamma$ lying on $C$ is
fixed by a parabolic element of $H_C\cap \Gamma$.
\end{Thm}
\begin{proof}
Note that in the notation of
{\cite[Theorem~5.2]{OhShahGFH}},
if we put $E=\hat C$, which is a complete totally geodesic submanifold of $\mathbb H^3$ of
codimension $1$,
then $\partial(\pi(\tilde E))=C$, $\tilde E=C^\dag$, $\Gamma_{\tilde E}=H_C\cap \Gamma$,
and $\lvert \mu_E^{\operatorname{PS}}\rvert=\operatorname{sk}_{\Gamma}(C)$. Hence the conclusion is immediate.
\end{proof}
\section{Reformulation into the orbital counting problem on the space of hyperbolic planes}
Let $G=\operatorname{PSL}_2(\c)$ and
$\Gamma<G$ be a non-elementary discrete subgroup. Let $C$ be a circle in $\hat \c$
and $H_C$ denote the set-wise stabilizer of $C$ in $G$.
The following is clear:
\begin{Lem}\label{infinite}
If $\Gamma(C)$ is infinite, then $[\Gamma:H_C\cap \Gamma]=\infty$.
\end{Lem}
\begin{Lem}\label{locfinite}\label{lc}
The following are equivalent:
\begin{enumerate}
\item
A circle packing $\Gamma(C)$ is locally finite;
\item
the natural projection map $f:\Gamma\cap H_C \backslash \hat C \to \Gamma\backslash \mathbb H^3$ is proper;
\item
$H_C\backslash H_C\Gamma $ is discrete in $H_C\backslash G$.
\end{enumerate}
\end{Lem}
\begin{proof}
We observe that the properness of $f$ is equivalent to the condition that
only finitely many distinct hemispheres in $\Gamma(\hat C)$ intersect a given compact subset of $\mathbb H^3$.
Note that any compact subset of $\mathbb H^3$ is contained in a compact subset of
the form $E\times [r_1, r_2]=\{(z,r): z\in E, \, r_1\le r\le r_2\}$ for $E\subset \c$
compact and $0< r_1 < r_2<\infty$, and that the radius of a circle in $\c$ is the same as
the height of its convex hull in $\mathbb H^3$. Hence the properness of the map $f$ is again
equivalent to the condition that for any $r>0$ and any compact subset $E\subset \c$,
there are only finitely many distinct circles in $\Gamma(C)$
intersecting $E$ and of radii at least $r$, that is, $\Gamma(C)$ being locally finite,
proving the equivalence of (1) and (2).
It is straightforward to verify that the properness of $f$ and that of the projection
map $\Gamma\cap H_C\backslash C^\dag\to \Gamma\backslash \op{T}^1(\mathbb H^3)$ are equivalent.
Let $X_C\in C^\dag$ be such that $X_C^+=\infty\in \hat\c$.
Let $M_C=\{g\in G: gX_C=X_C\}$. Since $\hat{C}$ is the unique totally geodesic
submanifold of $\mathbb H^3$ orthogonal to $X_C$, $M_C$ is contained in $ H_C$.
We identify $G/M_C$ with $\op{T}^1(\mathbb H^3)$ via $gM_C\mapsto gX_C$. Since $H_C/M_C$
identifies with $C^\dag$, the canonical map $\Gamma\cap H_C\backslash H_C/M_C\to \Gamma\backslash G/M_C$
is proper. Since $M_C$ is compact, it follows
that $\Gamma\cap H_C\backslash H_C\to \Gamma\backslash G$ is proper. Equivalently $\Gamma H_C$ is closed in $G$ (see \cite{OhShahGFH} for the equivalence).
As $\Gamma$ is countable,
this is again equivalent to the discreteness of $H_C\backslash H_C\Gamma$ in $H_C\backslash G$. This proves the
equivalence of (2) and (3).
\end{proof}
\begin{Rmk}\rm If $\Gamma\cap H_C$ is a lattice in $H_C$, then $\Gamma H_C$ is closed in $G$ (\cite[\S1]{msrbook}), and hence
$\Gamma(C)$ is a locally finite circle packing. In this case, by \cite[Theorem~1.11]{OhShahGFH}, we have
$\operatorname{sk}_\Gamma(C)<\infty$.
\end{Rmk}
\begin{Prop}\label{pf} Let $\xi\in C$ be a parabolic fixed point of $\Gamma$.
Suppose that $\Gamma(C)$ does not contain an infinite bouquet of tangent circles glued at $\xi$.
Then $\xi$ is a parabolic fixed point for $H_C\cap \Gamma$.
\end{Prop}
\begin{proof} Suppose that there exists a parabolic element $\gamma\in \Gamma - H_C$
fixing $\xi\in\c$. By sending $\xi$ to $\infty\in\hat\c$ by an element of $G$,
we may assume that $\xi=\infty$ and
$\gamma$ acts as a translation on $\c$; since $\xi\in C$, the circle $C$ is then a line.
Since $\gamma\notin H_C$, we have $\gamma C\neq C$, and hence
$\{\gamma^{k}C:k\in\mathbb{Z}\}$ is an infinite collection of distinct parallel lines.
By sending $\infty$ back to the original $\xi$, we see that $\{\gamma^{k}C:k\in\mathbb{Z}\}$ is an infinite bouquet of tangent circles glued at $\xi$, which is contained in $\Gamma(C)$, contradicting the hypothesis.
Therefore every parabolic element of $\Gamma$ fixing $\xi$ lies in $H_C\cap\Gamma$; as $\xi$ is a parabolic fixed point of $\Gamma$, such an element exists, and hence $\xi$ is a parabolic fixed point for $H_C\cap \Gamma$.
\end{proof}
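The last step can also be seen numerically (an aside, with the arbitrary normalization $\xi=0$): under the inversion $z\mapsto 1/z$, the parallel lines $\operatorname{Re}(z)=k$, $k\in\mathbb{Z}\setminus\{0\}$, become circles of radius $1/(2|k|)$ mutually tangent at $0$, the families $k>0$ and $k<0$ forming the two collections of an infinite bouquet. A minimal Python sketch:
\begin{verbatim}
# Images of the parallel lines Re(z) = k under z -> 1/z: the line
# Re(z) = k maps to the circle with center 1/(2k) and radius
# 1/(2|k|); all are tangent at 0 and the radii tend to 0.
for k in (1, 2, 3, -1, -2, -3):
    center, radius = 1 / (2 * k), 1 / (2 * abs(k))
    print(f"k = {k:+d}: center {center:+.4f}, radius {radius:.4f}")
\end{verbatim}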
\subsection{Deduction of Theorem~\ref{m1gf} from Theorem~\ref{m1}}
Since $\Gamma$ is geometrically finite, $|m^{\operatorname{BMS}}_\Gamma|<\infty$ by Sullivan's theorem, so we only need to ensure that $\operatorname{sk}_{\Gamma}(\P)<\infty$, or equivalently (as $\P$ consists of finitely many $\Gamma$-orbits) that $\operatorname{sk}_\Gamma(C)<\infty$ for each $C\in\P$; note that the properness hypothesis of Theorem~\ref{bou} holds by Lemma~\ref{lc}, since $\P$ is locally finite.
If $\delta_\Gamma>1$, Theorem~\ref{bou} gives $\operatorname{sk}_\Gamma(C)<\infty$ directly. If $\delta_\Gamma\le 1$ and $\xi\in C$ is any parabolic fixed point of $\Gamma$, then the no-bouquet assumption of Theorem~\ref{m1gf} together with Proposition~\ref{pf} shows that $\xi$ is a parabolic fixed point for $H_C\cap \Gamma$; therefore Theorem~\ref{bou} again yields $\operatorname{sk}_\Gamma(C)<\infty$.
\qed
\subsection{Relating counting on a single $\Gamma$-orbit to a set $B_T(E)\subset H\backslash G$}
In the rest of this section, let $C_0$ denote the unit circle in $\c$ centered
at the origin
and let $H:=\operatorname{Stab}(\hat C_0)$. We follow notations from Section \ref{sectionh}.
We assume that
$\Gamma(C_0)$ is a locally finite circle packing of $\c$.
Let $E$ be a bounded subset in $\c$
and set
$$N_T(\Gamma(C_0), E):=\#\{C\in \Gamma(C_0): C\cap E\ne \emptyset,\quad \operatorname{Curv}(C)<T\} .$$
For $s>0$, we set
$$A^+_s:=\{a_t: 0\le t\le s\};\quad A^-_s:=\{a_{-t}: 0\le t\le s\} .$$
For a subset $E\subset \c$, we set $N_E:=\{n_z:z\in E\}$.
\begin{Def}[Definition of $B_T(E)$]
{\rm For $E\subset \c$ and $T>1$, we define the subset $B_T(E)$ of $H\backslash G$ to be the image of
the set $$KA^+_{\log T}N_{-E}=\{ ka_t n_{-z}\in G: k\in K, 0\le t< \log T, z\in E\}$$
under the canonical projection $G\to H\backslash G$.}
\end{Def}
For a bounded circle $C$ in $\c$, $C^\circ$ denotes the open disk enclosed by $C$.
We will not need this definition for a line since
there can be only finitely many lines intersecting a fixed bounded
subset in a locally finite
circle packing.
\begin{Def}\label{adm}{\rm For a given circle packing $\P$,
a bounded subset $E\subset \c$ is said to be \emph{$\mathcal P$-admissible}\/ if,
for any bounded circle $C\in \mathcal P$, $C^\circ\cap E\ne \emptyset$ implies
$C^\circ\subset E$, possibly except for finitely many circles.
}\end{Def}
The following translation of $N_T(\Gamma (C_0), E)$ as
the number of points in $[e]\Gamma \cap B_T(E)$, where $[e]=H\in H\backslash G$,
is crucial in our approach:
\begin{Prop}\label{tran} If $E$ is $\Gamma(C_0)$-admissible,
there exists $m_0\in \mathbb N$ such that for all $T\gg 1$,
$$ \# [e]\Gamma\cap B_T(E) - m_0 \le N_T(\Gamma (C_0), E)\le \# [e]\Gamma\cap B_T(E) + m_0.$$
\end{Prop}
\begin{proof}
Observe that
\begin{align*}
\# [e]\Gamma\cap B_T(E)&=\#\{[\gamma]\in \Gamma\cap H\backslash \Gamma: H\gamma\cap KA_{\log T}^+N_{-E}\ne\emptyset \} \\
&=\# \{[\gamma]\in \Gamma/\Gamma\cap H: \gamma HK\cap N_E A_{\log T}^-K\ne \emptyset\} \\ &=\#\{\gamma (\hat C_0): \gamma HK\cap N_EA_{\log T}^-K\ne \emptyset\} \end{align*}
where the second equality is obtained by taking the inverse.
Since
\begin{align*}N_EA_{\log T}^- j &= \{(z,r)\in \mathbb H^3:
T^{-1} < r\le 1, z\in E\}
\end{align*}
and $K$ is the stabilizer of $j$ in $G$,
it follows that $$\# [e]\Gamma\cap B_T(E)=
\#\{\gamma (\hat C_0): \gamma (\hat C_0)\text{ contains $(z,r)$ with } z\in E,\;\; T^{-1}< r\le 1\}.
$$
By the admissibility assumption on $E$,
we observe that
$\gamma(\hat C_0)$ contains $(z,r)$ with $z\in E$ and $T^{-1}< r\le 1$
if and only if the center of $\gamma (C_0)$ lies in $E$ and
the radius of $\gamma( C_0)$ is greater than $T^{-1}$, possibly except for finitely many (say, $m_0$) circles.
\end{proof}
\section{Uniform distribution along the family $B_T(E)$ and the Burger-Roblin measure}\label{sbr}
We keep the notations $C_0, H, K, M, A^+, X_0, G,
$\{m_x:x\in \mathbb H^3\}$, etc., from section \ref{sectionh}.
Denoting by $dm$ the probability invariant measure on $M$,
\begin{equation} \label{harh} dh=d\mu^{\operatorname{Leb}}_{C_0^\dag}(s)dm\end{equation} is a Haar measure on $H\cong C_0^\dag \times
M$ as $\mu^{\operatorname{Leb}}_{C_0^\dag}$ is $H$-invariant, and
the following defines a Haar measure on $G$: for $g=ha_rk\in HA^+K$,
\begin{equation}\label{harg} dg=4\sinh r \cdot \cosh r \; d h dr dm_{j}(k)\end{equation}
where $dm_{j}(k):=dm_{j}(k.X_0^+)$.
We denote by $d\lambda$ the unique measure on $H\backslash G$ which is compatible with the choice
of $dg$ and $dh$: for $\psi\in C_c(G)$,
$$\int_G\psi \; dg=\int_{[g]\in H\backslash G}\int_{h\in H}\psi(h[g])\;dhd\lambda[g] .$$
For a bounded set $E\subset \c$,
recall that the set $B_T(E)$ in $H\backslash G$ is the image of
the set $$KA^+_{\log T}N_{-E}=\{ ka_t n_{-z}\in G: k\in K, 0\le t<\log T, z\in E\}$$
under the canonical projection $G\to H\backslash G$.
The goal of this section is to deduce the following Theorem \ref{mtt} from Theorem \ref{os}:
\begin{Thm}\label{mtt}
Let $\Gamma$ be a non-elementary discrete subgroup of $G$. Suppose that
$|m_\Gamma^{\operatorname{BMS}}| <\infty$ and $\operatorname{sk}_\Gamma(C_0)<\infty$.
Suppose that the natural projection map $\Gamma\cap H\backslash \hat C_0 \to \Gamma\backslash \mathbb H^3$ is proper.
Then for any bounded Borel subset $E\subset \c$ and for any $\psi\in C_c(\Gamma\backslash G)$, we have
$$\lim_{T\to \infty}\frac{1}{T^{\delta_\Gamma}}\int_{g\in B_T(E)}\int_{h\in \Gamma\cap H\backslash H}\psi(hg)dhd\lambda(g)
= \frac{\operatorname{sk}_\Gamma(C_0) }{\delta_\Gamma \cdot |m^{\operatorname{BMS}}|}\cdot
\int_{n\in N_{-E}} m^{\operatorname{BR}}_\Gamma(\psi_n )\; dn $$
where $\psi_n\in C_c(\Gamma\backslash G)^M$ is given by
$\psi_n(g)=\int_{m\in M}\psi(gmn)dm$ and $dn$ is the Lebesgue measure on $N$.
\end{Thm}
In order to prove this result using Theorem~\ref{os}, it is crucial to understand the shape of the set $B_T(E)$
in the $HA^+K$ decomposition of $G$. This is one of the important technical steps in the proof.
\vskip 5pt
\noindent{\bf{On the shape of $B_T(E)$}:}
Fix a left-invariant metric on $G$. For $\epsilon>0$, let $U_\epsilon$ be the $\epsilon$-ball around $e$ in $G$.
For a subset $W$ of $G$, we set $W_\epsilon=W\cap U_\epsilon$.
\begin{Prop}\label{ksmall}
\begin{enumerate}\item If $a_t\in HKa_sK$ for $s>0$, then $|t|\le s.$
\item Given any $\epsilon>0$, there exists $T_0=T_0(\epsilon)$ such that
$$\{k\in K: a_t k\in HKA^+\text{ for some $t>T_0$}\}\subset K_{\epsilon}M.$$
\end{enumerate}
\end{Prop}
\begin{proof}
Suppose $a_t=hk_1a_sk_2$ for $h\in H, k_1,k_2\in K$.
We note that, as $Aj$ is orthogonal to $\hat C_0$ and $j\in \hat C_0$,
\begin{align*}| t|&= d(\hat C_0, a_t j)=d(\hat C_0, hk_1a_s j)\\
&=d(\hat C_0, k_1a_sj)\le d(j, k_1a_s j)=d(j, a_sj)=s,\end{align*}
proving the first claim.
For the second claim,
suppose $a_tk\in HKa_s$ for some $s\ge 0$. Then
$ka_{-s}\in a_{-t}HK$.
Applying both sides to $j\in \mathbb H^3$,
$k(e^{-s}j)\in a_{-t}\hat C_0$.
Now $a_{-t}\hat C_0=e^{-t} \hat C_0$ is the northern
hemisphere of Euclidean radius $e^{-t}$ about $0$ in $\mathbb H^3$.
On the other hand $A^-j=(0,1]j$ for $A^-=\{a_{-s}: s\ge 0\}$ and
$K_{\epsilon}\{(0,1]j\}$
consists of geodesic rays in $\mathbb H^3$ joining $j$ and points
of $K_{\epsilon} ( 0) \subset \c$. Now $K_{\epsilon}( 0)$
contains a disk of radius, say $r_{\epsilon}>0$, centered at $0$ in $\c$,
and hence $K_{\epsilon}\{(0,1]j\}$ contains a Euclidean half ball of radius
$r_{\epsilon}>0$ centered at $0$ in $\mathbb H^3$.
Therefore for $t>T_0(\epsilon):=-\log(r_\epsilon)$,
$k(e^{-s}j) \in a_{-t}\hat C_0$ implies that $k(e^{-s}j) \in K_{\epsilon}\{(0,1]j\}$,
in other words, $ka_{-s} K \subset K_{\epsilon} A^-K$. By the uniqueness of
the left $K$-component, modulo the right multiplication by $M$,
in the decomposition $G=KA^-K$, it follows that $k\in K_{\epsilon}M$, proving the second claim.
\end{proof}
For $t\in \mathbb{R}$ and $T>1$,
set $$K_T(t):=\{k\in K: a_tk\in HKA^+_{\log T} \}.$$
As a consequence of Proposition~\ref{ksmall}, we have the following.
\begin{Cor}\label{property}
\begin{enumerate}
\item For all $0\le t< \log T$, $e\in K_T(t)$.
\item For all $t>\log T$, $K_T(t)=\emptyset$.
\item For any $\epsilon>0$, there exists $T_0(\epsilon) \ge 1$ such that we have
$$K_T(t)\subset K_\epsilon M\quad\text{for all $t>T_0(\epsilon)$}. $$
\end{enumerate}
\end{Cor}
Thus for any $T>1$,
\begin{equation}\label{hhkk}
HKA^+_{\log T}=\cup_{ 0\le t<\log T}Ha_tK_T(t).
\end{equation}
Since $B_T(E)=H\backslash HKA^+_{\log T}N_{-E}$,
\eqref{hhkk} together with Corollary \ref{property} shows
that $B_T(E)$ is essentially of the form $H\backslash Ha_{\log T}K_\epsilon M N_{-E}$.
The following proposition shows that
$B_T(E)$ can be basically controlled by the set $H\backslash H a_{\log T} N_{-E}$.
\begin{Prop}\label{uk} Fix a bounded subset $E$ of $\c$.
There exists $\ell=\ell(E) \ge 1$ such that
for all sufficiently small $\epsilon>0$,
$$a_t km n_z \in H_{\ell \epsilon} m a_t n_z U_{\ell \epsilon} $$
holds for any $m\in M$, $t>0$, $z\in E$,
and $k\in K_\epsilon$.
\end{Prop}
\begin{proof} Recalling that $N^-$ denotes the lower triangular subgroup of $G$, we note that
the product map $N^-\times A\times M\times N \to G$ is a diffeomorphism at a neighborhood
of $e$, in particular, bi-Lipschitz. Hence
there exists $\ell_1>1$ such that for all small $\epsilon>0$,
\begin{equation}\label{k}
K_{ \epsilon}\subset N_{\ell_1 \epsilon}^{-} A_{\ell_1 \epsilon} M_{\ell_1 \epsilon} N_{\ell_1\epsilon} .\end{equation}
Similarly due to the $H\times A\times N$ product decomposition of $G_\epsilon$, there exists $\ell_2>1$ such that
\begin{equation}\label{ue}U_\epsilon\subset H_{\ell_2 \epsilon}A_{\ell_2\epsilon}M_{\ell_2 \epsilon}N_{\ell_2 \epsilon}\end{equation}
for all small $\epsilon>0$ (\cite[Lem 2.4]{GorodnikShahOhIsrael}).
We also have $\ell_3>1$ such that
for all small $\epsilon>0$,
\begin{equation}\label{amn}
A_{(\ell_1+\ell_2) \epsilon}M_{\ell_2 \epsilon}N_{(\ell_1+ \ell_2) \epsilon} M_{\ell_1\epsilon}
\subset U_{\ell _3\epsilon}.
\end{equation}
Now let $t>0, k\in K_\epsilon, m\in M, n\in N$.
Then by \eqref{k}, we may write
$$k=n^-_1 b_1 m_1 n_1 \in N_{\ell_1 \epsilon}^{-} A_{\ell_1 \epsilon} M_{\ell_1 \epsilon} N_{\ell_1\epsilon}.$$
Since $a_t n^{-}_1a_{-t}\in N_{\ell_1 \epsilon}^{-}$ for $t>0$, we
have, by \eqref{ue},
$$a_t n^{-}_1a_{-t}= h_2b_2m_2n_2\in H_{\ell_2 \epsilon}A_{\ell_2\epsilon}M_{\ell_2 \epsilon}N_{\ell_2 \epsilon} .$$
Therefore
\begin{align*}
a_tk mn&=(a_t n^-_1 a_{-t})(a_t b_1m_1n_1) m n\\&=
(h_2b_2m_2 n_2) a_tb_1m_1n_1m n\\
&=h_2b_2m_2 (a_tb_1 b_1^{-1}a_{-t}) n_2 a_tb_1m_1n_1m n
\\
&= h_2a_t (b_2m_2)b_1(b_1^{-1} a_{-t} n_2 a_t b_1) m_1n_1 mn\\
&\in h_2 a_t A_{(\ell_1 +\ell_2)\epsilon}M_{\ell_2 \epsilon}N_{ (\ell_1+\ell_2) \epsilon} M_{\ell_1\epsilon} mn
\\ & \subset h_2 a_t U_{\ell_3 \epsilon} m n \quad\text{by \eqref{amn}}.\end{align*}
As $E$ is bounded, there exists $\ell=\ell(E) >\ell_2 $ such that
for all small $\epsilon>0$ and for all $z\in E$,
$$U_{\ell_3 \epsilon} mn_z\subset m n_z U_{\ell\epsilon } .$$
Since $a_t$ commutes with $m$, we obtain for all $z\in E$ that
$$a_t kmn_z\in H_{\ell\epsilon} m a_t n_z U_{\ell \epsilon} .$$
\end{proof}
\subsection*{Proof of Theorem \ref{mtt}}
Let $\ell=\ell(E)\ge 1$ be as in Proposition \ref{uk}.
For $\psi\in C_c(\Gamma\backslash G)$ and $\epsilon>0$,
we define $\psi_\epsilon^{\pm}\in C_c(\Gamma\backslash G)$,
$$\psi_\epsilon^+(g):=\sup_{u\in U_{\ell \epsilon} } \psi(gu)\quad \text{and }\quad \psi_\epsilon^-(g):=\inf_{u\in U_{\ell \epsilon} }\psi(gu) .$$
For a given $\eta>0$,
there exists $\epsilon=\epsilon(\eta)>0$ such that for all $g\in \Gamma\backslash G$,
$$|\psi_\epsilon^+(g)-\psi_\epsilon^-(g)|\le \eta$$
by the uniform continuity of $\psi$.
On the other hand, by Theorem \ref{os},
we have $T_1(\eta)\gg 1$ such that
for all $t>T_1(\eta)$,
\begin{align}\label{pstar}&\int_{h\in \Gamma\cap H\backslash H}\psi_\epsilon^+(ha_tn) dh \\ \notag &=
\int_{s\in \Gamma\cap H\backslash C_0^\dagger}\int_{m\in M}\psi_\epsilon^+(s a_t mn)dm d\mu^{\operatorname{Leb}}_{C_0^\dag} (s)\\ \notag &=
(1+O(\eta)) \frac{\operatorname{sk}_\Gamma(C_0)}{|m^{\operatorname{BMS}}|} m^{\operatorname{BR}}_\Gamma(\psi_{\epsilon,n}^+) e^{(\delta_\Gamma -2) t}\end{align}
where $\psi_{\epsilon,n}^+(g)=\int_{m\in M} \psi_\epsilon^+(gmn) dm$.
As $N_{-E}$ is relatively compact, the implied constant can be taken uniformly over all $n\in N_{-E}$.
Let $T_0(\epsilon)>T_1(\eta)$ be as in
Proposition \ref{ksmall}.
For $[e]=H\in H\backslash G$ and $s>0$,
set $$V_T(s):=\cup_{s\le t < \log T}[e]a_tK_T(t)N_{-E}$$
so that $$B_T(E)=V_T(s)\cup (B_T(E)- V_T(s)).$$
Setting $$\psi^H(g):= \int_{h\in \Gamma\cap H\backslash H}\psi(hg) dh ,$$
note that $\psi^H$ is left $H$-invariant as $dh$ is a Haar measure.
We will show that $$\limsup_{T\to\infty}
\frac{1}{T^{\delta_\Gamma}} \int_{[g]\in V_T(T_0(\epsilon))} \psi^H(g) d\lambda(g) =
(1+O(\eta)) \frac{\operatorname{sk}_\Gamma({C_0})}{\delta_\Gamma \cdot |m^{\operatorname{BMS}}|}\cdot
\int_{n\in N_{-E}} m^{\operatorname{BR}}_\Gamma(\psi_n ) dn .$$
By Corollary \ref{property}, we have
$$V_T(T_0(\epsilon))\subset \cup_{T_0(\epsilon)\le t\le \log T}[e]a_tK_\epsilon MN_{-E} .$$
Let $[g]\in V_T(T_0(\epsilon))$, so
$[g]=[e]a_tkmn$ with $T_0(\epsilon) \le t\le \log T$, $k\in K_\epsilon$, $m\in M$ and $n\in N_{-E}$.
By Proposition \ref{uk},
there exist $h_0\in H$ and $ u\in U_{\ell \epsilon }$ such that
$$a_tkmn =h_0m a_t n u$$
so that $[g]=[e]a_t nu$, since $M\subset H$.
We have
$$\psi^H(g)=\int_{h\in \Gamma\cap H\backslash H}\psi(h a_t n u) dh
\le
\int_{h\in \Gamma\cap H\backslash H}\psi_\epsilon^+(ha_t n) dh. $$
The measure $e^{2t}dtdn$ is a right invariant measure on $AN$ and
$[e]AN$ is an open subset in $H\backslash G$. Hence $d\lambda(a_tn)$ (restricted to $[e]AN$)
and $e^{2t}dtdn$
are constant multiples of each other. It follows from the formula of $dg$ that
$d\lambda(a_tn)=e^{2t}dtdn$.
Therefore
$$\int_{[g]\in V_T(T_0(\epsilon))}\psi^H(g) d\lambda(g) \le
\int_{n\in N_{-E}} \int_{T_0(\epsilon)<t \le\log T} \int_{h\in \Gamma\cap H\backslash H}
\psi_\epsilon^+(ha_tn)
dh e^{2t} dtdn .$$
By the choice of $\epsilon=\epsilon(\eta)$, we also have
$$m^{\operatorname{BR}}_\Gamma(\psi_{\epsilon, n}^+)=(1+O(\eta))m^{\operatorname{BR}}_\Gamma(\psi_n)$$
where the implied constant depends only on $\psi$.
Hence by \eqref{pstar},
\begin{multline*}
\int_{n\in N_{- E}}\int_{T_0(\epsilon) < t<\log T}\int_{h\in \Gamma\cap H\backslash H}\psi_{\epsilon,n}^+(ha_t) dh e^{2t} dtdn
\\ = (1+O(\eta)) \frac{\operatorname{sk}_{\Gamma}(C_0)}{\delta_\Gamma \cdot |m^{\operatorname{BMS}}|}\cdot
\int_{n\in N_{-E}} m^{\operatorname{BR}}_\Gamma(\psi_n ) dn \cdot
( T^{\delta_\Gamma} -e^{\delta_\Gamma T_0(\epsilon)}).
\end{multline*}
Hence
\begin{multline*}\limsup_T
\frac{1}{T^{\delta_\Gamma}} \int_{[g]\in V_T(T_0(\epsilon))} \psi^H(g) d\lambda(g)
=
(1+O(\eta)) \frac{\operatorname{sk}_{\Gamma}(C_0)}{\delta_\Gamma \cdot |m^{\operatorname{BMS}}|}\cdot
\int_{n\in N_{-E}} m^{\operatorname{BR}}_\Gamma(\psi_n ) dn .\end{multline*}
On the other hand, since $\Gamma\backslash \Gamma H$ is a closed subset of $\Gamma\backslash G$,
so is $\cup_{0\le t\le s} \Gamma\backslash \Gamma H a_t K N_{-\overline E} $ for any fixed $s>0$; in particular,
its intersection with a compact subset of $\Gamma\backslash G$ is compact.
Since
$$\cup_{[g]\in B_T(E)-V_T(s)}\Gamma\backslash \Gamma H g\subset \cup_{0\le t\le s} \Gamma\backslash \Gamma H a_t K N_{-\overline E} ,$$
and $\psi$ has compact support, we have, as $T\to \infty$,
$$ \int_{[g]\in B_T(E)-
V_T(T_0(\epsilon))} \int_{h\in \Gamma\cap H\backslash H}\psi(hg) dh d\lambda(g) =O(1) .$$
Therefore
$$\limsup_T
\frac{1}{T^{\delta_\Gamma}}\int_{[g]\in B_T(E)} \psi^H(g) d\lambda(g)
\le (1+O(\eta)) \frac{\operatorname{sk}_{\Gamma}(C_0)}{\delta_\Gamma \cdot |m^{\operatorname{BMS}}|}\cdot
\int_{n\in N_{-E}} m^{\operatorname{BR}}_\Gamma(\psi_n ) dn .$$
As $\eta>0$ is arbitrary and $\epsilon(\eta)\to 0$ as $\eta\to 0$,
we have
$$\limsup_T
\frac{1}{T^{\delta_\Gamma}}\int_{[g]\in B_T(E)} \psi^H(g) d\lambda(g)
\le \frac{\operatorname{sk}_{\Gamma}(C_0)}{\delta_\Gamma \cdot |m^{\operatorname{BMS}}|}\cdot
\int_{n\in N_{-E}} m^{\operatorname{BR}}_\Gamma(\psi_n ) dn .$$
Similarly we can show that
$$\liminf_T
\frac{1}{T^{\delta_\Gamma}}\int_{[g]\in B_T(E)} \psi^H(g) d\lambda(g)
\ge \frac{\operatorname{sk}_{\Gamma}(C_0)}{\delta_\Gamma \cdot |m^{\operatorname{BMS}}|}\cdot
\int_{n\in N_{-E}} m^{\operatorname{BR}}_\Gamma(\psi_n ) dn .$$
\qed
\section{On the measure $\omega_{\Gamma}$}\label{smeasure}
In this section we will describe a measure $\omega_{\Gamma}$ on $\c$ and show that the term
\[
\int_{n\in N_{-E}} m^{\operatorname{BR}}_\Gamma(\Psi_n )\; dn,
\]
which appears in the asymptotic expression in Theorem~\ref{mtt}, converges to $\omega_{\Gamma}(E)$ as the support of $\Psi$ shrinks to $[e]$ with $\int_{\Gamma\backslash G}\Psi \,dg= 1$.
We keep the notations $ G, K, M, A^+, N, N^- , a_t, n_z, n^-_z$, etc., from section \ref{sectionh}.
Throughout this section,
we assume that $\Gamma$ is a non-elementary discrete subgroup of $G$.
Recall that $\{\nu_x=\nu_{\Gamma, x}:x\in\mathbb H^3\}$ denotes a $\Gamma$-invariant conformal density for $\Gamma$ of dimension $\delta_\Gamma>0$.
\begin{Def}\label{ome} Define a Borel measure $\omega_{\Gamma}$
on $\c$ as follows: for $\psi\in C_c(\c)$,
$$\omega_{\Gamma}(\psi)=\int_{z\in \c}e^{\delta_\Gamma\beta_z(x,z+j)}\psi(z) \,d\nu_{\Gamma, x}(z) $$
for $x\in \mathbb H^3$ and $z+j:=(z,1)\in \mathbb H^3$. \end{Def}
In order to see that the definition of $\omega_\Gamma$ is independent of the choice of $x\in \mathbb H^3$, we observe that for any $x_1,x_2\in \mathbb H^3$ and $z\in \c$,
$$ e^{\delta_\Gamma (\beta_z(x_1, z+j)-\beta_z(x_2, z+j ))}
\frac{d\nu_{x_1}}{d{\nu}_{x_2}}(z)=
e^{\delta_\Gamma \cdot \beta_z(x_1, x_2)}
\frac{d\nu_{x_1}}{d{\nu}_{x_2}}(z)= 1 $$
by the conformality of $\{\nu_{x}\}$.
\begin{Lem}\label{omd} For any $x=p+rj\in \mathbb H^3$ and $\psi\in C_c(\c)$,
$$\omega_\Gamma(\psi)=\int_{z\in \c} (r^{-1}|z-p|^2+r)^{\delta_\Gamma} \psi( z) d\nu_{x}(z) .$$
\end{Lem}
\begin{proof}
It suffices to show that
$$\beta_z( p+rj, z+j)=\log\frac{|z-p|^2+r^2}{r} .$$
We use the fact that the hyperbolic distance $d$ on the upper half space model
of $\mathbb H^3$
satisfies
$$\cosh (d(z_1+r_1j, z_2+r_2j))=\frac{|z_1-z_2|^2+r_1^2+r_2^2}{2r_1r_2}$$
for $z_i+r_i j \in \mathbb H^3$ (cf. \cite{ElstrodtGrunewaldMennickebook}).
Note that \begin{align*}
\beta_z(p+rj, z+j)&=\beta_0(-z+p+rj, j)\\&=
\lim_{t\to \infty} d(-z+p+rj, e^{-t}j)-t
\\ &= \lim_{t\to \infty} d(p+rj, z+e^{-t}j)-t .\end{align*}
Now $$\cosh d(p+rj, z+e^{-t}j)=\frac{e^t(|z-p|^2+r^2)+e^{-t}}{2r}$$
and hence
$$ e^{d(p+rj, z+e^{-t}j)} +e^{-d(p+rj, z+e^{-t}j)}
= \frac{e^t(|z-p|^2+r^2)+e^{-t}}{r}. $$
Therefore as $t\to \infty$,
$$d(p+rj, z+e^{-t}j)\sim t+\log \frac{|z-p|^2+r^2}{r} .$$
Hence $$\beta_z(p+rj, z+j)=\log\frac{|z-p|^2+r^2}{r} .$$
\end{proof}
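The formula is easy to test numerically (a sanity check only; the sample points below are arbitrary): approximating the Busemann limit at a large finite $t$ via the $\cosh$ formula above reproduces $\log((|z-p|^2+r^2)/r)$ to high precision. A Python sketch:
\begin{verbatim}
import math

def dist(z1, r1, z2, r2):
    """Hyperbolic distance between (z1, r1) and (z2, r2) in H^3."""
    ch = (abs(z1 - z2)**2 + r1**2 + r2**2) / (2 * r1 * r2)
    return math.acosh(ch)

def busemann(z, p, r, t=30.0):
    """beta_z(p + rj, z + j), approximated at a finite t by
    d(p + rj, z + e^{-t}j) - d(z + j, z + e^{-t}j)."""
    return dist(p, r, z, math.exp(-t)) - dist(z, 1.0, z, math.exp(-t))

z, p, r = 1.5 - 0.5j, 0.3 + 1.0j, 2.0    # arbitrary sample points
print(busemann(z, p, r))                  # the two outputs agree
print(math.log((abs(z - p)**2 + r**2) / r))
\end{verbatim}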
\begin{Def}{\rm For a function $\psi$ on $\c$ with compact support, define
a function $\mathfrak R_\psi$ on $MAN^-N\subset G$ by
$$\mathfrak R_\psi(m a_t n^-_x n_z)=e^{-\delta_\Gamma t} \psi (-z) $$
for $m\in M, t\in \mathbb{R}, x,z\in \c$.
If $\psi$ is the characteristic function of $E\subset \c$,
we put $\mathfrak R_E=\mathfrak R_{\chi_E}$.
}\end{Def}
Since the product map $M\times A\times N^-\times N\to G$ has a diffeomorphic image,
the above function is well-defined.
\begin{Prop}\label{omd2} For any $\psi\in C_c(\c)$,
$$\omega_\Gamma(\psi)=\int_{k\in K/M}{\mathfrak R}_\psi (k^{-1}) d\nu_{j}(k(0)).$$
\end{Prop}
\begin{proof} If $k\in K$ with $k^{-1}=ma_tn^-_xn_z \in MAN^-N$,
since $M$, $A$ and $N^-$ all fix $0$,
$$k(0)= n_{-z}(0)=-z .$$
We note that $\lim_{s\to \infty} a_{-s}(j)=0$
and compute, using $k(j)=j$,
\begin{align*}
&0=\beta_{-z}(k(j), j)\\& =\beta_{-z}(n_{-z} n^-_{-x} a_{-t} j, j) \\
&=\beta_0( n^-_{-x}a_{-t}j, n_z(j) )\\
& =\lim_{s\to \infty} d( n^-_{-x}a_{-t} j, a_{-s}j) -d(n_z(j), a_{-s} j)\\
&=\lim_{s\to \infty} d((a_{s} n^-_{-x}a_{-s}) a_{s-t} j, j) - d(n_z(j), a_{-s} j) \\
&=\lim_{s\to \infty} d(a_{s-t}j, j) -d(n_z(j), a_{-s} j)\\
&= \lim_{s\to \infty} s-t -d(n_z(j), a_{-s} j)
\end{align*}
and hence
$$-t=\lim_{s\to \infty} d(n_z(j), a_{-s} j) -s=\beta_0( n_z(j), j ) =\beta_{-z}(j, -z+j) .$$
Hence for $k^{-1}\in K\cap MAN^-N$,
$$ {\mathfrak R}_\psi(k^{-1})=e^{\delta_\Gamma \beta_{k(0)}(j, n_{k(0)}(j) )}\psi (k(0)).$$
Since the complement of $NN^-AM/M$ in $K/M$
is a single point and $\nu_j$ is atom-free,
we have
\begin{align*}&\int_{k\in K/M}{\mathfrak R}_\psi(k^{-1}) d\nu_{j}(k(0))\\
&=\int_{k\in (K\cap NN^-AM)/M}{\mathfrak R}_\psi(k^{-1}) d\nu_{j}(k(0)) \\
&=\int_{k\in (K\cap NN^-AM)/M}e^{\delta_\Gamma \beta_{k(0)}( j, k(0)+j)} \psi( k(0)) d\nu_{j}(k(0))
\\
&=\int_{z\in \c}
e^{\delta_\Gamma \beta_{-z} (j,-z+j)}\psi(-z) d\nu_{j}(-z)
\\
&=\int_{z\in \c}
e^{\delta_\Gamma \beta_{z} (j, z+j)}\psi(z) d\nu_{j}(z)=\omega_\Gamma(\psi).
\end{align*}
\end{proof}
\begin{Lem} \label{man} If
$(ma_tn_x^-n_z)(m_1 a_{t_1}n_{x_1}^-n_{z_1})=m_0 a_{t_0}n_{x_0}^-n_{z_0}$ in the
$MAN^-N$ coordinates,
then $$t_0=t+t_1+2\log(|1+e^{-t_1}x_1z'|)$$ for some $z'\in \c$ with $|z|=|z'|$.
\end{Lem}
\begin{proof} Note
that if $m_1=\text{diag}(e^{i\theta_1}, e^{-i\theta_1})$, then
$$a_tn_x^-n_zm_1= m_1 a_t n_{e^{2i\theta_1}x}^- n_{e^{-2i\theta_1} z} .$$
Hence we may assume $m_1=m=e$ without loss of generality.
We use the following simple identity for $z,x\in \c$:
\begin{equation}\label{id} n_zn_x^-=\begin{pmatrix}1+xz &0\\ 0&(1+xz)^{-1}\end{pmatrix} n^-_{x(1+xz)}n_{z(1+xz)^{-1}} .
\end{equation}
Hence we have
\begin{align*}&(a_tn_x^-n_z)( a_{t_1}n_{x_1}^-n_{z_1})
\\ &=
(a_{t+t_1})(a_{t_1}^{-1}n_x^-a_{t_1})(a_{t_1}^{-1}n_za_{t_1}) n_{x_1}^-n_{z_1}\\
&=a_{t+t_1} n_{e^{t_1}x}^- n_{e^{-t_1}z} n_{x_1}^-n_{z_1}\\ &=
a_{t+t_1} n_{e^{t_1}x}^- \begin{pmatrix}1+ e^{-t_1}x_1z &0\\ 0&(1+e^{-t_1}x_1 z)^{-1}\end{pmatrix} n^-_{x_1(1+e^{-t_1}x_1z)} n_{e^{-t_1}z(1+e^{-t_1}zx_1)^{-1}}n_{z_1}
\\ &=m a_{t+t_1+2\log(|1+e^{-t_1}x_1z|)}
n^-_{x_2 } n_{z_2}
\end{align*}
for appropriate $m\in M$ and $x_2, z_2\in \c$.
\end{proof}
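The identity \eqref{id}, and with it the $A$-component computed in the proof, can be verified by direct $2\times 2$ matrix multiplication, as in the following Python sketch (the pair $(z,x)$ is arbitrary).
\begin{verbatim}
def mul(M, N):
    (a, b), (c, d) = M
    (e, f), (g, h) = N
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def n_plus(z):   return ((1, z), (0, 1))    # n_z
def n_minus(x):  return ((1, 0), (x, 1))    # n_x^-
def diag(w):     return ((w, 0), (0, 1 / w))

z, x = 0.7 + 0.2j, -1.1 + 0.5j              # arbitrary test values
w = 1 + x * z
lhs = mul(n_plus(z), n_minus(x))
rhs = mul(diag(w), mul(n_minus(x * w), n_plus(z / w)))
print(lhs)
print(rhs)   # agrees with lhs up to floating-point error
\end{verbatim}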
Let $E\subset \c$ be a bounded subset and $U_\epsilon\subset G$ a symmetric $\epsilon$-neighborhood of $e$ in $G$.
For $\epsilon>0$, set
\begin{equation}\label{e}
E_\epsilon^{+}:=U_\epsilon (E)\quad\text{and}\quad E_\epsilon^{-}:=\cap_{u\in U_\epsilon} u(E).\end{equation}
\begin{Lem}\label{casek}\label{ell} There exists $\ell>0$ such that for all small $\epsilon>0$ and any $g\in U_{\ell \epsilon}$,
$$\int_{k\in K/M} {\mathfrak R}_E(k^{-1}g) d\nu_{j}(k(0)) =
(1+O(\epsilon))\cdot \omega_\Gamma({E_{\epsilon}^\pm})$$
where the implied constant depends only on $E$.
\end{Lem}
\begin{proof}
Write $k^{-1}=ma_tn_x^-n_z$ and $g=m_1a_{t_1}n_{x_1}^-n_{z_1}\in U_{\epsilon}$.
By Lemma \ref{man}, we have
$k^{-1}g =m_0a_{t_0}n^-_{x_0}n_{z_0}$ where
$t_0=
t+t_1+2\log(|1+e^{-t_1}x_1z|)$.
Since
$ {\mathfrak R}_E(k^{-1}g)=e^{-\delta_\Gamma t_0}\chi_{E}(g^{-1}k(0)),
$
we have
\begin{align*}
&\int_{k\in K/M} {\mathfrak R}_E (k^{-1}g) d\nu_{j}(k(0))\\
&=\int_{k(0)\in g(E)} e^{-\delta_\Gamma t_0} d\nu_j(k(0)) \\
&=\int_{k(0)\in g(E)} e^{-\delta_\Gamma t} e^{-\delta_\Gamma(t_1+2\log (|1+e^{-t_1}x_1 z|))} d\nu_j(k(0))\\
&=(1+O(\epsilon)) \int_{k(0)\in E_{\epsilon}^\pm} e^{-\delta_\Gamma t} e^{-\delta_\Gamma(t_1+2\log (|1+e^{-t_1}x_1 z|))} d\nu_j(k(0)).
\end{align*}
Since $t_1=O(\epsilon), x_1=O(\epsilon)$ and $z=-k(0)\in -g(E)\subset -E_\epsilon^+$,
$$t_1+2\log (|1+e^{-t_1}x_1 z|)=O(\epsilon)$$
where the implied constant depends only on $E$.
Hence
\begin{align*}&
\int_{k\in K/M} {\mathfrak R}_E (k^{-1}g) d\nu_{j}(k(0))
\\&=(1+O(\epsilon)) \int_{k(0)\in E_{\epsilon}^\pm} e^{-\delta_\Gamma t} d\nu_j(k(0))\\
&=(1+O(\epsilon)) \int_{k\in K/M} {\mathfrak R}_{E_{\epsilon}^\pm}(k^{-1}) d\nu_{j}(k(0))\\ &=(1+O(\epsilon))\cdot \omega_\Gamma
({E_{\epsilon}^\pm}) .\end{align*}
\end{proof}
For $\epsilon>0$,
let $\psi^\epsilon$ be a non-negative continuous function in $C(G)$ with support in $U_{ \epsilon}$
with integral one and
$\Psi^\epsilon\in C_c(\Gamma\backslash G)$ be the $\Gamma$-average of $\psi^\epsilon$:
$$\Psi^\epsilon(\Gamma g):=\sum_{\gamma\in \Gamma}\psi^\epsilon(\gamma g) .$$
We define $\Psi_E^{\epsilon} \in C_c(\Gamma\backslash G)^M$ by
$$\Psi_{E}^{\epsilon}(g):=\int_{z\in -E}\int_{m\in M}\Psi^\epsilon(gmn_{z}) dmdz .$$
\begin{Lem}\label{ccc} For a bounded Borel subset $E\subset \c$,
there exists $c=c(E)>1$ such that for all small $\epsilon>0$,
$$ (1-c\cdot \epsilon)\cdot
\omega_{\Gamma}( E_\epsilon^-)\le
m^{\operatorname{BR}}_\Gamma(\Psi^\epsilon_{E} ) \le (1+c\cdot \epsilon)\cdot
\omega_{\Gamma}( E_\epsilon^+) . $$
\end{Lem}
\begin{proof} Note that
$N^-$ is the expanding horospherical subgroup for the right action of $a_t$, i.e.,
$N^-=\{g\in G: a_t g a_{-t}\to e\text{ as $t\to \infty$}\}$.
We have for $\psi\in C_c(G)^M$,
$$\tilde m^{\operatorname{BR}}_\Gamma(\psi)=\int_{KAN^-}\psi(ka_t n^-) e^{-\delta_\Gamma t} dn dt d\nu_{j}(k(0)) $$
(cf. \cite[6.2]{OhShahGFH}).
We note that $d(a_tn_x^-m n_z)=dtdxdm dz$ is the restriction of the
Haar measure $dg$ to $AN^- N\subset G/M$.
We deduce
\begin{align*}& m^{\operatorname{BR}}_\Gamma(\Psi_E^\epsilon)=
\int_{z\in {- E}} \tilde m^{\operatorname{BR}}_\Gamma(\psi^\epsilon_{n_z} ) dz\\
&=\int_{ z\in {-E}} \int_{KAN^-} \int_{m\in M}
\psi^\epsilon(ka_tn_x^- m n_z) e^{ - \delta_\Gamma t}dm dxdtd\nu_{j}(k(0)) dz \\
&=\int_{k\in K}\int_{AN^-MN}\psi^\epsilon(k(a_tn^-_x m n_z)) \chi_{-E}(z) e^{ - \delta_\Gamma t} dxdt dm dz d\nu_{j}(k(0)) \\
&=\int_{k\in K}\int_{g\in G } \psi^\epsilon(k g) {\mathfrak R}_E(g) dg d\nu_{j}(k(0))\\
&=\int_{g\in U_\epsilon} \psi^\epsilon(g) \left(\int_{k\in K} {\mathfrak R}_E(k^{-1}g) d\nu_{j}(k(0)) \right) dg. \end{align*}
Hence by Lemma \ref{ell} and the identity $\int_{U_\epsilon} \psi^\epsilon dg=1$,
we have
$$m^{\operatorname{BR}}_\Gamma(\Psi_E^\epsilon)=
(1+O(\epsilon))\omega_{\Gamma}(E_{\epsilon}^{\pm}) .$$
\end{proof}
\begin{Cor}\label{inter}
If $\omega_\Gamma(\partial(E))=0$, then
$$\omega_\Gamma(E)=\lim_{\epsilon\to 0} m^{\operatorname{BR}}_\Gamma(\Psi^\epsilon_{E} ) .$$
\end{Cor}
\begin{proof}
For any $\eta>0$, there exists $\epsilon=\epsilon(\eta)$ such that
$\omega_{\Gamma}(E_\epsilon^+-E_{\epsilon}^-)<\eta$.
Together with Lemma \ref{ccc}, it implies that
$$ m^{\operatorname{BR}}_\Gamma(\Psi^\epsilon_{E} ) = (1+O(\epsilon))(1+O(\eta))\omega_\Gamma(E)$$
and hence the claim follows.
\end{proof}
\section{Conclusion: Counting circles}\label{conclusion}
Let $\Gamma<G:=\operatorname{PSL}_2(\c)$ be a non-elementary discrete group
with $|m^{\operatorname{BMS}}_\Gamma|<\infty$.
Suppose that $\mathcal P:=\Gamma(C)$ is a locally finite circle packing.
Recall that $$\operatorname{sk}_{\Gamma}(\P)=\operatorname{sk}_{\Gamma}(C):=
\int_{s\in \operatorname{Stab}_{\Gamma} (C^\dagger)\backslash C^\dagger} e^{\delta_\Gamma
\beta_{s^+}(x,s)}d\nu_{\Gamma, x}(s^+), $$
where $C^\dagger$ is the set of unit normal vectors to $\hat C$.
It follows from the conformal property of $\{\nu_{\Gamma, x}\}$
that $\operatorname{sk}_{\Gamma}(C)$
is independent of the choice of $C\in\Gamma(C)$, and hence
is an invariant of the packing $\Gamma(C)$.
Theorem \ref{m1} is an immediate consequence of the following statement.
\begin{Thm}\label{mmt2} Suppose that $\operatorname{sk}_\Gamma(C)<\infty$.
For any bounded Borel subset $E$ of $\c$ with $\omega_\Gamma(\partial(E))=0$,
we have
\begin{equation}\label{nt}
\lim_{T\to \infty}\frac{N_T(\mathcal P, E)}{T^{\delta_\Gamma} }=
\frac{\operatorname{sk}_{\Gamma}(\P)}{\delta_\Gamma \cdot |m^{\operatorname{BMS}}_\Gamma|} \cdot \omega_{\Gamma}(E).\end{equation}
Moreover $\operatorname{sk}_\Gamma(C) >0$ if $\P$ is infinite.
\end{Thm}
The second claim on the positivity of $\operatorname{sk}_\Gamma(C)$ follows from the second claim
of Theorem \ref{os} and Lemma \ref{infinite}.
We will first prove Theorem \ref{mmt2} for the case when $C$ is the unit circle $C_0$ centered
at the origin and deduce the general case from that.
\subsection*{The case of $C=C_0$.}
Fix $\eta>0$.
As $\omega_\Gamma(\partial (E))=0$,
there exists $\epsilon=\epsilon(\eta)>0$ such that
\begin{equation}\label{ej}\omega_\Gamma( {E_{4\epsilon}^+} - E_{4\epsilon}^-)\le \eta \end{equation}
where $E_{4\epsilon}^{\pm}$ is defined as in \eqref{e}:
$E_{4\epsilon}^{+}:=U_{4\epsilon} (E)$ and $ E_{4\epsilon}^{-}:=\cap_{u\in U_{4\epsilon}} u(E)$.
We can find a $\P$-admissible Borel subset $\tilde E_{\epsilon}^{+}$ such that
$E\subset \tilde E_{\epsilon}^{+}\subset E_{\epsilon}^+$ by adding
all the open disks inside $E_{\epsilon}^{+}$
intersecting the boundary of $E$.
Similarly we can find a $\P$-admissible Borel subset $\tilde E_{\epsilon}^{-}$ such that
$E_{\epsilon}^-\subset\tilde E_{\epsilon}^-\subset E$ by adding all the open disks inside $E$
intersecting the boundary of $E_{\epsilon}^{-}$.
By the local finiteness of $\P$, there are only finitely many circles intersecting $E$ (resp.
$\tilde E_{\epsilon}^{-}$) which are not contained in $\tilde E_{\epsilon}^{+}$ (resp. $E$). Therefore
there exists $q_\epsilon\ge 1$ (independent of $T$) such that
\begin{equation}\label{npt}
N_T(\P, \tilde E_{\epsilon}^{-} ) - q_\epsilon\le N_T(\P, E) \le N_T(\P, \tilde E_{\epsilon}^{+} ) +q_\epsilon .\end{equation}
Recalling the set $B_T(\tilde E^{\pm}_\epsilon)= H\backslash H KA^+_{\log T}N_{-\tilde E^{\pm}_\epsilon}\subset H\backslash G ,$
it follows from Proposition \ref{tran} and \eqref{npt} that for all $T\gg 1$,
\begin{equation}\label{tft}
\# [e]\Gamma\cap B_T(\tilde E^{-}_\epsilon)-m_0 \le N_T(\Gamma (C_0), E)\le \# [e]\Gamma\cap B_T(\tilde E^{+}_\epsilon) +m_0 \end{equation}
for some fixed $m_0=m_0(\epsilon)\ge 1$.
\begin{Lem}\label{strrr}
There exists $\ell >0$ such that for all $T> 1$ and for all small $\epsilon>0$,
$$KA_{\log T}^+U_{\epsilon}\subset K A^+_{\log T +\epsilon}N_{\ell \epsilon}$$
where $N_{\ell \epsilon}$ is the $\ell \epsilon$-neighborhood of $e$ in $N$.
\end{Lem}
\begin{proof}
We may write $U_\epsilon=M_\epsilon N^-_\epsilon A_\epsilon N_\epsilon=K_\epsilon A_\epsilon N_\epsilon$ up to uniform Lipschitz constants.
For $u=mn^-a n\in M_\epsilon N^-_\epsilon A_\epsilon N_\epsilon$,
$a_t u= m (a_tn^-a_{-t}) a_ta n$.
Since $a_t n^-a_{-t}\in U_{\epsilon}$ for $t>0$,
we may write it as $k_1 a_1 n_1\in K_{\epsilon} A_{\epsilon} N_{ \epsilon}$.
Hence for $0<t<\log T$, we have
$(a^{-1} a_{-t} n_1 a_t a) \in N_\epsilon$ and
$$a_tu =(mk_1) (a_1 a_ta) (a^{-1} a_{-t} n_1 a_t a)
n \in K A^+_{\log T + 2\epsilon }N_{2\epsilon} .$$
This proves the claim.
\end{proof}
\begin{Lem}[Stability of $KAN$-decomposition]\label{strong}\label{strong2} There exists $\ell_0>0$
(depending on $E$) such that for all $T> 1$ and for all
small $\epsilon>0$,
\begin{equation*}
KA_{\log T}^+N_{-\tilde E^+_\epsilon} U_{\ell_0\epsilon} \subset KA_{\log T +\epsilon}^+ N_{-E_{2\epsilon}^+} ;\end{equation*}
\begin{equation*}
KA_{\log T-\epsilon}^+N_{-E_{2\epsilon}^-} \subset KA_{\log T}^+ (\cap_{u\in U_{\ell_0\epsilon}} N_{-\tilde E^-_\epsilon} u) .\end{equation*}
\end{Lem}
\begin{proof}
There exists $\ell_0>0$ depending on $E$
such that $N_{-\tilde E^+_\epsilon} U_{\ell_0\epsilon}\subset U_\epsilon N_{-E_{2\epsilon}^+}$.
Hence the first claim follows from Lemma \ref{strrr}. The second claim can be proved similarly.
\end{proof}
For $\epsilon>0$, define functions $F_T^{\epsilon, \pm}$ on $\Gamma\backslash G$:
$$F_T^{\epsilon, +}(g):=\sum_{\gamma\in (H\cap \Gamma)\backslash \Gamma}\chi_{B_{e^{\epsilon}T}(N_{-E_{2\epsilon}^+})}([e]\gamma g);\quad
F_T^{\epsilon, -}(g):=\sum_{\gamma\in (H\cap \Gamma)\backslash \Gamma}\chi_{B_{e^{-\epsilon}T}(N_{-E_{2\epsilon}^-})}([e]\gamma g) .$$
Let $\ell_0>0$ be as in Lemma \ref{strong}.
Without loss of generality, we may assume that $\ell_0<\ell$
for $\ell$ as in Lemma \ref{ell}.
\begin{Lem} For all $g\in U_{\ell_0 \epsilon }$ and $T\gg 1$,
\begin{equation}\label{ft} F_T^{\epsilon, -}(g)-m_0
\le N_T(\Gamma (C_0), E) \le F_T^{\epsilon, +}(g)+m_0.
\end{equation}\end{Lem}
\begin{proof} Note that, since $U_{\ell_0\epsilon}$ is symmetric, for any $g\in U_{\ell_0 \epsilon }$,
$$\# [e]\Gamma\cap B_T(\tilde E^+_\epsilon) \le \# [e]\Gamma\cap B_T(\tilde E^+_\epsilon) U_{\ell_0 \epsilon} g^{-1} \le
\# [e]\Gamma g \cap B_{e^{\epsilon}T}(N_{-E_{2\epsilon}^+}) ,$$
by Lemma \ref{strong}, which proves the second inequality by \eqref{tft}.
The other
inequality can be proved similarly.
\end{proof}
For $\epsilon>0$,
let $\psi^\epsilon$ be a non-negative continuous function in $C(G)$ with support in $U_{\ell_0 \epsilon}$
with integral one and
$\Psi^\epsilon\in C_c(\Gamma\backslash G)$ be the $\Gamma$-average of $\psi^\epsilon$:
$$\Psi^\epsilon(\Gamma g):=\sum_{\gamma\in \Gamma}\psi^\epsilon(\gamma g) .$$
By integrating \eqref{ft} against $\Psi^\epsilon$, we have
$$\langle F_{T}^{\epsilon,-}, \Psi^\epsilon\rangle -m_0
\le N_T(\Gamma (C_0), E) \le \langle F_{T}^{\epsilon,+}, \Psi^\epsilon\rangle +m_0. $$
Since \begin{align*}
\langle F_{T}^{\epsilon,+}, \Psi^\epsilon \rangle \notag &=\int_{\Gamma\backslash G}
\sum_{\gamma\in \Gamma\cap H\backslash \Gamma}\chi_{B_{e^\epsilon T}(N_{-E_{2\epsilon}^+})}([e]\gamma g )
\Psi^\epsilon(g) \; dg\\ &= \int_{g\in \Gamma\cap H\backslash G} \chi_{B_{e^\epsilon T}(N_{-E_{2\epsilon}^+})} ([e] g) \Psi^\epsilon(g) \; dg \notag
\\ &=\int_{[g]\in B_{e^\epsilon T}(N_{-E_{2\epsilon}^+})}\int_{h\in \Gamma\cap H\backslash H}\Psi^\epsilon (hg)\, dhd\lambda(g)
\end{align*}
we deduce from Theorem \ref{mtt} and Lemma \ref{locfinite}
that
\begin{equation}\label{sim2} \langle F_{T}^{\epsilon,+}, \Psi^\epsilon \rangle \sim
\frac{\operatorname{sk}_{\Gamma}(C_0)}{\delta_\Gamma \cdot |m^{\operatorname{BMS}}_\Gamma|}\cdot
\int_{n\in N_{-E_{2\epsilon}^+} } m^{\operatorname{BR}}_\Gamma(\Psi_n^\epsilon ) dn \cdot T^{\delta_\Gamma} \cdot e^{\epsilon \delta_\Gamma} \end{equation}
where $\Psi_n^\epsilon (g)=\int_{m\in M}\Psi^\epsilon(gmn)dm$.
Therefore by applying Lemma \ref{ccc} to \eqref{sim2} and using \eqref{ej}, we deduce
\begin{align*} \limsup_T \frac{\langle F_{T}^{\epsilon,+}, \Psi^\epsilon \rangle}{T^{\delta_\Gamma}}
&\le (1+\epsilon) \frac{\operatorname{sk}_{\Gamma}(C_0)}{\delta_\Gamma \cdot |m^{\operatorname{BMS}}_\Gamma|}\cdot
\int_{n\in N_{-E_{2\epsilon}^+} } m^{\operatorname{BR}}_\Gamma(\Psi_n^\epsilon ) dn \\ & \le
(1+\epsilon)(1+c\epsilon) \frac{\operatorname{sk}_{\Gamma}(C_0)}{\delta_\Gamma \cdot |m^{\operatorname{BMS}}_\Gamma|}\cdot
\omega_{\Gamma}(E_{4\epsilon}^+) \\&
\le (1+ c_1 \eta )(1+ c_2\epsilon ) \frac{\operatorname{sk}_{\Gamma}(C_0)}{\delta_\Gamma \cdot |m^{\operatorname{BMS}}_\Gamma|}\cdot
\omega_{\Gamma}(E) \end{align*}
where the constants $c,c_1, c_2$ depend only on $E$.
Similarly, we have
\begin{align*} \liminf_T \frac{\langle F_{T}^{\epsilon,-}, \Psi^\epsilon \rangle}{T^{\delta_\Gamma}}
\ge (1-c_1\eta)(1-c_2\epsilon) \frac{\operatorname{sk}_{\Gamma}(C_0)}{\delta_\Gamma \cdot |m^{\operatorname{BMS}}_\Gamma|}\cdot
\omega_{\Gamma}(E).
\end{align*}
As $\eta>0$ is arbitrary and $\epsilon=\epsilon(\eta)\to 0$ as $\eta\to 0$, we have
$$
\lim_{T\to \infty}\frac{N_T(\Gamma (C_0), E)}{T^{\delta_\Gamma} }=
\frac{\operatorname{sk}_{\Gamma}(C_0)}{\delta_\Gamma \cdot |m^{\operatorname{BMS}}_\Gamma|} \cdot \omega_{\Gamma}(E).$$
This proves Theorem \ref{mmt2} for $C=C_0$.
\subsection*{The general case}
Let $r>0$ be the radius of $C$ and $p\in \c$ the center of $C$.
Set
$$g_0=n_{p}a_{\log r}=\begin{pmatrix} 1& p\\ 0 &1\end{pmatrix}\begin{pmatrix} \sqrt {r}& 0\\ 0 &\sqrt{r^{-1}} \end{pmatrix} .$$
Then $g_0^{-1}(z)= r^{-1}(z-p)$ for $z\in \c$ and $g_0^{-1}(C)=C_0 .$
Setting $\Gamma_0=g^{-1}_0\Gamma g_0$, we have \begin{align*}
N_T(\Gamma (C), E)&=\#\{C\in \Gamma(g_0(C_0)): C^\circ \cap E\ne\emptyset, \operatorname{Curv}(C)<T\}\\
&=\#\{g_0^{-1}(C)\in \Gamma_0(C_0): C^\circ \cap E\ne\emptyset, \operatorname{Curv}(C)<T\}\\
&=\#\{C_*\in \Gamma_0(C_0): C_*^\circ \cap g_0^{-1} (E)\ne\emptyset, \operatorname{Curv}(C_*)< rT\}\\
&=N_{rT}(\Gamma_0 (C_0), g_0^{-1}(E)).
\end{align*}
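As a numerical sanity check (ours, not part of the proof): $g_0^{-1}$ is affine, so it maps a circle of radius $\rho$ to a circle of radius $\rho/r$, and curvatures are multiplied by $r$; this is the source of the threshold $rT$ above. A short Python illustration with arbitrary sample values:
\begin{verbatim}
import numpy as np

r, p = 2.5, 1.0 + 2.0j                  # sample radius/center of C
g0_inv = lambda z: (z - p) / r          # boundary action of g_0^{-1}

rho, c = 0.4, 3.0 - 1.0j                # a sample circle of curvature 1/rho
theta = np.linspace(0.0, 2.0*np.pi, 256, endpoint=False)
image = g0_inv(c + rho*np.exp(1j*theta))
center = image.mean()                   # affine maps send circles to circles
rho_im = np.abs(image - center).mean()
assert np.isclose(rho_im, rho/r)        # curvature is multiplied by r
\end{verbatim}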
We claim that
\begin{equation}\label{haha}
\frac{1} { |m_{\Gamma_0}^{\operatorname{BMS}}|} \cdot
\operatorname{sk}_{\Gamma_0}(\Gamma_0(C_0)) \cdot r^{\delta_\Gamma}\cdot \omega_{\Gamma_0} (g^{-1}_0(E))=
\frac{1} { |m_{\Gamma}^{\operatorname{BMS}}|} \cdot
\operatorname{sk}_{\Gamma}(\Gamma(C)) \cdot \omega_{\Gamma} (E).
\end{equation}
Note that
each side of the above is independent of the choice of conformal densities of $\Gamma_0$ and $\Gamma$, respectively.
Fixing a $\Gamma$-invariant conformal density $\{\nu_{\Gamma, x}\}$
of dimension $\delta_\Gamma$,
set $$\nu_{\Gamma_0, x}:={g_0}^*\nu_{\Gamma, g_0(x)}$$
where $g_0^*\nu_{\Gamma, g_0(x)}(R)=\nu_{\Gamma, g_0(x)}(g_0(R)).$
It is easy to check that $\nu_{\Gamma_0,x}$ is supported on $\Lambda(\Gamma_0)=g_0^{-1}\Lambda(\Gamma)$
and satisfies
$$\frac{d\nu_{\Gamma_0,x}}{d\nu_{\Gamma_0,y}}(z)=e^{-\delta_\Gamma\beta_z(x,y)};\quad
\gamma_*\nu_{\Gamma_0, x}=\nu_{\Gamma_0,\gamma(x)}$$
for all $x,y\in \mathbb{H}^3$, $\gamma\in \Gamma_0$ and $z\in \hat \c$.
Hence $\{\nu_{\Gamma_0, x}:x\in \mathbb{H}^3\}$ is a $\Gamma_0$-invariant conformal density
of dimension $\delta_\Gamma=\delta_{\Gamma_0}$ and satisfies that for $f\in C_c(\c)$
$$\int_{g_0(z)\in E} f(z) d\nu_{ \Gamma_0,x}(z)=
\int_{z\in E} f(g^{-1}_0(z)) d\nu_{\Gamma,g_0(x)}(z) .$$
We consider the Bowen-Margulis-Sullivan measures $m^{\operatorname{BMS}}_{\Gamma}$ and
$m^{\operatorname{BMS}}_{\Gamma_0}$
on $\Gamma\backslash \operatorname{T}^1(\mathbb{H}^3)$ and $\Gamma_0\backslash \operatorname{T}^1(\mathbb{H}^3)$ associated to $\{\nu_{\Gamma, x}\}$ and
$\{\nu_{\Gamma_0, x}\}$, respectively.
\begin{Lem}\label{fione} For a bounded Borel function $\psi$ on $\Gamma\backslash \operatorname{T}^1(\mathbb{H}^3)$,
consider a function $\psi_{g_0}$ on $\Gamma_0\backslash \operatorname{T}^1(\mathbb{H}^3)$ given by $\psi_{g_0}(u):= \psi(g_0 (u)).$
Then
$$m_{ \Gamma_0 }^{\operatorname{BMS}}(\psi_{g_0})=m_{\Gamma}^{\operatorname{BMS}}(\psi) .$$
In particular,
$|m_{\Gamma_0}^{\operatorname{BMS}}|=|m_{\Gamma}^{\operatorname{BMS}}| .$
\end{Lem}
\begin{proof} Note that
if $v=g(u)$, then
$$ \beta_{u^{\pm}}(x, \pi(u))= \beta_{v^{\pm}}(g(x), \pi(v)).$$
Since $\nu_{\Gamma_0, x}=g_0^*\nu_{\Gamma, g_0(x)}$, we have
\begin{align*} &m_{\Gamma_0 }^{\operatorname{BMS}}(\psi_{g_0})\\ &
=\int_{u\in \Gamma_0\backslash \operatorname{T}^1(\mathbb{H}^3)} \psi(g_0(u))e^{\delta_\Gamma \beta_{u^+}(x, \pi(u))}\;
e^{\delta_\Gamma \beta_{u^-}(x,\pi(u)) }\;
d\nu_{\Gamma_0,x}(u^+) d\nu_{\Gamma_0 ,x}(u^-) dt\\ &=
\int_{v\in \Gamma\backslash \operatorname{T}^1(\mathbb{H}^3)} \psi(v)e^{\delta_\Gamma \beta_{v^+}(g_0(x), \pi(v))}\;
e^{\delta_\Gamma \beta_{v^-}(g_0(x),\pi(v)) }\;
d\nu_{\Gamma, g_0(x)}(v^+) d\nu_{\Gamma, g_0(x)}(v^-) dt\\ & =
m^{\operatorname{BMS}}_{\Gamma}(\psi) .\end{align*}
\end{proof}
Similarly, we can verify:
\begin{Lem} \label{fitwo} For any $x\in \mathbb{H}^3$,
$$\int_{s\in \operatorname{Stab}_{\Gamma_0}(C_0^\dagger)\backslash C_0^\dagger}e^{\delta_\Gamma \beta_{s^+}(x, s)} d\nu_{ \Gamma_0, x}(s^+)
= \int_{s\in \operatorname{Stab}_{\Gamma}(C^\dagger)\backslash C^\dagger}e^{\delta_\Gamma \beta_{s^+}(g_0(x), s)} d\nu_{\Gamma, g_0(x) }(s^+);$$
that is,
$\operatorname{sk}_{\Gamma} (\Gamma(C))=\operatorname{sk}_{\Gamma_0} (\Gamma_0(C_0)) .$
\end{Lem}
\begin{Lem} For any bounded Borel subset $E\subset \c$,
$$\omega_{\Gamma_0}(g^{-1}_0(E)) = r^{-\delta_\Gamma} \omega_{\Gamma}(E) .$$
\end{Lem}
\begin{proof} Since $g_0^{-1}(z)=r^{-1}(z-p)$,
$r^{-1}$ is the linear distortion of the map $g_0^{-1}$ in the Euclidean metric, that is,
$r^{-1}=\lim_{w\to w_0}\frac{|g_0^{-1}(w)-g_0^{-1}(w_0)|}{|w-w_0|}$ for any $w_0\in \c$.
Hence
$$d\nu_{\Gamma, g_0(j)}(w)=r^{-\delta_\Gamma} \frac{(|w|^2+1)^{\delta_\Gamma}}{(|g_0^{-1}(w)|^2+1)^{\delta_\Gamma}} d\nu_{\Gamma, j}(w).$$
Since $\nu_{\Gamma_0, x}=g_0^*\nu_{\Gamma, g_0(x)}$, we deduce
\begin{align*}\omega_{\Gamma_0}(g^{-1}_0(E))&=
\int_{z\in g_0^{-1}(E)}( |z|^2+1)^{\delta_\Gamma} d\nu_{\Gamma_0, j}(z)\\&=
\int_{u\in E} (|g_0^{-1}(u)|^2+1)^{\delta_\Gamma} d\nu_{\Gamma, g_0(j)}(u)
\\ &=r^{-\delta_\Gamma} \int_{ u\in E }(|u|^2+1)^{\delta_\Gamma} d\nu_{\Gamma,j}(u)
\\ &=r^{-\delta_\Gamma} \omega_\Gamma(E).
\end{align*}
\end{proof}
This concludes the proof of \eqref{haha}.
Therefore, since $\operatorname{sk}_{\Gamma_0}(C_0)<\infty$ and $|m_{\Gamma_0}^{\operatorname{BMS}}|<\infty$, the previous case of $C=C_0$ yields that
\begin{align*}
\lim_{T\to \infty}\frac{1}{T^{\delta_\Gamma}} N_T(\Gamma (C), E)
&=\lim_{T\to \infty}\frac{1}{T^{\delta_\Gamma}}
N_{rT}(\Gamma_0 (C_0), g_0^{-1}(E))\\ & =
\frac{1}{\delta_{\Gamma_0} \cdot |m_{\Gamma_0}^{\operatorname{BMS}}|} \cdot
\operatorname{sk}_{\Gamma_0}(C_0) \cdot r^{\delta_\Gamma}\cdot \omega_{\Gamma_0} (g^{-1}_0(E))
\\& = \frac{1}{\delta_\Gamma \cdot |m_{\Gamma}^{\operatorname{BMS}}|} \cdot
\operatorname{sk}_{\Gamma} (C) \cdot \omega_{\Gamma}(E).\end{align*}
This completes the proof of Theorem \ref{mmt2}. \qed
\bibliographystyle{plain}
\section{Introduction
\label{Introduction}}
The discovery of neutrino
oscillations~\cite{fukuda:2002pe,ahmad:2002jz,araki:2004mb,Kajita:2004ga,ahn:2002up}
has indicated a very peculiar structure of lepton
mixing~\cite{Maltoni:2004ei}, quite distinct from that of quarks.
These data have triggered a rush of papers attempting to understand the
values of the leptonic mixing angles from underlying symmetries at a
fundamental level.
An attractive possibility is that the observed pattern of lepton
mixing results from some kind of flavour symmetry, such as $A_4$,
valid at some superhigh energy scale where the dimension-five
neutrino mass operator arises~\cite{babu:2002dz}.
Here we reconsider the Harrison-Perkins-Scott (HPS) mixing
pattern~\cite{Harrison:2002er} within a simple reference model
approach. Our only assumption is that at the high energy scale the
tree-level neutrino mass matrix $m_{\nu}^{\textrm{tree}}$ is
diagonalized by the so-called HPS matrix, taken as,
\begin{equation}
\label{eq:HPS}
U_{\textrm{HPS}} =
\begin{pmatrix}
\sqrt{2/3} & 1/\sqrt{3} & 0\\
-1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2}\\
-1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2}
\end{pmatrix}\;,
\end{equation}
which corresponds to the following mixing angle values:
\begin{align}
\begin{split}
\tan^2\theta_{\textrm{atm}}&=\tan^2\theta_{23}^0=1\,,\\
\sin^2\theta_{\textrm{Chooz}}&=\sin^2\theta_{13}^0=0\,,\\
\tan^2\theta_{\textrm{sol}}&=\tan^2\theta_{12}^0=0.5\;.
\end{split}
\end{align}
These predictions, which hold at high energies, may be regarded as a
good first approximation to the observed values~\cite{Maltoni:2004ei}
indicated by oscillation
experiments~\cite{fukuda:2002pe,ahmad:2002jz,araki:2004mb,Kajita:2004ga,ahn:2002up}.
The diagonal neutrino mass matrix can be written as
$ \hat{m}_{\nu}^{\textrm{tree}}=U_{\textrm{HPS}}^{\textrm{T}}\cdot m_{\nu}^{\textrm{tree}}\cdot U_{\textrm{HPS}}=
{\rm diag} (m_1, m_2, m_3)$,
so that the tree-level neutrino mass matrix becomes
\begin{equation}
\label{eq:m-neu-anz}
m_{\nu}^{\textrm{tree}} =
\begin{pmatrix}
\frac{2}{3}m_1+\frac{1}{3}m_2 & -\frac{1}{3}m_1+\frac{1}{3}m_2 & -\frac{1}{3}m_1+\frac{1}{3}m_2\\[+1mm]
-\frac{1}{3}m_1+\frac{1}{3}m_2 & \frac{1}{6}m_1+\frac{1}{3}m_2+\frac{1}{2}m_3 & \frac{1}{6}m_1+\frac{1}{3}m_2-\frac{1}{2}m_3\\[+1mm]
-\frac{1}{3}m_1+\frac{1}{3}m_2 & \frac{1}{6}m_1+\frac{1}{3}m_2-\frac{1}{2}m_3 & \frac{1}{6}m_1+\frac{1}{3}m_2+\frac{1}{2}m_3\\
\end{pmatrix}\;.
\end{equation}
This form corresponds to a specific structure for the dimension-five
lepton number violating operator.
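As an illustration (ours, not in the original derivation), the following SymPy sketch rebuilds $m_{\nu}^{\textrm{tree}}$ from the ansatz and confirms that $U_{\textrm{HPS}}$ diagonalizes it, reproducing Eq.~(\ref{eq:m-neu-anz}) entry by entry:
\begin{verbatim}
import sympy as sp

m1, m2, m3 = sp.symbols('m1 m2 m3', real=True)
U = sp.Matrix([[ sp.sqrt(sp.Rational(2, 3)), 1/sp.sqrt(3),  0],
               [-1/sp.sqrt(6),               1/sp.sqrt(3), -1/sp.sqrt(2)],
               [-1/sp.sqrt(6),               1/sp.sqrt(3),  1/sp.sqrt(2)]])
m_tree = sp.expand(U * sp.diag(m1, m2, m3) * U.T)   # Eq. (eq:m-neu-anz)
assert sp.simplify(U.T*m_tree*U - sp.diag(m1, m2, m3)) == sp.zeros(3, 3)
print(m_tree)   # reproduces the matrix of Eq. (eq:m-neu-anz)
\end{verbatim}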
\begin{figure}[h] \centering
\includegraphics[height=3.5cm,width=.4\linewidth]{d-5.eps}
\caption{\label{fig:d-5} Dimension five operator responsible for
neutrino mass.}
\end{figure}
For example, it constitutes the most general ansatz that follows from
a basic $A_4$ symmetry for the neutrino mass matrix and the quark
mixing matrix~\cite{babu:2002dz}. One of the central open questions in
neutrino physics is to identify the exact mechanism that produces
the operator in Fig.~\ref{fig:d-5}. As a first step, here we will adopt a
model-independent approach of considering the implications of
Eq.~(\ref{eq:m-neu-anz}) assuming only the evolution expected in
flavor-blind softly broken minimal supergravity at unification. This
will provide us with a reference value that can be useful in the
future for treating different models of neutrino
mass~\cite{Valle:2006vb}.
\section{Radiative corrections
\label{corrections}}
It has already been noted that radiative corrections present in the
Standard Model renormalization group equations (RGEs), leave the HPS
``reference'' predictions essentially stable~\cite{Luo:2005fc}. In
addition to Minimal Supersymmetric Standard Model RGE evolution, here
we consider also the effect of one-loop threshold
effects~\cite{Chun:1999vb}.
We will first consider the evolution of the neutrino oscillation
parameters that follow from Eq.~(\ref{eq:m-neu-anz}), which covers
both the cases of degenerate and
hierarchical neutrino masses. The radiatively corrected neutrino mass
matrix in this case becomes
\begin{equation}\label{eq:m-neu-1loop}
m_{\nu}^{\textrm{1-loop}}=m_{\nu}^{\textrm{tree}}+\hat{\delta}^{\textrm{T}}\cdot m_{\nu}^{\textrm{tree}}+m_{\nu}^{\textrm{tree}}\cdot\hat{\delta}\;,
\end{equation}
where
\begin{equation}
\label{eq:delta-matrix}
\hat{\delta} =
\begin{pmatrix}
\delta_{ee}' & \delta_{\mu e} & \delta_{\tau e}\\
\delta_{e\mu} & \delta_{\mu\mu}' & \delta_{\tau\mu}\\
\delta_{e\tau} & \delta_{\mu\tau} & \delta_{\tau\tau}'\\
\end{pmatrix}\;.
\end{equation}
The diagonal elements include the threshold correction and the RGE running
\begin{equation}
\label{eq:delta-diag}
\delta_{\alpha\alpha}'=\delta_{\alpha\alpha}+\delta_{\alpha}\;,
\end{equation}
where the RGE running effect is~\cite{Babu:1993qv}
\begin{equation}\label{eq:del-RGE}
\delta_{\alpha} = \frac{-h_{\alpha}^2}{16\pi^2}\ln\left(\frac{M_{\textrm{GUT}}}{M_{\textrm{EWSB}}}\right)\;.
\end{equation}
In order to get the analytic expressions for the threshold
corrections, we proceed as in Ref.~\cite{Hirsch:2003dr}. However, now
we do not neglect Yukawa couplings, taking into account the fact that
right- and left-handed charged sleptons mix. Therefore, the analytic
expressions for the deltas are
\begin{equation}
\begin{split}
\label{radneumass}
\delta_{\alpha \beta}^{{\rm (a)}\chi^+} &=
\sum_{i=1}^6 \sum_{A=1}^2\frac{1}{16\pi^2}
(g U_{A1}^*R_{i\alpha}^{\tilde\ell}-h_{\alpha}U_{A2}^*R_{i\alpha+3}^{\tilde\ell})
(g U_{A1}R_{i\beta}^{\tilde\ell\ast}-h_{\beta}U_{A2}R_{i\beta+3}^{\tilde\ell\ast})\\
&\quad\times B_1(m_{\chi_{A}^+}^2,m_{\tilde\ell_i}^2) \;, \\
\delta_{\alpha \beta}^{{\rm (a)}\chi^0} &=
\sum_{i=1}^3 \sum_{A=1}^4 \frac{1}{32\pi^2}|g N_{A2}-g' N_{A1}|^2
R_{i\alpha}^{\tilde\nu} R_{i \beta}^{\tilde\nu\ast}
B_1(m_{\chi_{A}^0}^2,m_{\tilde\nu_i}^2) \;, \\
\delta_{\alpha \beta}^{{\rm (c)}\chi^+} &=
\sum_{i=1}^6 \sum_{A=1}^2 \sum_{B=1}^2\frac{1}{4\pi^2}
(g U_{A1}^*R_{i\alpha}^{\tilde\ell}-h_{\alpha}U_{A2}^*R_{i\alpha+3}^{\tilde\ell})
g U_{A1}|V_{B2}|^2 R_{i \beta}^{\tilde\ell\ast}
C_{00}(m_{\chi_{A}^+}^2,
m_{\chi_{B}^+}^2 ,m_{\tilde\ell_i}^2) \;, \\
\delta_{\alpha \beta}^{{\rm (c)}\chi^0} &=
\sum_{i=1}^3 \sum_{A=1}^4 \sum_{B=1}^4 \frac{1}{8\pi^2}
|gN_{A2}-g'N_{A1}|^2|N_{B4}|^2
R_{i\alpha}^{\tilde\nu} R_{i \beta}^{\tilde\nu\ast}
C_{00}(m_{\chi_{A}^0}^2,
m_{\chi_{B}^0}^2,m_{\tilde\nu_i}^2) \;,
\end{split}
\end{equation}
where we have evaluated the Feynman diagrams at zero external
momentum, which is an excellent approximation as the neutrino masses
are tiny. Here $\delta_{\alpha \beta}^{{\rm (a,c)}\chi^+},
(\alpha,\beta=e,\mu,\tau$), are the contributions from the
chargino/charged slepton diagrams in Fig.~\ref{loopfig} (a,c), respectively,
while $\delta_{\alpha \beta}^{{\rm (a,c)}\chi^0}$ are the contributions
from the neutralino/sneutrino diagrams.
The values of the $\delta_{\alpha\beta}$'s, in Eqs. (\ref{eq:delta-matrix})
and (\ref{eq:delta-diag})
are the sum of the four contributions given above.
Analogous contributions exist corresponding to the symmetrized terms
in Eq.~(\ref{eq:m-neu-1loop}), required by the Pauli principle, as
displayed in Fig.~\ref{loopfig} (b,d).
In the above formulas, $U$ and $V$ are the chargino mixing matrices
and $m_{\chi^+_A}, (A=1,2)$, are chargino masses, while $N$ is the
neutralino mixing matrix with $m_{\chi^0_A}, (A=1,..,4)$, denoting the
neutralino masses. Finally, the matrices $R^{\tilde\ell/\tilde\nu}$
denote the slepton/sneutrino mixing matrices, respectively. The
coupling constant of the $SU(2)$ gauge group is denoted $g$ and that
of $U(1)$ is $g'$. Here $h_{\alpha}$ is the charged lepton Yukawa
coupling in the basis where the charged lepton masses are diagonal.
Furthermore $B_1$ and $C_{00}$ are Passarino-Veltman functions given
by
\begin{equation}
B_1(m_0^2,m_1^2)=-\frac{1}{2}\Delta_\epsilon +\frac{1}{2}
\ln \left( \frac{m_0^2}{M^2_{\textrm{EWSB}}} \right)+
\frac{-3+4t-t^2-4t\ln (t)+2t^2\ln(t)}{4(t-1)^2}\;,
\end{equation}
where $t=m_1^2/m_0^2$ and
\begin{equation}
C_{00}(m_0^2,m_1^2,m_2^2) =
\frac{1}{8}(3+2\Delta_{\epsilon}) -\frac{1}{4} \ln \left( \frac{m_0^2}{M^2_{\textrm{EWSB}}} \right)
+ \frac{-2r_1^2(r_2-1)\ln(r_1)+2r_2^2(r_1-1)\ln(r_2)}
{8(r_1-1)(r_2-1)(r_1-r_2)}\;
\end{equation}
where $r_1=m_1^2/m_0^2$ and $r_2=m_2^2/m_0^2$. We have
used dimensional regularization, with $\epsilon=4-n$, where $n$ is
the number of space-time dimensions. The term $\Delta_{\epsilon}=(2/\epsilon)-\gamma+\ln(4\pi)$, where $\gamma$ is Euler's constant, is divergent as $\epsilon\to 0$.
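For numerical work these loop functions can be coded directly; the sketch below (ours) keeps only the finite parts, dropping the $\Delta_\epsilon$ pieces, which we do not track here, and assumes non-degenerate mass arguments and an illustrative renormalization point:
\begin{verbatim}
import numpy as np

M_EWSB = 1.0e3   # GeV; renormalization point (illustrative choice)

def B1(m0sq, m1sq):
    """Finite part of B_1 (the Delta_eps term is dropped)."""
    t = m1sq / m0sq
    return (0.5*np.log(m0sq/M_EWSB**2)
            + (-3 + 4*t - t**2 - 4*t*np.log(t) + 2*t**2*np.log(t))
              / (4*(t - 1)**2))

def C00(m0sq, m1sq, m2sq):
    """Finite part of C_00 (the Delta_eps term is dropped)."""
    r1, r2 = m1sq/m0sq, m2sq/m0sq
    return (3.0/8 - 0.25*np.log(m0sq/M_EWSB**2)
            + (-2*r1**2*(r2-1)*np.log(r1) + 2*r2**2*(r1-1)*np.log(r2))
              / (8*(r1-1)*(r2-1)*(r1-r2)))
\end{verbatim}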
\begin{figure}
\centering
\includegraphics[height=8cm,width=.8\textwidth]{nu-loops-fig.ps}
\caption{Feynman diagrams responsible for neutrino mass radiative
corrections. The blob indicates an effective Lagrangian term
obtained from integrating out the heavy right-handed neutrinos.}
\label{loopfig}
\end{figure}
\section{Corrections to mixing angles: numerical results}
\label{sec:corr-neutr-mixing}
We now describe our numerical procedure. In order to compute the
magnitude of the radiative corrections expected in the HPS ansatz we
work in the framework of a reference minimal supergravity model
approach, with universal flavor-blind soft supersymmetry breaking
terms at unification. Therefore the off-diagonal elements in the
matrix in Eq.~(\ref{eq:delta-matrix}) are all zero~\footnote{Nonzero
off-diagonal elements may arise due to running, see discussion.}
\begin{equation}
\label{eq:zero-off-diag}
\delta_{e\mu}=\delta_{e\tau}=\delta_{\mu\tau}=
\delta_{\mu e}=\delta_{\tau e}=\delta_{\tau \mu}=0\;.
\end{equation}
We first have used the SPheno package~\cite{Porod:2003um} to calculate
mSUGRA spectra and mixing matrices within the ranges:
$M_{1/2},\,M_0,\,A_0\in[100,\,1000]$ GeV, $A_0$ with both signs,
$\tan\beta\in[2.5,\,50]$ and $\mu$ with both signs. Then we have
calculated the RGE running, Eq.~(\ref{eq:del-RGE}), and the threshold
corrections, Eqs.~(\ref{radneumass}). We have explicitly checked that
the dominant contribution to $\delta'_{\alpha\alpha}$, defined in
Eq.~(\ref{eq:delta-diag}), always comes from the threshold corrections
for $\alpha=e,\,\mu$.
Also for $\alpha=\tau$, threshold corrections are usually more important
than RGE running contributions,
typically
\begin{equation}
\delta_{\alpha\alpha}\sim\mathcal{O}(10^{(-4,-3)})\,,\quad \forall\alpha
\end{equation}
while
\begin{align}
\delta_{e}&\sim\mathcal{O}(10^{(-11,-9)}) & \delta_{\mu}&\sim\mathcal{O}(10^{(-7,-4)}) & \delta_{\tau}&\sim\mathcal{O}(10^{(-4,-2)}) \,.
\end{align}
Note that only for very large values of $\tan\beta$, the RGE effect
$\delta_{\tau}$ is slightly larger than the threshold corrections
$\delta_{\tau\tau}$.
Using these radiative corrections we have computed the delta matrix in
Eq.~(\ref{eq:delta-matrix}) and inserted it in the neutrino mass
matrix at 1-loop given in Eq.~(\ref{eq:m-neu-1loop}). We have then
numerically diagonalized the 1-loop neutrino mass matrix in
Eq.~(\ref{eq:m-neu-1loop}) in order to obtain the neutrino masses and
mixing angles.
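Schematically, the last two steps amount to the following NumPy sketch (ours; all inputs are placeholders, whereas in the actual calculation the $\delta$'s come from Eqs.~(\ref{eq:del-RGE}) and (\ref{radneumass}) evaluated on the SPheno spectrum):
\begin{verbatim}
import numpy as np

U = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),  0.0],
              [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
              [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)]])
m = np.array([1.0e-3, 9.0e-3, 5.0e-2])  # trial masses (eV), normal hierarchy
m_tree = U @ np.diag(m) @ U.T           # Eq. (eq:m-neu-anz)
delta = np.diag([1e-4, 2e-4, 5e-3])     # placeholder diagonal delta matrix
m_loop = m_tree + delta.T @ m_tree + m_tree @ delta   # Eq. (eq:m-neu-1loop)

w, V = np.linalg.eigh(m_loop)           # ascending masses for this CP choice
tan2_sol   = (V[0, 1]/V[0, 0])**2
tan2_atm   = (V[1, 2]/V[2, 2])**2
sin2_chooz = V[0, 2]**2
\end{verbatim}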
Notice that the HPS scheme only fixes neutrino mixing angles. Thus,
the neutrino masses are free parameters. Making use of this freedom,
we have used an iterative procedure in order to choose the parameters
$m_1$, $m_2$ and $m_3$, so that the numerically calculated 1-loop
neutrino masses are such that the solar and atmospheric squared-mass
splittings $\Delta m^2_{\textrm{sol}}$ and $\Delta m^2_{\textrm{atm}}$ reproduce the current best fit point
value. In our numerical calculation we concentrate on normal hierarchy.
We will comment on the case of inverse hierarchy at the end of the next section.
The numerically calculated atmospheric and reactor neutrino mixing
angles at low energies do not deviate significantly from their HPS
reference values at high energies. Indeed, the numerical results are:
\begin{align}
\begin{split}
\tan^2\theta_{\textrm{atm}}&\lesssim\tan^2\theta_{23}^0+\mathcal{O}(10^{-1})\,,\\
\sin^2\theta_{\textrm{Chooz}}&\lesssim\sin^2\theta_{13}^0+\mathcal{O}(10^{-7})\;.
\end{split}
\end{align}
However, the solar neutrino mixing angle can be significantly
affected. In Fig.~\ref{fig:tansqsol-mnu1}, we have plotted the
maximum deviation of the solar angle from the HPS reference value for
$\tan\beta\in[2.5,50]$, as a function of $m_{\nu_1}$, for both extreme
$CP$ parity combinations for $m_{\nu_1}$ and $m_{\nu_2}$: same sign
(left panel) and opposite sign (right panel). All the other $CP$
possibilities lie in between these two extreme cases.
As can be seen, the solar mixing angle remains essentially stable in
the case of opposite $CP$ signs, while deviations are maximal in the
case of same $CP$ signs. In this case, the solar mixing angle always
increases with respect to the HPS value, irrespective of mSUGRA
parameters.
Moreover we can get a rough upper bound on $m_{\nu_1}$ of order
\begin{equation}
\label{eq:bound}
m_{\nu_1}\lesssim 0.2 \;\textrm{eV}
\end{equation}
for the mSUGRA parameter values: $M_{1/2} = 100$ GeV, $M_0 = -
A_0=10^3$ GeV, $\mu >0$ and $\tan\beta=2.5$.
Note that the upper bound is sensitive to the values of $\tan\beta$.
For higher values of $\tan\beta$ the radiative corrections are larger,
implying a more stringent bound on $m_{\nu_1}$, as indicated by the
upper boundary of the red (dark) band of the left panel in
Fig.~\ref{fig:tansqsol-mnu1}.
Here we have fixed solar and atmospheric mass squared splittings at
their best-fit values from Ref.~\cite{Maltoni:2004ei}. However, we
have explicitly checked that the effect of letting $\Delta m^2_{\textrm{atm}}$ and $\Delta m^2_{\textrm{sol}}$
vary within their current $3\sigma$ allowed range is negligible, i.e.,
the bands obtained at the extreme values almost coincide with the ones
in Fig.~\ref{fig:tansqsol-mnu1}.
\begin{figure}[htbp]
\centering
\includegraphics[height=6cm,width=8cm]{tansqsol-mnu1-CP-pp.eps}
\includegraphics[clip,height=6cm,width=8cm]{tansqsol-mnu1-CP-pm.eps}
\caption{Upper bound for the solar mixing parameter
$\tan^2\theta_{\textrm{sol}}$, as a function of $m_{\nu_1}$ (in eV), for
$\tan\beta=2.5$ (lower border of the red band) and $\tan\beta=50$
(upper border of the red band). On the left panel, $m_{\nu_1}$ and
$m_{\nu_2}$ have the same $CP$ sign. On the right panel,
$m_{\nu_1}$ and $m_{\nu_2}$ have opposite $CP$ sign. The neutrino
mass splittings are assumed to have their best fit value
from~\cite{Maltoni:2004ei}. The horizontal band corresponds to the
$3\sigma$ allowed range for
$\tan^2\theta_{\textrm{sol}}$~\cite{Maltoni:2004ei}.}
\label{fig:tansqsol-mnu1}
\end{figure}
\section{Analytical understanding}
\label{sec:analyt-underst}
The numerical results presented above can be understood analytically
as follows. If we perform the original HPS rotation to the 1-loop
neutrino mass matrix in Eq.~(\ref{eq:m-neu-1loop}), we get:
\begin{align}
\hat{m}_{\nu}^{\textrm{1-loop}}&=U_{\textrm{HPS}}^{\textrm{T}}\cdot m_{\nu}^{\textrm{1-loop}}\cdot U_{\textrm{HPS}}\\[+1mm]
\label{eq:diag1loop}
&=
\begin{pmatrix}
(1+\delta_{11})m_1 & \delta_{12}^{m_1}m_1+\delta_{12}^{m_2}m_2 & \delta_{13}^{m_1}m_1+\delta_{13}^{m_3}m_3\\[+2mm]
\delta_{12}^{m_1}m_1+\delta_{12}^{m_2}m_2 & (1+\delta_{22})m_2 & \delta_{23}^{m_2}m_2+\delta_{23}^{m_3}m_3\\[+2mm]
\delta_{13}^{m_1}m_1+\delta_{13}^{m_3}m_3 & \delta_{23}^{m_2}m_2+\delta_{23}^{m_3}m_3 & (1+\delta_{33})m_3
\end{pmatrix}\;,
\end{align}
where
\begin{eqnarray}
\label{eq:d11}
\delta_{11} & = &\frac{1}{3}(4 \delta_{ee}' + \delta_{\mu\mu}' +
\delta_{\tau\tau}' - 2 \delta_{e\mu} - 2 \delta_{\mu e} - 2
\delta_{e\tau} - 2 \delta_{\tau e} + \delta_{\mu\tau} +
\delta_{\tau\mu})\;, \nonumber \\
\delta_{22} & = &\frac{2}{3} ( \delta_{ee}' + \delta_{\mu\mu}' + \delta_{\tau\tau}' + \delta_{e\mu} + \delta_{\mu e} + \delta_{e\tau} + \delta_{\tau e} + \delta_{\mu\tau} + \delta_{\tau\mu} )\;, \nonumber\\
\delta_{33} & = &\delta_{\mu\mu}' + \delta_{\tau\tau}' - \delta_{\mu\tau} - \delta_{\tau\mu}
\;, \nonumber\\
\delta_{12}^{m_1} & =& \frac{1}{3\sqrt{2}}(2 \delta_{ee}' - \delta_{\mu\mu}' - \delta_{\tau\tau}' - \delta_{e\mu} + 2 \delta_{\mu e} - \delta_{e\tau} + 2 \delta_{\tau e} - \delta_{\mu\tau} - \delta_{\tau\mu}) \;, \nonumber\\
\delta_{12}^{m_2} & = &\frac{1}{3\sqrt{2}}(2 \delta_{ee}' -
\delta_{\mu\mu}' - \delta_{\tau\tau}' + 2 \delta_{e\mu} - \delta_{\mu
e} + 2 \delta_{e\tau} - \delta_{\tau e} - \delta_{\mu\tau} -
\delta_{\tau\mu})\;, \\
\delta_{13}^{m_1} & = &\frac{1}{2\sqrt{3}}(\delta_{\mu\mu}' -
\delta_{\tau\tau}' - 2 \delta_{\mu e} + 2 \delta_{\tau e} +
\delta_{\mu\tau} - \delta_{\tau\mu}) \;, \nonumber \\
\delta_{13}^{m_3} & = &\frac{1}{2\sqrt{3}}(\delta_{\mu\mu}' -
\delta_{\tau\tau}' - 2 \delta_{e\mu} + 2 \delta_{e\tau} -
\delta_{\mu\tau} + \delta_{\tau\mu})\;, \nonumber \\
\delta_{23}^{m_2} & = &\frac{1}{\sqrt{6}}(-\delta_{\mu\mu}' +
\delta_{\tau\tau}' - \delta_{\mu e} + \delta_{\tau e} -
\delta_{\mu\tau} + \delta_{\tau\mu}) \;, \nonumber \\
\delta_{23}^{m_3} & = &\frac{1}{\sqrt{6}}(-\delta_{\mu\mu}' + \delta_{\tau\tau}' - \delta_{e\mu} + \delta_{e\tau} + \delta_{\mu\tau} - \delta_{\tau\mu}) \nonumber\;.
\end{eqnarray}
The matrix in Eq.~(\ref{eq:diag1loop}) should be nearly diagonal and
its off-diagonal elements determine the deviations from
tri-bimaximality. We define the following parameters characterizing
the deviations from tri-bimaximality:
\begin{equation}
\label{eq:epsij}
\epsilon_{ij}\simeq\frac{1}{2}\tan(2\epsilon_{ij})=\frac{(\hat{m}_{\nu}^{\textrm{1-loop}})_{ij}}
{(\hat{m}_{\nu}^{\textrm{1-loop}})_{jj}-(\hat{m}_{\nu}^{\textrm{1-loop}})_{ii}}\;,
\end{equation}
so that
\begin{align}
\begin{split}\label{eq:angles}
\theta_{\textrm{atm}}&\equiv\theta_{23}\simeq\theta_{23}^0+\epsilon_{23}\;,\\% &&\textrm{where } \tan^2\theta_{23}^0=1\\
\theta_{\textrm{Chooz}}&\equiv\theta_{13}\simeq\theta_{13}^0+\epsilon_{13}\;, \\%&&\textrm{where } \sin^2\theta_{13}^0=0\\
\theta_{\textrm{sol}}&\equiv\theta_{12}\simeq\theta_{12}^0+\epsilon_{12}\;.
\end{split}
\end{align}
Substituting the matrix elements in Eq.~(\ref{eq:diag1loop}) into Eq.~(\ref{eq:epsij}), we get:
\begin{align}
\label{eq:eps23}
\epsilon_{23} & = \frac{\delta_{23}^{m_2} m_2 + \delta_{23}^{m_3} m_3}{(-1 - \delta_{22}) m_2 + (1 + \delta_{33}) m_3}\;, \\
\epsilon_{13} & = \frac{\delta_{13}^{m_1} m_1 + \delta_{13}^{m_3} m_3}{(-1 - \delta_{11}) m_1 + (1 + \delta_{33}) m_3}\;, \\
\label{eq:eps12}
\epsilon_{12} & = \frac{\delta_{12}^{m_1} m_1 + \delta_{12}^{m_2} m_2}{(-1 - \delta_{11}) m_1 + (1 + \delta_{22}) m_2} \;.
\end{align}
Taking into account that for mSUGRA the off-diagonal elements in the
matrix in Eq.~(\ref{eq:delta-matrix}) are all zero, see
Eq.(\ref{eq:zero-off-diag}), the $\delta$'s in Eq.~(\ref{eq:d11})
become
\begin{equation}
\begin{split}
\label{eq:d11-mSUGRA}
\delta_{11} & = \delta_{11}^0 = \frac{1}{3}(4 \delta_{ee}' + \delta_{\mu\mu}' + \delta_{\tau\tau}' ) \;,\\
\delta_{22} & = \delta_{22}^0 = \frac{2}{3} ( \delta_{ee}' + \delta_{\mu\mu}' + \delta_{\tau\tau}' ) \;,\\
\delta_{33} & = \delta_{33}^0 = \delta_{\mu\mu}' + \delta_{\tau\tau}' \;,\\
\delta_{12}^{m_1} & = \delta_{12}^{m_2} = \delta_{12}^0 = \frac{1}{3\sqrt{2}}(2 \delta_{ee}' - \delta_{\mu\mu}' - \delta_{\tau\tau}' ) \;,\\
\delta_{13}^{m_1} & = \delta_{13}^{m_3} =\delta_{13}^0 = \frac{1}{2\sqrt{3}}(\delta_{\mu\mu}' - \delta_{\tau\tau}' ) \;,
\\
\delta_{23}^{m_2} & = \delta_{23}^{m_3} =\delta_{23}^0 = \frac{-1}{\sqrt{6}}(\delta_{\mu\mu}' - \delta_{\tau\tau}' ) \;.
\end{split}
\end{equation}
The deviations of the neutrino mixing angles from the HPS value given
in Eqs.~(\ref{eq:eps23}-\ref{eq:eps12}) then become
\begin{align}
\label{eq:eps23-mSUGRA}
\epsilon_{23} & = \frac{\delta_{23}^0 ( m_2 + m_3 )}{(-1 - \delta_{22}^0) m_2 + (1 + \delta_{33}^0) m_3}\;, \\
\label{eq:eps13-mSUGRA}
\epsilon_{13} & = \frac{\delta_{13}^0 ( m_1 + m_3 )}{(-1 - \delta_{11}^0) m_1 + (1 + \delta_{33}^0) m_3}\;, \\
\label{eq:eps12-mSUGRA}
\epsilon_{12} & = \frac{\delta_{12}^0 ( m_1 + m_2 )}{(-1 - \delta_{11}^0) m_1 + (1 + \delta_{22}^0) m_2} \;.
\end{align}
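These first-order expressions are easy to test against exact diagonalization; the following NumPy sketch (ours, with arbitrary sample masses and deltas) compares $\tan^2\theta_{12}$ obtained from Eq.~(\ref{eq:eps12-mSUGRA}) with the value from numerically diagonalizing Eq.~(\ref{eq:m-neu-1loop}):
\begin{verbatim}
import numpy as np

U = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),  0.0],
              [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
              [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)]])
m = np.array([0.0500, 0.0508, 0.0691])     # near-degenerate sample (eV)
dee, dmm, dtt = 1.0e-4, 1.0e-4, -1.9e-3    # sample delta'_{aa} (dtt includes delta_tau<0)
delta = np.diag([dee, dmm, dtt])
m_tree = U @ np.diag(m) @ U.T
m_loop = m_tree + delta.T @ m_tree + m_tree @ delta

w, V = np.linalg.eigh(m_loop)              # exact diagonalization
t12_exact = (V[0, 1]/V[0, 0])**2

d12 = (2*dee - dmm - dtt) / (3*np.sqrt(2))           # Eq. (eq:d11-mSUGRA)
eps12 = d12 * (m[0] + m[1]) / (m[1] - m[0])          # Eq. (eq:eps12-mSUGRA)
t12_first = np.tan(np.arctan(np.sqrt(0.5)) + eps12)**2
print(t12_exact, t12_first)   # agree to first order in the deltas
\end{verbatim}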
If $\epsilon_{12}$, given in Eq.~(\ref{eq:eps12-mSUGRA}),
is always positive, $\theta_{\textrm{sol}}$ can only increase,
see Eq.~(\ref{eq:angles}). The denominator in
Eq.~(\ref{eq:eps12-mSUGRA}) can be approximated to
\begin{equation}
(-1 - \delta_{11}^0) m_1 + (1 + \delta_{22}^0) m_2 \simeq - m_1 + m_2 > 0
\end{equation}
and hence, since $m_2>m_1$, is always positive. The sign of
$\epsilon_{12}$ will be the sign of $\delta_{12}^0$ given by
Eq.~(\ref{eq:d11-mSUGRA}). Considering the expressions for the deltas
given in Eq.~(\ref{radneumass}) and bearing in mind that the
Passarino-Veltmann functions depend rather smoothly on their
arguments, we can take them out of the sum. The following very rough
estimations of the threshold corrections result
\begin{align}\begin{split}\label{eq:del-ap-1}
\delta_{\alpha\alpha}&\simeq\frac{1}{32\pi^2}(3g^2(B_1+4C_{00})+g'^2(B_1+4C_{00}))\;,\qquad (\alpha=e,\mu)\;,\\%\label{eq:del-tau-1}
\delta_{\tau\tau}&\simeq\frac{1}{32\pi^2}(3g^2(B_1+4C_{00})+g'^2(B_1+4C_{00})+2h^{2}_{\tau}B_1)\;,
\end{split}\end{align}
where we have neglected the charged lepton Yukawa couplings for
$\alpha=e,\mu$. Using
\begin{equation}
\lim_{m_{\tilde L_i}^2\to\infty}\frac{B_1(m_{\chi_{A}}^2,m_{\tilde L_i}^2)}{C_{00}(m_{\chi_{A}}^2,m_{\chi_{B}}^2,m_{\tilde L_i}^2)}
=-2\;,
\end{equation}
Eq.~(\ref{eq:del-ap-1}) becomes
\begin{align}\begin{split}\label{eq:del-ap-2}
\delta_{\alpha\alpha}&\simeq\frac{-B_1}{32\pi^2}(3g^2+g'^2)\;,\qquad (\alpha=e,\mu)\;,\\%\label{eq:del-tau-2}
\delta_{\tau\tau}&\simeq\frac{-B_1}{32\pi^2}(3g^2+g'^2-2h^{2}_{\tau})\;.
\end{split}\end{align}
Therefore, the contribution of the threshold corrections to $\delta_{12}^0$ is roughly
\begin{equation}
2\delta_{ee}-\delta_{\mu\mu}-\delta_{\tau\tau}\simeq\frac{-B_1}{16\pi^2}h_{\tau}^2\;.
\end{equation}
Besides the threshold correction contributions, one also has to
consider the RGE running contribution. Here the dominant part
obviously is $\delta_{\tau}$, given in Eq.~(\ref{eq:del-RGE}). The
approximated expression for $\delta_{12}^0$, defined in
Eq.~(\ref{eq:eps12-mSUGRA}), is then
\begin{equation}\label{eq:apr-d12-1}
\delta_{12}^0\simeq \frac{1}{3\sqrt2} \left( 2\delta_{ee}-\delta_{\mu\mu}-\delta_{\tau\tau}-\delta_{\tau} \right)
\simeq \frac{1}{3\sqrt2} \frac{h_{\tau}^2}{16\pi^2}\left[-B_1+\ln\left(\frac{M_{\textrm{GUT}}}{M_{\textrm{EWSB}}}\right) \right]\;.
\end{equation}
Considering that in the limit where the slepton mass goes to infinity,
the Passarino-Veltman function $B_1$ behaves as
\begin{equation}
\lim_{m_{\tilde L_i}^2\to\infty}B_1(m_{\chi_{A}}^2,m_{\tilde L_i}^2)
\simeq\frac{1}{2}\ln\left(\frac{m_{\tilde L_i}^2}{m_{\chi_{A}}^2}\right)\;,
\end{equation}
one obtains, from Eq.~(\ref{eq:apr-d12-1}),
\begin{equation}\label{eq:apr-d12-2}
\delta_{12}^0\simeq \frac{1}{3\sqrt2} \left(2\delta_{ee}-\delta_{\mu\mu}-\delta_{\tau\tau}-\delta_{\tau} \right)
\simeq \frac{1}{3\sqrt2} \frac{h_{\tau}^2}{16\pi^2}\left[\ln\left(\frac{M_{\textrm{GUT}}}{M_{\textrm{EWSB}}}\right)
-\ln\left(\frac{m_{\tilde L_i}}{m_{\chi_{A}}}\right)\right]\;,
\end{equation}
which is always positive, thus explaining why $\epsilon_{12}>0$.
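Plugging in representative numbers (ours: $h_\tau$ at large $\tan\beta$, $M_{\textrm{GUT}}\sim 2\times 10^{16}$ GeV, $M_{\textrm{EWSB}}\sim 1$ TeV, and an arbitrary slepton-to-chargino mass ratio) confirms the expected size of the effect:
\begin{verbatim}
import numpy as np

m_tau, v = 1.777, 246.0                    # GeV
tan_beta = 50.0
h_tau = np.sqrt(2)*m_tau/(v*np.cos(np.arctan(tan_beta)))
M_GUT, M_EWSB = 2.0e16, 1.0e3              # GeV (illustrative)
mass_ratio = 3.0                           # m_slepton / m_chargino (sample)
d12 = (h_tau**2/(16*np.pi**2)) / (3*np.sqrt(2)) \
      * (np.log(M_GUT/M_EWSB) - np.log(mass_ratio))
print(f"delta_12^0 ~ {d12:.1e}")           # ~1e-2: a positive, percent-level shift
\end{verbatim}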
Note that although the threshold corrections are in general larger
than the RGE contributions, in $\delta_{12}^0$ there is a cancellation
among the threshold corrections so that the $\delta_{\tau}$ RGE
contribution becomes the relevant term. We have numerically checked
that
\begin{equation}
2\delta_{ee}-\delta_{\mu\mu}-\delta_{\tau\tau}\sim\mathcal{O}(10^{(-6,-3)})\;.
\end{equation}
This cancellation among the threshold corrections is the reason why
the solar neutrino mixing angle can only increase with respect to its HPS
reference value.
We now turn to the other two neutrino mixing angles. In the mSUGRA
framework the deviations from the HPS predictions are much smaller
than found for the solar mixing parameter, and fit within their
current experimental $3\sigma$ allowed range given in
Ref.~\cite{Maltoni:2004ei} for acceptable $m_{\nu_1}$ values.
The reason for this can be understood from
Eqs.~(\ref{eq:eps23-mSUGRA}-\ref{eq:eps12-mSUGRA}). On the one hand,
the deltas on the numerators, given by Eq.~(\ref{eq:d11-mSUGRA}), are
all of the same order. For small values of $m_{\nu_1}$ the deviations
are all negligible, since they are all proportional to the previous
deltas. For large $m_{\nu_1}$ values the neutrino masses are nearly
degenerate so that the numerators in
Eqs.~(\ref{eq:eps23-mSUGRA}-\ref{eq:eps12-mSUGRA}) are all of the same
order. The denominators in
Eqs.~(\ref{eq:eps23-mSUGRA}-\ref{eq:eps12-mSUGRA}) can be approximated
as
\begin{align}
\label{eq:denominators-1}
(-1 - \delta_{22}^0) m_2 + (1 + \delta_{33}^0) m_3 & \simeq m_3 - m_2\;,\\
(-1 - \delta_{11}^0) m_1 + (1 + \delta_{33}^0) m_3 & \simeq m_3 - m_1\;,\\
(-1 - \delta_{11}^0) m_1 + (1 + \delta_{22}^0) m_2 & \simeq m_2 - m_1\;.
\label{eq:denominators-2}
\end{align}
Although these mass differences are very small, $m_3-m_2$ and
$m_3-m_1$ are larger than $m_2-m_1$, thus making $\epsilon_{23}$ and
$\epsilon_{13}$ smaller than $\epsilon_{12}$.
We now comment briefly on inverse hierarchy. As can be seen from Eqs.~(\ref{eq:denominators-1}-\ref{eq:denominators-2}),
for inverse hierarchy, $m_2-m_1$ is still much smaller than $m_3-m_2$ or $m_3-m_1$, while the latter two
just change sign but not the magnitude. We therefore expect that the above discussion remains essentially correct
also for inverse hierarchy.
We should stress that we have so far considered the $CP$-conserving
HPS ansatz, with same-$CP$-sign neutrino mass
eigenvalues,
\begin{equation}
m_1,m_2,m_3>0\;.
\end{equation}
However, for all other CP combinations the denominators in
Eqs.~(\ref{eq:eps23-mSUGRA}-\ref{eq:eps12-mSUGRA}) are larger such
that the deviations from HPS mixing pattern become smaller and
correspondingly relax the bound in Eq.~(\ref{eq:bound}). In particular
for the case of opposite CP signs there is no bound, as seen in the right
panel of Fig.~\ref{fig:tansqsol-mnu1}.
\section{Summary and discussion}
We have studied the stability of the HPS mixing ansatz that could
arise from a flavor symmetry valid at some high energy scale, against
supersymmetric radiative corrections (RGE running and threshold
effects). We have adopted a model-independent minimal supergravity
framework where supersymmetry breaking is universal and flavor-blind
at unification. In this case we have found that the solar mixing
angle can only be increased with respect to the HPS reference value.
Under the assumption that all neutrino masses have the same $CP$-sign,
this sets a rough upper bound on the mass of the lightest neutrino
which, in turn, implies that the neutrinoless double beta decay rate
is also bounded as a function of the mSUGRA parameters. In contrast,
in the case of opposite CP signs there is no bound on the lightest
neutrino mass. We have also shown that the atmospheric and reactor
mixing angles remain essentially stable in all cases. It should not
be surprising that the effect of radiative corrections is more
important for the solar angle than for the others. It simply reflects
the fact that the solar splitting is the smaller of the two neutrino mass
splittings.
We stress that in our approach we have assumed only that the matrix
$m_{\nu}^{\textrm{tree}}$ is diagonalized by the HPS matrix at the
unification scale and this gets modified only by minimal supergravity
radiative corrections, universal and flavor-blind at unification.
This concerns the structure of the dimension-five operator,
Fig.~\ref{fig:d-5}. Additional radiative
corrections~\cite{Babu:1993qv} to the solar angle HPS prediction are
expected, if the neutrino mass arises {\sl a la
seesaw}~\cite{Minkowski:1977sc,Orloff:2005nu,schechter:1980gr,Lazarides:1980nt}.
Their magnitude will be determined by the strength of the Yukawa
coupling characterizing the Dirac neutrino mass entry in the seesaw
mass matrix~\cite{borzumati:1986qx}.
This will depend strongly on the details of the model, in particular,
on whether Higgs triplets are present in the
seesaw~\cite{schechter:1980gr} or on whether the seesaw is
extended~\cite{mohapatra:1986bd}.
Scrutinizing the schemes for which it is possible to decrease the
solar mixing angle value predicted by the HPS mixing pattern towards
its currently preferred best fit point value will be considered
elsewhere~\cite{prep}, together with the related issue of the lepton
flavor violating processes that would be expected in these schemes.
\section*{Acknowledgements}
We thank Werner Porod for useful discussions about SPheno. This work was
supported by Spanish grants FPA2005-01269 and BFM2002-00345 and by the
EC RTN network MRTN-CT-2004-503369. M.~H. was supported by a Ramon y
Cajal contract. E.~M. was supported in part by the U.S. Department of Energy under Grant No. DE-FG03-94ER40837. A.~V.~M. was supported by Generalitat Valenciana.
\section{Introduction}
Many university courses involve some element of team-based project work. A set of projects is available for a course and each student submits a subset of projects as acceptable. For each acceptable student--project pair $(s,p)$, there is a weight $w(s,p)$ denoting the \emph{utility} of assigning $s$ to~$p$. The question of whether a given project can run is often contingent on the number of students assigned to it.
Such quota constraints also arise in various other contexts involving the centralized formation of groups, including organizing activity groups at a leisure center, opening facilities to serve a community and coordinating rides within car-sharing systems. In these and similar applications, the goal is to maximize the utility of the assigned agents under the assumption that the number of participants for each open activity is within the activity's prescribed limits.
We model this problem using a weighted bipartite graph $G= (A\,\dot\cup\,P, E)$, where the vertices in $A$ represent \emph{applicants}, while the vertices in $P$ are \emph{posts} they are applying to. So in the above student--project allocation example, $A$ and $P$ represent the students and projects respectively, and $E$ represents the set of acceptable student--project pairs. The edge weights capture the cardinal utilities of an assigned applicant--post pair. Each post has a lower and an upper quota on the number of applicants to be assigned to it, while each applicant can be assigned to at most one post. In a feasible assignment, a post is either \emph{open} or \emph{closed}: the number of applicants assigned to an open post must lie between its lower and upper quota, whilst a closed post has no assigned applicant. The objective is to find a maximum weight many-to-one matching satisfying all lower and upper quotas. We denote this problem by {\sc wmlq}.
In this paper, we study the computational complexity of {\sc wmlq} from various perspectives: Firstly, in \cref{sec:com_rest}, we show that the problem can be solved efficiently if the degree of every post is at most $2$, whereas the problem becomes hard as soon as posts with degree~$3$ are permitted, even when lower and upper quotas are all equal to the degree and every applicant has a degree of~$2$. Furthermore, we show the tractability of the case of pair projects, i.e., when all upper quotas are at most~2. Then, in \cref{sec:bounded_tw}, we study the fixed parameter tractability of {\sc wmlq}. To this end, we generalize the known dynamic program for maximum independent set with bounded treewidth to \textsc{wmlq}. The running time of our algorithm is exponential in the treewidth of the graph, with $u_{\max}$, the maximum upper quota of any vertex, as the basis. This yields a fixed-parameter algorithm when parameterizing by both the treewidth and $u_{\max}$. We show that this exponential dependence on the treewidth cannot be completely separated from the remaining input by establishing a $W[1]$-hardness result for {\sc wmlq} parameterized by treewidth. Finally, in \cref{se:approx}, we discuss the approximability of the problem. We show that a simple greedy algorithm yields an approximation guarantee of $u_{\max}+1$ for {\sc wmlq} and $\sqrt{|A|}+1$ in the case of unit edge weights. We complement these results by showing that these approximation factors are asymptotically best possible, unless $\P = \NP$.
\vspace{-2mm}
\subsection*{Related work}
Among various applications of centralized group formation, perhaps the assignment of medical students to hospitals has received the most attention. In this context, as well as others, the underlying model is a bipartite matching problem involving lower and upper quotas. The \emph{Hospitals / Residents problem with Lower Quotas} ({\sc hrlq})~\cite{BFIM10,HIM14} is a variant of {\sc wmlq} where applicants and posts have ordinal preferences over one another, and we seek a \emph{stable matching} of residents to hospitals. Hamada et al.~\cite{HIM14} considered a version of {\sc hrlq} where hospitals cannot be closed, whereas the model of Bir\'o et al.~\cite{BFIM10} permitted hospital closures. Strategyproof mechanisms have also been studied in instances with ordinal preferences and no hospital closure~\cite{GHIKUYY14}.
The \emph{Student / Project Allocation problem}~\cite[Section 5.6]{Man13} models the assignment of students to projects offered by lecturers subject to upper and lower quota restrictions on projects and lecturers. Several previous papers have considered the case of ordinal preferences involving students and lecturers~\cite{AIM07,IMY12,MO08} but without allowing lower quotas. However two recent papers~\cite{Kam13,MT13} do permit lower quotas together with project closures, both in the absence of lecturer preferences. Monte and Tumennasan~\cite{MT13} considered the case where each student finds every project acceptable, and showed how to modify the classical Serial Dictatorship mechanism to find a Pareto optimal matching. Kamiyama~\cite{Kam13} generalized this mechanism to the case where students need not find all projects acceptable, and where there may be additional restrictions on the sets of students that can be matched to certain projects. This paper also permits lower quotas and project closures, but our focus is on cardinal utilities rather than ordinal preferences.
The unit-weight version of {\sc wmlq} is closely related to the \emph{$D$-matching problem}~\cite{Cor88,Lov73,Seb93}, a variant of graph factor problems~\cite{Plu07}. In an instance of the $D$-matching problem, we are given a graph $G$, and a domain of integers is assigned to each vertex. The goal is to find a subgraph $G'$ of $G$ such that every vertex has a degree in $G'$ that is contained in its domain. Lov\'asz~\cite{Lov72} showed that the problem of deciding whether such a subgraph exists is $\NP$-complete, even if each domain is either $\{1\}$ or~$\{0,3\}$. On the other hand, some cases are tractable. For example, if for each domain $D$, the complement of $D$ contains no consecutive integers, the problem is polynomially solvable~\cite{Seb93}.
As observed in~\cite{SS11}, $D$-matchings are closely related to \emph{extended global cardinality constraints} and the authors provide an analysis of the fixed-parameter tractability of a special case of the $D$-matching problem; see~\cref{sec:bounded_tw} for details.
The problem that we study in this paper corresponds to an optimization version of the $D$-matching problem. We consider the special case where $G$ is bipartite and the domain of each applicant vertex is $\{0,1\}$, whilst the domain of each post vertex $p$ is $\{0\}\cup \{\ell(p),\dots,u(p)\}$, where $\ell(p)$ and $u(p)$ denote the lower and upper quotas of $p$ respectively. Since the empty matching is always feasible in our case, our aim is to find a domain-compatible subgraph $G'$ such that the total weight of the edges in $G'$ is maximum.
\section{Degree- and quota-restricted cases}\label{sec:com_rest}
First, we provide a formal definition of the maximum weight many-to-one matching problem with lower quotas (\textsc{wmlq}). Then, we characterize the complexity of the problem in terms of degree constraints on the two vertex sets: applicants and posts. At the end, we discuss the case of bounded upper quota constraints.
\subsection{Problem definition}
In our problem, a set of applicants $A$ and a set of posts $P$ are given. $A$ and $P$ constitute the two vertex sets of an undirected bipartite graph~$G = (V, E)$ with $V = A\,\dot\cup\, P$. For a vertex $v \in V$ we denote by $\delta(v) = \{\{ v, w\} \in E : w\in V\}$ the set of edges incident to $v$ and by $\Gamma(v) = \{w \in V : \{v, w\} \in E\}$ the \emph{neighborhood} of $v$, i.e., the set of vertices that are adjacent to $v$. For a subset of vertices $V'\subset V$, we define $\delta(V') = \bigcup_{v\in V'}\delta(v)$. Each edge carries a \emph{weight} $w: E \rightarrow \mathbb{R}_{\geq 0}$, representing the utility of the corresponding assignment. Each post is equipped with a \emph{lower quota} $\ell: P \rightarrow \mathbb{Z}_{\geq 0}$ and an \emph{upper quota} $u: P \rightarrow \mathbb{Z}_{\geq 0}$ so that $\ell(p) \leq u(p)$ for every $p \in P$. These functions bound the number of admissible applicants for the post (independent of the weight of the corresponding edges). Furthermore, every applicant can be assigned to at most one post.
Thus, an \emph{assignment} is a subset $M \subseteq E$ of the edges such that $|\delta(a) \cap M| \leq 1$ for every applicant $a \in A$ and $|\delta(p) \cap M| \in \left\{ 0, \ell(p), \ell(p)+1, ..., u(p) \right\}$ for every $p \in P$. A post is said to be \emph{open} if the number of applicants assigned to it is greater than~$0$, and \emph{closed} otherwise. The \emph{size} of an assignment $M$, denoted $|M|$, is the number of assigned applicants, while the \emph{weight} of $M$, denoted $w(M)$, is the total weight of the edges in $M$, i.e., $w(M) = \sum_{e \in M} w(e)$. The goal is to find an assignment of maximum weight.
\begin{remark}
Note that when closed posts are not allowed, the problem immediately becomes tractable. This is easy to see in the unweighted case, as any algorithm for maximum flow with lower capacities can be used to determine an optimal solution in polynomial time, and this flow problem can easily be reduced to the classical maximum flow problem. The method extends naturally to the weighted case, as the flow-based linear program has integral extreme points due to its total unimodularity.
\end{remark}
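To make the remark concrete, the following sketch encodes the all-posts-open unweighted case as a minimum-cost circulation, using the textbook elimination of lower bounds via node demands and networkx's \texttt{min\_cost\_flow}. The encoding and all names are our own illustration; it assumes the labels \texttt{'s'} and \texttt{'t'} do not occur as vertex names, and integer edge weights could be handled analogously by using $-w(a,p)$ as arc costs.

\begin{verbatim}
import networkx as nx

def assign_all_posts_open(applicants, posts, edges, lower, upper):
    """Maximum-size assignment opening every post (unweighted case).

    An arc p -> t with bounds [l(p), u(p)] is modeled with capacity
    u(p) - l(p) after pre-routing l(p) units, which shifts the node
    demands. Raises nx.NetworkXUnfeasible if the quotas cannot be met.
    """
    G = nx.DiGraph()
    G.add_node('t', demand=-sum(lower[p] for p in posts))
    for a in applicants:
        G.add_edge('s', a, capacity=1, weight=0)
    for p in posts:
        G.add_node(p, demand=lower[p])
        G.add_edge(p, 't', capacity=upper[p] - lower[p], weight=0)
    for a, p in edges:
        G.add_edge(a, p, capacity=1, weight=-1)   # reward each assignment
    G.add_edge('t', 's', capacity=len(applicants), weight=0)
    flow = nx.min_cost_flow(G)
    return {(a, p) for a, p in edges if flow[a][p] == 1}
\end{verbatim}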
\begin{pr}\textsc{wmlq}\ \\
Input: $\mathcal{I} = (G, w, \ell, u)$; a bipartite graph $G = (A\,\dot\cup\,P, E)$ with edge weights $w$ and lower and upper quotas $\ell$ and $u$.\\
Task: Find an assignment of maximum weight.\\
If $w(e)=1$ for all $e \in E$, we refer to the problem as \textsc{mlq}.
\end{pr}
Some trivial simplifications of the instance can be executed right at the start. If $u(p) > |\Gamma(p)|$ for a post $p$, then $u(p)$ can be replaced by $|\Gamma(p)|$. On the other hand, if $\ell(p) > |\Gamma(p)|$, then post $p$ can immediately be deleted, since no feasible solution can satisfy its lower quota. Moreover, posts with $\ell(p) = 1$ behave identically to posts without a lower quota. From now on we assume that the instances have already been simplified this way.
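These preprocessing steps translate into a few lines of code; the dictionary-based instance format below is our own choice.

\begin{verbatim}
def simplify(neighbors, lower, upper):
    """Apply the trivial simplifications in place.

    neighbors: post -> set of applicants; lower, upper: post -> int.
    """
    for p in list(neighbors):
        deg = len(neighbors[p])
        if lower[p] > deg:        # the lower quota can never be met
            del neighbors[p], lower[p], upper[p]
            continue
        if upper[p] > deg:        # cap the upper quota at the degree
            upper[p] = deg
        if lower[p] == 1:         # l(p) = 1 acts like no lower quota
            lower[p] = 0
\end{verbatim}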
\subsection{Degree-restricted cases}
In this subsection, we will consider \textsc{wmlq}$(i,j)$, a special case of \textsc{wmlq}, in which we restrict ourselves to instances where every applicant submits at most $i$ applications and every post receives at most $j$ applications. In order to establish our first result, we reduce the maximum independent set problem (\textsc{mis}) to \textsc{mlq}. In \textsc{mis}, a graph with $n$ vertices and $m$ edges is given and the task is to find an independent vertex set of maximum size.
\textsc{mis} is not approximable within a factor of~$n^{1-\varepsilon}$ for any~$\varepsilon > 0$, unless $\P = \NP$~\cite{Zuc07}. The problem remains $\APX$-complete even for cubic (3-regular) graphs~\cite{AK00}.
\begin{theorem}
\label{th:max_spa_np}
\textsc{mlq(2,3)} is $\APX$-complete.
\end{theorem}
\begin{proof}
First of all, \textsc{mlq(2,3)} is in $\APX$ because feasible solutions are of polynomial size and the problem has a $4$-approximation (see Theorem~\ref{thm:greedy-approximation}).
For each instance $\mathcal{I}$ of \textsc{mis} on cubic graphs we create an instance $\mathcal{I}'$ of \textsc{mlq} such that there is an independent vertex set of size at least $K$ in $\mathcal{I}$ if and only if $\mathcal{I}'$ admits an assignment of size at least~$3K$, yielding an approximation-preserving reduction. The construction is as follows. For each of the $n$ vertices of the graph $G$ in~$\mathcal{I}$, we create a post with upper and lower quota~3. The $m$ edges of $G$ are represented as $m$ applicants in~$\mathcal{I}'$. For each applicant $a \in A$, $|\Gamma(a)| =2$ and $\Gamma(a)$ comprises the two posts representing the two end vertices of the corresponding edge. Since we work on cubic graphs, $|\Gamma(p)| = 3$ for every post~$p \in P$.
First we show that an independent vertex set of size $K$ can be transformed into an assignment of at least $3K$ applicants. All we need to do is to open a post with its entire neighborhood assigned to it if and only if the vertex representing that post is in the independent set. Since no two posts stand for adjacent vertices in~$G$, their neighborhoods do not intersect. Moreover, the assignment assigns exactly three applicants to each of the $K$ open posts.
To establish the opposite direction, let us assume that an assignment of cardinality at least $3K$ is given. The posts' upper and lower quotas are both set to~3; therefore, the assignment involves at least $K$ open posts. No two of them can represent adjacent vertices in $G$, because otherwise the applicant standing for the edge connecting them would be assigned to both posts at the same time.
The reduction given here is an $L$-reduction~\cite{PY91} with constants $\alpha=\beta=3$. Since \textsc{mlq(2,3)} belongs to $\APX$ and \textsc{mis} is $\APX$-complete in cubic graphs, it follows that \textsc{mlq(2,3)} is $\APX$-complete.
\qed
\end{proof}
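The reduction in the proof of Theorem~\ref{th:max_spa_np} is mechanical to implement; the sketch below (our own encoding) builds the \textsc{mlq} instance from a cubic graph given as a list of 2-element edge tuples.

\begin{verbatim}
def mis_to_mlq(vertices, graph_edges):
    """Cubic-graph MIS instance -> MLQ(2,3) instance."""
    posts = list(vertices)
    lower = {v: 3 for v in posts}     # l(p) = u(p) = 3 for every post
    upper = {v: 3 for v in posts}
    applicants = [tuple(e) for e in graph_edges]  # one applicant per edge
    edges = [(a, v) for a in applicants for v in a]
    return posts, applicants, edges, lower, upper
\end{verbatim}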
So far we have established that if $|\Gamma(a)| \leq 2$ for every applicant $a \in A$ and $|\Gamma(p)| \leq 3$ for every post $p \in P$, then \textsc{mlq} is $\NP$-hard. In the following, we also show that these restrictions are the tightest possible: if $|\Gamma(p)| \leq 2$ for every post $p \in P$, then an optimal assignment can be found efficiently, regardless of~$|\Gamma(a)|$. Note that the case \textsc{wmlq(1,$\infty$)} is trivially solvable.
\begin{theorem}
\label{th:infty_2}
\textsc{wmlq($\infty$,2)} is solvable in $O(n^2 \log n)$ time, where $n = |A| + |P|$.
\end{theorem}
\begin{proof}
After executing the simplification steps described after the problem definition, we apply two more changes to derive our helper graph~$H$. Firstly, if $\ell(p) = 0$, $u(p) = 2$, and $|\Gamma(p)| = 2$, we separate $p$'s two edges, splitting $p$ into two posts with upper quota~1. After this step, every post with $u(p) = 2$ also has $\ell(p) = 2$; all remaining posts have upper quota~1. Then, we replace the edge pair of every post with $\ell(p) = u(p) = 2$ by a single edge connecting the two applicants; this edge carries a weight equal to the sum of the weights of the two deleted edges.
Clearly, any matching in $H$ translates into an assignment of the same weight in $G$ and vice versa. Finding a maximum weight matching in a general graph with $n$ vertices and $m$ edges can be done in $O(n(m + n \log n))$ time~\cite{Gab90}, which reduces to $O(n^2 \log n)$ in our case.\qed
\end{proof}
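The construction of the helper graph $H$ can be sketched as follows. We use networkx's blossom-based \texttt{max\_weight\_matching} in place of the algorithm of~\cite{Gab90}, assume the instance is already simplified (so $|\Gamma(p)| \leq 2$, $u(p) \leq |\Gamma(p)|$ and $\ell(p) \neq 1$ for every post), and omit the routine translation of the returned matching back into an assignment of $G$.

\begin{verbatim}
import networkx as nx

def solve_wmlq_posts_deg2(applicants, posts, neighbors, w, lower, upper):
    H = nx.Graph()
    H.add_nodes_from(applicants)
    for p in posts:
        nbrs = sorted(neighbors[p])
        if upper[p] == 2 and lower[p] == 0:
            # split p into two upper-quota-1 copies, one per incident edge
            for i, a in enumerate(nbrs):
                H.add_edge(a, (p, i), weight=w[a, p])
        elif upper[p] == 2:                  # here l(p) = u(p) = 2
            a1, a2 = nbrs                    # merge p's edges into one edge
            new_w = w[a1, p] + w[a2, p]
            if H.has_edge(a1, a2):           # keep the heavier parallel edge
                new_w = max(new_w, H[a1][a2]['weight'])
            H.add_edge(a1, a2, weight=new_w)
        else:                                # u(p) = 1
            for a in nbrs:
                H.add_edge(a, p, weight=w[a, p])
    return nx.max_weight_matching(H)
\end{verbatim}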
\subsection{Quota-restricted cases}
In this section, we address the problem of \textsc{wmlq} with bounded upper quotas. Note that Theorem~\ref{th:max_spa_np} already tells us that the case of $u(p) \leq 3$ for all posts $p \in P$ is $\NP$-hard to solve. We will now settle the complexity of the only remaining case, where we have instances with every post $p\in P$ having an arbitrary degree and $u(p) \le 2$. This setting models posts that need to be assigned to pairs of applicants.
The problem is connected to various known problems in graph theory, one of them being the \emph{$S$-path packing problem}. In that problem, we are given a graph with a set of terminal vertices~$S$. The task is to pack the highest number of vertex-disjoint paths so that each path starts and ends at a terminal vertex, while all its inner vertices are non-terminal. The problem can be solved in $O(n^{2.38})$ time~\cite{CLL14,SS04} with the help of matroid matching~\cite{Lov80}. An instance of \textsc{mlq} with $\ell(p) = u(p) = 2$ for every post $p \in P$ corresponds to an $S$-path packing instance with $S = A$. The highest number of vertex-disjoint paths starting and ending in $A$ equals half of the cardinality of a maximum assignment. Thus, \textsc{mlq} with $\ell(p) = u(p) = 2$ can also be solved in $O(n^{2.38})$ time. On the other hand, there is no straightforward way to model posts with $u(p) = 1$ in $S$-path packing and introducing weights to the instances also seems to be a challenging task. Some progress has been made for weighted edge-disjoint paths, but to the best of our knowledge the question is unsettled for vertex-disjoint paths~\cite{HP14}.
Here we present a solution for the general case \textsc{wmlq} with $u(p) \leq 2$. Our algorithm is based on $f$-factors of graphs. In the {\em $f$-factor problem}, a graph $G$ and a function $f: V \rightarrow \mathbb{Z}_{\geq 0}$ is given. A set of edges $F \subseteq E$ is called an \emph{$f$-factor} if $\deg_F(v) = f(v)$ for every $v \in V$, where $\deg_F(v)$ denotes the degree of $v$ in the graph~$(V,F)$. Constructing an $f$-factor of maximum weight in a graph with $n$ vertices and $m$ edges or proving that none exists can be done in $O(\phi(m + n \log{n}))$ time, where $\phi$ is the sum of all $f$-values in the graph~\cite{Gab83,Gab90}.
\begin{restatable}{theorem}{restatepairs}
\label{th:u2}
\textsc{wmlq} with $u(p) \leq 2$ for every $p \in P$ can be solved in $O(nm + n^2 \log{ n})$ time, where $n = |V|$ and $m = |E|$.
\end{restatable}
\begin{proof}
In the remainder of this proof we assume that $1 \leq \ell(p) = u(p) \leq 2$ for every post~$p$. This is without loss of generality: posts with $\ell(p) \neq u(p) = 2$ can be transformed into posts with $\ell(p) = u(p) = 2$ by giving them a dummy edge with zero weight, allowing us to pick this edge in order to make up for the raised lower quota. Let us denote the set of posts with $\ell(p) = u(p) = 1$ by~$P_1$.
The graph $G' =(V',E')$ of the constructed $f$-factor instance contains the graph $G =(V,E)$ of our \textsc{wmlq} instance, as shown in Figure~\ref{fi:ffactor}. We add a dummy post $p_d$ to $V$ and connect it to every applicant in~$A$. For every post $p_i$ in $P_1$ we add a dummy vertex $q_i^1$ and connect $p_i$ to $p_d$ by a path of length~2 through $q_i^1$, and for every post $p_i$ with $\ell(p_i) = u(p_i) = 2$ we add two dummy vertices $q_i^1$ and $q_i^2$ and a triangle on $p_i$, $q_i^1$, and $q_i^2$. All new edges in $E' \setminus E$ carry zero weight.
We set $f(p_d) = K$, $f(p) = u(p)$ for every $p \in P$, and $f(v) = 1$ for the rest of the vertices. In the initial version of our algorithm, we solve a weighted $f$-factor problem for every $K \in \{0, 1, \dots, |A| + |P_1|\}$; later we will present a slightly modified version of the $f$-factor instance for which it suffices to construct only two instances.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1, transform shape]
\tikzstyle{vertex} = [circle, draw=black, fill=black, scale= 0.5]
\tikzstyle{edgelabel} = [rectangle, fill=white]
\definecolor{MyPurple}{RGB}{197,0,205}
\pgfmathsetmacro{\d}{0.7}
\pgfmathsetmacro{\b}{3}
\node[vertex, label=above:$p_d$] (m_d) at (-2, 3) {};
\node[vertex, label=above:$p_1$] (m_1) at (0, 3) {};
\node[vertex, label=above:$p_2$] (m_2) at (2, 3) {};
\node[vertex, label=above:$p_3$] (m_3) at (4, 3) {};
\node[vertex, label=above:$p_4$] (m_4) at (6, 3) {};
\node[vertex, label=above:$p_5$] (m_5) at (8, 3) {};
\node[vertex, label=below:$a_1$] (w_1) at (-1, 0) {};
\node[vertex, label=below:$a_2$] (w_2) at (1, 0) {};
\node[vertex, label=below:$a_3$] (w_3) at (3, 0) {};
\node[vertex, label=below:$a_4$] (w_4) at (5, 0) {};
\node[vertex, label=below:$a_5$] (w_5) at (7, 0) {};
\node[vertex, label=below:$a_6$] (w_6) at (9, 0) {};
\node[vertex, label=above:$q_1^1$] (q_11) at ($(m_1) + (-\d, 1)$) {};
\node[vertex, label=above:$q_1^2$] (q_12) at ($(m_1) + (\d, 1)$) {};
\node[vertex, label=above:$q_4^1$] (q_41) at ($(m_4) + (-\d, 1)$) {};
\node[vertex, label=above:$q_4^2$] (q_42) at ($(m_4) + (\d, 1)$) {};
\node[vertex, label=above:$q_5^1$] (q_51) at ($(m_5) + (-\d, 1)$) {};
\node[vertex, label=above:$q_5^2$] (q_52) at ($(m_5) + (\d, 1)$) {};
\node[vertex, label=above:$q_2^1$] (q_21) at ($(m_2) + (-2, 2)$) {};
\node[vertex, label=above:$q_3^1$] (q_31) at ($(m_3) + (-3, 2.7)$) {};
\draw [very thick, dotted, MyPurple] (m_d) -- (w_1);
\draw [very thick, dotted, MyPurple] (m_d) -- (w_2);
\draw [very thick, dotted, MyPurple] (m_d) -- (w_3);
\draw [very thick, dotted, MyPurple] (m_d) -- (w_4);
\draw [very thick, dotted, MyPurple] (m_d) -- (w_5);
\draw [very thick, dotted, MyPurple] (m_d) -- (w_6);
\draw [very thick, dotted, MyPurple] (q_41) -- (q_42);
\draw [very thick, dotted, MyPurple] (m_4) -- (q_41);
\draw [very thick, dotted, MyPurple] (m_4) -- (q_42);
\draw [very thick, dotted, MyPurple] (q_11) -- (q_12);
\draw [very thick, dotted, MyPurple] (m_1) -- (q_11);
\draw [very thick, dotted, MyPurple] (m_1) -- (q_12);
\draw [very thick, dotted, MyPurple] (q_51) -- (q_52);
\draw [very thick, dotted, MyPurple] (m_5) -- (q_51);
\draw [very thick, dotted, MyPurple] (m_5) -- (q_52);
\draw [thick] (m_1) -- (w_1);
\draw [thick] (m_1) -- (w_2);
\draw [thick] (m_1) -- (w_3);
\draw [thick] (m_2) -- (w_2);
\draw [thick] (m_3) -- (w_3);
\draw [thick] (m_3) -- (w_2);
\draw [thick] (m_4) -- (w_4);
\draw [thick] (m_4) -- (w_6);
\draw [thick] (m_5) -- (w_4);
\draw [thick] (m_5) -- (w_3);
\draw [thick] (m_3) -- (w_4);
\draw [thick] (m_5) -- (w_5);
\draw [thick] (m_5) -- (w_6);
\draw [very thick, dotted, MyPurple] (m_2) to[out=120,in=0, distance=1cm ] (q_21);
\draw [very thick, dotted, MyPurple] (q_21) to[out=180,in=60, distance=1cm ] (m_d);
\draw [very thick, dotted, MyPurple] (m_3) to[out=120,in=0, distance=2cm ] (q_31);
\draw [very thick, dotted, MyPurple] (q_31) to[out=180,in=60, distance=2cm ] (m_d);
\end{tikzpicture}
\caption{The transformation from \textsc{wmlq} to an $f$-factor problem. The solid edges form $G$, while the dotted edges are the added ones, carrying weight~0.}
\label{fi:ffactor}
\end{figure}
First we show that if there is a feasible assignment $M$ in $G$ so that the number of unmatched applicants and the number of open posts in $P_1$ add up to $K$, then it can be extended to an $f$-factor $M'$ of the same weight in~$G'$. We construct $M'$ starting with $M$ and add the following edges to it:
\begin{itemize}
\item $\set{p_d,a_i}$ for every applicant $a_i$ that is unmatched in~$M$;
\item $\set{q_i^1,p_i}$ and $\set{q_i^2,p_i}$ for every post $p_i \notin P_1$ that is closed in~$M$;
\item $\set{q_i^1,q_i^2}$ for every post $p_i \notin P_1$ that is open in~$M$;
\item $\set{q_i^1,p_i}$ for every post $p_i \in P_1$ that is closed in~$M$;
\item $\set{p_d,q_i^1}$ for every post $p_i \in P_1$ that is open in~$M$;
\end{itemize}
For all vertices~$v \neq p_d$, it immediately follows from the construction that $\deg_{M'}(v) = f(v)$. The same holds for $p_d$ as well, because edges of $\delta(p_d)$ are added if and only if an applicant is unmatched or a post in $P_1$ is open, and we assumed that these numbers add up to~$K$.
It is easy to see that if there is an $f$-factor $M'$ in~$G'$, then its restriction to $G$ is a feasible assignment $M$ of the same weight so that the number of unmatched applicants and the number of open posts in $P_1$ add up to~$K$. Since $f(q_i^1) = 1$ for every $q_i^1$ added to a post $p_i \in P_1$, it is either the case that $p_i$ is closed in $M$ or $p_d q_i^1 \in M'$. Regarding posts outside of $P_1$, we need to show that the two dummy edges incident to them are either both in $M'$ or neither of them is. Assume without loss of generality that $p_i q_i^1 \in M'$ and $p_i q_i^2 \notin M'$ for some $p_i \notin P_1$. Then $q_i^1 q_i^2 \notin M'$ because $f(q_i^1) = 1$, so $\deg_{M'}(q_i^2) = 0$ while $f(q_i^2) = 1$, and $M'$ cannot be an $f$-factor.
So far we have shown that it is sufficient to test $|A| + |P_1| +1$ values for $f(p_d)$ and collect the optimal assignments given by the maximum weight $f$-factors; comparing the weights of these locally optimal solutions delivers a global optimum. A slight modification of the $f$-factor instance allows us to construct only two instances. Similarly to the triangles attached to posts in $P \setminus P_1$, triangles are added to $p_d$ as well. The added vertices have $f$-value~1 and the added edges carry weight~0. The number of such triangles hanging on $p_d$ is~$\ceil{\frac{|A| + |P_1|}{2}}$. These triangles can take up all the $f$-value of $p_d$ if necessary, but by choosing the edge not incident to $p_d$ they can also let $p_d$ fill up its $f$-value with other edges. Since a triangle takes up either 0 or 2 of $p_d$'s $f$-value, we need to separate the two parity cases. Thus, to cover all $|A| + |P_1| + 1$ possible values for $f(p_d)$, in one instance we set $f(p_d) = |A| + |P_1| + 1$ and in the other $f(p_d) = |A| + |P_1|$.
\qed
\end{proof}
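For reference, the construction of the $f$-factor instance $(G', f)$ for a fixed guess $K$ can be written down directly; the naming scheme below is ours, and no $f$-factor solver is included.

\begin{verbatim}
def build_f_factor_instance(applicants, posts, edges, w, upper, K):
    """Assumes 1 <= l(p) = u(p) <= 2 for all posts, as in the proof."""
    Gp = [(a, p, w[a, p]) for (a, p) in edges]  # weighted edges of G'
    f = {a: 1 for a in applicants}
    f['p_d'] = K                                # the dummy post
    for a in applicants:
        Gp.append(('p_d', a, 0))
    for p in posts:
        f[p] = upper[p]
        if upper[p] == 1:       # p in P_1: path p - q - p_d of length 2
            q = ('q', p, 1)
            f[q] = 1
            Gp += [(p, q, 0), (q, 'p_d', 0)]
        else:                   # l(p) = u(p) = 2: triangle on p, q1, q2
            q1, q2 = ('q', p, 1), ('q', p, 2)
            f[q1] = f[q2] = 1
            Gp += [(p, q1, 0), (p, q2, 0), (q1, q2, 0)]
    return Gp, f
\end{verbatim}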
\section{Bounded treewidth graphs}
\label{sec:bounded_tw}
In this section, we investigate \textsc{wmlq} from the point of view of fixed-parameter tractability and analyze how efficiently the problem can be solved for instances with a bounded treewidth.
\paragraph{Fixed-parameter tractability.}
This field of complexity theory is motivated by the fact that in many applications of optimization problems certain input parameters stay small even for large instances.
A problem, parameterized by a parameter $k$, is fixed-parameter tractable ($\FPT$) if there is an algorithm solving it in time $f(k)\cdot \phi(n)$, where $f$ is a computable function depending only on $k$, $\phi$ is a polynomial function, and $n$ is the input size of the instance. Note that this definition not only requires that the problem can be solved in polynomial time for instances where $k$ is bounded by a constant, but also that the dependence of the running time on $k$ is separable from the part depending on the input size. On the other hand, if a problem is shown to be \emph{$\W[1]$-hard}, then the latter property can only be fulfilled if $\FPT = \W[1]$, which would imply $\NP \subseteq \DTIME(2^{o(n)})$.
For more details on fixed-parameter algorithms see, e.g.,~\cite{Nie06}.
\paragraph{Treewidth.}
In case of {\sc wmlq} we focus on the parameter \emph{treewidth}, which, on an intuitive level, describes the likeness of a graph to a tree.
A \emph{tree decomposition} of graph $G$ consists of a tree whose nodes---also called \emph{bags}---are subsets of~$V(G)$. These must satisfy the following three requirements.
\begin{enumerate}
\item Every vertex of $G$ belongs to at least one bag of the tree.
\item For every edge $\{a, p\} \in E(G)$, there is a bag containing both $a$ and $p$.
\item If a vertex in $V(G)$ occurs in two bags of the tree, then it also occurs in all bags on the unique path connecting them.
\end{enumerate}
The \emph{width} of a tree decomposition with a set of bags $B$ is $\max_{b \in B} |b| - 1$.
The \emph{treewidth} of a graph $G$, $\tw(G)$, is the smallest width among all tree decompositions of~$G$. It is well known that a tree decomposition of smallest width can be found by a fixed-parameter algorithm when parameterized by $\tw(G)$~\cite{Bod96}.
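The three properties are straightforward to verify programmatically. The checker below is our own illustration: it assumes the decomposition tree is given by its edge set, that \texttt{bags} maps tree nodes to vertex sets, and that edges of $G$ are 2-tuples.

\begin{verbatim}
import networkx as nx

def is_tree_decomposition(G_vertices, G_edges, tree_edges, bags):
    T = nx.Graph(tree_edges)
    T.add_nodes_from(bags)
    # 1. every vertex of G belongs to at least one bag
    if not set(G_vertices) <= set().union(*bags.values()):
        return False
    # 2. every edge of G is contained in some bag
    if not all(any({a, p} <= bags[b] for b in bags) for a, p in G_edges):
        return False
    # 3. the bags containing any fixed vertex induce a subtree
    return all(nx.is_connected(T.subgraph(
        [b for b in bags if v in bags[b]])) for v in G_vertices)

def width(bags):
    return max(len(b) for b in bags.values()) - 1
\end{verbatim}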
\medskip
In the following, we show that {\sc wmlq} is fixed-parameter tractable when parameterized simultaneously by the treewidth and $u_{\max}$, whereas it remains $W[1]$-hard when only parameterized by the treewidth.
A similar study of the fixed-parameter tractability of the related \emph{extended global cardinality constraint problem} (\textsc{egcc}) was conducted in~\cite{SS11}.
\textsc{egcc} corresponds to the special case of the $D$-matching problem where the graph is bipartite and on one side of the bipartition all vertices have the domain $\{1\}$. Differently from \textsc{wmlq}, \textsc{egcc} is a feasibility problem (note that the feasibility version of \textsc{wmlq} is trivial, as the empty assignment is always feasible).
The authors of~\cite{SS11} provide a fixed-parameter algorithm for \textsc{egcc} when parameterized simultaneously by the treewidth of the graph and the maximum domain size, and they show that the problem is $\W[1]$-hard when only parameterized by the treewidth. These results mirror our results for \textsc{wmlq}, and indeed both our FPT-algorithm for \textsc{wmlq} and the one in~\cite{SS11} are extensions of the same classic dynamic program for the underlying maximum independent set problem. However, our hardness result uses a completely different reduction than the one in~\cite{SS11}. The latter makes heavy use of the fact that the domains can be arbitrary sets, whereas in \textsc{wmlq}, we are confined to intervals.
\subsection*{Algorithm for bounded treewidth graphs}
For every tree decomposition of a given width, a nice tree decomposition of the same width can be found in linear time~\cite{Klo94}. A nice tree decomposition consists exclusively of the following four kinds of bags:
\begin{enumerate}
\item leaf bag: $|b| = 1$ and $b$ has no child;
\item introduce bag: $b$ has exactly one child $b_1$, so that $b_1 \subset b$ and $|b \setminus b_1| = 1$;
\item forget bag: $b$ has exactly one child $b_1$, so that $b \subset b_1$ and $|b_1 \setminus b| = 1$;
\item join bag: $b$ has exactly two children $b_1$ and $b_2$, so that $b = b_1 = b_2$.
\end{enumerate}
\paragraph{Basic notation.}
For ease of exposition, we will define $\ell(a) := u(a) := 1$ for all $a \in A$. Furthermore, throughout this section we will deal with vectors $\alpha \in \mathbb{Z}^U$ for some $U \subseteq V$. We define the notion of extension and restriction of such a vector $\alpha$.
For $U' \subseteq U$ and $\alpha \in \mathbb{Z}^U$ define $\alpha|_{U'}$ as the restriction of $\alpha$ to $U'$, i.e., $\alpha|_{U'} \in \mathbb{Z}^{U'}$ and $\alpha|_{U'}(v) = \alpha(v)$ for all $v \in U'$. For $v \in V \setminus U$ and $i \in \mathbb{Z}$ let further $[\alpha, i]_v$ be the extension of $\alpha$ to $U \cup \{v\}$ defined by $[\alpha, i]_v(v') := \alpha(v')$ for all $v' \in U$ and $[\alpha, i]_v(v) := i$.
For a set of edges $S \subseteq E_b$ we define $\alpha_{S, U} \in \mathbb{Z}^{U}$ by $\alpha_{S, U}(v) := |\delta(v) \cap S|$, for all $v\in U$.
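In code, these operations are one-liners on dictionary-valued vectors; the helper names are our own.

\begin{verbatim}
def restrict(alpha, U):
    """The restriction alpha|_U of a vector alpha (a dict vertex -> int)."""
    return {v: alpha[v] for v in U}

def extend(alpha, v, i):
    """The extension [alpha, i]_v; requires that v is not already in alpha."""
    assert v not in alpha
    return {**alpha, v: i}

def alpha_of(S, U, delta):
    """alpha_{S,U}(v) = |delta(v) & S| for an edge set S, where delta
    maps each vertex to the set of its incident edges."""
    return {v: len(delta[v] & S) for v in U}
\end{verbatim}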
\paragraph{Assignment vectors.}
For any bag $b$, let $V_b \subseteq V$ denote the set of vertices contained in the union of bags present in the subtree rooted at~$b$. We define the graph $G_b = (V_b, E_b)$ where $E_b := E[V_b] \setminus E[b]$. A \emph{partial assignment} for bag $b$ is an assignment $M \subseteq E_b$ of $G_b$ such that $|M \cap \delta(v)| = 0$ or $\ell(v) \leq |M \cap \delta(v)| \leq u(v)$ for all $v \in V_b \setminus b$.\footnote{Note that this definition allows applicants and posts in $b$ to be assigned arbitrarily often and that by definition of $G_b$, no vertex in $b$ is assigned to another vertex in~$b$.}
An \emph{assignment vector} for bag $b$ is a vector $\alpha \in X_b := \{0, \dots, u_{\max}\}^b$. We say a partial assignment $M$ for $b$ \emph{agrees} with an assignment vector $\alpha \in X_b$, if $\alpha(v) = |M \cap \delta(v)|$ for all $v \in b$.
For every bag $b$ and every $\alpha \in X_b$, let $\mathcal{M}_b(\alpha)$ be the set of partial assignments for $b$ that agree with $\alpha$ and let
\[W_b(\alpha) := \max\left(\{w(M) : M \in \mathcal{M}_b(\alpha)\} \cup \{-\infty\}\right)\]
denote the optimal value of any assignment that agrees with $\alpha$ for the graph $G_b$ (note that a value of $-\infty$ implies that $\alpha$ cannot be attained). We further denote
\[\mathcal{M}^*_b(\alpha) := \{M \in \mathcal{M}_b(\alpha) : w(M) = W_b(\alpha)\}.\]
In the following, we will provide a series of lemmas that reveals how to efficiently obtain an element of $\mathcal{M}^*_b(\alpha)$ for every $\alpha \in X_b$ for a bag $b$ (or showing $W_b(\alpha) = -\infty$), assuming such representatives of each set $\mathcal{M}^*_{b'}(\alpha)$ have already been computed for every child $b'$ of $b$ for all $\alpha \in X_{b'}$.
\begin{lemma}\label{lem:leaf}
Let $b$ be a leaf bag. Then $\mathcal{M}^*_b(0) = \{\emptyset\}$ and $\mathcal{M}^*_b(\alpha) = \emptyset$ for any $\alpha \in X_b \setminus \{0\}$.
\end{lemma}
\begin{proof}
This follows directly from the fact that $E_b = \emptyset$ for all leaf bags and thus the only element in $b$ cannot be assigned.\qed
\end{proof}
\begin{lemma}\label{lem:introduce}
Let $b$ be an introduce bag such that $b'$ is the only child of $b$ and $b \setminus b' = \{v'\}$. Let $\alpha \in X_b$. Then
\begin{align*}
\mathcal{M}^*_b(\alpha) = \begin{cases}
\mathcal{M}^*_{b'}(\alpha|_{b'}) & \textup{if } \alpha(v') = 0,\\
\emptyset & \textup{otherwise.}
\end{cases}
\end{align*}
\end{lemma}
\begin{proof}
Note that $\Gamma(v') \cap V_b \subseteq b$ by Properties 2 and 3
of the tree decomposition. This implies $\delta(v') \cap E_b = \emptyset$ and hence the lemma.\qed
\end{proof}
\begin{lemma}\label{lem:forget}
Let $b$ be a forget bag such that $b'$ is the unique child of $b$ and $b = b' \setminus \{v'\}$ for some $v' \in b'$. Let $\alpha \in X_b$.
Let $(S^*, i^*)$ be an optimal solution to
\begin{align*}
\textup{[forget]}\qquad\max \ \ & w(S) + W_{b'}([\alpha - \alpha_{S, b}, i - |S|]_{v'})\\
\textup{s.t.} \ \ & |S| \leq i,\\
& \alpha_{S, b} \leq \alpha,\\
& S \subseteq \delta(v') \cap \delta(b),\\
& i \in \{0, \ell(v'), \dots, u(v')\}.
\end{align*}
Then $M \cup S^* \in \mathcal{M}^*_b(\alpha)$ for all $M \in \mathcal{M}_{b'}^*([\alpha - \alpha_{S^*, b}, i^* - |S^*|]_{v'})$. If the optimal solution to [forget] has value $-\infty$, then $\mathcal{M}^*_b(\alpha) = \emptyset$.
\end{lemma}
\begin{proof}
Assume $\mathcal{M}_b(\alpha) \neq \emptyset$ and let $M' \in \mathcal{M}^*_b(\alpha)$. Let $S' := M' \cap \delta(v') \cap \delta(b)$ and let $i' := |M' \cap \delta(v')|$. Observe that $(S', i')$ is a feasible solution to [forget] and that $M' \setminus S' \in \mathcal{M}_{b'}([\alpha - \alpha_{S', b}, i' - |S'|]_{v'})$. We conclude that $w(M') \leq w(S') + W_{b'}([\alpha - \alpha_{S', b}, i' - |S'|]_{v'}) \leq w(S^*) + W_{b'}([\alpha - \alpha_{S^*, b}, i^* - |S^*|]_{v'})$. In particular, this implies that the optimal solution value of [forget] is finite and that $\mathcal{M}^*_{b'}([\alpha - \alpha_{S^*, b}, i^* - |S^*|]_{v'}) \neq \emptyset$.
Thus let $M^* := M \cup S^*$ for some $M \in \mathcal{M}^*_{b'}([\alpha - \alpha_{S^*, b}, i^* - |S^*|]_{v'})$. Observe that indeed $|M^* \cap \delta(v)| = |M \cap \delta(v)| + |S^* \cap \delta(v)| = \alpha(v) - \alpha_{S^*, b}(v) + \alpha_{S^*, b}(v) = \alpha(v)$ for all $v \in b$. Furthermore $|M^* \cap \delta(v)| = |M \cap \delta(v)| \in \{0, \ell(v), \dots, u(v)\}$ for all $v \in V_b \setminus b'$ by feasibility of $M$ and $|M^* \cap \delta(v')| = i^* \in \{0, \ell(v'), \dots, u(v')\}$, implying $M^* \in \mathcal{M}_b(\alpha)$. As $w(M^*) = w(S^*) + W_{b'}([\alpha - \alpha_{S^*, b}, i^* - |S^*|]_{v'}) \geq w(M')$, we conclude that indeed $M^* \in \mathcal{M}^*_b(\alpha)$.
\qed
\end{proof}
\begin{lemma}\label{lem:join}
Let $b$ be a join bag such that $b = b_1 = b_2$ for the two children $b_1, b_2$ of $b$.
Let $\alpha \in X_b$. Let $(\alpha_1^*, \alpha_2^*)$ be an optimal solution to
\begin{align*}
\textup{[join]}\qquad\max \ \ & W_{b_1}(\alpha_1) + W_{b_2}(\alpha_2)\\
\textup{s.t.} \ \ & \alpha_1 + \alpha_2 = \alpha\\
& \alpha_1 \in X_{b_1},\ \alpha_2 \in X_{b_2}
\end{align*}
Then $M_1 \cup M_2 \in \mathcal{M}^*_b(\alpha)$ for all $M_1 \in \mathcal{M}^*_{b_1}(\alpha_1),\ M_2 \in \mathcal{M}^*_{b_2}(\alpha_2)$.
If the optimal solution of [join] has value $-\infty$, then $\mathcal{M}^*_b(\alpha) = \emptyset$.
\end{lemma}
\begin{proof}
Let $M^* := M_1 \cup M_2$ for some $M_1 \in \mathcal{M}^*_{b_1}(\alpha_1),\ M_2 \in \mathcal{M}^*_{b_2}(\alpha_2)$. We first observe that $V_{b_1} \cap V_{b_2} = b$ by Properties 2 and 3
of the tree decomposition and hence $M_1 \cap M_2 = \emptyset$. This implies that
\begin{align*}
|M^* \cap \delta(v)| = \begin{cases}
|M_1 \cap \delta(v)| \in \{0, \ell(v), \dots, u(v)\} & \text{if } v \in V_{b_1} \setminus b,\\
|M_2 \cap \delta(v)| \in \{0, \ell(v), \dots, u(v)\} & \text{if } v \in V_{b_2} \setminus b,\\
|M_1 \cap \delta(v)| + |M_2 \cap \delta(v)| = \alpha(v) & \text{if } v \in b.
\end{cases}
\end{align*}
This implies $M^* \in \mathcal{M}_b(\alpha)$.
Now let $M' \in \mathcal{M}_b(\alpha)$. Let $M'_1 := M' \cap E_{b_1}$ and $M'_2 := M' \cap E_{b_2}$. For $i \in \{1, 2\}$ let $\alpha'_i \in \mathbb{Z}^{b_i}$ be defined by $\alpha'_i(v) := |M'_i \cap \delta(v)|$ for all $v \in b_i$. We observe that $(\alpha'_1, \alpha'_2)$ is a feasible solution to [join] and hence
$w(M') = w(M'_1) + w(M'_2) \leq w(M_1) + w(M_2) = w(M^*)$.\qed
\end{proof}
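The optimization problem [join] amounts to convolving the two children's value tables. A minimal sketch, assuming each table is a dictionary mapping assignment vectors (tuples over a fixed ordering of $b$) to finite values, with missing entries standing for $-\infty$:

\begin{verbatim}
def join_tables(W1, W2, u_max):
    """Combine the value tables of the two children of a join bag."""
    W = {}
    for alpha1, val1 in W1.items():
        for alpha2, val2 in W2.items():
            alpha = tuple(x + y for x, y in zip(alpha1, alpha2))
            if any(x > u_max for x in alpha):
                continue                  # alpha must stay inside X_b
            if val1 + val2 > W.get(alpha, float('-inf')):
                W[alpha] = val1 + val2
    return W
\end{verbatim}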
Finally, we observe that after computing $W_r(\alpha)$ and corresponding elements of $\mathcal{M}^*_r(\alpha)$ for each $\alpha$ for the root bag $r$, an optimal assignment for $G$ can be easily obtained.
\begin{lemma}\label{lem:root}
Let $(S^*, \alpha^*)$ be an optimal solution of
\begin{align*}
\textup{[root]}\qquad \max\ \ & W_r(\alpha) + w(S) \\
\textup{s.t.}\ \ & \alpha(v) + |\delta(v) \cap S| \in \{0, \ell(v), \dots, u(v)\} \quad \forall\,v \in r\\
& \alpha \in X_r,\ S \subseteq E[r].
\end{align*}
Then $S^* \cup M$ is an optimal solution to \textsc{wmlq} for any $M \in \mathcal{M}^*_r(\alpha^*)$.
\end{lemma}
\begin{proof}
Let $M^* := S^* \cup M$ for some $M \in \mathcal{M}^*_r(\alpha^*)$. Note that $S^* \cap \delta(v) = \emptyset$ for $v \in V \setminus r$ and in this case $|M^* \cap \delta(v)| = |\delta(v) \cap M| \in \{0, \ell(v), \dots, u(v)\}$ by feasibility of $M$. On the other hand, if $v \in r$, then $|M^* \cap \delta(v)| = \alpha^*(v) + |S^* \cap \delta(v)| \in \{0, \ell(v), \dots, u(v)\}$ by feasibility of $S^*$ for [root]. We conclude that $M^*$ is indeed a feasible solution to \textsc{wmlq}.
Now let $M' \subseteq E$ be any solution to \textsc{wmlq} and let $\alpha' \in \mathbb{Z}^r$ be defined by $\alpha'(v) := |M' \cap \delta(v)|$ for all $v \in r$. Define $S' := M' \cap E[r]$. Observe that $(S', \alpha' - \alpha_{S', r})$ is a feasible solution to [root] and $M' \setminus S' \in \mathcal{M}_r(\alpha' - \alpha_{S', r})$.
We conclude that
\begin{align*}
w(M') \leq W_r(\alpha' - \alpha_{S', r}) + w(S') \leq W_r(\alpha^*) + w(S^*) = w(M^*),
\end{align*}
and thus $M^*$ is indeed an optimal solution to \textsc{wmlq}.\qed
\end{proof}
\begin{restatable}{theorem}{restateThmFPT}
\textsc{wmlq} can be solved in time $O(T + (u_{\max})^{3\tw(G)}|E|)$, where $T$ is the time needed for computing a tree decomposition of~$G$. In particular, {\sc wmlq} can be solved in polynomial time when restricted to instances of bounded treewidth, and {\sc wmlq} parameterized by $\max\{\tw(G), u_{\max}\}$ is fixed-parameter tractable.
\end{restatable}
\begin{proof}
In order to solve a given \textsc{wmlq} instance, the algorithm starts by computing a nice tree decomposition of~$G$. Note that $T$ is of the same order for tree decompositions and nice tree decompositions. Using \cref{lem:leaf,lem:introduce,lem:forget,lem:join,lem:root}, we can inductively compute a representative $M \in \mathcal{M}^*_b(\alpha)$ for every bag $b$ and every $\alpha \in X_b$, or deduce that $\mathcal{M}^*_b(\alpha) = \emptyset$. We first observe that $|X_b| = (u_{\max})^{\tw(G)}$, thus only $(u_{\max})^{\tw(G)}$ representatives have to be computed per bag. Furthermore, for each of the above lemmas, the necessary computations to derive an $M \in \mathcal{M}^*_b(\alpha)$ from representatives $\mathcal{M}^*_{b'}(\alpha')$ of children $b'$ of $b$ can be done in time $O((u_{\max})^{2\tw(G) + 1})$. This is obvious for \cref{lem:leaf,lem:introduce}. For \cref{lem:forget,lem:join,lem:root} we observe that the sets of feasible solutions for the corresponding optimization problems [forget], [join], and [root] have size at most $2^{|b|} \cdot (u_{\max} + 1)$, $(u_{\max})^{2\tw(G)}$, and $2^{|r|^2} \cdot (u_{\max})^{\tw(G)}$, respectively (note that without loss of generality we can assume $|r|$ to be of constant size by introducing at most $\tw(G)$ additional forget bags). The theorem then follows from the fact that the number of bags is linear.\qed
\end{proof}
While our algorithm runs in polynomial time for bounded treewidth, the degree of the polynomial depends on the treewidth and the algorithm only becomes a fixed-parameter algorithm when parameterizing by treewidth and $u_{\max}$ simultaneously. We will now show by a reduction from \textsc{Minimum Maximum Outdegree} that this dependence is necessary under the assumption that $\FPT \neq \W[1]$.
\begin{pr}\textsc{Minimum Maximum Outdegree}\ \\
Input: A graph $G = (V, E)$, edge weights $w: E \rightarrow \mathbb{Z}_+$ encoded in unary, a degree-bound $r \in \mathbb{Z}_+$.\\
Task: Find an orientation $D$ of $G$ such that $\sum_{e \in \delta_D^+(v)} w(e) \leq r$ for all $v \in V$, where $\delta_D^+(v)$ stands for the set of edges oriented so that their tail is~$v$.
\end{pr}
\begin{theorem}[Theorem~5 from~\cite{Sze11}]
\textsc{Minimum Maximum Outdegree} is $W[1]$-hard when parameterized by treewidth.
\end{theorem}
\begin{theorem}
\textsc{mlq} is $\W[1]$-hard when parameterized by treewidth, even when restricted to instances where $\ell(p) \in \{0, u(p)\}$ for every $p \in P$.
\end{theorem}
\begin{proof}
Given an instance $(G = (V, E), w, r)$ of \textsc{Minimum Maximum Outdegree}, we construct an instance $(G' = (A\,\dot\cup\, P, E'), \ell, u)$ of \textsc{mlq} as follows. For every vertex $v \in V$ we introduce a post $p_v \in P$ and let $\ell(p_v) = 0$ and $u(p_v) = r$. Furthermore, for every edge $e = \{v, v'\} \in E$, we introduce two posts $p_{e, v}$ and $p_{e, v'}$ with $\ell(p_{e,v}) = \ell(p_{e,v'}) = u(p_{e,v}) = u(p_{e,v'}) = w(e) + 1$, and $2w(e) + 1$ applicants $a^1_{e, v}, \dots, a^{w(e)}_{e, v}, a^{1}_{e, v'}, \dots, a^{w(e)}_{e, v'}, z_e$, for which we introduce the edges $\{p_v, a^i_{e,v} \}$, $\{a^i_{e,v}, p_{e,v}\}$, $\{p_{v'}, a^i_{e,v'}\}$, and $\{a^i_{e,v'}, p_{e,v'}\}$ for $i \in \{1, \dots, w(e)\}$ as well as $\{p_{e,v}, z_e\}$ and $\{z_e, p_{e, v'}\}$.
We show that the constructed instance has a solution serving all applicants if and only if the \textsc{Minimum Maximum Outdegree} instance has an orientation respecting the bound on the outdegree.
First assume there is an orientation $D$ of $G$ with maximum outdegree at most~$r$. Then consider the assignment that assigns for every oriented edge $(v, v') \in D$ the $w(e)$ applicants $a^i_{e, v}$ to $p_v$ and the $w(e) + 1$ applicants $a^i_{e, v'}$ and $z_e$ to~$p_{e, v'}$. As the weighted outdegree of vertex $v$ is at most $r$, every post $p_{v}$ gets assigned at most $r = u(p_v)$ applicants.
Now assume $M$ is a feasible assignment of applicants to posts serving every applicant. In particular, for every edge $e = \{v,v'\} \in E$, applicant $z_e$ is assigned to either $p_{e, v}$ or $p_{e, v'}$ and exactly one of these two posts is open because the lower bound of $w(e) + 1$ can only be met if $z_e$ is assigned to the respective post. If $p_{e, v}$ is open then all $w(e)$ applicants $a^i_{e, v'}$ are assigned to $p_{v'}$ and none of the applicants $a^i_{e, v}$ is assigned to $p_{v}$, and vice versa if $p_{e, v'}$ is open. Consider the orientation obtained by orienting every edge $e$ from $v$ to $v'$ if and only if $p_{e, v}$ is open. By the above observations, the weighted outdegree of vertex $v$ corresponds to the number of applicants assigned to post $p_v$, which is at most~$r$.
Finally, note that $G'$ can be constructed in time polynomial in the input size of the \textsc{Minimum Maximum Outdegree} instance as the weights are encoded in unary there. Furthermore, the treewidth of $G'$ is at most $\max \{\tw(G), 3\}$. To see this, start with a tree decomposition of $G$ and identify each vertex $v \in V$ with the corresponding post~$p_v$. For every edge $e = \{v, v'\} \in E$, there is a bag $B$ with $p_v, p_{v'} \in B$. We add the new bag $B_e = \{p_v, p_{v'}, p_{e, v}, p_{e, v'}\}$ as a child to~$B$. We further add the bags $B_{z_e} = \{p_{e, v}, p_{e, v'}, z_e\}$, $B_{a^i_{e,v}} = \{p_v, p_{e, v}, a^i_{e,v}\}$ and $B_{a^i_{e,v'}} = \{p_{v'}, p_{e, v'}, a^i_{e,v'}\}$ for $i \in \{1, \dots, w(e)\}$ as children to~$B_e$. Observe that the tree of bags generated by this construction is a tree decomposition. Furthermore, since we did not increase the size of any of the existing bags and added only bags of size at most $4$, the treewidth of $G'$ is at most $\max \{\tw(G), 3\}$.\qed
\end{proof}
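The gadget of this reduction is mechanical to build; a sketch with our own naming scheme, taking the weighted edges as triples $(v, v', w(e))$:

\begin{verbatim}
def mmo_to_mlq(vertices, weighted_edges, r):
    posts, applicants, E, lo, up = [], [], [], {}, {}
    for v in vertices:                    # the posts p_v
        posts.append(('p', v)); lo['p', v] = 0; up['p', v] = r
    for v, vp, we in weighted_edges:
        e = (v, vp)
        for x in (v, vp):                 # the posts p_{e,x}
            p = ('p', e, x)
            posts.append(p); lo[p] = up[p] = we + 1
            for i in range(1, we + 1):    # the applicants a^i_{e,x}
                a = ('a', e, x, i)
                applicants.append(a)
                E += [(a, ('p', x)), (a, p)]
        z = ('z', e)                      # the applicant z_e
        applicants.append(z)
        E += [(z, ('p', e, v)), (z, ('p', e, vp))]
    return posts, applicants, E, lo, up
\end{verbatim}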
\section{Approximation}
\label{se:approx}
Having established the hardness of \textsc{wmlq} even for very restricted instances in Theorem~\ref{th:max_spa_np}, we turn our attention towards approximability.
In this section, we give an approximation algorithm and corresponding inapproximability bounds expressed in terms of $|A|, |P|$ and upper quotas in the graph.
The method, which is formally listed in Algorithm~\ref{al:greedy}, is a simple greedy algorithm. We say a post $p$ is \emph{admissible} if it is not yet open and $|\Gamma(p)| \geq \ell(p)$. The algorithm iteratively opens an admissible post maximizing the assignable weight, i.e., it finds a post $p' \in P$ and a set $A'$ of applicants in its neighborhood $\Gamma(p')$ with $\ell(p') \leq |A'| \leq u(p')$ such that $\sum_{a \in A'} w(a, p')$ is maximized among all such pairs. It then removes the assigned applicants from the graph (potentially rendering some posts inadmissible) and re-iterates until no admissible post is~left.
\begin{algorithm}[h]
\caption{Greedy algorithm for \textsc{wmlq}}
\label{al:greedy}
\begin{algorithmic}
\State Initialize $P_0 = \{p \in P \,:\, |\Gamma(p)| \geq \ell(p)\}$.
\State Initialize $A_0 = A$.
\While{$P_0 \neq \emptyset$}
\State Find a pair $p' \in P_0$ and $A' \subseteq \Gamma(p') \cap A_0$ with $\ell(p') \leq |A'| \leq u(p')$ such that $\sum_{a \in A'} w(a, p')$ is
\State maximized among all such pairs.
\State Open $p'$ and assign all applicants in $A'$ to it.
\State Remove $p'$ from $P_0$ and remove the elements of $A'$ from $A_0$.
\For{$p \in P_0$ with $\ell(p) > |\Gamma(p) \cap A_0|$}
\State Remove $p$ from $P_0$.
\EndFor
\EndWhile
\end{algorithmic}
\end{algorithm}
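A direct quadratic-time implementation of Algorithm~\ref{al:greedy} is given below; it re-scans all admissible posts in every round instead of maintaining the heap described in the proof of Theorem~\ref{thm:greedy-approximation}, and the instance format is our own.

\begin{verbatim}
def greedy_wmlq(applicants, posts, neighbors, w, lower, upper):
    free = set(applicants)
    opened, assignment = set(), {}
    while True:
        best = None
        for p in posts:
            if p in opened:
                continue
            avail = sorted((a for a in neighbors[p] if a in free),
                           key=lambda a: w[a, p], reverse=True)
            if len(avail) < lower[p]:
                continue                  # p is not admissible
            # weights are nonnegative, so the u(p) heaviest available
            # applicants maximize the assignable weight of p
            chosen = avail[:upper[p]]
            val = sum(w[a, p] for a in chosen)
            if best is None or val > best[0]:
                best = (val, p, chosen)
        if best is None:
            return assignment             # maps applicant -> post
        _, p, chosen = best
        opened.add(p)
        for a in chosen:
            assignment[a] = p
            free.remove(a)
\end{verbatim}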
We point out a reduction from \textsc{wmlq} to the set packing problem here. The elements of the set packing universe would be $A\cup P$. For each post $p$ and each subset $S \subseteq \Gamma(p)$ with $\ell(p) \le |S| \le u(p)$, we create a set $S \cup \{p\}$ for the set packing instance. However, if the difference between upper and lower quota is not bounded, this creates an input of exponential size for the set packing problem, and we could only employ an oracle-based algorithm known for the set packing problem to solve \textsc{wmlq}. The greedy algorithm known for the set packing problem~\cite{CH01} can be made to work in a fashion similar to the algorithm presented above.
In the following we give a tight analysis of the algorithm, establishing approximation guarantees in terms of the number of posts $|P|$, the number of applicants $|A|$, and the maximum upper quota $u_{\max} := \max_{p \in P} u(p)$ over all posts. We also provide two examples showing that our analysis of the greedy algorithm is tight for each of the described approximation factors, and we further show that these approximation ratios for \textsc{wmlq} are almost tight from the point of view of complexity theory.
\begin{restatable}{theorem}{restategreedypos}\label{thm:greedy-approximation}
Algorithm~\ref{al:greedy} is an $\alpha$-approximation algorithm for \textsc{wmlq} with \linebreak \mbox{$\alpha = \min \{|P|,\, |A|,\, u_{\max} + 1\}$}. Furthermore, for \textsc{mlq}, Algorithm~\ref{al:greedy} is a $(\sqrt{|A|} + 1)$-approximation algorithm. It can be implemented to run in time $O(|E| \log |E|)$.
\end{restatable}
\begin{proof}
Let $p'_1, \dots, p'_{\ell}$ be the posts chosen by the algorithm and let $A'_1, \dots, A'_{\ell}$ be the corresponding sets of applicants. Furthermore, consider an optimal solution of weight~$\OPT$, consisting of open posts $p_1, \dots, p_k$ and the corresponding sets of applicants $A_1, \dots, A_k$ assigned to those posts.
We first observe that the first two approximation ratios of $|P|$ and $|A|$ are already achieved by the initial selection of $p'_1$ and $A'_1$ chosen in the first round of the algorithm: For every~\mbox{$i \in \{1, \dots, k\}$}, post $p_i$ is an admissible post in the first iteration of the algorithm. The first iteration's choice of the pair~$(p'_1, A'_1)$ implies $\sum_{a \in A'_1} w(a, p'_1) \geq \sum_{a \in A_i} w(a, p_i) \geq w(a', p_i)$ for every $a' \in A_i$. As the optimal solution opens at most $|P|$ posts and serves at most $|A|$ applicants, we deduce that~$\sum_{a \in A'_1} w(a, p'_1) \geq \OPT / \min \{|P|, |A|\}$.
We now turn our attention to the remaining approximation guarantees, which are $u_{\max} + 1$ for \textsc{wmlq} and $\sqrt{|A|} + 1$ for \textsc{mlq}. For every $i \in \{1, \dots, k\}$, let $\pi(i)$ denote the first iteration of the algorithm such that $A'_{\pi(i)} \cap A_i \neq \emptyset$ or $p'_{\pi(i)} = p_i$. This iteration is the one in which post $p_i$ is opened or an applicant assigned to it in the optimal solution becomes assigned by the greedy algorithm. Note that such an iteration exists, because $p_i$ is not admissible after the termination of the algorithm. Furthermore, observe that $\sum_{a \in A'_{\pi(i)}} w(a, p'_{\pi(i)}) \geq \sum_{a \in A_i} w(a, p_i)$, because the pair $(p_i, A_i)$ was a valid choice for the algorithm in iteration $\pi(i)$. Now for iteration $j$ define $P_j := \{i \, : \, \pi(i) = j\}$ and observe that $|P_j| \leq |A'_j| + 1$, because $P_j$ can only contain one index~$i'$ with $p_{i'} = p'_j$, and all other $i \in P_j \setminus \{i'\}$ must have $A_i \cap A'_j \neq \emptyset$ (note that the sets $A_i$ are disjoint). We conclude that
\begin{align*}
\OPT & \ = \ \sum_{i = 1}^k \sum_{a \in A_i} w(a, p_i) \ \leq \ \sum_{i = 1}^k \sum_{a \in A'_{\pi(i)}} w(a, p'_{\pi(i)}) \\
& \ \leq \ \sum_{j = 1}^{\ell} |P_j| \sum_{a \in A'_j} w(a, p'_j) \ \leq \ \sum_{j = 1}^{\ell} (|A'_j| + 1) \sum_{a \in A'_j} w(a, p'_j).
\end{align*}
Note that $|A'_j| \leq u_{\max}$ and therefore $\OPT \leq (u_{\max} + 1) \sum_{j = 1}^{\ell} \sum_{a \in A'_j} w(a, p'_j)$, proving the third approximation guarantee.
Now consider the case that $w(a, p) = 1$ for all $p \in P$ and $a \in A$ and define $A' = \bigcup_{j = 1}^{\ell} A'_j$. If $|A'| \geq \sqrt{|A|}$, then $\sqrt{|A|}\,|A'| \geq |A| \geq \OPT$. Therefore assume $|A'| < \sqrt{|A|}$.
Note that in this case, the above inequalities imply $\OPT \leq (|A'| + 1)|A'| \leq (\sqrt{|A|} + 1)|A'|$, proving the improved approximation guarantee for \textsc{mlq}.
We now turn to proving the bound on the running time. We will describe how to implement the search for the greedy choice of the pair $(p', A')$ in each iteration efficiently using a heap data structure. Initially, for every post $p$, we sort the applicants in its neighborhood by non-increasing order of $w(a, p)$. This takes time at most $O(|E| \log |E|)$ as the total number of entries to sort is $\sum_{p \in P} |\Gamma(p)| = |E|$. We then introduce a heap containing all admissible posts, and associate with each post $p$ the total weight of the first $u(p)$ edges in its neighborhood list.
Note that these entries can be easily kept up to date by simply replacing applicants assigned to other posts with the first not-yet-assigned entry in the neighborhood list (or removing the post if less than $\ell(p)$ applicants are available). As every edge in the graph can only trigger one such replacement, only $O(|E|)$ updates can occur and each of these requires $O(\log |P|)$ time for reinserting the post at the proper place in the heap. Now, in each iteration of the algorithm, the optimal pair $(p', A')$ can be found by retrieving the maximum element from the heap. This happens at most $|P|$ times and requires $O(\log |P|)$ time in each step. \qed
\end{proof}
\begin{ex}
The following two examples show that our analysis of the greedy algorithm is tight for each of the described approximation factors.
\begin{itemize}
\item[(a)] Consider an instance of \textsc{mlq} with $k + 1$ posts $p_0, \dots, p_k$ and $k(k + 1)$ applicants $a_{0,1}, \dots, a_{0,k}, a_{1,1}, \dots, a_{k,k}$. Let $\ell(p_i) = u(p_i) = k$ for $i \in \{0, \dots, k\}$. Each applicant $a_{i,j}$ applies to post $p_i$ and additionally to post~$p_0$.
For the greedy algorithm, opening post $p_0$ and assigning the applicants $a_{1,1}, a_{2,2}, \dots, a_{k,k}$ to it is a valid choice in its first iteration, after which no further posts are admissible. Thus, it only assigns $k$ applicants in total. The optimal solution, however, can assign all $k(k+1)$ applicants by assigning applicants $a_{i,1}, \dots, a_{i,k}$ to $p_i$ for each $i$. Therefore, the greedy algorithm cannot achieve an approximation factor better than $k + 1$ on this family of instances, for which $|P| = k+1$, $\sqrt{|A|} < k+1$, and $u_{\max} = k$.
\item[(b)] To see that the approximation ratio of $|A|$ is tight for \textsc{wmlq} consider the following instance with $k$ posts $p_1, \dots, p_k$ and $k$ applicants $a_1, \dots, a_k$. Let $\ell(p_i) = 0$ and $u(p_i) = k$ for every $i$. Every applicant applies for every post, and $w(a_i, p_i) = 1$ for every $i$ but $w(a_i, p_j) = \varepsilon$ for every $j \neq i$ for some arbitrarily small $\varepsilon > 0$. In its first iteration, the greedy algorithm might choose to open post $p_1$ and assign all applicants to it. This solution accumulates a weight of $1 + (k - 1)\varepsilon$, while the weight of the optimal solution is $k = |A|$.
\end{itemize}
\end{ex}
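Example (a) is easy to generate for experiments; the builder below uses our own encoding and pairs naturally with the greedy implementation sketched above.

\begin{verbatim}
def tight_example_a(k):
    """The instance from Example (a): greedy may serve only k of k(k+1)."""
    posts = list(range(k + 1))            # p_0, ..., p_k
    lower = {p: k for p in posts}
    upper = {p: k for p in posts}
    applicants = [(i, j) for i in range(k + 1) for j in range(1, k + 1)]
    edges = []
    for i, j in applicants:
        edges.append(((i, j), i))          # a_{i,j} applies to p_i
        if i != 0:
            edges.append(((i, j), 0))      # ... and additionally to p_0
    return posts, applicants, edges, lower, upper
\end{verbatim}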
\begin{restatable}{theorem}{restategreedyneg}
\label{th:inappr}
\textsc{mlq} is not approximable within a factor of $|P|^{1-\varepsilon}$ or $\sqrt{|A|}^{1-\varepsilon}$ or~$u_{\max}^{1-\varepsilon}$ for any~$\varepsilon > 0$, unless $\P = \NP$, even when restricting to instances where $\ell(p) = u(p)$ for every $p \in P$ and $|\Gamma(a)| \leq 2$ for every $a \in A$.
\end{restatable}
\begin{proof}
Once again we use the maximum independent vertex set problem. Given an instance of \textsc{mis} on a graph $G = (V, E)$ with $|V| = n$ and $|E| = m$, we create an \textsc{mlq} instance with $n$ posts $p_1, \dots, p_n$, post $p_i$ corresponding to vertex $v_i$. We also introduce $n^2 - m$ applicants as follows. Initially, we introduce, for each post $p_i$, $n$ applicants $a_{i,1}, a_{i,2}, \dots, a_{i,n}$ applying to it. Then, for every edge $\{v_i, v_j\} \in E$, we merge the applicants $a_{i,j}$ and $a_{j,i}$, obtaining a single applicant applying to both $p_i$ and $p_j$. Furthermore, we set $\ell(p_i) = u(p_i) = n$ for every post.
Note that due to the choice of upper and lower bounds, any open post must be assigned all the applicants in its neighborhood. Thus, a solution to the \textsc{mlq} instance is feasible if and only if $\Gamma(p_i) \cap \Gamma(p_j) = \emptyset$ for all open posts $p_i$ and $p_j$ with $i \neq j$, which is equivalent to $v_i$ and $v_j$ not being adjacent in $G$ by construction of the instance. Therefore, the \textsc{mlq} instance has a feasible solution opening $k$ posts (and thus serving $kn$ applicants) if and only if there is an independent set of size $k$ in~$G$. We conclude that $\OPT_{\textsc{mlq}} = n \cdot \OPT_{\textsc{mis}}$ for the two instances under consideration.
Note that in the constructed \textsc{mlq} instance, $n =|P| = u_{\max} \geq \sqrt{|A|}$. Therefore any approximation algorithm with a factor better than $|P|^{1-\varepsilon}$ or $\sqrt{|A|}^{1-\varepsilon}$ or $u_{\max}^{1-\varepsilon}$ for $\varepsilon > 0$ yields a solution of the instance serving at least $(1/n^{1-\varepsilon}) \OPT_{\textsc{mlq}}$ applicants, thus opening at least $(1/n^{2 - \varepsilon}) \OPT_{\textsc{mlq}} = (1/n^{1 - \varepsilon}) \OPT_{\textsc{mis}}$ posts, corresponding to an independent set of the same size. By~\cite{Zuc07}, this implies $\P = \NP$. \qed
\end{proof}
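This reduction, too, can be generated mechanically; in the sketch below (our own encoding) the vertices of $G$ are $0, \dots, n-1$ and its edges are 2-tuples.

\begin{verbatim}
def mis_to_mlq_hard(n, graph_edges):
    """MIS -> MLQ with l(p) = u(p) = n for every post."""
    posts = list(range(n))
    lower = {p: n for p in posts}
    upper = {p: n for p in posts}
    merged = {}                    # merged applicants, one per edge of G
    for i, j in graph_edges:
        merged[i, j] = merged[j, i] = ('a', min(i, j), max(i, j))
    applicants, edges = set(), []
    for i in range(n):
        for j in range(n):         # the applicant a_{i,j} of post p_i
            a = merged.get((i, j), ('a', i, j))
            applicants.add(a)
            edges.append((a, i))
    return posts, sorted(applicants), edges, lower, upper
\end{verbatim}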
\section*{Acknowledgements}
We would like to thank Andr\'as Frank and Krist\'of B\'erczi for their observations that led us to \cref{th:u2} and the anonymous reviewers for their valuable comments, which have helped to improve the presentation of this paper.
\bibliographystyle{abbrv}
A nontrivial group $G$ is called left-orderable if and only if it admits a total ordering $\le$ which is invariant under left multiplication, that is, $g\le h$ if and only if $fg\le fh$ for all $f,g,h\in G$. An $L$-space is a rational homology $3$-sphere with minimal Heegaard Floer homology, that is, $\dim\widehat{HF} =|H_1(Y)|$. The following equivalence was conjectured by Boyer, Gordon and Watson.
\begin{conjecture} \label{BGW-conj} \cite{BGW}
An irreducible rational homology $3$-sphere is an $L$-space if and only if its fundamental group is not left-orderable.
\end{conjecture}
Issa and Turner \cite{IT} studied the family of pretzel knots $P(3,-3,-2k-1)$. They defined a family of two-fold quasi-alternating links $L(k_1,k_2,\ldots, k_n)$, as shown in Figure \ref{fig:link}, and constructed a homeomorphism $\Sigma_n(P(3,-3,-2k-1)) \cong \Sigma_2(L(-k,-k,\ldots, -k))$. Because two-fold quasi-alternating links are Khovanov homology thin \cite{SS}, their double branched covers are $L$-spaces \cite{OS05}. In this way, they proved that all $n$-fold cyclic branched covers $\Sigma_n(P(3,-3,-2k-1))$ are $L$-spaces.
\begin{figure}[!htbp]
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=0.9\textwidth]{IntroLinkLn2Neg.pdf}};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\node at (0.246,0.21) {\large $k_1$};
\node at (0.565, 0.21){\large $k_{n-1}$};
\node at (0.831, 0.21){\large $k_n$};
\end{scope}
\end{tikzpicture}
\caption{The link $L(k_1,k_2,\ldots, k_n)$. This diagram is copied from \cite{IT}.}\label{fig:link}
\end{figure}
We prove the following result which is consistent with Conjecture \ref{BGW-conj}.
\begin{theorem}
For any integers $k_1,k_2,\ldots, k_n$, the double branched cover of $L(k_1,k_2,\ldots, k_n)$ has non-left-orderable fundamental group.
\end{theorem}
As a corollary, the fundamental group of the $n$-fold cyclic branched cover of the pretzel knot $P(3,-3,-2k-1)$ is not left-orderable.
In Section \ref{sec:2}, we derive a presentation of the fundamental group of double branched cover of $L(k_1,k_2,\ldots, k_n)$.
In Section \ref{sec:3}, we prove our main result.
\section*{Acknowledgement}
The authors thank Wei Chen, Nathan Dunfield, Heng Liao, Luigi Lunardon and Hannah Turner for helpful discussions.
\section{Brunner's presentation}\label{sec:2}
A tool to compute the fundamental group of the double branched cover of an unsplittable link is Brunner's presentation \cite{Brunner}. One could also use the Wirtinger presentation to derive an equivalent form, but the computation would be longer. Historically, Ito \cite{Ito} used the coarse Brunner's presentation, a generalized version of Brunner's presentation, to prove the non-left-orderability of the double branched covers of unsplittable alternating links. Abchir and Sabak \cite{AS} proved the non-left-orderability of the double branched covers of certain kinds of quasi-alternating links in a similar way.
Consider the checkerboard coloring of the link diagram $D$ in Figure \ref{fig:link}. Let $G$ and $\tilde{G}$ be the decomposition graph
and the connectivity graph. They can be chosen as shown in Figure \ref{fig:G}.
\begin{figure}[!htbp]
\centering\begin{tikzpicture}[node distance={20mm}, main/.style = {draw, circle}]
\node[main] (1) {$1$};
\node[main] (2) [right of=1] {$2$};
\node[main] (3) [above right of=2] {$3$};
\node[main] (4) [above of=3] {$4$};
\node[main] (5) [above left of=4] {$5$};
\node[main] (6) [left of=5] {$6$};
\node[main] (7) [below left of=6] {$7$};
\node[main] (8) [below of=7] {$8$};
\node[main] (0) [above right=18mm and 5mm of 1]{$0$};
\draw[-] (1) -- node[midway, above , sloped] {$k_1$} (2);
\draw[-] (2) -- node[midway, above , sloped] {$+1$} (3);
\draw[-] (3) -- node[midway, above , sloped] {$k_2$} (4);
\draw[-] (4) -- node[midway, above , sloped] {$+1$} (5);
\draw[-] (5) -- node[midway, above , sloped] {$k_3$} (6);
\draw[-] (6) -- node[midway, above , sloped] {$+1$} (7);
\draw[-] (7) -- node[midway, above , sloped] {$k_4$} (8);
\draw[-] (8) -- node[midway, above , sloped] {$+1$} (1);
\draw[-] (0) -- node[midway, above , sloped] {$-2$} (1);
\draw[-] (0) -- node[midway, above , sloped] {$+1$} (2);
\draw[-] (0) -- node[midway, above , sloped] {$-2$} (3);
\draw[-] (0) -- node[midway, above , sloped] {$+1$} (4);
\draw[-] (0) -- node[midway, above , sloped] {$-2$} (5);
\draw[-] (0) -- node[midway, above , sloped] {$+1$} (6);
\draw[-] (0) -- node[midway, above , sloped] {$-2$} (7);
\draw[-] (0) -- node[midway, above , sloped] {$+1$} (8);
\end{tikzpicture}
\begin{tikzpicture}[node distance={20mm}, main/.style = {draw, circle}]
\node[main] (1) {$1$};
\node[main] (2) [right of=1] {$2$};
\node[main] (3) [above right of=2] {$3$};
\node[main] (4) [above of=3] {$4$};
\node[main] (5) [above left of=4] {$5$};
\node[main] (6) [left of=5] {$6$};
\node[main] (7) [below left of=6] {$7$};
\node[main] (8) [below of=7] {$8$};
\node[main] (0) [above right=18mm and 5mm of 1]{$0$};
\draw[->] (2) -- node[midway, above , sloped] {$e_1$} (1);
\draw[->] (2) -- node[midway, above , sloped] {$b_4$} (3);
\draw[->] (4) -- node[midway, above , sloped] {$e_4$} (3);
\draw[->] (4) -- node[midway, above , sloped] {$b_3$} (5);
\draw[->] (6) -- node[midway, above , sloped] {$e_3$} (5);
\draw[->] (6) -- node[midway, above , sloped] {$b_2$} (7);
\draw[->] (8) -- node[midway, above , sloped] {$e_2$} (7);
\draw[->] (8) -- node[midway, above , sloped] {$b_1$} (1);
\draw[->] (0) -- node[midway, above , sloped] {$f_1$} (1);
\draw[->] (0) -- node[midway, above , sloped] {$g_1$} (2);
\draw[->] (0) -- node[midway, above , sloped] {$f_4$} (3);
\draw[->] (0) -- node[midway, above , sloped] {$g_4$} (4);
\draw[->] (0) -- node[midway, above , sloped] {$f_3$} (5);
\draw[->] (0) -- node[midway, above , sloped] {$g_3$} (6);
\draw[->] (0) -- node[midway, above , sloped] {$f_2$} (7);
\draw[->] (0) -- node[midway, above , sloped] {$g_2$} (8);
\end{tikzpicture}
\caption{The decomposition graph $G$ and the connectivity graph $\tilde{G}$ for $n=4$.}\label{fig:G}
\end{figure}
We now write down Brunner's presentation of $\pi_1(\Sigma_2(L(k_1,k_2,\ldots, k_n)))$. Let $e_i, f_i, g_i, b_i$ ($1\le i\le n$) be the edge generators, and $a_i, c_i$ ($1\le i\le n$) be the region generators. Then the local edge relations are $e_i=a_i^{k_i}$, $b_i=c_i^{-1}$, $f_i=(a_{i}^{-1} c_i)^{-2}$ and $g_i=c_{i-1}^{-1} a_i$ ($1\le i\le n$), and the global cycle relations are $f_i^{-1} e_i g_i=1$ and $f_{i}^{-1} b_i g_{i+1}=1$ ($1\le i\le n$). Here the subscripts are considered modulo $n$.
By simplification, we get the following presentation of $\pi_1(\Sigma_2(L(k_1,k_2,\ldots,k_n)))$:
$$\left \langle a_i,b_i| a_{i+1}=b_i^{-1} a_i b_i a_i, a_i^{k_i}=b_i a_i b_i b_{i-1}^{-1}\right\rangle,$$
where $i=1,\ldots,n$ and we view subscripts modulo $n$.
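For the reader's convenience, we record the routine elimination behind this simplification; the intermediate steps below are our own. Substituting $c_i=b_i^{-1}$ into the local edge relations gives $f_i=(a_i^{-1}c_i)^{-2}=(b_ia_i)^2$ and $g_i=b_{i-1}a_i$. The cycle relation $f_i^{-1}b_ig_{i+1}=1$ then becomes
$$(b_ia_i)^{-2}b_i^2a_{i+1}=1, \quad \mbox{that is,} \quad a_{i+1}=b_i^{-2}(b_ia_i)^2=b_i^{-1}a_ib_ia_i,$$
while $f_i^{-1}e_ig_i=1$ becomes
$$(b_ia_i)^{-2}a_i^{k_i}b_{i-1}a_i=1, \quad \mbox{that is,} \quad a_i^{k_i}=(b_ia_i)^2a_i^{-1}b_{i-1}^{-1}=b_ia_ib_ib_{i-1}^{-1}.$$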
\section{The non-left-orderability}\label{sec:3}
We are now going to prove our main non-left-orderability result. Let $\le$ be any left order on the group $\pi_1(\Sigma_2(L(k_1,k_2,\ldots,k_n)))$; then the following lemmas hold.
\begin{lemma}\label{l-1}
If $\forall m.\left(b_i^{-1} a_{i+1}^{m}\ge 1\right),$ then $\forall m_2\exists m_1 .\left(a_{i+1}^{-m_1} b_i^{-1} a_{i}^{m_2}\ge 1\right).$
\end{lemma}
\begin{proof}
By assumption, we have
$$b_i^{-1}\ge 1.$$
Suppose, for the sake of contradiction, that for some integer $m_2$,
$$\forall m_1.\left(a_{i+1}^{-m_1} b_i^{-1} a_{i}^{m_2}\le 1\right),$$
then we have $$ b_i^{-1} a_{i}^{m_2}\le 1.$$
Since $a_{i+1}=b_i^{-1} a_i b_i a_i$, we have $$\left(a_{i+1} a_i^{-1}\right)^{m_2} b_i^{-1}= b_i^{-1} a_i^{m_2}.$$
Hence we have $$a_i^{m_2} \le 1,\left(a_{i+1}a_i^{-1}\right)^{m_2} \le 1,$$
which implies $$a_{i+1}^{m_2}\le 1.$$
By assumption, we have
$$\forall m.\left(a_{i+1}^{-m} a_i^{m_2}\le 1\right).$$
So we have
$$\forall \left(m \mbox{ has same sign as } m_2\right).\left(a_{i+1}^{-m} a_i^{m_2}a_{i+1}^{m} \le 1\right).$$
If $m_2\ge 0$, then
$$\left(a_{i+1} a_i^{-1}\right)^{m_2} = a_{i+1}^{m_2} \left(a_{i+1}^{-m_2+1}a_i^{-1} a_{i+1}^{m_2-1}\right)\cdots\left(a_{i+1}^{-1}a_i^{-1} a_{i+1}\right) a_i^{-1}.$$
If $m_2<0$, then
$$\left(a_{i+1} a_i^{-1}\right)^{m_2}= a_{i+1}^{m_2} \left(a_{i+1}^{-m_2}a_i a_{i+1}^{m_2}\right)\cdots\left(a_{i+1}^{2}a_i a_{i+1}^{-2}\right)\left(a_{i+1}a_i a_{i+1}^{-1}\right).$$
Therefore, we have
$$a_{i+1}^{-m_2}\left(a_{i+1} a_i^{-1}\right)^{m_2} \ge 1.$$
So we have
$$ a_{i+1}^{-m_2} b_i^{-1} a_{i}^{m_2}=a_{i+1}^{-m_2}\left(a_{i+1} a_i^{-1}\right)^{m_2} b_i^{-1} \ge 1,$$
which implies the conclusion.
\end{proof}
\begin{lemma}\label{l-2}
If $\forall m.\left(b_i^{-1} a_{i+1}^{m}\ge 1\right),$ then $\forall m.\left(b_{i-1}^{-1} a_{i}^{m}\ge 1\right).$
\end{lemma}
\begin{proof}
Because $a_{i+1}=b_i^{-1} a_i b_i a_i$ and $a_i^{k_i}=b_i a_i b_i b_{i-1}^{-1}$, we have
$$b_{i-1}^{-1} a_i^{m}= b_i^{-1} a_i^{-1} b_i^{-1} a_i^{m+k_i}= b_i^{-1} a_{i+1}^{-1} b_i^{-1} a_i^{m+k_i+1}.$$
By Lemma \ref{l-1}, we have $$\exists m_1.\left(a_{i+1}^{-m_1} b_i^{-1} a_i^{m+k_i+1} \ge 1\right).$$
By assumption, we have $$b_i^{-1} a_{i+1}^{m_1-1}\ge 1.$$
Therefore $$b_{i-1}^{-1} a_i^{m}=\left(b_i^{-1} a_{i+1}^{m_1-1}\right)\left(a_{i+1}^{-m_1} b_i^{-1} a_i^{m+k_i+1} \right)\ge 1.$$
\end{proof}
Let us apply the fixed point method to $a_1$; this is a common technique in the proofs of many non-left-orderability results. If the group $\pi_1(\Sigma_2(L(k_1,k_2,\ldots, k_n)))$ has a left order $\le$, then by \cite{Holland} there exists a homomorphism $\rho$ from this group to $\mbox{Homeo}_+(\mathbf{R})$ with no global fixed points, such that $g\le h$ if and only if $\rho(g)(0)\le \rho(h)(0)$. Then for the element $a_1$, there are two situations:
\begin{enumerate}[(1)]
\item If $\rho(a_1)$ has a fixed point $s$, then there is a left total preorder $\le_{a_1}$, defined by $g\le_{a_1} h$ if and only if $\rho(g)(s)\le \rho(h)(s)$, with $a_1\ge_{a_1} 1$ and $a_1^{-1} \ge_{a_1} 1$.
\item Otherwise, any conjugate of $a_1$ has the same sign as $a_1$.
\end{enumerate}
Let us assume the first situation. Without loss of generality, we assume that $b_n \le_{a_1} 1$; then $\forall m.\left(b_n^{-1} a_{1}^{m}\ge_{a_1} 1\right).$ Since antisymmetry is never used in the proofs of Lemma \ref{l-1} and Lemma \ref{l-2}, they also apply to $\le_{a_1}$. By inductive application of Lemma \ref{l-2}, we have $\forall m.\left(b_{i}^{-1} a_{i+1}^{m}\ge_{a_1} 1\right)$ for any $i=1,\ldots,n$. By Lemma \ref{l-1}, we have the relation $\forall m_2\exists m_1 .\left(a_{i+1}^{-m_1} b_i^{-1} a_{i}^{m_2}\ge_{a_1} 1\right)$ for any $i=1,\ldots,n$.
Because $a_{i+1}=b_i^{-1} a_i b_i a_i$ and $a_i^{k_i}=b_i a_i b_i b_{i-1}^{-1}$, we have
$$b_i^{-1} a_i^{k_i}= a_i b_i b_{i-1}^{-1}=b_i a_{i+1}a_i^{-1} b_{i-1}^{-1}= (b_i a_{i+1})(b_{i-1}a_i)^{-1}.$$
Hence, applying Lemma \ref{l-1} with $m_2$ replaced by $m_2+k_i$, we have $$\forall m_2\exists m_1 .\left( (b_{i-1}a_i)^{-1} a_{i}^{m_2}\ge_{a_1} (b_i a_{i+1})^{-1} a_{i+1}^{m_1}\right).$$
By induction, we have $$\forall m_2\exists m_1 .\left( (b_{1}a_2)^{-1} a_{2}^{m_2}\ge_{a_1} (b_n a_{1})^{-1} a_{1}^{m_1}\right),$$
in particular
$$\exists m_1 .\left( (b_{1}a_2)^{-1} \ge_{a_1} (b_n a_{1})^{-1} a_{1}^{m_1}\right).$$
Left-multiplying by $b_1 a_2$ and using $b_1^{-1} a_1^{k_1}=(b_1 a_{2})(b_{n} a_{1})^{-1}$, we then have
$$\exists m_1 .\left( 1 \ge_{a_1} b_1^{-1} a_1^{m_1+k_1}\right),$$
which implies $b_1\ge_{a_1} 1$. Recall that $\forall m.\left(b_{i}^{-1} a_{i+1}^{m}\ge_{a_1} 1\right)$ implies $b_{1}^{-1}\ge_{a_1} 1$.
Therefore every fixed point of $a_1$ is also a fixed point of $b_1$. By $a_{i+1}=b_i^{-1} a_i b_i a_i$, every fixed point of $a_1$ is also a fixed point of $a_2$. By symmetry and induction, any fixed point of $a_1$ is a global fixed point, contradicting the choice of $\rho$; so the first situation cannot occur.
Now assume the second situation; without loss of generality (again reversing the order if necessary), every conjugate of $a_1$ is positive. By $a_{i+1}=b_i^{-1} a_i b_i a_i$, if every conjugate of $a_i$ is positive, then every conjugate of $a_{i+1}$ is positive; by induction, every conjugate of every $a_i$ is positive. Since $a_{i+1}a_i^{-1}=b_i^{-1} a_i b_i \ge 1$ for each $i$ and the product of these elements is the identity, we have $a_i=a_{i+1}$ for all $i=1,\ldots, n$. Furthermore, we have $a_i=1$. The fundamental group is then finite, and a nontrivial finite group is not left-orderable, a contradiction.
In either situation we reach a contradiction, which proves the non-left-orderability.
|
2,869,038,156,873 | arxiv | \section{Introduction} \label{sec:intro}
White dwarf (WD) stars are the end stages of all stars with initial masses less than roughly $8-10$\,$M_{\sun}$ \citep{2015MNRAS.446.2599D} and have been used for decades as stellar chronometers (see review by \citealt{2001PASP..113..409F}). Since WDs no longer undergo core fusion, their evolution is generally a cooling problem. The development of robust cooling models (e.g., \citealt{1995PASP..107.1047B}) enables WDs to act as precise and accurate age indicators. However, to determine the total age of a WD, the time from the zero-age main sequence (ZAMS) to the current state of the WD is required. The progenitor lifetimes from ZAMS to the WD phase are determined through the use of a semi-empirical initial-final mass relation (IFMR, \citealt{2008MNRAS.387.1693C}, \citealt{2008ApJ...676..594K}, \citealt{2009ApJ...692.1013S}, \citealt{2018ApJ...866...21C}) in conjunction with stellar evolution model grids, such as MESA \citep{2016ApJS..222....8D} and PARSEC \citep{2012MNRAS.427..127B}.
The total ages of WDs can be used to date a variety of astronomical objects, including wide binary companions, such as M dwarfs (e.g., \citealt{2019ApJ...870....9F,2021ApJS..253...58Q}, \citealt{2021AJ....161..277K}), brown dwarfs \citep{2020ApJ...891..171Z}, and a growing number of planet-host stars \citep{2021MNRAS.507.4132M}. WDs can also age-date larger populations, such as stars in the disk (e.g., \citealt{1987ApJ...315L..77W}), or even the halo of our Galaxy \citep{2019MNRAS.482..965K,2021MNRAS.502.1753T}. WD ages can also be used to study the star formation history of the Milky Way (e.g., \citealt{2014ApJ...791...92T}). An understanding of the limitations and reliability of WD total ages is crucial to the inferences derived from these studies.
The total age derived for a WD is strongly dependent on the IFMR used, especially for the lowest-mass WDs ($M<0.7$\,$M_{\sun}$). However, the low-mass region of the IFMR is poorly sampled because nearby clusters containing at least one WD are not old enough. Attempts have been made to alleviate this problem by using wide binary companions to WDs \citep{2012ApJ...746..144Z, 2021ApJ...923..181B} and the distribution of WDs in the color-magnitude diagram \citep{2018ApJ...860L..17E}, but this still results in a large range of predicted progenitor masses across different IFMRs. Further, the IFMR may show evidence of a kink where the mass-loss relationship is no longer monotonic \citep{2020NatAs...4.1102M}. The IFMR is likely also sensitive to the metallicity and initial rotation of the progenitor star \citep{2019ApJ...871L..18C}. These parameters are not easy to determine, and thus assumptions for their values must be made.
Widely separated double WD binaries (WD+WD binaries) are ideal systems for checking the viability of our existing methods for determining WD total ages. It is safe to assume the two WDs bound to each binary system formed from the same collapsing molecular cloud at nearly the same time with the same chemical composition \citep{2001ApJ...556..265W, 2009ApJ...704..531K}. Wide WD+WD binaries that are sufficiently separated ($>100$\,au) have not experienced any significant mass transfer events or other interactions between the two stars during their lifetimes. At intermediate separations (1-100\,au) where tidal forces, mass transfer, and accretion are not a significant source of interaction, it has been shown that the interaction between a companion and the circumstellar disk can affect the angular momentum evolution of the star \citep{2012AJ....144...93M}. In the absence of any interactions between the two stars, any methods for determining the total age of a WD should independently return the same total age for both components of the binary.
Since the launch of the Gaia spacecraft (\citealt{2016A&A...595A...1G}, \citealt{2018A&A...616A...1G}, \citealt{2021A&A...649A...1G}), the sample of wide WD+WD binary systems has increased by an order of magnitude (\citealt{2015ApJ...815...63A}, \citealt{2018MNRAS.480.4884E}, \citealt{Tian}, \citealt{2021MNRAS.506.2269E}). This increase in systems presents the opportunity for a large statistical study into the accuracy of the models and methods used to determine the total ages of WDs, which is the focus of this work. In Section 2, we present the sample selection of resolved WD+WD binary systems, including 23 new WD+WD+MS systems and one bound WD+WD+WD system. Section 3 discusses the techniques used to determine their ages. In Section 4, we discuss comparisons of the total ages in each binary. We end with a summary of the findings and results in Section 5.
\begin{figure*}[t]
\centering
\includegraphics[width=1.95\columnwidth]{sample_panels.pdf}
\caption{Basic properties of the full sample of 3179 WDs in wide WD+WD binaries from \citet{2021MNRAS.506.2269E} used in this work, including 23 of our newly discovered wide WD+WD+MS triples and one WD+WD+WD triple: distribution in galactic coordinates, color-magnitude diagrams for the primaries and secondaries, apparent magnitude distributions, magnitude difference, distances, and projected angular and physical separations. The primaries are defined to be the more massive WD component. A large range of the WD cooling tracks is covered by the sample, which allows us to test WD total ages across a representative sample of WD masses and effective temperatures. Cooling tracks for a 0.6\,$M_{\sun}$ and 0.9\,$M_{\sun}$ WD model from \citet{2020ApJ...901...93B} are shown for reference on the color-magnitude diagrams, as well as dots representing WD cooling ages of 0.5, 1, 2, 4, and 6 Gyr for each. The separation distribution of pairs with $\texttt{R\_chance\_align}>0.1$ is shown in cyan for reference in the bottom right panels. The upturn in the distribution at high separations is caused by these possible chance alignments.}
\label{fig:sample}
\end{figure*}
\section{Sample Selection}\label{sec:sample}
Obtaining a large sample of wide double WD pairs is crucial for this work. The improved parallaxes and proper motions for more than 1 billion sources all-sky with the release of Gaia EDR3 (\citealt{2021A&A...649A...1G}) significantly increased the potential yield of searches for wide common-proper-motion pairs. With this larger sample we can delve into the systematic errors present in WD total age determinations.
\subsection{Binary Sample}
For this work, the base of our sample is composed of the 1565 wide WD+WD binaries from \citet{2021MNRAS.506.2269E}. Candidate pairs in this work were selected to have projected separations less than a parsec, parallaxes that are consistent within $3\sigma$, and proper motions consistent within $2\sigma$ of a bound orbit. This sample is restricted to objects with fractional parallax uncertainties less than 20\% for both components, with most systems within 300\,pc. The sample covers a wide range of magnitudes, spanning $13<G<21$\,mag, with roughly 68\% of the 3130 individual WDs in the sample fainter than $G>19$\,mag.
\citet{2021MNRAS.506.2269E} calculate a chance alignment factor (\texttt{R\_chance\_align}) for each candidate binary pair through the use of a shifted Gaia catalog. Conducting a search for binaries in the shifted catalog only produces chance alignments, and thus can be used as a way to determine the likelihood that a binary pair is a chance alignment (see their Sec 3). This parameter can be used to filter out low-fidelity wide binaries.
\subsection{Triple Sample}
To further increase the sample of wide double WDs, we conduct a search for wide, resolved triple star systems containing at least two WDs. In \citet{2021MNRAS.506.2269E}, all triple systems are intentionally removed from the sample by the neighboring pairs cut (see their Sec 2.1). In this work, we take the sample of binary pairs from \citet{2021MNRAS.506.2269E} before the cleaning of the sample and remove the neighboring pairs cut, but retain the cut on neighboring sources of $N < 30$, where a neighboring source is defined, following \citet{2021MNRAS.506.2269E}, as a source with (i) projected on-sky separation less than 5 parsecs; (ii) proper motion difference less than 5 $\mathrm{km\ s^{-1}}$ with a $2\sigma$ tolerance; (iii) parallaxes consistent within $2\sigma$.
Next, we build systems of various sizes by merging all binaries that share at least one common source, iterating until the resulting systems are pairwise disjoint.
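This merging step is a connected-components computation on the graph whose vertices are Gaia source IDs and whose edges are the candidate pairs; a minimal sketch (with made-up IDs) using a union-find structure:
\begin{verbatim}
from collections import defaultdict

def group_systems(pairs):
    # Merge binary pairs (id1, id2) that share a source ID into
    # connected components ("systems") with a union-find structure.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for s1, s2 in pairs:
        union(s1, s2)

    systems = defaultdict(set)
    for s in parent:
        systems[find(s)].add(s)
    return list(systems.values())

# two pairs sharing a component form one triple; the third pair stays a binary
pairs = [(101, 102), (102, 103), (200, 201)]
triples = [s for s in group_systems(pairs) if len(s) == 3]
\end{verbatim}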
To clean this new sample of possible wide systems, we use the classifier for spurious astrometric solutions provided in \citet{2021arXiv210111641R}. We require each component of the triple to have the \texttt{fidelity\_v1} parameter greater than 0.5. This assures that each source has a reliable astrometric solution. We are only concerned with finding systems with three components in this extended list and throw out any systems with a size not equal to three. (Repeating this exercise for $N=4,5,\ldots$, we do not find any systems with a size greater than $3$ in this extended list of wide systems with at least two WDs.)
With these cuts, we identify 23 new WD+WD+MS triples and one WD+WD+WD triple. This list includes the first resolved triple WD system found by \citet{2019MNRAS.483..901P}. All WD+WD+MS and WD+WD+WD triple systems are detailed in Table~\ref{tab:triples}. All but three of the triples in our sample are hierarchical, where we have defined a hierarchical triple as one in which the largest separation is greater than five times the smallest separation. An excess of hierarchical triples is expected since the hierarchy improves dynamical stability in the system \citep{1983ApJ...268..319H}.
The total ages of the non-hierarchical systems (WDWDMS EDR3 3681220906103896192, WDWDMS EDR3 6756233901661753472, WDWDMS EDR3 929001808377561216) can provide interesting constraints on the timescales of dissipation of such systems. In both WDWDMS EDR3 3681220906103896192 and WDWDMS EDR3 6756233901661753472, one of the WDs lacks a total age estimate because of its low mass ($0.2\,M_{\odot}$ and $0.35\,M_{\odot}$, respectively), but the total ages of the other WDs in these triples are $92^{+3}_{-5}$ Myr and $2.9^{+0.4}_{-0.2}$ Gyr, respectively. The weighted mean total age of the WDs in WDWDMS EDR3 929001808377561216 is $3.3^{+3.1}_{-1.3}$ Gyr. Further information is given in Table~\ref{tab:triples}.
Although two of these non-hierarchical wide triples seem too old ($>2.5$\,Gyr) to still be bound, it has recently been shown that the unstable phase of these triples on average lasts $10^3-10^4$ crossing times, and can even last as long as $10^{6}-10^{7}$ crossing times \citep{2021arXiv210804272T}. Assuming the crossing time is of similar magnitude to the outermost orbital period, the two older systems would have a crossing time of $\sim{10^4}$ years, indicating the older systems have experienced $\sim{10^5}$ crossing times. Of the 21 systems that are hierarchical, 19 have an inner pair of two WDs, although this is likely an observational bias caused by the high luminosity ratio of a MS star to a WD.
Our search is sensitive to wide triple WD systems, but we do not find any new wide WD+WD+WD systems. We do find a possible candidate for a new WD+WD+WD system (Gaia EDR3 source IDs: 261249666477351552, 261249670772776832, 261249636413169920) but one component has a \texttt{fidelity\_v1} parameter from \citet{2021arXiv210111641R} of 0.01. Still, all three WDs have significant parallaxes and proper motions that are in better than $2\sigma$ agreement. With all components $G>20$\,mag, this potential new WD+WD+WD is significantly fainter than the first published triple WD system. This limits the parallaxes of each WD to 10\%-15\% uncertainties. The system lies somewhere between $100-130$\,pc, with the weighted mean distance being $118\pm8$\,pc. This possible WD triple has a similar hierarchy as the first such identified system in \cite{2019MNRAS.483..901P}; the system has an inner pair separated by $\approx$$170$\,au and an outer component separated by $\approx$$6500$\,au. (The first triple WD system has an inner separation of $\approx$$300$\,au and outer separation of $\approx$$6400$\,au.) Follow-up analysis is needed to confirm the true nature of this possible WD+WD+WD system.
\subsection{Full Sample}\label{sec:sample_full}
Various parameter distributions of the full sample, including a color-magnitude diagram (CMD) of all wide double WDs, are shown in Figure~\ref{fig:sample}. Our full sample of wide WD+WD pairs spans a large range of the WD cooling sequence, covering many masses and effective temperatures. The sample also spans many different evolutionary stages. In fact, we expect roughly 430 WDs (13\%) to be mostly crystallized, based on the 80\% crystallization boundary from \citet{2019Natur.565..202T}. However, many of our targets are quite faint; the majority of sources are fainter than $G>19.5$\,mag (see Figure~\ref{fig:sample}). The separation distribution shows an excess of high separation pairs. These pairs are likely chance alignments and can be removed with a cut on \texttt{R\_chance\_align} from \citet{2021MNRAS.506.2269E}, as discussed in Section~\ref{sec:results}.
In preparation for the eventual total age determination, we gather additional photometric data on each WD in order to sample a larger range of wavelengths than just the three broad photometric bands in Gaia EDR3. We use the crossmatches provided by the Gaia team for EDR3 (\citealt{2021gdr3.reptE...9M}) to get photometric data from the Sloan Digital Sky Survey (SDSS), the Panoramic Survey Telescope and Rapid Response System (PanSTARRS), the SkyMapper Southern Survey, and the Two Micron All Sky Survey (2MASS). We find that 32\%, 70\%, 23\%, and 4\% of the WDs in our sample have photometry in SDSS, PanSTARRS, Skymapper, and 2MASS, respectively.
\section{Total Age Determination}
WD total ages have two necessary components that need to be determined: the time the star has spent as a WD (its cooling age) and its progenitor main-sequence and giant branch lifetimes. To calculate the cooling age of a WD, atmospheric parameters of the WD must be determined, which can be derived using either spectroscopic or photometric data in conjunction with model atmospheres. Fundamentally, the WD surface gravity and temperature must be determined.
Spectroscopic observations are available in the literature for roughly 300 of the WDs in our sample, so we lack spectroscopy for the vast majority of our 3179 objects. Therefore, we resort to conducting spectral energy distribution (SED) fitting of the average fluxes in various photometric bands from all sky surveys. In practice, SED fitting is sensitive to the radius and temperature of a WD. This radius can be converted into a surface gravity through the use of the mass-radius relationship for WDs \citep{2020ApJ...901...93B}.
Once a surface gravity and temperature are determined, an estimate of the mass of the WD can be calculated using the same mass-radius relationship. This estimate of the mass, with the use of an IFMR and stellar evolutionary grids, can be used to generate an estimate of the lifetime of the ZAMS progenitor. We discuss this process in more detail below.
\subsection{WD Atmospheric Parameters}
\label{sec:atm}
To determine masses and effective temperatures of the WDs in our sample, we use available photometry from Gaia EDR3 and the all-sky surveys discussed in Section \ref{sec:sample_full}, in conjunction with a fitting technique similar to the photometric technique described in \cite{2019ApJ...876...67B}.
Reddening effects from interstellar gas and dust can be important for the WDs in our sample (the majority are at a distance beyond 150\,pc). Thus, the observed magnitudes in each band are dereddened following the process outlined in \citet{2019MNRAS.482.4570G}, which we summarize here. We use reddening maps from \cite{2011ApJ...737..103S} to determine the extinction coefficient in the Johnson $V$ band, $A_{{\rm V}}$. Then, the $A_{{\rm V}}$ values are converted to extinction coefficients in the other bands using the appropriate $R$ values. We assume the material is concentrated along the galactic plane and has a scale height of 200\,pc. When parameterised this way, the dereddened magnitude in a given band $x$ is determined by,
\begin{equation}
M_{{\rm x}} = M_{{\rm x,obs}} - A_{{\rm x}}\left(1-\exp \left(-\frac{\sin \left(|b|\right)}{200\,\omega}\right)\right)
\end{equation}
where $M_{{\rm x,obs}}$ is the observed magnitude, $A_{{\rm x}}$ is the extinction coefficient, $b$ is the galactic latitude, and $\omega$ is the parallax in arcseconds. A more sophisticated reddening determination could be used, similar to what is done in \cite{2021MNRAS.508.3877G}, but in comparisons with the atmospheric parameters derived in their work, we find no systematic shifts and the parameters are in good agreement (as demonstrated below).
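For concreteness, a minimal implementation of Eq.~(1) is sketched below; here \texttt{A\_x} is assumed to be the full extinction coefficient in band $x$, obtained from the \citet{2011ApJ...737..103S} map and the appropriate $R$ value:
\begin{verbatim}
import numpy as np

def deredden(m_obs, A_x, b_deg, parallax_arcsec, h_pc=200.0):
    # only the dust column below z = sin(|b|)/parallax (in pc) is applied,
    # assuming an exponential dust layer with scale height h_pc
    z = np.sin(np.radians(np.abs(b_deg))) / parallax_arcsec
    return m_obs - A_x * (1.0 - np.exp(-z / h_pc))

# e.g. G = 19.50 mag, A_G = 0.12 mag, b = 30 deg, distance 200 pc
m0 = deredden(19.50, 0.12, 30.0, 1.0 / 200.0)
\end{verbatim}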
The observed, dereddened magnitudes are converted to average fluxes using the appropriate zero points for each bandpass. We apply the $G$-band magnitude correction for all sources with a six-parameter astrometric solution provided in \cite{2021A&A...649A...1G}. To account for the offset of SDSS magnitudes from the AB magnitude standard, we apply an offset of $-$0.04 and 0.02 mags in the $u$ band and $z$ band, respectively \citep{2006AJ....132..676E}. For each survey, we remove photometry that has been flagged for various issues. For SDSS, we remove photometry that has been flagged with \texttt{EDGE}, \texttt{PEAKCENTER}, \texttt{SATUR}, and \texttt{NOTCHECKED}. We remove photometry from PanSTARRS that uses rank detections less than one. For SkyMapper, we only use photometry that has no raised flags. We also enforce a minimum uncertainty in each bandpass of 0.03\,mag to account for systematic errors in the conversion of magnitudes to average fluxes. This also gives equal weight to each bandpass and one point with a small error will not dominate the behavior of the fits.
We generate synthetic average fluxes in a given bandpass using synthetic hydrogen atmosphere WD (spectral type DA) spectra from \cite{2010MmSAI..81..921K}, convolved with the appropriate filter transmission profiles. The observed and synthetic fluxes are adjusted to a distance of 10\,pc using the weighted mean of the parallaxes of the two components of the binary, or the weighted mean of all three components for our triples. To adjust the synthetic fluxes to 10\,pc, the radius of the WD must be calculated from a mass-radius relation. We use the WD evolutionary models from \cite{2020ApJ...901...93B} to convert the temperature and surface gravity of a given model to a radius. The photometric uncertainties are then added in quadrature with the uncertainty on this weighted mean parallax. The higher precision of the weighted mean parallax in turn allows for more precise determinations of the atmospheric parameters. For the sample of triples, the precision of the parallax is six times better on average, because the MS companions are much brighter. This leads to the WDs in these triples having smaller mass uncertainties than what is expected given their brightness (an average improvement of 40\% for the brighter WD component and 15\% for the fainter WD component).
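The weighted mean parallax and its uncertainty are the standard inverse-variance averages, e.g.:
\begin{verbatim}
import numpy as np

def weighted_mean_parallax(plx, plx_err):
    # inverse-variance weighted mean of the component parallaxes
    plx, plx_err = np.asarray(plx), np.asarray(plx_err)
    w = 1.0 / plx_err**2
    return np.sum(w * plx) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

# e.g. two components at 10.0 +/- 0.5 mas and 10.4 +/- 0.2 mas
mean_plx, mean_plx_err = weighted_mean_parallax([10.0, 10.4], [0.5, 0.2])
\end{verbatim}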
Due to the lack of spectral information for the majority of our objects, we assume all of our WDs are DAs. Although fitting a non-DA WD with a DA model can introduce systematic mass errors on the order of $10-15$\% \citep{2012ApJS..199...29G}, we are forced to choose a model. A DA model is the simplest assumption, considering $>$65\% of WDs in magnitude-limited samples within Gaia are DAs \citep{2013ApJS..204....5K}.
We use the python package emcee \citep{2013PASP..125..306F}, an implementation of the affine-invariant Markov chain Monte Carlo ensemble sampler of \cite{2010CAMCS...5...65G}, to sample the posterior distributions of the temperature and surface gravity for each WD in the sample. We adopt the $50^{\mathrm{th}}$ percentile of these posteriors as an estimate of the best fit. We use flat priors on both effective temperature and surface gravity, with bounds defined from the limits of the models ($3000-40{,}000$\,K and $\log{g}$ of $6.0-9.5$ dex), and assume Gaussian errors. Using the WD evolutionary models from \citet{2020ApJ...901...93B}, which assume thick H layers, we convert the surface gravities and temperatures to masses and cooling ages. The lower and upper errors on these parameters are taken as the 16th and 84th percentiles, respectively. To account for any unknown systematics, we set a lower limit on the uncertainty of the atmospheric parameters derived from the fits of 0.038 dex for surface gravity and 1.2\% in temperature \citep{2005ApJS..156...47L}. An example of a fit can be seen in Figure~\ref{fig:example_fit}.
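A sketch of this sampling set-up follows. The synthetic photometry is replaced by a toy stand-in (\texttt{model\_fluxes}) purely so that the example runs end to end; in the actual fits it is the interpolated \cite{2010MmSAI..81..921K} DA grid combined with the \citet{2020ApJ...901...93B} mass-radius relation, scaled to 10\,pc:
\begin{verbatim}
import numpy as np
import emcee

wl = np.array([0.48, 0.62, 0.77, 0.87, 0.96])    # placeholder bandpasses (microns)

def model_fluxes(teff, logg):
    # toy stand-in for the interpolated synthetic DA photometry
    radius = 0.013 * (8.0 / logg)
    return radius**2 * teff / wl**4

truth = (11000.0, 8.0)                           # fake "observed" WD for the demo
f_err = 0.03 * model_fluxes(*truth)
f_obs = model_fluxes(*truth) + f_err * np.random.randn(f_err.size)

def log_prob(theta, f_obs, f_err):
    teff, logg = theta
    if not (3000.0 < teff < 40000.0 and 6.0 < logg < 9.5):   # flat priors
        return -np.inf
    return -0.5 * np.sum(((f_obs - model_fluxes(teff, logg)) / f_err) ** 2)

ndim, nwalkers = 2, 32
p0 = [12000.0, 7.9] + np.array([300.0, 0.05]) * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(f_obs, f_err))
sampler.run_mcmc(p0, 4000)
chain = sampler.get_chain(discard=1000, flat=True)
teff_fit, logg_fit = np.percentile(chain, 50, axis=0)    # adopted values
lo16, hi84 = np.percentile(chain, [16, 84], axis=0)      # 1-sigma bounds
\end{verbatim}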
\begin{figure}[t!]
\centering
\gridline{\fig{SED_for_paper_title.pdf}{0.99\columnwidth}{}} \gridline{\fig{corner_for_paper.pdf}{0.99\columnwidth}{}}
\caption{An example fit to get atmospheric parameters of our WDs. The top two panels show the SED, including all observed photometry and a representative atmospheric model, as well as the residuals from the fit. The posterior distributions from the MCMC fitting code used in this work are shown in the bottom part of the figure.}
\label{fig:example_fit}
\end{figure}
As a check on our methods, we compare the atmospheric parameters we derive to the parameters from the 100-pc WD sample of \citet{2020ApJ...898...84K}, and to the full Gaia EDR3 WD catalog from \citet{2021MNRAS.508.3877G}, which were also determined from the photometric technique we employ. The comparison to the 100-pc WD sample from \citet{2020ApJ...898...84K} reveals no significant systematic differences between the two works. The atmospheric parameters on average agree within $0.9\sigma$ (100\,K) and $0.6\sigma$ (0.03\,dex) for effective temperature and surface gravity, respectively. We observe some small dispersion at higher temperatures, which we expect since our work incorporates interstellar reddening whereas \citet{2020ApJ...898...84K} does not. Comparing to the Gaia EDR3 WD catalog, we find that the atmospheric parameters on average agree within $0.4\sigma$ (300\,K) and $0.35\sigma$ (0.1\,dex) for effective temperature and surface gravity, respectively. The quoted uncertainties we find are significantly smaller than the ones in \citet{2021MNRAS.508.3877G} due to the use of more photometric points in the SED and the weighted parallax. Although we have smaller uncertainties and use different methods to determine the amount of interstellar reddening, we do not find any systematic shifts in derived WD atmospheric parameters.
We also search for spectral types of our objects that have been published in the literature, finding 305 objects with previously reported spectra. The sources for these spectral types can be found in the catalog of WD pairs described in Table~\ref{tab:description}.
We supplement these previously published spectral types with 57 spectra taken on the DeVeny Spectrograph mounted on the 4.3-m Lowell Discovery Telescope (LDT, \citealt{2014SPIE.9147E..2NB}) in Happy Jack, Arizona. Using a 300 line $\mathrm{mm}^{-1}$ grating we obtain a roughly 4.5 \AA\ resolution, and we rotate the position angle of the slit to capture both WDs simultaneously in our observations. We extract the spectra using a quick-look pipeline built on the ccdproc and specutils packages affiliated with AstroPy \citep{2018AJ....156..123A}, and wavelength calibrate them with a HgArNe lamp. We do not attempt to fit these spectra and only conduct a visual inspection for the presence of hydrogen absorption features, and include the resulting spectral type in the sp\_type column described in Table~\ref{tab:description}. Future work will analyze atmospheric parameters derived from this spectroscopy.
Comparing the derived atmospheric parameters of the previously identified DAs with the parameters derived in this work, we find that the effective temperatures agree well but that the surface gravities determined from spectroscopic observations differ from our values derived from photometric information. We find that the atmospheric parameters of our work and the spectroscopic parameters on average agree within $0.9\sigma$ (300\,K) and $1.3\sigma$ (0.1\,dex) for effective temperature and surface gravity, respectively. We also find that the spectroscopic fits yield higher surface gravities for the WDs on average, and thus hotter temperatures as well. Although the absolute differences are similar to the other comparisons, the quoted uncertainties on the spectroscopic parameters are smaller and result in a stronger disagreement in surface gravity ($1.3\sigma$ compared to $0.9\sigma$ and $0.35\sigma$).
\subsection{Initial Masses and Progenitor Lifetimes}
To get the total ages of the WDs in our sample, we need an estimate of the lifetime of the ZAMS progenitor. The first step towards this is to determine an initial main-sequence mass for each of the WDs in our sample. To get these masses, an initial-final mass relation is needed. In this work, we use the theoretical IFMR from \citet{2016ApJ...823...46F}, which was generated by running a suite of solar metallicity stellar evolution models from the Modules for Experiments in Stellar Astrophysics (MESA, \citealt{2011ApJS..192....3P}, \citealt{2013ApJS..208....4P}, \citealt{2015ApJS..220...15P}) starting at the pre-MS phase through the thermally pulsing asymptotic giant branch to the final WD phase. Using the same open-source stellar evolution code for both the IFMR and the determination of ZAMS progenitor lifetimes keeps our analysis self-consistent and lets us adopt a theoretically motivated shape for the IFMR.
The theoretical IFMR (shown as the dashed line in Figure~\ref{fig:IFMR_fit}) lies below what is observed for WDs in near solar metallicity clusters, an effect that is still not fully understood, though it might involve uncertainties in the amount of convective overshoot that is used in the models \citep{2016ApJ...823...46F} or uncertainties in nuclear reaction rates that can affect the core mass (e.g., \citealt{2017RvMP...89c5007D}). To account for this systematic effect, we fit a positive offset to the IFMR using the same WDs in solar metallicity clusters. We find the best-fitting offset to apply to the final WD mass is $+0.0636$\,$M_{\sun}$. The points of this shifted IFMR can be found in Table~\ref{tab:IFMR}.
Using this modified IFMR (the solid line in Figure~\ref{fig:IFMR_fit}), we generate progenitor MS masses for all the WDs in our sample. We convert these MS masses into progenitor lifetimes by interpolating the solar metallicity, zero rotation MESA evolutionary tracks of \citet{2016ApJS..222....8D} and \citet{2016ApJ...823..102C} and summing the main-sequence and giant branch lifetimes.
The uncertainties on the progenitor main-sequence masses and lifetimes follow from the $1\sigma$ uncertainties on the WD mass obtained from the model-atmosphere fitting described in Section~\ref{sec:atm}, which set upper and lower limits on the MS mass. The uncertainty on the progenitor lifetime is then quoted as the difference between these $1\sigma$ bounds and the central value. Total ages are determined as the sum of the cooling age and the progenitor lifetime, and the uncertainties on the total age are calculated by adding the uncertainties on the cooling age and the progenitor lifetime in quadrature. A table of all double WD pairs with atmospheric parameters and total ages can be accessed at \href{https://sites.bu.edu/buwd/files/2022/05/Table_A1.csv}{https://sites.bu.edu/buwd/files/2022/05/Table\_A1.csv}; a description of the table columns and data can be seen in Table~\ref{tab:description}.
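Schematically, the step from WD mass to total age can be sketched as below; the tabulated numbers are rough illustrative placeholders rather than the adopted IFMR of Table~\ref{tab:IFMR} or the MESA lifetimes:
\begin{verbatim}
import numpy as np

ifmr_mf = np.array([0.55, 0.58, 0.63, 0.68, 0.80, 0.88, 0.95, 1.05])  # WD mass
ifmr_mi = np.array([0.85, 1.10, 2.00, 3.00, 4.00, 5.00, 6.00, 7.00])  # ZAMS mass
mi_grid = np.array([0.8, 1.0, 1.5, 2.0, 3.0, 5.0, 7.0])               # Msun
tau_grid = np.array([25.0, 11.0, 2.7, 1.2, 0.40, 0.10, 0.05])         # Gyr

def progenitor_lifetime(m_wd):
    # invert the (monotonic) IFMR, then interpolate the MS+giant lifetimes
    return np.interp(np.interp(m_wd, ifmr_mf, ifmr_mi), mi_grid, tau_grid)

def total_age(m_wd, m_wd_err, t_cool, t_cool_err):
    tau = progenitor_lifetime(m_wd)
    # 1-sigma WD-mass bounds propagated through the same relations
    # (a higher WD mass means a more massive, shorter-lived progenitor)
    err_up = progenitor_lifetime(m_wd - m_wd_err) - tau
    err_dn = tau - progenitor_lifetime(m_wd + m_wd_err)
    return (t_cool + tau,
            np.hypot(t_cool_err, err_dn),   # lower error on the total age
            np.hypot(t_cool_err, err_up))   # upper error on the total age
\end{verbatim}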
As a check, we compare the progenitor lifetimes from our MESA IFMR to ones derived from the three-piece, cluster-calibrated IFMR from \cite{2018ApJ...866...21C}. We find that the progenitor lifetimes from both IFMRs are within 25\% of each other on average. The difference between the total ages is on average 10\% when comparing the two IFMRs. Overall, we find that the use of the IFMR from \cite{2018ApJ...866...21C} does not significantly change the conclusions discussed in subsequent sections.
\begin{figure}
\centering
\includegraphics[width=0.975\columnwidth]{IFMR_fit_notPiecewise.pdf}
\caption{The initial-to-final mass relation (IFMR) used in this work to connect a WD mass to a single ZAMS progenitor mass. The dashed line shows the theoretical IFMR from MESA calculated by \citet{2016ApJ...823...46F}. The observational data points are WDs in solar metallicity clusters along with Sirius B from \citet{2008MNRAS.387.1693C}, \citet{2015ApJ...807...90C} and \citet{2016ApJ...818...84C}. The solid line shows our adopted IFMR, which includes a constant y-axis offset of $+0.0636$\,$M_{\sun}$ found as the best fit to the cluster observations (the values for which are in Table~\ref{tab:IFMR}).}
\label{fig:IFMR_fit}
\end{figure}
\subsection{Precision of Total Ages}\label{sec:age_precision}
The WD mass is the dominant factor for determining the total age of the WDs in our sample, and subsequently, the uncertainties in the WD mass strongly influence the degree of precision on the total ages.
\begin{figure*}
\centering
\includegraphics[width=1.99\columnwidth]{age_precision.pdf}
\caption{Visualization of how the precision on the ZAMS progenitor lifetime changes as a function of the WD mass uncertainty for a $1.0$\,$M_{\sun}$ WD (left) and $0.6$\,$M_{\sun}$ WD (right). The grey shaded regions represent the 1$\sigma$ uncertainties on the progenitor lifetimes. The axes are scaled to show the relative size of the uncertainties. The ZAMS progenitor lifetimes are given in the top left of each plot. The dashed vertical lines show the average relative uncertainty on the WD mass at a given $G$-band magnitude. The uncertainty on the WD mass is also affected by the precision of the parallax, and thus, for a given $G$-band magnitude there is some dispersion where more precise parallaxes give better mass constraints.}
\label{fig:age_precision}
\end{figure*}
This is especially true for the lower-mass WDs. For a typical $0.6$\,$M_{\sun}$ WD \citep{2007MNRAS.375.1315K,2016MNRAS.461.2100T}, the initial progenitor MS mass is around $1.2$\,$M_{\sun}$, which corresponds to a progenitor lifetime of $\approx 6$\,Gyr. A $5\%$ decrease in the WD mass results in a ZAMS progenitor mass of $0.88$\,$M_{\sun}$ and a progenitor lifetime of around 17\,Gyr. Thus, for WDs near this mass range, it is extremely difficult to constrain the total age without very precise WD mass estimates. This problem is compounded by the fact that the lower-mass region of the IFMR is the least constrained. Depending on the IFMR used, initial mass values for a 0.6\,$M_{\odot}$\ WD can range between $1.2-2$\,$M_{\odot}$, which corresponds to progenitor lifetimes between $1.4-6$\,Gyr. The problem is further exacerbated by the fact that, for these low-mass WDs, the progenitor lifetime is a major fraction of the total age.
We stress that low-mass WDs are still useful in providing a lower limit on total ages. A $0.6$\,$M_{\sun}$ WD with a 5\% error on its mass can provide a $1\sigma$ lower limit of 1.4 Gyr on the progenitor lifetime, modulo uncertainties on the IFMR used. This, in conjunction with a precise cooling age, can still provide useful information on these WDs and their companions.
Fortunately, the opposite is true for the higher-mass WDs. Higher-mass WDs ($>0.8$\,$M_{\sun}$) come from massive progenitors, which have short ZAMS progenitor lifetimes. Thus, even larger mass uncertainties for high-mass WDs do not result in large errors on the progenitor lifetime (see Figure~\ref{fig:age_precision}). Put another way, high-mass WDs can have large relative uncertainties in their progenitor lifetimes but still have relatively precise total ages. Additionally, the IFMR of these high-mass WDs are better empirically constrained from cluster WDs.
\section{Coeval Analysis}\label{sec:results}
Before engaging in any analysis on the total ages, we remove systems where we know our methods described in the previous section return unreliable total ages. We apply cuts on the sample to remove all systems with on-sky separation $<$2\arcsec, as well as systems with \texttt{R\_chance\_align}~$<$~0.1. We also remove systems with at least one component with $\mathrm{T_{eff}}<3200$\,K, as well as those with an inferred total age $>14$\,Gyr.
The cut to on-sky separation removes systems with significant blending of the photometry, which confuses the SED fits (154 of the WD pairs have on-sky separations $<2$\arcsec). We adopt the cut on the \texttt{R\_chance\_align} parameter provided in \citet{2021MNRAS.506.2269E}; cutting at 0.1 ensures the remaining systems have a high probability of being bound ($>90$\%). Although we cut on a $<10$\% chance alignment probability, we emphasize that the expected contamination fraction from chance alignments is much less than 10\%, because most of the pairs in the sample have chance alignment probabilities that are much lower than 10\%. From the actual chance alignment probabilities of the pairs in our sample, we estimate that there are $4\pm2$ chance alignments in the sample, corresponding to a contamination fraction of about 0.3\%. The wide WD+WD sample contains 1390 high fidelity pairs with \texttt{R\_chance\_align} $< 0.1$ (see Figure~\ref{fig:sample}).
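One way to arrive at such an estimate is to treat the retained pairs as independent trials, so that the expected number of chance alignments is the sum of the per-pair probabilities; a sketch with placeholder probabilities:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
r = rng.uniform(0.0, 0.006, 1390)   # placeholder R_chance_align values
keep = r < 0.1                      # the adopted high-fidelity cut
n_exp = r[keep].sum()               # expected number of chance alignments
n_sig = np.sqrt(np.sum(r[keep] * (1.0 - r[keep])))  # Poisson-binomial scatter
frac = n_exp / keep.sum()           # contamination fraction
\end{verbatim}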
The cut on effective temperature is employed because our model grid stops at 3000\,K. Since our fitting routine does not attempt to extrapolate, objects with reported temperatures near the cutoff will have underestimated uncertainties, and their temperatures will be artificially hotter than their true temperatures. We also ignore all systems with a total age greater than that of the Universe. Many of these systems come from overluminous sources in the Gaia CMD (see Figure~\ref{fig:sample}). These sources have inferred WD masses so low that, if real, they could not have been produced by single-star evolution. At lower temperatures, the observed WD sample departs from the cooling model trends, which may be the origin of these unreliably low masses (see Figure~\ref{fig:sample}). There are a total of 670 WDs with inferred total ages $>14$ Gyr, which results in 548 pairs being removed.
After these cuts, the high-fidelity sample used for analysis contains 1272 WDs in 636 wide double WD pairs. The mean magnitude of this surviving sample is $G=19.3$\,mag.
\subsection{Age Comparison}\label{sec:age_compare}
The first test of the total ages is to check the absolute age difference in each pair. Since we assume the stars in each wide pair in our sample are coeval, we expect them to return identical total ages. This absolute age difference is plotted in Figure~\ref{fig:masscut}. The total age difference between the components is typically close to zero until one of the component masses falls below $\sim0.63\,M_\odot$. These systems do not provide very precise ages, and it is evident that the systematic uncertainty in these systems is too large to allow statistically significant comparisons.
This leads us to remove any systems that have a WD with a mass less than $<0.63\ M_\odot$ from any further analysis. These systems can still be useful to provide lower limits on the total age, as discussed in Section~\ref{sec:age_precision}, but we are not able to assess their reliability due to the large systematics in conjunction with their large uncertainties. After removing these systems, the number of wide WD+WD pairs is reduced to 423, with a total of 846 WDs.
\begin{figure}
\centering
\includegraphics[width=0.975\columnwidth]{masscut.pdf}
\caption{The total age difference in each binary as a function of the mass of the secondary in each binary (the least massive component) for the 636 binaries (1272 WDs) surviving the cuts outlined in Section~\ref{sec:results}. The red line shows the cut used to remove the poorly behaved low mass systems, whose behavior changes drastically at $M < 0.63 M_\odot$. Since the secondary is always the least massive component, the cut shown by the red line removes all systems with WDs $< 0.63 M_\odot$.}
\label{fig:masscut}
\end{figure}
The second test of the accuracy of the total ages is to compare the ages of the WDs in each binary in relation to the uncertainties on those total ages. To do this, we explore distributions of age agreement, which we define as
\begin{equation}
\sigma_{1-2} = \frac{\Delta \tau}{\sigma_{\Delta \tau}} = \frac{\tau_1 - \tau_2}{\sqrt{\sigma_{\tau_1}^2 + \sigma_{\tau_2}^2}}
\end{equation}
where $\tau_1$ and $\sigma_{\tau_1}$ are the total age and uncertainty on the total age of the primary (the most massive component of each binary) and $\tau_2$ and $\sigma_{\tau_2}$ are the total age and uncertainty on the total age of the less massive component. Systems with an age agreement ($\sigma_{1-2}$) equal to 5 have individual component ages that disagree at a $5\sigma$ level.
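In code, the statistic is simply:
\begin{verbatim}
import numpy as np

def age_agreement(tau1, err1, tau2, err2):
    # sigma_{1-2}: total age difference in units of its combined uncertainty
    return (tau1 - tau2) / np.hypot(err1, err2)

# e.g. 3.0 +/- 0.4 Gyr versus 3.9 +/- 0.5 Gyr gives about -1.4 sigma
print(age_agreement(3.0, 0.4, 3.9, 0.5))
\end{verbatim}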
We show in Figure~\ref{fig:diff_age} the distribution of the $\sigma_{1-2}$ disagreement for the 423 high-fidelity pairs where both WDs have $M > 0.63 M_\odot$. The distribution exhibits a long tail out to large absolute values and has a notable bias towards negative values. Fewer than half (47\%) of the systems have total age agreement within $1\sigma$. The slight tendency towards negative $\sigma_{1-2}$ disagreement values shows that the more massive component of each pair has a younger total age on average compared to its companion. Possible causes of this asymmetry are further explored in Section~\ref{sec:ill-behaved}. The long tail of the distribution reveals that there is a population of at least 60 systems (14\%) with $\left|\sigma_{1-2}\right|$ greater than five, for which the systematic errors ($\Delta\tau$) are non-negligible compared to the reported random errors ($\sigma_\tau$).
\begin{figure}
\centering
\includegraphics[width=0.975\columnwidth]{diff_age_over_error_full_sample.pdf}
\caption{The distribution of $\sigma_{1-2}$ disagreement for the 423 wide WD+WD pairs in our sample that pass the cuts discussed in Section \ref{sec:results}. The red dashed line shows the expected distribution for a Gaussian with $\sigma=1$ for reference. The distribution is skewed towards negative values, where the primary is the more massive of the two components. The black dashed line represents the $50^{th}$ percentile and the lower and upper errors come from the $16^{th}$ and $84^{th}$ percentiles respectively.}
\label{fig:diff_age}
\end{figure}
\subsection{Ill-Behaved Systems}
\label{sec:ill-behaved}
Substantial uncertainty in the total ages of these WDs comes from the process of converting the WD mass to a ZAMS progenitor lifetime. We can remove the most model-dependent aspect of this process by assuming only that the IFMR is monotonically increasing, such that a higher-mass WD comes from a higher-mass progenitor. This assumption may not hold for all WD masses, as some evidence of a kink is seen in the IFMR for a range of WD masses around $0.65$\,$M_{\odot}$\ (\citealt{2020NatAs...4.1102M}).
Still, in the vast majority of cases, the more massive WD in the binary comes from a more massive main-sequence star, and its progenitor lifetime should be shorter than the progenitor lifetime of its companion. For a coeval binary pair, the cooling age of the more massive WD must then be longer than the cooling age of its companion. Any system where the more massive WD has a shorter cooling age than its lower mass companion is therefore discrepant before invoking any theoretical IFMR or other process for determining a progenitor lifetime.
Our sample contains many systems where this is the case, as shown by Figure~\ref{fig:dm_dtc}. The figure includes systems that pass the temperature, separation, and chance alignment cuts discussed in Section~\ref{sec:results}, but not the total age and mass cuts described there, which we do not apply for this comparison. Roughly 43\% lie in the negative region of the plot, where the more massive WD has a shorter cooling age. This statistic requires that the more massive WD be correctly identified (see the representative uncertainty in Figure~\ref{fig:dm_dtc}); 21\% of our systems lie more than 1$\sigma$ inside the negative region. This is a conservative estimate of the number of ill-behaved systems because the masses of the two WDs in each pair are positively correlated: the two WDs are effectively at the same distance, and the uncertainty on that distance feeds into the uncertainties on both masses. If the system is in fact closer and therefore intrinsically brighter, both derived masses would be lower, leaving the identification of the more-massive primary unchanged.
\begin{figure}
\centering
\includegraphics[width=0.97\columnwidth]{dM_dtc_full_and_DAs.pdf}
\caption{A comparison of the cooling age difference versus mass difference for the full wide binary sample (1259 grey points) and a subset of pairs where both WDs are known to be DA (79 red diamonds). A representative error bar is provided in the bottom right of the figure. A significant fraction of objects (43\% for the full sample, 35\% for the DA+DA systems) sit in the negative region, which implies the more massive component has a shorter cooling age, which is unphysical and likely implies a large fraction of wide WD+WD binaries were once triple systems.}
\label{fig:dm_dtc}
\end{figure}
Thus, somewhere between 21-43\% of the wide WD+WD systems have the more massive WD with a shorter cooling age. Given a monotonic and increasing IFMR, these systems will have large $\sigma_{1-2}$ disagreement values. The systems that have negative cooling age differences shown in Figure~\ref{fig:dm_dtc} show no bias towards any mass or temperature range. Removal of the 187 ill-behaved systems that pass the cuts described in Section~\ref{sec:results} results in a substantial fraction of systems in the long negative tail of the $\sigma_{1-2}$ disagreement distribution being removed (see Figure~\ref{fig:diff_age_ill_behaved}).
\begin{figure}
\centering
\includegraphics[width=0.97\columnwidth]{diff_age_over_error_ill_behaved.pdf}
\caption{The distribution of $\sigma_{1-2}$ disagreement for the 423 wide binaries that pass the cuts outlined in Section \ref{sec:results} (black) compared to the distribution of $\sigma_{1-2}$ disagreement for the sample of systems where the more massive WD has a shorter cooling age (red). These systems can account for the bias towards negative values including the most discrepant systems.}
\label{fig:diff_age_ill_behaved}
\end{figure}
One possible explanation for this discrepancy is our assumption that all of the WDs have hydrogen-dominated atmospheres. Applying a DA model to a non-DA WD can cause systematic errors on the order of 10-15\% in the WD mass (\citealt{2012ApJS..199...29G}), which can lead to the wrong component being identified as the primary. However, systems known to contain two pure DA WDs still appear in the negative region where the higher mass WD has a shorter cooling age (see Figure~\ref{fig:dm_dtc}): around 36\% of the 74 wide DA+DA systems are ill-behaved (with 21\% being more than 1$\sigma$ discrepant). Among systems where one of the WDs is known from spectroscopy to be a non-DA, 63\% of 73 systems lie in the negative region (with 52\% being more than 1$\sigma$ discrepant).
Our spectroscopic follow-up using the LDT was designed to specifically target systems in the negative region of Figure~\ref{fig:dm_dtc} and thus biases our results to this region.
An alternative way to determine if there is an excess of non-DA systems in the ill-behaved region of Figure \ref{fig:dm_dtc} without having to worry about the bias introduced by our follow-up observations is to compare relative rates of spectral types within the sample of the ill-behaved objects. There are 101 systems in the negative region and 156 of the 202 WDs have a spectral type determined. 66\% of these WDs are DAs. In magnitude limited samples, we expect $\approx65$\% of the WDs to be DAs \citep{2013ApJS..204....5K}. For the systems that are more than $1\sigma$ discrepant, 59\% of the WDs with known spectral types are DAs.
Although there is some evidence for an overabundance of non-DAs in the negative region of Figure \ref{fig:dm_dtc}, we conclude it is unlikely that a large population of non-DAs is the full explanation for the number of these ill-behaved systems. To avoid any contamination from non-DAs, we will refer to the values for the DA+DA subsample (21-36\%) when discussing the percentage of ill-behaved systems.
Instead, we propose that most of the ill-behaved systems are the descendants of triples whose inner pair either experienced a merger that effectively reset the age of the more massive WD or is currently unresolved. Through binary population modelling, \citet{2020A&A...636A..31T} find that assuming single star evolution for a merger remnant can result in underestimating the age by $3-5$ times on average, which could manifest as an anomalous cooling age like those in Figure~\ref{fig:dm_dtc}. They also find $10-30$\% of isolated WDs have experienced a merger in their past, usually before the WD phase is reached, which is similar to the percentage of ill-behaved systems in our sample ($21-36$\%). While that value applies to single WDs that were once in binaries, it is a useful anchor for expectations.
If all the systems in the negative region of Figure~\ref{fig:dm_dtc} are or were previously triples, this implies that roughly 21-36\% of the wide WD+WD pairs in our sample started as a triple system and have either undergone a merger or have a close inner pair that is unresolved. This range is consistent with the expected binary and triple fraction ($30-35$\% and $10-20$\%, respectively) for WD progenitor stars that are $2-4$\,$M_{\sun}$ on the ZAMS \citep{2017ApJS..230...15M}. We also find that 50\% of the double WD pairs in the triple systems are in the negative region of Figure~\ref{fig:dm_dtc}. This would imply that these systems are descended from quadruple systems.
At least one well-studied merger remnant is included in our sample: the massive ($>1.1$\,$M_{\sun}$, \citealt{2010A&A...524A..36K}), strongly magnetic (up to 800\,MG, \citealt{1999ApJ...510L..37B}), rapidly rotating (725\,s, \citealt{2003ApJ...593.1040V}) object REJ0317-853 \citep{1997MNRAS.292..205F}. This WD is in a system with a $\sigma_{1-2}$ disagreement value of $-17$ and its effective temperature is at least twice that of its lower mass companion. We conclude that the majority of these ill-behaved systems are likely merger remnants or unresolved triples. Spectroscopic follow-up of these systems can help to disentangle the subset of these objects that are the result of a merger from the unresolved triples, which can place empirical constraints on the triple fraction of WD progenitors.
\subsection{Dependence on Mass and Temperature} \label{sec:dependence}
The dependence of the age disagreement on the WD mass and temperature can provide insights into where our models are insufficient. A treatment and correction for these dependencies is crucial if WD ages are to be used on large scales to age various populations.
We explore the dependence of the age disagreement on the WD mass and temperature by splitting the systems into individual WDs, and inspecting the median of the absolute value of the age disagreement of each system in various bins of effective temperature and mass. In this analysis, we do not remove any of the ill-behaved systems discussed in Section~\ref{sec:ill-behaved}, since it is not usually feasible to separate these possible merger remnants when looking at an isolated WD or WD+MS system in the field.
\begin{figure}
\centering
\includegraphics[width=0.975\columnwidth]{Mass_Agreement.pdf}
\caption{The absolute value of the total age disagreement and the resulting total age error inflation factors as a function of WD mass. Each point represents the median of the absolute value of the total age disagreement of a bin centered at the point. The total age error inflation factors are calculated by dividing this median by the expectation for a folded Gaussian distribution. The black points represent the full sample of 423 systems that survive the cuts described in Section~\ref{sec:results}. The grey background points represent the median age disagreement for a sub-sample of systems where both WDs have masses contained in each bin. Each bin contains 20\% of the sample, which is roughly 170 individual WDs per bin for the black points (the grey points are around $3-5$ times fewer). The red line is a fit to the black points that is described in Section~\ref{sec:dependence} to define an inflation factor; a coarse mapping for this fit can be found in Table~\ref{tab:inflation}.}
\label{fig:agreement}
\end{figure}
We find that the age disagreement is effectively constant as a function of temperature, but shows a strong trend as a function of mass. This can be seen in Figure~\ref{fig:agreement}. The highest masses ($>0.8\,M_{\odot}$) have a median $|\sigma_{1-2}|$ disagreement value of $\approx{1.5}$, while the lowest mass bin ($<0.67\,M_{\odot}$) has a median value of $\approx{0.6}$. The uncertainties in the medians shown are found through a bootstrapping method with replacement. To verify that this is the true underlying trend, we calculate the median age disagreement for a sub-sample of systems where the masses of both WDs fall in the same bin. This removes cross contamination from systems that have WDs with very different masses. The grey points in Figure~\ref{fig:agreement} show the trend for this sub-sample, and are consistent with the points for the whole sample.
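The bootstrap used for the uncertainties on these medians can be sketched as:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_median(x, n_boot=10000):
    # 1-sigma uncertainty on the median from resampling with replacement
    x = np.asarray(x)
    meds = [np.median(rng.choice(x, size=x.size, replace=True))
            for _ in range(n_boot)]
    return np.median(x), np.std(meds)
\end{verbatim}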
This trend in mass can be caused by larger systematic errors in the total ages of massive WDs causing larger relative age differences in binaries containing them, or by an underestimation of the total age uncertainties of high mass WDs. We find that the relative age difference is essentially constant ($\approx{25\%}$) as a function of WD mass. This leads us to conclude that the uncertainties on the total ages of the high mass WDs in the sample are underestimated.
To determine the amount of inflation of the total age errors needed, we take the median age disagreement and divide by the expected median value of a folded Gaussian distribution with a standard deviation of one ($\approx 0.674$). These values can be seen as a second y-axis in Figure~\ref{fig:agreement}. Next, we fit these points with a sigmoid function, defined as,
\begin{equation}
f_{\mathrm{inflate}}\left(M\right) = \frac{a}{1+\mathrm{e}^{-c \left(M - b\right)}}
\end{equation}
where $M$ is the mass of the WD and $a$, $b$, and $c$ are all constants. We fit this function to the inflation factor points shown in Figure~\ref{fig:agreement}. The best fit parameters are $a=2.2$, $b=0.68$, and $c=22$. The resultant fit can be seen in Figure~\ref{fig:agreement} as the red dashed line, and select points are given in Table~\ref{tab:inflation}. The total age error inflation factor for the lowest mass bin is 0.85, but its uncertainty makes it consistent with 1.0, and we do not recommend deflating any total age errors. Thus, we recommend using the fit function only for WDs with masses greater than $0.67 M_\odot$.
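In practice the recommended correction can be applied as follows, clamping at 1.0 so that reported errors are never deflated:
\begin{verbatim}
import numpy as np

def f_inflate(m_wd):
    # best-fit sigmoid with a = 2.2, b = 0.68, c = 22; recommended
    # only for WD masses above 0.67 Msun
    a, b, c = 2.2, 0.68, 22.0
    f = a / (1.0 + np.exp(-c * (np.asarray(m_wd) - b)))
    return np.maximum(f, 1.0)

print(f_inflate([0.67, 0.75, 0.90]))   # ~[1.0, 1.8, 2.2]
\end{verbatim}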
\begin{deluxetable}{cc|cc}
\label{tab:inflation}
\tablecaption{Total Age Error Inflation Factors}
\tablewidth{700pt}
\tabletypesize{\scriptsize}
\tablehead{\colhead{WD Mass $\left[M_\odot\right]$} & \colhead{$f_{\mathrm{inflate}}$} & \colhead{WD Mass $\left[M_\odot\right]$} & \colhead{$f_{\mathrm{inflate}}$}}
\startdata
0.67 & 1.00 & 0.74 & 1.74\\
0.68 & 1.10 & 0.75 & 1.81\\
0.69 & 1.20 & 0.76 & 1.88\\
0.70 & 1.34 & 0.77 & 1.90\\
0.71 & 1.45 & 0.78 & 2.00\\
0.72 & 1.55 & 0.80 & 2.05\\
0.73 & 1.65 & $>$0.90 & 2.20\\
\enddata
\end{deluxetable}
To further understand where WD ages are the most reliable, we split the parameter space into 25 bins in temperature and mass. For each bin, we check the median total age, absolute age difference, and total age error inflation factor needed. The results of this are shown in Figure~\ref{fig:2D_agreement}. The most problematic areas of parameter space are the higher masses, which is what is seen in Figure~\ref{fig:agreement}, and especially at the hotter temperatures where the cooling age contribution to the total age is minimized. These hot, high-mass regions of parameter space are where we would expect the merger remnants to be at their highest contamination rate. This is also where the Q-branch is located, which was first seen in the HR diagram in \emph{Gaia} DR2 \citep{2018A&A...616A..10G,2019ApJ...886..100C}, as well as the onset of crystallization \citep{2019Natur.565..202T}. The cooling models we use do include crystallization, but not other exotic cooling effects, such as sedimentation of neutron-rich ions (e.g., \citealt{2001ApJ...549L.219B,2020ApJ...902...93B,2021ApJ...911L...5B}).
\begin{figure*}
\centering
\includegraphics[width=1.99\columnwidth]{2D_agreement.pdf}
\caption{The inflation factor and median total age and age difference for 25 bins in mass and temperature of roughly similar size. Each box is colored by the inflation factor and the white box shows the median total age (Gyr, upper left), median total age difference (Gyr, upper right), the total age error inflation factor (lower left), and the number of WDs in the bin (lower right). The upper right bin also labels the quantity shown at each position of the white boxes. For example, the 50 WDs in the bottom left bin ($4000-6350$\,K, $0.63-0.67$\,$M_{\odot}$) have a median total age of 4.6 Gyr, their ages differ from their companions by 1.4 Gyr on average, and their total age uncertainties need to be inflated by a factor of 1.2 for their ages to come into 1$\sigma$ agreement.}
\label{fig:2D_agreement}
\end{figure*}
The typical percent age difference across all of the bins is around $25\%$, ranging from 60\% in the worst case ($T_{\mathrm{eff}} > 12000$ K, $0.8$ $M_{\odot}$ $ < M < 0.9$ $M_{\odot}$) to 15\% in the best case ($7550$ K $< T_{\mathrm{eff}} < 9100$ K, $0.73$ $M_{\odot}$ $< M < 0.8$ $M_{\odot}$). The WDs with the hottest temperatures show larger percent age differences, on the order of 35\%-60\% (0.2-0.5 Gyr), compared to the rest of parameter space. Thus, a 25\% total age uncertainty is a representative approximation for an individual WD with mass $>0.63$\,$M_\odot$ and temperature cooler than 12,000~K. This median WD age precision compares well to similar results from an independent analysis of many of the same objects \citep{2021ApJS..253...58Q}. It should be noted that with additional information through spectroscopy we can achieve better age precision \citep{2022ApJ...929...26M}.
\section{Conclusions}
In this work, we constructed a sample of 1565 wide WD+WD binaries as well as 24 mostly new wide triple WD+WD+X systems to test the accuracy and precision of WD total age determinations when no spectral type information is known. We measured effective temperatures, surface gravities, and masses for all WDs in our sample by using hydrogen-dominated atmosphere models and fitting photometry from a variety of all-sky surveys. Using these atmospheric parameters, we determined the total ages of each WD through the use of WD cooling models, a theoretically motivated and observationally calibrated initial-to-final-mass relation, and stellar evolution model grids.
Using a high-fidelity sample of wide WD+WD pairs with uncontaminated photometry described in Section~\ref{sec:results}, the main conclusions of this work can be summarized as follows:
\begin{itemize}
\item We find 24 mostly new widely separated triples with at least two WDs, and possibly the second ever resolved triple-WD system. We find 21 of the 24 triples have a hierarchical structure. The three triples not in a hierarchical structure will likely dissipate due to dynamical instability, but the ages of the WDs in these systems can provide useful lower limits on the dissipation timescale. We find that two of the non-hierarchical triple systems are $\approx{3}$ Gyr old, while the other is quite young at $\approx{90}$ Myr old.
\item We find that the total ages of the lowest mass WDs in the sample ($<0.63 M_\odot$) are too uncertain to be used in the age comparisons conducted in this work. The absolute age difference of systems containing at least one lower-mass WD quickly diverges towards high values of a few Gyr. A lower limit on the total age can still be determined for these WDs, but we do not attempt to test the absolute reliability of their total ages. This is unfortunate, since the mean mass of field WDs is roughly 0.63\,$M_{\odot}$\ \citep{2016MNRAS.461.2100T, 2016MNRAS.455.3413K, 2019MNRAS.486.2169K}.
\item When comparing the total ages of each component of the WD pairs, we find a significant fraction of systems ($21-43$\%) where the more massive WD has a shorter cooling age. We find that this trend holds even for a subsample of DA+DA systems, with 21-36\% having the more massive WD with a shorter cooling age. This is the opposite of what is expected for a monotonic and increasing initial-to-final-mass relation. We attribute this discrepancy to the presence of many merger remnants or unresolved triples in the sample, such that roughly $21-36$\% of our sample of currently wide WD+WD binaries started as bound triple systems. These WDs have large age disagreement values and can cause problems when determining ages for field WDs, but unresolved companions or a merger history is a problem for many techniques for determining stellar ages.
\item We find that the level of age disagreement strongly depends on the mass of the WD. Wide binaries containing at least one higher-mass WD ($>$0.8\,$M_{\odot}$) generally have a lower level of agreement compared to those with lower-mass WDs, reflecting the fact that the formal age uncertainties are often underestimated for higher-mass WDs. We fit this dependence to determine the amount of error inflation on the total ages needed to improve the agreement. We find that errors on the total ages of the highest-mass WDs ($>$0.9\,$M_{\odot}$) need to be inflated, up to a factor of 2.2, when assuming that the WD is a DA. Fortunately, higher-mass WDs also have more precisely determined ages (median total age uncertainties of $\approx{0.2}$ Gyr), so they remain very useful chronometers.
\item Looking at the 25 bins in mass and effective temperature space in Figure~\ref{fig:2D_agreement}, we find that total ages are roughly accurate at the 25\% level for WDs with masses $>$0.63\,$M_{\odot}$\ using only photometry. The WDs with the hottest temperatures have the largest inaccuracy, with uncertainties of 35\%-60\%, but these are relatively small absolute errors on the order of 0.3 Gyr.
\end{itemize}
Due to the large sample size, we assumed that all the WDs in our sample are DA. When a spectral type is known, a more accurate total age can be determined and the recommendations discussed above are not immediately applicable. A large sample of wide double WDs with known spectral types needs to be investigated to determine the accuracy of WD total ages when the spectral type is known.
Future work is planned to quantify how atmospheric parameters from spectroscopy can alleviate the systematic errors mentioned above. Spectroscopic observations do not suffer from the same problems as the photometric methods employed in this work, and should be more sensitive to the WD mass. A spectrum can also provide important clues as to whether a WD has a prior merger history or is part of an unresolved binary (for example, if it is strongly magnetic, or if it is radial-velocity variable). With our follow-up efforts using the LDT, we continue to increase the sample of wide WD+WD pairs with available spectra in hopes of gathering a large sample of wide double WDs with spectroscopically determined atmospheric parameters.
In general, we find that WD total ages agree very well in wide double WDs, but the uncertainties occasionally need to be inflated to compensate for systematics on the total ages, the presence of an unresolved companion, or merger history. Given the massive increase in the number of systems with a wide WD companion thanks to Gaia, we hope this result provides a first road map to providing more reliable stellar ages using these wide binaries.
\section{Acknowledgements}
We would like to acknowledge Jeff Andrews for helpful discussions. T.M.H., J.J.H., and C.W. acknowledge support from the National Science Foundation under Grant No. AST-1908119. J.v.S. acknowledges support from the National Science Foundation under Grant No. AST-1908723. We thank Phil Muirhead and Adam Samuels for support with the quick-look spectroscopic tool.
This work has made use of data from the European Space Agency (ESA) mission
{\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia}
Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement. These results made use of the Lowell Discovery Telescope (LDT) at Lowell Observatory. Lowell is a private, non-profit institution dedicated to astrophysical research and public appreciation of astronomy and operates the LDT in partnership with Boston University, the University of Maryland, the University of Toledo, Northern Arizona University and Yale University. The upgrade of the DeVeny optical spectrograph has been funded by a generous grant from John and Ginger Giovale and by a grant from the Mt. Cuba Astronomical Foundation.
\section{Introduction}
\label{sec_intro}
The notion of the index of a subgroup is a fundamental concept in group theory. It may be viewed as providing a way of measuring the size of a subgroup relative to its containing group. From this point of view, a subgroup of finite index may be thought of as differing from its parent by only a finite amount. This intuitive idea gains more significance when one considers the long list of properties that are known to be preserved when passing to finite index subgroups or extensions, which include: finite generation and presentability (and more generally property ${\rm F}_n$ for every $n\geq 1$), solubility of the word problem, automaticity, the homological finiteness property ${\rm FP}_n$,
residual finiteness, periodicity, and local-finiteness (see \cite{Brown1982,delaHarpe2000,epstein_wordproc,lyndon_cgt,Magnus1} for details of these classical results).
On the other hand, it is still an open question as to whether the property of being presented by a finite complete rewriting system is inherited by subgroups of finite index; see \cite{Pride2000}.
Important problems about finite index subgroups and extensions continue to receive a great deal of attention; see for example \cite{Behrstock1,Burillo1,Haglund1,Katsuya1,Nikolaev1,Nikolov2003,Nikolov1,Nikolov2007(1),Nikolov2007(2)}.
In semigroup theory various notions of index have arisen, in several different contexts. For example, the index of a subgroup of a semigroup was considered by Bergman in \cite{Bergman}, while a notion of index for cancellative semigroups arose in work of Grigorchuk \cite{Grigorchuk1} on growth of semigroups. While these are direct generalisations of group index, they are limited since they do not apply to semigroups in general.
For subsemigroups of semigroups in general, until recently, the most widely studied notion of index
has been the so-called Rees index. The \emph{Rees index} of a subsemigroup is defined simply as the cardinality of its complement. It was originally introduced by Jura in \cite{Jura1}, and since then, in
analogy with group index, many finiteness conditions have now been shown to be inherited when passing to finite Rees index substructures or extensions (see \cite{Campbell1996}, \cite{Ruskuc1} and \cite{hoffmann_autofinrees}).
However, Rees index is not a generalisation of group index.
In fact, it is obvious that an infinite group cannot have any proper subgroups of finite Rees index. So although Rees index results have the same look and feel as group index results, this is as far as the connection goes,
and in particular they cannot be applied to recover the corresponding group-theoretic results on which they are modelled.
This problem was the original motivation for the work in \cite{gray_green1} where a new notion of index was introduced, called \emph{Green index}.
The Green index of a subsemigroup $T$ of a semigroup $S$ is given by counting strong orbits (called $T$-relative $\mathscr{H}$-classes) in the complement $S \setminus T$ under the natural actions of $T$ on $S$ via right and left multiplication (see Section~2 for more details). In particular, when $S \setminus T$ is finite $T$ will have finite Green index in $S$, while if $S$ is a group and $T$ a subgroup then $T$ will have finite Green index in $S$ if and only if it has finite group index in $S$. In \cite{gray_green1} it was shown that several important finiteness conditions are preserved when taking finite Green index subsemigroups or extensions. Thus, Green index is both general enough to simultaneously subsume Rees index and group index, but also strong enough that a semigroup will share many interesting properties with its subsemigroups of finite Green index. In this paper we continue the investigation of Green index, and in particular extend the list of finiteness conditions that are known to be preserved under taking finite Green index subsemigroups and extensions.
Extending the classical ideas of Sch\"{u}tzenberger \cite{Schutzenberger1957,Schutzenberger1958},
with each $T$-relative $\mathscr{H}$-class $H$ we associate a group $\Gamma(H)$, called its ($T$-\emph{relative}) \emph{Sch\"{u}tzenberger group}, obtained by taking the setwise stabiliser of the action of $T$ on $H$ by right multiplication and making it faithful (see Section~2 for full details). Our results then relate properties of $S$, $T$ and the family of (relative) Sch\"{u}tzenberger groups $\Gamma(H)$.
The article is laid out as follows. After the preliminaries in Section \ref{sec_prelims}, in Section \ref{sec_rewriting} we prove a fundamental lemma (the Rewriting Lemma) which underpins many of the results appearing later in the paper.
This rewriting technique is utilised in Section \ref{secsubsemigroups} to
obtain a generating set for $T$ from a generating set for $S$.
In Section \ref{secschgps} we obtain generating sets for the relative
Sch\"{u}tzenberger groups from a generating set for $S$.
In the case of finite Green index, finite generation is preserved in both these situations.
In Section \ref{sec_presentations} we give a presentation for $S$ in terms of given presentations for $T$ and each of the Sch\"{u}tzenberger groups. Again, when the Green index is finite, finite presentability is preserved.
Whether finite presentability is preserved in the other direction, i.e. from $S$ to $T$ and the Sch\"{u}tzenberger groups, remains an open problem, but in Section \ref{secmalcev}
we show that this is the case for finite Malcev (group-embeddable) presentations (in the sense of \cite{c_survey}).
In the remaining sections we consider a range of other properties related to generators in one way or another:
the word problem (Section \ref{secwp}), type of growth (Section \ref{secgrowth}),
and automaticity (Section \ref{secautomatic}) in the sense of \cite{campbell_autsg,hoffmann_relatives}.
These results provide common generalisations of the corresponding classical results from group theory, and Rees index results from semigroup theory.
\section{Preliminaries}
\label{sec_prelims}
Let $S$ be a semigroup and let $T$ be a subsemigroup of $S$.
We use $S^1$ to denote the semigroup $S$ with an identity element $1 \not\in S$ adjoined to it. This notation will be extended to subsets of $S$, i.e. $X^1 = X \cup \{ 1 \}$. For $u,v \in S$ define:
\[
u \mathscr{R}^T v \ \Leftrightarrow \ uT^1 = vT^1, \quad u \mathscr{L}^T v \ \Leftrightarrow \ T^1u = T^1v,
\]
and $\mathscr{H}^T = \mathscr{R}^T \cap \mathscr{L}^T$. Each of these relations is an equivalence relation on $S$; their equivalence classes are called the
($T$-)\emph{relative} $\mathscr{R}$-, $\mathscr{L}$-, and $\mathscr{H}$-classes, respectively.
Furthermore, these relations respect $T$, in the sense that each
$\mathscr{R}^T$-, $\mathscr{L}^T$-, and $\mathscr{H}^T$-class lies either wholly
in $T$ or wholly in $S \setminus T$. Following \cite{gray_green1} we define the \defterm{Green index} of $T$
in $S$ to be one more than the number of relative $\mathscr{H}$-classes in $S \setminus T$. Relative Green's relations were introduced by Wallace in \cite{Wallace} generalising the the fundamental work of Green \cite{Green1951}. For more on the classical theory of Green's relations, and other basic concepts from semigroup theory, we refer the reader to \cite{howie_fundamentals}.
Throughout this paper $S$ will be a semigroup, $T$
will be a subsemigroup of $S$, and Green's relations in $S$ will
always be taken relative to $T$, unless otherwise stated. In other words, we shall write $x \mathscr{R} y$ to mean that $xT^1 = yT^1$ rather than $xS^1 = yS^1$. On the few occasions that we need to refer to Green's $\mathscr{R}$ relation in $S$ we will write $\mathscr{R}^S$. The same goes for the relations $\mathscr{L}$ and $\mathscr{H}$.
The following result summarises some basic facts about relative Green's relations (see \cite{Wallace,gray_green1} for details).
\begin{proposition}\label{basicproperties}
Let $S$ be a semigroup and let $T$ be a subsemigroup of $S$.
\begin{enumerate}[(i)]
\item
\label{prop_LandRcong}
The relative Green's relation $\mathscr{R}$ is a left congruence on $S$, and $\mathscr{L}$ is a right congruence.
\item
\label{prop_{Rel_Greens_Lemma}}
Let $u,v \in S$ with $u \mathscr{R} v$, and let $p, q \in T$ such that $up = v$ and $vq = u$. Then the mapping $\rho_p$ given by $x \mapsto xp$ is an $\mathscr{R}$-class preserving bijection from $L_u$ to $L_v$, the mapping $\rho_q$ given by $x \mapsto xq$ is an $\mathscr{R}$-class preserving bijection from $L_v$ to $L_u$, and $\rho_p$ and $\rho_q$ are mutually inverse.
\end{enumerate}
\end{proposition}
\begin{sloppypar}
With each relative $\mathscr{H}$-class we may associate a group, which we call the \emph{Sch\"{u}tzenberger group} of the $\mathscr{H}$-class.
This is done by extending, in the obvious way, the classical definition (introduced by Sch\"{u}tzenberger in \cite{Schutzenberger1957,Schutzenberger1958}) to the relative case.
For each $T$-relative $\mathscr{H}$-class $H$ let
$\mathrm{Stab}(H) = \{ t \in T^1 : Ht = H \}$
(the \emph{stabiliser} of $H$ in $T$), and
define an equivalence $\gamma=\gamma(H)$ on $\mathrm{Stab}(H)$ by $(x,y) \in \gamma$ if and only if $hx = hy$
for all $h\in H$. Then $\gamma$ is a congruence on $\mathrm{Stab}(H)$ and $\mathrm{Stab}(H) / \gamma$ is a group.
The group $\Gamma(H)=\mathrm{Stab}(H) / \gamma$ is called the relative \emph{Sch\"{u}tzenberger group} of $H$.
The following basic observations about relative Sch\"{u}tzenberger groups will be needed (see \cite{Wallace,gray_green1} for details).
\end{sloppypar}
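In the same finite setting, $\mathrm{Stab}(H)$, $\gamma$ and $\Gamma(H)$ can be computed directly from the definitions above; the sketch below (an illustration only) realises $\Gamma(H)$ as the set of distinct maps that stabiliser elements induce on $H$.
\begin{verbatim}
# H is a list of elements of a relative H-class; None plays the
# role of the adjoined identity 1 in T^1.
def schutzenberger_group(mul, H, T):
    Hset = set(H)
    act = lambda h, t: h if t is None else mul[h][t]
    stab = [t for t in [None] + list(T)
            if all(act(h, t) in Hset for h in H)]    # Stab(H)
    # Two stabiliser elements are gamma-equivalent iff they act
    # identically on H; the distinct actions form Gamma(H).
    return {tuple(act(h, t) for h in H) for t in stab}
\end{verbatim}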
\begin{proposition}
\label{schutzstabproperties}
Let $S$ be a semigroup, let $T$ be a subsemigroup of $S$, let $H$ be a relative
$\mathscr{H}$-class of $S$, and let
$h \in H$ be an arbitrary element. Then:
\begin{enumerate}[(i)]
\item
\label{item-sch1}
$\mathrm{Stab}(H) = \{ t \in T^1 : ht \in H \}$.
\item
\label{item-sch2}
$\gamma(H) = \{ (u,v) \in \mathrm{Stab}(H) \times \mathrm{Stab}(H) : hu = hv \}$.
\item
\label{item-sch3}
$H = h \mathrm{Stab}(H)$.
\item
\label{item-sch4}
If $H'$ is an $\mathscr{H}$-class belonging to the same $\mathscr{L}$-class of $S$ as $H$ then $\mathrm{Stab}(H) = \mathrm{Stab}(H')$ and
$\Gamma(H) = \Gamma(H')$.
\item
\label{item-sch6}
If $H'$ is an $\mathscr{H}$-class of $S$ belonging to the same $\mathscr{R}$-class as $H$ then $\Gamma(H') \cong \Gamma(H)$.
\end{enumerate}
\end{proposition}
\section{The Rewriting Lemma}
\label{sec_rewriting}
The aim of this section is to prove a rewriting lemma which arises naturally from the theory of relative Green's relations, and which will be a vital tool for the proofs of many of the results about finiteness conditions that follow.
Throughout this section $S$ will be a semigroup and $T$ will be a subsemigroup of $S$.
We let $\{ H_i : i \in I \}$ be the set of relative $\mathscr{H}$-classes in $S \setminus T$, with a fixed set of representatives $h_i \in H_i \ (i \in I)$, and relative Sch\"{u}tzenberger groups $\Gamma_i = \Gamma(H_i) = \mathrm{Stab}_T (H_i) / \gamma_i$. Set $I^1 = I \cup \{ 1 \}$ where we assume $1 \not\in I$. We introduce the convention $H_1 = \{ 1 \}$ and $h_1 = 1$ where $1$ is the external identity adjoined to $S$.
Next we introduce two mappings
\[
\rho: S^1 \times I^1 \rightarrow I^1, \quad \quad \lambda: I^1 \times S^1 \rightarrow I^1
\]
which reflect the way that the elements of $S^1$ act on the representatives $h_i$:
\begin{eqnarray}
\rho(s,i) & = & \begin{cases}
j & \mbox{if $sh_i \in H_j$} \\
1 & \mbox{if $sh_i \in T$},
\end{cases} \label{(1.4)}
\end{eqnarray}
and
\begin{eqnarray}
\lambda(i,s) & = & \begin{cases}
j & \mbox{if $h_i s \in H_j$} \\
1 & \mbox{if $h_i s \in T$}.
\end{cases} \label{(1.5)}
\end{eqnarray}
The following lemma introduces related elements $\sigma(s,i)$ and $\tau(i,s)$ which `connect'
$s h_i$ and $h_i s$ to their respective $\mathscr{H}$-class representatives.
\begin{lemma}\label{lem_sigmaandtau}
For all $i \in I^1$ and $s \in S^1$ there exist $\sigma(s,i), \tau(i,s) \in T^1$ satisfying:
\begin{eqnarray}
s h_i & = & h_{\rho(s,i)} \sigma(s,i), \label{(1.2)}
\end{eqnarray}
and
\begin{eqnarray}
h_i s & = & \tau(i,s) h_{\lambda(i,s)}. \label{(1.3)}
\end{eqnarray}
\end{lemma}
\begin{proof}
If $\rho(s,i) \neq 1$ we have $sh_i \mathscr{H} h_{\rho(s,i)}$ and so there exists $\sigma(s,i) \in T^1$ satisfying
\[
sh_i = h_{\rho (s,i)} \sigma(s,i).
\]
Otherwise $\rho (s,i) = 1$, and setting $\sigma(s,i) = sh_i \in T^1$ equality \eqref{(1.2)} holds trivially.
The existence of $\tau(i,s)$ is proved dually.
\end{proof}
The following lemma describes the effect of pushing an $\mathscr{H}$-class representative through a product of elements of $S$ from left to right.
\begin{lemma}[Rewriting lemma] \label{lemma_bookkeeping}
Let $i \in I^1$ and let $s_1, s_2, \ldots, s_n \in S$. Then
\begin{equation}
h_i s_1 s_2 \ldots s_n = t_1 t_2 \ldots t_n h_j \label{(5)}
\end{equation}
where $t_1, \ldots, t_n \in T^1$ and $j \in I^1$ are obtained as a result of the following recursion:
\begin{alignat}{3}
&i_1 && = i&\quad& \label{(6)} \\
&i_{k+1} && = \lambda (i_k, s_k) && (k = 1, \ldots, n), \label{(7)} \\
&j && = i_{n+1} &&\label{(8)} \\
&t_k && = \tau(i_k, s_k) && (k = 1, \ldots, n). \label{(9)}
\end{alignat}
Furthermore:
\begin{enumerate}[(i)]
\item If all $s_q \in T$ and $h_i s_1 s_2 \ldots s_n \not\in T$ then
$
h_i s_1 s_2 \ldots s_n \mathscr{L} h_j.
$
\item If all $s_q \in T$ and $h_i s_1 s_2 \ldots s_n \in T$ then $j=1$ and so $h_j = h_1 =1$.
\item If all $s_q \in T$ and $h_i s_1 s_2 \ldots s_n \mathscr{R} h_i$ then
$
h_i s_1 s_2 \ldots s_n \mathscr{H} h_j.
$
\end{enumerate}
\end{lemma}
\begin{lem_dual}[Dual rewriting lemma] \label{lemma_bookkeeping'}
\it
Let $i \in I^1$ and let $s_1, s_2, \ldots, s_n \in S$. Then
\begin{equation}
s_1 s_2 \ldots s_n h_i = h_j t_1 t_2 \ldots t_n \label{(5')}
\end{equation}
where $t_1, t_2, \ldots, t_n \in T^1$ and $j \in I^1$ are obtained as a result of the following recursion:
\begin{eqnarray}
i_n & = & i \label{(6')} \\
i_{k-1} & = & \rho (s_k, i_k) \quad (k = n, \ldots, 1), \label{(7')} \\
j & = & i_{0} \label{(8')} \\
t_k & = & \sigma(s_k, i_k) \quad (k = n, \ldots, 1). \label{(9')}
\end{eqnarray}
Furthermore:
\begin{enumerate}[(i)]
\item If all $s_q \in T$ and $s_1 s_2 \ldots s_n h_i \not\in T$ then
$
s_1 s_2 \ldots s_n h_i \mathscr{R} h_j.
$
\item If all $s_q \in T$ and $s_1 s_2 \ldots s_n h_i \in T$ then $j=1$ and so $h_j = h_1 =1$.
\item If all $s_q \in T$ and $s_1 s_2 \ldots s_n h_i \mathscr{L} h_i$ then
$
s_1 s_2 \ldots s_n h_i \mathscr{H} h_j.
$
\end{enumerate}
\end{lem_dual}
\begin{proof}
We just prove Lemma~\ref{lemma_bookkeeping}. Lemma~\ref{lemma_bookkeeping}' may be proved using a dual argument.
For the first part we proceed by induction on $n$. The result holds trivially when $n=0$. Supposing that the result holds for $n$, the inductive step is as follows:
\[
\begin{array}{rcll}
h_i s_1 s_2 \ldots s_n s_{n+1} & = &
t_1 \ldots t_n h_{i_{n+1}} s_{n+1} & \quad \mbox{(by induction)} \\
& = &
t_1 \ldots t_n \tau(i_{n+1}, s_{n+1}) h_{\lambda(i_{n+1}, s_{n+1})} & \quad \mbox{(by \eqref{(1.3)})} \\
& = &
t_1 \ldots t_n t_{n+1} h_{i_{n+2}}.
\end{array}
\]
(i) We prove the result by induction on $n$. When $n=0$ there is nothing to prove. Now suppose that the result holds for $n-1$.
Because $s_n \in T$ and $h_i s_1 \ldots s_n \not\in T$ it follows that $h_i s_1 \ldots s_{n-1} \not\in T$ so we may apply induction to obtain:
\[
h_i s_1 s_2 \ldots s_{n-1} \mathscr{L} h_{i_n}.
\]
This implies
\[
h_i s_1 \ldots s_{n-1} s_n \mathscr{L} h_{i_n} s_n \mathscr{H} h_{\lambda (i_n,s_n)} = h_{i_{n+1}}
\]
by \eqref{(1.5)} and \eqref{(7)}.
(ii) If $i=1$ then from \eqref{(1.5)}, \eqref{(6)}, \eqref{(7)} and \eqref{(8)} it follows that $1 = i_1 = i_2 = \ldots = i_{n+1} = j$.
Otherwise, since $h_i s_1 \ldots s_n \in T$ there exists $0 \leq k \leq n-1$ such that
\[
h_i s_1 \ldots s_k \not\in T \quad \& \quad h_i s_1 \ldots s_k s_{k+1} \in T.
\]
By (i) applied to $h_i s_1 \ldots s_k$ we obtain
\[
h_{i_{k+1}} \mathscr{L} h_i s_1 \ldots s_k
\]
which implies
\[
h_{i_{k+1}} s_{k+1} \mathscr{L} h_i s_1 \ldots s_k s_{k+1}
\]
and so, $h_{i_{k+1}} s_{k+1} \in T$. Hence by \eqref{(1.5)} it follows that $i_{k+2} = \lambda(i_{k+1}, s_{k+1}) = 1$.
Then as above \eqref{(7)} gives $1 = i_{k+2} = i_{k+3} = \ldots = i_{n+1} = j$.
(iii) Again we proceed by induction on $n$. There is nothing to prove when $n=0$. Suppose that the result holds for $n-1$.
Since $h_i s_1 \ldots s_n \mathscr{R} h_i$ there exists $t \in T$ such that $h_i s_1 \ldots s_n t = h_i$. But since $s_n \in T$
and $s_1 \ldots s_{n-1} \in T$
it follows that $h_i s_1 \ldots s_{n-1} \mathscr{R} h_i$ and so we may apply induction. This gives
\[
h_i s_1 \ldots s_{n-1} \mathscr{H} h_{i_n}.
\]
Since $h_i s_1 \ldots s_{n-1} \mathscr{R} h_i s_1 \ldots s_{n-1} s_n$, by Proposition~\ref{basicproperties}\ref{prop_{Rel_Greens_Lemma}} the mapping $x \mapsto xs_n$ sends the $\mathscr{H}$-class of $h_i s_1 \ldots s_{n-1}$ bijectively onto the $\mathscr{H}$-class of $h_i s_1 \ldots s_{n-1} s_n$. In particular
\[
h_{i_n} s_n \mathscr{H} h_i s_1 \ldots s_{n-1} s_n.
\]
On the other hand, $h_{i_n} s_n \in H_{\lambda(i_n,s_n)} = H_{i_{n+1}}$ by \eqref{(1.5)} and \eqref{(7)}, and so
\[
h_{i_{n+1}} \mathscr{H} h_{i_n} s_n \mathscr{H} h_i s_1 \ldots s_n,
\]
as required.
\end{proof}
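The recursion \eqref{(6)}--\eqref{(9)} is entirely mechanical; the following sketch makes it explicit, under the illustrative assumption that the maps $\lambda$ and $\tau$ of Lemma~\ref{lem_sigmaandtau} have been tabulated as dictionaries.
\begin{verbatim}
# lam[(i, s)] = lambda(i, s) and tau[(i, s)] = tau(i, s).
def rewrite_left_to_right(i, word, lam, tau):
    """Return (t_1, ..., t_n) and j with
       h_i s_1 ... s_n = t_1 ... t_n h_j."""
    ts, ik = [], i                 # i_1 = i
    for s in word:
        ts.append(tau[(ik, s)])    # t_k = tau(i_k, s_k)
        ik = lam[(ik, s)]          # i_{k+1} = lambda(i_k, s_k)
    return ts, ik                  # j = i_{n+1}
\end{verbatim}
The dual rewriting of Lemma~\ref{lemma_bookkeeping}' is the same loop run over the word in reverse, using $\rho$ and $\sigma$.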
\section{Generators for Subsemigroups}
\label{secsubsemigroups}
Let $S$ be a semigroup, $T$ be a subsemigroup of $S$ and $\{ H_i : i \in I \}$ the set of relative $\mathscr{H}$-classes in $S \setminus T$. In this section we show how to relate generating sets for $S$, $T$ and the relative Sch\"{u}tzenberger groups $\Gamma(H_i)$. Throughout the section we use the same notation and conventions introduced in Section~\ref{sec_rewriting}.
If $B$ is a generating set for $T$ and $C$ is a subset of $S$ satisfying $S^1 = C^1 T^1$ then obviously $B \cup C$ generates $S$. In particular we have the following easy result.
\begin{theorem}\label{thm_smalltobig}
Let $S$ be a semigroup and let $T$ be a subsemigroup of $S$. If $B$ is a generating set for $T$ and $C = \{ h_i : i \in I \}$ is a set of representatives of the relative $\mathscr{H}$-classes of $S \setminus T$, then $B \cup C$ is a generating set for $S$. In particular, if $T$ is finitely generated and has finite Green index in $S$ then $S$ is finitely generated.
\end{theorem}
\begin{remark}
Of course, in the above theorem we can replace $C$ by a transversal of just the relative $\mathscr{R}$-classes (or $\mathscr{L}$-classes) in $S \setminus T$, and $B \cup C$ will still generate $S$.
\end{remark}
Now we go on to consider the more interesting converse statement. We begin by fixing a particular choice of $\sigma$ and $\tau$ from Lemma~\ref{lem_sigmaandtau}.
The following result provides a common generalisation of the classical result of Schreier for groups (see \cite[Chapter II]{lyndon_cgt} for example) and the analogous theorem for subsemigroups of finite Rees index due to Jura \cite{Jura1}.
\begin{theorem} \label{thm_finitegeneration}
Let $S$ be a semigroup generated by $A \subseteq S$, let $T$ be a subsemigroup of $S$, and let $I$, $\sigma$, $\tau$ be as above. Then $T$ is generated by the set
\[
B = \{
\tau(i, \sigma(a,j)) : i,j \in I^1, \; a \in A
\}.
\]
In particular, if $S$ is finitely generated and $T$ has finite Green index in $S$, then $T$ is finitely generated.
\end{theorem}
\begin{proof}
Let $t \in T$ and write $t = a_1 a_2 \ldots a_n$, a product of generators from $A$. Applying Lemma~\ref{lemma_bookkeeping}' gives
\[
t = h_{i_0} \sigma(a_1, i_1) \sigma(a_2, i_2) \ldots \sigma(a_n,i_n)
\]
where
\[
i_n = 1, \quad i_{k-1} = \rho(a_k, i_k), \quad k=n,n-1, \ldots, 1.
\]
This rewriting may be viewed as pushing the representative $h_1 = 1$ through the product from right to left using Lemma~\ref{lemma_bookkeeping}'. Note that $i_0$ is not necessarily equal to $1$ here, but if it were then we would be done since $\sigma(a_k,i_k) = \tau(1, \sigma(a_k, i_k)) \in B$. Applying Lemma~\ref{lemma_bookkeeping} we now perform an analogous rewriting pushing the representative $h_{i_0} = h_{j_1}$ back through the product from left to right giving
\[
\begin{array}{rcll}
& & h_{j_1} \sigma(a_1, i_1) \sigma(a_2, i_2) \ldots \sigma(a_n, i_n) & \\
& = &
\tau(j_1, \sigma(a_1, i_1)) \tau(j_2, \sigma(a_2, i_2)) \ldots \tau(j_n, \sigma(a_n, i_n)) h_{j_{n+1}},
\end{array}
\]
where
\[
j_1 = i_0, \quad j_{k+1} = \lambda(j_k, \sigma (a_k, i_k)), \quad k = 1,2, \ldots, n.
\]
Now by Lemma~\ref{lemma_bookkeeping}(ii) since each $\sigma(a_k, i_k) \in T$ and
\[
h_{j_1} \sigma(a_1, i_1) \sigma(a_2, i_2) \ldots \sigma(a_n, i_n) \in T
\]
it follows that $j_{n+1} = 1$ and therefore
\[
t = \tau(j_1, \sigma(a_1, i_1)) \tau(j_2, \sigma(a_2, i_2)) \ldots \tau(j_n, \sigma(a_n, i_n)) \in \langle B \rangle. \]
The last statement in the theorem follows since if $A$ and $I$ are both finite then $B$ is finite.
\end{proof}
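The proof amounts to two passes of the rewriting of Section~\ref{sec_rewriting}. A sketch follows, reusing \texttt{rewrite\_left\_to\_right} from the earlier sketch and assuming that $\tau$ is also tabulated on the elements $\sigma(a,i)$.
\begin{verbatim}
def rewrite_right_to_left(word, i, rho, sig):
    """Return j and (t_1, ..., t_n) with
       s_1 ... s_n h_i = h_j t_1 ... t_n."""
    ts, ik = [], i                 # i_n = i
    for s in reversed(word):
        ts.append(sig[(s, ik)])    # t_k = sigma(s_k, i_k)
        ik = rho[(s, ik)]          # i_{k-1} = rho(s_k, i_k)
    ts.reverse()
    return ik, ts                  # j = i_0

def factor_in_T(word, rho, sig, lam, tau):
    """Factor t = a_1 ... a_n (an element of T) over the
       generators tau(j, sigma(a, i)) of the theorem above."""
    i0, sigmas = rewrite_right_to_left(word, 1, rho, sig)  # h_1 = 1
    ts, j = rewrite_left_to_right(i0, sigmas, lam, tau)
    assert j == 1    # guaranteed by Lemma (ii) since t lies in T
    return ts
\end{verbatim}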
One natural question we might ask at this point is whether Theorem~\ref{thm_finitegeneration} might be
proved under
the weaker assumption that $S \setminus T$ is a union of finitely many $\mathscr{R}$-classes (or dually $\mathscr{L}$-classes).
Such a weakening is possible, for example, in the case of
groups (and more generally inverse semigroups) where for the complement
the properties of having
finitely many relative $\mathscr{R}$-, $\mathscr{L}$- or $\mathscr{H}$-classes are all equivalent conditions.
The following example (and its dual) shows that for arbitrary semigroups
such a weakening of the hypotheses is not possible.
\begin{example}
Let $S$ be the semigroup, with a zero element $0$ and an identity $1$, defined by the following presentation:
\[
\langle a,b, b^{-1}, c \ | \ a^2 = c^2 = 0, \ ba = b^{-1}a = ca = cb = cb^{-1} = 0, \ bb^{-1} = b^{-1}b = 1 \rangle.
\]
It is easily seen that a set of normal forms for the elements of $S$ is:
\[
N = \{0 \} \cup \{ a^i b^j c^k : i,k \in \{0,1\}, j \in \mathbb{Z} \}.
\]
From this it follows that this semigroup is isomorphic to the semigroup of triples $S = (\mathbb{Z}_2 \times \mathbb{Z} \times \mathbb{Z}_2) \cup \{ 0 \} $ with multiplication:
\[
(u,v,w)(d,e,f) = \begin{cases}
(u,v+e,f) & \mbox{if} \ w=d=0 \\
0 & \mbox{otherwise}.
\end{cases}
\]
Clearly $S$ is generated by $A = \{ (1,0,0), (0,1,0), (0,0,1), (0,-1,0) \}$. Now define:
\[
T = \{ (x,y,z) \in S : z \geq x \} \cup \{ 0 \},
\]
where $\{ 0 ,1 \}$ is ordered in the usual way $0 < 1$. So $T$ contains all triples except those of the form $(1,i,0)$. Let $(x_1,y_1,z_1), (x_2,y_2,z_2) \in T$ be arbitrary. Then
\[
(x_1,y_1,z_1)(x_2,y_2,z_2) =
\begin{cases}
(x_1,y_1 + y_2,z_2) & \mbox{if $z_1 = x_2 = 0$} \\
0 & \mbox{otherwise},
\end{cases}
\]
and in the first of these two cases $(x_1,y_1 + y_2,z_2) \in T$ since $z_2 \geq x_1 = 0$. It follows that $T$ is a subsemigroup of $S$.
Now $S \setminus T$ has a single relative $\mathscr{R}$-class since $S \setminus T = \{ (1,i,0): i \in \mathbb{Z} \}$ and
\[
(1,i,0)(0,j-i,0) = (1,j,0).
\]
On the other hand, $T$ is not finitely generated since the elements in the set $\{ (1,j,1) : j \in \mathbb{Z} \}$ cannot be properly decomposed in $T$, as:
\[
(x_1,y_1,z_1)(x_2,y_2,z_2) = (1,j,1)
\]
(where $(x_i,y_i,z_i) \neq (1,j,1)$) implies that $x_1=1$, $z_2=1$ and $z_1 = x_2 = 0$. But then $(x_1,y_1,z_1) = (1,y_1,0) \not\in T$ which is a contradiction.
In conclusion, $S$ is finitely generated, $S \setminus T$ has finitely many relative $\mathscr{R}$-classes, but $T$ is not finitely generated.
\end{example}
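The closure claims in this example are easily checked mechanically on a bounded window of the (infinite) middle coordinate; the following is a quick illustrative check, not part of the proof.
\begin{verbatim}
def mult(p, q):                     # the multiplication of S
    if p == 0 or q == 0:
        return 0
    (u, v, w), (d, e, f) = p, q
    return (u, v + e, f) if w == 0 and d == 0 else 0

window = [(u, v, w) for u in (0, 1) for v in range(-3, 4)
          for w in (0, 1)]
T = [0] + [s for s in window if s[2] >= s[0]]

for x in T:                         # T is closed under products
    for y in T:
        z = mult(x, y)
        assert z == 0 or z[2] >= z[0]

# S \ T = {(1, i, 0)} forms a single relative R-class:
assert mult((1, 0, 0), (0, 2, 0)) == (1, 2, 0)
\end{verbatim}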
Before giving the next example we introduce a construction which will be used
several times throughout the paper.
It is a special case of the well known \emph{strong semilattice of semigroups},
where the underlying semilattice is just a $2$-element chain; see \cite[Chapter~4]{howie_fundamentals} for details of the general construction.
\begin{definition}
\label{def_strong_semilattice}
Let $T$ and $U$ be semigroups and let $\phi: T \rightarrow U$ be a homomorphism. From this triple we construct a semigroup $S = \mathcal{S}(T,U,\phi)$ where $S = T \ensuremath{\mathaccent\cdot\cup} U$ and multiplication is defined in the following way. Given $x,y \in S$, if $x,y \in T$ then we multiply as in $T$; if $x,y \in U$ then we multiply as in $U$; if $x \in T$ and $y \in U$ then take the product of $\phi(x)$ and $y$ in $U$; if $x \in U$ and $y \in T$ then take the product of $x$ and $\phi(y)$ in $U$.
\end{definition}
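A direct transcription of this construction (an illustrative sketch, with the two multiplications passed as callables) reads as follows; note that both mixed products land in $U$.
\begin{verbatim}
def strong_semilattice(mul_T, mul_U, phi):
    """Multiplication of S = S(T, U, phi); elements are
       tagged pairs ('T', t) or ('U', u)."""
    def mul(x, y):
        (sx, vx), (sy, vy) = x, y
        if sx == 'T' and sy == 'T':
            return ('T', mul_T(vx, vy))
        if sx == 'U' and sy == 'U':
            return ('U', mul_U(vx, vy))
        if sx == 'T':                      # x in T, y in U
            return ('U', mul_U(phi(vx), vy))
        return ('U', mul_U(vx, phi(vy)))   # x in U, y in T
    return mul
\end{verbatim}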
Another natural way that one might consider weakening the hypotheses of
Theorem~\ref{thm_finitegeneration} would be to replace the condition that there are finitely many relative $\mathscr{H}$-classes in $S \setminus T$ with the weaker property that there is a finite subset $C$ of $S$ such that
\begin{equation}
\forall s \in S, \exists c \in C, \exists t, t' \in T : s = ct' = tc. \label{***}
\end{equation}
The following example shows that Theorem~\ref{thm_finitegeneration} cannot be proved under this weaker assumption.
\begin{example}
Let $M$ be a monoid finitely generated by a set $A$, and
with a two-sided ideal $R$ and suppose that, as a two-sided ideal, $R$ is not finitely generated.
Such examples exist; for example we could take $M$ to be the free monoid on $\{ a, b \}$ and $R$ to be the two-sided ideal generated by all words of the form $ab^ia$ $(i \in \mathbb{N})$.
Let $\overline{M}$ be an isomorphic copy of $M$ with isomorphism:
\[
\phi: M \rightarrow \overline{M}, \quad m \mapsto \overline{m}.
\]
Define $S = \mathcal{S}(M,\overline{M},\phi)$ and $T = M \cup \overline{R}$ where $\overline{R} = \{ \overline{r} : r \in R \}$.
Then $S$ is finitely generated, by $A \cup \{ \overline{e} \}$ where $e$ is the identity of $M$, and $T$ is a subsemigroup of $S$. Also $T \leq S$ satisfies condition \eqref{***} with $C = \{ e, \overline{e} \}$, since for all $s \in S$
\[
s = \begin{cases}
es = se & \mbox{if $s \in M \subseteq T$} \\
\overline{e} m = m \overline{e} & \mbox{if $s = \overline{m}$ for some $m \in M \subseteq T$.}
\end{cases}
\]
However, $T$ is not finitely generated. Indeed, if $T$ were finitely generated then there would be a finite subset $X$ of $R$ satisfying $T = \langle M \cup \overline{X} \rangle$. Then for every $r \in R$ we could write $\overline{r} \in T$ as a product of elements of $M \cup \overline{X}$ where, since $M \leq T$, this product would need to have at least one term from $\overline{X}$. Thus we would have $\overline{r} = \alpha \overline{x} \beta$ for some $x \in X$ and $\alpha, \beta \in T^1$ and applying $\phi^{-1}$ it would follow that, in $M$, $X$ generates $R$ as a two-sided ideal. Since $X$ is finite, this would contradict the original choice of $R$.
\end{example}
\section{Generators for the Sch\"{u}tzenberger Groups}
\label{secschgps}
As above, let $S$ be a semigroup and let $T$ be a subsemigroup of $S$. In this section we show how generating sets for the $T$-relative Sch\"{u}tzenberger groups in $S$ may be obtained from generating sets of $T$.
Fix an arbitrary relative $\mathscr{H}$-class $H$ of $S$ and fix a representative $h \in H$. We do not insist here that $H$ is a subset of the complement $S \setminus T$, and thus allow the possibility that $H \subseteq T$ (meaning that $H$ is just an $\mathscr{H}$-class of $T$ in the classical sense). Let $\mathrm{Stab}(H) \leq T$ be the stabiliser of $H$, let $\gamma$ be the Sch\"{u}tzenberger congruence and $\Gamma = \mathrm{Stab}(H) / \gamma$ be the corresponding relative Sch\"{u}tzenberger group. Let $\{H_{\lambda}: \lambda \in \Lambda\}$ be the collection of all $\mathscr{H}$-classes in the $\mathscr{R}$-class of $H$. By Proposition~\ref{basicproperties}(ii) we can choose elements $p_{\lambda}, p_{\lambda}' \in T^1$ such that
\[
H p_{\lambda} = H_{\lambda}, \quad h_1 p_{\lambda} p_{\lambda}' = h_1, \quad h_2 p_{\lambda}' p_{\lambda} = h_2, \quad (\lambda \in \Lambda, \ h_1 \in H, \ h_2 \in H_{\lambda}).
\]
Also we assume that $\Lambda$ contains a distinguished element $\lambda_1$ with
\[
H_{\lambda_1} = H, \quad p_{\lambda_1} = p_{\lambda_1}' = 1.
\]
We can define an action of $T^1$ on the set $\Lambda \cup \{ 0 \}$ by:
\[
\lambda \cdot t = \begin{cases}
\mu & \mbox{if} \ \lambda, \mu \in \Lambda \ \& \ H_{\lambda}t = H_{\mu} \\
0 & \mbox{otherwise}.
\end{cases}
\]
In the classical (non-relative) case, generating sets for
Sch\"{u}tzenberger groups may be obtained from a generating set of the containing monoid by
adapting the classical group-theoretic method for computing Schreier generators of a subgroup (see \cite[Chapter II]{lyndon_cgt}); this appears implicitly in Sch\"{u}tzenberger's original papers \cite{Schutzenberger1957}, \cite{Schutzenberger1958}, and explicitly in \cite{Ruskuc2}. In the following we record the easy generalisation of that result to the relative setting (the original classical results may be obtained by setting $S = T$).
\begin{theorem}\label{thm_schutz_gen}
Let $S$ be a semigroup, let $T$ be a subsemigroup of $S$ generated by a set $B$, and let $H$ be an arbitrary $T$-relative $\mathscr{H}$-class of $S$. Then the relative Sch\"{u}tzenberger group $\Gamma = \Gamma(H)$ of $H$ is generated by:
\[
X = \{ (p_{\lambda} b p_{\lambda \cdot b}') / \gamma: \lambda \in \Lambda, \ b \in B, \ \lambda \cdot b \neq 0 \}.
\]
In particular, if $T$ is finitely generated, and the relative $\mathscr{R}$-class of $H$ contains only finitely many relative $\mathscr{H}$-classes, then $\Gamma$ is finitely generated.
\end{theorem}
\begin{proof}
First we prove that with
\[
\Gamma' = \{ (p_{\lambda} t p_{\lambda \cdot t}') / \gamma: \lambda \in \Lambda, \ t \in T, \ \lambda \cdot t \neq 0 \}
\]
we have $\Gamma = \Gamma'$. On one hand, given $(p_{\lambda} t p_{\lambda \cdot t}') / \gamma \in \Gamma'$ since:
\[
H p_{\lambda} t p_{\lambda \cdot t}' = H_{\lambda} t p_{\lambda \cdot t}' = H_{\lambda \cdot t} p_{\lambda \cdot t}' = H
\]
it follows that $p_{\lambda} t p_{\lambda \cdot t}' \in \mathrm{Stab}(H)$, the stabiliser of $H$, and therefore $\Gamma'$ is well-defined and $\Gamma' \subseteq \Gamma$. On the other hand, given $v / \gamma \in \Gamma$, since $Hv = H$ it follows that $\lambda_1 \cdot v = \lambda_1$ and therefore that $v / \gamma = (p_{\lambda_1} v p_{\lambda_1}') / \gamma \in \Gamma'$, and $\Gamma \subseteq \Gamma'$.
To finish the proof we must show that an arbitrary element $g = (p_{\lambda} t p_{\lambda \cdot t}') / \gamma \in \Gamma'$ can be written as a product of generators from $X$. Write $t = b_1 \ldots b_m$ $(b_j \in B)$. We proceed by induction on $m$. If
$m=1$ we have $g \in X$. Now let $m>1$ and assume that the result holds for all smaller values. Let $a = b_1$ and $u = b_2 \ldots b_m$. Now we have:
\[
\begin{array}{rcll}
g & = & (p_{\lambda} t p_{\lambda \cdot t }') / \gamma \\
& = &
(p_{\lambda} au p_{\lambda \cdot au}') / \gamma & \\
& = &
(p_{\lambda} a p_{\lambda \cdot a}' p_{\lambda \cdot a} u p_{(\lambda \cdot a) \cdot u}') / \gamma
& \mbox{(by definition of $p_{\lambda \cdot a}$)} \\
& = &
(p_{\lambda} a p_{\lambda \cdot a}') / \gamma \; (p_{\lambda \cdot a} u p_{(\lambda \cdot a) \cdot u}') / \gamma
& \mbox{(since $p_{\lambda} a p_{\lambda \cdot a}', \; p_{\lambda \cdot a} u p_{(\lambda \cdot a) \cdot u}' \in T$)} \\
& \in & \langle X \rangle & \mbox{(by induction).}
\end{array}
\]
The last part of the theorem follows since if $B$ is finite and $\Lambda$ is finite then $X$ is finite. \end{proof}
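In the finite case the generating set $X$ of Theorem~\ref{thm_schutz_gen} can be tabulated directly. A sketch, under the assumption that the $p_\lambda$, $p_\lambda'$ and the action $\lambda \cdot t$ have been precomputed:
\begin{verbatim}
# p[lam] and p_inv[lam] realise p_lambda and p'_lambda; act[(lam, b)]
# is lam . b (0 when H_lam b leaves the R-class of H); gamma_rep
# sends a stabiliser element to its gamma-class.
def schutzenberger_generators(B, Lam, p, p_inv, act, mul, gamma_rep):
    X = set()
    for lam in Lam:
        for b in B:
            mu = act[(lam, b)]
            if mu != 0:
                X.add(gamma_rep(mul(mul(p[lam], b), p_inv[mu])))
    return X
\end{verbatim}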
Combining this result with Theorem~\ref{thm_finitegeneration} we obtain the following.
\begin{theorem}\label{thm_main_fg}
Let $S$ be a semigroup, let $T$ be a subsemigroup of $S$ with finite Green index, and let $\{ H_i : i \in I \}$ be the $T$-relative $\mathscr{H}$-classes in the complement $S \setminus T$. Then $S$ is finitely generated if and only if $T$ is finitely generated, in which case all the
relative Sch\"{u}tzenberger groups $\Gamma(H_i)$ are finitely generated as well.
\end{theorem}
\section{Building a presentation from the subsemigroup and Sch\"{u}tzenberger groups}
\label{sec_presentations}
Given a semigroup $S$ and a subsemigroup $T$, in this section we show how one can obtain a presentation for $S$ in terms of a given presentation for $T$ and presentations for all the relative Sch\"{u}tzenberger groups of $S \setminus T$. In the case that the Green index of $T$ in $S$ is finite we shall see that finite presentability is preserved.
A (semigroup) \emph{presentation} is a pair $\mathfrak{P} = \langle A| \mathfrak{R} \rangle$ where $A$ is an alphabet and $\mathfrak{R} \subseteq A^+ \times A^+$ is a set
of pairs of words. An element $(u,v)$ of $\mathfrak{R}$ is called a \emph{relation} and is
usually written $u=v$. We say that $S$ is the \emph{semigroup defined by the presentation}
$\mathfrak{P}$ if $S \cong A^+ / \eta$ where $\eta$ is the smallest congruence on $A^+$
containing $\mathfrak{R}$. We may think of $S$ as the largest semigroup
generated by the set $A$ which satisfies all the relations of $\mathfrak{R}$. We say that
a semigroup $S$ is \emph{finitely presented} if it can be defined by $\langle A|\mathfrak{R} \rangle$ where $A$ and
$\mathfrak{R}$ are both finite.
Let $S$ be a semigroup defined by a presentation $\langle A|\mathfrak{R} \rangle$, where we identify $S$ with $A^+ / \eta$. We say that the word $w \in A^+$ \emph{represents the element} $s \in S$ if $s = w / \eta$.
Given two words $w,v \in A^+$ we write $w = v$ if $w$
and $v$ represent the same element of $S$ and write $w \equiv v$ if $w$ and $v$ are
identical as words.
We continue to follow the same notation and conventions as in previous sections, so $S$ is a semigroup, $T$ is a subsemigroup, and $\Gamma_i = \mathrm{Stab}(H_i) / \gamma_i = \Gamma(H_i)$ $(i \in I)$ are the Sch\"{u}tzenberger groups of the $T$-relative $\mathscr{H}$-classes in $S \setminus T$.
As above we also assume $1 \not\in I$ and follow the convention $H_1 = \{ 1 \}$ and $h_1=1$ where $1$ is the external identity adjoined to $S$.
Let $\langle B | Q \rangle$ be a presentation for $T$ and $\beta: B^+ \rightarrow T$ be the natural homomorphism associated with this presentation (mapping each word to the element it represents). Next define $A = B \cup \{ d_i : i \in I \}$ and
extend $\beta$ to $\alpha: A^+ \rightarrow S$ given by extending the map
\[
\alpha(a) = \begin{cases}
\beta(a) & \mbox{if $a \in B$} \\
h_i & \mbox{if $a=d_i$ for some $i \in I$}
\end{cases}
\]
to a homomorphism.
It follows from Theorem~\ref{thm_smalltobig} that $\alpha$ is surjective.
We also
introduce the symbol $d_1$ which we use to denote the empty word.
For every $i \in I$ let $\langle C_i | W_i \rangle$ be a (semigroup) presentation for the group $\Gamma_i$ and let $\xi_i: C_i^+ \rightarrow \Gamma_i$ be the associated homomorphism.
By Proposition~\ref{schutzstabproperties}(iv), for all $i,j \in I$ if $h_i \mathscr{L}^T h_j$ then
$\mathrm{Stab}(H_i) = \mathrm{Stab}(H_j)$ and $\Gamma(H_i) = \Gamma(H_j)$.
Therefore we may suppose without loss of generality that
for all $i,j \in I$:
\begin{equation}
\label{dagger}
\begin{array}{lll}
h_i \mathscr{L}^T h_j & \Rightarrow& C_i = C_j\ \&\ W_i = W_j\\
(h_i,h_j)\not\in\mathscr{L}^T &\Rightarrow& C_i \cap C_j = \varnothing\ \&\ W_i \cap W_j = \varnothing.
\end{array}
\end{equation}
For every letter $c \in C_i$ ($i \in I$) we have
\[
\xi_i (c) \in \Gamma_i = \mathrm{Stab}(H_i) / \gamma_i.
\]
Since $\mathrm{Stab}(H_i) \subseteq T$ and $\beta: B^+ \rightarrow T$ is surjective there exists a word $\overline{\xi_i}(c) \in B^+$ with $\beta(\overline{\xi_i}(c) ) \in \mathrm{Stab}(H_i)$ and
\[
\beta( \overline{\xi_i}(c) ) / \gamma_i = \xi_i(c).
\]
This defines a family of mappings $\overline{\xi}_i : C_i \rightarrow B^+$ ($i \in I$), which when taken together define a
mapping from $C = \bigcup_{i \in I} C_i$ to $B^+$, which in turn extends uniquely to a homomorphism $\overline{\xi} : C^+ \rightarrow B^+$. For $i \in I$ define $\overline{\xi}_i = \overline{\xi} \upharpoonright_{C_i^+}$, the restriction of $\overline{\xi}$ to the set $C_i^+ \subseteq C^+$.
Since $\beta$ and $\xi_i$ are homomorphisms, and $\gamma_i$ is a congruence, the mapping $\overline{\xi}_i$ satisfies:
\[
\beta( \overline{{\xi}_i}(w) ) / \gamma_i = \xi_i(w)
\]
for all $w \in C_i^+$.
In order to write down our presentation for $S$ we need to lift the mappings $\rho$, $\lambda$, $\sigma$ and $\tau$ introduced
in Section~\ref{sec_rewriting} from elements of $S$ to words, in the obvious way.
Abusing notation we shall use the same symbols for these liftings. Thus, considered as mappings on words,
we have
$$
\begin{array}{ll}
\rho: A^* \times I^1 \rightarrow I^1, & \lambda: I^1 \times A^* \rightarrow I^1,\\
\sigma: A^* \times I^1 \rightarrow B^*,& \tau: I^1 \times A^* \rightarrow B^*,
\end{array}
$$
where
\[
\begin{array}{rclrcl}
\rho(w,i) & = & \begin{cases}
j & \mbox{if $\alpha(w) h_i \in H_j$} \\
1 & \mbox{if $\alpha(w) h_i \in T$},
\end{cases} &
\lambda(i,w) & = & \begin{cases}
j & \mbox{if $h_i \alpha(w) \in H_j$} \\
1 & \mbox{if $h_i \alpha(w) \in T$},
\end{cases}
\\
\alpha(w) h_i & = & h_{\rho(w,i)} \alpha(\sigma(w,i)), &
h_i \alpha(w) & = & \alpha(\tau(i,w)) h_{\lambda(i,w)}.
\end{array}
\]
\begin{theorem}\label{thm_pres_smalltobig}
Suppose that $T$ is a subsemigroup of $S$, and that $\langle B\:|\: Q\rangle$ is a presentation for $T$.
With the remaining notation as above,
$S$ is defined by the presentation with generators $A = B \cup \{ d_i | i \in I \}$
and set of defining relations $Q$ together with:
\begin{alignat}{3}
&ad_i & & = d_{\rho(a,i)} \sigma(a,i) &\quad& (a \in A, i \in I^1), \label{22} \\
& d_j b && = \tau(j,b) d_{\lambda(j,b)} && (b \in B, j \in I^1), \label{33} \\
&d_i \bar{\xi} (u) && = d_i \bar{\xi} (v) && (i \in I^1, (u,v) \in W_i). \label{44}
\end{alignat}
In particular if $T$ has finite Green index in $S$, and all of the Sch\"{u}tzenberger groups $\Gamma_i$ are finitely presented, then $S$ is finitely presented.
\end{theorem}
\begin{proof} The defining relations $Q$ and \eqref{22}--\eqref{44} clearly all hold. We want to show that any relation $w_1 = w_2$ ($w_1, w_2 \in A^+$) that holds in $S$ is a consequence of these relations.
Consider the word $w_1$ and transform it using our defining relations as follows. First write $w_1 = w_1 d_1$. Then use relations \eqref{22} to move $d_1$ through the word $w_1$ from right to left, one letter at a time. We obtain a word $d_i w_1'$ where $w_1' \in B^+$ and the subscript $i$ is computed by the algorithm given in Lemma~\ref{lemma_bookkeeping}'.
Next, use relations \eqref{33} to move $d_i$ through $w_1'$ from left to right, one letter at a time, to obtain a word $w_1'' d_j$ where $w_1'' \in B^+$ and $d_j$ is computed by the algorithm given in Lemma~\ref{lemma_bookkeeping}.
If $\alpha(w_1) \in T$ we have $j=1$ by Lemma~\ref{lemma_bookkeeping}(ii), and so we have transformed $w_1$ into a word $w_1'' \in B^+$. The same process applied to $w_2$ would then give a word $w_2'' \in B^+$. Since $\langle B | Q \rangle$ is a presentation for $T$, the relation $w_1'' = w_2''$ is a consequence of $Q$, and so $w_1 = w_2$ is a consequence of the relations in this case.
Now consider the case $\alpha(w_1) = \alpha(w_2) \not\in T$. In this case, applying Lemma~\ref{lemma_bookkeeping}(i) shows that $h_j = \alpha(d_j) \mathscr{L} \alpha(w_1)$. Using relations \eqref{22} once more, we rewrite $w_1'' d_j$ into $d_k w_1'''$. This time Lemma~\ref{lemma_bookkeeping}'(iii) applies, and so $h_k = \alpha(d_k) \mathscr{H} \alpha(w_1)$. Furthermore, because $\alpha(d_j) \mathscr{L} \alpha(w_1) \mathscr{H} \alpha(d_k)$, it follows that all the intermediate $d_l$ appearing in this rewriting also satisfy $\alpha(d_l) \mathscr{L} \alpha(w_1)$, and so $C_l = C_k$ by \eqref{dagger}. Thus all $\sigma(b,l)$ arising from applications of \eqref{22} are in the image of $\bar{\xi}_k$, and, since $\bar{\xi}_k$ is a homomorphism it follows that $w_1''' \equiv \bar{\xi}_k(\overline{w}_1) \equiv \bar{\xi} (\overline{w}_1)$ for some $\overline{w}_1 \in C_k^+$. The same process applied to $w_2$ rewrites it into a word $d_r \bar{\xi} (\overline{w}_2)$. From
$$
h_r = \alpha(d_r) \mathscr{H} \alpha(w_2) = \alpha(w_1) \mathscr{H} \alpha(d_k) = h_k
$$
it follows that $r=k$, and $\overline{w}_2 \in C_k^+$.
From $\alpha(w_1) = \alpha(w_2)$ we have $h_k \alpha( \bar{\xi} (\overline{w}_1)) = h_k \alpha (\bar{\xi} (\overline{w}_2))$, and so
$$( \alpha(\bar{\xi}(\overline{w}_1)), \alpha(\bar{\xi}(\overline{w}_2)) ) \in \gamma_k.$$ Since $\langle C_k | W_k \rangle$ is a presentation
for $\Gamma_k$, it follows that $\overline{w}_1 = \overline{w}_2$ is a consequence of the relations $W_k$. So, $\overline{w}_2$ can be obtained from $\overline{w}_1$ by applying relations from $W_k$. We shall now show that this can be translated into a sequence of applications of the relations \eqref{33} and \eqref{44} transforming $d_k \bar{\xi} (\overline{w}_1)$ into $d_k \bar{\xi} (\overline{w}_2)$.
Clearly it is sufficient to consider the case where $\overline{w}_2$ is obtained from $\overline{w}_1$ by a single application of a relation from $W_k$, so:
\[
\overline{w}_1 \equiv xuy, \quad \overline{w}_2 \equiv xvy, \quad x,y \in C_k^*, \ (u=v) \in W_k.
\]
There is a sequence of applications of \eqref{33} transforming $d_k \bar{\xi}(x)$ into $zd_t$ where $z \in B^*$. Moreover, since $x \in C_k^+$, it follows that $\alpha(\bar{\xi}(x)) \in \mathrm{Stab}(H_k)$ and so $$\alpha(d_k \bar{\xi} (x)) \mathscr{H} \alpha (d_k),$$
implying $t=k$. Now applying \eqref{44} we obtain:
$$
d_k \bar{\xi} (\overline{w}_1) \equiv d_k \bar{\xi} (x) \bar{\xi} (u) \bar{\xi} (y) = z d_k \bar{\xi} (u) \bar{\xi}(y)
= z d_k \bar{\xi} (v) \bar{\xi} (y) = d_k \bar{\xi} (x) \bar{\xi} (v) \bar{\xi} (y) \equiv d_k \bar{\xi} (\overline{w}_2),
$$
thus completing the proof of the theorem.
\end{proof}
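For concreteness, once the word-level maps $\rho$, $\lambda$, $\sigma$, $\tau$ and the words $\overline{\xi}(c)$ have been tabulated, the defining relations \eqref{22}--\eqref{44} can be enumerated mechanically; an illustrative sketch (words are tuples of letters, with $d_1$ the empty word):
\begin{verbatim}
def presentation_for_S(A, B, I1, Q, rho, sig, lam, tau, W, xi_bar):
    d = lambda i: () if i == 1 else ('d_%s' % i,)
    rels = list(Q)
    for a in A:
        for i in I1:                         # relations (22)
            rels.append(((a,) + d(i),
                         d(rho[(a, i)]) + sig[(a, i)]))
    for b in B:
        for j in I1:                         # relations (33)
            rels.append((d(j) + (b,),
                         tau[(j, b)] + d(lam[(j, b)])))
    for i, Wi in W.items():                  # relations (44)
        for (u, v) in Wi:
            rels.append((d(i) + xi_bar(u), d(i) + xi_bar(v)))
    return rels
\end{verbatim}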
At present we do not know how to obtain `nice' presentations in the converse direction.
In particular, we pose:
\begin{question}
\label{presentationsquestion}
Let $T$ be a subsemigroup of finite Green index in a semigroup $S$.
Supposing that $S$ is finitely presented, is it true that:
(i) $T$ is necessarily finitely presented?
(ii) All $T$-relative Sch\"{u}tzenberger groups of $\mathscr{H}^T$-classes in $S\setminus T$ are necessarily finitely presented?
\end{question}
If the answers are affirmative, the proof is likely to involve a combination of the methods used in
the classical Reidemeister--Schreier theory for groups, those for Rees index \cite{Ruskuc1}, and Sch\"{u}tzenberger groups \cite{Ruskuc2}.
A major obstacle at present is the nature of the rewriting process employed in the proof of Theorem
\ref{thm_main_fg}, whereby a word is first rewritten from left to right, and then once again from right to left.
This is in contrast with the rewritings employed in all the other contexts mentioned above, which are all essentially `one-sided'.
In the remainder of this section we give some corollaries, examples and further comments concerning Theorem \ref{thm_pres_smalltobig}.
To begin with, note that Theorem~\ref{thm_pres_smalltobig} applies when
the complement is finite, in which case all of the relative Sch\"{u}tzenberger groups $\Gamma_i$ are finite and hence finitely presented, so we recover the following result, originally proved in \cite{Ruskuc1}.
\begin{corollary}[{\cite[Theorem~4.1]{Ruskuc1}}]
Let $S$ be a semigroup and let $T$ be a subsemigroup of $S$ with finite Rees index. If $T$ is finitely presented then $S$ is finitely presented.
\end{corollary}
In Example \ref{ex_fpcounter} and Theorems \ref{thm_aptrick1}, \ref{thm_aptrick2} below we will make use of the construction $\mathcal{S}(U,V,\phi)$, introduced in Definition~\ref{def_strong_semilattice}.
But first we record the following properties of this construction; the proofs are straightforward and are omitted:
\begin{lemma}
\label{lem_free_strong_semilattice}
Let $\phi\::\: T\rightarrow U$ be a surjective homomorphism of semigroups, and let $S=\mathcal{S}(T,U,\phi)$.
\begin{enumerate}[(i)]
\item
$T\leq S$ and $S\setminus T=U$.
\item
The relative $\mathscr{R}^T$-classes, $\mathscr{L}^T$-classes and $\mathscr{H}^T$-classes in $U$ are precisely
$\mathscr{R}$-classes, $\mathscr{L}$-classes and $\mathscr{H}$-classes respectively of $U$.
\item
The $T$-relative Sch\"{u}tzenberger group of an $\mathscr{H}^T$-class $H\subseteq U$ is isomorphic to
the Sch\"{u}tzenberger group of $H$.
\end{enumerate}
\end{lemma}
We now proceed to exhibit an example which shows that the condition of finite presentability on the relative Sch\"{u}tzenberger groups in Theorem~\ref{thm_pres_smalltobig} cannot be dropped.
\begin{example}
\label{ex_fpcounter}
Let $G$ be a finitely presented group which has a non-finitely presented homomorphic image $H$,
and let $\phi\::\: G\rightarrow H$ be an epimorphism.
($H$ can be chosen to be any finitely generated, non-finitely presented group, say with $r$ generators, and $G$ to be free of rank $r$.)
Let $S=\mathcal{S}(G,H,\phi)$.
By Lemma \ref{lem_free_strong_semilattice}, $G$ has Green index 2 in $S$.
On the other hand, $S$ is not finitely presented.
To see this one can check the easy facts that $H$ is a retract of $S$, and that finite presentability is
preserved under retracts (see also \cite{Wang2000}).
Alternatively one can apply results on strong semilattices of monoids from
\cite{Araujo2001}.
\end{example}
As another application of Theorem~\ref{thm_pres_smalltobig}, we obtain a rapid proof of the following result from \cite{Ruskuc2}:
\begin{theorem}[{\cite[Corollary~3.3]{Ruskuc2}}]
\label{thm_aptrick1}
Let $S$ be a semigroup with finitely many left and right ideals. If all Sch\"{u}tzenberger groups of $S$ are finitely presented then so is $S$.
\end{theorem}
\begin{proof}
Let $\{ H_i: i \in I \}$ be the set of all $\mathscr{H}$-classes of $S$ where for each $i \in I$, $h_i \in H_i$ is a fixed representative and $\Gamma_i = \Gamma(H_i)$ denotes the Sch\"{u}tzenberger group of $H_i$.
Suppose that all the Sch\"{u}tzenberger groups of $S$ are finitely presented. In particular they are all finitely generated and from this it easily follows that $S$ itself is finitely generated. Indeed, for each $i \in I$ we may fix a finite subset $A_i$ of $\mathrm{Stab}(H_i)$ satisfying $\langle A_i / \gamma_i \rangle = \Gamma_i$. Then it is easily seen that
\[
A = (\bigcup_{i \in I} A_i) \cup \{ h_i : i \in I \}
\]
is a finite generating set for $S$.
Now let $W = \mathcal{S}(F,S,\phi)$ where $F$ is an appropriate free semigroup of finite rank.
Since $S$ has only finitely many $\mathscr{H}^S$-classes and all the Sch\"{u}tzenberger groups are finitely presented, by Lemma~\ref{lem_free_strong_semilattice} it follows that $F$ is a subsemigroup of $W$ with finite Green index and with all the $F$-relative Sch\"{u}tzenberger groups of $\mathscr{H}$-classes in $W\setminus F=S$ finitely presented.
Since $F$ is free of finite rank, and hence is finitely presented, it follows from Theorem~\ref{thm_pres_smalltobig} that
$W$ is finitely presented. As in Example~\ref{ex_fpcounter} this implies that $S$ is finitely presented, since $S$ is a retract of $W$.
\end{proof}
We end this section by observing that the same trick used in the previous theorem may be applied to recover the corresponding result (originally proved in \cite{GrayRuskucResFin}) for residual finiteness, by using the
following result from \cite{gray_green1}:
\begin{proposition}[{\cite[Theorem 20]{gray_green1}}]
\label{schgprf}
Suppose $T$ is a subsemigroup of finite Green index in a semigroup $S$.
Then $S$ is residually finite if and only if $T$ and all the $T$-relative Sch\"{u}tzenberger
groups of $S\setminus T$ are residually finite.
\end{proposition}
Recall that a semigroup $S$ is \emph{residually finite} if for any pair $x, y \in S$ of distinct elements
there exists a homomorphism $\phi$ from $S$ into a finite semigroup such that $x \phi \neq y \phi$.
Clearly this is equivalent to the existence of a congruence with finitely many classes separating $x$ from $y$.
\begin{theorem}[{\cite[Theorem 7.2]{GrayRuskucResFin}}]
\label{thm_aptrick2}
Let $S$ be a semigroup with finitely many left and right ideals. Then $S$ is residually finite if and only if all of the Sch\"{u}tzenberger groups of $S$ are residually finite.
\end{theorem}
\begin{proof}
Let $\phi\::\: F\rightarrow S$ be an epimorphism from a (not necessarily finitely generated this time) free semigroup onto $S$,
and let $W=\mathcal{S}(F,S,\phi)$.
It is not hard to see that $W$ is residually finite if and only if
$S$ is residually finite. The direct part of this claim is trivial since $S$ is a subsemigroup of $W$. For the converse, given $x,y \in W$ with $x \neq y$ we have the following possibilities: If $x \in F$ and $y \in S$ (or vice versa) then the congruence with two classes $F$ and $S$ separates $x$ from $y$. If $x,y \in F$ then we may identify all the elements in $S$ and apply the fact that $F$ is residually finite to separate $x$ from $y$ with a finite index congruence.
Finally, if $x,y \in S$ then since $S$ is residually finite there is a finite index congruence $\sigma$ on $S$ separating $x$ from $y$, and this may be extended to a finite index congruence on $W$ by taking the preimage of $\sigma$ under $\phi$, completing the proof of our assertion.
On the other hand since $F$ has finite Green index in $W$, and $F$ is residually finite, it follows from Proposition \ref{schgprf} that $W$ is residually finite if and only if all of the $F$-relative Sch\"{u}tzenberger groups of $\mathscr{H}^F$-classes in $S$ are residually finite. But by Lemma~\ref{lem_free_strong_semilattice} these are precisely the Sch\"{u}tzenberger groups of $S$, and this completes the proof of the theorem.
\end{proof}
\section{Malcev presentations}
\label{secmalcev}
In the previous section we outlined the difficulties, related to the specific nature of our rewriting process, that at present prevent us from proving
that finite presentability is preserved when passing to subsemigroups of finite Green index.
In this section we prove such a result for so-called Malcev presentations, which are presentations of
semigroups that can be embedded into groups.
(For a survey of the theory of Malcev presentations, see \cite{c_survey}.)
We do this by dispensing with rewriting altogether, and using properties of universal groups instead.
A congruence $\sigma$ on a semigroup $S$ is said to be a
\defterm{Malcev congruence} if $S/\sigma$ is embeddable in a group.
If $\{\sigma_i : i \in I\}$ is a set of Malcev congruences on $S$,
then $\sigma = \bigcap_{i \in I} \sigma_i$ is also a Malcev congruence
on $S$. This is true because $S/\sigma_i$ embeds in a group $G_i$ for
each $i \in I$, so $S/\sigma$ embeds in $\prod_{i \in I} S/\sigma_i$,
which in turn embeds in $\prod_{i \in I} G_i$.
Let $A^+$ be a free semigroup; let $\rho \subseteq A^+ \times A^+$ be
any binary relation on $A^+$. Let $\rho^M$ denote the smallest
Malcev congruence containing $\rho$ --- namely,
\[
\rho^M = \bigcap \left\{\sigma : \sigma \supseteq \rho, \text{
$\sigma$ is a Malcev congruence on }A^+\right\}.
\]
Then $\langle A \:|\: \rho\rangle$ is a
\defterm{Malcev presentation} for (any semigroup isomorphic to)
$A^+/\rho^M$.
The main result of this section (generalising \cite[Theorem 1]{crr_finind}) is:
\begin{theorem}
\label{thmmalcev1}
Let $S$ be a group-embeddable semigroup, and let $T$ be a subsemigroup of finite Green index in $S$.
Then $S$ has a finite Malcev presentation if and
only if $T$ has a finite Malcev presentation.
\end{theorem}
The proof of Theorem \ref{thmmalcev1} is at the end of the section.
We begin by recalling
the concept of universal groups of semigroups and
their connection to Malcev presentations. For further background on
universal groups refer to \cite[Chapter~12]{clifford_semigroups2};
for their interaction with Malcev presentations, see
\cite[\S1.3]{c_phdthesis}.
Let $S$ be a group-embeddable semigroup. The \defterm{universal group}
$U$ of $S$ is the largest group into which $S$ embeds and which $S$
generates, in the sense that all other such groups are homomorphic
images of $U$.
The concept of a universal group can be defined for all semigroups, not
just those that are group-embeddable. However, the definition above
will suffice for the purposes of this paper. The universal group of a
semigroup is unique up to
isomorphism.
\begin{proposition}[{\cite[Construction~12.6]{clifford_semigroups2}}]
\label{prop:ugsamepres}
Let $S$ be a group-embeddable semigroup. Suppose $S$ is presented by (an ordinary semigroup presentation)
$\langle A\:|\: \rho\rangle$
for some alphabet $A$ and set of defining relations $\rho$. Then
the group defined by the presentation $\langle A\:|\: \rho\rangle$ is {\rm[}isomorphic to{\rm]} the universal group of $S$.
\end{proposition}
The following two results show the connection between universal groups
and Malcev presentations. The proof of the first result is somewhat long and
technical; the second is a fairly direct corollary of the first.
\begin{proposition}[{\cite[Proposition~1.3.1]{c_phdthesis}}]
\label{prop:sgmpresiffugpres}
Let $S$ be a semigroup that embeds into a group. If
$\langle A\:|\: \rho\rangle$ is a Malcev presentation for $S$, then the universal
group of $S$ is presented by $\langle A\:|\: \rho\rangle$ considered as a group presentation. Conversely, if
$\langle A\:|\: \rho\rangle$ is a presentation for the universal group of $S$,
where $A$ represents a generating set for $S$ and $\rho \subseteq A^+
\times A^+$, then $\langle A\:|\: \rho\rangle$ is a Malcev presentation for $S$.
\end{proposition}
In other words, Malcev presentations for $S$ are precisely group presentations for its universal group
involving no inverses of generators.
\begin{proposition}[{\cite[Corollary~1.3.2]{c_phdthesis}}]
\label{prop:finsgmpresifffinugpres}
If a group-embeddable semigroup $S$ has a finite Malcev presentation,
then its universal group $G$ is finitely presented. Conversely, if the
universal group of $S$ is finitely presented and $S$ itself is
finitely generated, then $S$ admits a finite Malcev presentation.
\end{proposition}
Our strategy in proving Theorem \ref{thmmalcev1} relies on a dichotomy:
either $S$ and $T$ are both groups, in which
case the problem reduces to the finite presentability of groups, or else
$S$ and $T$ have isomorphic universal groups. The key technical observation is the following:
\begin{lemma}
\label{lem:rlquotients}
Let $G$ be a group, let $S$ be a subsemigroup of $G$, and let $T$ be a subsemigroup of finite Green index in $S$.
Then either $T$
is a group or for any $s \in S\setminus T$ there exist $u_s,v_s,w_s,x_s \in T$
with $s = u_sv_s^{-1}$ and $s = w_s^{-1}x_s$ in $G$.
\end{lemma}
\begin{proof}
Let $J$ be the group of units of $T$, if $T$ is a monoid, and otherwise set $J = \varnothing$. If $J = T$ there is nothing to
prove; so suppose $T \neq J$.
Let $s \in S\setminus T$. Pick any $t \in T \setminus J$ and consider the elements $s, st, st^2,\ldots$.
Since $T$ has finite Green index in $S$, either we have $st^i\in T$ for some $i$, or else
all the $st^i$ lie in $S\setminus T$; in the latter case, since $S\setminus T$ contains only finitely many relative $\mathscr{H}^T$-classes, we have $st^i\mathscr{R}^T st^j$ for some $i<j$.
If $st^i\in T$ the elements $u_s=st^i$ and $v_s=t^i$ belong to $T$ and satisfy $u_sv_s^{-1}=s$.
On the other hand, if $st^i\mathscr{R}^T st^j$ then there exists $u\in T^1$ such that $st^ju=st^i$, which implies
$t^{j-i}u=1$, and contradicts the assumption $t\not\in J$.
Similar reasoning
using $\rel{L}$ yields $w_s$ and $x_s$.
\end{proof}
Any finite cancellative semigroup is a group, so for the class of cancellative semigroups the property of being a group is a
finiteness condition. The following result shows that for cancellative semigroups this property is preserved when taking finite Green index subsemigroups or extensions.
\begin{proposition}
\label{lem_cancellative}
Let $S$ be a cancellative semigroup and let $T$ be a subsemigroup with finite Green index in $S$.
Then $S$ is a group if and only if $T$ is a group.
\end{proposition}
\begin{proof}
In \cite[Theorem~5.1 \& Proposition~5.3]{gray_green1}
it is shown that $T$ is a group if $S$ is a group.
For the converse, suppose that $T$ is a group, say with identity element $e$. Since $S$ is cancellative and $e$ is an idempotent, $e$ is a two-sided identity in $S$. Therefore $S$ is a monoid and $T$ is a subgroup of the group of units of $S$.
Let $s \in S$ be arbitrary. We claim that $s^i \in T$ for some $i \in \mathbb{N}$. Otherwise, since the Green index is finite, there would exist $i < j$ with $s^i \mathscr{R}^T s^j$, implying $s^j = s^i t$ for some $t \in T$, which by left cancellativity yields $s^{j-i} = t \in T$, a contradiction. Therefore $s^i$ belongs to the group of units of $S$ for some $i \in \mathbb{N}$, which is clearly only possible if $s$ itself is invertible. Thus every element is invertible and we conclude that $S$ is a group.
\end{proof}
\begin{corollary}
\label{corol:sgrpifftgrp}
Let $G$ be a group, let $S$ be a subsemigroup of $G$, and let $T$ be a subsemigroup of finite Green index in $S$.
Then $T$ is a
group if and only if $S$ is a group.
\end{corollary}
\begin{theorem}
\label{thm:gorisoug}
Let $S$ be a group-embeddable semigroup, and let $T$ be a subsemigroup of finite
Green index. Then either $S$ and $T$ are both groups or $S$ and
$T$ have isomorphic universal groups.
\end{theorem}
\begin{proof}
Let $G$ be the universal group of $S$ and view $S$ and $T$ as being
subsemigroups of $G$. By Corollary~\ref{corol:sgrpifftgrp} either
both $S$ and $T$ are groups or neither are groups.
In the former case,
the proof is complete.
In the latter case, Lemma~\ref{lem:rlquotients} says that every element of $S\setminus T$ can be
expressed as a right or left quotient of elements of $T$. The proof of
\cite[Theorem~3.1]{crr_finind} thus applies to show that the universal
group of $T$ is isomorphic to $G$.
\end{proof}
The following is now immediate:
\begin{corollary}
\label{corol:ugfi}
Let $S$ be a group-embeddable semigroup, and let $T$ be a subsemigroup of finite
Green index. Let $G$ and $H$ be the universal groups of $S$ and
$T$ respectively. Then $G$ contains a subgroup of finite index isomorphic to $H$.
\end{corollary}
We are now in a position to prove our main result of this section.
\begin{proof}[Proof of Theorem \ref{thmmalcev1}]
Let $G$ and $H$ be the universal groups of $S$ and $T$,
respectively. By Corollary~\ref{corol:ugfi}, $H$ is a finite index
subgroup of $G$; hence, by the Reidemeister--Schreier Theorem
\cite[\S II.4]{lyndon_cgt}, $G$ is finitely presented if and only
if $H$ is finitely presented. Furthermore, from Theorem~\ref{thm_finitegeneration} above $S$ is finitely generated if and only if $T$ is finitely generated.
Now, by the observations in the foregoing paragraph and by using Proposition~\ref{prop:finsgmpresifffinugpres} twice, one sees that:\\
$\phantom{\iff} S$ has a finite Malcev presentation\\
$\iff S$ is finitely generated and $G$ is finitely presented \\
$\iff S$ is finitely generated and $H$ is finitely presented \\
$\iff T$ is finitely generated and $H$ is finitely presented \\
$\iff T$ has a finite Malcev presentation.
\end{proof}
\begin{remark}
It is natural to ask whether preservation of finite presentability when passing to subsemigroups of finite Green index holds for other types of presentations, e.g. presentations of cancellative semigroups, left or right cancellative semigroups, or inverse semigroups.
The corresponding results for finite Rees index are known
(\cite[Theorems 2, 3]{crr_finind} and \cite[Theorem 1.2]{inverserees}),
but rely on the result for the `ordinary' presentations \cite[Theorem 1.3]{Ruskuc1}.
Consequently, for Green index, these results either have to wait for a positive solution to Problem \ref{presentationsquestion}, or else entirely new methods are required.
\end{remark}
The method of proof used above reduces either to the case where $S$
and $T$ are both groups, or to the case where, as for finite Rees
index, every element of $S$ can be expressed as a right or left
quotient of $T$. In light of this, one might suspect that perhaps
finite Green index for group-embeddable semigroups reduces either to
finite group index or to finite Rees index. The following example
dispels these suspicions:
\begin{example}
Let $n \in \mathbb{N}$. Let $S = \mathbb{Z} \times (\mathbb{N} \cup\{0\})$ and let $T
= \mathbb{Z} \times ((\mathbb{N} \cup\{0\}) \setminus \{1,\ldots,n\})$. Then $S$ and $T$
are both group-embeddable and $T$ is a subsemigroup of $S$. Furthermore,
\[
S \setminus T = \mathbb{Z} \times \{1,\ldots,n\}.
\]
Let $k \in \{1,\ldots,n\}$. Then for any $z \in \mathbb{Z}$, the
$\rel{R}^T$-class of $(z,k)$ is $\mathbb{Z} \times \{k\}$. Since $S$ is
commutative, these are the $\rel{L}^T$ and thus the
$\rel{H}^T$-classes. Therefore there are only $n$ different
$\rel{H}^T$-classes in $S\setminus T$. Thus $T$ has finite Green index in
$S$. Since $S\setminus T$ is infinite, $T$ does not have finite Rees index in
$S$. Furthermore, neither $S$ nor $T$ is a group.
\end{example}
\section{The Word Problem}
\label{secwp}
In this section we consider some questions relating to decidability. Recall that for a semigroup $S$ finitely generated by a set $A$ we say that $S$ has a \emph{soluble word problem} (with respect to $A$) if there exists an algorithm which, for any two words $u,v \in A^+$, decides whether the relation $u=v$ holds in $S$ or not. For finitely generated semigroups it is easy to see that solubility of the word problem does not depend on the choice of (finite) generating set for $S$.
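To make the definition concrete, the following Python listing (a toy illustration of ours, not part of the development) decides the word problem for a semigroup given in an effective form, here as a semigroup of transformations of a finite set: two words are equal in the semigroup exactly when they evaluate to the same transformation.
\begin{verbatim}
# An illustrative sketch (not from the paper): for a semigroup given
# concretely as a transformation semigroup on a finite set, the word
# problem is decided by evaluating both words and comparing the results.
def compose(f, g):
    """Composition of transformations acting on the right: x(fg) = (xf)g."""
    return tuple(g[f[x]] for x in range(len(f)))

def evaluate(word, gens):
    """Evaluate a nonempty word over the keys of `gens`."""
    result = gens[word[0]]
    for letter in word[1:]:
        result = compose(result, gens[letter])
    return result

# Two transformations of {0, 1, 2}: 'a' is a 3-cycle, 'b' collapses points.
gens = {'a': (1, 2, 0), 'b': (0, 0, 1)}
print(evaluate('ab', gens) == evaluate('ba', gens))        # False
print(evaluate('aaa', gens) == evaluate('aaaaaa', gens))   # True: a^3 = a^6
\end{verbatim}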
The following result concerning the word problem essentially follows from the arguments in the proof of Theorem~\ref{thm_pres_smalltobig}.
\begin{theorem}\label{thm_soluble}
Let $S$ be a finitely generated semigroup with $T$ a subsemigroup of $S$ with finite Green index. Then $S$ has soluble word problem if and only if $T$ and all the relative Sch\"{u}tzenberger groups of $S \setminus T$ have soluble word problem.
\end{theorem}
\begin{proof}
\begin{sloppypar}
Assume that $T$ has soluble word problem and that all of the relative Sch\"{u}tzenberger groups $\Gamma_i$ of $S \setminus T$ have soluble word problem. By Theorem~\ref{thm_main_fg}, $T$ is generated by a finite subset $B \subseteq T$ say, and $S = \langle A \rangle$ where $A = B \cup \{ h_i : i \in I \}$. Theorem~\ref{thm_pres_smalltobig} gives a (possibly infinite) presentation for $S$ but where the sets of relations \eqref{22} and \eqref{33} are both finite since $A$, $B$ and $I$ are all finite.
\end{sloppypar}
Let $w_1, w_2 \in A^+$. As in the proof of Theorem~\ref{thm_pres_smalltobig} using the relations \eqref{22} and \eqref{33} we can rewrite $w_1$ into a word of the form $w_1'' d_j$ where $w_1'' \in B^+$ and similarly rewrite $w_2$ into a word of the form $w_2'' d_k$ with $w_2'' \in B^+$. By Lemma~\ref{lemma_bookkeeping}(i) $w_1$ represents an element of $T$ if and only if $j=1$, while $w_2$ represents an element of $T$ if and only if $k=1$. So if $j=1$ and $k \neq 1$ (or vice versa) we deduce that $w_1 \neq w_2$. If $j=k=1$ then $w_i = w_i'' \in B^+$ ($i = 1,2$) and $w_1 = w_2$ if and only if $w_1'' = w_2''$ in $T$ which can be decided since $T$ has soluble word problem.
The remaining possibility is that $j \neq 1$ and $k \neq 1$ so $w_1$ and $w_2$ both represent elements from $S \setminus T$. Now, again following the argument in the proof of Theorem~\ref{thm_pres_smalltobig} using the relations \eqref{22} and \eqref{33} we deduce:
\[
w_1 = d_r \overline{\xi} (\overline{w_1}),\
w_2 = d_r \overline{\xi} (\overline{w_2})
\]
where $\overline{w_1}, \overline{w_2} \in C_k^+$. Now $w_1 = w_2$ in $S$ if and only if $\overline{w_1} = \overline{w_2}$ in the Sch\"{u}tzenberger group $\Gamma_k$ and this can be decided since $\Gamma_k$ has soluble word problem by assumption.
For the converse, suppose that $S$ has soluble word problem. Then immediately $T$ has soluble word problem since it is a finitely generated subsemigroup of $S$. Finally, let $H$ be a $T$-relative $\mathscr{H}$-class in $S \setminus T$, with fixed representative $h \in H$. The group $\Gamma = \Gamma(H) = \mathrm{Stab}(H) / \gamma$ is finitely generated by Theorem~\ref{thm_schutz_gen}. Let $Y$ be a finite subset of $\mathrm{Stab}(H)$ such that $\langle Y / \gamma \rangle = \Gamma(H)$. Let $w_1, w_2 \in (Y / \gamma)^*$. Then $w_i = w_i' / \gamma$ where $w_i' \in Y^*$ ($i = 1,2$), and $w_1 = w_2$ if and only if $hw_1' = hw_2'$ in $S$, which is decidable since $S$ is assumed to have soluble word problem.
\end{proof}
As with other results in this article,
Theorem~\ref{thm_soluble} generalises the well-known classical result from group theory and the corresponding result for finite Rees index subsemigroups proved in \cite{Ruskuc1}.
Just as for
Theorems~\ref{thm_aptrick1} and \ref{thm_aptrick2},
Theorem~\ref{thm_soluble} may be used to prove that a finitely generated semigroup with finitely many left and right ideals has soluble word problem if and only if all of its Sch\"{u}tzenberger groups have soluble word problem.
A finitely generated group $G$ has only finitely many subgroups of any given finite index $n$.
If $G$ is finitely presented, then a list of generating sets of all these subgroups can be obtained effectively.
In \cite[Corollary~32]{gray_green1} it was shown that the first of these two facts generalises to semigroups: a finitely generated semigroup has only finitely many subsemigroups of any given finite Green index $n$. We now show that the second statement does not generalise to semigroups and Green index.
\begin{theorem}
There does not exist an algorithm which would take as its input a finite semigroup presentation (defining a semigroup $S$)
and a natural number $n$, and which would return as the output a list of generators of all subsemigroups of $S$ of Green index $n$.
\end{theorem}
\begin{proof}
Let $S_0$ denote $S$ with a zero element $0$ adjoined. The Green index of the subsemigroup $\{ 0 \}$ in $S_0$ is equal to $| S_0 \setminus \{0 \} | = |S|$, since every element of $S$ forms a singleton relative $\mathscr{H}$-class.
This observation, along with the argument of \cite[Theorem~5.5]{Ruskuc&Thomas}, suffices to prove the theorem.
\end{proof}
\section{Growth}
\label{secgrowth}
A (discrete) \emph{growth function} is a monotone non-decreasing function from $\mathbb{N}$ to $\mathbb{N}$. For growth functions $\alpha_1, \alpha_2$ we write $\alpha_1 \preccurlyeq \alpha_2$ if there exist natural numbers $k_1, k_2 \geq 1$ such that
$\alpha_1(t) \leq k_1 \alpha_2(k_2 t)$
for all $t \in \mathbb{N}$. We define an equivalence relation on growth functions
by $\alpha_1 \sim \alpha_2$ if and only if $\alpha_1 \preccurlyeq \alpha_2$ and $\alpha_2 \preccurlyeq \alpha_1$.
The $\sim$-class $[\alpha]$ of a growth function $\alpha$ is called the
\emph{growth type} or just \textit{growth} of the function $\alpha$.
Let $S$ be a semigroup and let $X$ be a subset of $S$. Note that we do not insist here that $X$ generates $S$. Then for $s \in S^1$ and $n \in \mathbb{N}$ we define:
\[
\overrightarrow{\mathcal{B}}_X(s,n) = \{ s x_1 \ldots x_r \in S : x_i \in X^1, r \leq n \}
\]
and call this the \emph{out-ball of radius $n$ around $s$ with respect to $X$}.
For a
semigroup $S$ generated by a finite set $A$ the function
\[
g_S: \mathbb{N} \rightarrow \mathbb{N},
\quad
g_S(m) = |\overrightarrow{\mathcal{B}}_A(1,m)|
\]
is called the \emph{growth function} of the semigroup $S$.
It is well-known (and easily proved) that the growth type of a semigroup is independent of the choice of finite generating set. Also note that if $T$ is a finitely generated subsemigroup of a finitely generated semigroup $S$ then $g_T \preccurlyeq g_S$ (since we may take a finite generating set for $S$ that contains a finite generating set for $T$). In general the converse is not true, but it is in the case that $S$ is a group and $T$ is a subgroup of finite index (this follows from the more general fact that growth type is a quasi-isometry invariant; see \cite[p115, Section 50]{delaHarpe2000}).
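To illustrate these notions, the following Python listing (ours, purely for illustration) computes out-balls and the growth function by breadth-first search; the truncated-addition semigroup is an arbitrary choice made only so that the computation terminates quickly.
\begin{verbatim}
# A toy illustration (not from the paper): out-balls and the growth
# function for a concretely given finite semigroup, namely {1,...,10}
# under truncated addition.
def product(x, y):
    return min(x + y, 10)            # associative on positive integers

def out_ball(s, X, n):
    """B_X(s, n): elements s*x_1*...*x_r with x_i in X, r <= n
    (the case r = 0 contributes s itself, as x_i may equal 1 in X^1)."""
    ball, frontier = {s}, {s}
    for _ in range(n):
        frontier = {product(t, x) for t in frontier for x in X} - ball
        ball |= frontier
    return ball

def growth(X, n):
    """g_S(n): number of elements expressible as a product of at most n
    generators, i.e. the out-ball of radius n around the adjoined 1."""
    ball, frontier = set(), set(X)   # products of exactly one generator
    for _ in range(n):
        ball |= frontier
        frontier = {product(t, x) for t in frontier for x in X} - ball
    return len(ball)

A = {1, 2}
print(sorted(out_ball(3, A, 2)))            # [3, 4, 5, 6, 7]
print([growth(A, n) for n in range(1, 7)])  # [2, 4, 6, 8, 10, 10]
\end{verbatim}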
Here we shall show that this fact is more generally true for subsemigroups of finite Green index. In fact, the result goes through under far weaker hypotheses as we now see.
The following result is very straightforward to prove and it is quite likely that it is already known. We include it here for completeness.
\begin{proposition}
Let $S$ be a semigroup and let $T$ be a subsemigroup of $S$.
Suppose that $T$ is finitely generated and that there exists a finite subset $R$ of $S^1$ with $1 \in R$ and $S^1 = RT^1$. Then $S$ and $T$ are both finitely generated and have the same type of growth.
\end{proposition}
\begin{proof}
Let $B \subseteq T$ be a finite generating set for $T$ and define $A = B \cup R$ which is clearly a finite generating set for $S$.
For $t \in T$ let $l_B(t)$ be the shortest length of a word in $B^+$ representing $t$ (i.e. the length of the element $t$ with respect to $B$).
Now $g_T \preccurlyeq g_S$ since $T \leq S$, so we just have to prove $g_S \preccurlyeq g_T$.
As in Lemma~\ref{lem_sigmaandtau}, for all $a_1, a_2 \in A$ there exist $r = r(a_1, a_2) \in R$ and $\mu(a_1, a_2) \in T^1$ satisfying:
\begin{equation}
a_1 a_2 = r(a_1, a_2) \mu(a_1, a_2). \label{mix}
\end{equation}
We claim that with $k_1 = |R|$ and $k_2 = \max\{ l_B(\mu(a_1,a_2)) : a_1, a_2 \in A \}$ we have
\[
g_S(n) \leq k_1 g_T(k_2n)
\]
for all $n \in \mathbb{N}$. Indeed, applying \eqref{mix}, given any word $a_1 \ldots a_k \in A^+$ there exist $r \in R$ and $\mu_i \in \{ \mu(a_1,a_2) : a_1, a_2 \in A \}$ ($i \in \{1,\ldots,k\}$) with:
\[
a_1 \ldots a_k = r \mu_1 \ldots \mu_k.
\]
(This is proved in much the same way as the first part of Lemma~\ref{lemma_bookkeeping}.)
For all $i = 1, \ldots, k$ we have $\mu_i \in B^+$ and $l_B(\mu_i) \leq k_2$.
It follows that for all $n \in \mathbb{N}$:
\begin{equation}
\overrightarrow{\mathcal{B}}_A (1,n)
\subseteq
\bigcup_{r \in R}
\overrightarrow{\mathcal{B}}_B (r,k_2 n).
\label{balls_eq}
\end{equation}
But for all $s \in S$ and $m \in \mathbb{N}$ clearly we have:
\[
| \overrightarrow{\mathcal{B}}_B (s,m) | \leq | \overrightarrow{\mathcal{B}}_B (1,m ) |.
\]
Therefore by \eqref{balls_eq}:
\[
g_S(n) =
|\overrightarrow{\mathcal{B}}_A(1,n)| \leq
|R||\overrightarrow{\mathcal{B}}_B(1,k_2n)| =
k_1 g_T (k_2 n),
\]
for all $n \in \mathbb{N}$.
\end{proof}
\begin{corollary}
Let $S$ be a semigroup and let $T$ be a subsemigroup of finite Green index.
Then $S$ is finitely generated if and only if $T$ is finitely generated, in which case $S$ and $T$ have the same type of growth.
\end{corollary}
\section{Automaticity}
\label{secautomatic}
In this section we apply our results concerning generators and rewriting
to investigate how the property of being automatic behaves with respect to finite Green index subsemigroups.
In what follows we will give a very rapid summary of the basic definitions; for a better paced introduction we
refer the reader to
\cite{campbell_autsg}, \cite{hoffmann_relatives}, or
\cite{c_phdthesis}.
Following \cite{epstein_wordproc}, and unlike the previous sections, throughout this section
we will make a strict distinction between a word over an alphabet and the element of the semigroup this word represents. Let $A$ be an alphabet representing a generating set for a
semigroup $S$. If $w$ is a word in $A^+$, it represents an element
$\elt{w}$ in $S$. If $K \subseteq A^+$, then $\elt{K}$ denotes the set
of elements of $S$ represented by at least one word in $K$.
Now suppose $A$ and $B$ are two alphabets, and let $\$$ be a symbol belonging to neither.
Let
$C = \{(a,b) : a \in A \cup \{\$\},\ b \in B \cup \{\$\}\} \setminus \{(\$,\$)\}$ be a new
alphabet. Define the mapping $\delta : A^+ \times B^+ \to C^+$
by
\[
(u_1\cdots u_m,v_1\cdots v_n) \mapsto
\begin{cases}
(u_1,v_1)\cdots(u_m,v_m) & \text{if }m=n,\\
(u_1,v_1)\cdots(u_n,v_n)(u_{n+1},\$)\cdots(u_m,\$) & \text{if }m>n,\\
(u_1,v_1)\cdots(u_m,v_m)(\$,v_{m+1})\cdots(\$,v_n) & \text{if }m<n,
\end{cases}
\]
where $u_i\in A$, $v_i \in B$.
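As a concrete illustration (the helper below is ours, not taken from the cited sources), the padding map simply aligns the two words letter by letter and fills the shorter one with the padding symbol:
\begin{verbatim}
# A minimal sketch of the padding map delta: the shorter word is padded
# with '$' so that both components have equal length, and the padded
# pair is read as a word over the product alphabet C.
PAD = '$'

def delta(u, v):
    """Return delta(u, v) as a list of letter pairs over C."""
    m = max(len(u), len(v))
    return list(zip(u.ljust(m, PAD), v.ljust(m, PAD)))

print(delta('abc', 'xy'))   # [('a','x'), ('b','y'), ('c','$')]
\end{verbatim}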
Suppose now that $L$ is a regular language over $A$ such that
$\elt{L} = S$. For any $w \in A^*$, define the relation
\[
L_w = \{(u,v) : u,v \in L, \elt{uw} = \elt{v}\}.
\]
The pair $(A,L)$ forms an \defterm{automatic structure} for $S$ if the
language $L_a\delta$ is regular for each $a \in A \cup
\{\varepsilon\}$. An \defterm{automatic semigroup} is a semigroup that
admits an automatic structure.
Our main result for this section is:
\begin{theorem}
\label{thm:fgiautdown}
Let $S$ be a semigroup and let $T$ be a subsemigroup of $S$ of finite Green index. If $S$ is automatic, then $T$ is automatic.
\end{theorem}
\begin{proof}
Suppose that $S$ admits an automatic structure $(A,L)$.
All the notation fixed in Section \ref{sec_rewriting} will remain in force throughout this proof.
The goal is
to construct an automatic structure for $T$.
The proof is based on the rewriting
technique given in Lemma~\ref{lemma_bookkeeping} and Theorem~\ref{thm_finitegeneration} above.
In Theorem~\ref{thm_finitegeneration} we proved that the set
\begin{equation}
\label{eqaut1}
\{
\tau(i, \sigma(\elt{a},j)) : i,j \in I^1, \; a \in A
\}
\end{equation}
generates $T$.
More precisely, we proved that an element $\elt{a_1}\, \elt{a_2}\ldots \elt{a_n}\in T$, where $a_i\in A$,
can be re-written as
$$
\elt{a_1}\, \elt{a_2}\ldots \elt{a_n} =
\tau(j_1, \sigma(\elt{a_1}, i_1)) \tau(j_2, \sigma(\elt{a_2}, i_2)) \ldots \tau(j_n, \sigma(\elt{a_n}, i_n)),
$$
where the indices $i_k$, $j_k$ are computed by the following recursion:
\begin{enumerate}[(i)]
\item $i_n = 1$,
\item $i_{k-1} = \rho(\elt{a_k},i_k)$ for $k =n,n-1,\ldots,2$,
\item $j_1 = \rho(\elt{a_1},i_1)$,
\item $j_{l+1} = \lambda(j_l,\sigma(\elt{a_l},i_l))$ for $l=1,2,\ldots,n-1$,
\item $\lambda(j_n,\sigma(\elt{a_n},i_n)) = 1$.
\end{enumerate}
Let us introduce a new alphabet representing the elements of (\ref{eqaut1}):
$$
B=\{b_{j,a,i}\::\: i,j\in I^1,\ a\in A\},\ \elt{b_{j,a,i}}=\tau(j, \sigma(\elt{a},i)).
$$
Let $R \subseteq A^+ \times B^+$ be the relation consisting of pairs of strings
\begin{equation}
\label{eq:transrel}
(a_1a_2\cdots a_n,b_{j_1,a_1,i_1}b_{j_2,a_2,i_2}\cdots b_{j_n,a_n,i_n})
\end{equation}
such that the properties (i)--(v) above are satisfied.
Notice in particular the correspondence between the letters $a_k$
and the middle subscripts of the letters $b_{j_k,a_k,i_k}$ in
\eqref{eq:transrel}. It is clear that the set of \emph{all} pairs of strings
\eqref{eq:transrel} -- or rather the image of this under $\delta$ --
is regular. An automaton recognizing this set can easily be adapted to
check the properties (i)--(v): conditions~(i), (iii), and~(v) are all single
`local' checks, and conditions~(ii) and~(iv) require only that the automaton
store the subscripts from the previously read letter of $B$. Thus the
language $R\delta$ is regular.
We now have:
\begin{enumerate}[(i)]
\setcounter{enumi}{5}
\item If $u \in A^+$ represents an element of $T$, then there is a unique string $v \in B^+$ with $(u,v) \in R$ and $\elt{u} = \elt{v}$.
\item If $v = b_{j_1,a_1,i_1}b_{j_2,a_2,i_2}\cdots b_{j_n,a_n,i_n} \in B^+$ satisfies conditions (i)--(v), then there is a unique string $u \in A^+$ with $(u,v) \in R$.
\item If $(u,v) \in R$ then $\elt{u} = \elt{v}$ and so $\elt{u} \in T$.
\end{enumerate}
Let $M = \{v \in B^+ : (\exists u \in L)((u,v) \in R)\}$. The aim is
now to show that $(B,M)$ is an automatic structure for $T$. Clearly, the language $M$ maps onto $T$.
Let $b \in B$. Let $w \in A^+$ be such that $\elt{w} = \elt{b}$. The
language $L_w\delta$ is regular by
\cite[Proposition~3.2]{campbell_autsg}.
The language $(R^{-1} \circ L_w \circ R)\delta$ is thus also regular
and
\begin{align*}
&\;\phantom{\iff}\;(u,v) \in R^{-1} \circ L_w \circ R\\
&\iff u,v \in M \land (\exists p,q \in L)((u,p) \in R^{-1} \land (p,q) \in L_w \land (q,v) \in R)\\
&\iff u,v \in M \land (\exists p,q \in L)(\elt{p} = \elt{u} \land \elt{q} = \elt{v} \land \elt{pw} = \elt{q})\\
&\iff u,v \in M \land (\exists p,q \in L)(\elt{p} = \elt{u} \land \elt{q} = \elt{v} \land \elt{u}\,\elt{w} = \elt{v})\\
&\iff u,v \in M \land \elt{u}\,\elt{w} = \elt{v}\\
&\iff u,v \in M \land \elt{u}\,\elt{b} = \elt{v}\\
&\iff (u,v) \in M_b.
\end{align*}
Thus $M_b = R^{-1} \circ L_w \circ R$. So $M_b\delta$ is regular
and so $(B,M)$ is an
automatic structure
for $T$.
\end{proof}
This theorem provides a common generalisation of the corresponding group theoretic result
\cite[Theorem~4.1.4]{epstein_wordproc}
(without relying on the geometric `fellow traveller' property) and
\cite[Theorem~1.1]{hoffmann_autofinrees} for Rees index.
A variation of the notion of automatic semigroup is that of an asynchronously automatic semigroup.
Here we require that each relation $L_a$ (for $a\in A\cup\{\varepsilon\}$) is recognised by an asynchronous two-tape automaton;
see \cite[Definition 3.3]{hoffmann_relatives} for details.
The proof of Theorem \ref{thm:fgiautdown} carries over verbatim to the asynchronous case; the reference to
\cite[Proposition~3.2]{campbell_autsg} should be replaced by \cite[Proposition~2.1(3)]{hoffmann_relatives}.
Thus we have:
\begin{theorem}
\label{thm:fgiautdown1}
Let $S$ be a semigroup and let $T$ be a subsemigroup of $S$ of finite Green index. If $S$ is asynchronously automatic, then $T$ is asynchronously automatic.
\end{theorem}
The converses of Theorems~\ref{thm:fgiautdown} and \ref{thm:fgiautdown1} do not hold.
We demonstrate this by using the following example, which was introduced
in \cite[Example~5.1]{campbell_autcompsimple} for a different
purpose, viz., to show that a Clifford semigroup whose maximal
subgroups are all automatic need not itself be automatic:
\begin{example}
Let $F$ be the free group on two generators $a,b$, and let $G$ be the free product
of two cyclic groups of order 2, i.e. $G=\langle c,d \:|\: c^2=d^2=1\rangle$.
Let $\phi:F\rightarrow G$ be the epimorphism defined by $a\mapsto c$, $b\mapsto d$.
Form the strong semilattice $S=\mathcal{S}(F,G,\phi)$.
Now, $F$, being a finitely
generated free group, is automatic.
Furthermore, $F$ has finite Green index in $S$, with $G$ a unique $\mathscr{H}$-class in $S\setminus F$. The Sch\"{u}tzenberger group of this $\mathscr{H}$-class is $G$, and so is automatic.
But in \cite[Example~5.1]{campbell_autcompsimple} it is proved that $S$ is
not automatic.
We will actually go further and show that $S$ is not even
asynchronously automatic.
Suppose for \textit{reductio ad absurdum} that $(A,L)$ is an
asynchronous automatic structure for $S$. Let $A_F = \{a \in A :
\elt{a} \in F\}$. Let $L_F = L \cap (A_F)^+$ and $L_G = L \setminus
L_F$.
Then $\elt{L_G} = G$. Choose a representative $w \in L_G$ of
the identity $1_G$ of $G$. Construct the rational relation
$L_w$. Let $K = \{u : (u,w) \in L_w\}$; then $K$ is regular and
represents all those elements $s$ of $S$ with $s1_G = 1_G$. Therefore,
by the definition of $S$ we have
$\elt{K} = \{1_G\} \cup ( 1_G\phi^{-1})$.
Let $J = \{u : (u,w) \in L_\varepsilon\}$. Then $J$ is regular and
consists of all elements of $L$ that represent $1_G$. Thus $K \setminus J$ is
regular and represents the kernel (in the group-theoretical sense) of
$\phi$. Thus this kernel, $\elt{K \setminus J}$, is a rational subset of
the free group $F$. But then it is a non-trivial normal rational
subgroup of infinite index in $F$, which is a contradiction by
\cite[Corollary~4]{frougny89} and \cite[Theorem~1]{karass57}.
\end{example}
\begin{remark}
There are other possible definitions of automaticity depending on which side generators act, and on which side the padding symbols are placed;
see \cite{hoffmann03}.
Straightforward modification of the above argument yields the corresponding results for each of them.
\end{remark}
\section*{Acknowledgements}
We thank Simon Craik and Victor Maltcev for useful discussions during the preparation of this paper.
\bibliographystyle{abbrv}
\section{Introduction}
\label{sec:intro}
The dimensionality of an optimization problem significantly impacts any optimization procedure used to solve it, including evolutionary algorithms (EAs)~\cite{Chen2015, Omidvar2011}. However, it is still an open question whether the difficulties of EAs with high-dimensional problems are mainly due to the larger number of variables or to the problem becoming harder to solve. Often, these two factors are correlated: a higher number of variables increases the search space size, which in turn makes the problem more complex -- for example, by increasing the number of local minima, as in the case of rugged fitness landscapes~\cite{grundel07}. Therefore, from existing research, it is difficult to deduce how increasing the dimensionality of the search space influences the algorithms' performance, since changes in performance could also be due to an increase in difficulty.
\thispagestyle{fancy}
This paper aims to observe how the performance of EAs correlates with different genotype dimensions while keeping the same problem size. This means that the algorithms operate on a different number of variables than required by the optimization problem. These algorithm-side variables are then transformed in a specific way to obtain the correct number of problem-specific variables before the evaluation procedure. In this way, the complexity of the original problem is preserved (i.e., the problem does not become easier or more difficult to solve), whereas the search space size is artificially enlarged or reduced. Doing so makes it possible to obtain a more objective estimate of the influence of
problem dimensionality on the performance of optimization methods.
Approaches that artificially increase or decrease the search space have not received much attention in the literature yet. Salcedo-Sanz et al.~\cite{salcedo07} proposed a genotype compression-expansion strategy to improve the convergence speed of a Genetic Algorithm (GA) on the Inductive Query By Example (IQBE) optimization problem related to information retrieval systems. The GA genotype encodes the Boolean terms in a query, and the compression step groups the terms belonging to the same topic into fixed-size subsets, representing each subset with a single bit. The compressed individual is then expanded back for fitness evaluation.
Koutnik et al.~\cite{Koutnik2010} considered the problem of reducing the search space in the context of neuroevolution. In that work, the weight matrix of a neural network is represented in the frequency domain utilizing a Fourier transform. Evolutionary algorithms are then used to search in this compressed representation, where high-frequency coefficients are removed. Steenkiste et al.~\cite{steenkiste16} adopted a similar approach but used a wavelet transform instead. In both cases, the underlying assumption for this compression strategy is that successful networks have spatially correlated weights. Moreno et al.~\cite{moreno} proposed methods based on artificial neural network autoencoders for the automatic generation of genotype-phenotype mappings to improve evolvability. One of the proposed methods uses the encoder part of a bottlenecked autoencoder to reduce the genotype size and the decoder segment for the genotype-phenotype mapping. Stanley et al.~\cite{Stanley2003ATF} developed a taxonomy for Artificial Embryogeny, a subdiscipline of evolutionary computation, where some genes from the genotype are used multiple times while mapping from genotype to phenotype. This allows for a simpler representation of a complex phenotype, which therefore reduces the search space. Similarly, Bongard et al.~\cite{Bongard} used a developmental encoding scheme based on Artificial Ontogeny to map the genotype into a phenotype that represents a complete agent.
As the above examples show, the few works focusing on genotype compression and expansion mainly consider specific optimization problems, and the proposed strategies leverage domain-specific knowledge. As far as we know, this paper represents the first study on the influence of genotype compression/expansion for evolutionary algorithms where the compression and expansion strategies are independent of the underlying optimization problem. Our experimental setup considers continuous optimization problems and several evolutionary algorithms. More precisely, we consider three problems: the COCO benchmark functions, Physical Unclonable Functions (PUF) modeling, and neural network weights optimization. For the compression procedure, we investigate the scenario where two variables are combined into a single variable (reducing the dimensionality of the problem by half). The compression is conducted via two strategies: interleaving and concatenation. For the expansion procedure, we experiment with several expansion factors and two procedures to recover the original variables: summation and multiplication.
Our results show that the compression procedure consistently gives poor results, whereas expansion can significantly improve the performance of the algorithms (as measured by the quality of the final solution).
\section{Genotype Compression and Expansion}
\label{sec:setup}
\subsection{Genotype Compression}
The compression strategies presume that a number of original variables are combined into a single \textit{compressed variable}. Let $t$ denote the number of variables of the optimization problem (the \textit{original variables}), and let $m$ represent the number of original variables combined into a single one; the total number of variables after the compression is then equal to $\lceil \frac{t}{m} \rceil$.
Regardless of the domain of the original, uncompressed variables, we keep every value of the compressed variable in the interval $[0,1]$.
In the floating-point representation, the number of decimal places is denoted by $p$ and given by the precision of the number format used to store those values.
In our experiments, we used a standard double-precision format with 16 significant digits.
Since the values of the compressed variables are in $[0, 1]$ range, the decimal part of the compressed value is used to decode $m$ uncompressed numbers.
As a result, each original variable will use $d = \left \lceil \frac{p}{m} \right \rceil$ decimal digits of the compressed number.
A compressed variable is decompressed by one of the following two strategies: \textit{sequential} and \textit{alternating}.
In the sequential (concatenated) strategy, the decoding is performed so that the first $d$ digits represent the first original value, the following $d$ digits the next value, etc.
In the alternating (interleaved) scheme, the digits of the compressed variable are distributed so that every $m$-th digit is used for recreating one original variable.
After the decomposition, in both schemes, the resulting values are additionally mapped to the desired domain interval with a simple linear transformation, i.e., the value $0$ represents the lower, while the value $1$ the upper bound.
Regardless of the decoding scheme, the described compression approach inevitably incurs a loss of precision in the original variables; with $m=2$, for instance, each original variable retains only half of the significant digits representing its value.
Depending on the actual problem domain, the loss of precision may affect the algorithm performance; unfortunately, this is not immediately evident in every case.
In all experiments in this paper, we limit the division factor to $m=2$, so the resulting precision is $p/2$ significant digits in the original domain.
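To make the two schemes concrete, the following Python sketch (ours; the helper name, the handling of the boundary value $1$, and the $[-5,5]$ target domain are illustrative assumptions) decodes one compressed value under both strategies.
\begin{verbatim}
# A minimal sketch of the two decompression schemes, assuming m = 2
# original variables per compressed value and p = 16 decimal digits.
P, M = 16, 2
D = P // M                 # digits available per original variable

def decompress(value, scheme, lo=-5.0, hi=5.0):
    """Decode one compressed value in [0, 1) into M variables in
    [lo, hi]; the boundary value 1.0 is ignored for simplicity."""
    digits = f"{value:.{P}f}".split('.')[1]    # the P decimal digits
    if scheme == 'sequential':                 # concatenated blocks of D digits
        parts = [digits[i * D:(i + 1) * D] for i in range(M)]
    else:                                      # 'alternating': every M-th digit
        parts = [digits[i::M] for i in range(M)]
    return [lo + (hi - lo) * int(part) / 10**D for part in parts]

x = 0.1234567890123456
print(decompress(x, 'sequential'))    # digit blocks 12345678 and 90123456
print(decompress(x, 'alternating'))   # digit blocks 13579135 and 24680246
\end{verbatim}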
\subsection{Genotype Expansion}
Recall that $t$ denotes the number of variables of a particular continuous optimization problem, referred to as the \textit{original variables}.
In the genotype expansion, each original variable is represented with several \textit{expanded variables}; let $m$ represent the number of expanded variables for a single original variable.
The optimization algorithm operates on individuals comprised of expanded variables; only when evaluating a potential solution, the corresponding original values are recreated, and the fitness is calculated and assigned to the individual.
To recover the original variables, we consider the following two strategies: \textit{summation} and \textit{multiplication}.
In the summation scheme, the original variable $x_i$ can be split into the sum of $m$ variables $x_i=x_{i1}+x_{i2}+\ldots+x_{im}$. In the multiplication scheme, the original variable $x_i$ is represented as a product of $m$ variables $x_i=x_{i1}\cdot x_{i2}\cdot \ldots \cdot x_{im}$.
This means that the expanded representation can be decoded into the original variable by consecutively summing or multiplying $m$ expanded variables. In both cases, $m$ can be an arbitrary integer number larger than 1. The number of variables in the expanded scheme is then $t\cdot m$.
The expansion strategies were selected with both simplicity and efficiency in mind to demonstrate the applicability of such an approach, although more complex transformations can be defined. However, one potential pitfall of the multiplication strategy has to be outlined. When the allowed domain is contained in the interval between 0 and 1, the product approaches 0 as the number of factors increases. This can be resolved either by enlarging the domain so that factors larger than 1 are possible, or by decreasing the number of multiplications.
The expanded variables use the same domain (defined with lower and upper bound) as the original variables.
The decoded original value can be clipped to the bound value if it exceeds a specific interval to facilitate constrained optimization with explicit bounds. Unlike in~\cite{Koutnik2010}, where certain elements are discarded from the search space, both proposed strategies use the entire genotype during the transformation to the phenotype. In that way, all elements are directly considered during optimization.
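A minimal Python sketch of the decoding step follows (ours; the function name and the $[-5,5]$ domain with clipping are illustrative assumptions).
\begin{verbatim}
# A minimal sketch of decoding an expanded genotype back into the
# original variables, for expansion factor m and either scheme.
from functools import reduce
import operator

def decode(expanded, m, scheme='summation', lo=-5.0, hi=5.0):
    """Collapse each consecutive group of m expanded variables into one
    original variable by summation or multiplication, clipping to
    [lo, hi] as described above."""
    op = operator.add if scheme == 'summation' else operator.mul
    groups = [expanded[i:i + m] for i in range(0, len(expanded), m)]
    return [min(max(reduce(op, g), lo), hi) for g in groups]

genotype = [1.5, -0.25, 2.0, 3.0, -1.0, 4.0]    # t = 3 originals, m = 2
print(decode(genotype, 2, 'summation'))         # [1.25, 5.0, 3.0]
print(decode(genotype, 2, 'multiplication'))    # [-0.375, 5.0, -4.0]
\end{verbatim}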
\section{Experimental Results and Discussion}
\label{sec:results}
\subsection{Experimental Setup}
The genotype expansion or compression is used independently of the optimization algorithm; however, the efficiency of the approach may still be influenced by the chosen algorithm.
We have performed the experiments using several well-known evolutionary algorithms, each using the original variables (denoted as ``default''), the expanded, and the compressed variables.
We investigate the genetic algorithm (GA), evolutionary strategy (ES), differential evolution (DE), and clonal selection algorithm (CLONALG). However, as GA and DE achieved the best results and the tested approaches exhibited a similar behavior on the other algorithms as well, we report the detailed results only for GA and DE.
The GA uses a steady-state selection scheme where, in each iteration, three individuals are selected at random. The worst one is eliminated and replaced with the crossover offspring from the remaining two.
The resulting new individual is additionally mutated with a mutation rate of 0.3.
The differential evolution uses a scaling constant (differential weight) $F = 1$ and the crossover rate $CR = 0.9$.
The population size in all experiments is set to 100.
The parameters we use are selected based on a short preliminary tuning.
All the algorithms operate on the same genotype, where individuals are represented as vectors of floating-point values.
In GA, a single mutation operator is used, which alters a randomly chosen gene (a single floating-point value) to a new random value from the domain with a uniform distribution.
For the crossover in GA, we use multiple operators where a random one is selected every time crossover is performed.
The crossover operator is selected from the following: discrete crossover~\cite{Eiben03}, simple and whole arithmetic crossover~\cite{Eiben03}, local crossover~\cite{Dumitrescu00}, SBX crossover~\cite{boyer,agrawal}, BLX-alpha crossover~\cite{Eshelman92}, flat crossover~\cite{radcliffe}, BGA crossover~\cite{muhlenbein}, heuristic crossover~\cite{wright}, and average crossover~\cite{nomura}.
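For concreteness, a minimal sketch of this steady-state step is given below (our illustration, not the implementation used in the experiments; applying the mutation rate per offspring, the single-gene uniform mutation, and the use of flat crossover alone are simplifying assumptions).
\begin{verbatim}
# A sketch of one steady-state GA step: pick three individuals at
# random, replace the worst with a mutated crossover offspring of the
# remaining two (minimisation is assumed).
import random

def steady_state_step(pop, fit, crossover_ops, domain=(-5.0, 5.0), p_mut=0.3):
    i, j, k = random.sample(range(len(pop)), 3)
    p1, p2, worst = sorted((i, j, k), key=lambda t: fit(pop[t]))
    child = random.choice(crossover_ops)(pop[p1], pop[p2])
    if random.random() < p_mut:            # uniform reset of one random gene
        child[random.randrange(len(child))] = random.uniform(*domain)
    pop[worst] = child

# Toy usage: minimise the sphere function using flat crossover only.
sphere = lambda x: sum(v * v for v in x)
flat = lambda a, b: [random.uniform(min(x, y), max(x, y)) for x, y in zip(a, b)]
pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(100)]
for _ in range(20000):
    steady_state_step(pop, sphere, [flat])
print(min(sphere(x) for x in pop))         # approaches 0
\end{verbatim}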
We use the Mann-Whitney non-parametric test to check whether the proposed approaches' results are significantly better than those obtained with the standard (default) number of variables (pair-wise comparison). First, we test the hypothesis that the proposed approach is significantly better than the standard one. If this hypothesis is rejected, we perform an additional test to check whether the proposed approach is significantly worse than the default one. The results are considered significant if the obtained $p$-values are less than 0.05. Each table will include the results of the statistical tests between the proposed approach and the default approach. The test results are listed after each value in the tables and denoted as +, - or =, indicating that the result is significantly better, significantly worse, or not significantly different, respectively.
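The two-step testing procedure can be sketched as follows (our illustration, assuming SciPy's implementation of the Mann-Whitney U test and minimisation, so that `better' means stochastically smaller fitness values).
\begin{verbatim}
# A sketch of the two-step significance test used in the tables.
from scipy.stats import mannwhitneyu

def compare(proposed, default, alpha=0.05):
    if mannwhitneyu(proposed, default, alternative='less').pvalue < alpha:
        return '+'      # proposed significantly better (smaller fitness)
    if mannwhitneyu(proposed, default, alternative='greater').pvalue < alpha:
        return '-'      # proposed significantly worse
    return '='          # no significant difference

print(compare([0.10, 0.20, 0.15, 0.12, 0.18],
              [0.40, 0.50, 0.45, 0.42, 0.48]))   # '+'
\end{verbatim}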
\subsection{Benchmark Problems}
The COCO platform is used to analyze the performance of the proposed expansion/compression approaches~\cite{hansen2016}. From this platform, the set of 24 noiseless functions of arbitrary variable sizes is selected for the experiments. Based on their properties, the functions can be grouped into five categories:
\begin{compactitem}
\item Separable functions - variables of the function are mutually independent, meaning that the optimum can be obtained by performing optimization in each dimension separately (functions $f1$ - $f5$).
\item Functions with low or moderate conditioning - non-separable unimodal functions (which contain a single minimum), where a small change in the input variables does not lead to a large change in the function value (functions $f6$ - $f9$).
\item Unimodal functions with high conditioning - non-separable unimodal functions, where a small change in the input variables leads to a large change in the function value, making it more difficult to obtain the correct solution (functions $f10$ - $f14$).
\item Multi-modal functions with adequate global structure - non-separable functions with multiple minima that are uniformly distributed over the search space (functions $f15$ - $f19$).
\item Multi-modal functions with weak global structure - non-separable functions with multiple minima with a non-uniform distribution over the search space (functions $f20$ - $f24$).
\end{compactitem}
The benchmark problems were optimized with all four algorithms, but the results are only shown for GA and DE.
Each algorithm used the maximum number of function evaluations as the stopping criterion, which is set to the value of $D \cdot 100\,000$, where $D$ represents the number of dimensions for the considered problems. The approaches are benchmarked on 2, 5, 10, and 20-dimensional functions. The search space is defined as the interval $[-5,5]$ in every dimension. For each tested function, 50 instances are created following the standard procedure used by the COCO platform, by shifting or rotating the functions (for example, the separable functions are only shifted). Additionally, the COCO platform defines a run as successful if an objective value of $10^{-8}$ or less is reached, which is then denoted as a hit.
The experiments indicate that the GA obtained the best results among the tested methods, followed by DE. The GA results will be outlined for most of this paper, as the observations made there are also applicable to the other tested algorithms.
Table~\ref{tab:coco_2} shows the results obtained by GA for optimizing the set of test functions in two dimensions.
The columns in the table represent different encodings, with ``\textit{exp}'' corresponding to the expansion and ``\textit{com}'' to genotype compression, while ``\textit{def}'' denotes the default encoding in which the number of variables is unchanged.
We also report the expansion factor (2 or 3) and the decoding scheme; summation (\textit{s}) and multiplication (\textit{m}) for the expansion, sequential (\textit{seq}), and alternating (\textit{alt}) for the compression.
Each row reports the median fitness value obtained over the 50 executed instances, the number of hits for each function (given in brackets), and whether the approach in that column is significantly better than, equal to, or worse than the default genotype encoding. The experiments in which the proposed methods obtained significantly better results than the default encoding are emphasized with grey cells.
It can be observed that compressing the search space to fewer variables leads to poor results, even for the simplest functions. This is backed up by statistical tests, demonstrating that the compression-based approaches achieve significantly worse results than the default encoding with the original number of variables for each function.
For the expansion of variables, the results are more interesting. For the group of separable functions, the results show that all variants achieved the maximum number of hits. This shows that none of the expansion approaches has difficulties converging to the minimum on simple functions. On the second group of functions, we see that the expanded representations achieve a larger number of hits than the default method, except for function f7, where all methods achieved the maximum number of hits. Additionally, the expansion-based approaches obtained significantly better results in most cases for the other functions. As such, the proposed approaches perform better on non-separable functions that are not ill-conditioned.
In the next group of functions, we observe a difference between the effectiveness of the summation-based and multiplication-based expansion. While the summation-based approach performs similarly to the default one, the multiplication-based one encounters problems and has a smaller number of hits, and in some cases obtains significantly worse results. For unimodal functions (functions f1-f5), there is little difference between the default and summation-based approach, as both can obtain the optimum.
For the third group of functions, we observe no significant difference between the summation-based and default approaches. We postulate that this happens because those functions exhibit a strong sensitivity of the fitness value to small changes in individual gene values.
On the fourth group of functions, multi-modal with an adequate global structure, the expansion-based approaches achieve their best results compared to the default one. The summation-based approach obtained significantly better results for all five functions, whereas the multiplication-based approach is slightly less successful. On the final set of functions (weak global structure), the improvements over the default approach are less prominent but present in two cases.
To conclude, the proposed expansion approaches seem to perform better for multi-modal problems than the default approach, while for unimodal cases, the differences are smaller.
\begin{table*}[]
\small
\caption{Results obtained by GA for the COCO benchmark with 2 dimensional functions.}
\label{tab:coco_2}
\adjustbox{max width=\textwidth}{
\begin{tabular}{@{}clllllll@{}}
\toprule
\multicolumn{1}{l}{} & \multicolumn{1}{c}{\textbf{def}} & \multicolumn{1}{c}{\textbf{exp-s-2}} & \multicolumn{1}{c}{\textbf{exp-s-3}} & \multicolumn{1}{c}{\textbf{exp-m-2}} & \multicolumn{1}{c}{\textbf{exp-m-3}} & \multicolumn{1}{c}{\textbf{com-seq}} & \multicolumn{1}{c}{\textbf{com-alt}} \\ \midrule
\textbf{f1} & 0.00e+00 (50) & 0.00e+00 (50) = & 0.00e+00 (50) = & 0.00e+00 (50) + & 0.00e+00 (50) + & 5.51e-05 (0) - & 7.99e-08 (22) - \\
\textbf{f2} & 0.00e+00 (50) & 0.00e+00 (50) = & 0.00e+00 (50) - & 0.00e+00 (50) = & 0.00e+00 (50) = & 1.57e-02 (0) - & 1.87e-01 (0) - \\
\textbf{f3} & 0.00e+00 (50) & 0.00e+00 (50) = & 0.00e+00 (50) = & 0.00e+00 (50) + & 0.00e+00 (50) = & 4.88e-02 (0) - & 8.27e-04 (1) - \\
\textbf{f4} & 0.00e+00 (50) & 0.00e+00 (50) + & 0.00e+00 (50) + & 0.00e+00 (50) + & 0.00e+00 (50) + & 1.05e-01 (0) - & 2.29e-03 (1) - \\
\textbf{f5} & 0.00e+00 (50) & 0.00e+00 (50) = & 0.00e+00 (50) = & 0.00e+00 (50) = & 0.00e+00 (50) = & 6.93e-03 (23) - & 1.20e-06 (23) - \\ \midrule
\textbf{f6} & 3.37e-09 (31) & \cellcolor[HTML]{dfdfdf}2.38e-10 (43) + & \cellcolor[HTML]{dfdfdf}4.88e-12 (50) + & \cellcolor[HTML]{dfdfdf}3.03e-10 (45) + & \cellcolor[HTML]{dfdfdf}1.10e-10 (48) + & 2.79e-03 (0) - & 2.81e-04 (1) - \\
\textbf{f7} & 0.00e+00 (50) & 0.00e+00 (50) = & 0.00e+00 (50) = & 0.00e+00 (50) = & 0.00e+00 (50) = & 3.19e-08 (12) - & 2.24e-13 (48) - \\
\textbf{f8} & 2.73e-09 (31) & \cellcolor[HTML]{dfdfdf}1.98e-11 (44) + & \cellcolor[HTML]{dfdfdf}3.84e-12 (42) + & \cellcolor[HTML]{dfdfdf}6.14e-11 (40) + & 2.71e-09 (34) = & 5.81e-04 (0) - & 9.48e-05 (1) - \\
\textbf{f9} & 1.84e-10 (34) & 1.55e-11 (43) = & \cellcolor[HTML]{dfdfdf}1.46e-12 (46) + & \cellcolor[HTML]{dfdfdf}2.86e-11 (45) + & \cellcolor[HTML]{dfdfdf}2.43e-12 (46) + & 8.22e-04 (0) - & 9.44e-05 (3) - \\ \midrule
\textbf{f10} & 9.82e-06 (9) & 3.67e-06 (9) = & 2.83e-05 (6) = & 5.00e-04 (5) - & 5.11e-03 (2) - & 1.75e-01 (0) - & 8.49e-02 (0) - \\
\textbf{f11} & 2.69e-06 (10) & 9.29e-07 (14) = & 1.60e-06 (13) = & 4.81e-03 (10) - & 4.62e-04 (8) - & 1.18e-01 (0) - & 9.70e-02 (0) - \\
\textbf{f12} & 3.48e-05 (15) & 6.92e-06 (20) = & 3.07e-05 (19) = & 1.28e-04 (19) = & 1.52e-04 (17) = & 1.97e-01 (0) - & 8.26e-02 (0) - \\
\textbf{f13} & 6.11e-05 (0) & 3.74e-05 (2) = & 5.86e-05 (0) = & 2.98e-04 (2) - & 1.48e-04 (0) = & 7.19e-02 (0) - & 3.79e-02 (0) - \\
\textbf{f14} & 7.32e-07 (7) & 4.75e-07 (10) = & 9.27e-07 (7) = & 1.78e-06 (8) = & 1.80e-06 (3) = & 7.30e-04 (0) - & 8.21e-05 (0) - \\ \midrule
\textbf{f15} & 8.11e-11 (39) & \cellcolor[HTML]{dfdfdf}0.00e+00 (48) + & \cellcolor[HTML]{dfdfdf}0.00e+00 (50) + & \cellcolor[HTML]{dfdfdf}0.00e+00 (48) + & \cellcolor[HTML]{dfdfdf}0.00e+00 (47) + & 2.93e-02 (0) - & 3.15e-04 (4) - \\
\textbf{f16} & 1.32e-11 (32) & \cellcolor[HTML]{dfdfdf}0.00e+00 (36) + & \cellcolor[HTML]{dfdfdf}0.00e+00 (41) + & 0.00e+00 (36) = & \cellcolor[HTML]{dfdfdf}0.00e+00 (37) + & 4.43e-03 (0) - & 4.29e-04 (3) - \\
\textbf{f17} & 3.32e-05 (13) & \cellcolor[HTML]{dfdfdf}2.04e-10 (29) + & \cellcolor[HTML]{dfdfdf}2.18e-11 (31) + & \cellcolor[HTML]{dfdfdf}1.21e-12 (30) + & \cellcolor[HTML]{dfdfdf}1.60e-14 (32) + & 1.94e-02 (0) - & 3.64e-03 (0) - \\
\textbf{f18} & 9.90e-04 (0) & \cellcolor[HTML]{dfdfdf}9.85e-04 (2) + & \cellcolor[HTML]{dfdfdf}1.62e-04 (6) + & 9.87e-04 (1) = & 9.90e-04 (1) = & 9.78e-02 (0) - & 3.25e-02 (0) - \\
\textbf{f19} & 1.93e-12 (50) & \cellcolor[HTML]{dfdfdf}2.66e-14 (50) + & \cellcolor[HTML]{dfdfdf}5.95e-14 (50) + & \cellcolor[HTML]{dfdfdf}3.55e-15 (49) + & \cellcolor[HTML]{dfdfdf}7.11e-15 (50) + & 9.43e-06 (0) - & 7.56e-07 (7) - \\ \midrule
\textbf{f20} & 2.23e-12 (50) & \cellcolor[HTML]{dfdfdf}0.00e+00 (50) + & \cellcolor[HTML]{dfdfdf}0.00e+00 (50) + & \cellcolor[HTML]{dfdfdf}0.00e+00 (50) + & \cellcolor[HTML]{dfdfdf}0.00e+00 (50) + & 3.64e-03 (0) - & 9.84e-04 (3) - \\
\textbf{f21} & 0.00e+00 (50) & 0.00e+00 (50) = & 0.00e+00 (50) = & 0.00e+00 (50) = & 0.00e+00 (50) = & 2.34e-08 (18) - & 2.04e-12 (45) - \\
\textbf{f22} & 0.00e+00 (50) & 0.00e+00 (50) = & 0.00e+00 (50) = & 0.00e+00 (50) = & 0.00e+00 (50) = & 5.79e-08 (14) - & 5.75e-10 (35) - \\
\textbf{f23} & 4.86e-03 (6) & \cellcolor[HTML]{dfdfdf}7.70e-04 (17) + & \cellcolor[HTML]{dfdfdf}2.30e-04 (16) + & \cellcolor[HTML]{dfdfdf}1.52e-04 (15) + & \cellcolor[HTML]{dfdfdf}5.35e-06 (21) + & 1.90e-01 (0) - & 2.02e-01 (0) - \\
\textbf{f24} & 6.09e-02 (10) & 5.84e-02 (11) = & 5.92e-02 (14) = & 2.81e-02 (16) = & 2.37e-02 (11) = & 3.46e-01 (0) - & 1.57e-01 (0) - \\ \bottomrule
\end{tabular}}
\end{table*}
Unfortunately, as the number of dimensions increases, the difference between the results becomes less prominent; this can be observed from Table~\ref{tab:coco_20}, which shows the results obtained when optimizing 20-dimensional functions.
The compression-based approaches still achieve inferior results, whereas the expansion-based approaches, in most cases, obtain results that are not significantly different from the reference encoding. In this case, the expansion approach based on summation achieves equally good results as without expansion. However, this is not the case with the expansion based on multiplication, which achieves significantly worse results for around 40\% of tested functions, both when the number of variables is doubled and tripled. It can be concluded that the summation-based approach scales better with the dimensionality of the problem. Still, note that relatively poor results here do not necessarily mean that the expansion procedure does not work. The default approach results suggest that the considered optimization procedures are not powerful enough for high-dimensional problems (at least with the experimental setup as considered in this paper).
\begin{table*}[]
\small
\caption{Results obtained by GA for the COCO benchmark with 20 dimensional functions.}
\label{tab:coco_20}
\adjustbox{max width=\textwidth}{
\begin{tabular}{@{}clllllll@{}}
\toprule
\multicolumn{1}{l}{} & \multicolumn{1}{c}{\textbf{def}} & \multicolumn{1}{c}{\textbf{exp-s-2}} & \multicolumn{1}{c}{\textbf{exp-s-3}} & \multicolumn{1}{c}{\textbf{exp-m-2}} & \multicolumn{1}{c}{\textbf{exp-m-3}} & \multicolumn{1}{c}{\textbf{com-seq}} & \multicolumn{1}{c}{\textbf{com-alt}} \\ \midrule
\textbf{f1} & 3.98e-12 (50) & 6.85e-12 (50) = & 6.43e-12 (50) = & 6.27e-12 (50) = & 5.68e-12 (50) = & 9.28e-04 (0) - & 5.58e-05 (0) - \\
\textbf{f2} & 3.79e-09 (38) & 2.49e-09 (34) = & 4.72e-09 (37) = & 2.50e-09 (35) = & \cellcolor[HTML]{dfdfdf}1.24e-09 (42) + & 9.84e+01 (0) - & 3.42e+01 (0) - \\
\textbf{f3} & 5.94e-09 (34) & 6.28e-09 (33) = & 6.19e-09 (30) = & 1.23e-08 (25) - & 8.85e-09 (29) = & 9.56e-01 (0) - & 1.90e-01 (0) - \\
\textbf{f4} & 6.12e-08 (3) & 5.89e-08 (6) = & 5.83e-08 (6) = & 5.81e-08 (8) = & 5.92e-08 (5) = & 2.35e+00 (0) - & 1.55e+00 (0) - \\
\textbf{f5} & 9.95e-14 (50) & \cellcolor[HTML]{dfdfdf}0.00e+00 (50) + & \cellcolor[HTML]{dfdfdf}0.00e+00 (50) + & \cellcolor[HTML]{dfdfdf}0.00e+00 (50) + & \cellcolor[HTML]{dfdfdf}0.00e+00 (50) + & 5.67e-01 (0) - & 8.77e-03 (0) - \\ \midrule
\textbf{f6} & 2.50e-02 (0) & 2.50e-02 (0) = & 2.46e-02 (0) = & 3.97e-01 (0) - & 2.51e-01 (0) - & 1.57e+00 (0) - & 1.58e+00 (0) - \\
\textbf{f7} & 9.97e+00 (0) & 9.94e+00 (0) = & 9.96e+00 (0) = & 1.50e+01 (0) - & 1.58e+01 (0) - & 1.55e+01 (0) = & 1.25e+01 (0) = \\
\textbf{f8} & 6.31e+00 (0) & 9.98e+00 (0) = & 1.58e+01 (0) = & \cellcolor[HTML]{dfdfdf}1.75e+00 (0) + & 3.25e+00 (0) = & 2.35e+01 (0) - & 2.40e+01 (0) - \\
\textbf{f9} & 1.58e+01 (0) & 1.58e+01 (0) = & 1.58e+01 (0) = & \cellcolor[HTML]{dfdfdf}1.58e+01 (0) + & \cellcolor[HTML]{dfdfdf}1.58e+01 (0) + & 2.47e+01 (0) - & 2.48e+01 (0) - \\ \midrule
\textbf{f10} & 2.51e+03 (0) & 2.51e+03 (0) = & 2.51e+03 (0) = & 3.98e+03 (0) - & 3.98e+03 (0) - & 1.58e+04 (0) - & 1.58e+04 (0) - \\
\textbf{f11} & 3.98e+00 (0) & 3.97e+00 (0) = & 5.65e+00 (0) = & 1.58e+01 (0) - & 1.57e+01 (0) - & 1.51e+01 (0) - & 9.98e+00 (0) - \\
\textbf{f12} & 9.63e-01 (0) & 2.50e+00 (0) - & 1.58e+00 (0) = & 2.51e+00 (0) - & 2.48e+00 (0) - & 9.38e+02 (0) - & 9.78e+01 (0) - \\
\textbf{f13} & 2.51e+00 (0) & 3.97e+00 (0) = & 2.51e+00 (0) = & 2.50e+00 (0) = & 3.97e+00 (0) = & 1.52e+01 (0) - & 9.89e+00 (0) - \\
\textbf{f14} & 1.00e-03 (0) & 1.00e-03 (0) = & 1.00e-03 (0) = & 1.00e-03 (0) = & 1.57e-03 (0) = & 2.43e-02 (0) - & 1.57e-02 (0) - \\ \midrule
\textbf{f15} & 6.30e+01 (0) & 6.29e+01 (0) = & \cellcolor[HTML]{dfdfdf}6.20e+01 (0) + & 9.96e+01 (0) - & 1.57e+02 (0) - & 9.87e+01 (0) - & 6.31e+01 (0) - \\
\textbf{f16} & 7.54e+00 (0) & 7.98e+00 (0) = & 9.43e+00 (0) = & 6.29e+00 (0) = & 7.87e+00 (0) = & 9.41e+00 (0) = & 9.94e+00 (0) - \\
\textbf{f17} & 1.56e+00 (0) & 1.55e+00 (0) = & 1.58e+00 (0) = & 2.51e+00 (0) - & 3.96e+00 (0) - & 1.58e+00 (0) - & 1.58e+00 (0) - \\
\textbf{f18} & 3.98e+00 (0) & 3.98e+00 (0) = & 3.97e+00 (0) = & 9.93e+00 (0) - & 1.50e+01 (0) - & 6.13e+00 (0) = & 6.10e+00 (0) = \\
\textbf{f19} & 1.00e+00 (0) & 1.58e+00 (0) = & 1.58e+00 (0) - & \cellcolor[HTML]{dfdfdf}6.30e-01 (0) + & \cellcolor[HTML]{dfdfdf}2.51e-01 (0) + & 3.94e+00 (0) - & 3.89e+00 (0) - \\ \midrule
\textbf{f20} & 5.57e-01 (0) & 4.71e-01 (0) = & 3.97e-01 (0) = & 9.67e-01 (0) - & 9.82e-01 (0) - & 3.98e-01 (0) = & 6.21e-01 (0) - \\
\textbf{f21} & 2.49e+00 (15) & 2.45e+00 (16) = & 2.51e+00 (10) = & 2.50e+00 (15) = & 2.51e+00 (11) = & 2.39e+00 (0) = & 2.51e+00 (0) = \\
\textbf{f22} & 9.73e+00 (0) & 6.18e+00 (0) = & 6.23e+00 (0) = & 9.64e+00 (0) = & 6.29e+00 (0) = & \cellcolor[HTML]{dfdfdf}3.98e+00 (0) + & 3.90e+00 (0) = \\
\textbf{f23} & 1.51e+00 (0) & 1.00e+00 (0) = & 1.00e+00 (0) = & 9.99e-01 (0) = & \cellcolor[HTML]{dfdfdf}9.93e-01 (0) + & 1.43e+00 (0) = & \cellcolor[HTML]{dfdfdf}9.99e-01 (0) + \\
\textbf{f24} & 6.29e+01 (0) & 6.30e+01 (0) - & 6.30e+01 (0) = & 9.99e+01 (0) - & 1.00e+02 (0) - & 1.20e+02 (0) - & 9.96e+01 (0) - \\ \bottomrule
\end{tabular}}
\end{table*}
Figure~\ref{fig:cocographs} displays the empirical runtime distributions of the results obtained by GA and DE for the 2-dimensional function set. The x-axis shows the number of function evaluations, while the y-axis represents the proportion of problems for which the algorithm achieved the desired target values $\delta_i$, where $\delta_i \in \{10^{2}, 10^{1.8}, 10^{1.6}, 10^{1.4},\ldots, 10^{-8}\}$. More precisely, the figure depicts, for a given number of function evaluations, the ratio between the number of targets hit across all functions and the total number of (function, target) pairs.
Figure~\ref{fig:all}, which presents the aggregated results over all functions, shows that the expansion-based approaches reach the desired function value in more cases than the default setup. For the first group of functions, shown in Figure~\ref{fig:separable}, there is little difference between the expanded and the default approach for the GA. Similar results can also be observed in the other function groups, with slightly larger differences observed only on the last group of functions.
DE consistently performs worse than GA, except for the multi-modal group of functions with adequate global structure, on which it obtained the desired results more often than the GA.
In all results for DE, we observe that the expansion approaches perform better than the default DE approach, mirroring the behavior observed for the GA. As such, the choice of the algorithm only slightly influences the relative performance of the considered approaches.
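For concreteness, the plotted proportion can be computed as in the following sketch (the data layout and helper names are our illustration, not the COCO post-processing code):
\begin{verbatim}
import numpy as np

# target precisions 10^2, 10^1.8, ..., 10^-8
targets = 10.0 ** np.linspace(2, -8, 51)

def proportion_hit(best_so_far, budget):
    """Fraction of (function, target) pairs whose target
    precision is reached within `budget` evaluations.
    `best_so_far` maps a function id to the array of
    best-so-far error values, one entry per evaluation."""
    hit, total = 0, 0
    for trace in best_so_far.values():
        reached = trace[:budget].min()
        hit += int(np.sum(reached <= targets))
        total += targets.size
    return hit / total
\end{verbatim}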
\begin{figure*}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/pprldmany_02D_hcond.pdf}
\caption{all functions}
\label{fig:all}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/pprldmany_02D_separ.pdf}
\caption{separable functions}
\label{fig:separable}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/pprldmany_02D_lcond.pdf}
\caption{functions with low or moderate conditioning}
\label{fig:low conditioning}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/pprldmany_02D_hcond.pdf}
\caption{functions with high conditioning and unimodal}
\label{fig:high conditioning}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/pprldmany_02D_multi.pdf}
\caption{multi-modal functions with adequate global structure}
\label{fig:adequate global structure}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/pprldmany_02D_mult2.pdf}
\caption{multi-modal functions with weak global structure}
\label{fig:weak global structure}
\end{subfigure}
\caption{Empirical runtime distributions for 2-dimensional problems.}
\label{fig:cocographs}
\end{figure*}
Additionally, experiments on the two-dimensional function set examine to how many variables the original representation should be expanded. The tested expansions are $2t$, $3t$, $5t$, and $10t$. In all cases, the obtained results are quite similar, and the approaches did not show a significant deterioration as the number of expanded variables increased. For that reason, we keep the number of expanded variables minimal, as there is no benefit in using larger values.
\subsection{Modeling Physical Unclonable Functions (PUFs).}
Physical Unclonable Functions (PUFs) are lightweight hardware devices commonly used in authentication schemes and anti-coun\-ter\-feit\-ing applications. PUFs use inherent manufacturing differences within every physical object to give each physical instance (a PUF) a unique identity.
PUFs are usually divided into two categories: weak PUFs and strong PUFs.
A strong PUF can be queried with an exponential number of challenges to receive an exponential number of responses (challenge-response pairs, CRPs). Existing strong PUFs can be simulated in software, and the required parameters for such a software model can be approximated by using machine learning or evolutionary algorithms~\cite{10.1007/978-3-662-48324-4_27, 10.1145/3067695.3082535}.
Usually, strong PUFs rely on delay-based Arbiter PUFs (APUFs) as their main building blocks for PUF constructs and protocols~\cite{7450665}. Such APUFs can be modeled by a linear function, which is at the foundation of various AI-based attacks using challenge-response pairs~\cite{DBLP:journals/tifs/Delvaux19}.
An Arbiter PUF consists of one or more chains built from pairs of multiplexers with identical layouts.
Each multiplexer pair is denoted as a stage, with $n$ stages in a single chain.
A single input signal is introduced to the first stage to both the bottom and top multiplexer in the pair.
The chain is fed a control signal of $n$ bits called a challenge, where each bit determines whether the two input signals in that stage would be switched (crossed over) or not.
In ideal conditions, the input signal would propagate at the same speed through each stage, and both the lower and upper signal would arrive at the arbiter (at the end of the chain) at the same time.
Due to the manufacturing inconsistencies, each delay of a multiplexer is slightly different, and the top and bottom input signals are not synchronized.
The arbiter at the end of the chain determines which signal arrived earlier and thus forms the response (0 or 1).
The response of a PUF is determined by the delay difference between the top and bottom input signal, which is, in turn, the sum of delay differences of the individual stages.
To efficiently model a PUF, one tries to determine the delay vector $w=(w_1,\ldots,w_{n+1})$ which models the delay differences in each stage.
Lim~\cite{1561249} proposed a linear additive model that captures the APUF behavior, in which the applied challenge $c$ of length $n$ is mapped by $f(c) = \phi$ to a feature vector $\phi$ of length $n + 1$. The product of the feature vector and the delay vector decides which signal arrived first and, based on it, the response bit $r$:
\begin{eqnarray}
\phi_i = \prod_{l=i}^{n}(-1)^{c_l}, \text{ for } 1 \leq i \leq n, \qquad \phi_{n+1} = 1.\\
r = \begin{cases}
1 &\text{if \ $\vec{w}\vec{\phi}^T < 0$}\\
0 &\text{if \ $\vec{w}\vec{\phi}^T > 0$}
\end{cases}
\end{eqnarray}
The optimization algorithm aims to find a delay vector that reproduces the target PUF behavior, with its actual delay values being unknown.
The delay vector is a sequence of floating-point values with $n$ elements, which correspond to $n$ stages in a PUF.
As the optimized delay vector approaches the actual one, the clone PUF model will reproduce the target PUF responses more accurately, which is the goal of the attacker.
The performance measure of the PUF model is commonly defined as the number of wrong responses in a given set of challenge-response pairs.
This value is minimized, and the lower the value, the more accurate the PUF model.
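For illustration, the model and its fitness evaluation can be sketched as follows (our sketch, not the authors' code; the constant last feature component and the tie convention at $\vec{w}\vec{\phi}^T=0$ are assumptions):
\begin{verbatim}
import numpy as np

def features(challenge):
    """Map an n-bit challenge to the (n+1)-dim feature vector."""
    n = challenge.size
    phi = np.empty(n + 1)
    for i in range(n):
        # phi_i = prod_{l=i..n} (-1)^{c_l}
        phi[i] = np.prod((-1.0) ** challenge[i:])
    phi[n] = 1.0  # constant last component (assumed convention)
    return phi

def response(w, phi):
    return 1 if w @ phi < 0 else 0

def fitness(w, challenges, responses):
    """Number of wrong responses -- minimized by the EA."""
    return sum(response(w, features(c)) != r
               for c, r in zip(challenges, responses))
\end{verbatim}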
Our experiments modeled PUF targets with two chain sizes, 32 and 64 elements (corresponding to 32 and 64 variables).
In Table~\ref{tab:puf}, we give the results optimizing both PUF targets with 2\,000 challenge-response pairs using GA and DE.
Although there is no significant difference for the GA, the expansion summation method slightly improved upon the default encoding.
The multiplication approach obtained slightly worse results, in line with the observation that the summation variant scales better with increasing dimensionality, which is considerably larger in this case. We depict the results for the PUF with 64 stages in Figure~\ref{fig:puf64}, where the y-axis denotes the number of incorrectly modeled challenge-response pairs. Observe how the expansion procedure with summation improves slightly over the default approach, while the compression approaches perform significantly worse on average. Still, the best solutions found by the sequential compression strategy perform similarly to the default strategy.
\begin{table}[]
\centering
\small
\caption{Results for PUFs with GA and DE for two different PUF sizes. The number of challenge-response pairs is set to 2\,000.}
\adjustbox{max width=\textwidth}{
\begin{tabular}{@{}clllll@{}}
\toprule
\multicolumn{1}{l}{} & \multicolumn{1}{c}{\textbf{def}} & \multicolumn{1}{c}{\textbf{exp-s-2}} & \multicolumn{1}{c}{\textbf{exp-m-2}} & \multicolumn{1}{c}{\textbf{com-seq}} & \multicolumn{1}{c}{\textbf{com-alt}} \\ \midrule
\textbf{GA\_2000\_32} & 38.0±15.35 & 35.0±17.72 = & 46.5±18.79 = & 73.5±68.58 - & 70.5±83.02 - \\
\textbf{GA\_2000\_64} & 148.0±41.13 & 144.5±31.07 = & 225.5±54.25 - & 261.5±72.17 - & 299.5±83.96 - \\\midrule
\textbf{DE\_2000\_32} & 43.0±47.19 & \cellcolor[HTML]{dfdfdf}18.0±33.73 + & 252.0±62.32 - & 440.5±56.34 - & 446.5±69.91 - \\
\textbf{DE\_2000\_64} & 183.0±57.68 & \cellcolor[HTML]{dfdfdf}141.0±40.35 + & 317.5±63.47 - & 455.0±46.24 - & 449.5±40.05 - \\
\bottomrule
\end{tabular}}
\label{tab:puf}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{figures/puf64.pdf}
\caption{Results for DE optimization of PUF with chain size of 64.}
\label{fig:puf64}
\end{figure}
\begin{table*}[!ht]
\small
\caption{Results for the neural network weight optimization.}
\label{tab:nnres}
\adjustbox{max width=\textwidth}{
\begin{tabular}{@{}cclllllll@{}}
\toprule
\multicolumn{1}{l}{$f$} &\multicolumn{1}{c}{arch} & \multicolumn{1}{c}{\textbf{def}} & \multicolumn{1}{c}{\textbf{exp-s-2}} & \multicolumn{1}{c}{\textbf{exp-s-3}} & \multicolumn{1}{c}{\textbf{exp-m-2}} & \multicolumn{1}{c}{\textbf{exp-m-3}} & \multicolumn{1}{c}{\textbf{com-seq}} & \multicolumn{1}{c}{\textbf{com-alt}} \\ \midrule
$f_1$ & 1-5-3-1 & 1053±58.27 & \cellcolor[HTML]{dfdfdf}295.8±78.33 + & \cellcolor[HTML]{dfdfdf}271.1±71.62 + & \cellcolor[HTML]{dfdfdf}295.1±58.92 + & \cellcolor[HTML]{dfdfdf}301.3±273.0 + & 1241±208.9 - & 1184±271.5 - \\
$f_1$ & 1-5-5-1 & 452.2±67.7 & \cellcolor[HTML]{dfdfdf}279.4±86.07 + & \cellcolor[HTML]{dfdfdf}263.1±58.31 + & \cellcolor[HTML]{dfdfdf}269.4±77.11 + & \cellcolor[HTML]{dfdfdf}286.5±91.56 + & 478.2±81.21 = & 449.2±83.05 = \\
$f_1$ & 1-7-5-1 & 345.9±64.88 & \cellcolor[HTML]{dfdfdf}238.6±48.31 + & \cellcolor[HTML]{dfdfdf}233.6±47.45 + & \cellcolor[HTML]{dfdfdf}217.7±72.52 + & \cellcolor[HTML]{dfdfdf}231.4±63.95 + & 366.5±57.02 = & 340.2±50.27 = \\
$f_1$ & 1-7-7-1 & 324.4±60.28 & \cellcolor[HTML]{dfdfdf}235.4±41.78 + & \cellcolor[HTML]{dfdfdf}201.9±42.75 + & \cellcolor[HTML]{dfdfdf}233.5±66.67 + & \cellcolor[HTML]{dfdfdf}241.1±72.04 + & 348.6±65.3 - & 378.6±75.53 - \\
$f_2$ &2-5-3-1 & 18.45±3.883 & 19.62±3.813 = & 19.54±4.06 = & 23.76±3.964 - & 20.22±5.248 - & 22.59±2.327 - & 24.47±3.195 - \\
$f_2$ &2-7-5-1 & 14.37±2.615 & 14.67±3.53 = & 14.31±3.712 = & 14.89±4.679 = & 13.73±6.559 = & 18.77±3.324 - & 18.15±3.842 - \\
$f_2$ & 2-7-7-1 & 14.31±3.172 & 13.13±4.356 = & 14.41±3.779 = & 14.3±4.345 = & 11.78±5.178 = & 18.0±2.73 - & 18.1±3.533 - \\
$f_2$ &2-5-5-1 & 17.81±4.17 & 17.61±4.347 = & 18.92±3.614 = & 18.4±4.71 = & 19.81±6.001 = & 22.47±3.941 - & 21.75±3.747 - \\
$f_3$ &1-2-2-1 & 5.604±0.574 & 5.614±1.246 = & \cellcolor[HTML]{dfdfdf}5.53±0.616 + & 5.728±5.562 = & 5.82±0.994 - & 5.952±6.089 - & 6.018±8.752 - \\
$f_3$ &1-3-3-1 & 5.525±0.572 & \cellcolor[HTML]{dfdfdf}4.691±0.826 + & \cellcolor[HTML]{dfdfdf}5.315±0.645 + & 5.37±1.884 = & 5.439±1.44 = & 5.508±6.551 = & 5.57±6.626 = \\
$f_3$ &1-5-5-1 & 4.263±0.961 & 4.166±1.04 = & \cellcolor[HTML]{dfdfdf}3.803±0.922 + & 4.287±0.906 = & 4.122±0.822 = & 4.168±1.695 = & 4.215±7.981 = \\
$f_3$ &1-7-5-1 & 3.761±0.682 & \cellcolor[HTML]{dfdfdf}3.427±0.877 + & 3.609±0.747 = & 3.797±1.006 = & 3.619±1.19 = & 4.611±1.275 - & 4.022±10.11 = \\ \bottomrule
\end{tabular}}
\end{table*}
\subsection{Optimizing Neural Network Weights.}
Artificial neural networks (ANNs) are a widely used model applied to various problem types. However, determining the optimal weights of a neural network for a given problem is a difficult optimization problem. As a result, various algorithms are used to optimize the weights of ANNs, ranging from gradient-based to evolutionary methods. We apply the proposed approaches to optimize the weights of ANNs for three selected regression problems. The considered problems are:
\begin{align}
&f_1(x)=3\cdot \sin(x)+x. \\
&f_2(x,y)=x+y. \\
&f_3(x)=x\cdot \sin(x).
\end{align}
We selected these problems as they include both linear and nonlinear functions. For each problem, the number of training samples is between 250 and 300. We consider a simple feed-forward fully-connected ANN consisting of two hidden layers. The notation \textit{a-b-c-d} is used to denote the architecture of a network with \textit{a} nodes in the input layer, \textit{b} and \textit{c} nodes in the first and second hidden layers, and \textit{d} nodes in the output layer.
The Sigmoid function is used as the activation function in the hidden layers, whereas the linear sum function is used in the output layer. The following architectures are applied for the first and second regression problems: 1-5-3-1, 1-5-5-1, 1-7-5-1, 1-7-7-1. For the second function, the input layer consists of two nodes instead of one. Since the third regression problem is less difficult to solve, it is used to test smaller architectures: 1-2-2-1, 1-3-3-1, 1-5-5-1, 1-7-5-1. Each experiment is run 30 times to obtain statistically significant results. The domain is set to $[-5,5]$, and the termination criterion is set to 100\,000 function evaluations. The fitness function of this problem is defined as the mean squared error (MSE) between the real values and the ANN output. Note that the problems and architectures of the considered neural networks are simple by today's deep learning standards, but they were selected to test the feasibility of the proposed approach for such problems.
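To make the fitness evaluation concrete, a minimal sketch for the 1-5-3-1 architecture follows (our illustration; the order in which the flat genotype is decoded into layer weights is an assumption):
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mse_fitness(genes, X, y, arch=(1, 5, 3, 1)):
    """Decode a flat genotype into weights/biases, return MSE."""
    idx, params = 0, []
    for fan_in, fan_out in zip(arch[:-1], arch[1:]):
        W = genes[idx:idx + fan_in * fan_out].reshape(fan_in, fan_out)
        idx += fan_in * fan_out
        b = genes[idx:idx + fan_out]
        idx += fan_out
        params.append((W, b))
    a = X
    for W, b in params[:-1]:
        a = sigmoid(a @ W + b)   # Sigmoid in the hidden layers
    W, b = params[-1]
    out = (a @ W + b).ravel()    # linear sum in the output layer
    return float(np.mean((out - y) ** 2))

# example: f1(x) = 3*sin(x) + x on 250 samples in [-5, 5]
X = np.linspace(-5, 5, 250).reshape(-1, 1)
y = 3 * np.sin(X.ravel()) + X.ravel()
print(mse_fitness(np.random.uniform(-5, 5, 32), X, y))
\end{verbatim}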
Table~\ref{tab:nnres} displays the results obtained by GA for each regression problem and the selected architectures, while Figure~\ref{fig:nn_results} represents the distribution of solutions. Each row contains the median MSE value of the 30 executions and the standard deviation, denoted after the $\pm$ sign. The results support what was already demonstrated in previous experiments: the compression-based approaches again obtain inferior performance compared to all other tested approaches. On the other hand, the summation-based expansion approach achieved equally good or significantly better results than the default approach in all experiments. The multiplication-based approaches also performed well, although slightly worse than the summation-based ones. Surprisingly, for this problem, the increase of the number of variables, i.e., weights, did not affect the expansion approaches' performance, unlike in the case of the COCO benchmarks. Therefore, the results also demonstrate that the expansion approaches' performance depends on the considered optimization problem.
\begin{figure}[!ht]
\centering
\begin{subfigure}[t]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/sinx+siny2-7-7-1.pdf}
\caption{Optimizing function $x+y$ with the 2-7-7-1 architecture.}
\label{fig:nnbox1}
\end{subfigure}
\begin{subfigure}[t]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/sinxbyx1-7-5-1.pdf}
\caption{Optimizing function $x\sin(x)$ with the 1-7-7-1 architecture.}
\label{fig:nnbox2}
\end{subfigure}
\begin{subfigure}[t]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3sinx+x1-7-7-1.pdf}
\caption{Optimizing function $3\sin(x)+x$ with the 1-7-7-1 architecture.}
\label{fig:nnbox3}
\end{subfigure}
\caption{Distribution of the results obtained for optimizing neural network weights with GA.}
\label{fig:nn_results}
\end{figure}
The probable reason why the compression-based methods achieve inferior results is that two variables are fused into one: any change to the compressed variable simultaneously changes both original variables. As such, it is quite difficult to ensure that both variables are changed in a meaningful way, which would lead closer to the minimum, or that only one of the two variables is updated. The compression-based methods also suffer from reduced precision, since the same space is used to store more than one variable at a time. This can limit the algorithms since they cannot perform an equally fine-grained search as without the genotype reduction. On the other hand, the expansion approaches seem to perform better due to a larger degree of freedom.
However, in the expansion approaches, because each variable is expressed with multiple values, the algorithm has an infinite number of combinations in which it can represent a single value. Although the search space also increases, this additional freedom proves to be beneficial and allows the algorithms to find new paths towards the minimum.
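The decoding step of the two expansion variants can be sketched as follows (a minimal sketch, assuming each original variable is represented by $k$ consecutive genes):
\begin{verbatim}
import numpy as np

def decode_expanded(genes, k, mode="sum"):
    """Collapse an expanded genotype (k genes per original
    variable) back to the original variables before the
    fitness function is evaluated."""
    groups = genes.reshape(-1, k)
    if mode == "sum":                 # summation-based expansion
        return groups.sum(axis=1)
    return groups.prod(axis=1)        # multiplication-based

# a 2x-expanded genotype encoding three original variables
genes = np.array([0.2, 0.3, -1.0, 0.5, 1.5, 2.0])
print(decode_expanded(genes, 2))      # [ 0.5 -0.5  3.5]
\end{verbatim}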
\section{Conclusions and Future Work}
\label{sec:conclusions}
This paper discusses how to expand or compress genotypes for continuous optimization and EAs. We compare four evolutionary algorithms' performance, four strategies for reconstructing the original genotype after compression/expansion, and three sets of problems.
Our analysis shows that compression works poorly in all the tested cases, while expansion manages to outperform the default encoding in numerous settings. We find the summation-based expansion strategy to be especially promising.
As the approach we discuss here is novel, there are multiple research directions one could follow. While we discuss the continuous optimization problems where we see the improvements stemming from the genotype expansions, it is not difficult to imagine problems where such an approach would not be beneficial. We plan to investigate such problems and understand what the differences are.
Also, we considered a scenario where every variable is either expanded or compressed. It would be interesting to see what happens when only a subset of variables is adapted in this way. This could provide a trade-off between the improvements in the performance and the size of the genotype.
\bibliographystyle{abbrv}
Light absorption can be described in terms of
a process where polarization is
induced in the medium.
To linear order in the electric
field component of the traversing light, ${\cal E}$,
the induced polarizability, ${\cal P}$,
can be expressed in terms of the dielectric susceptibility $\chi$ as
$$
{\cal P}(t) = \int_{-\infty}^t{\rm d}t'\,\chi (t,t'){\cal E}(t').
$$
If the absorbing medium is in a stationary state, the
susceptibility depends only on the difference of its time arguments, i.e.,
$\chi (t,t') = \chi (t-t')$. Under these conditions
Maxwell's equation for ${\cal E}$
is an algebraic equation in frequency space and one finds that the absorption
is proportional to the imaginary part of $\chi (\omega )$.
However, under non-equilibrium conditions, which are the
topic of the present study, the susceptibility is
a two-time function, and Maxwell's
equation remains an integral equation even in the
frequency domain.
Further progress hinges upon two steps: first, one has to develop
methods to calculate the non-equilibrium susceptibility function
and second,
one has to specify what sort
of light wave or pulse is used in the absorption experiment.
The present work addresses both of these problems.
As far as the time-dependence of the probe pulse is concerned,
two specific situations are examined.
First, consider an undoped semiconductor placed in an intense
THz field; we assume that the THz field is not able to induce polarization,
i.e., no carriers are excited in the conduction band. Such a system is in
a non-equilibrium state, i.e., the susceptibility is a two-time function.
Properties of such systems have been investigated experimentally using
the Free Electron Laser (FEL) as a source for intense THz
fields\cite{CER95}; many interesting properties
have been discovered, and others predicted,
such as photon-assisted tunneling \cite{GUI93},
dynamical localization and absolute negative conductivity \cite{KEA95},
ac Stark effect\cite{HOL94},
dynamical Franz-Keldysh effect
and formation of sidebands\cite{YAC68,JAU96,KON97,BIR95}.
A second example consists of ultra-fast transients.
Consider an undoped semiconductor structure subject to an external
intense static field. At some time instant a population of carriers is
pumped into the conduction band. These mobile charges will rearrange
themselves so as to
screen the external field. While the screening is building up
the susceptibility of the system is a two-time function.
Using femtosecond laser techniques, which are able to probe
the time scales in which screening is building up, experiments
investigating the non-equilibrium properties of such systems have been
performed\cite{HEE93,CAM96}.
Bandgap engineering techniques of semiconductor compounds,
such as Molecular Beam Epitaxy (MBE), allow
spatial modulation of the bandgap down to atomic resolution.
It is possible to break the translational symmetries
of bulk crystals, induce new ones and reduce
the degrees of freedom with these techniques.
Construction of systems which are effectively two dimensional (2D) and even
one dimensional (1D) with regard to electron mobility and optical
properties is today a standard procedure. Another example
of such man-made structures are superlattices (SL), i.e.,
an engineered periodic potential in the growth direction of the
sample. The interplay between the mesoscopic properties and
the dynamical properties can lead to many interesting phenomena
such as absolute negative resistance for the transport
properties\cite{WAC97} and the rich features in the optical properties
which are the subject of the present work.
The purpose of this paper is to present a
theoretical study of light absorption in mesoscopic systems
subject to intense THz [far infrared (FIR)] fields.
We consider undoped systems, which implies that there are no carriers
in the conduction band. Thus, the near infrared (NIR) inter-band
absorption is the dominant absorption process.
The non-equilibrium nature of the system necessitates the
use of special theoretical tools; we have
chosen to apply the non-equilibrium Green function
techniques\cite{HAU96,POT96}.
In particular, this method allows us to treat the intense FIR field
non-perturbatively, and defines a framework in which screening
can be treated systematically.
Our analysis consists of the following steps.
Starting from a two-band Hamiltonian we derive a
formal
expression for the inter-band susceptibility in terms of non-equilibrium
Green functions. Next, we use the general
expression to derive the NIR absorption spectrum
for non-interacting particles
(Coulomb interactions will be discussed in a subsequent paper,
see also below). Finally, we give explicit results for a number
of special cases (3-, 2-, 1-d systems and superlattices), and
discuss the physical implications.
This paper is organized as follows. In section \ref{susc_sec} we derive
a general expression for the two-time
dielectric inter-band susceptibility. Section \ref{abs_sec}
relates the susceptibility to the measured absorption, considering both
continuous wave measurements as well as short white light pulses.
The single-particle Green functions and the corresponding spectral
functions, which determine the susceptibility, are defined
in section \ref{gre_sec} and related to the generalized density of
states\cite{JAU96} (GDOS), which, in turn, is shown to determine the
optical absorption.
Section \ref{bulk_sec} considers bulk-like
systems, and we obtain analytic results for the GDOS,
which is analyzed in some detail in terms of the side-band picture.
The properties of light absorption in superlattices are treated in section
\ref{sup_sec}. Specific attention is paid to conditions where
dynamical localization occurs,
and we show how it affects the absorption spectrum. Finally,
in section \ref{con_sec} we have some concluding remarks.
\section{The dielectric inter-band susceptibility}
\label{susc_sec}
We shall now derive an expression for the dielectric inter-band
susceptibility
using non-equilibrium Green functions.
The microscopic operator describing inter-band
polarization is
\begin{equation}
\vec{P}(t) = \sum_{k}\vec d_k \left[a_k^\dagger (t)b_k(t)+b_k^\dagger (t)a_k(t)
\right]\;.
\label{pol}
\end{equation}
Here $\vec d_k$ is the dipole matrix-element, $a_k^\dagger (t)$ [$a_k(t)$]
are the conduction band electron creation
[annihilation] operators and $b_k^\dagger (t)$
[$b_k(t)$] are the valence band creation [annihilation]
operators. The linearized Hamiltonian associated
with a polarization $\vec{P}(t)$ induced by the external field ${\cal E}(t)$ is
$H_P(t) = - \vec{P}(t)\cdot {\cal E}(t)$.
Linear response theory now yields the Cartesian $l$-component
of the
induced inter-band polarization
due to a weak external field ${\cal E}$:
\begin{equation}
{\cal P}_l(t) = -{i\over\hbar}\int_{-\infty}^\infty {\rm d}t'\,\theta (t-t')
\langle [ \vec{P}(t'),P_l(t)]\rangle\cdot {\cal E}(t').
\label{is5}
\end{equation}
The retarded susceptibility tensor can be identified from Eq.(\ref{is5}):
\begin{equation}
\chi_{lm}^r(t,t') = -{i\over\hbar}\theta (t-t')
\langle [ P_m(t'),P_l(t)]\rangle.
\end{equation}
Following the standard line of attack in non-equilibrium theory \cite{HAU96},
we first consider the causal (time-ordered) response function:
\begin{equation}
\chi_{lm}^c(t,t') =-{i\over\hbar}\langle {\rm T}\{ P_m(t')P_l(t)\}\rangle
\end{equation}
where ${\rm T}$ is the time-ordering operator.
In non-equilibrium, the causal response function is replaced by its
contour ordered counterpart:
$\chi_{lm}^c(t,t')\to\chi_{lm}^c(\tau,\tau')$,
where the complex-time variables $\tau,\tau'$ reside on
the Keldysh contour. Finally we obtain the retarded
tensor by an analytic continuation
using the Langreth rules\cite{HAU96Table41}.
We use Eq.(\ref{pol}) to write the susceptibility as
\begin{eqnarray}\label{chifull}
\chi_{lm}^c(\tau,\tau')
&=& -{i\over\hbar}\sum_{q\,k}d_l(k)d_m(q) \\
&\times&[\langle{\rm T}_c\{a_q^\dagger (\tau')b_q(\tau')
a_k^\dagger (\tau)b_k(\tau)\}\rangle
\nonumber \\
&&+\langle{\rm T}_c\{a_q^\dagger (\tau')b_q(\tau')
b_k^\dagger (\tau)a_k(\tau)\}\rangle
\nonumber \\ &&
+\langle{\rm T}_c\{b_q^\dagger (\tau')a_q(\tau')
a_k^\dagger (\tau)b_k(\tau)\}\rangle
\nonumber \\
&&+\langle{\rm T}_c\{b_q^\dagger (\tau')a_q(\tau')
b_k^\dagger (\tau)a_k(\tau)\}\rangle
]\;. \nonumber
\end{eqnarray}
where ${\rm T}_c$ is the contour-ordering operator.
In equilibrium, the two-particle
correlation functions occurring in Eq.(\ref{chifull}) would be found via the
Bethe-Salpeter equation\cite{HAU84}.
In what follows, however, we shall consider the noninteracting limit
of Eq.(\ref{chifull}).
This approach is motivated by the
following considerations.
First, the noninteracting limit will allow significant analytic progress,
and the results, which we believe are interesting in their
own right, will form the basis for
any subsequent interacting theories.
Second, it is known experimentally that in undoped semiconductor
quantum wells excitons are quenched if the system is subject to intense
FIR\cite{NOR97}, thus implying that for the present
situation Coulomb interactions
may not play an equally dominant role as is the case in
equilibrium situations.
A quantitative assessment requires a non-equilibrium theory for the
two-particle Green functions and will be a topic of our future
work. For noninteracting particles we can use Wick's theorem
to factorize the two-particle correlation functions.
Thus, the non-equilibrium susceptibility can be expressed
in terms of single-particle Green functions.
The following Green functions are needed:
\begin{eqnarray}
g_c(k,\tau;q,\tau') = -i\langle{\rm T}_c\{a_k(\tau)a_q^\dagger (\tau')\}
\rangle \\
g_v(k,\tau;q,\tau') = -i\langle{\rm T}_c\{b_k(\tau)b_q^\dagger (\tau')\}
\rangle \\
g_{ab}(k,\tau;q,\tau') = -i\langle{\rm T}_c\{a_k(\tau)b_q^\dagger (\tau')\}
\rangle
\end{eqnarray}
and
\begin{equation}
g_{ba}(k,\tau;q,\tau') = -i\langle{\rm T}_c\{b_k(\tau)a_q^\dagger (\tau')\}\rangle.
\end{equation}
We assume that the frequency, $\Omega$, of the FIR
field is such that $\hbar\Omega\ll\epsilon_g$.
In typical experiments on III-V systems $\epsilon_g$ is of the
order of eV, while $\hbar\Omega$ is a few meV, so this
condition is satisfied.
Consequently, inter-band transitions due to the
perturbing field can be ignored, and
the Green functions related to Zener-effect, i.e.,
$g_{ab}(k,\tau;q,\tau')$ and $g_{ba}(k,\tau;q,\tau')$,
are neglected from here on.
The first-order non-equilibrium susceptibility thus reads
\begin{eqnarray}
\chi_{lm}^c(\tau,\tau') &=&
-{i\over\hbar}\sum_{q\,k}d_l(k)d_m(q)[
g_c(k,\tau;q,\tau')g_v(q,\tau';k,\tau)\nonumber\\
&&+g_v(k,\tau;q,\tau')g_c(q,\tau';k,\tau)
].
\end{eqnarray}
The analytic continuation to real times is performed with
the Langreth rules \cite{HAU96Table41}, which state that
if on contour
\begin{equation}
C(\tau ,\tau') = A(\tau ,\tau')B(\tau' ,\tau)\;,
\nonumber
\end{equation}
then the retarded function on the real time axis is
\begin{equation}
C^r(t,t') = A^<(t ,t')B^a(t',t) + A^r(t,t')B^<(t',t).
\nonumber
\end{equation}
We thus have
\begin{eqnarray}
\chi_{lm}^r(t,t') &=& -{i\over\hbar}\sum_{k}d_l(k)d_m(k)[
g_c^<(k,t,t')g_v^a(k,t',t)\nonumber\\
&&+g_c^r(k,t,t')g_v^<(k,t',t)
+g_v^<(k,t,t')g_c^a(k,t',t) \nonumber \\
&&+g_v^r(k,t,t')g_c^<(k,t',t) ],
\end{eqnarray}
with
\begin{eqnarray}
g^<(t,t') &=& i\langle c^\dagger (t')c(t)\rangle ,\\
g^a(t,t') &=& i\theta (t'-t)\langle\{c(t),c^\dagger (t')\}\rangle ,\\
g^r(t,t') &=& -i\theta (t-t')\langle\{c(t),c^\dagger (t')\}\rangle.
\end{eqnarray}
We recall the following relations:
\begin{eqnarray}
\left[g^<(t,t')\right]^* &=& - g^<(t',t) \nonumber\\
\left[g^a(t,t')\right]^* &=& g^r(t',t) \\
\left[g^r(t,t')\right]^* &=& g^a(t',t)\nonumber
\end{eqnarray}
For certain applications, e.g. Section
\ref{cw_sec}, it is convenient to introduce the center of mass variables
$T = (t'+t)/2$ and $\tau = t-t'$
\cite{notation}.
In terms of these variables the symmetry
relations of the Green functions are:
\begin{eqnarray}
\left[g^<(T,\tau )\right]^* &=& - g^<(T,-\tau ) \nonumber\\
\left[g^a(T,\tau )\right]^* &=& g^r(T,-\tau ) \\
\left[g^r(T,\tau )\right]^* &=& g^a(T,-\tau )\nonumber
\end{eqnarray}
The retarded susceptibility expressed in center of mass coordinates
is
\begin{eqnarray}
\label{chicm}
\chi_{lm}^r(T,\tau) &=& -{i\over\hbar}\sum_{k}d_l(k)d_m(k)\nonumber \\
&\quad&\times\{[ g_c^<(k,T,\tau )g_v^a(k,T,-\tau ) \nonumber \\
&&\quad + g_v^<(k,T,\tau )g_c^a(k,T,-\tau ) ] - h.c.\}.
\end{eqnarray}
Note that in equilibrium $\chi_{lm}^r(T,\tau) = \chi_{lm}^r(\tau)$.
As shown in Section \ref{cw_sec}, the relevant quantity for
continuous wave measurements at frequency $\omega_l$ is
\begin{equation}
{\rm Im}\chi_{lm}^r(T,\omega_l ) = {\rm Im}
\left\{\int_{-\infty}^\infty{\rm d}\tau\,
e^{i\omega_l\tau}\chi_{lm}^r(T,\tau) \right\}
\end{equation}
to first order in $\Omega /\omega_l$ (here $\Omega$ is the FIR
frequency). Now,
$\chi_{lm}^r(T,\tau)$ is a real quantity as is evident from (\ref{chicm}).
Using the properties of the Fourier transform
we obtain
\begin{eqnarray}
\chi_{lm}^r(T,\omega_l ) &=&
-{i\over\hbar}\sum_{k}d_l(k)d_m(k)
\int_{-\infty}^\infty {{\rm d}\omega\over 2\pi}\Big\{
g_c^<(k,T,\omega ) \nonumber \\
&&
\times [ g_v^a(k,T,\omega-\omega_l ) + g_v^r(k,T,\omega + \omega_l )]
\nonumber \\
&&+
g_v^<(k,T,\omega )\nonumber \\
&&\times
[g_c^a(k,T,\omega-\omega_l )
+ g_c^r(k,T,\omega+ \omega_l )]\Big\}.\nonumber\\
\end{eqnarray}
Since $\chi_{lm}^r(T,\tau )$ is real, the
imaginary part of its Fourier transform is obtained through
\begin{equation}
\nonumber
{\rm Im}\chi_{lm}^r(T,\omega_l ) =
{1\over 2i}[\chi_{lm}^r(T,\omega_l ) - \chi_{lm}^r(T,-\omega_l )].
\end{equation}
We can therefore write, in terms of the spectral functions
\begin{equation}
\nonumber
A_c(k,T,\omega ) = i[g_c^r(k,T,\omega )-g_c^a(k,T,\omega )]
\end{equation}
and
\begin{equation}
\nonumber
A_v(k,T,\omega ) = i[g_v^r(k,T,\omega )-g_v^a(k,T,\omega )],
\nonumber
\end{equation}
that
\begin{eqnarray}\label{ImchiT}
{\rm Im}\chi_{lm}^r(T,\omega_l )
&=&{i\over 2\hbar }
\sum_{k}d_l(k)d_m(k)
\int_{-\infty}^\infty {{\rm d}\omega\over 2\pi}\Big\{g_c^<(k,T,\omega )
\nonumber \\
&&\times
[ A_v(k,T,\omega-\omega_l)-A_v(k,T,\omega+\omega_l) ] \nonumber \\
&&+
g_v^<(k,T,\omega ) \nonumber \\ &&\times
[ A_c(k,T,\omega-\omega_l)-A_c(k,T,\omega+\omega_l)] \Big\}.\nonumber
\\
\end{eqnarray}
The lesser functions can be expressed in the form\cite{HAU96}
\begin{equation}
\nonumber
g_a^<(k,T,\omega ) = if_a(k,T,\omega ) A_a(k,T,\omega )
\end{equation}
where $f_a(k,T,\omega )$ is a generalized
particle distribution for particles of species
$a$, and $A_a(k,T,\omega )$ is the corresponding spectral function.
In accordance with our assumption that the FIR field induces no
inter-band transitions, we can set
$f_c(k,T,\omega )=0$ (zero occupation of the conduction
band) and $f_v(k,T,\omega )=1$ (all valence states are occupied).
In the general case, e.g., when considering nonlinear effects
in the {\it probing} light field,
one would have to find $f_a(k,T,\omega )$ via, say,
a Monte Carlo solution of semiconductor
Bloch equations \cite{HAU93,KUH92}
or by a direct integration of quantum kinetic
equations for
$g_a^<(k,T,\omega )$ \cite{HAU96}.
With these assumptions the susceptibility reduces to
\begin{eqnarray}\label{convolution}
{\rm Im}\chi_{lm}^r(T,\omega_l )
&=& {-1\over 2\hbar }\sum_{k}d_l(k)d_m(k)
\int_{-\infty}^\infty {{\rm d}\omega\over 2\pi}
A_v(k,T,\omega )\nonumber\\
&\quad&\times \{ A_c(k,T,\omega-\omega_l)-A_c(k,T,\omega+\omega_l) \}
\nonumber \\
&=&
{1\over 2\hbar }\sum_{k}d_l(k)d_m(k)\nonumber\\
&\quad&
\times\int_{-\infty}^\infty {{\rm d}\omega\over 2\pi}
A_v(k,T,\omega )A_c(k,T,\omega+\omega_l). \nonumber\\
\end{eqnarray}
The second equality comes about because we do not consider
overlapping bands. Eq.(\ref{convolution}), which is the central
result of this section, expresses the fact that the non-equilibrium
inter-band susceptibility function can be calculated from a
{\it joint spectral function}, which is a convolution of the
individual band spectral function. A similar result is known
from high-field quantum transport theory \cite{HAU96Chap11}: there the
field-dependent scattering rate is expressed as a joint spectral
function for the initial and final states.
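As a quick check of Eq.(\ref{convolution}), consider free particles in the absence of the driving field: the spectral functions then reduce to delta functions,
\begin{equation}
\nonumber
A_\alpha (k,\omega ) = {2\pi\over\hbar}\delta (\omega - \epsilon_\alpha [k]/\hbar ),
\end{equation}
and the frequency integral collapses, leaving
\begin{equation}
\nonumber
{\rm Im}\chi_{lm}^r(\omega_l )\propto \sum_{k}d_l(k)d_m(k)\,
\delta (\hbar\omega_l -\epsilon_c[k]+\epsilon_v[k]),
\end{equation}
i.e., the familiar result that the absorption is proportional to the joint density of states.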
In order to make a connection to the equilibrium case,
we recall the exact identity $g^<_{\rm eq}(k,\omega)
=ia_{\rm eq}(k,\omega)n(\omega)$ ($n(\omega)$ is
the Fermi function), and obtain from Eq.(\ref{ImchiT})
\cite{HAU84,SCH89}
\begin{eqnarray}
{\rm Im}\chi_{lm}^r(\omega_l )
&=&-{1\over 2\hbar}
\sum_{k}d_l(k)d_m(k)
\nonumber\\
&\quad&\times\int_{-\infty}^\infty {{\rm d}\omega\over 2\pi}
\Big\{ n_c(\omega )a_{c,{\rm eq}}(k,\omega ) \nonumber \\
&\quad&\times [a_{v,{\rm eq}}(k,\omega-\omega_l)-
a_{v,{\rm eq}}(k,\omega+\omega_l)] \nonumber \\
&\quad&+ n_v(\omega )a_{v,{\rm eq}}(k,\omega ) \nonumber \\
&\quad& \times [ a_{c,{\rm eq}}(k,\omega-\omega_l)-
a_{c,{\rm eq}}(k,\omega+\omega_l)] \Big\}.\nonumber\\
\end{eqnarray}
Here $n_c(\omega )$ is the conduction band
electron occupation function,
and $n_v(\omega )$ is corresponding function for the valence band electrons.
\section{Absorption coefficient in terms of the time dependent
dielectric susceptibility}
\label{abs_sec}
The dielectric susceptibility $\chi$ links the
induced polarization
${\cal P}$ to the field ${\cal E}$ via
\begin{equation}
{\cal P}(t) = \int_{-\infty}^t{\rm d}t'\,\chi (t,t'){\cal E}(t').
\label{ad1}
\end{equation}
The wave equation for light is then
\begin{equation}\label{waveq}
\nabla^2{\cal E}(t) - {1\over c^2}{\partial^2 {\cal D}(t)\over\partial t^2}
= 0
\end{equation}
where
$
{\cal D}(t) = {\cal E}(t) + 4\pi {\cal P}(t)$.
The absorption coefficient $\alpha (\omega )$ is defined as
the inverse of the length which light has to traverse
in the medium at frequency $\omega$ in
order for the intensity of the light
to decrease by a factor of $1/e$. In equilibrium
$
{\cal D}(\omega ) = [1+4\pi\chi (\omega)]{\cal E}(\omega )
= \epsilon (\omega){\cal E}(\omega )
$
and the
absorption coefficient \cite{HAU93} becomes
\begin{equation}\label{alphastat}
\alpha (\omega ) = 4\pi\omega{{\rm Im}\chi (\omega )\over cn (\omega )}.
\end{equation}
Here $n(\omega)$, with $n^2(\omega)= \frac{1}{2}
[{\rm Re}\,\epsilon(\omega)+|\epsilon(\omega)|]$, is the
refractive index, which usually depends only weakly
on $\omega$.
In non-equilibrium this analysis must be generalized, and
we consider two special cases:
(i) monochromatic continuous wave
measurements, and (ii) white light short pulse measurements.
\subsection{Continuous wave measurements}
\label{cw_sec}
Consider a system out
of equilibrium which is probed by a light field (which is assumed
to be weak)
of frequency $\omega_l$:
\begin{equation}
{\cal E}(r,t) = {\cal E}_0 \exp [i(r\cdot k-\omega_l t)].
\end{equation}
The polarization can then be expressed as
\begin{equation}\label{pol1}
{\cal P}(t) = {\cal E}(r,t)
\int_{-\infty}^{\infty} dt'
e^{i\omega_l(t-t')}\chi^r(t,t')\;.
\end{equation}
This form is suggestive: it is advantageous to
express $\chi^r$ in terms of the center-of-mass and difference
coordinates, $\chi^r(t,t')\to{\tilde\chi}^r({1\over 2}(t+t'),t-t')$.
The characteristic time-scale for the center-of-mass time
is set by the ``slow'' frequency $\Omega$, while the difference-time
varies on the scale of the ``fast'' frequency $\omega_l$. We thus
gradient-expand:
${\tilde\chi}^r({1\over 2}(t+t'),t-t')\simeq{\tilde\chi}^r(t,t-t')+
{1\over 2}(t'-t)
\tilde\chi^{r^\prime}(t,t-t') + \cdots$,
where the prime indicates differentiation with respect to the slow
temporal variable. Substitution in (\ref{pol1}) then yields
(we introduce a new variable $\tau\equiv t-t'$)
\begin{eqnarray}\label{gradexp}
{\cal P}(t)&=&{\cal E}(r,t)\int d\tau e^{i\omega_l\tau}[
{\tilde\chi}^r(t,\tau) + (-{1\over 2}\tau)\tilde\chi^{r^\prime}(t,\tau)
+\cdots]\nonumber\\
&=&{\cal E}(r,t)[{\tilde\chi}^r(t,\omega_l) + {\partial\over\partial\omega_l}
{\partial\over\partial t} {i\over 2}{\tilde\chi}^r(t,\omega_l)+\cdots]\nonumber\\
&=&{\cal E}(r,t) \exp\left[{i\over 2}{\partial^2\over\partial t
\partial\omega_l}\right]
{\tilde\chi}^r(t,\omega_l)\;.
\end{eqnarray}
Eq.(\ref{gradexp}) can now be used in the Maxwell equation; note however
that upon Fourier-transforming the dominant frequency comes from ${\cal E}(t)$
and we can keep $t$ in $\chi(t,\omega_l)$ fixed. The slow time-variation
will from here on be indicated by $T$.
Proceeding as in deriving the static result (\ref{alphastat}),
we identify the {\it time-dependent} absorption
coefficient
\begin{equation}
\alpha_T(\omega ) =
4\pi\omega{{\rm Im}{\tilde\chi}^r (T,\omega)\over cn_T(\omega )} +
{\cal O}(\Omega /\omega ).
\end{equation}
If the driving force is periodic in $T$ (the harmonic time-dependence
due to a FEL-laser is an important special case), then the average
absorption coefficient is
\begin{eqnarray}
\bar\alpha (\omega ) &=& {1\over T_{\rm period}}\int_{\rm period}{\rm d}T\,
\alpha_T(\omega ) \\
&=& {1\over T_{\rm period}}\int_{\rm period}{\rm d}T\,
4\pi\omega{{\rm Im}{\tilde\chi}^r (T,\omega)\over cn_T(\omega )} \nonumber
\end{eqnarray}
to {\it all} orders in $\Omega /\omega$. We stress that here ${\tilde\chi}^r (T,\omega)$
is Fourier transformed with respect to the difference variable $\tau$.
Below we shall represent numerical examples for the generalized
absorption coefficient.
\subsection{Short white light pulse measurements}
\label{sl_sec}
Consider now an instantaneous measurement performed on
a non-equilibrium system: at some specific
time $t=t_m$ the system is probed
with a weak pulse whose duration is short compared to the characteristic
dynamics of the system.
We approximate the pulse with delta function
in time:
\begin{equation}\label{deltapulse}
{\cal E}(r,t)
= {\cal E}_0e^{ir\cdot k} \delta (t-t_m).
\end{equation}
In principle, constructing such a pulse would take infinite energy due to its
time dependence, so the pulse is hardly ``weak''. When we refer to the
pulse as weak we assume that ${\cal E}_0 \ll 1$, i.e., that the intensity of the
light is small at all frequencies.
Using (\ref{deltapulse}) in the Maxwell equation
yields dispersion relation
\begin{equation}
k^2 = {\omega^2\over c^2}[1+4\pi\chi^r (\omega,t_m)].
\end{equation}
This dispersion relation looks quite
similar to the one obtained in the
previous section. The difference is that here $\chi^r (\omega,t_m)$ is Fourier
transformed with respect to $t'$, {\em not} the difference variable
$\tau$. In the present case we obtain the time dependent
absorption coefficient
\begin{equation}
\alpha_t(\omega ) = 4\pi\omega{{\rm Im}\chi^r (\omega,t)\over cn_t(\omega )}.
\end{equation}
For examples of experiments which probe systems in this manner
see, e.g., Refs. \cite{HEE93,CAM96}.
\subsection{Differential Transmission Spectrum}
\label{dts_sec}
Consider a sample of thickness $L$;
then the ratio of the intensity transmitted through the sample to its initial
intensity is $T(\omega )= \exp [-\bar\alpha (\omega )L]$ where
$\bar\alpha (\omega )$
is the absorption coefficient of the sample.
Experimental setups for
measuring the change in absorption due to externally controlled perturbations
commonly measure the differential transmission spectrum (DTS) defined
by\cite{HAU93}
\begin{equation}
DTS( \omega )= {T(\omega )-T_0(\omega )\over T_0(\omega )}.
\end{equation}
Here $T(\omega )$ is the transmission with the perturbation present and
$T_0(\omega )$ is the transmission through the unperturbed sample.
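Numerically, the DTS follows directly from two computed absorption spectra; a minimal sketch (the sample thickness $L$ is a free parameter):
\begin{verbatim}
import numpy as np

def dts(alpha, alpha0, L):
    """Differential transmission from the absorption with
    (alpha) and without (alpha0) the perturbation."""
    T = np.exp(-alpha * L)
    T0 = np.exp(-alpha0 * L)
    return (T - T0) / T0   # = exp(-(alpha - alpha0) * L) - 1
\end{verbatim}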
Below we give examples of $DTS(\omega)$ in non-equilibrium situations.
\section{Single particle Green functions and the spectral functions}
\label{gre_sec}
In this section we determine the single-particle
Green functions and their associated spectral
functions. We show that, under conditions specified below, the
convolutions of the spectral functions encountered in Section \ref{susc_sec}
result in effective single-band spectral functions.
\subsection{The single particle Green functions}
Let $\vec A$ be the vector potential which defines the
FIR field. Considering harmonic, translationally
invariant external fields we choose
\begin{equation}
\vec A (t) = -\vec E{\sin (\Omega t)\over\Omega},
\label{vecpot}
\end{equation}
which represents the physical uniform
electric field $\vec E\cos (\Omega t)$.
The Hamiltonian describing the two bands is then
\begin{equation}
H = \sum_{\vec k}\left\{
\epsilon_c[\vec k-e\vec A(t)]a_{\vec k}^\dagger a_{\vec k}
+\epsilon_v[\vec k-e\vec A(t)]b_{\vec k}^\dagger b_{\vec k}\right\}.
\end{equation}
The Dyson equation for the retarded/advanced free-particle Green function is
\begin{equation}
\left( i\hbar{\partial\over\partial t} -
\epsilon_\alpha [\vec k-e\vec A(t)]\right)
g_\alpha^{r/a} (\vec k,t,t') = \delta (t-t')\;,
\end{equation}
where $\alpha\in \{c,v\}$
is the band index. This equation is
readily integrated with the solutions
\begin{equation}
g_\alpha^{r/a}(\vec k,t,t') = \mp{i\over\hbar}\theta (\pm t\mp t')
\exp\Big\{
-{i\over\hbar}\int_{t'}^t{\rm d}s\epsilon_\alpha [\vec k-e\vec A(s)]
\Big\},
\end{equation}
and the spectral function $A=i(g^r-g^a)$ becomes
\begin{equation}
A_\alpha (\vec k,t,t')
={1\over\hbar}
\exp\Big\{
-{i\over\hbar}\int_{t'}^t{\rm d}s\epsilon_\alpha [\vec k-e\vec A(s)]
\Big\}.
\end{equation}
\subsection{Convolution of the spectral functions}
\label{conv_sec}
According to Section \ref{susc_sec}
the susceptibility is obtained through the trace of a convolution of the
spectral functions. We shall now show that within the present model the
convolution of spectral functions results in a new effective single-band
spectral function.
In terms of the center of mass variables, $\tau = t-t'$ and $T = (t+t')/2$,
we write the spectral functions as
\begin{eqnarray}
A_\alpha (\vec k,T,\omega)&=& {1\over\hbar}
\int_{-\infty}^\infty{\rm d}\tau e^{i\omega\tau} \\ &&
\times\exp\Big\{
-{i\over\hbar}
\int_{T-\tau/2}^{T+\tau/2}{\rm d}s\epsilon_\alpha [\vec k-e\vec A(s)]
\Big\}.\nonumber
\end{eqnarray}
Then the convolution,
\begin{equation}\label{convolution1}
b(\vec k,T,\omega_l ) =\hbar
\int_{-\infty}^\infty{{\rm d}\omega\over 2\pi}
A_c(\vec k,T,\omega)A_v(\vec k,T,\omega-\omega_l),
\end{equation}
of the two spectral functions becomes
\begin{eqnarray}
b(\vec k,T,\omega_l ) &=& {1\over\hbar}
\int_{-\infty}^\infty{\rm d}\tau e^{i\omega_l\tau}
\exp\Big\{
-{i\over\hbar}
\int_{T-\tau/2}^{T+\tau/2}{\rm d}s
\\&&\times
\big( \epsilon_c [\vec k-{e}\vec A(s)] -\epsilon_v [\vec k-e\vec A(s)]\big)
\Big\}.\nonumber
\end{eqnarray}
In the case of parabolic bands we have
\begin{equation}
\epsilon_c [\vec k] = {\hbar^2k^2\over 2m_e}\quad , \quad
\epsilon_v [\vec k] = -{\hbar^2k^2\over 2m_h}-\epsilon_g.
\end{equation}
Here $m_e$ is the electron mass, $m_h$ is the positive hole mass and
$\epsilon_g$ is the band gap. We define a single effective band for the
system:
\begin{equation}
\epsilon_{{\rm eff}}[\vec k] \equiv \epsilon_c [\vec k] - \epsilon_v [\vec k]
= {\hbar^2k^2\over 2m_{{\rm eff}}}+\epsilon_g,
\end{equation}
where
$
m_{{\rm eff}} = {m_em_h/( m_e+m_h)}
$
is the effective reduced mass. Thus, the effective band is parabolic like
the original bands but with their reduced mass.
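For example, with typical GaAs parameters $m_e\simeq 0.067\,m_0$ and $m_h\simeq 0.45\,m_0$, one obtains $m_{{\rm eff}}\simeq 0.058\,m_0$.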
It is therefore evident that the convolution,
writing $A_{{\rm eff}}(\vec k,T,\omega_l ) = b(\vec k,T,\omega_l )$,
is a spectral function for a parabolic band,
\begin{eqnarray}
A_{{\rm eff}} (\vec k,T,\omega)&=& {1\over\hbar}
\int_{-\infty}^\infty{\rm d}\tau e^{i\omega\tau} \\ &&
\times\exp\{
-{i\over\hbar}
\int_{T-\tau/2}^{T+\tau/2}{\rm d}s\epsilon_{{\rm eff}}
[\vec k-e\vec A(s)]
\}.\nonumber
\end{eqnarray}
In the case of tight-binding minibands for a type I superlattice
(with period $a$) we write the
bands as
\begin{eqnarray}
\epsilon_c[\vec k] &=& {1\over2}\lambda_c\cos (ak_\parallel )
+{\hbar^2k_\perp^2\over 2m_e}
\\
\epsilon_v[\vec k] &=& -{1\over2}\lambda_v\cos (ak_\parallel )
-{\hbar^2k_\perp^2\over 2m_h} -\epsilon_g,
\end{eqnarray}
where $\lambda_c$ is the electron miniband width, $\lambda_v$ is the
corresponding bandwidth for the holes; $k_\parallel$ is the
(crystal) momentum
component parallel to the growth direction of the superlattice and
$k_\perp$ is the magnitude of the component perpendicular to the growth
direction. The effective band thus becomes
\begin{equation}
\label{sup_dis}
\epsilon_{{\rm eff}}[\vec k] =
{1\over2}\lambda_{{\rm eff}}\cos (ak_\parallel )
+{\hbar^2k_\perp^2\over 2m_{{\rm eff}}}+\epsilon_g,
\end{equation}
where $\lambda_{{\rm eff}} = \lambda_c+\lambda_v$ is the effective
bandwidth, which again is of the same form as the original bands.
This shows that also for superlattices
the convolution (\ref{convolution1}) leads to
an effective spectral function of the original form.
In terms of the effective spectral function the imaginary part of
the susceptibility can be written as
\begin{equation}
{\rm Im}\chi_{lm}^r(T,\omega_l ) = {d_ld_m\over 2\hbar^2}
\sum_{\vec k}A_{{\rm eff}}(\vec k,T,\omega_l),
\end{equation}
where we assume that the dipole matrix elements are $k$-independent.
In equilibrium the trace of the spectral function yields the density of
states for the system.
Analogously, the {\it generalized time-dependent
density of states} (GDOS) \cite{JAU96} is defined as
\begin{equation}
\label{the_trace}
\rho (T,\omega_l) =
{1\over\pi}\sum_{\vec k}A_{{\rm eff}}(\vec k,T,\omega_l),
\end{equation}
allowing us to write the absorption coefficient as
\begin{equation}
\alpha_T(\omega_l)\approx {2\pi^2\omega_l |d|^2\over cn\hbar^2}\rho (T,\omega_l).
\end{equation}
For the remainder of this work we shall investigate the properties of
$\rho (T,\omega_l)$ for various systems.
\subsection{Gauge invariance}
To conclude this section we briefly comment on gauge invariance.
We might have, from the outset, chosen to work within
the gauge-invariant formulation which has been developed in the theory
of high-field transport\cite{DAV88,BER91,HAU96}. Considering translationally
invariant systems, correlation functions are made gauge invariant with the
transformation
\begin{equation}
\vec k\rightarrow \vec k+ {e\over t-t'}\int_{t'}^t{\rm d}s\vec A(s).
\label{ginvtrans}
\end{equation}
However, the absorption coefficient
follows from a trace operation (\ref{the_trace}),
which makes the transformation (\ref{ginvtrans}) redundant: a simple
change of variables when performing the trace
undoes (\ref{ginvtrans}) and proves that our formulation of the absorption
is gauge invariant.
\section{Parabolic bands}
\label{bulk_sec}
In this section we shall investigate the properties of the generalized
density of states for systems which can be effectively described
by Hamiltonians yielding
parabolic bands, be it in one, two or three dimensions.
We write the effective single-band dispersion as
\begin{equation}
\epsilon [\vec k] = {\hbar^2k^2\over 2m_{{\rm eff}}} + \epsilon_g.
\end{equation}
For convenience we write $m=m_{{\rm eff}}$ and set $\epsilon_g =0$ which
shifts the energy axis such that the reference point is the band gap
energy. We calculate the generalized density of states from
\begin{equation}
\rho^{nD} (T,\epsilon ) =
\int_{-\infty}^\infty {\rm d}\tau e^{i\epsilon\tau/\hbar}
\rho^{nD} (T,\tau ),
\label{fourier}
\end{equation}
where
\begin{equation}
\rho^{nD} (T,\tau ) = {1\over\hbar}\int {{\rm d}^n {\vec k}\over (2\pi )^n}
\exp\Big\{
-{i\over\hbar}\int_{T-\tau/2}^{T+\tau/2}{\rm d}s
\epsilon [{\vec k}-e\vec A(s)]
\Big\}.
\end{equation}
With the vector potential (\ref{vecpot}) one obtains explicitly that
\begin{eqnarray}\nonumber
\rho^{nD} (T,\tau ) &=&
{1\over\hbar}\int {{\rm d}^n{\vec k}\over (2\pi)^n}
\exp\Bigl\{-i\big[
(\epsilon_k+\epsilon_f)\tau/\hbar \\
&&+2{e{\vec k}\cdot{\vec E}\over m\Omega^2}
\sin (\Omega T)\sin ({\Omega\tau\over 2}) \nonumber \\
&&- {\omega_f\over\Omega}\cos (2\Omega T)\sin (\Omega\tau )
\big]\Bigr\}.
\end{eqnarray}
Here $\epsilon_k = \hbar^2 k^2/2m$ and we have
defined the fundamental energy scale
\begin{equation}
\epsilon_f = \hbar\omega_f = {e^2E^2\over 4 m\Omega^2}.
\label{omegaf}
\end{equation}
The energy $\epsilon_f$ can be interpreted classically in the following
way: consider a classical particle with charge $e$ and mass $m$
subjected to an electric field
${\vec E}(t)={\vec E}\cos(\Omega t)$. From Newton's equation of
motion one finds that
the mean kinetic energy of such a particle equals $\epsilon_f$.
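Indeed, Newton's equation $m\dot v(t) = eE\cos (\Omega t)$ gives $v(t) = (eE/m\Omega )\sin (\Omega t)$, and thus
\begin{equation}
\nonumber
\left\langle{mv^2(t)\over 2}\right\rangle =
{e^2E^2\over 2m\Omega^2}\left\langle\sin^2 (\Omega t)\right\rangle =
{e^2E^2\over 4m\Omega^2} = \epsilon_f .
\end{equation}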
In order to perform the Fourier-transform (\ref{fourier}) we utilize the
identity \cite{GRADSHTEYN}
\begin{equation}
\label{expansion}
\exp (ix\sin\theta ) =\sum_{n}
J_n(x)\exp (in\theta ),
\end{equation}
where $J_n(x)$ are Bessel functions; we shall henceforth write
$\sum_{n}\equiv \sum_{n = -\infty}^\infty$ to simplify the notation.
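The truncated expansion converges rapidly and is easy to verify numerically; a minimal check using scipy for the Bessel functions (our illustration):
\begin{verbatim}
import numpy as np
from scipy.special import jv

x, theta = 3.0, 0.7
lhs = np.exp(1j * x * np.sin(theta))
n = np.arange(-40, 41)
rhs = np.sum(jv(n, x) * np.exp(1j * n * theta))
assert np.isclose(lhs, rhs)  # holds to machine precision
\end{verbatim}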
The generalized density of states becomes
\begin{eqnarray}\nonumber
\rho^{nD} (T,\epsilon ) &=&
\sum_{l,j}
\int {{\rm d}^n{\vec k}\over (2\pi )^{n-1}}
\delta [\epsilon-\epsilon_k-\epsilon_f +l\hbar\Omega ] \\
&&\times
J_{2j}\Bigl(2{e{\vec k}\cdot{\vec E}\over m\Omega^2}\sin (\Omega T) \Bigr)
\nonumber \\
&&\times
J_{l+j}\Bigl({\omega_f\over\Omega}\cos (2\Omega T)\Bigr)\;.
\label{BULK_GDOS}
\end{eqnarray}
The dimensionality is entirely contained in the remaining
momentum integration $\int{\rm d}^n{\vec k}/(2\pi )^{n-1}$.
We note that Eq.(\ref{BULK_GDOS})
implies a shift of the absorption edge by $\epsilon_f$.
The term $l\hbar\Omega$ in the Dirac-delta function
gives rise to photonic side bands.
Since $J_{2j}(x)$ is an even function, the
density of states is invariant under the transformation
${\vec E}\rightarrow -{\vec E}$, as expected.
In the following subsections we shall consider the 1D, 2D and 3D systems
separately and show how the density of states smoothly evolves from a
low field intensity regime into a high field intensity regime making the
non-linear effects of the THz field apparent.
\subsection{Generalized density of states, 1D}
The density of states for a single-mode 1D-system
(``quantum wire'') in the absence of external
fields is
\begin{equation}
\label{GDOS1D_1}
\rho_0^{1D}(T,\epsilon ) = {1\over\pi}\left({2 m\over\hbar^2}\right)^{1/2}
\epsilon^{-1/2}\theta (\epsilon ).
\end{equation}
In the presence of a external strong oscillating field we get from
(\ref{BULK_GDOS}) that the GDOS is
\begin{eqnarray}\label{GDOS1D}
\rho^{1D}(T,\epsilon ) &=&
\sum_{l,j}\int_{-\infty}^\infty
{\rm d}k\delta [\epsilon-\epsilon_k-\epsilon_f+l\hbar\Omega ]\nonumber
\\
&&\times J_{2j}\left({\sqrt{32\epsilon_f\epsilon_k}\over\hbar\Omega}
\sin (\Omega T)\right)
\nonumber \\
&&\times J_{l+j}\left({\epsilon_f\over\hbar\Omega}\cos (2\Omega T)\right)
\nonumber\\
&=&
\sum_l r^{1D}_l(T,\epsilon-\epsilon_f+l\hbar\Omega )
\rho_0^{1D}(\epsilon-\epsilon_f+l\hbar\Omega)\;,\nonumber\\
\end{eqnarray}
where the side-band weights are
\begin{eqnarray}
r^{1D}_l(T,\epsilon ) &=&
\sum_jJ_{2j}\left({\sqrt{32\epsilon_f\epsilon}\over\hbar\Omega}
\sin (\Omega T)\right)
\nonumber\\
&&\times J_{l+j}\left({\epsilon_f\over\hbar\Omega}\cos (2\Omega T)\right).
\end{eqnarray}
We note that in the limit $\epsilon_f\rightarrow 0$
\begin{equation}
r^{1D}_l(T,\epsilon ) \rightarrow \delta_{l,0}
\end{equation}
and $\rho^{1D}(T,\epsilon ) \rightarrow \rho_0^{1D}(\epsilon )$, as expected.
If $\Omega$ is in the THz regime then most experiments would probe
the time averaged absorption.
The time-average of the side-band weights is calculated from
\begin{eqnarray}
\bar{r}^{1D}_l(\epsilon ) &=&
\sum_j
\int_0^{2\pi}{{\rm d}s\over 2\pi}
J_{2j}\left({\sqrt{32\epsilon_f\epsilon}\over\hbar\Omega}\sin (s )\right)
\nonumber\\
&&\times J_{l+j}\left({\epsilon_f\over\hbar\Omega}\cos (2s )\right) \;,
\end{eqnarray}
which yields for $l$ odd:
\begin{eqnarray}
\bar{r}^{1D}_l(\epsilon ) &=&
\sum_j
\int_0^{2\pi}{{\rm d}s\over 2\pi}
J_{4j+2}\left({\sqrt{32\epsilon_f\epsilon}\over\hbar\Omega}\sin (s )\right)
\nonumber\\
&&\times J_{2j+l+1}\left({\epsilon_f\over\hbar\Omega}\cos (2s )\right)\;,
\end{eqnarray}
and for $l$ even
\begin{eqnarray}
\bar{r}^{1D}_l(\epsilon ) &=&
\sum_j
\int_0^{2\pi}{{\rm d}s\over 2\pi}
J_{4j}\left({\sqrt{32\epsilon_f\epsilon}\over\hbar\Omega}\sin (s )\right)
\nonumber\\
&&\times J_{2j+l}\left({\epsilon_f\over\hbar\Omega}\cos (2s )\right).
\end{eqnarray}
At the onset of side band $l$ the side-band weight is
\begin{equation}
\bar{r}^{1D}_l(0) =
\left\{
\begin{array}{cl}
0 & \mbox{{\rm if $l$ odd}} \\
J_{l/2}^2(\epsilon_f/\hbar\Omega) & \mbox{{\rm if $l$ even,}}
\end{array}
\right.
\end{equation}
where we used the identity \cite{GRADSHTEYN}
\begin{equation}
\int_0^{2\pi}{\rm d}\theta J_{2l}(a\cos\theta) = 2\pi J_l^2(a).
\label{ident1}
\end{equation}
This shows that processes involving an odd number of photons of the THz field
are strongly suppressed.
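The time-averaged weights, and hence $\rho_{{\rm ave}}^{1D}$, are straightforward to evaluate numerically; a minimal sketch of the average of Eq.(\ref{GDOS1D}) over one period (our illustration, with truncated sums and energies in units of $\hbar\Omega$):
\begin{verbatim}
import numpy as np
from scipy.special import jv

def rbar_1d(l, eps, ef, hw=1.0, jmax=30, nt=400):
    """Period-averaged side-band weight r_l(eps)."""
    s = np.linspace(0.0, 2 * np.pi, nt, endpoint=False)
    a = np.sqrt(32.0 * ef * eps) / hw * np.sin(s)
    b = (ef / hw) * np.cos(2 * s)
    acc = sum(jv(2 * j, a) * jv(l + j, b)
              for j in range(-jmax, jmax + 1))
    return acc.mean()

# onset of the l = 2 side band: equals J_1(ef/hw)^2
print(rbar_1d(2, 0.0, ef=1.0), jv(1, 1.0) ** 2)
\end{verbatim}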
In Fig. \ref{1rf3} we illustrate the time-averaged GDOS
$\rho_{{\rm ave}}^{1D}(\epsilon )$ for
a range of values of $\epsilon_f/\hbar\Omega$. In the figures we write $\epsilon_e = \hbar\Omega$.
We observe all the signatures of the
Dynamical Franz-Keldysh effect (DFK) \cite{JAU96}: Stark-like blue shift of the
main absorption-edge by $\epsilon_f$, formation of side-bands at
$\epsilon_g+\epsilon_f\pm N\hbar\Omega$, and finite absorption within the
band-gap.
\subsection{Generalized density of states, 2D}
Several authors have considered fields perpendicular to the quantum well, cf.
\cite{WAG96} and references therein; here we focus on the situation
where the electric field is in the plane of the two-dimensional electron
gas.
In such a system
with no external field the density of states is constant,
\begin{equation}
\rho_0^{\rm 2D}(\epsilon ) = {m\over \pi\hbar^2}
\theta (\epsilon).
\label{rho0_2D}
\end{equation}
With a harmonically oscillating field we obtain from (\ref{BULK_GDOS})
\begin{eqnarray}
\label{GDOS2D_1}
\rho^{2D}(T,\epsilon ) &=&
\sum_{l,j}\int_{0}^\infty
{{\rm d}k}\,k
\int_0^{2\pi}{{\rm d}\theta\over 2\pi}
\delta [\epsilon-\epsilon_k-\epsilon_f+l\hbar\Omega ]
\nonumber\\
&&\times J_{2j}\left({\sqrt{32\epsilon_f\epsilon_k}\over\hbar\Omega}
\cos\theta\sin (\Omega T)\right)
\nonumber \\
&&\times J_{l+j}\left({\epsilon_f\over\hbar\Omega}\cos (2\Omega T)\right).
\end{eqnarray}
The integrals in (\ref{GDOS2D_1}) are again performed
using (\ref{ident1}); writing
the result in the side-band picture, we obtain \cite{cmp_note}
\begin{equation}
\label{GDOS2D}
\rho^{2D}(T,\epsilon ) =
\sum_l r^{2D}_l(T,\epsilon-\epsilon_f+l\hbar\Omega )
\rho_0^{2D}(\epsilon-\epsilon_f+l\hbar\Omega)
\end{equation}
where the side-band weights are
\begin{eqnarray}
r^{2D}_l(T,\epsilon ) &=&\sum_j
J_{j}^2\left({\sqrt{32\epsilon_f\epsilon}\over\hbar\Omega}
\sin (\Omega T)\right) \\
&&\times J_{l+j}({\epsilon_f\over\hbar\Omega}\cos (2\Omega T)). \nonumber
\end{eqnarray}
Arguments identical to those in the 1D case lead to
\begin{equation}
\bar{r}^{2D}_l(0) =
\left\{
\begin{array}{cl}
0 & \mbox{{\rm if $l$ odd}} \\
J_{l/2}^2(\epsilon_f/\hbar\Omega) & \mbox{{\rm if $l$ even,}}
\end{array}
\right.
\end{equation}
i.e., the same result as in the 1D case.
As in the 1D case we have numerically investigated the time averaged GDOS
$
\rho_{{\rm ave}}^{2D}(\epsilon ) = {\Omega\over 2\pi}\int_0^{2\pi /\Omega}
{\rm d}T\,\rho^{\rm 2D}(T,\epsilon ).
$
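A minimal numerical sketch of this average (in Python, assuming NumPy and
SciPy; the truncation orders, the midpoint rule for the $T$-average, and the
units, in which $\rho_0^{2D}=m/\pi\hbar^2=1$, are our own choices):
\begin{verbatim}
import numpy as np
from scipy.special import jv

def r_2d(l, phase, eps, eps_f, hw, jmax=30):
    """Side-band weight r_l^{2D}(T, eps); phase = Omega*T, eps >= 0."""
    a = np.sqrt(32.0 * eps_f * eps) / hw
    b = eps_f / hw
    return sum(jv(j, a * np.sin(phase))**2 * jv(l + j, b * np.cos(2.0 * phase))
               for j in range(-jmax, jmax + 1))

def rho_ave_2d(eps, eps_f, hw, lmax=8, nT=64):
    """Time-averaged 2D GDOS, in units of rho_0^{2D} = m/(pi*hbar^2)."""
    phases = 2.0 * np.pi * (np.arange(nT) + 0.5) / nT  # midpoint rule
    total = 0.0
    for l in range(-lmax, lmax + 1):
        e = eps - eps_f + l * hw
        if e >= 0.0:                                   # theta(e) of rho_0^{2D}
            total += np.mean([r_2d(l, p, e, eps_f, hw) for p in phases])
    return total

print(rho_ave_2d(eps=1.2, eps_f=0.5, hw=1.0))
\end{verbatim}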
In Fig. \ref{2rf1} we illustrate $\rho_{{\rm ave}}^{2D}(\epsilon )$
for various values of $\epsilon_f/\hbar\Omega$. Again, as in the
1D case, we observe all the characteristics of the DFK \cite{JAU96}.
Finally, Fig. \ref{2df1}
shows the differential transmission signal (DTS).
\subsection{Generalized density of states, 3D}
Absorption in bulk semiconductors subject to THz radiation
was considered long ago by Yacoby \cite{YAC68}.
He studied transition rates between bands by investigating
approximate solutions to the corresponding
time-dependent Schr\"{o}dinger equation. He concluded that
transitions occur in the gap and noted reduced rates
above the gap, both in agreement with the present work.
General quantitative results, however, were not presented.
The 3D field-free density of states is
\begin{equation}\nonumber
\rho^{\rm 3D}_0(\epsilon ) = {1\over 2\pi^2}\Bigl({2m\over\hbar^2}\Bigr)^{3/2}
\theta (\epsilon)\epsilon^{1/2}.
\end{equation}
With the external field the density of states becomes
\begin{equation}
\rho^{\rm 3D}(T,\epsilon ) =
\sum_l r^{3D}_l(T,\epsilon-\epsilon_f+l\hbar\Omega )
\rho_0^{3D}(\epsilon-\epsilon_f+l\hbar\Omega)\;,
\end{equation}
with the side-band weights
\begin{eqnarray}
r^{3D}_l(T,\epsilon )
&=&
\sum_j
J_{l+j}\left({\epsilon_f\over\hbar\Omega}\cos (2\Omega T)\right)
\nonumber\\&& \times
\int_0^1{\rm d}\eta
J_{2j}\left({\sqrt{32\epsilon_f\epsilon}\over\hbar\Omega}
\sin (\Omega T)\eta \right).
\end{eqnarray}
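Compared to the 1D case, the only new ingredient is the $\eta$-integral,
which smears the argument of $J_{2j}$ over the polar angle; a minimal
numerical sketch (in Python, assuming SciPy; truncation and parameter values
are ours):
\begin{verbatim}
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

def r_3d(l, phase, eps, eps_f, hw, jmax=30):
    """3D side-band weight r_l^{3D}; phase = Omega*T, eps >= 0."""
    a = np.sqrt(32.0 * eps_f * eps) / hw
    b = eps_f / hw
    total = 0.0
    for j in range(-jmax, jmax + 1):
        # the eta-integral smears the argument of J_{2j} over [0, 1]
        eta_int, _ = quad(lambda x: jv(2 * j, a * np.sin(phase) * x), 0.0, 1.0)
        total += jv(l + j, b * np.cos(2.0 * phase)) * eta_int
    return total

print(r_3d(0, phase=0.8, eps=0.5, eps_f=1.0, hw=1.0))
\end{verbatim}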
Again, we have
\begin{equation}
\bar{r}^{3D}_l(0) =
\left\{
\begin{array}{cl}
0 & \mbox{{\rm if $l$ odd}} \\
J_{l/2}^2(\epsilon_f/\hbar\Omega) & \mbox{{\rm if $l$ even.}}
\end{array}
\right.
\end{equation}
In Fig. \ref{3rf1}
we illustrate
$\rho_{{\rm ave}}^{3D}(\epsilon )$ for various values of
$\epsilon_f/\hbar\Omega$;
the DTS signal for the 3D case
is shown in Fig. \ref{3df3}.
\subsection{Summary}
The main physical consequences of the THz field on linear
absorption spectrum for systems with parabolic dispersion
can be summarized as follows.
The dynamical modifications of the absorption spectrum
(i) appear near the absorption
edge;
(ii) extend over a few $\epsilon_e = \hbar\Omega$ around the edge; and
(iii) are most pronounced when $\omega_f /\Omega$ is of order unity.
If $\Omega$ is in the THz regime,
and fields like those attainable with free electron lasers are considered
\cite{GUI93,UNT96}, then $\omega_f /\Omega\approx 1$ and the fine structure
extends over a range of several meV. Consequently, an experimental
verification of these effects should be possible.
\section{SUPERLATTICES}
\label{sup_sec}
According to the semiclassical Bloch-Boltzmann theory of transport,
a uniform electric field causes charge carriers in a periodic potential
to execute a time-periodic motion with
frequency $\omega_B=eaE/\hbar$, where $a$ is the lattice periodicity.
Conditions for observing these Bloch oscillations are much more favorable
in superlattices than in ordinary bulk materials, and recent years have
witnessed an intense research effort culminating in the observation
of Bloch oscillations \cite{WAS93}.
In ac-fields a phenomenon called dynamical localization may occur:
if the parameter $\gamma\equiv aeE_\parallel/\hbar\Omega$ equals
a zero of $J_0$, the average velocity vanishes \cite{dynloc}.
In this section we investigate how dynamical localization
\cite{HOL94,dynloc,ZHA94,HOL95,ROT96} manifests itself in the free particle
absorption spectra. Recently, Meier et al. \cite{MEI95}
presented results of a detailed
numerical solution of the semiconductor Bloch equations,
including excitonic effects,
and found that at dynamical localization the relative motion
exciton wave function changes from a 3D-character (i.e., localized
in $k_z$-space) to a 2D-structure (extended in $k_z$-space),
and below we shall illustrate how the same phenomenon reflects itself
in the present analytic study of free-particle properties.
\subsection{Generalized Density of States}
The starting point for our analysis is the effective dispersion
(\ref{sup_dis}) introduced in Section \ref{conv_sec}, which we reproduce here
for the convenience of the reader,
\begin{equation}
\epsilon_{{\rm eff}}[\vec k] =
{1\over2}\lambda_{{\rm eff}}\cos (ak_\parallel )
+{\hbar^2k_\perp^2\over 2m_{{\rm eff}}}+\epsilon_g.
\nonumber
\end{equation}
Henceforth we put $\epsilon_g=0$ and drop the ``eff'' subscript. We
consider the effect of the THz field described by the vector potential
${\vec A}(t) = -{\vec E}\sin (\Omega t)/\Omega$ where
${\vec E} = (0,0,E_\parallel )$.
In accordance with Section \ref{gre_sec}
we calculate the generalized density of states from
\begin{eqnarray}\label{rhoslTtau}
\rho^{sl} (T,\tau ) &=&
{m\over 2\pi^2\hbar^3}\int_0^\infty {\rm d}\epsilon_\perp\int_0^{2\pi /a}
{\rm d}k_\parallel e^{-i\epsilon_\perp\tau/\hbar}
\nonumber\\
&&\times\exp[I(T,\tau)]\;,\\
I(T,\tau)&=&-i{\lambda\over 2\hbar}
\int_{T-\tau/2}^{T+\tau/2}{\rm d}s\cos [ak_\parallel+\gamma\sin (\Omega s)].
\end{eqnarray}
We evaluate the integral
within the exponential using the identity (\ref{expansion}) with the result
\begin{eqnarray}
I(T,\tau)&=&-i{\lambda\over 2\hbar\Omega}
\Big\{
\cos (ak_\parallel)[{\cal C}(\Omega\tau )+J_0(\gamma )\,\Omega\tau]
\nonumber\\
&\quad&+\sin (ak_\parallel){\cal S}(\Omega\tau )
\Big\}\;,
\end{eqnarray}
where
\begin{eqnarray}
{\cal C}(z) &=& 2\sum_{n=1}^\infty
{J_{2n}(\gamma )\cos [2n\Omega T]\over n}\sin [n z] \\
{\cal S}(z) &=& 2\sum_{n=1}^\infty {J_{2n-1}(\gamma )\sin [(2n-1)\Omega T]
\over n-1/2}\nonumber \\&&\times
\sin [(n-1/2)z].
\end{eqnarray}
We have suppressed the explicit dependence on $\Omega T$ and $\gamma$ in
${\cal C}(z)$ and ${\cal S}(z)$ for simplicity. Note that both
of these functions are anti-symmetric in $z$, i.e.,
${\cal C}(-z) =-{\cal C}(z)$ and ${\cal S}(-z) =-{\cal S}(z)$. The identity
\cite{GRADSHTEYN}
\begin{equation}
\int_0^{2\pi}{{\rm d}\theta\over 2\pi}
\exp\left\{
ia\cos\theta+ib\sin\theta
\right\} = J_0(\sqrt{a^2+b^2})
\end{equation}
is the key to the next step in the evaluation of (\ref{rhoslTtau})
and allows us to write
\begin{equation}\label{rhoTtau}
\rho^{sl} (T,\tau )=
{m\over 2\pi^2\hbar^3a}\int_0^\infty{\rm d}
\epsilon_\perp\,e^{-i\epsilon_\perp\tau /\hbar}{\cal K}(\Omega\tau )
\end{equation}
where we have defined the kernel
\begin{equation}
\label{Kkernel}
{\cal K}(z)=
J_0\left(
{\lambda\over 2\hbar\Omega}\sqrt{ ({\cal C}(z)+J_0(\gamma )z)^2+{\cal S}^2(z) }
\right).
\end{equation}
Here too we have suppressed the explicit dependence on $\Omega T$ and
$\gamma$. In the sense of distributions, we can write
\begin{equation}
\int_0^\infty{\rm d}\epsilon_\perp\,e^{-i\epsilon_\perp\tau/\hbar} =
{-i\hbar\over\tau -i0^+},
\end{equation}
where $0^+$ indicates a positive infinitesimal. This expression allows us
to compute the Fourier transform of (\ref{rhoTtau}):
\begin{equation}
\label{rhosl}
\rho^{sl} (T,\epsilon ) = {m\over 2\pi^2\hbar^2 a}\left(
\int_{-\infty}^\infty{\rm d}\tau{\sin (\epsilon\tau/\hbar )
\over\tau}{\cal K}(\Omega\tau )+\pi
\right).
\end{equation}
In what follows we shall examine several properties of this result.
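Before doing so, we note that the kernel (\ref{Kkernel}) is easy to evaluate
numerically; a minimal sketch (in Python, assuming NumPy and SciPy; the
series are truncated at a finite order of our choosing):
\begin{verbatim}
import numpy as np
from scipy.special import jv

def C_series(z, phase, gamma, nmax=80):
    """C(z); phase = Omega*T, series truncated at nmax."""
    n = np.arange(1, nmax + 1)
    return 2.0 * np.sum(jv(2 * n, gamma) * np.cos(2 * n * phase)
                        * np.sin(n * z) / n)

def S_series(z, phase, gamma, nmax=80):
    """S(z); phase = Omega*T."""
    n = np.arange(1, nmax + 1)
    return 2.0 * np.sum(jv(2 * n - 1, gamma) * np.sin((2 * n - 1) * phase)
                        * np.sin((n - 0.5) * z) / (n - 0.5))

def kernel(z, phase, gamma, lam_over_hw):
    """K(z), with lam_over_hw = lambda/(hbar*Omega)."""
    c = C_series(z, phase, gamma) + jv(0, gamma) * z
    s = S_series(z, phase, gamma)
    return jv(0, 0.5 * lam_over_hw * np.hypot(c, s))

# consistency check: gamma -> 0 reproduces the field-free kernel
# J_0(lambda*z/(2*hbar*Omega)) discussed in the next subsection
z = 1.3
print(kernel(z, 0.7, 1e-9, 3.4), jv(0, 0.5 * 3.4 * z))
\end{verbatim}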
\subsection{The field-free limit}
In the limit of vanishing field strength we have
\begin{equation}
\lim_{\gamma\rightarrow 0}{\cal K}(z) =
J_0\left({\lambda z\over 2\hbar\Omega}\right).
\label{gammalimit}
\end{equation}
Using the identity \cite{GRADSHTEYN}
\begin{equation}
\int_{-\infty}^{\infty}{{\rm d}x\over x}\,\sin (\beta x)J_0(x) =
\left\{
\begin{array}{ll}
\pi &[\beta>1]\\
2\arcsin\beta & [\beta^2<1] \\
-\pi &[\beta<-1]
\end{array}
\right.
\end{equation}
we obtain the density of states for a tight-binding superlattice,
\begin{equation}
\rho^{sl} (\epsilon ) =
{m\over\pi\hbar^2 a}\left\{
\begin{array}{ll}
1&[\epsilon>\lambda/2]\\
{1\over\pi}\arcsin (2\epsilon/\lambda )+1/2&[|\epsilon |\le\lambda/2] \\
0 &[\epsilon<-\lambda/2]
\end{array}
\right.
\end{equation}
which is familiar.
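The arcsin profile provides a convenient numerical check; a minimal sketch
(in Python, assuming SciPy; the $\tau$-integral is truncated at a large
cutoff, so the comparison is only approximate):
\begin{verbatim}
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

def dos_profile(beta, cutoff=2000.0):
    """rho^{sl}(eps) in units of m/(pi*hbar^2*a), with beta = 2*eps/lambda;
    the integrand is even in tau, so we integrate over [0, cutoff]."""
    val, _ = quad(lambda x: np.sin(beta * x) * jv(0, x) / x,
                  1e-12, cutoff, limit=2000)
    return (2.0 * val + np.pi) / (2.0 * np.pi)

for beta in (-1.5, -0.5, 0.0, 0.5, 1.5):
    exact = 0.0 if beta < -1 else (
        1.0 if beta > 1 else np.arcsin(beta) / np.pi + 0.5)
    print(beta, dos_profile(beta), exact)
\end{verbatim}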
\subsection{The static limit}
In the limit $\Omega\rightarrow 0$ one obtains
\begin{equation}
\lim_{\Omega\rightarrow 0}{\cal K}(\Omega\tau ) =
J_0\left({\lambda\over 2\hbar\omega_B}\sin (\omega_B\tau )\right).
\label{Omegalimit}
\end{equation}
Recalling the identities \cite{GRADSHTEYN}
\begin{equation}
J_0(z\sin\alpha ) = \sum_jJ_j^2(z/2)\cos (2j\alpha )
\end{equation}
and
\begin{equation}
\int_{-\infty}^\infty{\rm d}x\cos (\alpha x)\sin (\beta x)/x = \pi[
\theta (\alpha+\beta)-\theta (\alpha-\beta)
]\;,
\end{equation}
we obtain the density of states
\begin{equation}
\rho^{sl} (\epsilon) = {m\over\pi\hbar^2 a}
\sum_jJ^2_j\left({\lambda\over 4\hbar\omega_B}\right)
\theta (\epsilon + 2j\hbar\omega_B).
\label{static_rhosl}
\end{equation}
This result coincides with
the one obtained in Refs.~\cite{BLE88,VOI88}, which study both theoretically and
experimentally the effects of strong static fields on the absorption in
superlattice structures. They conclude that the step-like behavior of
(\ref{static_rhosl}) is due to localization in the growth direction
(Wannier-Stark localization).
\subsection{Dynamic localization}
As seen in the previous subsection the signature of localization in the
growth direction in a superlattice is a step-like behavior of the
density of states. This is intuitively clear since the density of states
for a 2D system (\ref{rho0_2D}) is constant. We therefore expect the
density of states to be composed of a step function for each well the
states extend into, with weight proportional to the occupation of that
particular well. We shall now
show that if $J_0(\gamma ) =0$, i.e., the conditions for dynamical
localization are met, then the GDOS indeed is of this kind.
The argument runs as follows.
If $J_0(\gamma ) =0$ then the kernel (\ref{Kkernel}) is periodic in $z$
with period $2\pi$. Furthermore, the kernel is an even function:
${\cal K}(z) = {\cal K}(-z)$. We can therefore formally write
$$
{\cal K}(\Omega\tau ) = \sum_j{\cal K}_j \cos(j\Omega\tau),
$$
which is of the same functional form as in the static limit,
Eq.(\ref{Omegalimit}).
Consequently, we
may conclude that the generalized density of states must be of the form
\begin{equation}\label{rhodynloc}
\rho^{sl}_{dyn.loc}(\epsilon ) = {m\over \pi\hbar^2 a}\sum_j{\cal K}_j
\theta (\epsilon +j\hbar\Omega),
\end{equation}
i.e., it is a superposition of step-functions.
The weights ${\cal K}_j$, however,
must be evaluated numerically, and examples are
given in the next section.
It is important to note that the ``step-length'' in the
ac-case is determined by the frequency of the THz-field, in
contrast to the static case, where it is determined by the
field strength. The field strength enters the density of
states only through the weight-factors ${\cal K}_j$.
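The ${\cal K}_j$ are simply the cosine Fourier coefficients of the
($2\pi$-periodic) kernel. A minimal sketch (in Python, assuming SciPy; we put
$\gamma$ at the first zero of $J_0$ so that the linear term drops out of the
kernel, we adopt the standard cosine-series convention for the coefficients,
and we fix $\Omega T$ for illustration, whereas the figures below involve a
further time average):
\begin{verbatim}
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

GAMMA = jn_zeros(0, 1)[0]                 # first zero of J_0: 2.4048...

def kernel_dl(z, phase, lam_over_hw, nmax=80):
    """K(z) at dynamical localization, J_0(gamma) = 0; phase = Omega*T."""
    n = np.arange(1, nmax + 1)
    C = 2.0 * np.sum(jv(2 * n, GAMMA) * np.cos(2 * n * phase)
                     * np.sin(n * z) / n)
    S = 2.0 * np.sum(jv(2 * n - 1, GAMMA) * np.sin((2 * n - 1) * phase)
                     * np.sin((n - 0.5) * z) / (n - 0.5))
    return jv(0, 0.5 * lam_over_hw * np.hypot(C, S))

def K_weight(j, phase, lam_over_hw):
    """Cosine Fourier coefficient K_j of the 2*pi-periodic kernel."""
    val, _ = quad(lambda z: kernel_dl(z, phase, lam_over_hw) * np.cos(j * z),
                  0.0, 2.0 * np.pi, limit=400)
    return val * ((1.0 if j == 0 else 2.0) / (2.0 * np.pi))

print([round(K_weight(j, 0.7, 3.4), 4) for j in range(4)])
\end{verbatim}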
The result (\ref{rhodynloc}) suggests that it should be possible to probe
dynamic localization by photo-absorption: when the appropriate conditions
are approached, the absorption coefficient should change
qualitatively from a generic smooth behavior to a sharply defined step-like
structure. The number of distinct steps appearing in the spectrum is determined
by the ratio $\lambda/\hbar\Omega$, which is also a measure of
the number of wells
the localized states span. This is fully consistent with
the results of
\cite{MEI95}, who considered a miniband of width $\lambda = 21$~meV
and a FIR frequency $\hbar\Omega = 20$~meV; this corresponds to a single step,
and hence to the maximum binding energy of the corresponding
exciton, which would be mostly confined to a single well.
\subsection{Numerical results}
We again focus our numerical study on the
time averaged generalized density of states
$
\rho_{{\rm ave}}^{sl}(\epsilon ) = {\Omega\over 2\pi}\int_0^{2\pi /\Omega}
{\rm d}T\,\rho^{sl}(T,\epsilon ).
$
In Figs. \ref{slrf4} and \ref{slrf5} we show the absorption spectra
for a superlattice
with effective bandwidth $\lambda = 3.4\hbar\Omega$.
The numerical results confirm the expectations of
the previous section: when the parameter $\gamma=aeE_\parallel/\hbar\Omega$
approaches the first zero of $J_0$, which occurs at the
argument value $2.4048\ldots$, the gradually evolving
replicas of the zero-field density of states converge
into plateaus of finite width. The exactness of the plateaus
can be judged from Fig. \ref{slrf5}: at $\gamma=2.4048\ldots$ the
line joining the steps appears nearly vertical.
Finally, in Fig.~\ref{sldf2} we show the DTS spectra
under dynamical localization (DL) and non-DL conditions.
There are two characteristic differences: (i) outside
the zero-field miniband, DL leads to a step-like structure,
in contrast to the smooth behavior found otherwise, and
(ii) inside the miniband, the DL spectrum distinguishes itself
by its sharp, jagged structure.
\section{Conclusions}
\label{con_sec}
We have presented a theoretical formulation of linear photo-absorption
for samples under strongly non-equilibrium conditions. Typical
non-equilibrium agents would be THz-radiation from free-electron lasers,
or ultra-short pulse measurements of transient effects.
In the present work noninteracting carriers are considered, but
the formulation allows an extension to Coulomb interactions, which
will be addressed in our future work. Two central concepts emerge
from our analysis: a generalization of the density of states into
time-dependent conditions [GDOS defined in Eq.(\ref{the_trace})],
and photonic side-bands, which form a convenient framework for
discussing the various features of the absorption spectra.
\begin{acknowledgements}
We wish to acknowledge useful discussions with Ben Hu, Andreas Wacker
and Bj\"{o}rn Birnir. Furthermore we wish to thank Fausto Rossi
and Junichiro Kono for
valuable comments on an earlier version of the manuscript
and Kent Nordstrom for sharing details of his
unpublished data\cite{NOR97}.
This work was partially supported by a LACOR grant no.
4157U0015-3A from Los Alamos National Laboratory.
\end{acknowledgements}
\begin{figure}
\epsfxsize=8.5cm
\hspace{0.5cm}\epsfbox{joh_f1.ps}
\caption{
Time averaged generalized density of states for 1D-system
shown for a range of FIR-intensities, $\epsilon_f/\hbar\Omega \in [0,2]$.
The band edge and the side-bands display a blue-shift,
which scales linearly with the
intensity. Absorption extends
below the bandgap (dynamical Franz-Keldysh effect).
}
\label{1rf3}
\end{figure}
\begin{figure}
\epsfxsize=8.5cm
\hspace{0.5cm}\epsfbox{joh_f2.ps}
\caption{
The time averaged GDOS for a 2D-system for a range of
FIR-intensities,
$\epsilon_f/\hbar\Omega = (0.2, 0.5, 0.8, 1.1, 1.4, 1.7, 2.0)$.
At low intensities one observes a Stark-like
blue-shift of the band edge as well
as finite absorption within the band gap.
With increasing
intensity side bands emerge
at $\epsilon=\epsilon_g+\epsilon_f\pm 2\hbar\Omega$.
}
\label{2rf1}
\end{figure}
\begin{figure}
\epsfxsize=8.5cm
\hspace{0.5cm}\epsfbox{joh_f3.ps}
\caption{
The DTS signal for a 2D-system
for a range of intensities,
$\epsilon_f/\hbar\Omega = (0.2, 0.5, 0.8, 1.1, 1.4, 1.7, 2.0)$.
}
\label{2df1}
\end{figure}
\begin{figure}
\epsfxsize=8.5cm
\hspace{0.5cm}\epsfbox{joh_f4.ps}
\caption{
The time averaged generalized density of
states for a 3D-system for a range of FIR-intensities,
$\epsilon_f/\hbar\Omega = (0.0, 0.2, 0.5, 0.8, 1.1, 1.4, 1.7, 2.0)$.
}
\label{3rf1}
\end{figure}
\begin{figure}
\epsfxsize=8.5cm
\hspace{0.5cm}\epsfbox{joh_f5.ps}
\caption{
DTS-spectrum for a 3D-system.
}
\label{3df3}
\end{figure}
\begin{figure}
\epsfxsize=8.5cm
\hspace{0.5cm}\epsfbox{joh_f6.ps}
\caption{
The time-averaged GDOS
for a superlattice with effective band-width
$\lambda = 3.4 \hbar\Omega$.
Dots: $\gamma = 1.5$; dashes: $\gamma=2.4$.
The proximity of dynamical localization, occurring
at $\gamma=2.4048\ldots$, is reflected in the step-wise
structure of the dashed curve.}
\label{slrf4}
\end{figure}
\begin{figure}
\epsfxsize=8.5cm
\hspace{0.5cm}\epsfbox{joh_f7.ps}
\caption{
The time averaged GDOS
for a superlattice with $\lambda = 3.4 \hbar\Omega$
as a function of FIR-intensity.
At low $\gamma$ side-bands are observed, which
merge at $\gamma = 2.4048\ldots$,
corresponding to dynamical localization. }
\label{slrf5}
\end{figure}
\begin{figure}
\epsfxsize=8.5cm
\hspace{0.5cm}\epsfbox{joh_f8.ps}
\caption{
The differential transmission signal for a superlattice
structure with effective
mini-bandwidth $\lambda = 8 \hbar\Omega$.
Solid line: $\gamma =2.4$; dashes:
$\gamma =2.0$. Outside the zero-field
mini-band a step-like behavior is seen, while
inside the mini-band DTS for dynamical localization develops
a jagged shape in contrast to the smooth behavior for the
extended state.
}
\label{sldf2}
\end{figure}
For a subset $S$ of the abelian group $G$, we denote by $\CP(S)$ the
addition Cayley graph induced by $S$ on $G$; recall that this is the
undirected graph with the vertex set $G$ and the edge set
$\{(g_1,g_2)\in G\times G\colon g_1+g_2\in S\}$. Note that $S$ is
not assumed to be symmetric, and that if $S$ is finite, then
$\CP(S)$ is regular of degree $|S|$ (if one considers each loop to
contribute $1$ to the degree of the corresponding vertex).
Twins of the usual Cayley graphs, addition Cayley graphs (also called
\emph{sum graphs}) have received much less attention in the literature; indeed,
\refb{a} (independence number), \refb{cgw} and \refb{l2} (hamiltonicity),
\refb{chu} (expander properties), and \refb{g} (clique number) is a nearly
complete list of papers, known to us, where addition Cayley graphs are
addressed. To some extent, this situation may be explained by the fact that
addition Cayley graphs are rather difficult to study. For instance, it is
well-known and easy to prove that any connected Cayley graph on a finite
abelian group with at least three elements is hamiltonian, see \refb{mr};
however, apart from the results of \refb{cgw}, nothing seems to be known on
hamiltonicity of \emph{addition} Cayley graphs on finite abelian groups.
Similarly, the connectivity of a Cayley graph on a finite abelian group is
easy to determine, while determining the connectivity of an \emph{addition}
Cayley graph is a non-trivial problem, to the solution of which the present
paper is devoted. The reader will see that investigating this problem leads
to studying rather involved combinatorial properties of the underlying group.
\section{Preliminaries and summary of results}\label{s:intro}
Let $\Gamma$ be a graph on the finite set $V$. The (vertex) connectivity of
$\Gamma$, denoted by $\kappa(\Gamma)$, is the smallest number of vertices which are
to be removed from $V$ so that the resulting graph is either disconnected or
has only one vertex. Clearly, if $\Gamma$ is complete, then $\kappa(\Gamma)=|V|-1$,
while otherwise we have $\kappa(\Gamma)\le|V|-2$, and $\kappa(\Gamma)$ can be
alternatively defined as the size of a minimum vertex cut of $\Gamma$. (A
complete graph does not have vertex cuts.) Evidently, vertex cuts and
connectivity of a graph are not affected by adding or removing loops.
Our goal is to determine the connectivity of the addition Cayley graphs,
induced on finite abelian groups by their subsets, and accordingly we use
additive notation for the group operation. In particular, for subsets $A$
and $B$ of an abelian group, we write
$$ A\pm B:=\{a\pm b\colon a\in A,\,b\in B\}, $$
which is abbreviated by $A\pm b$ in the case where $B=\{b\}$ is a singleton
subset.
For the rest of this section, we assume that $S$ is a subset of the finite
abelian group $G$.
It is immediate from the definition that, for a subset $A\subseteq G$, the
neighborhood of $A$ in $\CP(S)$ is the set $S-A$, and it is easy to derive
that $\CP(S)$ is complete if and only if either $S=G$, or $S=G\setminus\{0\}$ and
$G$ is an elementary abelian $2$-group (possibly of zero rank). Furthermore,
it is not difficult to see that $\CP(S)$ is connected if and only if $S$ is
not contained in a coset of a proper subgroup of $G$, with the possible
exception of the non-zero coset of a subgroup of index $2$; this is
\cite[Proposition~1]{b:l2}. Also, since $\CP(S)$ is $|S|$-regular, we have
the trivial bound $\kappa(\CP(S))\le|S|$.
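These notions are easily experimented with on small groups. The following is
a small computational sketch (in Python, assuming the \texttt{networkx}
package; the group and the set $S$ are purely illustrative) which builds
$\CP(S)$ on ${\mathbb Z}/n{\mathbb Z}$ and computes its connectivity; loops
are omitted since, as observed above, they affect neither vertex cuts nor
connectivity.
\begin{verbatim}
import itertools
import networkx as nx

def addition_cayley_graph(n, S):
    """Cay+(Z/nZ, S): vertices 0..n-1, edge {g1, g2} iff g1+g2 mod n is in S.
    Loops (2g in S) are skipped; they do not affect connectivity."""
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for g1, g2 in itertools.combinations(range(n), 2):
        if (g1 + g2) % n in S:
            G.add_edge(g1, g2)
    return G

G = addition_cayley_graph(12, {1, 2, 5})
print(nx.node_connectivity(G))
\end{verbatim}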
If $H$ is a subgroup of $G$ and $g$ is an element of $G$ with $2g\in S+H$,
then $g+H\subseteq S-(g+H)$; consequently, the boundary of $g+H$ in $\CP(S)$ has
size
$$ |(S-(g+H))\setminus(g+H)| = |S+H|-|H|. $$
Assuming in addition that $S+H\neq G$, we obtain $(S-(g+H))\cup(g+H)=S+H-g\ne
G$, implying $\kappa(\CP(S))\le|S+H|-|H|$. Set
$$ 2\ast G := \{ 2g\colon g\in G \}, $$
so that the existence of $g\in G$ with $2g\in S+H$ is equivalent to the
condition $(S+2\ast G)\cap H\neq\varnothing$. Motivated by the above observation, we
define
$$ {\mathcal H}_G(S) := \{ H\le G\colon (S+2\ast G)\cap H\neq\varnothing,\, S+H\neq G \} $$
and let
$$ \eta_G(S):= \min \{ |S+H|-|H| \colon H\in{\mathcal H}_G(S) \}. $$
In the latter definition and throughout, we assume that the minimum of an
empty set is infinite, and we allow comparison between infinity and real
numbers according to the ``na{\i}ve'' rule. Thus, for instance, we have
$\kappa(\CP(S))\le\eta_G(S)$ even if ${\mathcal H}_G(S)$ is vacuous.
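For cyclic groups, $\eta_G(S)$ is equally easy to compute by enumerating
subgroups directly; a minimal sketch (in Python; the helper names are ours)
is as follows.
\begin{verbatim}
def subgroups_Zn(n):
    """The subgroups of Z/nZ: one subgroup of order m per divisor m of n."""
    return [set(range(0, n, n // m)) for m in range(1, n + 1) if n % m == 0]

def eta_Zn(n, S):
    """eta_G(S) for G = Z/nZ; returns float('inf') if H_G(S) is empty."""
    two_star_G = {(2 * g) % n for g in range(n)}
    S_plus_2G = {(s + t) % n for s in S for t in two_star_G}
    best = float('inf')
    for H in subgroups_Zn(n):
        S_plus_H = {(s + h) % n for s in S for h in H}
        if S_plus_2G & H and len(S_plus_H) < n:
            best = min(best, len(S_plus_H) - len(H))
    return best

print(eta_Zn(12, {1, 2, 5}))   # compare with the connectivity computed above
\end{verbatim}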
Another important family of sets with small boundary is obtained as
follows. Suppose that the subgroups $L\le G_0\le G$ and the element
$g_0\in G_0$ satisfy
\begin{itemize}
\item[(i)] $|G_0/L|$ is even and larger than $2$;
\item[(ii)] $S+L=(G\setminus G_0)\cup(g_0+L)$.
\end{itemize}
Fix $g\in G_0\setminus L$ with $2g\in L$ and consider the set
$A:=(g+L)\cup(g+g_0+L)$. The neighborhood of this set in $\CP(S)$ is
$$ S-A = (G\setminus G_0) \cup (g+L) \cup (g+g_0+L) = (G\setminus G_0) \cup A, $$
whence $(S-A)\cup A\neq G$ and $|(S-A)\setminus A|=|G\setminus
G_0|=|S+L|-|L|$. Consequently, $\kappa(\CP(S))\le|S+L|-|L|$. With this
construction in mind, we define ${\mathcal L}_G(S)$ to be the family of all
those subgroups $L\le G$ for which a subgroup $G_0\le G$, lying
above $L$, and an element $g_0\in G_0$ can be found so that
properties (i) and (ii) hold, and we let
$$ \lambda_G(S):= \min \{ |S+L|-|L| \colon L\in{\mathcal L}_G(S) \}. $$
Thus, $\kappa(\CP(S))\le\lambda_G(S)$.
Our first principal result is the following.
\begin{theorem}\label{t:main1}
If $S$ is a proper subset of the finite abelian group $G$, then
$$ \kappa(\CP(S)) = \min \{ \eta_G(S), \lambda_G(S), |S| \}. $$
\end{theorem}
Let $\Gamma$ be a graph on the vertex set $V$. We say that the
non-empty subset $V_0\subset V$ is a \emph{fragment} of $\Gamma$ if
the neighborhood $N(V_0)$ of $V_0$ satisfies $|N(V_0)\setminus
V_0|=\kappa(\Gamma)$ and $N(V_0)\cup V_0\ne V$; that is, the boundary of
$V_0$ is a minimum vertex-cut, separating $V_0$ from the (non-empty)
remainder of the graph. Notice that if $\Gamma$ is not complete, then
it has fragments; for instance, if $\Gamma'$ is obtained from $\Gamma$
by removing a minimum vertex cut, then the set of vertices of any
connected component of $\Gamma'$ is a fragment of $\Gamma$.
As the discussion above shows, if $\kappa(\CP(S))=\eta_G(S)$, then $\CP(S)$ has
a fragment which is a coset of a subgroup $H\in{\mathcal H}_G(S)$ with
$|S+H|-|H|=\eta_G(S)$; similarly, if $\kappa(\CP(S))=\lambda_G(S)$, then $\CP(S)$
has a fragment which is a union of at most two cosets of a subgroup
$L\in{\mathcal L}_G(S)$ with $|S+L|-|L|=\lambda_G(S)$.
The reader will easily verify that Theorem \reft{main1} is an
immediate corollary of Theorem \reft{main2} below. The latter shows
that the minimum in the statement of Theorem \reft{main1} is
attained, with just one exception, at either $\eta_G(S)$ or $|S|$.
Being much subtler, Theorem \reft{main2} is also more technical, and
to state it we have to bring into consideration a special sub-family
of ${\mathcal L}_G(S)$. Specifically, let ${\mathcal L}^\ast_G(S)$ be the family of
those subgroups $L\le G$ such that for some $G_0\le G$, lying above
$L$, and some $g_0\in G_0$, the following conditions hold:
\begin{itemize}
\item[(L1)] $G_0/L$ is a cyclic $2$-group of order $|G_0/L|\ge 4$, and
$\<g_0\>+L=G_0$;
\item[(L2)] $G/G_0$ is an elementary abelian $2$-group (possibly of zero
rank);
\item[(L3)] $\exp(G/L)=\exp(G_0/L)$;
\item[(L4)] $S+L=(G\setminus G_0)\cup(g_0+L)$ and $S\cap(g_0+L)$ is not
contained in a proper coset of $L$.
\end{itemize}
A little meditation shows that ${\mathcal L}^\ast_G(S)\subseteq{\mathcal L}_G(S)$ and that
conditions (L1)--(L3) imply
$$ G/L \cong (G_0/L)\oplus({\mathbb Z}/2{\mathbb Z})^r \cong ({\mathbb Z}/2^k{\mathbb Z})\oplus({\mathbb Z}/2{\mathbb Z})^r, $$
for some $k\ge 2$ and $r\ge 0$. Notice also that if $L$, $G_0$, and $g_0$ are
as in (L1)--(L4), and $G_0=G$, then $L$ is a subgroup of $G$ of index at
least $4$, and $S$ is contained in an $L$-coset, whence $\CP(S)$ is
disconnected.
\begin{theorem}\label{t:main2}
Let $S$ be a proper subset of the finite abelian group $G$. There exists
at most one subgroup $L\in{\mathcal L}^\ast_G(S)$ with $|S+L|-|L|\le |S|-1$.
Moreover,
\begin{itemize}
\item[(i)] if $L$ is such a subgroup, then
$\kappa(\CP(S))=\lambda_G(S)=|S+L|-|L|$ and
$\eta_G(S)\ge |S|$;
\item[(ii)] if such a subgroup does not exist, then
$\kappa(\CP(S))=\min\{\eta_G(S),|S|\}$.
\end{itemize}
\end{theorem}
Postponing the proof to Section \refs{proofs}, we now list some of the
consequences.
\begin{corollary}
Let $S$ be a proper subset of the finite abelian group $G$ such that $\CP(S)$
is connected. If either $|S|\le|G|/2$ or $G$ does not contain a subgroup
isomorphic to $({\mathbb Z}/4{\mathbb Z})\oplus({\mathbb Z}/2{\mathbb Z})$, then
$\kappa(\CP(S))=\min\{\eta_G(S),|S|\}$.
\end{corollary}
\begin{proof}
If $\kappa(\CP(S))\ne\min\{\eta_G(S),|S|\}$, then by Theorem \reft{main2}
there exists $L\in{\mathcal L}^\ast_G(S)$ with $|S+L|-|L|\le|S|-1$. Choose $L\le
G_0\le G$ and $g_0\in G_0$ satisfying (L1)--(L4). Since $\CP(S)$ is
connected, the subgroup $G_0$ is proper. Consequently,
$$ |S| \ge |S+L| - |L| + 1 = |G| - |G_0| + 1 > \frac12\,|G|, $$
and it also follows that $G/L$ contains a subgroup isomorphic to
$({\mathbb Z}/4{\mathbb Z})\oplus({\mathbb Z}/2{\mathbb Z})$, which implies that $G$ itself contains such a
subgroup.
\end{proof}
Our next result shows that under the extra assumption $\kappa(\CP(S))<|S|$,
the conclusion of Theorem \reft{main1} can be greatly simplified.
\begin{theorem}\label{t:simple}
Let $S$ be a proper subset of the finite abelian group $G$. If
$\kappa(\CP(S))<|S|$, then
$$ \kappa(\CP(S)) = \min \{ |S+H|-|H|\colon H\le G,\, S+H\neq G \}. $$
\end{theorem}
Theorem \reft{simple} will be derived from Theorem \reft{main2} in Section
\refs{proofs}. Note that the assumption $\kappa(\CP(S))<|S|$ of Theorem
\reft{simple} cannot be dropped: say, if $S$ is the non-zero coset of a
subgroup $H\le G$ of index $2$, then $\CP(S)$ is a complete bipartite graph
of connectivity $|G|/2$, while $|S+H|-|H|=0$ and $S+H\neq G$. We also notice
that, despite its simple and neat conclusion (and one which mirrors the
corresponding result for usual Cayley graphs), Theorem \reft{simple} gives no
way to determine whether $\kappa(\CP(S))<|S|$ holds, and hence no way to find
the connectivity unless it is known to be smaller than $|S|$ a priori. Of
course, a necessary and sufficient condition for $\kappa(\CP(S))<|S|$ to hold
follows readily from Theorem \reft{main2}.
\begin{corollary}\label{c:kappalarge}
If $S$ is a proper subset of the finite abelian group $G$, then in order for
$\kappa(\CP(S))<|S|$ to hold it is necessary and sufficient that there is a
subgroup $K\in{\mathcal H}_G(S)\cup{\mathcal L}^\ast_G(S)$ with $|S+K|\le|S|+|K|-1$.
\end{corollary}
Observe that if $g$ is an element of $G$ with $2g\in S$, then $g$ is
a neighbor of itself in $\CP(S)$; consequently, the boundary of
$\{g\}$ contains $|S|-1$ elements so that $\kappa(\CP(S))<|S|$. Hence
Theorem \reft{simple} implies the following corollary.
\begin{corollary}\label{c:2astS}
Let $S$ be a proper subset of the finite abelian group $G$. If
$S\cap(2\ast G)\neq\varnothing$, and in particular if $G$ has odd order and $S$ is
non-empty, then
$$ \kappa(\CP(S)) = \min \{ |S+H|-|H|\colon H\le G,\, S+H\neq G \}. $$
\end{corollary}
We conclude this section with two potentially useful lower-bound
estimates for $\kappa(\CP(S))$.
\begin{corollary}\label{c:Sover2}
Let $S$ be a proper subset of the finite abelian group $G$. If $\CP(S)$
is connected, then in fact
$$ \kappa(\CP(S))\ge\frac12\,|S|. $$
\end{corollary}
Corollary \refc{Sover2} follows from Theorem \reft{simple} and the
observation that if $|S+H|-|H|=\kappa(\CP(S))>0$ for a subgroup $H\le G$,
then $S$ intersects at least two cosets of $H$, so that $|S+H|\ge 2|H|$,
and therefore $|S+H|-|H|\ge\frac12\,|S+H|\ge\frac12\,|S|$.
\begin{corollary}\label{c:charG}
Let $S$ be a proper subset of the finite, non-trivial abelian group $G$, and
let $p$ denote the smallest order of a non-zero subgroup of $G$. If $\CP(S)$
is connected, then in fact
$$ \kappa(\CP(S)) \ge \min \{ |S|-1,\, p \}. $$
\end{corollary}
The proof is similar to that of the previous corollary: if
$\kappa(\CP(S))<|S|-1$, then by Theorem \reft{simple} there exists a subgroup
$H\le G$ with $|S+H|-|H|=\kappa(\CP(S))>0$; this subgroup is non-zero (for
otherwise $|S+H|-|H|=|S|-1>\kappa(\CP(S))$), and hence
$|S+H|-|H|\ge|H|\ge p$.
\section{Auxiliary results}\label{s:aux}
In this section, we gather the tools needed for the proof of Theorems
\reft{main2} and \reft{simple}. This includes a simple consequence of
\refb{d1} or \refb{d2} (rephrased), a classical theorem of Kneser on
periodicity of sumsets, a result from \refb{l1}, which is a `dual' version of
a well-known structure theorem of Kemperman \refb{k}, and three original
lemmas.
Given a subgroup $H$ of the abelian group $G$, by $\pH$ we denote the
canonical homomorphism from $G$ onto $G/H$. Though the notation $\pH$ does
not specify the underlying group $G$, it is always implicit from the context
and no confusion will arise.
For a subset $S$ of the abelian group $G$, the (maximal) period of $S$ will
be denoted by $\pi(S)$; recall that this is the subgroup of $G$ defined by
$$ \pi(S) := \{g\in G\colon S+g=S \}, $$
and that $S$ is called \emph{periodic} if $\pi(S)\neq\{0\}$ and
\emph{aperiodic} otherwise. Thus, $S$ is a union of $\pi(S)$-cosets, and
$\pi(S)$ lies above any subgroup $H\le G$ such that $S$ is a union of
$H$-cosets. Observe also that $\pi(S)=G$ if and only if either $S=\varnothing$
or $S=G$, and that $\pH[\pi(S)](S)$ is an aperiodic subset of the group
$G/\pi(S)$.
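For $G={\mathbb Z}/n{\mathbb Z}$, the period is computed by a one-line
search; a minimal sketch (in Python):
\begin{verbatim}
def period(S, n):
    """The maximal period pi(S) of a subset S of Z/nZ, returned as a set."""
    return {g for g in range(n) if {(s + g) % n for s in S} == S}

print(period({0, 3, 6, 9}, 12))   # the subgroup {0, 3, 6, 9}: S is periodic
print(period({0, 1, 3}, 12))      # {0}: S is aperiodic
\end{verbatim}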
\begin{theirproposition}[Grynkiewicz, \protect{\cite[(c.5)]{b:d1}; see also
\cite[Proposition 5.2]{b:d2}}]\label{l:david}
Let $A$ be a finite, non-empty subset of an abelian group. If
$|\pi(A\setminus\{a\})|>2$ for some $a\in A$, then $|\pi(A\setminus\{a'\})|=1$ for
every group element $a'\ne a$.
\end{theirproposition}
\begin{theirtheorem}[Kneser, \cite{b:kn1,b:kn2}; see also \refb{m}]
\label{t:kneser}
Let $A$ and $B$ be finite, non-empty subsets of an abelian group $G$. If
$$ |A+B| \le |A|+|B|-1, $$
then, letting $H:=\pi(A+B)$, we have
$$ |A+B| = |A+H|+|B+H|-|H|. $$
\end{theirtheorem}
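Kneser's identity is easy to test numerically in cyclic groups; the
following sketch (in Python, on randomly generated pairs; the trial count is
arbitrary) verifies it:
\begin{verbatim}
import random

def sumset(A, B, n):
    return {(a + b) % n for a in A for b in B}

def check_kneser(n, trials=1000):
    """Randomized check of Kneser's theorem in Z/nZ."""
    for _ in range(trials):
        A = set(random.sample(range(n), random.randint(1, n)))
        B = set(random.sample(range(n), random.randint(1, n)))
        C = sumset(A, B, n)
        if len(C) <= len(A) + len(B) - 1:
            H = {g for g in range(n) if {(c + g) % n for c in C} == C}
            AH, BH = sumset(A, H, n), sumset(B, H, n)
            assert len(C) == len(AH) + len(BH) - len(H)
    print("Kneser's identity verified on", trials, "random pairs")

check_kneser(12)
\end{verbatim}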
We now turn to the (somewhat involved) statement of \cite[Theorem 2]{b:l1};
the reader can consult the source for the explanations and comments.
By an arithmetic progression in the abelian group $G$ with difference
$d\in G$, we mean a set of the form $\{g+d,g+2d,\dotsc, g+kd\}$, where $g$ is
an element of $G$ and $k$ is a positive integer. Thus, cosets of finite
cyclic subgroups (and in particular, singleton sets) are considered
arithmetic progressions, while the empty set is not. For finite subsets $A$
and $B$ of an abelian group and a group element $c$, we write
$$ \nu_c(A,B) := |\{ (a,b)\in A\times B\colon c=a+b \}|; $$
that is, $\nu_c(A,B)$ is the number of representations of $c$ as a sum of an
element of $A$ and an element of $B$. Observe that $\nu_c(A,B)>0$ if and only
if $c\in A+B$. The smallest number of representations of an element of $A+B$
will be denoted by $\mu(A,B)$:
$$ \mu(A,B) := \min \{ \nu_c(A,B) \colon c\in A+B \}. $$
Following Kemperman \refb{k}, we say that the pair $(A,B)$ of finite
subsets of the abelian group $G$ is \emph{elementary} if at least one of
the following conditions holds:
\begin{itemize}
\item[(I)] $\min\{|A|,|B|\}=1$;
\item[(II)] $A$ and $B$ are arithmetic progressions sharing a common
difference, the order of which in $G$ is at least $|A|+|B|-1$;
\item[(III)] $A=g_1+(H_1\cup\{0\})$ and $B=g_2-(H_2\cup\{0\})$, where
$g_1,g_2\in G$, and where $H_1$ and $H_2$ are non-empty subsets of a subgroup
$H\le G$ such that $H=H_1\cup H_2\cup\{0\}$ is a partition of $H$;
moreover, $c:=g_1+g_2$ is the only element of $A+B$ with $\nu_c(A,B)=1$;
\item[(IV)] $A=g_1+H_1$ and $B=g_2-H_2$, where $g_1,g_2\in G$,
and where $H_1$ and $H_2$ are non-empty, aperiodic subsets of a subgroup $H\le G$
such that $H=H_1\cup H_2$ is a partition of $H$; moreover, $\mu(A,B)\ge 2$.
\end{itemize}
We say that the pair $(A,B)$ of subsets of an abelian group satisfies
\emph{Kemperman's condition} if either $A+B$ is aperiodic or $\mu(A,B)=1$
holds.
\begin{theirtheorem}[\protect{Lev, \cite[Theorem 2]{b:l1}}]\label{t:dual}
Let $A$ and $B$ be finite, non-empty subsets of the abelian group $G$. A
necessary and sufficient condition for $(A,B)$ to satisfy both
$$ |A+B| \le |A| + |B| - 1 $$
and Kemperman's condition is that there exist non-empty subsets
$A_0\subseteq A$ and $B_0\subseteq B$ and a finite, proper subgroup $F<G$ such that
\begin{itemize}
\item[(i)] each of $A_0$ and $B_0$ is contained in an $F$-coset,
$|A_0+B_0|=|A_0|+|B_0|-1$, and the pair $(A_0,B_0)$ satisfies Kemperman's
condition;
\item[(ii)] each of $A\setminus A_0$ and $B\setminus B_0$ is a (possibly empty) union
of $F$-cosets;
\item[(iii)] the pair $(\pH[F](A),\pH[F](B))$ is elementary; moreover, either
$F$ is trivial, or $\pH[F](A_0)+\pH[F](B_0)$ has a unique representation as
a sum of an element of $\pH[F](A)$ and an element of $\pH[F](B)$.
\end{itemize}
\end{theirtheorem}
\begin{lemma}\label{l:directsum}
Let $L\le G_0\le G$ be finite abelian groups. If $G_0/L$ is a cyclic
$2$-group and $2\ast(G/L)$ is a proper subgroup of $G_0/L$, then
$\exp(G_0/L)=\exp(G/L)$.
\end{lemma}
\begin{proof}
Write $|G_0/L|=2^k$ so that $k$ is a positive integer. Since $|2\ast(G/L)|$
is a proper divisor of $2^k$, we have $2^{k-1}g=0$ for every
$g\in 2\ast(G/L)$. Equivalently, $2^kg\in L$ for every $g\in G$, whence
$\exp(G/L)\le 2^k=\exp(G_0/L)$. The inverse estimate
$\exp(G_0/L)\le\exp(G/L)$ is trivial.
\end{proof}
The following lemma is similar in flavor to a lemma used by Kneser to prove
Theorem \reft{kneser}; cf. \cite{b:kn2,b:k}.
\begin{lemma}\label{l:geom}
Suppose that $S$ is a finite subset, and that $H$ and $L$ are finite
subgroups of the abelian group $G$ satisfying $|L|\le|H|$ and $S+H\ne S+H+L$.
Let $I:=H\cap L$. If
$$ \max \{ |S+H| - |H|, |S+L| - |L| \} \le |S+I| - |I|, $$
then in fact
$$ |S+H| - |H| = |S+L| - |L| = |S+I| - |I|; $$
moreover, there exists $g\in G$ such that $(S+I)\setminus(g+H+L)$ is a (possibly
empty) union of $(H+L)$-cosets, and one of the following holds:
\begin{itemize}
\item[(i)] $(S+I)\cap(g+H+L)=g+I$;
\item[(ii)] $(S+I)\cap(g+H+L)=(g+H+L)\setminus(g+(H\cup L))$ and $|H|=|L|$.
\end{itemize}
\end{lemma}
\begin{proof}
Factoring by $I$, we assume without loss of generality that $I=\{0\}$. Since
$S+H\neq S+H+L$, there exists $s_0\in S$ with $s_0+L\nsubseteq S+H$, and we
let $S_0:=S\cap(s_0+H+L)$. It is instructive to visualize the coset $s_0+H+L$
as the grid formed by $|L|$ horizontal lines (corresponding to the $H$-cosets
contained in $s_0+H+L$) and $|H|$ vertical lines (corresponding to the
$L$-cosets contained in $s_0+H+L$). The intersection points of these two
families of lines correspond to the elements of $s_0+H+L$, and the condition
$s_0+L\nsubseteq S+H$ implies that there is a horizontal line free of
elements of $S$.
Let $h:=|\pH[L](S_0)|$ (the number of vertical lines that intersect $S_0$) and
$l:=|\pH(S_0)|$ (the number of horizontal lines that intersect $S_0$); thus,
$1\le h\le |H|$ and $1\le l<|L|$. We also have, in view of the hypotheses,
\begin{equation}\label{e:geom0}
(|H|-h)l \le |(S_0+H)\setminus S_0| \le |(S+H)\setminus S| \le |H| - 1,
\end{equation}
whence
\begin{equation}\label{e:geom1}
(|H|-h)(l-1) \le h-1,
\end{equation}
and similarly,
\begin{equation}\label{e:geom2}
(|L|-l)(h-1) \le l-1.
\end{equation}
To begin with, suppose that $l=1$, and hence $h=1$ by \refe{geom2}. In this
case, $|S_0|=1$, whence $S\cap(s_0+H+L)=\{s_0\}$. Furthermore, \refe{geom0}
yields $(S_0+H)\setminus S_0=(S+H)\setminus S$, and likewise we have
$(S_0+L)\setminus S_0=(S+L)\setminus S$. This shows that
\begin{equation}\label{e:geom2a}
|S+H|-|S|=|H|-1, \quad |S+L|-|S|=|L|-1,
\end{equation}
and $S\setminus S_0$ is a union of $(H+L)$-cosets, thus establishing the assertion
(with $g=s_0$) in the case $l=1$. So we assume $l>1$ below.
Observe that \refe{geom1} and \refe{geom2} imply
$$ l-1 \ge (|L|-l)(h-1) \ge (|L|-l)(|H|-h)(l-1), $$
whence it follows from $l>1$ that
\begin{equation}\label{e:geom3}
(|L|-l)(|H|-h) \le 1.
\end{equation}
If $|H|=h$, then \refe{geom2} gives
$$ l-1 \ge (|H|-1)(|L|-l) \ge (|L|-1)(|L|-l) \ge 2|L|-l-2 \ge l, $$
which is wrong. Therefore $|H|>h$. Thus we deduce from \refe{geom3}
and $l<|L|$ that $h=|H|-1$ and $l=|L|-1$, whence \refe{geom2} gives
$|H|=|L|$. Consequently, \refe{geom0} yields $(S_0+H)\setminus
S_0=(S+H)\setminus S$, and similarly $(S_0+L)\setminus S_0=(S+L)\setminus S$, which
(as above) proves \refe{geom2a} and shows that $S\setminus S_0$ is a
union of $(H+L)$-cosets. Furthermore, $S+H$ misses exactly one
$H$-coset in $s_0+H+L$, and $S+L$ misses exactly one $L$-coset in
$s_0+H+L$. Let $g\in s_0+H+L$ be the common element of these two
cosets, so that $S_0+H=(s_0+H+L)\setminus(g+H)$ and
$S_0+L=(s_0+H+L)\setminus(g+L)$. Then
$$ S_0 \subseteq (s_0+H+L)\setminus(g+(H\cup L)) = (g+H+L)\setminus(g+(H\cup L)), $$
and thus
$$ |L|-1 = |H|-1 \ge |(S+H)\setminus S| = |(S_0+H)\setminus S_0|
= (|L|-1)|H| - |S_0|, $$
so that
$$ |S_0| \ge (|H|-1)(|L|-1) = |(g+H+L)\setminus(g+(H\cup L))|.$$
Hence, in fact $S_0=(g+H+L)\setminus(g+(H\cup L))$, completing the proof.
\end{proof}
\begin{lemma}\label{l:LcapH}
Let $G$ be a finite abelian group, and suppose that the proper
subset $S\subset G$, the subgroups $L\le G_0\le G$, and the element
$g_0\in G_0$ satisfy conditions (L1)--(L4) in the definition of
${\mathcal L}^\ast_G(S)$. Suppose, moreover, that $|S+L|-|S|\le|L|-1$. If $H$
is a subgroup of $G$ with $|S+H|-|S|\le|H|-1$ and $S+H\ne G$, then
$H$ is actually a subgroup of $G_0$.
\end{lemma}
\begin{proof}
Suppose for a contradiction that $H\nleq G_0$ and fix $h\in H\setminus G_0$. For
each $g\in G_0$, we have $g+h\in G\setminus G_0\subseteq S+L$, whence $g\in S+H+L$.
Hence $G_0\subseteq S+H+L$, and since, on the other hand, we have
$G\setminus G_0\subseteq S+L\subseteq S+H+L$, we conclude that
\begin{equation}\label{e:LcapH1}
S+H+L = G.
\end{equation}
In view of $S+H\ne G$, this leads to $L\nleq H$, and we let $I:=H\cap L$.
Thus $I$ is a proper subgroup of $L$.
Write $n:=|G_0/L|$ so that $G_0$ consists of $n\ge 4$ cosets of $L$, of which
$n-1$ are free of elements of $S$. Let $\{g_i\colon 0\le i\le n-1\}$ be a
system of representatives of these $n$ cosets.
Fix $i\in[1,n-1]$. Since $H\nleq G_0$ and $g_i\in G_0$, we have
$g_i+H\nsubseteq G_0$, whence $(G\setminus G_0)\cap(g_i+H)\neq\varnothing$; as
$G\setminus G_0\subseteq S+L$, this yields $S\cap(g_i+H+L)\neq\varnothing$. On the
other hand, from $g_i+L\subseteq G_0\setminus(g_0+L)$ it follows that
$(S+L)\cap (g_i+L)=\varnothing$. Therefore,
\begin{equation}\label{e:LcapH2}
0 < |(S+I)\cap(g_i+H+L)| < |H+L|; \quad i\in[1,n-1].
\end{equation}
In view of \refe{LcapH1} and the hypothesis $S+H\neq G$, we have
$S+H\ne S+H+L$ and $S+L\ne S+H+L$. Also, our assumptions imply
$$ \max \{ |S+H|-|H|, |S+L|-|L| \} < |S| \le |S+I|, $$
and since both the left- and the right-hand sides are divisible by $|I|$, we
actually have
$$ \max \{ |S+H|-|H|, |S+L|-|L| \} \le |S+I|-|I|. $$
Thus we can apply Lemma \refl{geom}. Choose $g\in G$ such that
$(S+I)\setminus(g+H+L)$ is a union of $(H+L)$-cosets. Then it follows from
\refe{LcapH2} that
\begin{equation}\label{e:LcapH3}
g_i+H+L = g+H+L; \quad i\in[1,n-1],
\end{equation}
and consequently $G_0\setminus(g_0+L)\subseteq g+H+L$. Hence $n\ge 4$ implies
$G_0\le H+L$ and $g\in H+L$. Thus, since $S\cap(g_0+L)$ is not contained in a
coset of a proper subgroup of $L$, and in particular in a coset of $I$, we
conclude that
$$ |(S+I)\cap(g+H+L)| \ge |(S+I)\cap(g_0+L)| \ge 2|I|. $$
This shows that Lemma \refl{geom}~(i) fails. On the other hand, \refe{LcapH3}
gives $g_i+L\subseteq g+H+L$, and hence $g+H+L$ contains at least $n-1\ge 3$
cosets of $L$, all free of elements of $S+I$. Thus Lemma \refl{geom}~(ii)
fails too, a contradiction.
\end{proof}
\section{Proofs of Theorems \reft{main2} and \reft{simple}}\label{s:proofs}
Our starting point is the observation that if $S$ is a subset of the finite
abelian group $G$ such that $\CP(S)$ is not complete, then
$$ \kappa(\CP(S)) = \min \{ |(S-A)\setminus A|
\colon \varnothing\ne A\subseteq G,\, (S-A)\cup A\ne G \}. $$
For the following proposition, the reader may need to recall the notion of a
fragment, introduced in Section \refs{intro} after the statement of Theorem
\reft{main1}.
\begin{proposition}\label{p:lessS}
Let $S$ be a subset of the finite abelian group $G$, and suppose that
$\kappa(\CP(S))<|S|$. If $A$ is a fragment of $\CP(S)$, then, writing
$H:=\pi(S-A)$, we have
\begin{align}
A &\subseteq S-A, \label{e:-t}\\
A+H &= A, \label{e:t+h}\\
\kappa(\CP(S)) &= |S+H|-|H|, \label{e:per}\\
\intertext{and}
\kappa(\CP[G/H]{\pH(S)}) &= |\pH(S)|-1. \label{e:g/h}
\end{align}
\end{proposition}
\begin{proof}
Fix $a\in A$. Since $a$ has $|S|$ neighbors, all lying in $S-A$, and since
$|(S-A)\setminus A|=\kappa(\CP(S))<|S|$ by the assumptions, it follows that $a$ has
a neighbor in $A$; in other words, there is $a'\in A$ with $a+a'\in S$.
Consequently, $a\in S-A$, and \refe{-t} follows.
By \refe{-t} we have
$$ (S-(A+H))\cup(A+H) = S-A+H = S-A \neq G, $$
and obviously,
$$ |(S-(A+H)) \setminus (A+H)| = |(S-A) \setminus (A+H)| \le |(S-A) \setminus A|. $$
Since $A$ is a fragment, we conclude that in fact
$|(S-A)\setminus(A+H)|=|(S-A)\setminus A|$ holds, which gives \refe{t+h}.
By \refe{-t} and the assumptions, we have
$$ |S-A| = |(S-A)\setminus A| + |A| = \kappa(\CP(S)) + |A| \le |S|+|A|-1. $$
Hence it follows from Theorem \reft{kneser} and \refe{t+h} that
\begin{equation}\label{e:loc1}
|S-A| = |S+H| + |A+H| - |H| = |S+H| + |A| - |H|.
\end{equation}
Thus
$$ \kappa(\CP(S)) = |(S-A)\setminus A| = |S-A|-|A| = |S+H|-|H|, $$
yielding (\ref{e:per}).
Finally, we establish \refe{g/h}. The neighborhood of $\pH(A)$ in the
graph $\CP[G/H](\pH(S))$ is $\pH(S)-\pH(A)=\pH(S-A)$, and it follows in
view of \refe{-t} that
$$ \pH(S-A) \cup \pH(A) = \pH(S-A) \neq G/H.$$
Consequently, the set $\pH(S-A)\setminus\pH(A)$ is a vertex cut in
$\CP[G/H](\pH(S))$, whence using \refe{-t}, \refe{t+h}, and \refe{loc1}
we obtain
\begin{multline*}
\kappa(\CP[G/H](\pH(S))) \le |\pH(S-A)\setminus\pH(A)|=|\pH(S-A)|-|\pH(A)| \\
= (|S-A|-|A|)/|H| = |S+H|/|H| - 1 = |\pH(S)| - 1.
\end{multline*}
To prove the inverse estimate, notice that the graph $\CP[G/H](\pH(S))$ is
not complete (we saw above that it has vertex cuts) and choose $A'\subseteq G$
such that $\pH(A')$ is a fragment of this graph. Replacing $A'$ with $A'+H$,
we can assume without loss of generality that $A'+H=A'$. Since
$$ \pH((S-A')\cup A') = (\pH(S) - \pH(A')) \cup \pH(A') \ne G/H, $$
we have $(S-A')\cup A'\neq G$. Hence in view of \refe{per} it follows that
\begin{align*}
\kappa(\CP[G/H](\pH(S)))
&= |(\pH(S)-\pH(A'))\setminus \pH(A')| \\
&= |\pH(S-A')\setminus\pH(A')| \\
&= |(S-A')\setminus A'|/|H| \\
&\ge \kappa(\CP(S))/|H| \\
&= |\pH(S)|-1,
\end{align*}
as desired.
\end{proof}
For a subset $S$ of a finite abelian group $G$, write
$$ \lambda^\ast_G(S) := \min \{ |S+L|-|L| \colon L\in{\mathcal L}^\ast_G(S) \}. $$
Clearly, we have $\lambda^\ast_G(S)\ge\lambda_G(S)$.
\begin{lemma}\label{l:Shift}
Let $S$ be a proper subset of the finite abelian group $G$. If $g\in G$,
then ${\mathcal H}_G(S-2g)={\mathcal H}_G(S),\ {\mathcal L}^\ast_G(S-2g)={\mathcal L}^\ast_G(S)$, and
$\CP(S-2g)$ is isomorphic to $\CP(S)$; consequently,
\begin{gather*}
\eta_G(S-2g)=\eta_G(S),\ \lambda^\ast_G(S-2g)=\lambda^\ast_G(S), \\
\intertext{and}
\kappa(\CP(S-2g))=\kappa(\CP(S)).
\end{gather*}
\end{lemma}
\begin{proof}
The isomorphism between $\CP(S-2g)$ and $\CP(S)$ is established by mapping
every group element $x$ to $x-g$, and the equality ${\mathcal H}_G(S-2g)={\mathcal H}_G(S)$ is
immediate from the observation that $S+2\ast G-2g=S+2\ast G$. To show that
${\mathcal L}^\ast_G(S-2g)={\mathcal L}^\ast_G(S)$, suppose that $L\in{\mathcal L}^\ast_G(S)$ and let
$G_0\le G$ (lying above $L$) and $g_0\in G_0$ be as in (L1)--(L4). By (L2) we
have $2g\in G_0$. Consequently, $(G\setminus G_0)-2g=G\setminus G_0$, and hence it
follows from (L4) that
$$ S-2g+L = (G\setminus G_0) \cup (g_0-2g+L). $$
Furthermore, since $\pH[L](g_0)$ is a generator of the cyclic $2$-group
$G_0/L$, so is $\pH[L](g_0-2g)$; that is, $\<g_0-2g\>+L=G_0$. This shows
that $L\in{\mathcal L}^\ast_G(S-2g)$, and hence
${\mathcal L}^\ast_G(S)\subseteq{\mathcal L}^\ast_G(S-2g)$. By symmetry, we also have
${\mathcal L}^\ast_G(S-2g)\subseteq{\mathcal L}^\ast_G(S)$, implying the assertion.
\end{proof}
We now pass to our last lemma, which will take us most of the way towards
the proof of Theorem \reft{main2}; the reader may compare the statement
of this lemma with that of Theorem \reft{main1}.
\begin{lemma}\label{l:main}
If $S$ is a proper subset of the finite abelian group $G$, then
\begin{equation}\label{e:THEresult}
\kappa(\CP(S)) = \min \{ \eta_G(S), \lambda^\ast_G(S), |S| \}.
\end{equation}
\end{lemma}
\begin{proof}
Since each of $\eta_G(S),\,\lambda^\ast_G(S)$, and $|S|$ is an upper bound for
$\kappa(\CP(S))$, it suffices to show that $\kappa(\CP(S))$ is greater than or
equal to one of these quantities. Thus we can assume that
$\kappa(\CP(S))\le |S|-1\le |G|-2$. Hence $S\ne\varnothing$ and $\CP(S)$ is not
complete.
It is not difficult to see that the assertion holds true if $|G|\le 2$; we
leave verification to the reader. The case $|S|=1$ is also easy to establish
as follows. Suppose that $|G|>2$ and $S=\{s\}$, where $s$ is an element of
$G$. If $\<s\>\ne G$, then $\<s\>\in{\mathcal H}_G(S)$ and $|S+\<s\>|-|\<s\>|=0$,
implying $\kappa(\CP(S))=\eta_G(S)=0$. Next, if $G$ is not a $2$-group, then
there exists an element $g\in G$ which is an odd multiple of $s$ and such
that the subgroup $\<g\>$ is proper; in this case $g\in(S+2\ast G)\cap\<g\>$
showing that $\<g\>\in{\mathcal H}_G(S)$ and leading to $\kappa(\CP(S))=\eta_G(S)=0$, as
above. In both cases the proof is complete, so we assume that $\<s\>=G$ is a
$2$-group. Since $|G|>2$, in this case we have $\{0\}\in{\mathcal L}^\ast_G(S)$ (take
$G_0=G$ and $g_0=s$ in (L1)--(L4)) and $|S+\{0\}|-|\{0\}|=0$, whence
$\kappa(\CP(S))=\lambda^\ast_G(S)=0$.
Having finished with the cases $|S|=1$ and $|G|\le 2$, we proceed by
induction on $|G|$, assuming that $\kappa(\CP(S))\le|S|-1$. Choose $A\subseteq G$
such that $A$ is a fragment of $\CP(S)$ and fix arbitrarily $a\in A$. In view
of Lemma \refl{Shift}, and since the set $A-a$ is a fragment of the graph
$\CP(S-2a)$, by passing from $S$ to $S-2a$, and from $A$ to $A-a$, we ensure
that
\begin{equation}\label{e:0inT}
0 \in A.
\end{equation}
Also, by Proposition \refp{lessS} we have $A\subseteq S-A\ne G$.
If each of $S$ and $A$ is contained in a coset of a proper subgroup $K<G$,
then from $A\subseteq S-A$ and \refe{0inT} it follows that in fact $S$ and $A$ are
contained in $K$, whence $K\in{\mathcal H}_G(S)$; furthermore, $|S+K|-|K|=0$, showing
that $\kappa(\CP(S))=\eta_G(S)=0$. Accordingly, we assume for the rest of the
proof that for any proper subgroup of $G$, at least one of the sets $S$ and
$A$ is not contained in a coset of this subgroup.
Let $H:=\pi(S-A)$. We distinguish two major cases according to whether or not
$H$ is trivial.
\subsection*{Case 1:} $H$ is non-trivial. Applying the induction
hypothesis to $\CP[G/H](\pH(S))$ and using \refe{g/h}, we conclude
that either $\eta_{G/H}(\pH(S))=|\pH(S)|-1$ or
$\lambda^\ast_{G/H}(\pH(S))=|\pH(S)|-1$, giving two subcases.
\subsubsection*{Subcase 1.1.}
Assume first that $\eta_{G/H}(\pH(S))=|\pH(S)|-1$, and hence that there
exists a subgroup $H'\le G$, lying above $H$, such that
$H'/H\in{\mathcal H}_{G/H}(\pH(S))$ and
$$ |\pH(S)+H'/H| - |H'/H| = \eta_{G/H}(\pH(S)) = |\pH(S)|-1. $$
The former easily implies that $H'\in{\mathcal H}_G(S)$, while the latter, in
conjunction with \refe{per}, implies that
$$ |S+H'| - |H'| = |S+H| - |H| = \kappa(\CP(S)). $$
This shows that $\kappa(\CP(S))\ge\eta_G(S)$, whence in fact
$\kappa(\CP(S))=\eta_G(S)$.
\subsubsection*{Subcase 1.2.}
Assume now that $\lambda^\ast_{G/H}(\pH(S))=|\pH(S)|-1$, and let $L\le G$ be
a subgroup, lying above $H$, such that $L/H\in{\mathcal L}^\ast_{G/H}(\pH(S))$ and
$$ |\pH(S)+L/H| - |L/H| = \lambda^\ast_{G/H}(\pH(S)) = |\pH(S)|-1. $$
In view of \refe{per} and the assumptions, the last equality yields
\begin{equation}\label{e:yikess}
|S+L|-|L| = |S+H|-|H| = \kappa(\CP(S)) \le |S|-1.
\end{equation}
Since $L/H\in{\mathcal L}^\ast_{G/H}(\pH(S))$, we can find a subgroup $G_0\le G$,
lying above $L$, and an element $g_0\in G_0\setminus L$, so that $G/G_0$ is an
elementary abelian $2$-group, $G_0/L$ is a cyclic $2$-group of order at least
$4$ generated by $\pH[L](g_0)$, and $S+L=(G\setminus G_0)\cup(g_0+L)$. Without
loss of generality, we can assume that $g_0\in S$.
If $S_0:=S\cap(g_0+L)$ is not contained in a coset of a proper subgroup of
$L$, then $L\in{\mathcal L}^\ast_G(S)$, and hence it follows in view of \refe{yikess}
that $\kappa(\CP(S))=\lambda^\ast_G(S)$. Therefore we assume that there exists a
proper subgroup $R<L$ such that $S_0$ is contained in an $R$-coset, and we
choose $R$ to be minimal subject to this property; thus, $S_0=S\cap(g_0+R)$
and $\<(S-g_0)\cap L\>=R$.
Since $S_0$ is contained in an $R$-coset, from \refe{yikess} we obtain
$$ |(S\setminus S_0)+L| - |S\setminus S_0|
= |S+L|-|L| - |S| + |S_0| < |S_0| \le |R|. $$
Hence every $R$-coset in
$G\setminus G_0=(S\setminus S_0)+L$ contains at least one element of $S$; that is,
\begin{equation}\label{e:S+R=}
S+R = (G\setminus G_0)\cup(g_0+R).
\end{equation}
Consequently, using \refe{yikess} once again, we obtain
\begin{equation}\label{e:S+R-R}
|S+R|-|R| = |G\setminus G_0| = |S+L|-|L| = \kappa(\CP(S)).
\end{equation}
Applying the previously completed singleton case to the set
$\pH[R](S_0)\subseteq G_0/R$, we get two further subcases.
\subsubsection*{Subcase 1.2.1.}
Suppose that $\kappa(\CP[G_0/R](\pH[R](S_0)))=\eta_{G_0/R}(\pH[R](S_0))$.
Choose a subgroup $R'\le G_0$, lying above $R$, such that
$R'/R\in{\mathcal H}_{G_0/R}(\pH[R](S_0))$. Since $R\le R'\le G_0$, it follows in view
of \refe{S+R=} and \refe{S+R-R} that
$$ |S+R'|-|R'| = |S+R|-|R| = \kappa(\CP(S)). $$
Thus, since $R'\in{\mathcal H}_{G_0}(S_0)\subseteq{\mathcal H}_G(S)$, we conclude that
$\kappa(\CP(S))=\eta_G(S)$.
\subsubsection*{Subcase 1.2.2.}
Assume now that $\kappa(\CP[G_0/R](\pH[R](S_0)))\ne\eta_{G_0/R}(\pH[R](S_0))$.
As $|G_0/R|\ge|G_0/L|\ge 4$, from the singleton case analysis at the
beginning of the proof it follows that $G_0/R$ is a cyclic $2$-group
generated by $\pH[R](S_0)=\{\pH[R](g_0)\}$.
If $R\in{\mathcal H}_G(S)$, then it follows in view of \refe{S+R-R} that
$\kappa(\CP(S))=\eta_G(S)$; therefore, we assume that $R\notin{\mathcal H}_G(S)$. Hence
in view of $S+R\subseteq S+L\ne G$ we infer that $2\ast(G/R)\cap\pH[R](S)=\varnothing$.
Consequently, since \refe{S+R=} implies that $\pH[R](S)$ contains
$(G/R)\setminus(G_0/R)$ as a proper subset, we have $2\ast(G/R)\lneqq G_0/R$.
Applying Lemma \refl{directsum}, we conclude that $\exp(G_0/R)=\exp(G/R)$.
Thus \refe{S+R=}, the remark at the beginning of the present subcase, and the
above-made observation that $G/G_0$ is an elementary $2$-group show that
$R\in{\mathcal L}^\ast_G(S)$, whence \refe{S+R-R} yields
$\kappa(\CP(S))=\lambda^\ast_G(S)$.
\subsection*{Case 2:} $H$ is trivial. Thus by \refe{per} we have
$\kappa(\CP(S))=|S|-1$, and therefore \refe{-t} gives
$$ |S-A|-|A| = |(S-A)\setminus A| = \kappa(\CP(S)) = |S|-1. $$
Applying Theorem \reft{dual} to the pair $(S,-A)$, we find a subgroup $F<G$
such that conclusions (i)--(iii) of Theorem \ref{t:dual} hold true; in
particular, $(\pH[F](S),-\pH[F](A))$ is an elementary pair in $G/F$ of one of
the types (I)--(IV), and $|S+F|\le |S|+|F|-1$. By the last inequality, we
have
$$ |S+F|-|F| \le |S|-1 = \kappa(\CP(S)). $$
Hence, if $F\in{\mathcal H}_G(S)$, then $\kappa(\CP(S))=\eta_G(S)$; consequently, we
assume that
\begin{equation}\label{e:FnotincH}
F\notin{\mathcal H}_G(S).
\end{equation}
Observe that if $\pH[F](S)=G/F$, then $F$ is non-zero, whence by Theorem
\reft{dual} (iii) we have $|\pH[F](A)|=1$. Thus, if $(\pH[F](S),-\pH[F](A))$
is not of type (I), then
\begin{equation}\label{e:technicality-condition}
S+F \ne G.
\end{equation}
We proceed by cases corresponding to the type of the pair
$(\pH[F](S),-\pH[F](A))$.
\subsubsection*{Subcase 2.1.}
Suppose that $(\pH[F](S),-\pH[F](A))$ is of type (IV). In this case, we have
$\mu(\pH[F](S),-\pH[F](A))\ge 2$, whence it follows by
Theorem~\reft{dual}~(iii) that $F$ is trivial. Hence $(S,-A)$ is an
elementary pair of type (IV). Thus, since $S$ and $A$ are not both contained
in a coset of the same proper subgroup, it follows that $A=g+(G\setminus S)$ for
some $g\in G$, implying $-g\notin S-A$. Therefore \refe{-t} yields
$-g\notin g+(G\setminus S)$ and thus $-2g\in S$; consequently,
$\{0\}=F\in{\mathcal H}_G(S)$, contradicting \refe{FnotincH}.
\subsubsection*{Subcase 2.2.}
Suppose that $(\pH[F](S),-\pH[F](A))$ is of type (III), but not of type (I).
Then, since $S$ and $A$ are not both contained in a coset of the same proper
subgroup and since $S-A\neq G$, it follows that $F$ is non-zero, that
$$ \pH[F](S) = \pH[F](g_1) + (H_1\cup\{0\})
,\ -\pH[F](A) = \pH[F](g_2) - (H_2\cup\{0\}) $$
for some $g_1,g_2\in G$, where $H_1\cup H_2\cup\{0\}$ is a partition
of $G/F$, and that $g_1+g_2+F$ has a non-empty intersection with
$S-A$, while every $F$-coset, other than $g_1+g_2+F$, is contained
in $S-A$; moreover, from $\pi(S-A)=\{0\}$ we derive that
\begin{equation}\label{e:r_0-not-a-subset}
g_1+g_2+F\nsubseteq S-A.
\end{equation}
By Theorem \reft{dual}, all $F$-cosets corresponding to
$$ (-\pH[F](A)) \setminus \{\pH[F](g_2)\} = \pH[F](g_2)-H_2 $$
are contained in $-A$. Hence, if
$$ -\pH[F](g_1+g_2) \in \pH[F](g_2)-H_2, $$
then $-g_1-g_2+F\subseteq -A$, and it follows in view of \refe{-t} that
$g_1+g_2+F\subseteq A\subseteq S-A$, contradicting \refe{r_0-not-a-subset}. Therefore,
assume instead that $-\pH[F](g_1+g_2)\notin\pH[F](g_2)-H_2$, so that
$\pH[F](g_1+2g_2)\in H_1\cup\{0\}$.
Then $2\pH[F](g_1+g_2)\in\pH[F](g_1)+(H_1\cup\{0\})=\pH[F](S)$,
whence by \refe{technicality-condition} we have $F\in{\mathcal H}_G(S)$,
contradicting \refe{FnotincH}.
\subsubsection*{Subcase 2.3.}
Suppose that $(\pH[F](S),-\pH[F](A))$ is of type (II), but not of type (I).
Letting $u:=|\pH[F](S)|$ and $v:=|\pH[F](A)|$, and choosing
$s_0\in S,\,a_0\in A$, and
$d\in G\setminus\{0\}$ appropriately, we write
\begin{align*}
\pH[F](S) &= \{ \pH[F](s_0), \pH[F](s_0)+\pH[F](d),
\ldots, \pH[F](s_0)+(u-1)\pH[F](d) \} \\
\intertext{and}
-\pH[F](A) &= \{ \pH[F](a_0), \pH[F](a_0)+\pH[F](d),
\ldots, \pH[F](a_0)+(v-1)\pH[F](d) \}.
\end{align*}
Since $(\pH[F](S),-\pH[F](A))$ is not of type (I), we have $u,\,v\ge 2$.
Next, it follows from \refe{-t} that
$$ -\pH[F](a_0) = \pH[F](s_0)+\pH[F](a_0) + r\pH[F](d) $$
for some integer $r$, and therefore $\pH[F](s_0)=-2\pH[F](a_0)-r\pH[F](d)$.
Thus either $\pH[F](s_0)$ (if $r$ is even) or $\pH[F](s_0)+\pH[F](d)$ (if $r$
is odd) belongs to $2\ast(G/F)$. In either case, in view of $u\ge 2$ we have
$\pH[F](S)\cap(2\ast(G/F))\neq\varnothing$, which by \refe{technicality-condition}
leads to $F\in{\mathcal H}_G(S)$, contradicting \refe{FnotincH}.
\subsubsection*{Subcase 2.4.}
Finally, suppose that $(\pH[F](S),-\pH[F](A))$ is of type (I); that is,
either $|\pH[F](S)|=1$ or $|\pH[F](A)|=1$ holds.
Suppose first that $|\pH[F](S)|=1$. In this case, $F$ is non-zero (as
$|S|>1$) and $S+F\neq G$ (as $F$ is a \emph{proper} subgroup); moreover, from
\refe{-t} we obtain
\begin{equation}\label{e:cackle-cackle}
\pH[F](S)-\pH[F](A) = \pH[F](A).
\end{equation}
By Theorem \reft{dual}, we can write $A=A_1\cup A_0$, where $A_1$ is a union
of $F$-cosets and $A_0$ is a non-empty subset of an $F$-coset disjoint from
$A_1$. If $\pH[F](S)-\pH[F](A_0)\subseteq\pH[F](A_1)$, then
$S-A_0+F\subseteq A_1+F=A_1\subseteq S-A$, whence $S-A=(S-A_1)\cup (S-A_0)$
is a union of $F$-cosets, contradicting the assumption that $S-A$ is
aperiodic. Therefore \refe{cackle-cackle} gives
$\pH[F](S)-\pH[F](A_0)=\pH[F](A_0)$, which together with $S+F\ne G$ implies
$F\in{\mathcal H}_G(S)$, contradicting \refe{FnotincH}. So we assume for the remainder
of the proof that $|\pH[F](S)|>|\pH[F](A)|=1$, and consequently in view of
\refe{0inT} that $A\subseteq F$.
Thus from \refe{-t} we derive that $0\in\pH[F](S)$, and it follows in view of
\refe{FnotincH} that $S+F=G$. Hence $F$ is nontrivial, and Theorem
\reft{dual} shows that there exists $s_0\in S$ such that $S=(G\setminus
(s_0+F))\cup S_0$, where $S_0\subset s_0+F$.
If there exists $g\in G$ with $\pH[F](g)\ne -\pH[F](g)+\pH[F](s_0)$, then it
follows in view of $\pH[F](S)=G/F$ that
$\pH[F](g)\in -\pH[F](g)+\pH[F](S\setminus S_0)$, whence
$$ g\in -g+(S\setminus S_0)+F \subseteq -g+S; $$
consequently, $\{0\}\in{\mathcal H}_G(S)$ and $\kappa(\CP(S))=\eta_G(S)$. Therefore we
assume that $\pH[F](g)=-\pH[F](g)+\pH[F](s_0)$ for all $g\in G$. Hence
$2\ast(G/F)=\{\pH[F](s_0)\}$, which implies that $G/F$ is an elementary
$2$-group and that $\pH[F](s_0)=0$; consequently, $S_0=S\cap F$.
From $A\subseteq F$ and \refe{-t}, it follows that $A\subseteq(S-A)\cap F=S_0-A$, and
since $S-A\ne G$ and $S+F=G$ we have $S_0-A\ne F$. Consequently, Theorem
\reft{dual} (i) yields
\begin{equation}\label{e:statue}
\kappa(\CP[F](S_0)) \le |(S_0-A)\setminus A| = |S_0-A|-|A| \le |S_0|-1.
\end{equation}
Since $S_0$ is a proper subset of $F$, it follows in view of \refe{statue}
that $\kappa(\CP[F](S_0))\le |F|-2$, whence $\CP[F](S_0)$ is not complete. Let
$A'\subseteq F$ be a fragment of $\CP[F](S_0)$. By \refe{-t} and \refe{statue},
we have $A'\subseteq S_0-A'\ne F$, and consequently $A'\subseteq S-A'\ne G$. Hence from
\refe{statue} and $S\setminus S_0=G\setminus F$ we obtain
\begin{multline*}
|S|-1 = \kappa(\CP(S)) \le |(S-A')\setminus A'|
\le |G\setminus F|+|(S_0-A')\setminus A'| \\
= |S\setminus S_0|+\kappa(\CP[F](S_0)) \le |S|-1,
\end{multline*}
implying $\kappa(\CP[F](S_0))=|S_0|-1$ and
$$ \kappa(\CP(S)) = |S\setminus S_0|+\kappa(\CP[F](S_0)). $$
Consequently, if $F'\leq F$ has the property that
$\kappa(\CP[F](S_0))=|S_0+F'|-|F'|$, then
\begin{equation}\label{e:statue2}
\kappa(\CP(S))=|S+F'|-|F'|.
\end{equation}
With \refe{statue} in mind, we apply the induction hypothesis to the
graph $\CP[F](S_0)$. If $\kappa(\CP[F](S_0))=\eta_F(S_0)$, then by
\refe{statue2} any subgroup $F'\in{\mathcal H}_F(S_0)\subseteq{\mathcal H}_G(S)$ with
$\kappa(\CP[F](S_0))=|S_0+F'|-|F'|$ satisfies
$\kappa(\CP(S))=|S+F'|-|F'|$, whence $\kappa(\CP(S))=\eta_G(S)$.
Therefore we assume instead that
$\kappa(\CP[F](S_0))=\lambda^\ast_F(S_0)$.
Choose $L\in{\mathcal L}^\ast_F(S_0)$ with $\lambda^\ast_F(S_0)=|S_0+L|-|L|$, and let
$G_0$ and $g_0\in G_0$ be as in (L1)--(L4), with $F$ playing the role of $G$.
Then it follows in view of \refe{statue2} that
\begin{equation}\label{e:k}
\kappa(\CP(S)) = |S+L|-|L|.
\end{equation}
If $\pH[L](S)\cap2\ast(G/L)\ne\varnothing$, then $L\in{\mathcal H}_G(S)$, whence \refe{k}
yields $\kappa(\CP(S))=\eta_G(S)$. Therefore we assume that
\begin{equation}\label{e:hren1}
\pH[L](S) \cap 2\ast (G/L)=\varnothing
\end{equation}
and we proceed to show that $L\in{\mathcal L}^\ast_G(S)$; in view of \refe{k}, this
will complete the proof.
Since $L\in{\mathcal L}^\ast_F(S_0)$, and by the choice of $G_0$ and $g_0$, we see
that $G_0/L$ is a cyclic $2$-group with $|G_0/L|\ge 4$ and $\<g_0\>+L=G_0$;
furthermore, $S\cap (g_0+L)$ is not contained in a proper coset of $L$, and
$S_0+L=(F\setminus G_0)\cup(g_0+L)$, which in view of $S=(G\setminus F)\cup S_0$ and
$L\le F$ yields
\begin{equation}\label{e:hren2}
S+L = (G\setminus G_0) \cup (g_0+L).
\end{equation}
It remains to show that $\exp(G/L)=\exp(G_0/L)$ and that $G/G_0$ is
an elementary $2$-group. To prove the former, we observe that
\refe{hren1} and \refe{hren2} yield $2\ast(G/L)\lneqq G_0/L$ and
invoke Lemma \refl{directsum}. To establish the latter, simply
observe that $2\ast(G/L)\lneqq G_0/L$ implies $2\ast G\le
G_0+L=G_0$, whence $2(g+G_0)=G_0$ for every $g\in G$.
\end{proof}
We can now prove Theorem \reft{main2}.
\begin{proof}[Proof of Theorem \reft{main2}]
We first show that there is at most one subgroup $L\in{\mathcal L}^\ast_G(S)$ with
\begin{equation}\label{e:main2-0}
|S+L|-|L|\le |S|-1.
\end{equation}
For a contradiction, assume that $L,\,L'\in{\mathcal L}^\ast_G(S)$ are distinct, $L$
satisfies \refe{main2-0}, and $|S+L'|-|L'|\le |S|-1$. Find $G_0\le G$ and
$g_0\in G_0$ such that (L1)--(L4) hold, and let $S_0=S\cap(g_0+L)$. It
follows from Lemma \refl{LcapH} that $L'\le G_0$, whence
\begin{equation}\label{e:main2-1}
|L'|-1 \ge |S+L'|-|S| \ge |S_0+L'|-|S_0|.
\end{equation}
Suppose that $L\nleq L'$ and $L'\nleq L$, and write $t=|\pH[L'](S_0)|$; that
is, $t$ is the number of $L'$-cosets that intersect $S_0$. Since $S_0$ is not
contained in a proper coset of $L$, and since $L\nleq L'$, we have $t\ge 2$.
Consequently, since $S_0$ lies in a single $L$-coset, every $L'$-coset meets
$S_0$ in at most $|L\cap L'|$ elements, and from $L'\nleq L$ (so that
$|L\cap L'|\le|L'|/2$) it follows that
$$ |S_0+L'|-|S_0| \ge t(|L'|-|L\cap L'|) \ge t|L'|/2 \ge |L'|, $$
contradicting \refe{main2-1}. So we may assume either $L\le L'$ or $L'\le
L$; switching the notation, if necessary, and recalling that $L'\ne L$,
we assume that $L<L'$.
Since $L'\in{\mathcal L}^\ast_G(S)$, there exists a subgroup $G_0'\le G$, lying above
$L'$, and an element $g_0'\in G_0'$ such that $|G_0'|\ge 4|L'|$,
$(S+L')\setminus(g_0'+L')=G\setminus G_0'$, and $(g_0'+L')\cap S$ is not contained in a
proper coset of $L'$. If $\pH[L'](g_0')=\pH[L'](g_0)$, then $(g_0'+L')\cap
S=(g_0+L')\cap S$, while, in view of $L'\le G_0$, the right-hand side is
contained in an $L$-coset, which, in view of $L<L'$, contradicts that
$(g_0'+L')\cap S$ is not contained in a proper coset of $L'$. Therefore, we
conclude instead that $\pH[L'](g_0)\ne\pH[L'](g_0')$. Thus, since
$|\pi(\pH[L'](S)\setminus\{\pH[L'](g_0')\})|=|\pi(G_0'/L')|\ge 4$, it follows from
Proposition \refl{david} that $|\pi(\pH[L'](S)\setminus\{\pH[L'](g_0)\})|=1$,
which is equivalent to
$$ \pi((S+L')\setminus(g_0+L'))=L'. $$
Hence, since $L<L'\leq G_0$, we have $(S+L')\setminus(g_0+L')=G\setminus G_0$, and it
follows that $L'=G_0$, whence $S+L'=S+G_0=G$, contradicting the assumption
$L'\in{\mathcal L}^\ast_G(S)$. This establishes uniqueness of $L\in{\mathcal L}^\ast_G(S)$
satisfying \refe{main2-0}.
Clearly, Lemma \refl{main} implies assertion (ii) of Theorem \reft{main2}, and therefore
it remains to establish assertion (i). To this end, suppose that
$L\in{\mathcal L}^\ast_G(S)$ satisfies \refe{main2-0}, and that $G_0$ and $g_0$ are as
in (L1)--(L4). We will show that $\eta_G(S)\ge|S|$ and
$\kappa(\CP(S))=\lambda_G(S)=\lambda^*_G(S)=|S+L|-|L|$.
Suppose that there exists $H\in{\mathcal H}_G(S)$ with
\begin{equation}\label{e:main2-2}
|S+H|-|H| \le |S|-1.
\end{equation}
Then $H\le G_0$ by Lemma \refl{LcapH}. If $H\le L$, then from $(S+2\ast
G)\cap H\neq\varnothing$ we obtain $(S+2\ast G)\cap L\neq\varnothing$, contradicting
(L1)--(L4). Therefore $H\nleq L$.
Let $S_0=(g_0+L)\cap S$, and denote by $t$ the number of $H$-cosets
intersecting $S_0$. In view of \refe{main2-2}, and taking into account
$H\le G_0$ and $H\nleq L$, we obtain
$$ |H|-1 \ge |S+H|-|S| \ge |S_0+H|-|S_0|
\ge t(|H|-|H\cap L|) \ge t|H|/2. $$
Hence $t=1$. Thus, since $S_0$ is not contained in a coset of a proper
subgroup of $L$, we conclude that $L\le H$. Consequently, from (L1)--(L3) we
get $2\ast(G/H)=2\ast(G_0/H)$, and thus, in view of
$(S+2\ast G)\cap H\neq\varnothing$ and taking into account (L4), we have
\begin{equation}\label{e:main2-3}
\varnothing \ne \pH(S) \cap 2\ast(G/H) = \pH(S) \cap 2\ast(G_0/H)
= \{\pH(g_0)\} \cap 2\ast (G_0/H).
\end{equation}
Since $\pH[L](g_0)$ generates $G_0/L$, it follows from $H\ge L$ that
$\pH(g_0)$ generates the cyclic $2$-group $G_0/H$. Thus \refe{main2-3}
implies that $H=G_0$, whence $S+H=S+G_0=G$, a contradiction. So we conclude
that there are no subgroups $H\in{\mathcal H}_G(S)$ satisfying \refe{main2-2}; that
is, $\eta_G(S)\ge |S|$. Thus it follows by Lemma \refl{main} that
\begin{equation}\label{e:main2-4}
\kappa(\CP(S))=\min\{\lambda^\ast_G(S),|S|\}.
\end{equation}
The uniqueness of $L$, established above, implies that
$\lambda^\ast_G(S)=|S+L|-|L|$, and now \refe{main2-0} shows that
$$ \kappa(\CP(S)) \le \lambda_G(S) \le \lambda^\ast_G(S) = |S+L|-|L| \le |S|-1. $$
Comparing this with \refe{main2-4}, we see that, indeed, the first two
inequalities are actually equalities.
\end{proof}
Finally, we prove Theorem \reft{simple}.
\begin{proof}[Proof of Theorem \reft{simple}]
By Theorem \reft{main2}, we have $\kappa(\CP(S))=|S+L|-|L|$ with a subgroup
$L\le G$, belonging to either ${\mathcal H}_G(S)$ or ${\mathcal L}^\ast_G(S)$. Let $F\le G$ be
a subgroup that minimizes $|S+F|-|F|$ over all subgroups with $S+F\ne G$.
Assuming that
\begin{equation}\label{e:simple1}
|S+F|-|F| < |S+L|-|L| \le |S|-1,
\end{equation}
we will obtain a contradiction; evidently, this will prove the assertion.
From Lemma \refl{geom} and \refe{simple1}, it follows that either $S+F+L=S+L$
or $S+F+L=S+F$; in either case,
\begin{equation}\label{e:simple2}
S+F+L \ne G.
\end{equation}
Suppose first that $|L|\le |F|$. Then Lemma \refl{geom} yields $S+F+L=S+F$,
and thus
$$ |S+F+L|-|F+L| = |S+F|-|F+L|. $$
The minimality of $F$ now implies that $|F+L|=|F|$, whence $L\le F$. If
$L\in{\mathcal H}_G(S)$, then it follows in view of $L\le F$ and $S+F\ne G$ that
$F\in{\mathcal H}_G(S)$, implying $\kappa(\CP(S))\le|S+F|-|F|$. However, since
$\kappa(\CP(S))=|S+L|-|L|$, this contradicts \refe{simple1}. Therefore
we may assume $L\in{\mathcal L}^\ast_G(S)$. Let $G_0$ be the subgroup from the
definition of ${\mathcal L}^\ast_G(S)$. By Lemma \refl{LcapH} we then have $L\le F\le
G_0$, whence
$$ |S+F| = |G\setminus G_0| + |F| = (|S+L|-|L|) + |F|, $$
which contradicts \refe{simple1} once more.
Next, suppose that $|F|\le|L|$. Thus it follows by Lemma \refl{geom} that
$S+L=S+F+L$. Hence
\begin{equation}\label{e:simple3}
|S+F+L|-|F+L| = |S+L|-|F+L|.
\end{equation}
If $L\in{\mathcal H}_G(S)$, then it follows in view of $L\le F+L$ and \refe{simple2}
that $F+L\in {\mathcal H}_G(S)$; now \refe{simple3} and the minimality of $L$ give
$|F+L|=|L|$, leading to $F\le L$. We proceed to show that this holds in the
case $L\in{\mathcal L}^\ast_G(S)$ as well. In this case, in view of \refe{simple3} and
\refe{simple1}, Lemma \refl{LcapH} gives $F+L\le G_0$, where $G_0$ is the
subgroup from the definition of ${\mathcal L}^\ast_G(S)$. Thus (as in the previous
paragraph)
$$ |S+F+L| = |G\setminus G_0| + |F+L| = (|S+L|-|L|) + |F+L|. $$
Hence, since $|S+F+L|=|S+L|$, we obtain $|F+L|=|L|$, and therefore $F\le L$,
as desired.
We have just shown that $F\le L$ holds true in either case. Consequently,
from $|S+L|-|L|<|S|\le |S+F|$ and divisibility considerations, it follows
that indeed $|S+L|-|L|\le|S+F|-|F|$, contradicting \refe{simple1} and
completing the proof.
\end{proof}
\section{Introduction}
Understanding the physical processes in the very early Universe is a crucial ingredient for deciphering the physics at energies that we cannot currently probe in terrestrial experiments. While most observables have been washed away by the thermal bath of the pre-recombination era and do not have observational consequences, \emph{three observables} provide crucial information about the physics at high energies. These are the spectrum of energy density fluctuations~\cite{Book-Kolb.Turner,Book-Mukhanov,Book-Padmanabhan-III,Book-Gorbunov.Rubakov}, the excess of baryons over antibaryons (baryon asymmetry)~\cite{1999-Riotto.Trodden,2003-Dine.Kusenko-RevModPhy,2006-Cline-arXiv,2011-Riotto-JPCS,2007-Yoshimura-JPSJ,2015-Cui-ModPhyLetA,2020-Garbrecht-PPNP}, and coherent large-scale magnetic fields~\cite{2001-Grasso.etal-PhyRep,2002-Widrow-Rev.Mod.Phys.,2013-Durrer.Neronov-Arxiv,2016-Subramanian-Arxiv,2004-Giovannini-IJMPD,2020-Vachaspati-arXiv}.
The inflationary paradigm provides an attractive mechanism to generate the primordial density perturbations that lead to anisotropies in the cosmic microwave background (CMB) and the formation of large-scale structures~\cite{Book-Kolb.Turner,Book-Mukhanov,Book-Padmanabhan-III,Book-Gorbunov.Rubakov}. During inflation, the early Universe underwent an accelerated expansion, stretching quantum fluctuations to super-horizon scale density perturbations. Besides providing a causal mechanism to density perturbations, inflation also solves the standard cosmological model's long-standing puzzles, such as the horizon, flatness, and monopole problems.
The predictions of inflation are in good agreement with the present-day observations of CMB anisotropies and polarization~\cite{2018-Planck}. However, within standard electrodynamics, inflation cannot provide a mechanism to generate large-scale magnetic fields. This is because, in 4 dimensions, the electromagnetic field is conformally invariant. Since FRW models are conformally flat, the electromagnetic vacuum in FRW is the same as in Minkowski space-time. Hence, standard electromagnetic fields generate negligible magnetic fields. More importantly, even if the baryon asymmetry or cosmological magnetic fields existed before the epoch of inflation, these would have been diluted by a factor of $e^{-3 N}$, where $N$ is the number of e-foldings of inflation~\cite{2016-Fujita.Kamada-PRD,2019-Domcke.etal-JCAP,2014-Long.Sabancilar.Vachaspati-JCAP}.
The present Universe is observed to contain essentially only matter and no antimatter, except for the rare antiparticles produced by cosmic rays. The asymmetry between baryons and antibaryons, referred to as Baryon Asymmetry of the Universe (BAU), can be expressed as~\cite{PartDataGroup,2018-Planck}
\begin{equation}
\label{def:etaObs}
\eta_B =\frac{n_{b}-n_{\bar{b}}}{n_{\gamma}}=\left\{\begin{array}{r}
{[5.8-6.6] \times 10^{-10}}~~\text{(from BBN)} \\
(6.09 \pm 0.06) \times 10^{-10}~~\text{(from CMB)}
\end{array}\right.
\end{equation}
where $n_{b}, n_{\bar{b}}, n_{\gamma}$ refer to the number densities of baryons, antibaryons, and photons, respectively.
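For later reference, note that the asymmetry is often normalized by the entropy density rather than the photon number density; using the standard present-day relation $s \simeq 7.04\, n_{\gamma}$~\cite{Book-Kolb.Turner}, the CMB value above corresponds to $n_B/s \simeq 8.7 \times 10^{-11}$, i.e., of the same order, $\sim 10^{-10}$. We use the entropy normalization in Sec.~\ref{sec:baryon_asymm}.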
Magnetic fields permeate the Universe. Coherent magnetic fields in spiral galaxies and clusters of galaxies have a magnitude of the order of $\mu$Gauss~\cite{2001-Grasso.etal-PhyRep,2013-Durrer.Neronov-Arxiv,2016-Subramanian-Arxiv,2002-Widrow-Rev.Mod.Phys.}. There is also indirect evidence of a lower limit of order $10^{-16}~$G for the magnetic field contained in the voids between galaxies and clusters of galaxies~\cite{2010-Neronov.Vovk-Sci}.
The origin of primordial magnetic fields and the baryon asymmetry of the Universe are still unresolved issues and require physics beyond the standard models of cosmology and particle physics. This leads to the following questions: As the Universe cooled from its earliest moments to today, what were the processes responsible for generating the baryon asymmetry and large-scale magnetic fields?
Are these processes cosmological, particle-physics, or both in origin?
Since both require physics beyond the standard model, there is a \emph{tantalizing possibility} that the same new physics can solve both. In this work, we consider such a possibility and show that the mechanism that leads to primordial helical magnetic fields also leads to baryogenesis at the beginning of the radiation-dominated epoch. Interestingly, our mechanism \emph{also requires} stretching of the primordial helical magnetic fields to super-horizon scales
during inflation --- the same mechanism that leads to primordial density perturbations.
Before we discuss the model itself, it is necessary to understand the key ingredients needed to generate the baryon asymmetry and magnetic fields, and why the same new physics can potentially solve both these problems~\cite{1967-Sakharov,1996-Davidson-PLB}. In 1967, Sakharov listed three necessary conditions for creating the BAU~\cite{1967-Sakharov,1999-Riotto.Trodden}: (1) baryon number violation, (2) charge ($C$) and charge parity ($CP$) violation, and (3) departure from thermal equilibrium. All three of the Sakharov conditions are satisfied in the Standard Model; however, the electroweak phase transition is not sufficiently strongly first order~\cite{1999-Riotto.Trodden,2003-Dine.Kusenko-RevModPhy,2006-Cline-arXiv,2011-Riotto-JPCS}. The CP-violating effects are not sufficiently pronounced to account for as large a BAU as we observe. As a result, there must have been additional physics beyond the standard model to produce it. This physics could have been operating anywhere between the weak scale and the GUT scale. Depending on how the departure from equilibrium is realized, baryogenesis scenarios are divided into two categories: (a) departure driven by the expansion of the Universe itself, or (b) departure driven by a fast phase transition and bubble nucleation. In particular, the latter concerns the electroweak baryogenesis schemes, while the former is typical for a GUT
type baryogenesis or leptogenesis~\cite{1999-Riotto.Trodden,2003-Dine.Kusenko-RevModPhy,2006-Cline-arXiv,2011-Riotto-JPCS}.
More than two decades ago, Davidson pointed out an interesting relation between the primordial magnetic field and Sakharov's conditions~\cite{1996-Davidson-PLB}. She argued that the presence of background magnetic fields in the early Universe could lead to the breaking of $C$, $CP$, and $SO(3)$ symmetries and of thermal equilibrium. Specifically, she argued that the presence of the magnetic fields leads to the following three conditions: (1) there should be some moderately out-of-thermal-equilibrium dynamics because, in equilibrium, the photon distribution is thermal and there are no particle currents to sustain a ``long-range'' field; (2) since $\textbf{B}$ is odd under $C$ and $CP$, the presence of the magnetic field leads to $CP$ violation; (3) since the magnetic field is a vector quantity, it chooses a particular direction and hence breaks the isotropy (rotational invariance). Thus, Davidson provided a possible link between the presence of magnetic fields and the conditions required for baryogenesis~\cite{1996-Davidson-PLB}.
Davidson's conditions are necessary \emph{but not} sufficient. One key missing ingredient, as we show, is the requirement of \emph{primordial helical magnetic fields} (details in Sec. \ref{sec:Baryo-magnetic}). Primordial helical magnetic fields are generated by the terms that break conformal invariance and parity symmetry~\cite{2001-Vachaspati-PRL,2003-Caprini.etal-PRD,2005-Campanelli-Giannotti-PRD,2018-Sharma.Subramanian.Seshadri.PRD,2009-Caprini.Durrer.Fenu-JCAP,2009-Campanelli-IJMPD,2019-Shtanov-Ukr.PJ,2020-Kushwaha.Shankaranarayanan-PRD}.
If measured, primordial helical magnetic fields would provide evidence of CP violation in the early Universe. Interestingly, the presence of primordial helical fields leads to a non-zero Chern-Simons number~\cite{2016-Fujita.Kamada-PRD,2015-Anber.Sabancilar-PRD,2016-Kamada.Long-PRD} and, eventually, a change in the Fermion number.
Recently, the current authors constructed a simple model of inflationary magnetogenesis that couples the electromagnetic fields with the Riemann tensor~\cite{2020-Kushwaha.Shankaranarayanan-PRD}. We showed that this model leads to a primordial helical magnetic field where one helical mode is enhanced while the other mode is suppressed. The model has two key advantages over other models~\cite{2005-Campanelli-Giannotti-PRD,2018-Sharma.Subramanian.Seshadri.PRD,2009-Caprini.Durrer.Fenu-JCAP,2009-Campanelli-IJMPD,2019-Shtanov-Ukr.PJ}: First, it does not require the coupling of the electromagnetic field with any scalar field. Hence, unlike the Ratra model~\cite{1991-Ratra-Apj.Lett,2019-Shakeri.etal-PRD,2009-Demozzi.etal-JCAP}, there is no strong-coupling problem caused by the extra degrees of freedom.
Second, the model is free from backreaction for generic slow-roll inflation models~\cite{2020-Kushwaha.Shankaranarayanan-PRD}. In Ref.~\cite{2018-Sharma.Subramanian.Seshadri.PRD}, the authors have shown that the strong-coupling problem in the Ratra model~\cite{1991-Ratra-Apj.Lett} can be avoided by choosing a particular coupling function.
In Ref.~\cite{2020-Kushwaha.Shankaranarayanan-PRD}, we used the general effective field theory framework of gravity coupled to the Standard Model of particle physics to obtain the leading order gravity terms that couple to the standard model Bosons~\cite{2019-Ruhdorfer.etal-JHEP}. As in that work, here we limit ourselves to the mass dimension 6-operators coupling to the gauge field, specifically, to the electromagnetic field.
We show that the generation of primordial helical magnetic fields from the above model leads to baryogenesis. Since the model produces helical fields over large length scales, we show that the Chern-Simons (CS) number density is non-zero (details in Sec.~\ref{sec:Baryo-magnetic}). Considering that the model generates primordial helical modes at all length scales, we focus on the last ten e-foldings of inflation. This is because the modes that leave the Hubble radius during the last 10 e-foldings of inflation will reenter the Hubble radius after reheating; these primordial helical modes will lead to baryogenesis just at the beginning of the radiation-dominated epoch. Furthermore, we show that the BAU is independent of inflation models and depends \emph{only on} the energy scale at the exit of inflation and the reheating temperature.
In Sec.~\ref{sec:Baryo-magnetic}, we discuss the relation between primordial helical magnetic fields and baryogenesis, in particular, the chiral anomaly in the presence of the magnetic field, and obtain the expression for the Chern-Simons number density. In Sec.~\ref{sec:Model}, we discuss the generation of primordial helical modes and show that primordial helical modes lead to a non-zero CS number density. Then we evaluate the baryon asymmetry parameter in Sec.~\ref{sec:baryon_asymm}. Sec.~\ref{sec:conc} contains the implications of the results. Appendices contain the details of the calculations.
In this work, we use $(+,-,-,-)$ signature for the 4-D space-time metric. Greek alphabets denote the 4-dimensional space-time coordinates, and Latin alphabets denote the 3-dimensional spatial coordinates. \emph{A prime} stands for a derivative with respect to conformal time $(\eta)$ and \emph{subscript} $,i$ denotes a derivative w.r.t spatial coordinates. We use the Heaviside-Lorentz units such that $c = k_B = \epsilon_0 = \mu_0 = 1$. The reduced Planck mass is denoted by $M_{\rm P} = (8 \pi G)^{-1/2}$.
\section{Conditions on baryogenesis in the presence of primordial magnetic field}
\label{sec:Baryo-magnetic}
As we mentioned in the introduction, Davidson's conditions are necessary but not sufficient. One key missing ingredient is the requirement of \emph{primordial helical magnetic fields}. In this section, we briefly discuss this.
In the very early Universe, just after the exit of inflation, the energy scale of the Universe was close to $10^{14}~{\rm GeV}$. All particles, including Fermions, were highly relativistic and can be treated as massless.
Although the massless Dirac equation is invariant under chiral transformations in the classical theory, the chiral symmetry is broken due to quantum mechanical effects in the presence of the external electromagnetic fields. This phenomenon, known as the quantum axial anomaly, affects the transport properties of the chiral medium, leading to experimentally accessible signatures such as the chiral magnetic effect~\cite{2008-Fukushima.etal-PRD} and the
chiral separation effect~\cite{2005-Metlitski.Zhitnitsky-PRD}.
In the early Universe, the generation of non-zero primordial helical magnetic fields leads, via the chiral anomaly, to an imbalance between left- and right-handed fermions.
In the presence of an electromagnetic field in curved space-time, the chiral anomaly is given by the following equation~\cite{2009-Parker.Toms-Book,2014-Barrie.Kobakhidze-JHEP}:
\begin{align}\label{eq:chiralAnomaly}
\nabla_{\mu}J_A^{\mu} = -\frac{1}{384 \pi^2} \epsilon^{\mu\nu\rho\sigma} R_{\mu\nu\alpha\beta} R^{\alpha\beta}\,_{\rho\sigma} + \frac{e^2}{16 \pi^2} \epsilon^{\mu\nu\alpha\beta} F_{\mu\nu} F_{\alpha\beta}
\end{align}
where $J^{\mu}_A$ is the chiral current, $R_{\rho\sigma}\,^{\alpha\beta}$ is the Riemann tensor and $A_{\mu}$ is the four-vector potential of the electromagnetic field, $F_{\mu\nu} = \nabla_{\mu}A_{\nu} - \nabla_{\nu}A_{\mu} $. $\epsilon^{\mu\nu\rho\sigma} = \frac{1}{\sqrt{-g}}\, \eta^{\mu\nu\rho\sigma}$ is a fully antisymmetric tensor, $\eta^{\mu\nu\rho\sigma}$ is Levi-Civita symbol whose values are $\pm1$ and we set $\eta^{0123} = 1 = - \eta_{0123}$. It is easy to see from the above equation that the
anomaly contribution from the electromagnetic field and the gravity act independently and, for most parts, can be treated independently.
In the case of flat FRW background in conformal time ($\eta$):
\begin{align}\label{eq:FRW}
ds^2 = a^2(\eta) \,(d\eta^2 - \delta_{ij} dx^i dx^j)
\end{align}
the contribution of the first term in the RHS of Eq.~(\ref{eq:chiralAnomaly}) vanishes, i.e.,
\begin{align}
\epsilon^{\mu\nu\rho\sigma} R_{\mu\nu\alpha\beta} R^{\alpha\beta}\,_{\rho\sigma} = 0 \, .
\end{align}
It can be shown that even at first order, the gravitational contribution vanishes, and a non-zero contribution arises only at second order~\cite{2006-Alexander.Peskin.Jabbari-PRL}. Due to the presence of the antisymmetric tensor, the gravitational fluctuations lead to gravitational birefringence and can lead to a net chiral current.
In the flat FRW background, the second term in the RHS of Eq.~(\ref{eq:chiralAnomaly}) is given by:
\begin{align}
\frac{e^2}{16 \pi^2} \epsilon^{\mu\nu\alpha\beta} F_{\mu\nu} F_{\alpha\beta} = \frac{e^2}{4 a^4} \epsilon_{ijk} \partial_j A_k \, \partial_0 A_i \, .
\end{align}
In the presence of the magnetic field, this term is non-zero and hence leads to
a net chiral current.
Thus, if we consider terms only up to first order in perturbations,
\emph{only} the second term in the RHS of Eq.~(\ref{eq:chiralAnomaly}) contributes and the chiral anomaly equation reduces to:
\begin{align}\label{eq:chiralAnomaly_FF}
\partial_{\mu}\left( \sqrt{-g} J_A^{\mu} \right) = \frac{e^2}{16 \pi^2} \eta^{\mu\nu\alpha\beta} F_{\mu\nu} F_{\alpha\beta} \, ,
\end{align}
where we have used
\[
\nabla_{\mu}J^{\mu}_A = \frac{1}{\sqrt{-g}} \partial_{\mu} \left( \sqrt{-g} J^{\mu}_A \right), \quad \epsilon^{\mu\nu\alpha\beta} = \frac{1}{\sqrt{-g}} \eta^{\mu\nu\alpha\beta} \, .
\]
Note that during inflation, the LHS of Eq.~(\ref{eq:chiralAnomaly_FF}) is zero,
and, due to the exponential expansion, standard model particles are diluted. However, if we can generate non-zero primordial helical fields during inflation, then
these fields can lead to a chiral current in the radiation-dominated epoch (or during reheating, when the standard model particles are created). To see this, we rewrite Eq.~(\ref{eq:chiralAnomaly_FF})
using $\eta^{\mu\nu\alpha\beta} F_{\mu\nu} F_{\alpha\beta} = 4 \partial_{\mu} \left( \eta^{\mu\nu\alpha\beta} A_{\nu} \partial_{\alpha} A_{\beta} \right)$, i.e.,
\begin{align}\label{eq:chiral_topo_current}
\partial_{\mu} \left( \sqrt{-g} J^{\mu}_A \right) = \frac{e^2}{4 \pi^2} \partial_{\mu} \left( \eta^{\mu\nu\alpha\beta} A_{\nu} \partial_{\alpha} A_{\beta} \right) = \frac{e^2}{4 \pi^2} \partial_{\mu} \left( \sqrt{-g} K^{\mu} \right)
\end{align}
where
\[
K^{\mu} = \frac{\eta^{\mu\nu\alpha\beta} }{\sqrt{-g} } A_{\nu} \partial_{\alpha} A_{\beta}
\]
is the topological current. For FRW background, the components are given by
\begin{align}
K^0 = a^{-4}(\eta) \, \epsilon_{ijk} A_{i} \partial_j A_k \qquad \text{and} \qquad K^i = a^{-4}(\eta) \, \epsilon_{ijk} A_{j} \partial_0 A_k.
\end{align}
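For completeness, the total-derivative identity invoked above follows directly from the antisymmetry of $\eta^{\mu\nu\alpha\beta}$:
\begin{align*}
\eta^{\mu\nu\alpha\beta} F_{\mu\nu} F_{\alpha\beta}
= 4\, \eta^{\mu\nu\alpha\beta}\, \partial_{\mu} A_{\nu}\, \partial_{\alpha} A_{\beta}
= 4\, \partial_{\mu} \left( \eta^{\mu\nu\alpha\beta} A_{\nu} \partial_{\alpha} A_{\beta} \right) ,
\end{align*}
where the first equality uses the antisymmetry of $\eta^{\mu\nu\alpha\beta}$ in the pairs $(\mu\nu)$ and $(\alpha\beta)$, and the second follows because $\eta^{\mu\nu\alpha\beta}\, \partial_{\mu} \partial_{\alpha} A_{\beta} = 0$ (symmetric second derivatives contracted with an antisymmetric tensor).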
Solving Eq.~(\ref{eq:chiral_topo_current}), we get,
\[
J^{\mu}_A = \frac{e^2}{4 \pi^2} K^{\mu} \, .
\]
Thus, the net baryon number density, $n_B = n_b - n_{\bar{b}} = a(\eta) \langle 0 | J^0_A | 0 \rangle$, is related to the Chern-Simons number density $n_{CS} = \langle 0 | K^0 | 0 \rangle$ as~\cite{2014-Barrie.Kobakhidze-JHEP},
\begin{align}\label{eq:n_B-n_CS-definition}
n_B \equiv \frac{e^2}{4\pi^2} a(\eta) n_{CS}.
\end{align}
Note that $n_{CS} = 0$ at the start of inflation, and due to the absence of standard model particles $n_B = 0$ during inflation. Using the expression for $K^0$, we can write the Chern-Simons number density as
\begin{align}\label{eq:n_cs-relation}
n_{CS} = \frac{1}{a^4} \epsilon_{i j k} \langle 0 | A_i \, \partial_j A_k | 0 \rangle = \frac{1}{a^4}\int_{\mu}^{\Lambda} \frac{dk}{k} \frac{k^4}{2\pi^2} \left( | A_+ |^2 - |A_-|^2 \right) \, ,
\end{align}
where $\Lambda$ and $\mu$ set the possible energy range (or epoch) during which the baryon asymmetry is generated after inflation, and $A_{\pm}$ refer to the positive and negative helicity modes of the electromagnetic field. The above expression illuminates a useful relation between primordial helical magnetic fields generated during inflation and baryogenesis:
First, we see that the contribution to $n_{CS}$ is from all the modes that reenter the horizon at the beginning of the radiation-dominated epoch. Thus, the value of $n_{CS}$ depends on the upper cut-off $\Lambda$.
Second, the expression corresponds to the total Chern-Simons number density generated from the modes
in the energy range $[\mu, \Lambda]$ --- when these helical modes re-enter during the radiation-dominated epoch.
The helicity modes $A_+ $ and $ A_-$ are generated during inflation, and ${a^{-4}(\eta)}$ is the dilution due to the expansion of the Universe during this epoch.
Finally, $n_{CS}$ vanishes if the primordial magnetic fields are non-helical, i.e., $|A_+| = |A_-|$. Hence, as mentioned at the beginning of this section, the generation of non-helical magnetic fields will not lead to baryogenesis. Thus, the key missing ingredient of Davidson's argument is the requirement of primordial helical magnetic fields.
In the following two sections, we explicitly evaluate the Chern-Simons number for our model and show that it is not sensitive to inflationary and reheating dynamics.
\section{The model and the primordial helical fields}
\label{sec:Model}
We consider the following action \cite{2020-Kushwaha.Shankaranarayanan-PRD} :
\begin{align}\label{eq:action}
S = S_{\rm{Grav}} + S_{\phi} + S_{\rm{EM}} + S_{\rm CB}
\end{align}
where $ S_{\rm{Grav}}$ is the Einstein-Hilbert action
\begin{align}\label{eq:EH-action}
S_{\rm Grav} = -\frac{M_{\rm P}^2}{2}\int d^4x \sqrt{-g} \, R \, ,
\end{align}
and $ S_{\phi} $ is the action for the minimally coupled, self-interacting canonical scalar field:
\begin{align}\label{eq:inflation-action}
S_{\phi} = \int d^4x \sqrt{-g} \left[ \frac{1}{2} \partial_{\mu}\phi \partial^{\mu}\phi - V(\phi) \right].
\end{align}
$S_{\rm{EM}}, S_{\rm CB}$ refer to the standard electromagnetic (EM) action and the conformal-breaking part of the electromagnetic terms, respectively, which are given by:
\begin{align}\label{eq:S_EM}
S_{\rm{EM}} &= -\frac{1}{4} \int d^4x \, \sqrt{-g} \, F_{\mu\nu} F^{\mu\nu}, \hspace{0.5cm}\\
%
\label{eq:S_h}
S_{\rm{CB}} &= - \frac{1}{M^2} \,\int d^4x \, \sqrt{-g} \, R_{\rho\sigma}\,^{\alpha\beta} F_{\alpha\beta} \, \tilde{F}^{\rho\sigma} = - \frac{1}{M^2} \,\int d^4x \, \sqrt{-g} \, \tilde{R}^{\mu\nu\alpha\beta} F_{\alpha\beta} \, F_{\mu\nu} \, ,
\end{align}
where $\tilde{R}^{\mu\nu\alpha\beta} = \frac{1}{2}\epsilon^{\mu\nu\rho\sigma} R_{\rho\sigma}\,^{\alpha\beta}$ is the dual of Riemann tensor and $\tilde{F}^{\rho\sigma} = \frac{1}{2} \epsilon^{\mu\nu\rho\sigma}F_{\mu\nu} $ is the dual of $F_{\mu\nu}$. The standard electromagnetic action $S_{\rm{EM}}$ is conformally invariant; however, the presence of Riemann curvature in $S_{\rm CB}$ breaks the conformal invariance. $M$ is the energy scale, which sets the scale for the breaking of conformal invariance. Note that the signs of $S_{\rm{EM}}$ and $S_{\rm{CB}}$ are chosen such that the electromagnetic energy density is positive.
In Ref. \cite{2019-Ruhdorfer.etal-JHEP}, the authors systematically showed that the first gravity operators appear at mass dimension 6 in the series expansion of the coupling between gravity and the standard model of particle physics. These operators only couple to the standard model Bosons. They also showed that (i) no new gravity operators appear at mass dimension 7, (ii) in mass dimension 8, the standard model Fermions
appear, and (iii) the coupling between the scalar (Higgs) field and the standard model gauge Bosons appears only at mass dimension 8. Since mass dimension 8 operators are highly suppressed, as in Ref. \cite{2020-Kushwaha.Shankaranarayanan-PRD}, we limit ourselves to mass dimension 6 operators. Due to the Riemann coupling, the effective coupling in the FRW background is time-dependent, i.e., $1/M_{\rm eff} \sim H/M$.
At the current epoch, where
$H_0 \approx 10^{-42}~\rm{GeV}$, and assuming the parameter $M \approx 10^{17}~\rm{GeV}$, we obtain ${H_0}/{M} \sim 10^{-59}$. Therefore, the coupling (through the Riemann tensor) is tiny, and the non-minimal coupling term in the electromagnetic action will have a significant contribution only in the early Universe.
We would also like to point out that the coupling term
($S_{\rm CB}$) is tiny near the Schwarzschild radius of a solar-mass black hole
(for details, see Appendix \ref{app:blackhole}).
We assume that the scalar field ($\phi$) dominates the energy density during inflation and leads to $60 \, - \, 70$ e-foldings of inflation with $H_{\rm Inf} \sim 10^{14}~{\rm GeV}$. Specifically, we consider power-law inflation in which the scale factor (in conformal time) is~\cite{2004-Shankaranarayanan.Sriramkumar-PRD}:
\begin{align}\label{eq:powerLaw}
a(\eta) = \left( - \frac{\eta}{\eta_0} \right)^{(\beta+1)}
\end{align}
where, the constant $\eta_0$ denotes the scale of inflation and $\beta \leq -2$. $\beta = -2$ corresponds to exact de Sitter. During inflation, $\eta \in (-\infty, 0)$. For slow-roll inflation $ \beta \approx -2-\epsilon$ and $ \mathcal{H} \equiv a^{\prime}/{a} \approx - (1 + \epsilon)/{\eta}$, where
$\mathcal{H}$ is the Hubble parameter in conformal time and
$\epsilon $ is the slow roll parameter. For our discussion below, we also assume that $10^{-3} \leq (H_{\rm Inf}/M) \leq 1$~\cite{2018-Nakonieczny-JHEP,2016-Goon.Hinterbichler-JHEP,2016-Goon-JHEP,2013-Balakin.etal-CQG}.
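For instance, with $H_{\rm Inf} \sim 10^{14}~{\rm GeV}$, this assumed window translates to $10^{14}~{\rm GeV} \lesssim M \lesssim 10^{17}~{\rm GeV}$, which brackets the range of $M$ used in the estimates of Sec.~\ref{sec:baryon_asymm}.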
The equation of motion of the gauge field can be obtained by varying the action (\ref{eq:action}) with respect to $A^{\mu}$. In the Coulomb gauge ($A^{0} = 0, \partial_iA^i = 0$), we have:
\begin{align}\label{eq:equation_of_motion}
A_i^{\prime\prime} + \frac{4 \, \epsilon_{i j l}}{M^2} \, \left( \frac{a^{\prime\prime\prime}}{a^3} - 3\frac{a^{\prime\prime} a^{\prime} }{a^4} \right) \partial_j A_l
- \partial_j \partial_j A_i = 0
\end{align}
where $\epsilon_{i j l}$ is the Levi-Civita symbol in 3-D Euclidean space. The above equation is different from other models in the literature and leads to a distinct evolution of the magnetic field fluctuations in comparison to non-minimally coupled scalar field models~\cite{2020-Kushwaha.Shankaranarayanan-PRD}. In the helicity basis, the above equation reduces to (see Appendix \ref{app:helicity_basis}):
\begin{align}\label{eq:eom_helicity}
A_h^{\prime\prime} + \left[ k^2 - \frac{4kh}{M^2} \,
\left( \frac{a^{\prime\prime\prime}}{a^3} - 3\frac{a^{\prime\prime} a^{\prime} }{a^4} \right) \right] A_h= 0 \, .
\end{align}
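To make the strength of the parity-violating term explicit, it is instructive to evaluate the curvature combination above for the power-law scale factor \eqref{eq:powerLaw}; the following short computation is included purely for illustration. Using $a^{\prime}/a = (\beta+1)/\eta$, $a^{\prime\prime}/a = (\beta+1)\beta/\eta^2$, and $a^{\prime\prime\prime}/a = (\beta+1)\beta(\beta-1)/\eta^3$, we obtain
\begin{align*}
\frac{a^{\prime\prime\prime}}{a^3} - 3\frac{a^{\prime\prime} a^{\prime} }{a^4}
= \frac{(\beta+1)\beta(\beta-1) - 3(\beta+1)^2 \beta}{a^2 \eta^3}
= - \frac{2\beta(\beta+1)(\beta+2)}{a^2 \eta^3} \, ,
\end{align*}
which vanishes in the exact de Sitter limit ($\beta = -2$) and reduces to $\simeq 4\epsilon/(a^2\eta^3)$ for slow roll ($\beta = -2-\epsilon$) at leading order in $\epsilon$. Thus, the helicity-dependent term in Eq.~\eqref{eq:eom_helicity} is controlled by the deviation from exact de Sitter expansion.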
For the two helicity states ($h = \pm$), the above expression leads to two different evolution equations [cf. Eqs.~(\ref{eq:sup_mode_h+}, \ref{eq:sup_mode_h-})]. From Eq. \eqref{eq:n_cs-relation} we see that to obtain an appreciable value of the Chern-Simons number ($n_{CS}$), the difference between the two helicity states should be non-zero; it is maximal
if one helicity mode is enhanced compared to the other.
In our previous work \cite{2020-Kushwaha.Shankaranarayanan-PRD}, we showed that for a range of parameters of interest, the negative helicity mode decays while the positive helicity mode is enhanced. Hence, the negative helicity mode ($A_-$) will have a negligible contribution and can be set to zero, i.e.,
$|A_-| = 0$. Using the series expansion of the Bessel functions, to leading order, the positive helicity mode takes the following form
(\ref{eq:sup_mode_h+}):
\begin{align}\label{eq:A+Series}
A_+(\tau,k) &= C \, k^{\frac{1}{4\alpha}}
- C_2 \frac{\mathcal{F}^{-1} }{\pi}
\Gamma \left( \frac{1}{2\alpha} \right) \,k^{-\frac{1}{4\alpha}} \tau^{ - \frac{1}{\alpha} }
%
\end{align}
where,
\begin{align}\label{eq:C_F}
| C | \approx \varsigma^{-1} |C_2| \approx
\frac{M^{3/2} \eta_0}{\sqrt[4]{\eta_{end}\, 10^{45}\, {\rm GeV}^3}},~~
\mathcal{F} \approx |\varsigma|^{-1} \approx \sqrt{M^2 \eta_0}, ~~
\alpha = -\frac{1}{2} -\epsilon
\end{align}
For details, see Appendix \ref{app:Helical}.
Our model generates primordial magnetic fields through the non-minimal coupling of the electromagnetic field. The model requires inflation. Inflation generates density perturbations at all scales and provides a causal mechanism for structure formation. Similarly, our model generates magnetic fields at all length scales, including the current horizon radius~\cite{2001-Grasso.etal-PhyRep,2013-Durrer.Neronov-Arxiv,2016-Subramanian-Arxiv,2002-Widrow-Rev.Mod.Phys.,2004-Giovannini-IJMPD}. This has to be contrasted with the models where the magnetic field is generated during recombination. In these models, the coherence scale of the generated fields cannot exceed the size of the horizon radius at that time.
In Appendix \ref{app:Helical}, we have plotted the power spectrum of the present-day helical magnetic field ($B_0$) as a function of $k$. Assuming $M = 10^{17}~\rm{GeV}$, our model predicts primordial helical magnetic fields of strength $10^{-20}~\rm{G}$ on Gpc scales at the current epoch. From Fig.~\ref{fig:PowerSpectrum}, we can see that our model predicts a present-day helical magnetic field of strength $10^{-15}~\rm{G}$ on Mpc scales. The primordial fields generated from our model are within the upper bounds on the strength of the seed magnetic fields needed to explain the current galactic magnetic fields~\cite{2010-Kahniashvili.Ratra.etal-PRD}. These primordial fields are amplified by the dynamo mechanism and can lead to the observed magnetic fields; hence, our model requires the dynamo mechanism.
\section{Baryon Asymmetry of the Universe}
\label{sec:baryon_asymm}
In this section, we compute the baryon asymmetry parameter due to the primordial helical magnetic fields. Specifically, we compute it for the maximum helicity modes --- one mode is enhanced compared to the other. Substituting Eq~(\ref{eq:A+Series}) in Eq.~(\ref{eq:n_cs-relation}), we obtain
\begin{align}\label{eq:n_CS-integral}
n_{CS} = \frac{1}{2\pi^2 \, a^4(\eta)}\int^{\Lambda}_{\mu} dk \left( \left| C \right|^2 \, k^{3 + \frac{1}{2\alpha}}
+ \left| C_2 \frac{\mathcal{F}^{-1} }{\pi}
\Gamma \left( \frac{1}{2\alpha} \right) \right|^2\,k^{3 -\frac{1}{2\alpha}} \tau^{ - \frac{2}{\alpha} } \right).
\end{align}
Integrating the above expression, we get
\begin{align}\label{eq:n_CS}
n_{CS} = \frac{1}{2\pi^2 \, a^4(\eta)}\left[ \left. \left| C \right|^2 \, \frac{ k^{4 + \frac{1}{2\alpha}} }{ 4 + \frac{1}{2\alpha}} \right|^{\Lambda}_{\mu}
+ \left. \left| C_2 \frac{\mathcal{F}^{-1} }{\pi}
\Gamma \left( \frac{1}{2\alpha} \right) \right|^2 \,\frac{ k^{4 - \frac{1}{2\alpha}} }{ 4 - \frac{1}{2\alpha}} \tau^{ - \frac{2}{\alpha} } \right|^{\Lambda}_{\mu} \,\, \right].
\end{align}
We want to make the following remarks regarding the above expression: First, the BAU is generated similarly to the inflationary mechanism of the generation of density perturbations. During inflation, the primordial helical magnetic field fluctuations are stretched exponentially and exit the horizon. The modes that reenter during the radiation-dominated epoch are responsible for the generation of baryon asymmetry. Second, the generation of baryon asymmetry does not strongly depend on the reheating dynamics since only the modes that
reenter the Hubble radius during the radiation-dominated epoch are relevant.
Assuming a de-Sitter (or approximately de-Sitter) Universe, from Eq. \eqref{eq:C_F},
we have $\tau^{-\frac{2}{\alpha}} = a^{-2}(\eta)$. Substituting this in
Eq.~(\ref{eq:n_CS}), we see that the second term in the RHS decays faster than the first term by a factor of $a^{-2}(\eta)$. Hence, we can neglect the second term. Substituting the
resulting form of $n_{CS}$ in Eq.~(\ref{eq:n_B-n_CS-definition}) leads to:
\begin{align}\label{eq:n_B}
n_{B} = \frac{e^2}{4\pi^2} \frac{1}{2\pi^2 \, a^3(\eta)} \left. \left| C \right|^2 \, \frac{ k^{4 + \frac{1}{2\alpha}} }{ 4 + \frac{1}{2\alpha}} \right|^{\Lambda}_{\mu}.
\end{align}
To obtain the ranges of $\Lambda$ and $\mu$, we need to know the modes that exited during inflation. For the density perturbations, the largest scales observed in the CMB are produced around 40 - 60 e-foldings before the end of inflation~\cite{2006-Bassett.Tsujikawa.Wands-RevModPhys}.
This is because the adiabatic quantum fluctuations responsible for the density perturbations reenter the Hubble radius around $z \sim 1500$. Hence, in Ref. \cite{2020-Kushwaha.Shankaranarayanan-PRD}, the current authors only looked at primordial helical fields generated around 40 - 60 e-foldings before the end of inflation. However, in this case, we will concentrate on the primordial helical fields that reenter the horizon very early (at the beginning of the radiation-dominated epoch) to generate the required BAU. This means that only the modes that left the horizon during the last 5 to 10 e-foldings of inflation are relevant. Since these modes have already left the Hubble radius during inflation, the reheating dynamics do not alter these primordial helical modes. Hence, the model is insensitive to the reheating dynamics.
Our focus now shifts to explicitly evaluating the BAU for our model. The first step is to evaluate the
dilution factor $a^{-3}$ in Eq. \eqref{eq:n_B}. To do this, we define
$a_{\Lambda}$ (and $a_{\mu}$) as the scale factor at the time when the maximal helicity mode with energy $\Lambda$ (and $\mu$) left the Hubble radius during inflation. Assuming instant reheating, and following the calculations given in Appendix \ref{app:Calculations}, we have
$a_{\mu} = 10^6 a_{\Lambda}$. Taking into account that these modes
exited the Hubble radius during inflation in the last 5 e-foldings, the
dilution factor [prefactor in Eq. \eqref{eq:n_B}]
becomes $a^{-3} \sim 10^{-24}$.
The second step is to obtain the constant $C$.
As discussed in the previous section, for slow-roll inflation, $|C|$ is given by
Eq. \eqref{eq:C_F}. Thus, Eq. \eqref{eq:n_B} reduces to:
\begin{align}\label{eq:n_B-Lambda}
n_{B} \approx \frac{10^{-24} \cdot |C|^2 \cdot e^2}{24\pi^4} \left( \Lambda^3 - \mu^3 \right) \, .
\end{align}
The third step is to compare the theoretically derived quantity ($n_B$)
with the observed value, Eq.~\eqref{def:etaObs}. However, $n_{\gamma}$ is not constant in the early Universe (since the photon chemical potential is zero) and is approximately constant only after the last scattering surface.
Since entropy density per comoving volume is conserved, the quantity $n_B/s$ is better suited for theoretical calculations~\cite{Book-Kolb.Turner}.
Assuming that there was no significant entropy production after the reheating phase, the entropy density in the radiation-dominated epoch is:
\begin{equation}
s \simeq \frac{2\pi^2}{45} g \, T^3_{\rm{RH}} \, ,
\label{def:entdens}
\end{equation}
where $T_{\rm{RH}}$ is the reheating temperature and $g \sim 100$ is the effective number of relativistic degrees of freedom at reheating. From Eqs.~(\ref{eq:n_B-Lambda}) and (\ref{def:entdens}), we can define the following dimensionless BAU parameter:
\begin{align}\label{eq:baryon_Asym}
\eta_B = \frac{n_B}{s} \approx 10^{-24}
\frac{|C|^2 \cdot e^2}{24\pi^4 } \left( \Lambda^3 - \mu^3 \right)
\frac{45}{2\pi^2 g T_{RH}^3} \approx 10^{-29} |C|^2
\frac{\Lambda^3 }{ T^3_{\rm{RH}} }
\end{align}
where in the last expression we have neglected $\mu^3$, i.e., $\Lambda^3 - \mu^3 \approx \Lambda^3$. Appendix \ref{app:Calculations} contains
plots for different values of $\Lambda$ and $\mu$. From these plots, we infer that the results do not strongly depend on the exact value of $\mu$.
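As a quick numerical check, for the representative values $\Lambda = 10^{14}~{\rm GeV}$ and $\mu = 10^{10}~{\rm GeV}$ used in Fig.~\ref{fig:Plot}, we have $\mu^3/\Lambda^3 = 10^{-12}$, so the contribution of the lower limit is indeed negligible.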
Finally, substituting the value of $|C|^2$ (from Eq. \eqref{eq:C_F} and
using the values in Appendix \ref{app:Helical}) in Eq.~\eqref{eq:baryon_Asym}, we obtain:
\begin{align}\label{eq:baryon_Asym-M}
\eta_B \approx \frac{ 10^{-29} \cdot \eta_0^2 }{ \sqrt{\eta_{end} \cdot 10^{45}\, {\rm GeV}^3} } \frac{M^3 \Lambda^3 }{ T^3_{\rm{RH}} } \approx
10^{-2} \left( \frac{M}{M_P} \right)^3
\left( \frac{\Lambda}{T_{ \rm{RH}}} \right)^3
\end{align}
This is one of the crucial expressions in this work regarding which we would like to stress the following: First, the BAU parameter depends on three quantities --- $M$ (the conformal invariance breaking scale), $T_{\rm{RH}}$ (reheating temperature scale) and $\Lambda$ (the largest helical mode that catalyses
baryogenesis).
Second, the BAU parameter is inversely proportional to the reheating temperature. This behavior is different from the results of Refs. \cite{2006-Alexander.Peskin.Jabbari-PRL,2014-Barrie.Kobakhidze-JHEP,2014-Long.Sabancilar.Vachaspati-JCAP,2016-Fujita.Kamada-PRD}; in some of these models, the BAU depends linearly on the reheating temperature. The difference arises because our model requires only the information about the entropy production at reheating, whereas these models require the detailed reheating dynamics.
Third, the BAU parameter is proportional to $M^3$ and $\Lambda^3$. For smaller $M$, the contribution of the conformal breaking term \eqref{eq:S_h} will be much larger, and hence, more primordial helical fields are produced during inflation. However, for the same reheating temperature, $\Lambda$ has to be larger to produce the same amount of BAU.
Fourth, to get a better understanding of the dependence of BAU on various parameters, we use the following parametrization:
\begin{align}\label{eq:parametrization}
\eta_B = n \times 10^{-10}, \quad M = m \times 10^{14}~{\rm GeV},
\quad \Lambda = \delta \times 10^{12}~{\rm GeV}, \quad
T_{RH} = \gamma \times 10^{12}~{\rm GeV}
\end{align}
where $n, m, \delta, \gamma$ are dimensionless parameters. The maximum reheating
temperature corresponds to the scale of inflation~\cite{2006-Bassett.Tsujikawa.Wands-RevModPhys}. With supersymmetry, the
requirement that not too many gravitinos are produced
after inflation provides a stringent constraint on the reheating temperature,
$T_{\rm RH} \sim 10^{10} - 10^{11}~$GeV~\cite{1984-Ellis.Kim.Nanopoulos-PLB,1999-Benakli.Davidson-PRD}. Hence, we consider the range of $\gamma$
to be $\{ 10^{-2}, 1000 \} $. Since the value of $M$ should be between the GUT and Planck scale, we consider the range of $m$ to be $\{ 1, 1000 \}$.
We assume that the modes that reenter during the radiation-dominated epoch have energies around $10^{12}~$GeV. Hence, we consider the range of $\delta$ to be $\{ 1, 100 \} $.
\begin{align}\label{eq:baryon_Asym-Parameter}
\frac{m^3 \times \delta^3 }{\gamma^3} \approx n \, 10^7 .
\end{align}
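As a concrete illustration of Eq.~\eqref{eq:baryon_Asym-Parameter}, with representative values of our choosing: taking $m = 10^{3}$ (i.e., $M = 10^{17}~{\rm GeV}$) and $\delta = 10$ (i.e., $\Lambda = 10^{13}~{\rm GeV}$), the observed asymmetry $n \approx 6$ requires $\gamma^3 = m^3\delta^3/(n \times 10^{7}) \approx 1.7 \times 10^{4}$, i.e., $\gamma \approx 26$, corresponding to $T_{\rm RH} \approx 2.6 \times 10^{13}~{\rm GeV}$, consistent with the range of reheating temperatures quoted in Sec.~\ref{sec:conc}.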
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.8\textwidth]{Plot-BAU-14GeV.pdf}
\caption{Plot of the rescaled reheating temperature $T_{RH}$ with the rescaled conformal symmetry breaking parameter $M$, for different values of $n$. Here, we have set $\Lambda = 10^{14}~{\rm GeV}$ and $\mu = 10^{10}~{\rm GeV}$.}
\label{fig:Plot}
\end{figure}
Figures~\ref{fig:Plot} and \ref{fig:reheatingGeneric2} contain the plots of $\gamma$ versus $m$ for different values of $n$ and fixed $\delta$. In Appendix \ref{app:Calculations}, we have plotted the same for other values of $\delta$. From these plots, we deduce the following: First, for a range of values of
$\gamma,$ $\delta$, and $m$,
the BAU can have values between $10^{-10}$ and $10^{-9}$. Thus, the model
can lead to the observed amount of baryon asymmetry of the Universe consistent with the Planck data~\cite{2018-Planck}. Second, the model does not depend on the nature of the reheating dynamics. As can be seen from the plots, for a range of values of $m, \delta$, the model can lead to the BAU for a range of reheating temperatures. This has to be contrasted with other models in the literature~\cite{2014-Long.Sabancilar.Vachaspati-JCAP,2016-Fujita.Kamada-PRD}, which require detailed knowledge of the reheating phase of the Universe. Third, the unknown parameter in the model is $M$. In Ref.~\cite{2020-Kushwaha.Shankaranarayanan-PRD}, we showed that for the model to be consistent with the lower limit of $10^{-16}$ Gauss magnetic fields in the voids~\cite{2010-Neronov.Vovk-Sci}, we require $M \sim 10^{17}~{\rm GeV}$. The
current analysis shows that $M \sim 10^{17}~{\rm GeV}$ is consistent with baryogenesis. Thus, the model is \emph{tantalizingly close} to solving baryogenesis and magnetogenesis using the same causal mechanism that solves the origin of density perturbations.
\begin{figure}[!hbt]
\centering
\subfigure[]{%
\includegraphics[height=2in]{Plot-BAU_11-7GeV.pdf}}%
\quad
\subfigure[]{%
\includegraphics[height=2in]{Plot-BAU_10-8GeV.pdf}}%
\quad
\subfigure[]{%
\includegraphics[height=2in]{Plot-BAU_9-8GeV.pdf}}%
\quad
\subfigure[]{%
\includegraphics[height=2in]{Plot-BAU_9-7GeV.pdf}}%
\caption{Plots showing the behaviour of reheating temperature $T_{RH}$ (vertical axis) with parameter $M$ (horizontal axis), for lower energy scales
of $\Lambda$ and $\mu$.}
\label{fig:reheatingGeneric2}
\end{figure}
\section{Conclusions and Discussions}
\label{sec:conc}
In this work, we have proposed a viable baryogenesis scenario in the early Universe that does not require any extension to the Standard Model of particle physics. The crucial ingredient is the generation of primordial helical magnetic fields
due to the Riemann coupling. The advantage of primordial helical fields is that the non-zero helicity implies a non-zero CP-violating contribution.
An interesting feature of our model is the stretching of the primordial helical magnetic fields to super-horizon scales during inflation --- the same mechanism that leads to primordial density perturbations. While the helical modes generated around 40 - 60 e-foldings before the end of inflation lead to the observed large-scale magnetic fields, the helical modes that reenter the horizon very early (at the beginning of the radiation-dominated epoch) lead to the baryon asymmetry. Thus, our mechanism provides \emph{possible testable evidence} for the entire inflationary epoch.
More than two decades ago, Davidson pointed out an interesting relation between the primordial magnetic field and Sakharov's conditions~\cite{1996-Davidson-PLB}. In this work, we have explicitly shown that Davidson's conditions are necessary \emph{but not} sufficient. The key missing ingredient is the requirement of \emph{primordial helical magnetic fields}. While the helical and non-helical fields break the isotropy and lead to CP violation, only the modes with maximal helicity contribute significantly to the Chern-Simon number density. We have shown that the BAU parameter predicted by our model is independent of any specific inflation model and reheating dynamics; however, it depends on the scale at which inflation ends and reheating temperature.
The BAU parameter \eqref{eq:baryon_Asym-M} obtained in our model is inversely proportional to reheating temperature. Assuming the exit of inflation at $10^{14}$~GeV, for the observed amount of baryon asymmetry $\eta_B \sim 10^{-10}$, we obtained that the reheating temperature should be in the range
$10^{12} - 10^{14}$~GeV, which is consistent with the constraints on the reheating temperature \cite{2006-Bassett.Tsujikawa.Wands-RevModPhys,1984-Ellis.Kim.Nanopoulos-PLB,1999-Benakli.Davidson-PRD}.
This means that our model \emph{does not} prefer a very low-energy reheating temperature~\cite{1999-Benakli.Davidson-PRD}.
In the literature, various mechanisms have been discussed to solve the BAU problem using the primordial helical magnetic fields~\cite{2014-Barrie.Kobakhidze-JHEP,2014-Long.Sabancilar.Vachaspati-JCAP,2016-Fujita.Kamada-PRD,2015-Anber.Sabancilar-PRD,2019-Domcke.etal-JCAP}.
In Ref.~\cite{2016-Fujita.Kamada-PRD}, the authors obtained the required BAU by assuming the presence of helical magnetic fields of present-day strength $10^{-14}~{\rm G} < B_0 < 10^{-12}~{\rm G}$ and coherence length $1~\rm{pc} < \lambda < 1~\rm{Mpc}$, and taking into account the MHD effects. In Ref.~\cite{2014-Long.Sabancilar.Vachaspati-JCAP}, the authors studied the generation of a primordial magnetic field in conjunction with the BAU generation through leptogenesis; however, the predicted value of the present-day coherence length of such magnetic fields is very small, $\sim 10$~pc.
In Refs.~\cite{2015-Anber.Sabancilar-PRD,2019-Domcke.etal-JCAP}, the authors consider a pseudoscalar inflation (axion inflation) model with dimension-five couplings. In these models, the authors assumed the scale of baryogenesis to be the electroweak scale, and they obtained the required BAU assuming the scale of inflation to be $10^{10}~\rm{GeV}$ --- $10^{12}~\rm{GeV}$~\cite{2019-Domcke.etal-JCAP,2015-Anber.Sabancilar-PRD}. In Ref.~\cite{2014-Barrie.Kobakhidze-JHEP}, the authors considered an extension of the Standard Model with an anomalous gauge symmetry. They obtained the
required BAU for $H_{Inf} \sim 10^{14}~\rm{GeV}$ and a reheating temperature of $10^{16}~\rm{GeV}$. In Ref.~\cite{1999-Brustein.Oaknin-PRL}, the authors argued that to generate the observed baryon asymmetry, some asymmetry in the initial conditions of either $\bf{B}$ or the scalar field $\phi$ is required, which can be induced from a temperature-dependent potential or an asymmetry in the quantum fluctuations. Our model is robust to the inflationary/reheating dynamics and uses the same success of inflationary perturbations to generate the BAU. Thus, our model is \emph{tantalizingly close} to solving baryogenesis and magnetogenesis using the same causal mechanism that solves the origin of density perturbations.
In this work, we did not consider the gravity contribution to the chiral anomaly equation. In Ref.~\cite{2006-Alexander.Peskin.Jabbari-PRL}, the authors considered the phenomenon of gravitational birefringence to show that the gravitational fluctuations generated during inflation can give the Universe's observed amount of baryon asymmetry. However, as we showed in
Sec.~\ref{sec:Baryo-magnetic}, $R\tilde{R}$ contributes only at second order, and hence we have ignored it in this analysis. It may be interesting to look at the second-order corrections and analyze the parameter constraints.
In this work, we have used the general effective field theory of gravity coupled to the Standard Model of particle physics framework to obtain leading order gravity terms that couple to the standard model Bosons~\cite{2019-Ruhdorfer.etal-JHEP}. We have considered only the mass dimension 6-operators coupling to the gauge field Lagrangian, specifically, to the electromagnetic field.
The coupling to the Fermions arises at mass dimension 8. Thus, the coupling of Fermion--anti-Fermion pairs with the $U(1)$ field will play a role only at this order. While these operators are expected to be suppressed compared to the mass-dimension 6 operators, they are relevant at the Planck scale. We plan to look at the effects of the mass dimension 8 operators on baryogenesis.
In this work, we focused on the electromagnetic fields and the effects of the helical fields on baryogenesis. It will be interesting to extend the analysis to
gluons and study the effects on the asymmetry generated in quarks and baryons. This is particularly important, and a study is currently
in progress to acquire more stringent constraints on the parameters $M$ and $T_{\rm RH}$~\cite{2021-Sharma.Ashu.Shanki}.
\noindent {\it Note added:} As we were finalizing this manuscript, the article \cite{2021-Giovannini-arXiv} appeared on the arXiv, which also discusses baryogenesis from magnetic fields. However, the approach followed in that reference requires MHD amplification, while our approach requires helical fields generated during inflation.
\begin{acknowledgments}
The authors thank Joseph P. Johnson and Urjit A Yajnik for comments on the earlier version of the manuscript. The authors thank Kandaswamy Subramanian for useful discussions. The authors thank the anonymous referee for raising points which clarified important issues in the work. The MHRD fellowship at IIT Bombay financially supports AK. This work is supported by the ISRO-Respond grant.
\end{acknowledgments}
\section{Introduction}
Inspired by the success of BERT~\cite{devlin2018bert}, vision-and-language pre-training (VLP) has become an increasingly central paradigm for vision-and-language (VL) research. Models such as LXMERT~\cite{tan2019lxmert}, ViLBERT~\cite{lu2019vilbert} and UNITER~\cite{chen2020uniter} have achieved state-of-the-art performance across a wide range of VL tasks, such as visual question answering (VQA)~\cite{antol2015vqa,goyal2017making}, visual commonsense reasoning (VCR)~\cite{zellers2019recognition}, and image-text retrieval~\cite{lee2018stacked}. Despite this empirical success, the memory and computation footprint of these pre-trained models is huge because of their large number of parameters, making it infeasible to use them in resource-constrained scenarios. A natural question thus arises: \emph{Can we prune a large pre-trained VL model while preserving its performance and transferability?}
\input{figs/fig1_framework}
In this work, we aim to answer this question through the lens of the \emph{lottery ticket hypothesis} (LTH)~\cite{frankle2019lottery}, which states that there exist matching subnetworks in dense neural networks that can be trained in isolation from initialization to reach an accuracy comparable to the full model within similar training iterations. LTH has shown great success in various fields~\cite{yu2019playing,renda2020comparing,chen2020lottery}, and its properties have been widely studied~\cite{malach2020proving,pensia2020optimal,frankle2020linear}. However, LTH has not been introduced to VL tasks yet; it could be a powerful tool to understand the parameter redundancy in the current prevailing VLP models. To start, we use UNITER~\cite{chen2020uniter} as the main testbed, and consider 7 representative VL tasks for experiments, including VQA~\cite{goyal2017making}, VCR~\cite{zellers2019recognition}, GQA~\cite{hudson2019gqa}, NLVR$^2$~\cite{suhr2018corpus}, visual entailment~\cite{xie2019visual}, referring expression comprehension~\cite{yu2016modeling}, and image-text retrieval~\cite{lee2018stacked}. In our context, a \emph{ticket} means a VLP subnetwork, and a \emph{winning ticket} means a subnetwork that can match the performance of the original full VLP model. Based upon this, we ask the following three questions:
\begin{itemize}[leftmargin=*]
\item \textbf{\emph{Existence}}: Can we draw winning tickets successfully for various VL tasks?
\item \textbf{\emph{Transferability}}: Can we find tickets that transfer universally to all downstream VL tasks?
\item \textbf{\emph{Compatibility}}: Do the LTH observations still hold when switching to different backbones (\emph{e.g.}, LXMERT~\cite{tan2019lxmert}, ViLT~\cite{kim2021vilt}), and training strategies (\emph{e.g.}, adversarial training)?
\end{itemize}
First, \emph{can we draw VL winning tickets?} To answer this, we use the pre-trained weights as our model initialization for task-specific finetuning, and use Iterative Magnitude-based Pruning (IMP)~\cite{han2015deep} to draw the tickets for each VL task. However, finding tickets through iterative and repeated train-prune-retrain cycles for each task is very time-consuming, especially when a large pre-trained model is used, as is the case here. It then becomes critical to ask: \emph{how can we find subnetworks that transfer universally?} If this can be achieved, the extraordinary cost of finding a winning ticket can be amortized by transferring it to a range of downstream tasks. Inspired by~\citet{chen2020lottery}, a natural idea is to perform IMP on the pre-training tasks using the pre-training data, and assess whether such learned tickets are transferable or not, since pre-training can be considered task-agnostic.
Besides this, we further comprehensively analyze the transfer behavior among all the downstream tasks to better understand the found task-specific winning tickets.
The above analysis is conducted on UNITER, which is a one-stream model and uses an object detection module to first extract visual features offline. To study the compatibility of LTH, we also experiment on LXMERT (a two-stream model instead), and ViLT (directly taking image patches and word tokens as model inputs). Moreover, instead of cross-entropy training, we further test LTH under adversarial training~\cite{gan2020large} to investigate its corresponding training behaviors. Through comprehensive analysis, we summarize our main findings as follows.
\begin{itemize}[leftmargin=*]
\item \textbf{\emph{VLP can play lottery tickets too}}: It is difficult to find UNITER subnetworks that \emph{strictly} match the full performance, even with rewinding. However, it is encouraging that \emph{``relaxed''} winning tickets that match 99\% of the full accuracy can be found at 50\%-70\% sparsity across all the VL tasks considered.
\item \textbf{\emph{One ticket to win them all}}: Matching subnetworks found via IMP on pre-training tasks transfer universally. Interestingly, matching subnetworks found on each downstream task also transfer to other tasks well, indicating that the learned task-specific subnetworks do not aggressively overfit to one specific task.
\item \textbf{\emph{Different VLP models behave differently}}: Though all the VLP models can play lottery tickets, we also observe that the highest sparsity we can achieve for ViLT is far lower than that for LXMERT and UNITER (30\% vs. 70\%).
\item \textbf{\emph{Playing lottery tickets adversarially}}: Compared with standard cross-entropy training, we observe that sparse winning tickets can also be identified with adversarial training, with enhanced performance.
\end{itemize}
We conclude that the primary LTH observations found in computer vision, NLP, and other areas also hold in the context of vision and language.
\section{Related Work}
\paragraph{Vision-and-Language Pre-training (VLP).}
The past two years have witnessed a boom of VLP methods.
By adopting the transformer~\cite{vaswani2017attention} as the building block, early approaches use a two-stream architecture for multimodal fusion~\cite{lu2019vilbert,tan2019lxmert,lu201912}, while the single-stream architecture has gained popularity later on~\cite{su2019vl,li2019visualbert,li2019unicoder,chen2020uniter,zhou2019unified,gan2020large,li2020oscar,zhang2021vinvl}.
While most of these methods rely on an object detection module to extract visual features offline, recently, end-to-end VLP methods~\cite{huang2020pixel,huang2021seeing,kim2021vilt,xue2021probing,li2021align,dou2021empirical} are becoming increasingly popular.
In contrast to these efforts on making VLP models larger and stronger, we focus on a different direction: making VLP models \emph{smaller}. Note that two recent works, MiniVLM~\cite{wang2020minivlm} and DistilVLM~\cite{fang2021compressing}, have also attempted to train a smaller VLP model; however, our focus is different from theirs. Specifically, MiniVLM directly adopts MiniLM~\cite{wang2020minilm} for the transformer module, while spending a larger portion of its effort on designing a compact image feature extractor; DistilVLM focuses on knowledge distillation. Here, we study the over-parameterization of VLP models through the lens of the \emph{lottery ticket hypothesis}, a popular concept in deep learning nowadays, but one not yet introduced to VL research.
\paragraph{Lottery Ticket Hypothesis (LTH).}
LTH~\cite{frankle2019lottery} claims the existence of sparse, separately trainable subnetworks that are able to match or even surpass the performance of the original dense network. Though originally demonstrated only on small networks, rewinding was later found to be a useful technique to scale up LTH to large networks~\cite{renda2020comparing,frankle2020linear}. Since its birth, LTH has received wide attention and has become an emerging subfield in deep learning. The properties of LTH are widely studied for image classification~\cite{liu2018rethinking,evci2019difficulty,Frankle2020The,savarese2020winning,grasp,You2020Drawing,ma2021good}. Recently, LTH has also been evidenced across other fields, such as NLP~\cite{gale2019state,yu2019playing,prasanna2020bert,chen2020lottery,chen2020earlybert}, object detection~\cite{girish2020lottery}, generative adversarial networks~\cite{chen2021gans,kalibhat2020winning,chen2021ultra}, graph neural networks~\cite{chen2021unified}, reinforcement learning~\cite{yu2019playing}, and life-long learning~\cite{chen2021long}.
Recent work has also started to investigate the existence of winning tickets in self-supervised pre-training of visual encoders~\cite{chen2020lottery2} and language models~\cite{chen2020lottery,chen2020earlybert}.
However, to the best of our knowledge, the study of lottery tickets in VLP remains untouched. As VLP becomes increasingly popular, it is critical to understand the parameter redundancy in such models, potentially making them small without sacrificing the performance.
\section{Preliminaries}
In this section, we detail the techniques we use to identify winning tickets, and present our setup for empirical study.
\paragraph{Backbones.}
We use UNITER~\cite{chen2020uniter} as an example to introduce the backbone, which shares the same structure as BERT, except that the input is a mixed sequence of two modalities.
Specifically, given a dataset that consists of image-text pairs ${\boldsymbol x}=({\boldsymbol x}_{img}, {\boldsymbol x}_{txt})$, UNITER first encodes the corresponding image regions and textual tokens into low-dimensional feature vectors ${\boldsymbol z}_{img}=g_{bu}({\boldsymbol x}_{img})$ and ${\boldsymbol z}_{txt}=g_{emb}({\boldsymbol x}_{txt})$, where $g_{bu}(\cdot)$ is the fixed bottom-up image feature extractor~\cite{anderson2018bottom}, $g_{emb}(\cdot)$ is a learnable word embedding function. Then, a transformer is applied on top to obtain contextualized representations: $\tilde{{\boldsymbol z}}_{img}, \tilde{{\boldsymbol z}}_{txt}, \tilde{{\boldsymbol z}}_{cls} = f_1({{\boldsymbol x}_{img}, {\boldsymbol x}_{txt}}; {\boldsymbol \theta})$, where a special \texttt{[CLS]} token is employed whose embedding $\tilde{{\boldsymbol z}}_{cls}$ is considered as the joint multimodal representation. ${\boldsymbol \theta} \in {\mathbb{R}}^{d_1}$ includes all the trainable parameters. For a particular downstream task, we add a final, task-specific classification layer on top of $\tilde{{\boldsymbol z}}_{cls}$ to obtain the output logit vector $f_2(\tilde{{\boldsymbol z}}_{cls}; {\boldsymbol \phi})$, where ${\boldsymbol \phi}\in {\mathbb{R}}^{d_2}$ denotes task-specific parameters. The whole UNITER network is abbreviated as $f({\boldsymbol x}; {\boldsymbol \theta}, {\boldsymbol \phi})$ that absorbs both $f_1(\cdot,\cdot)$ and $f_2(\cdot)$. For LXMERT~\cite{tan2019lxmert}, it takes the same image features from object detection as model input, but adopts a two-stream model architecture instead. For ViLT~\cite{kim2021vilt}, it uses the same one-stream architecture, but directly takes image patches and word tokens as inputs, and models all the intra- and inter-modality interaction via a single unified transformer.
Given the task-specific supervision signal ${\boldsymbol y}$ (typically a label in VL tasks), model training can be summarized as:
\begin{align} \label{eqn:std_xe}
\min_{{\boldsymbol \theta},{\boldsymbol \phi}} {\mathbb{E}}_{({\boldsymbol x}, {\boldsymbol y})\sim {\mathcal{D}}} [L(f({\boldsymbol x};{\boldsymbol \theta},{\boldsymbol \phi}),{\boldsymbol y})]\,,
\end{align}
where $L(\cdot)$ is the cross-entropy loss, and ${\mathcal{D}}$ denotes the dataset for a downstream task.
We use the official UNITER/LXMERT/ViLT code bases for experiments.
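For reference, task-specific finetuning then amounts to standard mini-batch optimization of the objective above; a minimal PyTorch-style sketch is given below (the names are illustrative, not from the released code bases):
\begin{verbatim}
import torch
import torch.nn.functional as F

def train_step(model, optimizer, x, y):
    # One optimization step on the cross-entropy objective:
    # minimize E[L(f(x; theta, phi), y)] over a mini-batch.
    logits = model(x)
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}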
\paragraph{Subnetworks.}
A subnetwork of $f({\boldsymbol x}; {\boldsymbol \theta}, {\boldsymbol \phi})$ means a network $f({\boldsymbol x}; {\boldsymbol m} \odot {\boldsymbol \theta}, {\boldsymbol \phi})$ with a binary pruning mask ${\boldsymbol m} \in \{0,1\}^{d_1}$ indicating which part of the parameters are set to 0, where $\odot$ is the element-wise product. Following~\citet{frankle2019lottery}, we define a \emph{matching subnetwork} as a subnetwork that can be trained to the full accuracy of the dense network within similar training iterations. A \emph{winning ticket} is defined as a matching subnetwork $f({\boldsymbol x};{\boldsymbol m} \odot {\boldsymbol \theta}_0, \cdot)$ where ${\boldsymbol \theta}={\boldsymbol \theta}_0$, which is typically a random weight initialization. However, in our context, ${\boldsymbol \theta}_0$ represents the pre-trained model weights. We also define a ``\emph{relaxed}'' winning ticket as one that matches $p\%$ of the full accuracy, where $p$ is set to a large number close to $100$ (such as $99$).
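For concreteness, a minimal sketch of applying such a mask is shown below, assuming parameters and masks are stored as name-to-tensor dictionaries (the function name and layout are ours, not from the released code bases):
\begin{verbatim}
import torch

def apply_mask(theta, mask):
    # Element-wise product m * theta: entries with mask 0 are pruned,
    # entries with mask 1 keep their (pre-trained) values.
    return {name: mask[name] * theta[name] for name in theta}
\end{verbatim}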
\paragraph{Finding Subnetworks.}
As used in many lottery ticket papers, we use Iterative Magnitude-based Pruning (IMP)~\cite{han2015deep} to find the subnetwork. Specifically, the pruning mask ${\boldsymbol m}$ is determined by training the unpruned network to completion on a downstream task, then pruning individual weights with the lowest magnitudes globally throughout the network. The weights are then reset to the pre-trained initialization ${\boldsymbol \theta}_0$ (or, ${\boldsymbol \theta}_i$ for a specific \emph{rewinding} step $i$ in training), and only the learned mask ${\boldsymbol m}$ is stored. We prune a certain amount (\emph{e.g.}, 10\%) of non-zero weights after completion, and re-train the network several times to meet the sparsity requirement. The full IMP procedure is provided in the Appendix.
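A single IMP round then reduces to thresholding weight magnitudes globally; a simplified sketch under the same dictionary convention as above follows (the official implementations may organize this differently):
\begin{verbatim}
import torch

def imp_prune_round(theta, mask, prune_frac=0.10):
    # Magnitudes of the currently unpruned (non-zero-mask) weights.
    alive = torch.cat([theta[n][mask[n] == 1].abs().flatten()
                       for n in theta])
    k = max(1, int(prune_frac * alive.numel()))
    threshold = torch.kthvalue(alive, k).values
    # Prune (zero the mask of) weights at or below the threshold.
    return {n: mask[n] * (theta[n].abs() > threshold).float()
            for n in theta}
\end{verbatim}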
We consider finding subnetworks via both ($i$) task-specific finetuning and ($ii$) task-agnostic pre-training,\footnote{We only perform pre-training on UNITER, since pre-training is heavy; we perform finetuning for UNITER, LXMERT, and ViLT.} hoping that universally transferable subnetworks can be identified. For UNITER pre-training, we use all the pre-training tasks to learn the mask, including Masked Language Modeling, Masked Region Modeling, Image-Text Matching, and Word-Region Alignment. See~\citet{chen2020uniter} for details of these tasks. As our model is initialized with pre-trained UNITER, we further pre-train for only 10\% of the original training steps in each pruning round (we prune 9 rounds in total). Therefore, the total time spent on a full IMP process roughly equals the time used for pre-training UNITER from scratch.
\paragraph{Evaluation of Subnetworks.}
For a particular downstream task, after obtaining a subnetwork $f({\boldsymbol x}; {\boldsymbol m} \odot {\boldsymbol \theta}, \cdot)$, we reset the weights to ${\boldsymbol \theta}_0$ or ${\boldsymbol \theta}_i$ (if rewinding is used), and then completely re-train the subnetwork to test whether the final subnetworks can still achieve the original accuracy.
For pre-training, since the validation loss of the pre-training tasks does not correlate with task-specific performance~\cite{chen2020uniter}, we finetune and test the identified subnetworks on all the downstream tasks. We use both in-domain and out-of-domain image-text datasets for IMP-based pre-training, including COCO~\cite{lin2014microsoft}, Visual Genome~\cite{krishna2017visual}, Conceptual Captions~\cite{sharma2018conceptual}, and SBU Captions~\cite{ordonez2011im2text}.
\input{tables/tab1_uniter}
\paragraph{Downstream Tasks.}
We consider 7 VL tasks for experiments. ($i$) For VQA~\cite{goyal2017making}, GQA~\cite{hudson2019gqa} and VCR~\cite{zellers2019recognition}, given an image and an input question, the model selects an answer from a candidate pool. ($ii$) For NLVR$^2$~\cite{suhr2018corpus}, given a pair of images and a natural language description, the model judges the correctness of the description based on the input image pair. For Visual Entailment~\cite{xie2019visual}, the model predicts whether a given image entails a given sentence. ($iii$) For Referring Expression (RE) Comprehension, we evaluate on RefCOCO+~\cite{yu2016modeling}, where given a text description, the model selects the described region
from a set of image region proposals. ($iv$) For Image-Text Retrieval (ITR), we consider both image retrieval and text retrieval on Flickr30k dataset.
For VCR, 2nd-stage pre-training was found useful in UNITER finetuning. For simplicity and ease of studying transfer learning, we do not use 2nd-stage pre-training. For ITR, hard negative mining is necessary to boost performance. We do not use this as it is computationally heavy, and we aim to study LTH rather than chase state-of-the-art performance. For VQA, we mainly report results on an internal mini-dev set for faster evaluation of the found tickets and to avoid submitting results to the VQA test server too frequently. The same mini-dev set is also used in UNITER~\cite{chen2020uniter}.
\section{Experiments}
In this section, we perform extensive experiments to examine the LTH in the context of vision and language.
\subsection{VLP Can Play Lottery Tickets Too} \label{sec:task-specific pruning}
First, we evaluate whether winning tickets exist in UNITER. In particular, we answer the following questions.
\paragraph{\emph{Q1: Are there winning tickets in UNITER?}}
To answer this, we first run IMP on a downstream task ${\mathcal{T}}$ to obtain a sparsity pattern ${\boldsymbol m}_{\text{IMP}}^{\mathcal{T}}$. This produces a subnetwork $f({\boldsymbol x};{\boldsymbol m}_{\text{IMP}}^{\mathcal{T}} \odot {\boldsymbol \theta}_0,\cdot)$. We then train this subnetwork again on task ${\mathcal{T}}$ to evaluate whether this is a winning ticket.
\input{figs/fig2_lth_uniter}
Results across all the sparsity levels (10\% to 90\%) on all the downstream tasks are shown in Figure~\ref{fig:IMP_vs_RP} (\textcolor{magenta}{magenta curves}). For tasks of image-text retrieval and NLVR$^2$, matching subnetworks with sparsity 40\% can be identified. However, it is generally challenging to find subnetworks that ``strictly'' match the performance of the full accuracy on the other tasks. Therefore, we define ``\emph{relaxed}'' winning tickets as the ones that can match 99\% of the full accuracy. It will still be encouraging if such subnetworks can be found.
Results are summarized in Table~\ref{tab:uniter_lottery_results}. Row \#1 reports the full UNITER$_{\text{B}}$ performance reported in the UNITER paper~\cite{chen2020uniter}. Row \#2 reports the results of our re-implementation, where different random seeds are used to account for fluctuations. We use the default hyper-parameters provided in the UNITER code base without any tuning. Row \#3 calculates 99\% of the full accuracy on each task for reference. As can be seen from Row \#4, on all VL tasks, ``relaxed'' winning tickets can be found. The highest sparsities range from 50\% (\emph{e.g.}, VCR) to 70\% (\emph{e.g.}, VQA). For VCR, it is challenging to find high-sparsity subnetworks. We hypothesize that commonsense knowledge is harder to learn, and smaller weights also play essential roles in improving the model's commonsense reasoning abilities, making the subnetwork for VCR harder to prune.
\paragraph{\emph{Q2: Are winning tickets sparser than randomly pruned or initialized subnetworks?}} Previous work has shown that both the specific learned sparse mask and the specific initialization are necessary for finding winning tickets~\cite{frankle2019lottery}. To assess the importance of the learned mask in the context of UNITER, we compare with a random pruning baseline, and report results in Row \#5 of Table~\ref{tab:uniter_lottery_results}. That is, we finetune a randomly pruned UNITER model on each downstream task. Interestingly, for some tasks (\emph{e.g.}, GQA and RefCOCO+), random pruning achieves pretty strong performance. However, by comparing performance across the board, it is also clear that random pruning performs far worse than the identified winning tickets. In Figure~\ref{fig:IMP_vs_RP}, we further compare IMP and random pruning across all sparsities. Again, random pruning achieves far lower performance, confirming that the sparse structure found by IMP is crucial for the good performance of subnetworks.
To assess the importance of the initialization, we consider two different initializations with the learned mask unchanged: ($i$) using pre-trained BERT weights ${\boldsymbol \theta}_0'$ as initialization, and ($ii$) shuffling the UNITER pre-trained weights within each layer to obtain a new initialization ${\boldsymbol \theta}_0''$. Results of these two baselines are summarized in Rows \#6 and \#7 of Table~\ref{tab:uniter_lottery_results}, respectively. Clearly, training from ${\boldsymbol \theta}_0''$ achieves far lower performance than training from ${\boldsymbol \theta}_0$. However, it is also interesting to observe that training from ${\boldsymbol \theta}_0'$ achieves much more reasonable performance, though still lagging behind training from ${\boldsymbol \theta}_0$, indicating the importance of the specific initialization. We hypothesize that the good performance of ${\boldsymbol \theta}_0'$ is partially due to the fact that ${\boldsymbol \theta}_0'$ was used as the initialization to pre-train UNITER; therefore, the structure of the UNITER weights may be partially inherited from BERT.
\noindent\textbf{\emph{Q3: Does rewinding improve performance?}} For large networks, \emph{rewinding} is found to be necessary to identify winning tickets~\cite{renda2020comparing}. After obtaining the masks, instead of resetting the weights to ${\boldsymbol \theta}_0$, one should rewind the weights to ${\boldsymbol \theta}_i$, the weights after $i$ steps of training. To examine whether rewinding is helpful in the context of UNITER, we run experiments at different rewinding ratios using VQA as the representative task. Results are shown in Figure~\ref{fig:IMP_vs_RP}(i). Rewinding does not have a notable effect on the VQA performance, with only a minor performance improvement observed at a high sparsity ratio (90\%). Similar observations are also made on the other downstream tasks.
\input{figs/fig3_lth_transfer}
\input{tables/tab2_universal_uniter}
\subsection{One Ticket to Win Them All} \label{sec:task-agnostic pruning}
Finding winning tickets on each downstream task separately is time-consuming, as each time when IMP is performed, it has to go through the full train-prune-retrain cycle multiple times. In this section, we aim to identify subnetworks that transfer well across all the VL tasks. In particular, we answer the following questions.
\paragraph{\emph{Q4: Do winning tickets found on pre-training tasks transfer?}} Pre-training is believed to learn \emph{universal} VL representations. As shown in~\citet{cao2020behind}, the pre-trained weights indeed have captured rich visual coreference and visual relation knowledge. This naturally leads to our hypothesis: can the subnetwork identified by the pre-training tasks on the pre-training data also transfer universally?
To study this, we first identify a subnetwork $f({\boldsymbol x};{\boldsymbol m}_{\text{IMP}}^{{\mathcal{P}}} \odot {\boldsymbol \theta}_0,\cdot)$ on the pre-training tasks ${\mathcal{P}}$, and then train it on all the downstream tasks to evaluate its performance. Results are summarized in Figure~\ref{fig:IMP_vs_RP} (\textcolor{green}{green curves}). Interestingly, though
pre-training never receives the supervision signal of the downstream tasks, the found subnetwork transfers quite \emph{universally}; only when the sparsity is high (\emph{e.g.}, 80\%, 90\%) does the found subnetwork perform worse than the ones found by task-specific IMP, indicating that the pre-training tasks provide strong signals for learning how to prune.
\input{figs/fig4_lxmert}
\input{figs/fig5_adv_train}
\paragraph{\emph{Q5: Do winning tickets found on downstream tasks transfer?}}
One would also wonder whether such transfer learning behavior also exists among the downstream tasks themselves, \emph{i.e.}, whether the found subnetwork on a source task ${\mathcal{S}}$ transfers to a target task ${\mathcal{T}}$. We perform a systematic study in Figure~\ref{fig:trans_study}, where within each plot, 8 ticket sources are considered. There are several key observations. ($i$) The subnetworks found by task-specific signals typically perform the best, especially on the high-sparsity regime. ($ii$) Surprisingly, all the individual subnetworks found by downstream tasks transfer well, indicating that models on all the tasks have learned some shared essential knowledge. ($iii$) The subnetwork from pre-training generally performs better than those from other tasks (\emph{e.g.}, 0.71\%-2.69\% better than other source tickets at 70\% sparsity on the VCR task), indicating its universal transferability.
By taking a closer look at Figure~\ref{fig:trans_study}(a), excluding VQA itself, the best source tickets are from pre-training and GQA, as the task nature of VQA and GQA is similar. From Figure~\ref{fig:trans_study}(e) and (f), we can see that the best source ticket for image-text retrieval is from pre-training. This is because the image-text matching task used in pre-training is similar to the downstream task itself. In the Appendix, we also compare the similarity of the sparsity patterns found on each downstream task.
Since the subnetworks found on pre-training perform the best, we further compare their performance with the full model in more detail, and summarize results in Table~\ref{tab:universal_subnetwork_results}. The universal subnetwork at 60\%/70\% sparsity matches 98\%/96\%\footnote{This number changes to 99\%/97\% if VCR is not counted in.} of the full accuracy over all the tasks considered, effectively serving as a task-agnostic compressed model.
\input{tables/tab3_lxmert}
\subsection{Additional Study}
\paragraph{\emph{Q6: Do different VLP models behave differently?}}
So far, we have focused on UNITER. Below, we experiment with LXMERT and ViLT to provide a more complete picture of VL lottery tickets. Results are summarized in Table~\ref{tab:lxmert_lottery_results},~\ref{tab:vilt_lottery_results}, and Figure~\ref{fig:lxmert_lottery}. For LXMERT, similar observations can be found. Since both UNITER and LXMERT use the same visual features from object detection, and differ only in the use of one-/two-stream architecture, we conclude that the LTH observations are not sensitive to this one-/two-stream design. On the other hand, ViLT can only achieve a low sparsity ratio (30\%) without impairing performance. This is partially because ViLT directly takes image patches as input: all the modeling power needs to be absorbed in a single unified transformer, so less can be pruned, while for UNITER and LXMERT, the extracted image features are kept intact.
\input{tables/tab4_vilt}
\paragraph{\emph{Q7: Can VLP models play lottery tickets adversarially?}}
Lottery tickets are typically found via standard cross-entropy training. Here, we study whether adversarial training can be used to find winning tickets as well. Results are shown in Figure~\ref{fig:adv_train_study}. Interestingly, on the 3 tasks considered, the ticket performance via adversarial training at 80\% and 70\% sparsity matches (or almost matches) the performance via standard finetuning at 70\% and 60\% sparsity, respectively. This suggests that adversarial training has the effect of making the sparse winning tickets 10\% sparser in order to match the performance of a standard-trained one.
\section{Conclusion and Discussion}
In this paper, we have presented a comprehensive study of the lottery ticket hypothesis (LTH) for vision and language. Below, we discuss some limitations of the current study.
($i$) \emph{Efficiency}: We mainly focused on the scientific study of LTH. For future work, we plan to investigate the real speedup results on a hardware platform that is friendly to unstructured pruning, such as XNNPACK~\cite{elsen2020fast}.
($ii$) \emph{Object Detection}: For UNITER/LXMERT, we studied the LTH for multimodal fusion, while keeping the object detection module untouched. In terms of end-to-end VLP, we focused on ViLT. For future work, we plan to study the LTH of object detection and other end-to-end VLP models.
\section{Details on Pruning and Adversarial Training}
\subsection{Unstructured Pruning}
We mainly use Iterative Magnitude-based Pruning (IMP) to find winning tickets. The pruning procedure is summarized in Algorithm~\ref{alg:IMP}.
\begin{algorithm}[h]
\caption{Iterative Magnitude Pruning for VL Tickets.}
\label{alg:IMP}
\begin{algorithmic}
\STATE {\bfseries Input}\,\, Initial mask ${\boldsymbol m}=1^{d_1}$; Pre-trained parameters ${\boldsymbol \theta}_0$ and task-specific parameters ${\boldsymbol \phi}_0$; rewinding step $i$ (could be 0), sparsity level $s$, total training step $t$.
\STATE Train the pre-trained VL model $f({\boldsymbol x}; {\boldsymbol m} \odot {\boldsymbol \theta}_0,{\boldsymbol \phi}_0)$ to step $i$: $f({\boldsymbol x}; {\boldsymbol m} \odot {\boldsymbol \theta}_i,{\boldsymbol \phi}_i)$.
\REPEAT
\STATE Train $f({\boldsymbol x}; {\boldsymbol m} \odot {\boldsymbol \theta}_i,{\boldsymbol \phi}_i)$ to step $t$: $f({\boldsymbol x}; {\boldsymbol m} \odot {\boldsymbol \theta}_t,{\boldsymbol \phi}_t)$.
\STATE Prune $10\%$ of non-zero weights of ${\boldsymbol m} \odot {\boldsymbol \theta}_t$ based on the magnitudes and update ${\boldsymbol m}$ accordingly.
\UNTIL{the sparsity of ${\boldsymbol m}$ reaches $s$}
\STATE {\bfseries Return}\,\, $f({\boldsymbol x}; {\boldsymbol m} \odot {\boldsymbol \theta}_i,\cdot)$
\end{algorithmic}
\end{algorithm}
\input{figs/supp_fig6_lth_transfer}
\input{figs/supp_fig7_lth_vilt}
\subsection{Lottery Tickets with Adversarial Training}
When adding adversarial perturbations into the feature space (\emph{e.g.}, image regional features and word embeddings), recent work~\cite{gan2020large} has shown that adversarial training (AT) can be used as an effective regularization to improve model performance. When combined with the found lottery tickets, the new objective becomes:
\begin{align} \label{eqn:outer_min}
&\min_{{\boldsymbol \theta},{\boldsymbol \phi}} {\mathbb{E}}_{({\boldsymbol x},{\boldsymbol y})\sim {\mathcal{D}}} \Big[ {\mathcal{L}}_{std}({\boldsymbol x},{\boldsymbol y};{\boldsymbol m} \odot {\boldsymbol \theta},{\boldsymbol \phi}) + \\
&{\mathcal{R}}_{at} ({\boldsymbol x},{\boldsymbol y};{\boldsymbol m} \odot {\boldsymbol \theta},{\boldsymbol \phi}) + \alpha \cdot{\mathcal{R}}_{kl} ({\boldsymbol x};{\boldsymbol m} \odot {\boldsymbol \theta},{\boldsymbol \phi}) \Big]\,,
\end{align}
where ${\mathcal{L}}_{std}(\cdot)$ is the cross-entropy loss on clean data, ${\mathcal{R}}_{at} (\cdot)$ is the label-preserving AT loss, and ${\mathcal{R}}_{kl} (\cdot)$ is an adversarial regularization term. Specifically,
\begin{align} \label{eqn:inner_max}
{\mathcal{R}}_{at} (\cdot) &= \max_{||{\boldsymbol \delta}||\leq \epsilon} L(f({\boldsymbol x}+{\boldsymbol \delta};{\boldsymbol m} \odot {\boldsymbol \theta},{\boldsymbol \phi}),{\boldsymbol y})\,, \\
{\mathcal{R}}_{kl} (\cdot) &= \max_{||{\boldsymbol \delta}||\leq \epsilon} L_{kl}(f({\boldsymbol x}+{\boldsymbol \delta};{\boldsymbol m} \odot {\boldsymbol \theta},{\boldsymbol \phi}),f({\boldsymbol x};{\boldsymbol m} \odot {\boldsymbol \theta},{\boldsymbol \phi}))\,, \nonumber
\end{align}
where $L$ is the cross-entropy loss, $L_{kl}(p,q) = \mbox{KL}(p||q)+ \mbox{KL}(q||p)$, $p, q$ denote the two probability distributions, and $\mbox{KL}(\cdot)$ denotes the Kullback-Leibler divergence. Frobenius norm is used to constrain ${\boldsymbol \delta}$. For optimization, \citet{madry2017towards}~showed that the outer minimization in Eqn.(\ref{eqn:outer_min}) can be solved by SGD, while the inner maximization in Eqn.(\ref{eqn:inner_max}) can be solved reliably by projected gradient descent (PGD), which takes the following step (with step-size $\alpha$) in each iteration:
\begin{align}
{\boldsymbol \delta}_{t+1} = \Pi_{||{\boldsymbol \delta}||\leq \epsilon} ({\boldsymbol \delta}_{t}+\alpha g({\boldsymbol \delta}_{t})/ ||g({\boldsymbol \delta}_{t}) ||_F)\,,
\end{align}
where $g({\boldsymbol \delta}_{t}) = \nabla_{{\boldsymbol \delta}}L(f({\boldsymbol x}+{\boldsymbol \delta}; {\boldsymbol m} \odot {\boldsymbol \theta}, {\boldsymbol \phi}),{\boldsymbol y})$ is the gradient of the loss w.r.t. ${\boldsymbol \delta}$, and $\Pi_{||{\boldsymbol \delta}||\leq \epsilon}$ performs a projection onto the $\epsilon$-ball. Adversarial training is often used for dense neural network training; here, we study its use for finding winning tickets.
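For illustration, one such PGD update could be sketched as follows, assuming the perturbation and gradient are plain tensors (the function and variable names are illustrative):
\begin{verbatim}
import torch

def pgd_step(delta, grad, alpha, eps):
    # Ascent step along the Frobenius-normalized gradient.
    delta = delta + alpha * grad / (grad.norm(p='fro') + 1e-12)
    # Projection onto the eps-ball: rescale if the norm exceeds eps.
    norm = delta.norm(p='fro')
    return delta * (eps / norm) if norm > eps else delta
\end{verbatim}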
\input{figs/supp_fig8_mask_similarity}
\subsection{Additional Results} \label{sec:add_results}
\paragraph{Additional Transfer Learning Study}
In Figure~\ref{fig:trans_study_supp}, we show additional results on the remaining two tasks that have not been covered in the main text: (a) VE, and (b) GQA. Consistent with the findings in the main text, all the ticket sources demonstrate good transferability. For GQA, it is interesting to observe that the ticket source from VQA performs the best. This is intuitive, as the task nature of VQA and GQA is close, and VQA has a larger training dataset, therefore demonstrating better transferability, even better than the ticket source from GQA itself.
\paragraph{Additional ViLT Lottery Ticket Curves}
We provide additional lottery ticket results of ViLT at all sparsity levels in Figure~\ref{fig:vilt_lottery_curve}. Since ViLT uses only a single unified transformer to directly take image patches and word tokens as model input, less can be pruned, and the highest sparsity we can achieve without impairing the performance is only around 30\%-40\%.
\paragraph{Similarity Between Sparsity Patterns}
In Figure~\ref{fig:mask_similarity}, we compare the overlap in sparsity patterns found on each downstream task and the pre-training tasks. Each cell contains the relative overlap ratio (\emph{i.e.}, $\frac{{\boldsymbol m}_i \bigcap {\boldsymbol m}_j}{{\boldsymbol m}_i \bigcup {\boldsymbol m}_j}\%$) between masks (\emph{i.e.}, ${\boldsymbol m}_i, {\boldsymbol m}_j$) from task ${\mathcal{T}}_i$ and ${\mathcal{T}}_j$. We find that different masks have been learned under different tasks. The mask learned by pre-training shares the most similarity with masks learned from VQA, NLVR$^2$ and VE. For GQA, RefCOCO+, and ITR, the learned masks share the least similarity with others.
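The overlap ratio used above is simply the Jaccard similarity between binary masks; a minimal sketch, assuming the masks are flattened numpy arrays:
\begin{verbatim}
import numpy as np

def mask_overlap_percent(m_i, m_j):
    # |m_i AND m_j| / |m_i OR m_j|, expressed in percent.
    m_i, m_j = m_i.astype(bool), m_j.astype(bool)
    return 100.0 * (m_i & m_j).sum() / (m_i | m_j).sum()
\end{verbatim}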
\input{tables/supp_tab5_adv_results}
\paragraph{Enhancing Winning Tickets with AT}
Since the universal subnetworks found on pre-training at 60\% and 70\% sparsities are the most interesting, we finetune them via adversarial training, and summarize results in Table~\ref{tab:adv_results}. Clearly, performance on all the tasks is improved, demonstrating that adversarial training is also useful for enhancing sparse neural network training, at least in our VL context.
\section{Introduction}
In recent years, solid state magnetic cooling based on the magnetocaloric effect (MCE) has gained considerable attention as an alternative to the traditional gas compression refrigeration technology~\cite{Julia,Jian,Pecharsky1}. The MCE is defined and measured using primarily two quantities: 1) the change in the temperature of the magnetic material due to the application of a magnetic field in an adiabatic process ($\bigtriangleup T_{ad}$); and 2) the change in the magnetic entropy ($\bigtriangleup S_M$) with application of a magnetic field under an isothermal condition. In the conventional magnetocaloric effect, spins align on application of a magnetic field, leading to a reduction of the magnetic entropy: the material heats up when the magnetic field is applied and cools down when the field is removed. However, the opposite is also possible, when the magnetic entropy increases with application of a magnetic field. Such a phenomenon is characterized as the inverse MCE (IMCE)~\cite{Krenke,Xixiang,Anis2,Ajay,QZhang}. IMCE materials exhibit a minimum in the $-\bigtriangleup S_M$ vs $T$ curve near the AFM transition temperature ($T_N$). Generally, systems having AFM or ferrimagnetic transitions, and ferromagnetic materials with a martensitic phase transformation, are known to exhibit IMCE~\cite{Krenke,Anis2}. IMCE materials can be used for cooling by application of a magnetic field under adiabatic conditions and can also be employed as heat sinks in the case of cooling by conventional MCE materials~\cite{Krenke,Anis2}. Thus the discovery of new IMCE materials is equally important, if not more so, in the search for suitable conventional MCE materials.
The refrigerant capacity (RC) provides an estimate of the amount of heat transferred between the hot end at $T_{hot}$ and the cold end at $T_{cold}$ in one ideal thermodynamic cycle~\cite{QZhang}. When comparing different materials to be utilized in the same thermodynamic cycle, the materials exhibiting larger RC are favored because of the larger amount of heat transfer. RC is defined as the area under the $-\bigtriangleup S_M(T)$ curve for a particular magnetic field between $T_{hot}$ and $T_{cold}$, i.e., RC$=\int^{T_{hot}}_{T_{cold}}[-\bigtriangleup S_M(T)]dT$. The two temperatures $T_{cold}$ and $T_{hot}$ define the working range of the refrigerator, which is associated with the full width at half maximum ($\delta T_{FWHM}$) of the $-\bigtriangleup S_M(T)$ curve.
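For a measured $-\bigtriangleup S_M(T)$ curve on a discrete temperature grid, the RC can be evaluated numerically; a minimal sketch, assuming a single well-defined peak (the function and variable names are illustrative):
\begin{verbatim}
import numpy as np

def refrigerant_capacity(T, neg_dSM):
    # FWHM window of the -Delta S_M(T) peak defines (T_cold, T_hot).
    half_max = neg_dSM.max() / 2.0
    idx = np.where(neg_dSM >= half_max)[0]
    lo, hi = idx[0], idx[-1]
    # RC = integral of -Delta S_M over [T_cold, T_hot].
    return np.trapz(neg_dSM[lo:hi + 1], T[lo:hi + 1])
\end{verbatim}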
It was proposed long ago that strongly frustrated classical Heisenberg antiferromagnets could potentially exhibit a greater field-induced adiabatic temperature change as compared to non-frustrated magnets~\cite{Zhitomirsky}. However, experimental reports of the MCE in the frustrated pyrochlore oxides with formula A$_2$B$_2$O$_7$ (A=Y, Bi, rare earth elements; and B$=$ transition metal) are not very encouraging and are limited to Mn~\cite{Cai,Yikun,Cui,Ben1,Khachnaoui}, Ti~\cite{Aoki,Orendac,Sosin,Ben3,Wolf}, Mo~\cite{YaoDong} and Sn~\cite{Tkac} based pyrochlore oxides. Investigation of the MCE in pyrochlore iridates could be interesting not only from the applications point of view but also from the fundamental point of view. Pyrochlore iridates provide a vast template to study the interplay of magnetic frustration, spin-orbit coupling and Coulomb correlation~\cite{Krempa,Abhishek1,Bikash,Vinod1,Vinod2,Vinod3,Vinod4,Vinod5}. The resulting complex magnetic phases can thus be probed using the MCE as well. In the present work, we investigate the MCE of the frustrated pyrochlore iridates Y$_2$Ir$_{2-x}$Cr$_x$O$_7$ (YICO). We discover the coexistence and enhancement of the conventional MCE and IMCE with chemical substitution, accompanied by a high refrigerant capacity and relative cooling power with a broad working range around the liquid nitrogen temperature. The present study firmly places YICO in the category of promising new oxide candidates for magnetic cooling.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{VKD_Fig1}\\
\caption{(a) Magnetization as a function of temperature; inset shows enlarged view of $\chi_{ZFC}$-T of un-doped sample. The arrows indicate magnetic transitions labeled as $T_N$ in the text. (b) Magnetic field dependence of isotherm magnetization measured at different temperatures for $x =0.0$.}\label{fig:MtMh}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{VKD_Fig2}\\
\caption{First derivative of magnetization as a function of magnetic field for (a) $x=0.05$ and (b) $x=0.2$; peaks represent critical fields for meta-magnetic transitions. Insets show the corresponding magnetization isotherms.}\label{fig:Mh}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{VKD_Fig3}\\
\caption{Temperature dependence of the magnetic entropy change $-\bigtriangleup S_M$ for the samples (a) $x =0.0$, (b) $x =0.05$; inset shows $-\bigtriangleup S_M(T)$ at low fields and (c) x = 0.2.}\label{fig:Entropy}
\end{figure*}
The synthesis of the polycrystalline YICO series with $x = 0.0, 0.05, 0.2$ was done by conventional solid state reaction~\cite{Vinod1}. Raw ingredients of high purity IrO$_2$ (99.99\%), Cr$_2$O$_3$ (99.99\%) and Y$_2$O$_3$ (99.99\%) were used. The mixture of raw materials was ground, pelletized and then sintered at $1000^{\circ}$C for $250$~h with several intermediate grindings. The room temperature (Cu-K$_\alpha$ radiation) X-ray diffraction pattern of the polycrystalline YICO series shows a nearly pure phase with a cubic $Fd\bar{3}m$ structure~\cite{Vinod1}. Electronic structure characterization was performed using x-ray photoelectron spectroscopy~\cite{Vinod1}. Magnetic measurements were performed using a Quantum Design MPMS SQUID VSM.
The field cooled (FC) and zero field cooled (ZFC) magnetic susceptibility $\chi=M/H$, measured as a function of temperature in the presence of an applied magnetic field $H =1$~kOe, is shown in Fig.~\ref{fig:MtMh}a. For the $x=0.0$ sample, a bifurcation can be seen at the irreversibility temperature $T_{irr}$ $\sim158$~K. A maximum in the $\chi_{ZFC}(T)$ curve, identified as the freezing temperature $T_f$, can be seen for the substituted samples. The spin freezing is not as prominent in the parent undoped sample, where $T_{irr}$ and $T_f$ are close to each other (inset Fig.~\ref{fig:MtMh}a) and there is an upturn in the ZFC susceptibility below $T_{f}$, indicating only partial spin freezing at $T_f$. The values of $T_f$ for all the samples are given in Table~\ref{T1}. The existence of $T_{irr}$ at higher temperature and a maximum in the $\chi_{ZFC}(T)$ curve at lower temperature suggests the possibility of a cluster glass-like phase~\cite{Vinod1}, particularly in the substituted samples. We estimated the Curie-Weiss temperature $\theta_{CW}$ by fitting the $\chi_{FC}-T$ curve in the high temperature range. The values of $\theta_{CW}$ are found to be negative and large for the YICO series, indicating strong effective AFM correlations. The first derivative of $\chi_{ZFC}$ as a function of temperature reveals extremum points, which are identified as an FM-like magnetic transition at higher temperature $T_C$~\cite{Ding} and an AFM-like transition at lower temperature $T_N$~\cite{Kazak}, respectively. The estimated values of $T_C$ and $T_N$ are shown in Table~\ref{T1}.
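For reference, a minimal sketch of such a Curie-Weiss fit over a high-temperature window is given below (the variable names are hypothetical; the actual fit window used for Table~\ref{T1} is 180-240~K):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def curie_weiss(T, C, theta_cw):
    # High-temperature Curie-Weiss law: chi(T) = C / (T - theta_CW).
    return C / (T - theta_cw)

# T_fit, chi_fit: FC susceptibility restricted to the high-T window.
# (C, theta_cw), _ = curve_fit(curie_weiss, T_fit, chi_fit,
#                              p0=(1.0, -100.0))
\end{verbatim}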
\begin{figure}
\centering
\includegraphics[width=7cm]{VKD_Fig4}\\
\caption{(a) Temperature variation of $-\frac{d(\bigtriangleup S_M)}{dH}$ at an applied field
$\lesssim 7$~T. (b) First derivative of $\chi_{ZFC}$ as a function of temperature plotted in the same scale.}\label{fig:Zhitormisky}
\end{figure}
The magnetization isotherms for all the samples have been measured at different, closely spaced temperatures. Figure~\ref{fig:MtMh}b and Fig.~\ref{fig:Mh}a-b show that the magnetization increases with field up to $7$~T without any sign of saturation. For the doped samples, below $T_N$, the magnetization exhibits an inflection point suggesting a metamagnetic-like transition, further confirmed by the maximum in the slope of the $M(H)$ curves (Fig.~\ref{fig:Mh}a,b). The critical field for the metamagnetic transition, labeled $H_{MT}$, is progressively reduced at elevated temperatures, consistent with physical expectations~\cite{Midya}. Although $H_{MT}$ usually vanishes at $T_N$ for materials exhibiting a PM-AFM transition~\cite{Midya}, it does not vanish completely at $T_N$ for the YICO series, suggesting a more complex scenario such as the existence of AFM clusters beyond $T_N$. The high value of $H_{MT}$ suggests the presence of stronger magnetic anisotropy in the substituted samples. For the $x=0.0$ sample, the maximum is not observed in the $\frac{dM}{dH}$ vs $H$ curves. The metamagnetic transition in the substituted samples is more likely of the spin-reorientation type rather than the spin-flop type because of the small value of $M$ ($0.23 \mu_B$ for $x = 0.2$ at $7$~T and $2$~K).
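The extraction of $H_{MT}$ described above amounts to locating the maximum of the numerical derivative of a single isotherm; a minimal sketch with illustrative names:
\begin{verbatim}
import numpy as np

def critical_field(H, M):
    # H_MT: field at which dM/dH peaks (inflection point of M(H)).
    dMdH = np.gradient(M, H)
    return H[np.argmax(dMdH)]
\end{verbatim}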
The change in magnetic entropy $\bigtriangleup S_M$ is calculated from the M-H isotherms using the Maxwell relation~\cite{Ding} $\Delta S_M=\int^H_0(\partial M/\partial T)dH$. Fig.~\ref{fig:Entropy} shows the $-\bigtriangleup S_M(T)$ curves plotted for different magnetic fields. For $x=0$, $-\bigtriangleup S_M(T)$ exhibits a positive maximum around $T_C$ (Fig.~\ref{fig:Entropy}a). With substitution, $-\bigtriangleup S_M(T)$ shows not just a positive maximum at high temperature, but undergoes a sign inversion at $T_f$, followed by a minimum at lower temperature, thus exhibiting the coexistence of conventional MCE and IMCE (Fig.~\ref{fig:Entropy}b,c). Such a smooth crossover from conventional MCE at high temperature to IMCE at low temperature clearly suggests coexistence of and competition between AFM and FM phases. Curiously, for sufficiently low magnetic fields, the sign inversion takes place close to $T_f$, presumably a cluster-glass freezing transition in the presence of competing FM and AFM interactions. The cluster-glass behavior cannot be attributed to the geometry of the pyrochlore lattice. It is more likely due to the RKKY interaction between local Cr$^{3+}$ moments mediated by Ir$^{4+}$ conduction electrons~\cite{Bikash}.
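Numerically, this Maxwell-relation integral is evaluated on the grid of measured isotherms; a minimal sketch, assuming magnetization data on a regular $(T,H)$ grid (names are illustrative):
\begin{verbatim}
import numpy as np

def entropy_change(T, H, M):
    # M has shape (nT, nH): magnetization isotherms M(T_i, H_j).
    dMdT = np.gradient(M, T, axis=0)   # dM/dT at fixed H
    # Delta S_M(T, H_max) = integral_0^Hmax (dM/dT) dH.
    return np.trapz(dMdT, H, axis=1)
\end{verbatim}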
Further, we have investigated the behavior of the MCE near the maximum applied magnetic field of $7$~T, presumably close to the saturation field. Figure~\ref{fig:Zhitormisky}a shows the field derivative of the magnetic entropy as a function of temperature, plotted along with the $\frac{d\chi_{ZFC}}{dT}$ curve in Fig.~\ref{fig:Zhitormisky}b. For the undoped sample, the value of $-\frac{d(\bigtriangleup S_M)}{dH}$ is positive throughout the temperature range, while that for the doped samples undergoes sign inversion below $T_f$. The enhanced value of $-\frac{d(\bigtriangleup S_M)}{dH}$ and the sign inversion for the doped samples could be attributed to the increased frustration due to Cr substitution. The origin of the frustration is not purely geometric, for which we would expect a power-law temperature dependence of the field derivative of the magnetic entropy~\cite{Zhitomirsky}. We attribute the increased frustration to the competing FM and AFM interactions due to substitution by localized-moment Cr$^{3+}$. The estimated value of the frustration parameter $f=\frac{\mid \theta_{CW} \mid}{T_f}$ is listed in Table~\ref{T1}.
\begin{figure*}
\centering
\includegraphics[width=12 cm]{VKD_Fig5}\\
\caption{(a) Magnetic field dependent $\delta T_{FWHM}$ around $T_C$ [hollow symbol] and $T_N$ [solid symbol]; inset shows the method used to calculate the RC and RCP from the $-\bigtriangleup S_M(T)$ curves around $T_C$ and $T_N$. RC and RCP as a function of applied magnetic fields for the samples (b) $x=0.0$, (c) $x=0.05$, and (d) $x=0.2$. Inset shows $-\bigtriangleup S_M^{max}$ vs H.}\label{fig:RC}
\end{figure*}
\begin{table*}
\caption{Important parameters related to the magnetocaloric effect. Here, T1 and T2 represent the $\delta T_{FWHM}$ around $T_C$ and $T_N$, respectively. RCP1 and RCP2 are the values of Relative Cooling Power corresponding to T1 and T2, respectively at $7$~T. Curie-Weiss temperature $\theta_{CW}$ is estimated in the temperature range 180-240~K. \label{T1}}
\begin{center}
\begin{tabular}{c c c c c c c c c c c c}
\hline
$x$&$T_C$&($-\bigtriangleup S_M^{max}$)$_{T_C}$&T1&RCP1&$T_N$&($-\bigtriangleup S_M^{max}$)$_{T_N}$&T2& RCP2& $T_f$& -$\theta_{CW}$& $f$ \\
&($K$)& ($J/Kg.K$) & ($K$) & ($J/kg$) &($K$) & ($J/Kg.K$) &($K$) &($J/kg$) & ($K$) & ($K$) & \\
\hline
0.0 &130&0.042&49&2.1&16&0.12&81&9.4&130 &138& 1.2 \\
0.05 &65&0.25&75&19&20&-0.3&29&-3&46 &155& 2.5 \\
0.2 &70&0.68&78&52&22&-1.73&23&-17.6&50 &140& 3.0 \\
\hline
\end{tabular}
\end{center}
\end{table*}
The values of $-\bigtriangleup S_M^{max}$ are given in Table~\ref{T1}. We observe a striking enhancement of the MCE in the substituted samples. However, the value still remains small compared to standard magnetocaloric materials. We have calculated the RC using the method shown in the inset of Fig.~\ref{fig:RC}a. The shaded area under the $-\bigtriangleup S_M(T)$ curve is a measure of the RC. For the conventional MCE at higher temperature, the value of $\delta T_{FWHM}$ increases with field and reaches up to $81$~K at $7$~T. On the other hand, for the IMCE observed at lower temperature, the value of $\delta T_{FWHM}$ decreases with increasing field. The variation of $\delta T_{FWHM}$ as a function of field around $T_C$ and $T_N$ is shown in Fig.~\ref{fig:RC}a. The calculated values of RC as a function of applied magnetic field around $T_C$ and $T_N$ are shown in Fig.~\ref{fig:RC}b-d. As the formula suggests, either a large value of $-\bigtriangleup S_M(T)$, a broad $-\bigtriangleup S_M(T)$ curve, or both produce a large value of RC. In our case, the small value of the conventional MCE is more than compensated by the large value of $\delta T_{FWHM}$, leading to RC values comparable to standard magnetocaloric materials.
Although the values of $-\bigtriangleup S_M$ for the YICO series are similar to those of known pyrochlore oxides exhibiting the MCE~\cite{Cai,Yikun,Cui,Ben1,Khachnaoui,Aoki,Orendac,Sosin,Ben3,Wolf,YaoDong,Tkac} and not as large as those of other MCE materials~\cite{Das,Paromita,Sanjay,Santanu1,Santanu2,Chaudhary}, the order-of-magnitude enhancement of both the conventional MCE and the IMCE with substitution is quite encouraging. Besides these observations, the relatively wide $\delta T_{FWHM}$ of the YICO samples offers a very competitive RC around both transition temperatures. The coexistence of conventional MCE and IMCE has been observed in a variety of magnetic systems. However, the two working temperature windows are usually small~\cite{Krenke,Xixiang,Anis2,Ajay,Bonilla}. From the applications point of view, magnetocaloric materials which can work over a broad temperature regime, i.e., those having a large $\delta T_{FWHM}$, are highly attractive.
The important findings for the YICO series are that both the `low-temperature negative' minimum (around $T_N$) and the `high-temperature positive' maximum (around $T_C$) in $-\bigtriangleup S_M(T)$ span a broad temperature range. It should also be noted that the reported pyrochlore oxides~\cite{Cai,Yikun,Cui,Ben1,Khachnaoui,Aoki,Orendac,Sosin,Ben3,Wolf,YaoDong,Tkac} show only the conventional MCE and a complete absence of the IMCE. The value of $\delta T_{FWHM}$ is more than $40$~K around $T_C$ (Fig.~\ref{fig:RC}a) at low field ($0.1$~T for the doped samples), which is larger than that of the established reference material Gd~\cite{Julia} but smaller than the recently reported value for the Fe-Ni-Cr~\cite{Chaudhary} system. We have also calculated the relative cooling power (RCP), which is often used to estimate the potential for magnetic cooling and is defined as RCP$=-\bigtriangleup S_M^{max}\delta T_{FWHM}$. The values of RCP and RC are generally close to each other, with the former roughly scaling as $4/3$ times the latter. As an example, the value of RCP corresponding to the conventional MCE for $x=0.2$ at $5$~T around liquid nitrogen temperature is $0.27$ Jcm$^{-3}$, quite competitive when pitted against known magnetocaloric materials~\cite{Tapas}. Although the IMCE is associated with a metamagnetic transition, the hysteresis is negligible, thus minimizing possible energy loss. With the RCP corresponding to the IMCE being of similar magnitude ($-0.14$ Jcm$^{-3}$ at $5$~T for $x=0.2$), we emphasize that the global working efficiency can be improved further by using both the conventional MCE and the IMCE via a special working procedure employing magnetization and demagnetization processes~\cite{Krenke,Xixiang}. Additionally, since it is not based on rare-earth elements, YICO might turn out to be an interesting candidate material for magnetic refrigeration.
In conclusion, we find that Y$_2$Ir$_{2-x}$Cr$_x$O$_7$ (YICO) exhibits multiple magnetic transitions, possibly due to the frustration-induced coexistence of AFM and FM clusters, which lead to the conventional MCE at high temperature and the IMCE at low temperature. One of the most important results of the present work is a plausible strategy whereby magnetic frustration introduced by chemical substitution can lead not just to the coexistence of conventional and inverse MCE but to an order-of-magnitude enhancement of $-\bigtriangleup S_M$. Both the conventional MCE and the IMCE span a broad working temperature range in YICO, resulting in high values of RC and RCP, comparable to leading magnetocaloric materials. We expect that the present study may open up a pathway to explore more suitable candidate materials among non-rare-earth frustrated magnetic oxides.
\section{ACKNOWLEDGEMENTS}
SM would like to acknowledge Department of Science and Technology (DST), Government of India for financial support.
\section{AUTHOR DECLARATIONS}
\subsection{Data availability}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\subsection{Conflict of interest}
The authors have no conflicts to disclose.
\section{Introduction}
In recent years, solid state magnetic cooling based on the magnetocaloric effect (MCE) has gained considerable attention as an alternative to the traditional gas compression refrigeration technology~\cite{Julia,Jian,Pecharsky1}. The MCE is defined and measured using primarily two broad categories: 1) change in the temperature of magnetic material due to the application of magnetic field in an adiabatic process ($\bigtriangleup T_{ad}$); and 2) change in the magnetic entropy ($\bigtriangleup S_M$) with application of magnetic field in an isothermal condition. In conventional magnetocaloric effect, spins align on application of magnetic field leading to reduction of magnetic entropy: the materials heat up when the magnetic field is applied and cool down when the field is removed. However, the opposite is also possible when the magnetic entropy increases with application of magnetic field. Such phenomenon is characterized as inverse MCE (IMCE)~\cite{Krenke,Xixiang,Anis2,Ajay,QZhang}. The IMCE materials exhibit minimum in $-\bigtriangleup S_M$ vs $T$ curve near AFM transition temperature ($T_N$). Generally, systems having AFM, ferrimagnetic transitions and ferromagnetic materials with martensitic phase transformation, are known to exhibit IMCE~\cite{Krenke,Anis2}. IMCE materials can be used for cooling by application of magnetic field under adiabatic condition and can also be employed as heat-sinks in case of cooling by conventional MCE materials~\cite{Krenke,Anis2}. Thus the discovery of new IMCE materials is equally important, if not more, in the search for suitable conventional MCE materials.
The refrigerant capacity (RC) provides an estimate of the amount of transferred heat between the hot end at $T_{hot}$ and cold end at $T_{cold}$ in one ideal thermodynamic cycle~\cite{QZhang}. While comparing the different materials to be utilized in the same thermodynamic cycle, the materials exhibiting larger RC is favored because of larger amount of heat transfer. RC is defined as the area under the $-\bigtriangleup S_M(T)$ curve for a particular magnetic field between $T_{hot}$ and $T_{cold}$, i.e., RC$=\int^{T_{hot}}_{T_{cold}}[\bigtriangleup S_M(T)]dT$. The two temperatures $T_{cold}$ and $T_{hot}$ define the working range of the refrigerator, which is associated with the full width at half maximum ($\delta T_{FWHM}$) of the $-\bigtriangleup S_M(T)$ curve.
It was proposed long back that the strongly frustrated classical Heisenberg antiferromagnets could potentially exhibit greater field induced adiabatic temperature change as compared to non-frustrated magnets~\cite{Zhitomirsky}. However, experimental reports of MCE in the frustrated pyrochlore oxides with formula A$_2$B$_2$O$_7$ (A=Y, Bi, rare earth elements; and B$=$ transition metal) are not very encouraging and are limited to Mn~\cite{Cai,Yikun,Cui,Ben1,Khachnaoui}, Ti~\cite{Aoki,Orendac,Sosin,Ben3,Wolf}, Mo~\cite{YaoDong} and Sn~\cite{Tkac} based pyrochlore oxides. Investigation of MCE on pyrochlore iridates could be interesting not only from applications point of view but also from the fundamental point of view. Pyrochore iridates provide a vast template to study the interplay of magnetic frustration, spin-orbit coupling and Coulomb correlation~\cite{Krempa,Abhishek1,Bikash,Vinod1,Vinod2,Vinod3,Vinod4,Vinod5}. The resulting complex magnetic phases can thus be probed using MCE as well. In the present work, we investigate the MCE of frustrated pyrochlore iridates Y$_2$Ir$_{2-x}$Cr$_x$O$_7$ (YICO). We discover the coexistence and enhancement of the conventional MCE and IMCE with chemical substitution, accompanied by high refrigerant capacity and relative cooling power with a broad working range around the liquid nitrogen temperature. The present study firmly places YICO in the category of new promising oxides candidates for magnetic cooling.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{VKD_Fig1}\\
\caption{(a) Magnetization as a function of temperature; inset shows an enlarged view of the $\chi_{ZFC}$-$T$ curve of the un-doped sample. The arrows indicate the magnetic transitions labeled as $T_N$ in the text. (b) Magnetic field dependence of the isothermal magnetization measured at different temperatures for $x =0.0$.}\label{fig:MtMh}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{VKD_Fig2}\\
\caption{First derivative of magnetization as a function of magnetic field for (a) $x=0.05$ and (b) $x=0.2$; the peaks represent the critical fields for metamagnetic transitions. Insets show the corresponding magnetization isotherms.}\label{fig:Mh}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{VKD_Fig3}\\
\caption{Temperature dependence of the magnetic entropy change $-\bigtriangleup S_M$ for the samples (a) $x =0.0$, (b) $x =0.05$; inset shows $-\bigtriangleup S_M(T)$ at low fields, and (c) $x = 0.2$.}\label{fig:Entropy}
\end{figure*}
The synthesis of the polycrystalline YICO series with $x = 0.0, 0.05, 0.2$ was carried out by conventional solid state reaction~\cite{Vinod1}. High-purity raw ingredients IrO$_2$ (99.99\%), Cr$_2$O$_3$ (99.99\%) and Y$_2$O$_3$ (99.99\%) were used. The mixture of raw materials was ground, pelletized and then sintered at $1000^{\circ}$C for $250$~h with several intermediate grindings. The room temperature (Cu-K$_\alpha$ radiation) X-ray diffraction pattern of the polycrystalline YICO series shows a nearly pure phase with a cubic $Fd\bar{3}m$ structure~\cite{Vinod1}. Electronic structure characterization was performed using X-ray photoelectron spectroscopy~\cite{Vinod1}. Magnetic measurements were performed using a Quantum Design MPMS SQUID VSM.
The field cooled (FC) and zero field cooled (ZFC) magnetic susceptibility $\chi=M/H$ measured as a function of temperature in the presence of an applied magnetic field $H =1$~kOe is shown in Fig.~\ref{fig:MtMh}a. For the $x=0.0$ sample, a bifurcation can be seen at the irreversibility temperature $T_{irr}$ $\sim158$~K. A maximum in the $\chi_{ZFC}(T)$ curve, identified as the freezing temperature $T_f$, can be seen for the substituted samples. The spin freezing is not as prominent in the parent undoped sample, where $T_{irr}$ and $T_f$ are close to each other (inset Fig.~\ref{fig:MtMh}a) and there is an upturn in the ZFC susceptibility below $T_{f}$, indicating only partial spin freezing at $T_f$. The values of $T_f$ for all the samples are given in Table~\ref{T1}. The existence of $T_{irr}$ at higher temperature and the maximum in the $\chi_{ZFC}(T)$ curve at lower temperature suggest the possibility of a cluster glass-like phase~\cite{Vinod1}, particularly in the substituted samples. We estimated the Curie-Weiss temperature $\theta_{CW}$ by fitting the $\chi_{FC}$-$T$ curve in the high temperature range. The values of $\theta_{CW}$ are found to be negative and large for the YICO series, indicating strong effective AFM correlations. The first derivative of $\chi_{ZFC}$ as a function of temperature reveals extremum points, which are identified as an FM-like magnetic transition at higher temperature $T_C$~\cite{Ding} and an AFM-like transition at lower temperature $T_N$~\cite{Kazak}, respectively. The estimated values of $T_C$ and $T_N$ are shown in Table~\ref{T1}.
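For reference, a Curie-Weiss fit of this kind is straightforward to reproduce; the following Python sketch (synthetic data, with the fit window chosen to match the 180--240~K range quoted in Table~\ref{T1}) recovers $\theta_{CW}$ from $\chi(T)$:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def curie_weiss(T, C, theta_cw):
    # chi = C/(T - theta_CW); a large negative theta_CW
    # signals strong effective AFM correlations
    return C / (T - theta_cw)

def fit_theta_cw(T, chi, T_lo=180.0, T_hi=240.0):
    """Fit the paramagnetic regime of chi(T); returns theta_CW [K]."""
    m = (T >= T_lo) & (T <= T_hi)
    (C, theta_cw), _ = curve_fit(curie_weiss, T[m], chi[m],
                                 p0=(1.0, -100.0))
    return theta_cw

# synthetic check: input theta_CW = -140 K is recovered by the fit
T = np.linspace(150.0, 300.0, 151)
chi = curie_weiss(T, 0.8, -140.0)
print(fit_theta_cw(T, chi))   # ~ -140.0
\end{verbatim}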
\begin{figure}
\centering
\includegraphics[width=7cm]{VKD_Fig4}\\
\caption{(a) Temperature variation of $-\frac{d(\bigtriangleup S_M)}{dH}$ at an applied field
$\lesssim 7$~T. (b) First derivative of $\chi_{ZFC}$ as a function of temperature, plotted on the same temperature scale.}\label{fig:Zhitormisky}
\end{figure}
The magnetization isotherms for all the samples have been measured at several closely spaced temperatures. Figure~\ref{fig:MtMh}b and Fig.~\ref{fig:Mh}a-b show that the magnetization increases with field up to $7$~T without any sign of saturation. For the doped samples, below $T_N$, the magnetization exhibits an inflection point suggesting a metamagnetic-like transition, further confirmed by the maximum in the slope of the $M(H)$ curves (Fig.~\ref{fig:Mh}a,b). The critical field for the metamagnetic transition, labeled $H_{MT}$, is progressively reduced at elevated temperatures, consistent with physical expectations~\cite{Midya}. Although $H_{MT}$ usually vanishes at $T_N$ for materials exhibiting a PM-AFM transition~\cite{Midya}, it does not vanish completely at $T_N$ for the YICO series, suggesting a more complex scenario such as the existence of AFM clusters above $T_N$. The high value of $H_{MT}$ suggests the presence of stronger magnetic anisotropy in the substituted samples. For the $x=0.0$ sample, no maximum is observed in the $\frac{dM}{dH}$ vs $H$ curves. The metamagnetic transition in the substituted samples is more likely of the spin-reorientation type rather than the spin-flop type because of the small value of $M$ ($0.23 \mu_B$ for $x = 0.2$ at $7$~T and $2$~K).
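In practice, $H_{MT}$ is read off as the location of the maximum of the numerical derivative $dM/dH$; a minimal Python sketch of this extraction (hypothetical input arrays) is:
\begin{verbatim}
import numpy as np

def critical_field(H, M):
    """Estimate H_MT as the field at the maximum slope of M(H).

    H : applied fields, ascending;  M : magnetization at each H
    """
    dMdH = np.gradient(M, H)       # numerical first derivative
    return H[np.argmax(dMdH)]      # peak position = H_MT
\end{verbatim}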
The change in magnetic entropy $\bigtriangleup S_M$ is calculated from the $M$-$H$ isotherms using the Maxwell relation~\cite{Ding} $\bigtriangleup S_M=\int^H_0(\partial M/\partial T)dH$. Fig.~\ref{fig:Entropy} shows the $-\bigtriangleup S_M(T)$ curves plotted for different magnetic fields. For $x=0$, $-\bigtriangleup S_M(T)$ exhibits a positive maximum around $T_C$ (Fig.~\ref{fig:Entropy}a). With substitution, $-\bigtriangleup S_M(T)$ shows not just a positive maximum at high temperature, but undergoes a sign inversion at $T_f$, followed by a minimum at lower temperature, thus exhibiting the coexistence of conventional MCE and IMCE (Fig.~\ref{fig:Entropy}b,c). Such a smooth cross-over from conventional MCE at high temperature to IMCE at low temperature clearly suggests coexistence of and competition between AFM and FM phases. Curiously, for sufficiently low magnetic fields, the sign inversion takes place close to $T_f$, presumably a cluster glass freezing transition in the presence of competing FM and AFM interactions. The cluster glass behavior cannot be attributed to the geometry of the pyrochlore lattice. It is more likely due to RKKY interaction between local Cr$^{3+}$ moments mediated by Ir$^{4+}$ conduction electrons~\cite{Bikash}.
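Numerically, the Maxwell relation is evaluated on the measured grid of isotherms; the following sketch (assuming $M$ is sampled on regular $T$ and $H$ grids) illustrates the computation of $-\bigtriangleup S_M(T)$ at the maximum field:
\begin{verbatim}
import numpy as np

def entropy_change(T, H, M):
    """-Delta S_M(T) at the maximum field via the Maxwell relation.

    T : (nT,) isotherm temperatures [K]
    H : (nH,) field values, ascending from ~0
    M : (nT, nH) magnetization M(T_i, H_j)
    """
    dMdT = np.gradient(M, T, axis=0)   # dM/dT at fixed H
    dS = np.trapz(dMdT, H, axis=1)     # integrate over H
    return -dS                         # -Delta S_M at each T
\end{verbatim}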
Further, we have investigated the behavior of the MCE near the maximum applied magnetic field of $7$~T, presumably close to the saturation field. Figure~\ref{fig:Zhitormisky}a shows the field derivative of the magnetic entropy as a function of temperature, plotted along with the $\frac{d\chi_{ZFC}}{dT}$ curve in Fig.~\ref{fig:Zhitormisky}b. For the undoped sample, the value of $-\frac{d(\bigtriangleup S_M)}{dH}$ is positive throughout the temperature range, while that for the doped samples undergoes a sign inversion below $T_f$. The enhanced value of $-\frac{d(\bigtriangleup S_M)}{dH}$ and the sign inversion for the doped samples could be attributed to the increased frustration due to Cr substitution. The origin of the frustration is not purely geometric, for which one would expect a power-law temperature dependence of the field derivative of the magnetic entropy~\cite{Zhitomirsky}. We attribute the increased frustration to the competing FM and AFM interactions introduced by substitution with the localized-moment Cr$^{3+}$. The estimated value of the frustration parameter $f=\frac{\mid \theta_{CW} \mid}{T_f}$ is listed in Table~\ref{T1}.
\begin{figure*}
\centering
\includegraphics[width=12 cm]{VKD_Fig5}\\
\caption{(a) Magnetic field dependence of $\delta T_{FWHM}$ around $T_C$ [hollow symbols] and $T_N$ [solid symbols]; inset shows the method used to calculate the RC and RCP from the $-\bigtriangleup S_M(T)$ curves around $T_C$ and $T_N$. RC and RCP as a function of applied magnetic field for the samples (b) $x=0.0$, (c) $x=0.05$, and (d) $x=0.2$. Insets show $-\bigtriangleup S_M^{max}$ vs $H$.}\label{fig:RC}
\end{figure*}
\begin{table*}
\caption{Important parameters related to the magnetocaloric effect. Here, T1 and T2 represent the $\delta T_{FWHM}$ around $T_C$ and $T_N$, respectively. RCP1 and RCP2 are the values of the Relative Cooling Power corresponding to T1 and T2, respectively, at $7$~T. The Curie-Weiss temperature $\theta_{CW}$ is estimated in the temperature range 180--240~K. \label{T1}}
\begin{center}
\begin{tabular}{c c c c c c c c c c c c}
\hline
$x$&$T_C$&($-\bigtriangleup S_M^{max}$)$_{T_C}$&T1&RCP1&$T_N$&($-\bigtriangleup S_M^{max}$)$_{T_N}$&T2& RCP2& $T_f$& -$\theta_{CW}$& $f$ \\
&(K)& (J/kg~K) & (K) & (J/kg) &(K) & (J/kg~K) &(K) &(J/kg) & (K) & (K) & \\
\hline
0.0 &130&0.042&49&2.1&16&0.12&81&9.4&130 &138& 1.2 \\
0.05 &65&0.25&75&19&20&-0.3&29&-3&46 &155& 2.5 \\
0.2 &70&0.68&78&52&22&-1.73&23&-17.6&50 &140& 3.0 \\
\hline
\end{tabular}
\end{center}
\end{table*}
The values of $-\bigtriangleup S_M^{max}$ are given in Table~\ref{T1}. We observe a striking enhancement of the MCE in the substituted samples. However, the values still remain small compared to those of standard magnetocaloric materials. We have calculated RC using the method shown in the inset of Fig.~\ref{fig:RC}a. The shaded area under the $-\bigtriangleup S_M(T)$ curve is a measure of the RC. For the conventional MCE at higher temperature, the value of $\delta T_{FWHM}$ increases with field and reaches up to $81$~K at $7$~T. On the other hand, for the IMCE observed at lower temperature, the value of $\delta T_{FWHM}$ decreases with increasing field. The variation of $\delta T_{FWHM}$ as a function of field around $T_C$ and $T_N$ is shown in Fig.~\ref{fig:RC}a. The calculated values of RC as a function of applied magnetic field around $T_C$ and $T_N$ are shown in Fig.~\ref{fig:RC}b-d. As the formula suggests, either a large value of $-\bigtriangleup S_M(T)$, a broad $-\bigtriangleup S_M(T)$ curve, or both produce a large value of RC. In our case the small value of the conventional MCE is more than compensated by the large value of $\delta T_{FWHM}$, leading to RC values comparable to those of standard magnetocaloric materials.
Although the values of $-\bigtriangleup S_M$ for the YICO series are similar to those of known pyrochlore oxides exhibiting MCE~\cite{Cai,Yikun,Cui,Ben1,Khachnaoui,Aoki,Orendac,Sosin,Ben3,Wolf,YaoDong,Tkac} and not as large as those of other MCE materials~\cite{Das,Paromita,Sanjay,Santanu1,Santanu2,Chaudhary}, the order-of-magnitude enhancement of both the conventional MCE and IMCE with substitution is quite encouraging. Besides these observations, the relatively wide $\delta T_{FWHM}$ of the YICO samples offers a very competitive RC around both transition temperatures. The coexistence of conventional MCE and IMCE has been observed in a variety of magnetic systems; however, the two working temperature windows are usually small~\cite{Krenke,Xixiang,Anis2,Ajay,Bonilla}. From an applications point of view, magnetocaloric materials that can work over a broad temperature regime, i.e., having a large $\delta T_{FWHM}$, are highly attractive.
The important findings for the YICO series are that both the low-temperature negative minimum (around $T_N$) and the high-temperature positive maximum (around $T_C$) in $-\bigtriangleup S_M(T)$ span a broad temperature range. It should also be noted that the reported pyrochlore oxides~\cite{Cai,Yikun,Cui,Ben1,Khachnaoui,Aoki,Orendac,Sosin,Ben3,Wolf,YaoDong,Tkac} show only the conventional MCE and a complete absence of IMCE. The value of $\delta T_{FWHM}$ is more than $40$~K around $T_C$ (Fig.~\ref{fig:RC}a) at low field ($0.1$~T for the doped samples), which is larger than that of the established reference material Gd~\cite{Julia} but smaller than the recently reported value for the Fe-Ni-Cr~\cite{Chaudhary} system. We have also calculated the relative cooling power (RCP), which is often used to estimate the potential of materials for magnetic cooling and is defined as RCP$=-\bigtriangleup S_M^{max}\,\delta T_{FWHM}$. The values of RCP and RC are generally close to each other, with the former roughly scaling as $4/3$ times the value of RC. As an example, the value of RCP corresponding to the conventional MCE for $x=0.2$ at $5$~T around liquid nitrogen temperature is $0.27$ Jcm$^{-3}$, quite competitive when pitted against known magnetocaloric materials~\cite{Tapas}. Although the IMCE is associated with a metamagnetic transition, the hysteresis is negligible, thus minimizing possible energy loss. The RCP corresponding to the IMCE being of similar magnitude ($-0.14$ Jcm$^{-3}$ at $5$~T for $x=0.2$), we emphasize that the global working efficiency can be improved further by using both the conventional MCE and IMCE via a special working procedure employing magnetization and demagnetization processes~\cite{Krenke,Xixiang}. Additionally, since it is not based on rare earth elements, YICO might turn out to be an interesting candidate material for magnetic refrigeration.
In conclusion, we find that Y$_2$Ir$_{2-x}$Cr$_x$O$_7$ (YICO) exhibits multiple magnetic transitions, possibly due to frustration-induced coexistence of AFM and FM clusters, which leads to the conventional MCE at high temperature and IMCE at low temperature. One of the most important results of the present work is a plausible strategy whereby magnetic frustration introduced by chemical substitution can lead to not just the coexistence of conventional and inverse MCE but an order-of-magnitude enhancement of $-\bigtriangleup S_M$. Both the conventional MCE and IMCE span a broad working temperature range in YICO, resulting in high values of RC and RCP, comparable to leading magnetocaloric materials. We expect that the present study may open up a pathway to explore more suitable candidate materials among non-rare-earth frustrated magnetic oxides.
\section{ACKNOWLEDGEMENTS}
SM would like to acknowledge Department of Science and Technology (DST), Government of India for financial support.
\section{AUTHOR DECLARATIONS}
\subsection{Data availability}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\subsection{Conflict of interest}
The authors have no conflicts to disclose.
\section{Conclusion and Discussion}
\label{sec:conclusions}
We presented a novel real-time SLAM system that jointly reconstructs a static scene and moving objects from either continuous or discrete observations (i.e., no smooth object motion is required). The system automatically detects a moving object, separates it from the static scene, and creates two independent maps. It extracts 3D points, 2D points, and 3D planes from RGB-D data and splits them into disjoint measurement sets that are independently used by the two maps for stable registration. The use of a sparse feature-based representation allows continuous and independent optimization of the two maps even on a CPU. Thus, a mobile robot can reliably model an object during exploration and then use the reconstructed model for manipulation tasks. Note that we avoided modeling hand-object interactions in this work, as the focus was the simultaneous reconstruction of multiple maps from various motion patterns. However, the use of the presented system on a mobile robot platform is possible by disabling the SLAM system every time the robot hand enters the view to interact with the object and re-enabling it when the hand is no longer visible.
Our moving object detection module relies on the outliers of frame localization. Thus, we assign the dominant motion pattern to the static map, whereas segments with a high number of outliers initialize an object map. In other words, the objects are seen as a small subgroup of features from the input frames. However, static and object maps are just a matter of naming: our algorithm would work equally well in the case where the sensor is zoomed in on the object and the dominant motion pattern comes from the object. The key strength of our method is the multi-group registration procedure that considers both the whole set of measurements and subsets of measurements for localization. It is also worth mentioning that the system can have difficulty detecting two different moving objects if their motion is visible in the same frame. In this case, our system will initialize a single map for both objects and will fail to grow the object map. In the future, we would like to consider a sequence of frames for moving object detection instead of considering only consecutive frames. Another solution to this problem could be moving object detection in each object map in order to split multiple motion patterns.
One of the limitations of this work is the convergence of the maps in the case of contamination from one map to the other. This is due to the fact that the feature classification relies on the registration. Once some measurements are mistakenly added to a map, localization may lock onto incorrect regions. Our future work includes developing a pruning method that checks the feature classification in past keyframes and corrects it in case of misclassification. Also, the presented technique will have difficulty with texture-less areas and objects, since it is based on sparse features.
This work focuses on rigidly moving objects in a scene. According to the presented approach, a moving object can either be continuously moving while being seen in the camera field of view, or it can be seen at discrete time instances throughout the sequence, e.g., multiple instances appearing at distant locations in a scene. Our method can handle both situations. Due to our moving object assumption, the maps are independent of each other and cannot share any geometric constraint. However, if a detected object is stationary in some frames with respect to the static scene, the maps can be partially dependent on each other. This information could help improve accuracy in the bundle adjustment, which is another important future direction of this work.
\section{Experiments and Results}
\label{sec:Experiments}
We evaluated our method on different indoor scenes recorded from either a hand-held RGB-D camera (ASUS Xtion) or a mobile Fetch robot as shown in Figure \ref{fig:concept}.
The system was implemented in C++ on the Robot Operating System (ROS), and used images and depth maps at a resolution of $640 \times 480$ pixels. We used $0.4$ as the RANSAC inlier ratio and set $\sigma = \max(1, 3 \sigma_{Z})$ in cm, where $\sigma_{Z}$ is the depth-dependent measurement error~\cite{Khoshelham2012Accuracy}, for deciding whether measurement and landmark associations are inliers. We did not proceed with RANSAC and reported localization failure if there were fewer than 10 feature matches. Bundle adjustment was performed using the Ceres Solver \cite{CeresSolver}. This online SLAM system runs at $\sim 3.5$ frames per second on a CPU. The experiments described below aim to test the capability of our system at (i) simultaneously and independently reconstructing the static scene and rigidly moving objects and (ii) detecting and tracking moving objects \footnote{Video of the experiments available at \url{https://youtu.be/goflUxzG2VI}}.
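For concreteness, the depth-adaptive inlier test can be sketched as follows (a Python illustration only; the actual system is implemented in C++, and the quadratic error coefficient below is illustrative rather than the calibrated value of~\cite{Khoshelham2012Accuracy}):
\begin{verbatim}
def inlier_threshold_cm(z_m, k=0.285):
    """Distance threshold sigma = max(1, 3*sigma_Z) in cm.

    sigma_Z grows roughly quadratically with depth z [m] for
    structured-light RGB-D sensors; k is an assumed coefficient.
    """
    sigma_z_cm = k * z_m ** 2          # assumed quadratic error model
    return max(1.0, 3.0 * sigma_z_cm)  # never below 1 cm
\end{verbatim}
A measurement-landmark association observed at, e.g., $3$~m depth is thus accepted under a looser threshold than one observed at close range, reflecting the sensor noise characteristics.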
\subsection{Experimental Scenarios}
\label{sec:expscenes}
\textbf{Scene 1} (Figure~\ref{fig:pic_experiment1}): For the first experiment we used a discrete set of RGB-D images showing different objects placed on a desk, captured from different viewpoints. The red box was the only moving object in the scene. As soon as the red box moved (first frame in Figure \ref{fig:pic_experiment1}) the system initialized the static and object maps and started tracking the object. Figure \ref{fig:pic_experiment1}(B) shows some of the keyframes stored in the object map along with the position of the red box. The superimposed blue mask indicates the frame segments associated with the object map (i.e., the sets of features fed to the object map). Figure \ref{fig:pic_experiment1}(A) shows the reconstructed static map (left), which contained $10$ keyframes, $2270$ point landmarks, and $17$ plane landmarks, as well as the combined object and static map (right). The object model is placed at the initially detected position. Figure \ref{fig:pic_experiment1}(C) shows the reconstructed object model and the object map ($11$ keyframes) having $813$ point landmarks and $4$ plane landmarks. Notice that the system is able to decouple the two maps and does not require smooth object motion.
\begin{figure}[t]
\center
\includegraphics[width=\columnwidth]{experiment_3}
\caption{Reconstruction results on example scene 2, where we model an object which consists of mostly non-planar segments: example keyframes from the sequence (left), reconstructed point cloud (right).}
\label{fig:pic_experiment2}
\end{figure}
\textbf{Scene 2} (Figure~\ref{fig:pic_experiment2}): In the second experiment we used the robot in Figure \ref{fig:concept} to look at various objects on a desk. When the green toy moved, the system initialized the object map. In this experiment, we show that the proposed method is able to model an object consisting of mostly non-planar regions. The object map included $812$ point landmarks and $3$ plane landmarks, whereas the static map had $3038$ point landmarks and $11$ plane landmarks.
\textbf{Scene 3} (Figure~\ref{fig:pic_experiment3}): In the third experiment we used a hand-held RGB-D camera in an indoor office scene. The scene contained two instances of the target object (white-blue box) placed at different locations. Figure \ref{fig:pic_experiment3}(B) shows the keyframes added to both the static and object maps during the whole experiment. The user started the experiment by pointing the camera at the white-blue box, which was initially partially occluded, as seen in position 1 of Figure~\ref{fig:pic_experiment3}(A). In this experiment, we manually specified the segments corresponding to the object in the first frame to initialize an object map, since the object was stationary. The user then moved the camera away from the box, focusing on the rest of the office (frame $10$, position 2). The system lost track of the object and stopped adding new keyframes to the object map. After a brief exploration ($32$ frames), the user pointed the camera at the second instance of the white-blue box (frame $42$, position 3).
The system relocalized the object, generated a new keyframe based on the feature classification described in Section~\ref{sec:Method}, and started adding new keyframes to the object map. The number of landmarks increased as shown in Figure \ref{fig:pic_experiment3}(C), where we display point landmarks overlaid on the plane landmarks of the object map as the respective frames were gradually added. This is possible because the two maps were always decoupled and the system always performed independent global registration of the current frame with respect to the two maps. Thus the registration failures of frames $10 \to 42$ against the object map did not prevent the system from relocalizing the object later. These failures did not stop the growth of the static map in those frames, since static map localization did not lose track. The user then moved the camera around the box (position 4 in Figure \ref{fig:pic_experiment3}) and new plane and point landmarks were added to the object map, as seen in Figure \ref{fig:pic_experiment3}(C), which shows the evolution of the object map over the described keyframes. Our plane extraction algorithm fitted nearby points into the plane boundaries, resulting in the leaking representation of Figure \ref{fig:pic_experiment3}(C3-4). This, however, does not affect plane registration, which only considers plane equations. The reconstructed box model is displayed in Figure \ref{fig:pic_experiment3}(D), which was generated using all the estimated keyframe poses contained in the object map. Similarly, the combination of the static map (office) and the two box instances displayed in Figure~\ref{fig:pic_experiment3}(A) was generated using all the estimated keyframe poses contained in the static and object maps. Figure~\ref{fig:chart_experiment3} shows the number of landmarks and the number of keyframes with respect to the input frame indices. As can be seen, the static map grows larger, whereas the object map only grows when the object is visible.
\begin{figure*} [t]
\center
\includegraphics[width=\textwidth]{experiment_1}
\caption{Reconstruction results on example scene 3: (A) reconstructed point cloud for the static and object maps along with the 4 camera locations where keyframes were added to both maps, (B) keyframes at the camera locations shown in (A), where the blue color indicates the set of segments added to the object map, (C) point landmarks overlaid on the plane landmarks of the object map when the respective keyframes were added, (D) point cloud visualization of the reconstructed object map from two different viewpoints. Notice that although the object was partially occluded in the initial keyframe, the final reconstructed model was gradually completed using measurements from other keyframes.}
\label{fig:pic_experiment3}
\end{figure*}
\begin{figure} [t]
\center
\includegraphics[width=\columnwidth]{landmarks}
\caption{Number of keyframes (left) and number of landmarks (right) with respect to frame indices for the static (red) and object (blue) maps of experiment scene 3. The static map grows larger, while the object map only grows when the object is visible.}
\label{fig:chart_experiment3}
\end{figure}
\section{Introduction}
\label{sec:intro}
Scene understanding and navigation are crucial for autonomous agents to localize themselves with respect to a reconstructed map and interact with the surrounding environment. 3D object modeling and localization lie at the core of robot manipulation. Conventional simultaneous localization and mapping (SLAM) systems are successful at representing the environment when the scene is static. Yet, in the case of a dynamic scene, large moving objects can degrade the localization and mapping accuracy. On the other hand, object motion can provide useful object information acquired from various viewpoints.
\begin{figure}[t]
\center
\includegraphics[width=0.8\columnwidth]{fetch}
\caption{Illustrative representation of the system. The mobile robot used in the experiments detects a moving object and generates an object map separately from a static map corresponding to the static environment.}
\label{fig:concept}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.85\textwidth]{flowchart.pdf}
\caption{System overview: Our method first initializes the static map with the first input frame and captures another RGB-D frame. At the next step, the registration module performs a multi-group registration between the current input frame and each of the existing maps. If no object map has been initialized yet, the moving object detection module finds the regions that belong to the object. If there are already existing object maps, then we first perform feature classification and split off the measurements associated with the existing object maps. For the rest of the measurements, we run moving object detection in order to find whether there are new objects in the scene. Depending on the estimated pose of the frame with respect to each map, the frame is added as a novel keyframe to the respective map. The bundle adjustment procedure runs asynchronously with SLAM.}
\label{fig:system}
\end{figure*}
This paper presents a technique for the simultaneous reconstruction of static regions and rigidly moving objects in a scene. While camera localization and moving object tracking are each already challenging problems, addressing both of them simultaneously creates a chicken-and-egg problem: it is easy to map a scene when the object motion is known and object regions can be removed from input frames beforehand, and conversely it is easy to detect a moving object when the camera pose is known. The presented method creates independent maps for the static scene and moving objects, tackling both problems with a multi-group registration scheme.
We first start with a single map and localize each frame with respect to this map, referred to as the static map. A frame is represented as a collection of segments, where each segment contains a group of features extracted from the frame. A moving object is detected as a set of segments that has a high outlier ratio after frame localization with respect to the static map. Once we detect the features that fall inside dynamic segment measurements, we initialize a new map to represent the rigidly moving object, referred to as an object map. For the following observations, each frame is registered with respect to both the static and object maps. We distinguish the features belonging to the objects and to the static region based on the inliers resulting from these registrations.
Our main contribution is the accurate discrimination of features coming from dynamic and static regions following a multi-group geometric verification approach. Following the approach of~\cite{Cansizoglu16IROS}, we use feature grouping for object representation. In our SLAM framework, keyframes are treated as collections of features, and objects are seen as collections of segments, which are subsets of features from keyframes. Our multi-group registration scheme considers the registration of all features and of various subsets of features of the frame against each map. First, we register all measurements against the static map and the object map. If there are sufficient features coming from both the static scene and moving objects, this frame-based registration will succeed for both maps. However, if there is a dominating motion pattern in the frame, then the localization of small moving objects can be missed. Thus, we also carry out a segment-based registration procedure, where we register the features falling inside a segment against the object map. To perform robust feature classification, we fuse the registration results obtained from these multiple geometric verifications.
Although the technique in~\cite{Cansizoglu16IROS} also deals with object tracking in a SLAM framework, there are major differences from this study. First, the method in~\cite{Cansizoglu16IROS} only involves the localization of static objects inside a SLAM system and does not address the problem of forming multiple independent maps. In this study, in contrast, we further tackle the problem of classifying features into static and dynamic maps after localizing objects. Distinguishing features and building disjoint maps is challenging, as any contamination from one set to the other will severely affect the subsequent localization.
Second, the method of~\cite{Cansizoglu16IROS} will not work for moving objects in a sequence, as its localization and bundle adjustment strongly rely on the static scene assumption. In this paper, we handle moving objects and make no assumption about the motion of the object (i.e., smooth or abrupt). The object motion is utilized as a means of 3D object model construction, as the motion provides rich viewpoint variation of the object. Third, we provide a simultaneous navigation and object modeling system, while a separate object scanning step is required for object tracking in~\cite{Cansizoglu16IROS}.
An important advantage of our method is the on-the-fly generation of object models while mapping the static environment at the same time. Just like a child learning to model and manipulate objects by watching others, our method learns both the object model and the static scene map at the same time based on the motion of the object. An example use of the presented technique is simultaneous robot navigation and object modeling, as seen in Figure~\ref{fig:concept}.
\subsection{Contributions}
We summarize the contributions of our work as follows:
\begin{itemize}
\item a multi-group registration scheme used to distinguish features coming from regions with different motion patterns, i.e., static region and moving object.
\item simultaneous reconstruction of static and object maps, yielding on-the-fly object model generation.
\item an automated technique to detect moving objects in a SLAM framework.
\end{itemize}
\section{Methodology}
\label{sec:Method}
We build our framework on the pinpoint SLAM system~\cite{Taguchi13ICRA,Cansizoglu16ICRA}, which localizes each frame with respect to a growing map using 3D planes, 3D points, and 2D points as primitives. The use of 2D measurements makes it possible to exploit information in regions where depth is not available (e.g., too close to or far from the sensor). In this paper, our segments include 3D points and 3D planes (but not 2D points) as features, similar to~\cite{Cansizoglu16IROS}, while the registration procedure exploits all the 2D points, 3D points, and 3D planes as features. We use the standard terminology of \textit{measurements} and \textit{landmarks}: the system extracts measurements from each input frame and generates/updates landmarks in a map. We use the SIFT~\cite{Lowe04IJCV} detector and descriptor for generating 2D and 3D point measurements, while 3D plane measurements are extracted using the method of~\cite{Feng14ICRA}.
Figure~\ref{fig:system} shows an overview of our system. Our method consists of three modules. The dynamic object detection module finds a group of features coming from a moving object in order to initialize a map for the object, referred to as an object map. The registration module localizes the input frame with respect to each of the existing maps, including a static map corresponding to the static environment and a set of object maps. We perform a multi-stage registration to ensure accurate pose estimates, since distinguishing features into static and dynamic regions relies on the registration output. Finally, the feature classification module divides the features into groups by determining, based on the localization results, which map each is associated with. Map update and bundle adjustment procedures are carried out for enlarging the maps and refining the keyframe poses, respectively.
\begin{figure}[t]
\center
\includegraphics[width=\columnwidth]{landmarksassociation}
\caption{Landmark association: (A) Measurements are extracted from the frame, where points and planes are grouped into segments. (B) Registration is performed between all extracted measurements and the landmarks in the static map and the object map. A further segment-based registration (Section~\ref{sec:localization}) is performed on the object map, which helps remove erroneously matched measurements (dashed lines), as described in Section~\ref{sec:classificationland}. (C) The landmarks are assigned to the object map and the static map.}
\label{fig:landmarksassoc}
\end{figure}
\subsection{Moving Object Detection}
Following a feature grouping approach, we generate segments in the frame after feature extraction. We first make use of the plane extraction output and initiate a segment from each plane, i.e., each segment contains the extracted plane and all point features that fall inside the plane. Second, we carry out a depth-based segmentation algorithm to generate segments from the rest of the point cloud after plane extraction.
In this work, the first input frame initializes the static map\footnote{The map that is initialized with the first frame corresponds to the regions of the scene with the dominant motion pattern observed in the initial keyframes. Thus, without loss of generality, we use the phrase ``static map'' with the assumption that the dominant motion pattern comes from the static region.}. Next, we register each following frame with respect to the static map, which results in inlier and outlier measurements. When the static regions dominate the frame, this procedure yields the pose of the frame with respect to the static scene. Thus, in order to detect segments that belong to a moving object, we consider the number of outliers per segment. If a segment has a large ratio of outlier measurements, then it is considered an object region. All the features that fall inside such segments are used to initialize a new object map.
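A minimal sketch of this per-segment outlier test is given below (illustrative Python with hypothetical data structures; the outlier-ratio cutoff is an assumed value, not a tuned system parameter):
\begin{verbatim}
def detect_object_segments(segments, inliers, min_outlier_ratio=0.6):
    """Return segments likely belonging to a moving object.

    segments : list of lists of feature ids, one list per segment
    inliers  : set of feature ids found as inliers w.r.t. the
               static map after frame registration
    """
    moving = []
    for seg in segments:
        n_out = sum(1 for f in seg if f not in inliers)
        if seg and n_out / len(seg) >= min_outlier_ratio:
            moving.append(seg)  # these features seed an object map
    return moving
\end{verbatim}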
Dynamic object detection is executed for each frame, enabling initialization of multiple object maps. Hence, our system is capable of growing multiple object maps referring to different moving objects.
\subsection{Registration}
\label{sec:localization}
Each frame consists of sets of features coming from the moving object and from the static region. We employ a multi-group registration scheme to verify the groups of features associated with different maps. We first perform registration between all the features of the frame and each existing map independently. Since objects might be small, this frame-based registration might fail for the object maps. Thus, we proceed with a segment-based registration that aims to register groups of features, represented by segments, against each object map.
After these registrations, we obtain multiple pose estimates for the input frame with respect to the maps. If both registrations succeed, we use the result of the segment-based registration as the pose estimate, since it achieves a more robust correspondence search due to the smaller number of measurements in segments. We also fuse the inlier outputs from both registrations, prioritizing the segment-based registration output.
\subsubsection{Frame-based Registration}
We match all features extracted from the frame with all features that come from the last $N$ keyframes of the target map. Let $\vec{p}_m$ be a measurement extracted from the input frame and $\vec{p}_l\in L$ be the corresponding landmark of the target map according to feature matching with the set of landmarks $L$. Let us denote the set of measurements of frame $i$ as $F_i$. By exploiting the measurement-landmark matches in a RANSAC framework, we estimate the frame-based pose $\mat{\hat{T}}_i$ by solving the following problem:
\begin{equation}
\mat{\hat{T}}_i = \underset{\mat{T}_i}{argmin} \sum_{\vec{p}_m \in I_i} d( \mat{T}_i(\vec{p}_m), \vec{p}_l ).
\label{eqn_frame}
\end{equation}
Here $\mat{T}(\vec{p}_m)$ indicates the transformation of measurement $\vec{p}_m$ by pose $\mat{T}$, and $d(\cdot, \cdot)$ denotes distances between features, which are defined for 3D-to-3D point correspondences, 2D-to-3D point correspondences, and 3D-to-3D plane correspondences as in~\cite{Cansizoglu16ICRA}. $I_i$ is the set of inlier measurements detected as
\begin{equation}
I_i = \{ \vec{p}_m \in F_i | \exists \vec{p}_l \in L \textrm{ s.t. } d(\mat{\hat{T}}_i(\vec{p}_m), \vec{p}_l) < \sigma \},
\end{equation}
where $\sigma$ is an inlier threshold.
Note that, for the static map, since the camera moves smoothly, restricting the features to the last $N$ keyframes provides a faster correspondence search. On the other hand, if the object motion is abrupt, it is likely that the frame-based registration fails. Thus, we proceed with a segment-based registration for localizing the frame with respect to the object maps.
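The RANSAC procedure behind equation~\eqref{eqn_frame} can be summarized by the following schematic Python sketch (the actual implementation is in C++; \texttt{solve\_pose} stands in for the closed-form point/plane alignment, and the three-correspondence minimal sample is an assumption for illustration):
\begin{verbatim}
import random

def ransac_register(matches, solve_pose, dist, sigma,
                    n_iter=500, min_matches=10, min_inlier_ratio=0.4):
    """matches: list of (measurement, landmark) pairs.
    Returns (pose, inliers) or None on localization failure."""
    if len(matches) < min_matches:
        return None                          # too few feature matches
    best_T, best_in = None, []
    for _ in range(n_iter):
        sample = random.sample(matches, 3)   # minimal sample (assumed)
        T = solve_pose(sample)
        inl = [(m, l) for (m, l) in matches if dist(T, m, l) < sigma]
        if len(inl) > len(best_in):
            best_T, best_in = T, inl
    if len(best_in) < min_inlier_ratio * len(matches):
        return None                          # not enough consensus
    return solve_pose(best_in), best_in      # refit on all inliers
\end{verbatim}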
\subsubsection{Segment-based Registration}
As proposed in~\cite{Cansizoglu16IROS}, we detect and track objects by performing a segment-based registration with respect to the object maps. An object map is represented as a collection of segments that are registered with each other. For each segment in the input frame, we perform a VLAD-based appearance similarity search~\cite{Jegou12PAMI} followed by RANSAC registration to register the segment with respect to an object map.
Let us denote the set of measurements of segment $j$ of frame $i$ as $S_{i,j} \subset F_i$, and the set of landmarks of the matching segment as $L_j \subset L$. Similar to frame-based registration, we carry out feature matching between $S_{i,j}$ and $L_j$ and solve the following optimization problem through RANSAC:
\begin{equation}
\mat{\hat{T}}'_{i,j} = \underset{\mat{T}_{i,j}}{argmin} \sum_{\vec{p}_m \in I'_{i,j}} d( \mat{T}_{i,j}(\vec{p}_m), \vec{p}_l).
\label{eqn_segment}
\end{equation}
Here $\mat{\hat{T}}'_{i,j}$ is the estimated object pose and $I'_{i,j}$ is the set of inlier measurements detected as
\begin{equation}
I'_{i,j} = \{ \vec{p}_m \in S_{i,j} | \exists \vec{p}_l \in L_j \textrm{ s.t. } d(\mat{\hat{T}}'_{i,j}(\vec{p}_m), \vec{p}_l) < \sigma \}.
\end{equation}
Since this pose is based on segment-to-segment registration, we proceed with a final refinement, carrying out a prediction-based registration between all the measurements of the frame and the landmarks of the object map, similar to~\cite{Cansizoglu16IROS}. In other words, we use the result of equation~\eqref{eqn_segment} as the predicted pose and perform feature matching between the frame and the map based on it. Then, using these matches, we perform RANSAC minimizing the error between the measurements of the frame and the landmarks of the map, as indicated in equation~\eqref{eqn_frame}, obtaining the refined object pose $\mat{\hat{T}}_{i,j}$. After refinement, the inliers of segment $S_{i,j}$ are
\begin{equation}
I_{i,j} = \{ \vec{p}_m \in F_i | \exists \vec{p}_l \in L \textrm{ s.t. } d(\mat{\hat{T}}_{i,j}(\vec{p}_m), \vec{p}_l) < \sigma \}.
\end{equation}
Note that since the final refinement is performed between all measurements of the frame and the map, there might be inliers that are outside of segment $S_{i,j}$, as indicated in the above equation. This way, we can handle object features that do not belong to any segment and/or have invalid depth values, for example features in small object regions that are missed during segmentation due to depth discontinuities or invalid depth values.
This step outputs the pose of the object in the current frame with respect to the object map and the matching segments of the frame along with the associations between the measurements and the landmarks of the object map. In the following step, we proceed with a classification method to distinguish features with different motion patterns using the registration output.
\subsection{Classification of Features into Regions}
\label{sec:classificationland}
The multi-group registration provides us pose estimates of the input frame with respect to the static map and object maps along with the associations between measurements and the landmarks. The segment-based registration also outputs the segments that are successfully matched with a segment in an object map.
Since the objects are small compared to the static scene and the motion of the static region dominates the scene, we prioritize the object maps when classifying the measurements. Thus, if a measurement falls inside a segment that is matched with a segment of an object map, then the measurement is considered associated with the object. Otherwise, we investigate whether any of the registrations found the measurement to be an inlier. If the measurement is found to be an inlier in the object map registration, then it is considered an object measurement. Otherwise, the measurement is considered as belonging to the static scene. This means that at the end of this process each measurement extracted from the novel frame is assigned to exactly one of the two maps, as shown in Figure \ref{fig:landmarksassoc}.
The steps of the method are summarized in Algorithm~\ref{alg:fusion}. In lines 1--2, $M^{static}$ and $M^{object}$ are initialized to empty sets that keep the measurements associated with the static and object maps, respectively. Frame-based registration is carried out with respect to both maps in lines 3--6, followed by segment-based registration in lines 7--13. Feature classification updates $M^{static}$ and $M^{object}$ in lines 14--24, and the maps are updated in lines 25--26. Note that a map is not updated if none of the registrations succeeds for it.
\begin{algorithm}[t]
\caption{Algorithm for updating maps given frame measurements $F_i$, measurements of segments $S_{i,1}, S_{i,2},\ldots,S_{i,n}$, and the set of landmarks of static and object maps, $L^{static}$ and $L^{object}$. }\label{alg:fusion}
\begin{algorithmic}[1]
\small
\State{$M^{static} \gets \varnothing $} \Comment{measurements associated to static map}
\State{$M^{object} \gets \varnothing $} \Comment{measurements associated to object map}
\State{\texttt{Match features between $F_i$ and $L^{static}$}}
\State{\texttt{Compute $\mat{\hat{T}}_i^{static}$ and ${I}_i^{static}$ by eqn.~\eqref{eqn_frame}}}
\State{\texttt{Match features between $F_i$ and $L^{object}$}}
\State{\texttt{Compute $\mat{\hat{T}}_i^{object}$ and ${I}_i^{object}$ by eqn.~\eqref{eqn_frame}}}
\For{$j=1,\ldots,n$}
\State{\texttt{Match features between $S_{i,j}$ and $L_j$}}
\State{\texttt{Compute $\mat{\hat{T}}'_{i,j}$ and $I'_{i,j}$ by eqn.~\eqref{eqn_segment}}}
\State{\texttt{Match features between $F_i$ and $L^{object}$ based on $\mat{\hat{T}}'_{i,j}$}}
\State{\texttt{Compute $\mat{\hat{T}}_{i,j}$ and $I_{i,j}$ by eqn.~\eqref{eqn_frame}}}
\State{\texttt{Report $S_{i,j}$ as matching segment if RANSAC succeeds}}
\EndFor
\For{$\forall \vec{p}_m \in F_i$}
\If {\texttt{$\vec{p}_m$ is inside a matching segment}}
\State { $M^{object}\gets M^{object} \cup \{\vec{p}_m\}$}
\Else
\If {\texttt{$\vec{p}_m \in I_i^{object}$ or $\exists S_{i,j} | \vec{p}_m \in I_{i,j}$}}
\State { $M^{object}\gets M^{object} \cup \{\vec{p}_m\}$}
\Else
\State { $M^{static}\gets M^{static} \cup \{\vec{p}_m\}$}
\EndIf
\EndIf
\EndFor
\State{\texttt{Update $L^{static}$ with $M^{static}$}}
\State{\texttt{Update $L^{object}$ with $M^{object}$}}
\end{algorithmic}
\end{algorithm}
\begin{figure*} [t]
\center
\includegraphics[width=\textwidth]{experiment_22}
\caption{Reconstruction results on example scene 1: (A) 3D reconstructed static map (left) and object map overlaid on the static map based on the initial pose of the object (right), (B) example keyframes from the sequence, where the leftmost frame is the first keyframe of the object map after automatic moving object detection, (C) reconstructed 3D map of the moving object from various viewpoints (top and middle) and point landmarks overlaid on plane landmarks with white circles (bottom). }
\label{fig:pic_experiment1}
\end{figure*}
\subsection{Map Update and Bundle Adjustment}
After the registration, we know which groups of features are associated with the static region or with the objects. We also have a pose estimate for each group of features with respect to the map it is associated with. For each map, if the estimated pose is different from the poses of the existing keyframes of the map, then we initialize a new keyframe with the respective set of features and add the keyframe to the map. If registration fails for one of the maps, then that map is not updated with any information from the frame.
A bundle adjustment procedure runs asynchronously with the SLAM for each map, minimizing the registration error with respect to all the keyframe poses and landmark locations. Note that since the motion of the sensor and the motion of the objects are independent of each other, we do not utilize any constraints based on object correspondences in the bundle adjustment, contrary to the approach in~\cite{Cansizoglu16IROS}.
\subsection{Related Work}
\label{sec:relatedWork}
Object SLAM aims to detect and track static objects occurring multiple times in a sequence of frames and to use this information for building more accurate maps~\cite{Dharmasiri2016,Civera2011,Ma2014,SalasMoreno13CVPR,Fioraio2013}. Although it has been widely studied in the scope of stationary objects, there are few studies on moving object tracking in an RGB-D SLAM framework~\cite{Cadena2016}.
Existing work on dynamic object tracking either focuses on detecting the moving object in order to remove it from the static map~\cite{Keller2013, Tan2013}, or on generating a model of the moving object while ignoring the reconstruction of the static environment~\cite{Jiang2016, Mustafa2015, Yuheng2013, Shin2013, Ambrus2017, Dame2013}. Keller et al.~\cite{Keller2013} solved dynamic object detection and camera localization in alternating steps in order to remove moving objects from the reconstructed map. Theirs is a dense SLAM system, which makes it computationally demanding compared to sparse feature-based systems. Similarly,~\cite{Mustafa2015, Yuheng2013, Dame2013} used a dense point cloud representation in order to generate 3D reconstructions of objects without modeling the static scene. Jiang et al.~\cite{Jiang2016} presented an object tracking method based on motion segmentation, which prevents the system from operating online. Shin et al.~\cite{Shin2013} developed a framework for the 3D reconstruction of multiple rigid objects from dynamic scenes. Their framework provides reliable and accurate correspondences of the same object among unordered and wide-baseline views, providing reconstruction of the object only.
Choudhary et al.~\cite{Choudhary2014} presented a method to create 3D models of static objects in the scene while performing object SLAM at the same time. Their method depends on segment associations and cannot handle objects that move rigidly with respect to the static scene. Finman et al.~\cite{Finman2013} created 3D models of objects by using differences between RGB-D maps, focusing on the re-identification of objects rather than on generating complete 3D models. In a similar way, F{\"a}ulhammer et al.~\cite{Faulhammer2017} used segmentation of a dense point cloud to generate the 3D model of an object in an indoor environment while a mobile robot patrols a set of points.
The work closest to our technique was proposed by Wang et al.~\cite{Wang2007}, who use a monocular camera and track the camera and a dynamic object separately following a probabilistic approach. However, rather than constructing a complete 3D model of the object, their method only keeps track of the object locations while performing SLAM in the static environment. Moreover, their system can have difficulties when the object does not move smoothly (e.g., when manipulated by a human). Recently, Ataer-Cansizoglu and Taguchi~\cite{Cansizoglu16IROS} presented a technique to track objects in RGB-D SLAM following a hierarchical grouping approach. An important limitation of their algorithm is the inability to handle moving objects. Furthermore, their technique includes a separate object scanning step that generates the 3D model of the object, which is used later for object tracking and localization. In contrast, this work focuses on on-the-fly object model generation and the tracking of rigidly moving objects.
\section{Searches for new physics}\label{sec:bsm}
Like hadron beams, muon beams emit significantly less synchrotron radiation than their electron counterparts due to the muon's much larger mass.
As a result, $\mu^+\mu^-$ colliders can reach partonic c.m.~energies that far exceed conventional $e^+e^-$ facilities, such as LEP II, and potentially even $pp$ colliders;
see section~\ref{sec:ppvsmuon} for further details.
Thus, in addition to the abundance of achievable SM measurements described in sections~\ref{sec:sm} and \ref{sec:eft},
a muon collider is able to explore new territory in the direct search for new physics.
In this section, we present a survey of BSM models and the potential sensitivity of a $\mu^+\mu^-$ collider.
Explicitly, we consider the $s$-channel annihilation and VBF processes
\begin{equation}
\mu^+\mu^- ~\to~ X \quad\text{and}\quad
\mu^+\mu^- ~\to~ X \ell \ell'.
\end{equation}
Here, $\ell\in\{\mu^\pm,\overset{(-)}{\nu_\mu}\}$ and $X$ is some BSM final state, which may include SM particles.
We focus on the complementarity of the two processes because, while $s$-channel annihilation grants access to the highest available c.m.~energies,
it comes at the cost of a cross section suppression that scales as $\sigma\sim1/s$ when far above production threshold.
On the other hand, in VBF, the emission of transversely polarized, $t$-channel bosons gives rise to logarithmic factors that grow with the available collider energy.
Thus, VBF probes a continuum of mass scales while avoiding a strict $1/s$-suppression, but at the cost of EW coupling suppression.
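The qualitative interplay between the two scalings can be illustrated with a toy numerical comparison (arbitrary normalizations chosen for illustration only; these are not physical cross sections):
\begin{verbatim}
import numpy as np

MW = 0.080  # W boson mass [TeV]

def sigma_schannel(s):   # far above threshold: ~ 1/s
    return 1.0 / s

def sigma_vbf(s):        # collinear logs from t-channel V emission;
    # the 0.01 prefactor mimics the EW coupling suppression (assumed)
    return 0.01 * np.log(s / MW**2) ** 2

for rts in (1.0, 3.0, 10.0, 30.0):     # sqrt(s) [TeV]
    s = rts ** 2
    print(rts, sigma_schannel(s) / sigma_vbf(s))
# the ratio falls with energy: VBF eventually overtakes
# s-channel annihilation at sufficiently high sqrt(s)
\end{verbatim}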
To investigate this interplay, for each scenario, we consider the mass and collider ranges:
\begin{align}
m_X\in[0.4,4]\;\textrm{TeV}
\quad\text{and}\quad
\sqrt s\in [1,30]\;\textrm{TeV}.
\end{align}
We limit our study to $\sqrt{s}\leq30{\rm ~TeV}$ due to the emergence of EW Sudakov logarithms in the VBF channels that scale as $\sigma_{\rm VBF}\sim\alpha_W^k\log^k(s/M_V^2)$, for $V=W,Z$.
These logarithms can potentially spoil the perturbative reliability of cross sections at LO and necessitate resummation of EW Sudakov factors~\cite{Bauer:2016kkv,Chen:2016wkt,Bauer:2017isx,Manohar:2018kfx,Han:2020uid}.
While important for reliable predictions at higher $\sqrt{s}$, such resummation is beyond the present scope.
For the various BSM scenarios, we assume benchmark values for relevant couplings.
We omit generator-level phase space cuts where possible but stipulate them when needed to regulate matrix elements.
In the following, we present the production rates of new processes.
As a standard candle reference, in most scenarios, we also plot SM $H$ production via $W^+W^-$ fusion (black, solid curve).
We start our survey in section~\ref{sec:bsm_scalar} with minimally extending the SM by a scalar that is a singlet under the SM's gauge symmetries.
We then move on to the production of scalars in the context of the Two Higgs Doublet Model in section~\ref{sec:bsm_2hdm}, and of the Georgi-Machacek Model in section~\ref{sec:bsm_gm}.
In section~\ref{sec:bsm_mssm}, we investigate the production of sparticles in the context of the Minimal Supersymmetric Standard Model.
We also consider representative phenomenological models describing the production of
leptoquarks in section~\ref{sec:bsm_LQ}, heavy neutrinos in section~\ref{sec:bsm_heavyN}, and
vector-like quarks in section~\ref{sec:bsm_vlq}.
We give an overview of this survey in section~\ref{sec:bsm_overview}.
A detailed comparison of $s$-channel and VBF production mechanisms in BSM searches at multi-TeV muon colliders is deferred to section~\ref{sec:bsm_vbf}.
\subsection{Scalar singlet extension of the Standard Model}\label{sec:bsm_scalar}
The scalar sector of the SM consists of a single scalar $SU(2)_L$ doublet with a nonzero $U(1)_Y$ charge.
While this is the minimal scalar content that supports the generation of fermion and weak boson masses through EWSB,
the measured couplings of the $M_H\approx125{\rm ~GeV}$ Higgs boson uphold this picture~\cite{ATLAS:2018doi,Sirunyan:2018koj}.
Theoretical motivation for extending this scalar sector, however, is well-established, and the phenomenology of these scenarios has been studied extensively.
For reviews, see Refs.~\cite{Gunion:1989we,Branco:2011iw,Morrissey:2012db,Ivanov:2017dad,Khan:2015ipa,Kanemura:2016sos,Ilnicka:2018def} and references therein.
One of the simplest extensions that respects the SM symmetries
is the addition of a single, real scalar $(\sigma)$ that is neutral under all SM charges but carries an odd $\mathbb{Z}_2$ parity.
Such scenarios have received recent attention~\cite{Buttazzo:2018qqp,Ruhdorfer:2019utl}
as simplified models through which one can explore the sensitivity of multi-TeV muon colliders to new scalars.
In light of LHC data, the phenomenology of a singlet scenario is categorized by whether $\sigma$ acquires a nonzero vev:
In the so-called inert scenario, $\sigma$ does not acquire a vev, interacts at tree level only with the SM Higgs boson $(H)$,
and impacts $H$'s coupling to fermions and bosons at loop level~\cite{Craig:2013xia}.
If instead $\sigma$ acquires a vev, then it mixes with the SM Higgs,
which in turn modifies $H$'s couplings to SM particles at tree level.
\begin{figure}[t!]
\centering\mbox{
\subfigure[]{\includegraphics[width=.42\textwidth]{Plots/madMuon_diagram_vbfss}\label{fig:bsm_vbfsing_diag}}
\hspace{0.75cm}
\subfigure[]{\includegraphics[width=.52\textwidth]{Plots/madMuon_cs_energy_scan_vbfSINGpair}\label{fig:bsm_vbfsing_xsec}}
}
\caption{
(a) Diagrammatic representation of $SS$ pair production through $W^+W^-$ fusion in a scalar singlet extension of the SM.
(b) $SS$ pair production cross section [fb] via EW VBF in $\mu^+\mu^-$ collisions as a function of collider energy $\sqrt{s}$ [TeV] for representative coupling inputs.
Also shown for comparison is the $H$ production process via EW VBF (black curve) in the SM.
}\label{fig:bsm_vbfsing}
\end{figure}
We investigate the muon collider sensitivity to the SM with an extra scalar singlet by considering the case where the vev of $\sigma$ is nonzero, i.e., the field is expanded as $\sigma \approx v_\sigma +\sigma_0$ with $\langle \sigma \rangle = v_\sigma$.
The (unbroken) Lagrangian that describes such a scenario, including the $\mathbb{Z}_2$ symmetry, is given by
\begin{equation}
\mathcal{L}=\mathcal{L}_{\rm SM}+\frac{1}{2}\partial_\mu \sigma\, \partial^\mu \sigma-\frac{1}{2}m_\sigma^2\, \sigma^2-\frac{\lambda_\sigma}{4!}\sigma^4-\frac{\kappa_\sigma}{2} \sigma^2\, \varphi^\dagger\varphi\,,
\end{equation}
where $\mathcal{L}_{\rm SM}$ is the full SM Lagrangian.
After both the SM doublet $\varphi$ and $\sigma$ acquire their respective vevs, $v$ and $v_\sigma$,
a mass-mixing term between $\sigma_0$ and the neutral component $\phi_0$ of the doublet,
with strength $\delta m^2\propto\kappa_\sigma v v_\sigma$, is generated.
Rotating $\phi_0$ and $\sigma_0$ from the gauge basis and into the mass basis by an angle $\theta$,
we obtain the mass eigenstates $H$ and $S$ with mass eigenvalues $M_H$ and $M_S$.
The coupling of the lightest neutral scalar, which we assume is $H$, to SM fermions and gauge bosons is suppressed relative to the SM by a factor of $\cos\theta$.
Owing to strong constraints on anomalous Higgs couplings~\cite{ATLAS:2018doi,Sirunyan:2018koj},
one scalar is aligned closely with the SM Higgs, which we assign to $H$, implying $\cos\theta\simeq1$.
The bare parameters $m_\sigma$, $\lambda_\sigma$, and $\kappa_\sigma$ can subsequently be exchanged for the physical parameters $M_S$, $v_\sigma$, and $\theta$,
which permits us to express the trilinear scalar interactions as:
\begin{eqnarray}
\lambda_{hhh}&=&-\frac{3M_H^2 }{v\,v_\sigma}(v_\sigma \cos^3\theta+v \sin^3\theta)\,,\\
\lambda_{sss}&=&\frac{3M_S^2}{v\,v_\sigma}(v \cos^3\theta - v_\sigma \sin^3\theta)\,,\\
\lambda_{hss}&=&-\frac{(M_H^2+2M_S^2)}{2v\,v_\sigma} \sin2\theta(v\cos\theta+v_\sigma\sin\theta)\,,\\
\lambda_{hhs}&=&\frac{(2M_H^2+M_S^2)}{2v\,v_\sigma} \sin2\theta(v_\sigma\cos\theta-v\sin\theta)\,.
\end{eqnarray}
The non-inert singlet scenario\footnote{Similarly, the inert singlet scenario is available using the \texttt{SM\_Plus\_Scalars\_UFO} UFO libraries~\cite{vanderBij:2006ne}.}
is implemented in the \texttt{Minimal Dilaton Model} UFO libraries by Ref.~\cite{Abe:2012eu},
and hence can be simulated using general purpose event generators.
$S$ production in $\mu^+\mu^-$ collisions can proceed through several mechanisms, including $W^+W^-$ fusion mediated by an $s$-channel $H$ boson,
as shown in figure~\ref{fig:bsm_vbfsing_diag}.
As shown above, for a given $v_\sigma$ and $\theta$, the $\lambda_{hss}$ coupling is fixed by the $H$ and $S$ masses.
Assuming the fixed, baseline mass splitting of Ref.~\cite{Abe:2012eu},
we show in figure~\ref{fig:bsm_vbfsing_xsec} the $SS$ pair production cross section [fb] via EW VBF as a function of collider energy $\sqrt{s}$ [TeV].
In the numerical analysis we have assumed $\theta=v/v_\sigma$ and $v_\sigma=20 M_S$.
For $M_S = 0.4-0.8{\rm ~TeV}$, we see that the VBF process rate spans roughly $\sigma\sim 10^{-3}-10^{-2}$ fb for $\sqrt{s}=5-30{\rm ~TeV}$. For $M_S = 2-4{\rm ~TeV}$,
we observe that the corresponding rates reach the order of $10^{-4}-10^{-3}$ fb at $\sqrt{s}=30{\rm ~TeV}$.
By comparing these numbers with the SM production of $H$ via VBF over the whole range of collider energies, we find that the latter is several orders of magnitude larger, spanning $\sigma\sim 100-1000$ fb.
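To make these benchmark choices concrete, the following minimal Python sketch (illustrative only, and not part of any released tool chain) evaluates the trilinear couplings above for the inputs $\theta=v/v_\sigma$ and $v_\sigma=20\,M_S$ used in our scans; the numerical values of $v$ and $M_H$ are the standard SM ones, and the variable names are ours.
\begin{verbatim}
import math

v, M_H = 246.0, 125.0      # SM vev and Higgs mass [GeV]
M_S = 800.0                # representative singlet mass [GeV]
v_sigma = 20.0 * M_S       # benchmark singlet vev
theta = v / v_sigma        # benchmark (small) mixing angle

c, s = math.cos(theta), math.sin(theta)
s2t = math.sin(2.0 * theta)

lam_hhh = -3.0 * M_H**2 / (v * v_sigma) * (v_sigma * c**3 + v * s**3)
lam_sss =  3.0 * M_S**2 / (v * v_sigma) * (v * c**3 - v_sigma * s**3)
lam_hss = -(M_H**2 + 2.0 * M_S**2) / (2.0 * v * v_sigma) \
          * s2t * (v * c + v_sigma * s)
lam_hhs =  (2.0 * M_H**2 + M_S**2) / (2.0 * v * v_sigma) \
          * s2t * (v_sigma * c - v * s)

for name, val in [("hhh", lam_hhh), ("sss", lam_sss),
                  ("hss", lam_hss), ("hhs", lam_hhs)]:
    print(f"lambda_{name} = {val:12.3f} GeV")
\end{verbatim}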
\subsection{Two Higgs Doublet Model}\label{sec:bsm_2hdm}
\begin{figure}[t!]
\centering\mbox{
\subfigure[]{\includegraphics[width=.42\textwidth]{Plots/madMuon_diagram_vbfH2}
\label{fig:bsm_vbf2hdm_diag}
}
\hspace{0.75cm}
\subfigure[]{\includegraphics[width=.52\textwidth]{Plots/madMuon_cs_energy_scan_vbfH2}
\label{fig:bsm_vbf2hdm_xsec}
}
}
\caption{
(a) Diagrammatic representation of $H_2$ production through $W^+W^-$ fusion in the CP conserving 2HDM.
(b) $H_2$ production cross section [fb] via VBF in $\mu^+\mu^-$ collisions as a function of collider energy $\sqrt{s}$ [TeV] for representative $H_2$ mass $(M_{H_2})$.
}\label{fig:bsm_vbf2hdm}
\end{figure}
If a new neutral scalar does indeed exist, rather than being a SM singlet as posed in section~\ref{sec:bsm_scalar}, it may actually be a component of a second scalar $SU(2)_L$ doublet.
Such scenarios, known in the literature as Two Higgs Doublet Models (2HDMs), have been extensively reviewed~\cite{Gunion:1989we,Branco:2011iw,Crivellin:2013wna,Craig:2013hca},
particularly because a second Higgs doublet is necessary to realize supersymmetry in nature.
We consider the benchmark, CP-conserving 2HDM, the scalar potential of which is
\begin{align}
V&=\mu_1 \varphi_1^\dagger \varphi_1 + \mu_2 \varphi_2^\dagger \varphi_2 + \left(\mu_3 \varphi_1^\dagger \varphi_2 + {\rm H.c.}\right) + \lambda_1 \left(\varphi_1^\dagger \varphi_1\right)^2 + \lambda_2 \left(\varphi_2^\dagger \varphi_2\right)^2 \nonumber\\
&+\lambda_3 \left(\varphi_1^\dagger \varphi_1\right) \left(\varphi_2^\dagger \varphi_2\right) + \lambda_4 \left(\varphi_1^\dagger \varphi_2\right) \left(\varphi_2^\dagger \varphi_1\right) + \left(\lambda_5 \left(\varphi_1^\dagger \varphi_2\right)^2 + {\rm H.c.}\right)\nonumber\\
&+ \varphi_1^\dagger \varphi_1\left(\lambda_6 \left(\varphi_1^\dagger \varphi_2\right) + {\rm H.c.}\right) + \varphi_2^\dagger \varphi_2\left(\lambda_7 \left(\varphi_1^\dagger \varphi_2\right) + {\rm H.c.}\right).
\end{align}
Here, the couplings $\lambda_i$ are real and the scalar $SU(2)_L$ doublets $\varphi_1$ and $\varphi_2$ are given by
\begin{equation}
\varphi_1\equiv \left(\begin{array}{c}-ih_1^+\\\frac{h^0_1+i a_1+v_1}{\sqrt 2}\end{array}\right) \qquad\text{and}\qquad \varphi_2\equiv \left(\begin{array}{c}h_2^+\\\frac{h^0_2+i a_2+v_2}{\sqrt 2}\end{array}\right).
\end{equation}
After $\varphi_1$ and/or $\varphi_2$ acquire vacuum expectation values, the EW symmetry is broken and fields with identical quantum numbers mix.
More specifically, the charged scalars and neutral, CP-odd scalars mix into the EW Goldstone bosons $G^\pm,~G^0$ and the physical states $H^\pm,~A^0$.
Likewise, the neutral, CP-even scalars mix by an angle $\theta$ into the physical states $H_1$ and $H_2$.
Here, $H_1$ is identified as the observed, SM-like Higgs with $M_{H_1}\approx 125{\rm ~GeV}$ and $H_2$ is heavier with $M_{H_2} > M_{H_1}$.
In terms of mass eigenstates, $h_1^0$ and $h_2^0$ are given explicitly by
\begin{eqnarray}
\begin{pmatrix} h^0_1 \\ h^0_2 \end{pmatrix} = \begin{pmatrix}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} H_1 \\ H_2 \end{pmatrix}.
\end{eqnarray}
Among the simplest processes we can analyze at a muon collider is resonant production of $H_2$ from $W^+W^-$ fusion,
which we show diagrammatically in figure~\ref{fig:bsm_vbf2hdm_diag}.
To estimate the sensitivity to this process, we consider the 2HDM
in its CP-conserving scenario, as implemented in the \texttt{2HDM} model file \cite{Degrande:2014vpa}.
We show in figure~\ref{fig:bsm_vbf2hdm_xsec} the $H_2$ production cross section [fb] via EW VBF as a function of collider energy $\sqrt{s}$ [TeV] for representative $H_2$ mass $(M_{H_2})$.
For $M_{H_2}=400-800{\rm ~GeV}$, we find that cross sections span approximately $\sigma\approx0.1-100$ fb for $\sqrt{s}=1-30{\rm ~TeV}$.
For $M_{H_2}=2-4{\rm ~TeV}$, we find that rates can reach several tens of fb at $\sqrt{s}=30{\rm ~TeV}$.
Over the entire range of collider energies, we see that the SM production of $H$ is over an order of magnitude larger, reaching $\sigma\sim100-1000$ fb.
\subsection{Georgi-Machacek Model}\label{sec:bsm_gm}
Another possibility at a future muon facility is the VBF production of electrically charged scalars.
These, of course, do not exist in the SM nor in the simplest, na\"ive extensions of the SM scalar sector.
In models such as the Georgi-Machacek (GM) model \cite{Georgi:1985nv}
and the Type II Seesaw model for neutrino masses~\cite{Konetschny:1977bn,Schechter:1980gr,Cheng:1980qt,Lazarides:1980nt,Mohapatra:1980yp},
VBF production of singly charged $(H^\pm)$ and doubly charged $(H^{\pm\pm})$ scalars is possible due to the existence of scalar triplet representations of $SU(2)_L$
with nonzero hypercharge. (Higher $SU(2)_L\otimes U(1)_Y$ representations also permit scalars with even larger electric charges.)
For present purposes, we focus on the feasibility of seeing exotically charged scalars from the GM
model\footnote{While it is also possible to model the Type II Seesaw with the \texttt{TypeIISeesaw} UFO libraries~\cite{Fuks:2019clu}, we do not anticipate a qualitative difference in sensitivity from the GM case.}.
Broadly speaking, the model extends the SM with a real and a complex triplet with hypercharge $Y=0$ and $1$, respectively.
If the vevs of the triplets' neutral components are aligned, then custodial symmetry is respected at tree level
and strong constraints on the $\rho$ parameter are alleviated~\cite{Gunion:1989we,Chen:2005jx,Han:2005nk,Chen:2008jg,Perez:2008ha,Kanemura:2012rs,Das:2016bir,Ismail:2020zoz}.
More specifically, the GM scalar sector
consists of the usual SM complex doublet $(\phi^+,\phi^0)$ with $Y = 1/2$,
a real $SU(2)_L$ triplet $(\xi^+,\xi^0,\xi^-)$ with $Y = 0$,
and a complex $SU(2)_L$ triplet $(\chi^{++},\chi^+,\chi^0)$ with $Y=1$.
Writing the doublet and triplets in the form of a bi-doublet $(\Phi)$ and bi-triplet $(X)$, we have
\begin{eqnarray}
\Phi &=& \left( \begin{array}{cc}
\phi^{0*} &\phi^+ \\
-\phi^{+*} & \phi^0 \end{array} \right)
\quad\text{and}\quad
X =
\left(
\begin{array}{ccc}
\chi^{0*} & \xi^+ & \chi^{++} \\
-\chi^{+*} & \xi^{0} & \chi^+ \\
\chi^{++*} & -\xi^{+*} & \chi^0
\end{array}
\right).
\label{eq:PX}
\end{eqnarray}
\begin{figure}[t!]
\centering\mbox{
\subfigure[]{\includegraphics[width=.42\textwidth]{Plots/madMuon_diagram_vbfch}\label{vbfgm_hpx_diag}}\hspace{0.75cm}
\subfigure[]{\includegraphics[width=.52\textwidth]{Plots/madMuon_cs_energy_scan_vbfsCH}\label{vbfgm_hpx_xsec}}
}
\mbox{
\subfigure[]{\includegraphics[width=.42\textwidth]{Plots/madMuon_diagram_vbfdch}\label{vbfgm_hpp_diag}}\hspace{0.75cm}
\subfigure[]{\includegraphics[width=.52\textwidth]{Plots/madMuon_cs_energy_scan_vbfdCH}\label{vbfgm_hpp_xsec}}
}
\caption{
(a) Diagrammatic representation of $H^\pm$ production through EW VBF in the GM model in $\mu^+\mu^-$ collisions.
(b) The $H^\pm$ production rate [fb] via EW VBF in $\mu^+\mu^-$ collisions as a function of collider energy $\sqrt{s}$ [TeV] for representative $M_{H^\pm}$.
(c,d) Same as (a,b) but for $H^{++}$ in $\mu^+\mu^+$ collisions.
}\label{vbfgm}
\end{figure}
For our numerical results, we consider the decoupling limit of the GM model as implemented in the \texttt{GM\_UFO} UFO libraries~\cite{Hartling:2014zca,Degrande:2015xnm}.
The (unbroken) scalar potential is given by
\begin{align}
V(\Phi,X) &= \frac{\mu_2^2}{2} \text{Tr}(\Phi^\dagger \Phi)
+ \frac{\mu_3^2}{2} \text{Tr}(X^\dagger X)
+ \lambda_1 [\text{Tr}(\Phi^\dagger \Phi)]^2
+ \lambda_2 \text{Tr}(\Phi^\dagger \Phi) \text{Tr}(X^\dagger X) \nonumber \\
& + \lambda_3 \text{Tr}(X^\dagger X X^\dagger X)
+ \lambda_4 [\text{Tr}(X^\dagger X)]^2
- \frac{\lambda_5}{4} \text{Tr}( \Phi^\dagger \tau_I \Phi \tau_J) \text{Tr}( X^\dagger t_I X t_J)
\nonumber \\
& - \frac{M_1}{4} \text{Tr}(\Phi^\dagger \tau_I \Phi \tau_J)(U X U^\dagger)_{IJ}
- M_2 \text{Tr}(X^\dagger t_I X t_J)(U X U^\dagger)_{IJ}.
\end{align}
Here $\tau_I$ are the Pauli $\sigma$ matrices.
The matrices $t_I$ and $U$ are \cite{Hartling:2014zca}
\begin{equation}
t_1= \frac{1}{\sqrt{2}} \left( \begin{array}{ccc}
0 & 1 & 0 \\
1 & 0 & 1 \\
0 & 1 & 0 \end{array} \right), \quad
t_2= \frac{1}{\sqrt{2}} \left( \begin{array}{ccc}
0 & -i & 0 \\
i & 0 & -i \\
0 & i & 0 \end{array} \right), \quad
t_3= \left( \begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & -1 \end{array} \right),
\end{equation}
\begin{equation}
U = \left( \begin{array}{ccc}
- \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\
- \frac{i}{\sqrt{2}} & 0 & - \frac{i}{\sqrt{2}} \\
0 & 1 & 0 \end{array} \right).
\label{eq:U}
\end{equation}
After aligning all states into their mass eigenstates, we are left with $H^\pm$ and $H^{\pm\pm}$,
in addition to a number of neutral scalar and pseudoscalar states that we do not consider.
In order to keep a consistent measure of collider sensitivity, we restrict ourselves to EW VBF production of $H^\pm$ and $H^{\pm\pm}$.
In figure~\ref{vbfgm_hpx_diag}, we show a diagrammatic representation of the singly charged scalar $H^\pm$ produced resonantly through EW boson fusion,
and present in figure~\ref{vbfgm_hpx_xsec} the production cross section [fb] as a function of collider energy $\sqrt{s}$ [TeV] for representative masses $(M_{H^\pm})$.
For relatively light $M_{H^\pm}<1{\rm ~TeV}$, we find that resonant production rates span $\sigma\sim0.01-1$ fb at $\sqrt{s}=2{\rm ~TeV}$ and can reach as high as $\sigma\sim5-10$ fb at $\sqrt{s}=30{\rm ~TeV}$.
For the relatively heavy $M_{H^\pm}=2-4{\rm ~TeV}$, rates can reach up to several fb at the largest $\sqrt{s}$ we consider.
In figures~\ref{vbfgm_hpp_diag} and \ref{vbfgm_hpp_xsec}, we show the same for $H^{++}$ in $\mu^+\mu^+$ collisions.
For the same mass and collider scales, we find that the resonant production rates of $H^{++}$ are a factor of a few larger than for $H^{\pm}$.
We attribute this to the fact that the $W\ell\nu$ coupling in the SM is larger than the $Z\ell\ell$ coupling.
\subsection{Minimal Supersymmetric Standard Model}\label{sec:bsm_mssm}
In the SM, the Higgs boson possesses no symmetry that protects or stabilizes its mass against quantum corrections that naturally drive the mass away from the EW scale and toward the scale of new physics.
As such, supersymmetric extensions of the SM (SUSY) are well-motivated theoretical scenarios.
Under SUSY, the so-called hierarchy problem is softened or removed by
hypothesizing that SM particles, along with the missing members of their multiplets,
belong to representations of an extended, supersymmetric Poincar\'e algebra~\cite{Nilles:1983ge,Haber:1984rc,Martin:1997ns,Baer:2006rs}.
This leads to the existence of a new degree of freedom for each SM one that is mass-degenerate but carries opposite spin-statistics,
and that order-by-order contributes oppositely to quantum corrections of the Higgs's mass.
The lack of experimental evidence for
superpartners~\cite{Baer:2006rs,Tanabashi:2018oca,Aad:2019pfy,Sirunyan:2019ctn,Aad:2019ftg,CMS:2019tlp,Aad:2019qnd,Aad:2019vvi,Sirunyan:2019glc,Sirunyan:2020ztc},
however, suggests that if SUSY is realized in nature, then it is broken at or above the EW scale.
\begin{figure}[h!]
\centering\mbox{
\subfigure[]{\includegraphics[width=.42\textwidth]{Plots/madMuon_diagram_vbfstst}\label{vbfmssm_stop_diag}}
\hspace{0.75cm}
\subfigure[]{\includegraphics[width=.52\textwidth]{Plots/madMuon_cs_energy_scan_vbfSToppair}\label{vbfmssm_stop_xsec}}
}
\mbox{
\subfigure[]{\includegraphics[width=.42\textwidth]{Plots/madMuon_diagram_vbfneuneu}\label{vbfmssm_nuino_diag}}
\hspace{0.75cm}
\subfigure[]{\includegraphics[width=.52\textwidth]{Plots/madMuon_cs_energy_scan_vbfNEUpair}\label{vbfmssm_nuino_xsec}}
}
\mbox{
\subfigure[]{\includegraphics[width=.42\textwidth]{Plots/madMuon_diagram_vbfchacha}\label{vbfmssm_chino_diag}}
\hspace{0.75cm}
\subfigure[]{\includegraphics[width=.52\textwidth]{Plots/madMuon_cs_energy_scan_vbfCHApair}\label{vbfmssm_chino_xsec}}
}
\caption{
Same as figure~\ref{fig:bsm_vbf2hdm} but in the MSSM for
(a,b) stop pair production,
(c,d) neutralino pair production, and
(e,f) chargino pair production.
}
\label{vbfmssm}
\end{figure}
While many variations of SUSY exist and are actively investigated,
the Minimal Supersymmetric Standard Model (MSSM) is the simplest supersymmetric model supported by phenomenology~\cite{Nilles:1983ge,Haber:1984rc,Martin:1997ns,Baer:2006rs}.
In it, the holomorphicity of the superpotential and anomaly cancellation require that two Higgs superfields be present
(implying also that the MSSM is a supersymmetric extension of the 2HDM).
The superpotential of the MSSM is given by
\begin{equation}
\mathcal{W}_{MSSM}=y_u \, \bar{u} Q H_u - y_d \, \bar{d} Q H_d - y_e \bar{e} L H_d + \mu H_u H_d,
\end{equation}
where $H_u,\,H_d,\,Q,\,L,\,\bar{u},\,\bar{d},\,\bar{e}$ are the chiral superfields to which the Higgs bosons and fermions belong.
In addition to these terms, the theory contains the vector superfields, which hold the gauge bosons and gauginos, as well as the K\"ahler potential, which describes the particles' kinetic terms.
In studies and tests of the MSSM, one often also considers $R$-parity,
defined for each particle as
\begin{equation}
\mathcal{P}_R = (-1)^{3(B-L)+2s},
\end{equation}
where $B$, $L$, and $s$ are the baryon number, lepton number, and spin of the particle.
By construction, all SM particles (and 2HDM scalars) have $\mathcal{P}_R=+1$, whereas their superpartners have $\mathcal{P}_R=-1$.
A consequence of $R$-parity is that the lightest supersymmetric particle is stable and, if it is electrically neutral, it is a good dark matter (DM) candidate.
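As a quick, self-contained illustration of this formula (the function name and examples below are ours), one can check the parity assignment of a SM particle against that of its superpartner:
\begin{verbatim}
def r_parity(B, L, s):
    """R-parity (-1)^(3(B-L)+2s); the exponent is an
    integer for any physical state."""
    return (-1) ** round(3 * (B - L) + 2 * s)

# SM top quark: B = 1/3, L = 0, spin 1/2  ->  P_R = +1
print(r_parity(1/3, 0, 1/2))
# Its superpartner, the stop: B = 1/3, L = 0, spin 0  ->  P_R = -1
print(r_parity(1/3, 0, 0))
\end{verbatim}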
Generically, scalar superpartners of quarks and leptons (squarks and sleptons) with the same electric charge and color quantum numbers mix.
In the MSSM, this results in two $6\times 6$ mixing matrices for the squarks (one each for the up and down sectors)
and a $3\times 3$ mixing matrix for charged sleptons.
(Neutrinos remain massless in the MSSM, as in the SM.)
The neutral and charged superpartners of SM scalar and vector bosons also mix.
The mass eigenstates, denoted by $\tilde\chi^0_k$ and $\tilde\chi^\pm_k$,
are given as linear combinations of the fields $\{\tilde B,\tilde W^0,\tilde H^0_d,\tilde H^0_u\}$ and $\{\tilde W^+,\tilde H_u^+,\tilde W^-,\tilde H_d^-\}$, respectively.
Despite extensive searches for these states~\cite{Nilles:1983ge,Haber:1984rc,Martin:1997ns,Baer:2006rs,Tanabashi:2018oca},
including direct searches at the LHC~\cite{Aad:2019pfy,Sirunyan:2019ctn,Aad:2019ftg,CMS:2019tlp,Aad:2019qnd,Aad:2019vvi,Sirunyan:2019glc,Sirunyan:2020ztc},
evidence for the MSSM at the weak scale has yet to be established.
If the MSSM, or any variation of SUSY, is realized at the EW- or TeV-scale,
then a multi-TeV muon collider could be an optimal machine to discover the missing superpartners or to study the properties of their spectrum.
To investigate the sensitivity of muon colliders to the MSSM, we consider the benchmark, simplified scenario where
generation-1 and -2 sfermions decouple while generation-3 squarks mix in pairs, $(\tilde t_R,\,\tilde t_L)$ and $(\tilde b_R,\,\tilde b_L)$.
We use the \texttt{MSSM}~UFO libraries as developed by Ref.~\cite{Duhr:2011se},
and vary masses while keeping mass-splittings and couplings fixed.
In figure~\ref{vbfmssm} we show diagrammatically and numerically pair production of (a,b) top squarks, (c,d) neutralinos, and (e,f) charginos through VBF in $\mu^+\mu^-$ collisions.
Starting with figure~\ref{vbfmssm_stop_xsec}, we have the $\tilde{t}\overline{\tilde{t}}$ production cross section [fb] as a function of $\sqrt{s}$ [TeV], for representative stop masses.
For lighter stops with $m_{\tilde{t}}\lesssim1{\rm ~TeV}$, we see that cross sections span $\sigma\sim0.01-1$ fb at $\sqrt{s}\sim2{\rm ~TeV}$ and reach $\sigma\sim50-75$ fb at $\sqrt{s}\sim30{\rm ~TeV}$.
For heavier stops with $m_{\tilde{t}}=2-4{\rm ~TeV}$, production rates reach $\sigma\sim5-20$ fb at $\sqrt{s}\sim30{\rm ~TeV}$.
In figure~\ref{vbfmssm_nuino_xsec}, we show the same information but for $\tilde{\chi}^0\tilde{\chi}^0$.
Overall, the picture is bleaker.
For lighter neutralinos with $m_{\tilde{\chi}^0}\lesssim1{\rm ~TeV}$, pair production rates through weak boson fusion remain below $\sigma\sim0.01$ fb for collider energies below $\sqrt{s}\sim7-10{\rm ~TeV}$.
They reach just below $\sigma\sim0.2$ fb at $\sqrt{s}\sim30{\rm ~TeV}$.
For heavier neutralinos with $m_{\tilde{\chi}^0}=2-4{\rm ~TeV}$, we see that cross sections remain below $\sigma\sim0.1$ fb for $\sqrt{s}\lesssim30{\rm ~TeV}$.
In figure~\ref{vbfmssm_chino_xsec}, we again show the same information but for $\tilde{\chi}^+{\tilde{\chi}^-}$.
We find that the outlook is somewhere between the two previous cases.
For lighter charginos with $m_{\tilde{\chi}^\pm}\lesssim1{\rm ~TeV}$, pair production rates quickly reach about $\sigma\sim0.01$ fb for $\sqrt{s}\sim2-4{\rm ~TeV}$
and about $\sigma\sim75$ fb at $\sqrt{s}\sim30{\rm ~TeV}$.
For heavier charginos with $m_{\tilde{\chi}^\pm}=2-4{\rm ~TeV}$, rates reach $\sigma\sim0.01-1$ fb when $\sqrt{s}\sim7-12{\rm ~TeV}$,
and span roughly $\sigma\sim20-40$ fb for the highest $\sqrt{s}$ considered.
\subsection{Vector leptoquarks}\label{sec:bsm_LQ}
\begin{figure}[t!]
\centering\mbox{
\subfigure[]{\includegraphics[width=.42\textwidth]{Plots/madMuon_diagram_vbfveclq}\label{vbfveclq_diag}}
\hspace{0.75cm}
\subfigure[]{\includegraphics[width=.52\textwidth]{Plots/madMuon_cs_energy_scan_vbfvecLQ}\label{vbfveclq_xsec}}
}
\caption{
(a) Diagrammatic representation of $b\overline{b}$ production in $\mu^+\mu^-$ collisions via the $t$-channel exchange of the vector leptoquark $U_1^\mu$.
(b) The associated cross section [fb] as a function of collider energy $\sqrt{s}$ [TeV] for representative $M_{\rm U}$.
Also shown is SM $\mu^+\mu^- \to b\overline{b}$ production (dashed curve).
}
\label{vbfveclq}
\end{figure}
The existence of leptoquarks, i.e., scalar and vector bosons that carry nonzero baryon and lepton numbers
as well as SM gauge charges, has long been predicted
because such states are required in certain grand unified
theories (GUTs)~\cite{Pati:1974yy,Georgi:1974sy,Fritzsch:1974nn,Dimopoulos:1979es,Senjanovic:1982ex,Schrempp:1984nj,Hewett:1988xc,Frampton:1989fu}.
Though their existence has not been conclusively established~\cite{Dorsner:2016wpm,Tanabashi:2018oca},
leptoquarks offer a viable solution to longstanding anomalies observed across several flavor experiments~\cite{Lees:2013uzd,Aaij:2014ora,Aaij:2015yra,Hirose:2016wfn,Aaij:2017deq,Aaij:2017vbb}.
These anomalies suggest a violation of lepton flavor universality beyond what is allowed by neutrino oscillations.
Hence, discovering and measuring properties of leptoquarks constitute an intriguing prospect at current and future experiments.
For reviews on the topic, see Refs.~\cite{Dorsner:2016wpm,Cerri:2018ypt} and references therein.
While the spectrum of leptoquark models is vast, especially interesting options are those featuring vector leptoquarks due to their direct role in GUTs
and recent demonstrations of their ultraviolet completions~\cite{Barbieri:2015yvd,Buttazzo:2016kid,Barbieri:2016las,DiLuzio:2017vat}.
For our purposes, we consider the concrete example~\cite{Baker:2019sli} where the vector leptoquark $U_1^\mu$ arises from the enlarged gauge group
\begin{equation}
\mathcal{G}_{\rm NP} = { SU}(4)\times { SU}(2)_L\times { U}(1)_{T_R^3},
\end{equation}
which itself is a subgroup of the Pati-Salam group $\mathcal{G}_{{\rm PS}} = SU(4)\times SU(2)_L\times SU(2)_R$~\cite{Pati:1974yy}.
In this case, $U_1^\mu$ is in the $(\mathbf 3, \mathbf 1, 2/3)$ representation of the SM gauge group.
At low energies, the relevant Lagrangian (before EWSB) can be described phenomenologically by~\cite{Baker:2019sli}:
\begin{align}
\mathcal{L}_{U_1}&=-\frac{1}{2}\,U_{1\,\mu\nu}^\dagger\, U_1^{\mu\nu}+M_U^2\,U_{1\,\mu}^\dagger\, U_1^{\mu}-ig_s\,(1-\kappa_U)\,U_{1\,\mu}^\dagger\,T^a\,U_{1\,\nu}\,G^{a\,\mu\nu}
\label{eq:bsm_LagLeptoquark}
\\
&\quad-ig_Y\,\frac{2}{3}\,(1-\tilde\kappa_U)\,U_{1\,\mu}^\dagger\,U_{1\,\nu}\,B^{\mu\nu}+\frac{g_U}{\sqrt{2}}\,[U_1^\mu\,(\beta_L^{ij}\,\bar q^i_L\gamma_\mu\ell_L^j+\beta_R^{ij}\,\bar d^i_R\gamma_\mu e_R^j)+{\rm H.c.}].\nonumber
\end{align}
Here, $G^{\mu\nu}=T^a G^{a\mu\nu}$ and $B^{\mu\nu}$ are the QCD and hypercharge field strengths, with associated gauge couplings $g_s$ and $g_Y$.
$U^{\mu\nu}_1$ and $M_U$ are the field strength and mass of $U_1$.
$\kappa_U$ and $\tilde\kappa_U$ are anomalous couplings that vanish in gauged leptoquark models.
$q_L,~\ell_L,~d_R,~e_R$ are the SM chiral fermion fields in the flavor basis,
and $g_U$ is a flavor-universal $q$-$\ell$-$U_1$ coupling strength, while $\beta_{L/R}^{ij}$ absorbs possible flavor dependencies.
In view of the aforementioned flavor anomalies, we assume that leptoquarks, if they indeed exist,
couple mainly to generation-3 fermions with the possible extension to muons.
Hence, to explore the sensitivity of multi-TeV muon colliders, we consider the process
\begin{equation}\label{eq:veclq}
\mu^+ \mu^- \to b\,\bar b
\end{equation}
mediated by a $t$-channel exchange of the vector leptoquark $U_1^\mu$,
as shown in figure~\ref{vbfveclq_diag}.
We work in the framework of equation~\ref{eq:bsm_LagLeptoquark} as implemented into the \texttt{LeptoQuark} FeynRules UFO model~\cite{Baker:2019sli}.
For our purposes, the relevant parameters are $g_U$ and $\beta_{L/R}^{ij}$ and we assume the default values of the model file.
We report our results in figure~\ref{vbfveclq_xsec}, where we show the $\mu^+ \mu^- \to b\,\bar b$
cross section [fb] as a function of collider energy $\sqrt{s}$ [TeV] for representative $M_{U}$.
Also shown is the SM $\mu^+\mu^- \to b\overline{b}$ production rate (grey dashed curve).
For both light and heavy $U_1^\mu$ masses, we observe only a mild dependence on collider energy.
More specifically, for $M_{U} = 0.4 - 0.8{\rm ~TeV}$, we find cross sections are roughly $\sigma\sim\mathcal{O}(0.01)$ fb for $\sqrt{s}\sim2-30{\rm ~TeV}$.
For heavier masses in the range of $M_{U} = 2 - 4{\rm ~TeV}$, we see that cross sections span $\mathcal{O}(10^{-4})-\mathcal{O}(10^{-3})$ fb for
collider energies of $\sqrt{s}\sim5-30{\rm ~TeV}$.
\subsection{Heavy Dirac and Majorana neutrinos}\label{sec:bsm_heavyN}
In the SM, neutrinos are massless fermions.
Neutrino oscillation data~\cite{Ahmad:2002jz,Ashie:2005ik}, however,
unambiguously demonstrate that they in fact possess exceptionally tiny masses, with $m_{\nu_k} < \mathcal{O}(1)$ eV~\cite{Aker:2019uuj}.
If neutrinos are Majorana fermions, then their masses are also related to the breaking of lepton number conservation~\cite{Schechter:1981bd,Hirsch:2006yk,Duerr:2011zd,Moffat:2017feq},
an accidental symmetry in the SM.
In order to reconcile these observations with the SM paradigm, neutrino mass models, collectively known as Seesaw models,
hypothesize the existence of new particles that necessarily~\cite{Ma:1998dn} couple to SM leptons and the Higgs.
If kinematically accessible, such states may be discovered at collider experiments through spectacular processes that violate lepton flavor and lepton number conservation;
for comprehensive reviews, see Refs.~\cite{Cai:2017jrq,Cai:2017mow}.
\begin{figure}[t!]
\centering\mbox{
\subfigure[]{\includegraphics[width=.42\textwidth]{Plots/madMuon_diagram_vbfhN}\label{vbfhn_diag}}
\hspace{0.75cm}
\subfigure[]{\includegraphics[width=.52\textwidth]{Plots/madMuon_cs_energy_scan_vbfHN}\label{vbfhn_xsec}}
}
\caption{
(a) Diagrammatic representation of $\ell^+_i\ell^-_j$ production via $t$-channel exchange of a heavy neutrino $N$.
(b) The cross section [fb] as a function of collider energy $\sqrt{s}$ [TeV] for representative masses $M_{N}$.
}
\label{vbfhn}
\end{figure}
A commonality of many Seesaw models is the existence of heavy neutrino mass eigenstates $N_{m'}$ that can be either (pseudo-)Dirac or Majorana.
These states couple to the SM sector through mixing with SM neutrinos and/or new gauge couplings.
For our purposes, we neglect subtleties related to decoupling of lepton number-violating processes in simplified models with only heavy neutrinos~\cite{Pilaftsis:1991ug,Kersten:2007vk,Moffat:2017feq},
and consider the well-studied~\cite{delAguila:2008cj,Atre:2009rg,Pascoli:2018heg} Phenomenological Type I Seesaw benchmark model,
as implemented in the \texttt{HeavyN} UFO libraries of Ref.~\cite{Alva:2014gxa,Degrande:2016aje}.
In this model, neutrino flavor eigenstates $\nu_\ell$ can be expressed generically~\cite{Atre:2009rg} in terms of light and heavy mass eigenstates by the decomposition
\begin{equation}
\nu_\ell = \sum_{m=1}^3 U_{\ell m} \nu_m ~+~ \sum_{m'=4}^6 V_{\ell m'} N_{m'} \approx \sum_{m=1}^3 U_{\ell m} \nu_m ~+~ V_{\ell m'=4} N.
\label{eq:nuDecomposition}
\end{equation}
In the last expression we assumed that active-sterile mixing $V_{\ell N}$ is dominated by the lightest, heavy mass eigenstate $(m'=4)$,
which we relabel as $N\equiv N_{m'=4}$.
The relevant interaction Lagrangian coupling $N$ to the SM Weak bosons after EWSB is
\begin{eqnarray}
\mathcal{L}_N^{\rm Int.}
\approx & -& \cfrac{g}{\sqrt{2}} \sum_{\ell=e}^\tau \overline{N} V^*_{\ell 4} W_\mu^+ \gamma^\mu P_L \ell^-
-\cfrac{g}{2\cos\theta_W} \sum_{\ell=e}^\tau \overline{N} V^*_{\ell 4} Z_\mu \gamma^\mu P_L \nu_\ell \nonumber\\
& -& \cfrac{g}{2M_W} h \sum_{\ell=e}^\tau \overline{N} V^*_{\ell 4} M_{N} P_L \nu_\ell
+ {\rm H.c.}
\label{eq:vbfhn_lag}
\end{eqnarray}
Here, $g\approx0.65$ is the $SU(2)$ coupling constant, $\theta_W$ is the weak mixing angle,
and $P_{L/R}=(1\mp\gamma^5)/2$ are the usual chiral projection operators for four-component fermions.
While there exists a number of processes in which heavy neutrinos can participate,
we focus on the production of oppositely charged lepton pairs through $W^+W^-$ scattering:
\begin{equation}
W^+ W^- \to \ell^+_i \ell^-_j,
\end{equation}
as shown in figure~\ref{vbfhn_diag}.
This signature complements conventional channels, including the $s$-channel $N\ell$ and $N\nu$ processes and $W^\pm\gamma\to N\ell^\pm$ fusion,
owing to its particular sensitivity to active-sterile mixing, which scales as $\sigma_{\mu\mu}\sim\vert V_{\ell_i N}V_{\ell_j N}^* \vert^2$,
and because it does not require $N$ to be on-shell.
Furthermore, observing this process for $\ell_i\neq \ell_j$ would give a clear indication of charged lepton flavor violation and provide guidance on the structure of neutrino mixing.
In figure~\ref{vbfhn_xsec}, we show the cross section [fb] for the flavor-conserving process,
\begin{equation}
\mu^+\mu^- \to \nu_\mu\overline{\nu}_\mu \mu^+ \mu^-,
\end{equation}
mediated by a heavy $t$-channel neutrino, for representative mass $M_{N}$, and as a function of collider energy $\sqrt{s}$ [TeV].
For concreteness, we take $\vert V_{\mu N}\vert=0.1$.
As in the leptoquark case in section~\ref{sec:bsm_LQ}, we observe only a slight rate dependence over a large range of neutrino masses.
For $M_N = 0.4-4{\rm ~TeV}$, we find that cross sections reach the $\sigma\sim10^{-4}$ fb threshold at about $\sqrt{s}=1-5{\rm ~TeV}$.
At larger collider energies, we observe that scattering rates can reach up to $\sigma\sim0.1-0.2$ fb by $\sqrt{s}=30{\rm ~TeV}$.
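Since the flavor-conserving rate scales as $\sigma\sim\vert V_{\mu N}\vert^4$, the curves in figure~\ref{vbfhn_xsec} can be translated to other mixing hypotheses by a simple rescaling. A minimal sketch, assuming only this quartic scaling (function and argument names are ours):
\begin{verbatim}
def rescale_xsec(sigma_ref_fb, v_ref=0.1, v_new=0.01):
    """Rescale a reference cross section, assuming
    sigma ~ |V_{mu N}|^4 for the flavor-conserving channel."""
    return sigma_ref_fb * (v_new / v_ref) ** 4

# A 0.1 fb rate at |V| = 0.1 becomes 1e-5 fb at |V| = 0.01
print(rescale_xsec(0.1))
\end{verbatim}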
\subsection{Vector-like quarks}\label{sec:bsm_vlq}
Among the more curious aspects of the SM is the existence of three copies, or generations, of matter.
While at least three generations are necessary for CP violation in the quark sector~\cite{Kobayashi:1973fv},
no first-principle argument establishes this to be the case.
Moreover, as additional chiral generations are constrained by flavor and Higgs data~\cite{Eberhardt:2010bm,Djouadi:2012ae,Eberhardt:2012gv},
if more copies do exist, such matter particles likely belong to different EW representations or possess new quantum numbers.
One such example is vector-like fermions, whose left- and right-handed chiral components transform identically under the gauge group
but which may nonetheless carry the same gauge charges as SM particles after EWSB.
As discussed in section~\ref{sec:bsm_heavyN}, vector-like electrons and neutrinos are key ingredients of neutrino mass models.
In addition, vector-like quarks (VLQ) offer viable, non-supersymmetric solutions to the Higgs mass hierarchy and dynamical EWSB~\cite{ArkaniHamed:2002qy,Schmaltz:2005ky,Martin:2009bg}.
The phenomenology of such models is rich, well-documented~\cite{Han:2003wu,Hubisz:2004ft,Perelstein:2005ka,Aguilar-Saavedra:2013qpa,Buchkremer:2013bha},
and has led to LHC searches for vector-like top and bottom quarks
in a variety of final states~\cite{Aaboud:2018xpj,Aaboud:2018pii,Aaboud:2018wxv,Sirunyan:2018qau,Sirunyan:2019sza,Sirunyan:2019xeh}.
\begin{figure}[t!]
\centering\mbox{
\subfigure[]{\includegraphics[width=.42\textwidth]{Plots/madMuon_diagram_vbftptp}\label{vbfvlq_diag}}
\hspace{0.75cm}
\subfigure[]{\includegraphics[width=.52\textwidth]{Plots/madMuon_cs_energy_scan_vbfVLQ}\label{vbfvlq_xsec}}
}
\caption{
Same as figure~\ref{fig:bsm_vbfsing} but for the VLQ pair $t'\overline{t'}$, as described by equation~\ref{eq:topL_proc}.
}
\label{vbfvlq}
\end{figure}
For fermionic top partners $t'$, i.e., a VLQ with the same quantum numbers as the top quark after EWSB,
the effective Lagrangian describing $t'$ can be parametrized by ``decomposing'' the top quark further into two mass eigenstates:
\begin{equation}
t(M_t\approx173{\rm ~GeV}) \to t \approx t(M_t\approx173{\rm ~GeV}) + \kappa t'(M_{t'}) + \mathcal{O}(\kappa^2).
\end{equation}
Here, $\kappa$ is a small, model-dependent mixing parameter and the abuse of notation is obvious.
While the Lorentz structures of the gluon and photon interactions with $t'$ are dictated by gauge invariance, those of the EW bosons are less restricted.
Generically, the EW couplings of a single $t'$ with $u$- and $d$-flavored, SM quarks can be written as~\cite{Buchkremer:2013bha}:
\begin{align}
\mathcal{L}_{t'-single} &= \kappa_W V_{L/R}^{4i} \frac{g}{\sqrt{2}}\; [\bar{t'}_{L/R} W_\mu^+ \gamma^\mu d^i_{L/R} ] + \kappa_Z V_{L/R}^{4i} \frac{g}{2 c_W} \; [\bar{t'}_{L/R} Z_\mu \gamma^\mu u^i_{L/R} ] \nonumber \\
&- \kappa_H V_{L/R}^{4i} \frac{M_{t'}}{v}\; [\bar{t'}_{R/L} H u^i_{L/R} ] + {\rm H.c.}
\label{eq:topL}
\end{align}
Here, $M_{t'}$ is the mass of the VLQ,
$V_{L/R}^{4i}$ is model-dependent and accounts for any potential flavor mixing,
the index $i$ runs over the three SM generations,
and the parameters $\kappa_V$ ($V = W$, $Z$, $H$) encode anomalous couplings to the EW bosons.
To investigate the sensitivity of multi-TeV muon colliders to VLQs, we consider $t'\overline{t'}$ pair production from $W^+W^-$ fusion,
as shown in figure~\ref{vbfvlq_diag} and given by
\begin{equation}
\mu^- \mu^+ \to \nu_\mu \overline{\nu}_\mu t' \overline{t'}.
\label{eq:topL_proc}
\end{equation}
Using equation~\ref{eq:topL} as implemented in the \texttt{VLQ} UFO libraries by Ref.~\cite{Buchkremer:2013bha},
we show in figure~\ref{vbfvlq_xsec} the $W^+W^- \to t' \overline{t'}$ cross section [fb] in
$\mu^+\mu^-$ collisions as a function of collider energy $\sqrt{s}$ [TeV], for representative $M_{t'}$.
We assume the default couplings of Ref.~\cite{Buchkremer:2013bha}.
Overall, we observe a large variation of production rates as a function of mass and collider energy.
For lighter $t'$, with $M_{t'} = 0.4-0.8{\rm ~TeV}$, we find that cross sections remain below the $\sigma\sim10^{-4}$ fb level for $\sqrt{s}=2-3{\rm ~TeV}$,
but quickly grow with increasing $\sqrt{s}$.
For the same mass range, rates reach roughly $\sigma\sim0.5-5$ fb by $\sqrt{s}=30{\rm ~TeV}$.
For heavier $t'$ with $M_{t'}=2-4{\rm ~TeV}$, we see that the rate growth is milder,
with the $\sigma\sim10^{-4}$ fb threshold achieved at $\sqrt{s}\sim7-15{\rm ~TeV}$.
By $\sqrt{s}=30{\rm ~TeV}$, rates reach up to $\sigma\sim10^{-3}-10^{-2}$ fb.
\subsection{Overview of vector boson fusion sensitivity}\label{sec:bsm_overview}
\begin{figure}[t!]
\centering\mbox{{\includegraphics[width=.70\textwidth]{Plots/madMuon_bsm_sensitivities}}}
\caption{
Required luminosity [fb$^{-1}$] for a $5\,\sigma$ discovery of
$H^{++}$ (red) in the GM model;
$\tilde{t}\overline{\tilde{t}}$ (blue),
$\tilde{\chi}^+\tilde{\chi}^-$ (purple), and
$\tilde{\chi}^0\tilde{\chi}^0$ (yellow) in the MSSM,
using VBF in $\sqrt{s}=14{\rm ~TeV}$ (solid) and $30{\rm ~TeV}$ (dashed) muon collisions.
}
\label{fig:sensi}
\end{figure}
In this section we investigated the sensitivity of EW VBF to a variety of BSM scenarios at multi-TeV muon colliders.
In order to give an overview picture of this reach, we present in figure~\ref{fig:sensi} the
requisite integrated luminosity $\mathcal{L}$ [fb$^{-1}$] for a $5\sigma$ discovery as a function of new particle mass in $\sqrt{s}=14{\rm ~TeV}$ (solid) and $30{\rm ~TeV}$ (dashed) muon collisions.
We consider specifically
the doubly charged Higgs $H^{++}$ (red) from the GM model (see section~\ref{sec:bsm_gm});
$\tilde{t}\overline{\tilde{t}}$ (blue),
$\tilde{\chi}^+\tilde{\chi}^-$ (purple), and
$\tilde{\chi}^0\tilde{\chi}^0$ (yellow) pairs from the MSSM (see section~\ref{sec:bsm_mssm}).
As dedicated signal and background analyses are beyond the scope of this document, we crudely assume a zero background hypothesis and full signal acceptance.
We therefore also use,
as a simple measure of statistical significance $(\mathcal{S})$, the formula $\mathcal{S}=\sqrt{\mathcal{L}\times\sigma}$.
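Inverting this relation gives the luminosity targets shown in figure~\ref{fig:sensi}. A minimal sketch under the same zero-background, full-acceptance assumptions (the function name is ours):
\begin{verbatim}
def lumi_for_discovery(sigma_fb, n_sigma=5.0):
    """Integrated luminosity [fb^-1] such that
    sqrt(L * sigma) = n_sigma."""
    return n_sigma**2 / sigma_fb

# A 0.01 fb signal requires 2500 fb^-1 for a 5 sigma discovery
print(lumi_for_discovery(0.01))
\end{verbatim}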
As a general feature, we see that less integrated luminosity is needed to achieve the same discovery at
higher collider energies (dashed lines) than is needed at lower collider energies (solid lines).
For example:
For $\tilde{\chi}^0\tilde{\chi}^0$ pair production with $M=2{\rm ~TeV}$, about
$\mathcal{L}\approx 3000~(200){\rm ~fb^{-1}}$ at $\sqrt{s}=14~(30){\rm ~TeV}$ are needed to reach $5\sigma$.
Similarly, for $\tilde{\chi}^\pm$ pair production with $M=5{\rm ~TeV}$, one can pass the $5\sigma$
threshold with roughly $\mathcal{L}\approx 250~(1.5){\rm ~fb^{-1}}$.
For $H^{++}$ of mass $M=10{\rm ~TeV}$, one would need about $\mathcal{L}=60~(3.5){\rm ~fb^{-1}}$ at $\sqrt{s}=14~(30){\rm ~TeV}$.
While highly intuitive for $pp$ colliders, this behavior is somewhat of a novelty for lepton colliders because typical, $s$-channel annihilation processes
exhibit cross sections that \textit{decrease} with \textit{increasing} collider energy.
Hence, for $s$-channel annihilations, one typically needs more data at higher collider energies to achieve the same discovery potential.
We attribute this improved outlook to the increasing likelihood for forward, initial-state EW boson radiation at higher collider energies.
That is to say, the opening and increasing importance of EW vector boson fusion channels.
In terms of the parton luminosity language of section~\ref{sec:ppvsmuon},
a higher collider energy translates to a larger EW boson parton luminosity.
For a fixed ``partonic'' scattering rate, this leads to an increased, beam-level cross section, and therefore higher sensitivity.
In this sense, multi-TeV lepton colliders start resembling proton colliders, and effectively become high-luminosity, weak boson colliders.
While remaining in the context of the above BSM scenarios, we now explore this perspective further.
\section{New physics processes at muon colliders: annihilation vs fusion}\label{sec:bsm_vbf}
As we have shown here and throughout previous sections, VBF production cross sections $(\sigma^{\rm VBF})$ grow with increasing $\sqrt{s}$,
a phenomenon that follows from the propensity for forward emission of transverse gauge bosons at increasing collider energies.
While the exact dependence of $\sigma^{\rm VBF}$ on the collider energy of course
varies with the BSM signature, for example with the particles involved, their underlying dynamics, and their kinematics,
it nevertheless contrasts with $s$-channel, annihilation processes.
These processes feature cross sections $(\sigma^{s-ch.})$ that instead decrease with collider energy as $\sigma^{s-ch.}\sim1/s$,
when well above kinematic thresholds.
Hence, just as in the SM, we find a commonality in all VBF processes considered here:
assuming fixed model inputs, for sufficiently high collider energies, VBF cross sections exceed those of analogous, $s$-channel production modes.
\begin{figure}[t!]
\centering
\subfigure[]{\includegraphics[width=.49\textwidth]{Plots/madMuon_cs_s_channel_vs_vbf_SZ}\label{fig:vbfschh2s_sing}}
\subfigure[]{\includegraphics[width=.49\textwidth]{Plots/madMuon_cs_s_channel_vs_vbf_H2Z}\label{fig:vbfschh2s_2hdm}}
\subfigure[]{\includegraphics[width=.49\textwidth]{Plots/madMuon_cs_s_channel_vs_vbf_SToppair}\label{fig:vbfschsttp_stop}}
\subfigure[]{\includegraphics[width=.49\textwidth]{Plots/madMuon_cs_s_channel_vs_vbf_VLQ}\label{fig:vbfschsttp_tptp}}
\subfigure[]{\includegraphics[width=.49\textwidth]{Plots/madMuon_cs_s_channel_vs_vbf_NEUpair}\label{fig:vbfschneucha_chi0}}
\subfigure[]{\includegraphics[width=.49\textwidth]{Plots/madMuon_cs_s_channel_vs_vbf_CHApair}\label{fig:vbfschneucha_chip}}
\caption{
For representative input parameters and as a function of muon collider energy [TeV],
the cross section [fb] via VBF (solid lines) and $s$-channel annihilation (dashed lines) for:
(a) $S Z$ associated production in a singlet-scalar extension of the SM (section~\ref{sec:bsm_scalar});
(b) $H_2 Z$ associated production in the 2HDM (section~\ref{sec:bsm_2hdm});
(c) $\tilde{t}\overline{\tilde{t}}$ pair production in the MSSM (section~\ref{sec:bsm_mssm});
(d) $t'\overline{t'}$ pair production in a vector-like quark scenario (section~\ref{sec:bsm_vlq});
(e) $\tilde{\chi}^0\tilde{\chi}^0$ pair production in the MSSM;
and (f) $\tilde{\chi}^+\tilde{\chi}^-$ pair production in the MSSM.
}
\label{fig:bsm_vbf}
\end{figure}
As in the SM case studies of section~\ref{sec:sm}, there is not a definite energy beyond which $s$-channel, $\mu^+\mu^-$ annihilations are categorically subdominant.
The situation is more nuanced.
For example: as in our SM cases, the more final-state particles involved, the larger the $\sqrt{s}$ needed for $\sigma^{\rm VBF}$ to exceed $\sigma^{s-ch.}$.
For the resonant production of BSM states, there is of course another important parameter that plays a role: the mass scale of the new, final-state particle(s).
New mass scales complicate the na\"ive scaling for VBF in three ways.
First is the aforementioned propensity for collinear emission of transverse gauge bosons, which, more precisely, grows with the invariant mass of the VBF system.
Second is the possible enhancement of ``soft'' EW boson emissions at small momentum fractions.
Third is the role of matrix elements featuring large longitudinal gauge boson couplings that nevertheless possess a relatively suppressed $V_0 V_0$ parton luminosity
(see section~\ref{sec:ppvsmuon_vbf}).
To explore how the mass scale of new particles impacts the threshold at which $\sigma^{\rm VBF}$ surpasses $\sigma^{s-ch.}$,
and working in the context of the BSM scenarios of section~\ref{sec:bsm},
we compare in figure~\ref{fig:bsm_vbf}
a variety of VBF and analogous $s$-channel, annihilation processes in multi-TeV $\mu^+\mu^-$ collisions.
Assuming representative input parameters and as a function of muon collider energy [TeV],
we show the VBF (solid lines) and $s$-channel (dashed lines) cross sections for:
\ref{fig:vbfschh2s_sing} $S Z$ associated production in a singlet-scalar extension of the SM (see section~\ref{sec:bsm_scalar});
\ref{fig:vbfschh2s_2hdm} $H_2 Z$ associated production in the 2HDM (see section~\ref{sec:bsm_2hdm});
\ref{fig:vbfschsttp_stop} $\tilde{t}\overline{\tilde{t}}$,
\ref{fig:vbfschneucha_chi0} $\tilde{\chi}^0\tilde{\chi}^0$,
and \ref{fig:vbfschneucha_chip} $\tilde{\chi}^+\tilde{\chi}^-$ pair production in the MSSM (see section~\ref{sec:bsm_mssm});
as well as \ref{fig:vbfschsttp_tptp} $t'\overline{t'}$ pair production in a vector-like quark scenario (see section~\ref{sec:bsm_vlq}).
Estimated collider energies $\sqrt{s}$ at which the VBF rates surpass the $s$-channel rates are summarized in table~\ref{tab:bsm_vbf}.
From this exercise we observe several trends.
We start by noting that the VBF production rates surpass the $s$-channel rates at relatively lower collider energies
for $SZ$, $H_2 Z$, $\tilde{t}\overline{\tilde{t}}$, and $\tilde{\chi}^0\tilde{\chi}^0$ production than for $t'\overline{t'}$ and $\tilde{\chi}^+\tilde{\chi}^-$ pair production.
In particular, for $S Z$ and $H_2 Z$, we report that $\sigma^{\rm VBF}$ becomes larger than $\sigma^{s-ch.}$ at around $\sqrt{s}\sim2-3{\rm ~TeV}$ for $M_S,~M_{H_2}=0.4-0.8{\rm ~TeV}$.
For heavier masses of $M_S,~M_{H_2}=2-4{\rm ~TeV}$, the transition energies both span $\sqrt{s}\sim4-5.5{\rm ~TeV}$.
The same mass dependence can be found for $\tilde{t}\overline{\tilde{t}}$ and $\tilde{\chi}^0\tilde{\chi}^0$ production.
For the same ranges of lighter and heavier masses, the VBF cross sections become prevalent at $\sqrt{s}\sim3-4{\rm ~TeV}$ and $\sqrt{s}\sim7-13{\rm ~TeV}$.
The two sets of processes can further be linked by noting that the $M_S,~M_{H_2}=0.8~(2.0)~[4.0]{\rm ~TeV}$ benchmark masses probe approximately the same scales
as the $M_{\tilde{t}},~M_{\tilde{\chi}^0}=0.4~(0.8)~[2.0]{\rm ~TeV}$ benchmarks, with reasonable consistency.
This trend suggests some universal-like scaling behavior.
\begin{table}[!t]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{c | c c c c c c | c}
mass $(M_X)$ [TeV] & $S Z$ (Singlet) & $H_2 Z$ (2HDM) & $t'\overline{t'}$ (VLQ) & $\tilde{t}\overline{\tilde{t}}$ (MSSM) & $\tilde{\chi}^0{\tilde{\chi}^0}$ (MSSM) & $\tilde{\chi}^+\tilde{\chi}^-$ (MSSM)
& Scaling (Eq.~\ref{eq:bsm_vbf_scaling})\\
\hline\hline
400 GeV & 2.1 TeV & 2.1 TeV & 11 TeV & 2.9 TeV & 3.2 TeV & 7.5 TeV & 1.0 (1.7) TeV\\
600 GeV & 2.5 TeV & 2.5 TeV & 16 TeV & 3.8 TeV & 3.8 TeV & 8.1 TeV & 1.3 (2.4) TeV \\
800 GeV & 2.8 TeV & 2.8 TeV & 22 TeV & 4.3 TeV & 4.3 TeV & 8.5 TeV & 1.7 (3.1) TeV \\
2.0 TeV & 4.0 TeV & 4.0 TeV & >30 TeV & 7.8 TeV & 6.9 TeV & 11 TeV & 3.7 (6.8) TeV \\
3.0 TeV & 4.8 TeV & 4.8 TeV & >30 TeV & 10 TeV & 9.0 TeV & 13 TeV & 5.3 (9.8) TeV \\
4.0 TeV & 5.5 TeV & 5.5 TeV & >30 TeV & 13 TeV & 11 TeV & 15 TeV & 6.8 (13) TeV \\
\hline
\end{tabular}
}
\end{center}
\caption{
For representative processes and inputs, the required muon collider energy $\sqrt{s}$ [TeV]
at which the VBF production cross section surpasses the $s$-channel, annihilation cross section,
as shown in figure~\ref{fig:bsm_vbf}.
Also shown are the crossover energies as estimated from the scaling relationship in equation~(\ref{eq:bsm_vbf_scaling})
assuming a mass scale $M_X~(2M_X)$.
} \label{tab:bsm_vbf}
\end{table}
For pair production of $t'\overline{t'}$ and $\tilde{\chi}^+\tilde{\chi}^-$, we find that the VBF channels become more important than $s$-channel production at much higher collider energies than the
previously discussed processes.
More specifically, for $\tilde{\chi}^+\tilde{\chi}^-$, we find that collider energies must exceed $\sqrt{s}\sim7.5-8.5~(11-15)$ TeV for lighter~(heavier) mass scales.
For $t'\overline{t'}$, the outlook is even worse. We find that VBF production only becomes important for $\sqrt{s}\sim11-22{\rm ~TeV}$ for relatively light masses of $M_{t'}=0.4-0.8{\rm ~TeV}$,
whereas for heavier masses of $M_{t'}=2-4{\rm ~TeV}$, one requires collider energies that exceed $\sqrt{s}=30{\rm ~TeV}$.
We attribute the qualitative differences between these two processes and the previous four processes to differences between subprocesses in the $s$-channel and VBF mechanisms.
In the first four cases, both $s$-channel and VBF proceed largely through the same EW gauge interactions.
In the latter two cases, the $s$-channel and VBF channels differ by additional $t$-channel exchanges that are governed not by gauge couplings but by mixing factors and Yukawa couplings.
The crossover, then, exhibits a stronger model dependence when the VBF and $s$-channel diagrams adhere to different dynamics or interaction strengths.
As already stated, for the $S Z$, $H_2 Z$, $\tilde{t}\overline{\tilde{t}}$, and $\tilde{\chi}^0\tilde{\chi}^0$ channels, we observe a suggestive, universal-like
behavior at which the VBF cross sections surpass their $s$-channel counterparts for a given final-state mass scale $M_X$.
This behavior can be roughly estimated by noting that the kinematic scaling of
$s$-channel, $\mu^+\mu^- \to X$ cross sections is of the form
\begin{equation}
\sigma^{s-ch.} \sim \frac{(s-M_X^2)}{(s-M_V^2)^2} \sim \frac{(s-M_X^2)}{s^2}.
\label{eq:bsm_sch_scaling}
\end{equation}
The denominator takes its form from the propagator of some intermediate state of mass $M_V\ll \sqrt{s}$ (it makes no difference which state),
and the numerator from momentum conservation, which requires the cross section to vanish when the collider energy $\sqrt{s}$ dips to the mass threshold $M_X$ of the final state.
Likewise, the differential rate for VBF processes that (importantly) proceed through the same interactions as the $s$-channel process scales as
\begin{equation}
\frac{d\sigma^{\rm VBF}}{dz_1 dz_2} \sim f_V(z_1)f_{V'}(z_2) \frac{(z_1 z_2 s-M_X^2)}{(z_1 z_2 s-M_V^2)^2}
\sim
f_V(z_1)f_{V'}(z_2) \frac{(z_1 z_2 s-M_X^2) \sigma^{s-ch.}}{(z_1 z_2)^2~(s-M_X^2)}.
\label{eq:bsm_vbf_dSigma}
\end{equation}
Here we use the Effective $W$ Approximation (see section~\ref{sec:ppvsmuon_vbf}) to model the $VV'\to X$ hard process, which is mediated by EW bosons $VV'$ carrying energy fractions $z_1, z_2$.
(We make implicit a summation over all $VV'$ permutations that contribute to $VV'\to X$.)
In the final step, we assume that the invariant mass of the $VV'$-system is large, i.e., $M_{VV'}^2=z_1 z_2 s\gg M_V^2$, and express the $VV'\to X$ scaling in terms of equation~\ref{eq:bsm_sch_scaling}.
Now, as seen in equation~\ref{eq:ewa_VT}, the EWA PDFs contribute most at small momentum fractions $(z_i\ll1)$, i.e., the limit where gauge radiation goes soft.
Moreover, as shown in figure~\ref{fig:p_vs_muon_VBF}, the $VV'$ luminosity is dominated by transverse polarizations.
Hence, in the small-$z_i$ limit (and setting the factorization scale $\mu_f=\sqrt{s}$),
the leading contribution to equation~\ref{eq:bsm_vbf_dSigma} scales as
\begin{equation}
\frac{d\sigma^{\rm VBF}}{dz_1 dz_2} \sim \mathcal{S} \times \frac{g^2_W}{4\pi z_1}\log\frac{s}{M_V^2} \times \frac{g_W^2}{4\pi z_2} \log\frac{s}{M_{V'}^2} \times
\frac{(z_1 z_2 s-M_X^2)}{(z_1 z_2)^2~(s-M_X^2)}\sigma^{s-ch.}.
\end{equation}
Here we introduce explicitly a multiplicity factor $\mathcal{S}=4$ to account for the sum over the four transverse polarization permutations that contribute to $V_TV'_T\to X$ production.
If we make the strong assumption that the $VV'$-system's mass is also large in comparison to $M_X$,
then the VBF scaling, in terms of the $s$-channel scaling, simplifies to
\begin{equation}
\frac{\sigma^{\rm VBF}}{\sigma^{s-ch.}}
\sim \mathcal{S} \left(\frac{g_W^2}{4\pi}\right)^2 \log^2\frac{s}{M_V^2} \int \frac{dz_1 dz_2}{(z_1 z_2)^2} =
\mathcal{S} \left(\frac{g_W^2}{4\pi}\right)^2 \log^2\frac{s}{M_V^2} \int_{\tau_0}^1 d\tau \int_\tau^1 \frac{dz}{z} \frac{1}{\tau^2},
\end{equation}
where $\tau=z_1 z_2 = M_{VV'}^2/s$ is the dimensionless scale at which $VV'\to X$ proceeds,
and $\tau_0 = \min(\tau) = M_X^2/s$ is the smallest $\tau$ at which the hard process can kinematically occur.
In the first step, we group collinear logs under the stipulation that the $V-V'$ mass difference is negligible.
In the second, we make a change of variables to express the momentum integrals in terms of traditional collider variables.
After integrating (the $z$ integral yields $\log(1/\tau)$, and the remaining $\tau$ integral follows by integration by parts), and in terms of the $s$-channel scaling,
the VBF dependence on collider energy for $s\gg M_X^2$ scales as
\begin{eqnarray}
\sigma^{\rm VBF}
&\sim& \sigma^{s-ch.} \times \mathcal{S} \left(\frac{g_W^2}{4\pi}\right)^2 \log^2\frac{s}{M_V^2} \times \left[\frac{1}{\tau}-\frac{1}{\tau}\log\frac{1}{\tau}\right]_{\tau_0}^{1}
\\
&\sim& \sigma^{s-ch.} \times \mathcal{S} \left(\frac{g_W^2}{4\pi}\right)^2 \left(\frac{s}{M_X^2}\right) \log^2\frac{s}{M_V^2}\log\frac{s}{M_X^2}.
\label{eq:bsm_vbf_soln}
\end{eqnarray}
We observe that the scaling behavior for VBF processes exhibits a double collinear logarithmic dependence on $s$,
which stems from two collinear, EW PDFs, but remarkably only a single soft logarithm.
The double soft logarithm does not arise as the $VV'\to X$ hard process is power-suppressed by a relative factor of $1/(z_1z_2s) = 1/(\tau s)$.
This in turn manifests as a power-law factor that grows linearly with $(s/M_X^2)$.
Altogether, this enables us to roughly estimate the collider energy $\sqrt{s}$ at which $\sigma^{\rm VBF}$ surpasses $\sigma^{s-ch.}$ for a given final-state mass $M_X$.
Essentially, one must solve for when
\begin{equation}
\frac{\sigma^{\rm VBF} }{\sigma^{s-ch.}} \sim
\mathcal{S} \left(\frac{g_W^2}{4\pi}\right)^2 \left(\frac{s}{M_X^2}\right) \log^2\frac{s}{M_V^2}\log\frac{s}{M_X^2} > 1.
\label{eq:bsm_vbf_scaling}
\end{equation}
While the result is transcendental, the solution can easily be extracted numerically\footnote{Explicitly, we use the \texttt{Mathematica} function \texttt{NSolve} and report very quick runtime on a personal laptop.} for the representative $M_X$,
which we report in the rightmost column of table~\ref{tab:bsm_vbf} assuming a mass scale $M_X~(2M_X)$.
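As an alternative to the \texttt{Mathematica} solve mentioned in the footnote, the inversion can be done with a few lines of Python. The sketch below (illustrative only) solves equation~\ref{eq:bsm_vbf_scaling} by bisection, with $\mathcal{S}=4$, $g_W\approx0.65$, and $M_V\approx M_W$ as in the text; it should reproduce the $M_X$ entries of the rightmost column up to rounding.
\begin{verbatim}
import math

G_W, M_V, S_MULT = 0.65, 80.4, 4.0  # SU(2) coupling, M_W [GeV], S

def ratio(sqrt_s, m_x):
    """Estimated sigma_VBF / sigma_s-ch. from the crossover condition."""
    s = sqrt_s ** 2
    return (S_MULT * (G_W**2 / (4.0 * math.pi))**2 * (s / m_x**2)
            * math.log(s / M_V**2)**2 * math.log(s / m_x**2))

def crossover_energy(m_x, hi=1.0e6):
    """Smallest sqrt(s) [GeV] with ratio > 1, found by bisection."""
    lo = 1.001 * m_x                 # just above kinematic threshold
    for _ in range(100):             # ratio is monotonic in sqrt(s)
        mid = 0.5 * (lo + hi)
        if ratio(mid, m_x) < 1.0:
            lo = mid
        else:
            hi = mid
    return hi

for m_x in (400.0, 800.0, 2000.0, 4000.0):
    print(f"M_X = {m_x/1e3:.1f} TeV -> "
          f"crossover near {crossover_energy(m_x)/1e3:.1f} TeV")
\end{verbatim}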
For sub-TeV masses, the scaling behavior systematically underestimates the true crossover by roughly a factor of two.
This is unsurprising as equation~\ref{eq:bsm_vbf_scaling} assumes a large hierarchy between relevant scales.
For TeV masses and above, however, we find good agreement between equation~\ref{eq:bsm_vbf_scaling} and explicit Monte Carlo computations.
We report differences ranging from the percent level to the 20\% level.
\section{Conclusions}\label{sec:conclusions}
The next generation of particle accelerators needed to explore the energy frontier will pose tremendous challenges.
Among these machines is a muon collider running at energies of up to several TeV and luminosities in the tens of inverse attobarns,
a dream machine both from the technology and physics points of view.
Overcoming the challenges posed by producing, storing, and colliding high-intensity beams of high-energy muons will take years of further research and development.
Exploring the physics potential of such machines, on the other hand, is a relatively easy task that can be undertaken on a short time scale.
In this paper, we have moved a small step forward in the latter direction by considering electroweak vector boson fusion/scattering (VBF) processes at a future multi-TeV lepton collider in a rather systematic way.
Our study is motivated by the simple observation that while $s$-channel production rates decrease with increasing collider energy as $1/s$,
VBF rates grow as a power of $\log s$,
and therefore, for any final state, VBF is destined to eventually emerge as the leading production mechanism.
In this context, we have investigated and shown in section~\ref{sec:ppvsmuon} that, compared to hadron colliders, VBF is a much more relevant production mechanism at a high-energy lepton collider.
We continue in section~\ref{sec:sm} and present,
for a rather large set of SM final states involving EW vector bosons, Higgs bosons, and top quarks,
the corresponding VBF cross sections and the collider energies at which they surpass the $s$-channel production modes.
We find that VBF becomes the dominant production mechanism at relatively low collider energies,
starting at just a few TeV for low final-state multiplicities and increasing for higher multiplicities.
In order to further illustrate what could be attainable in terms of new physics reach, we then moved in two directions, focusing mostly on luminosity scenarios envisaged for a muon collider.
First, in section~\ref{sec:eft}, we considered prospects for precision measurements of the Higgs's self-couplings and the top quark's EW couplings,
and interpreted sensitivity in terms of Wilson coefficients within the SMEFT framework.
Second, in section~\ref{sec:bsm}, we explored a variety of simplified extensions of the SM and how large VBF luminosities can maximize the direct search for new physics.
In particular we find evidence that in several instances the reach of a multi-TeV muon collider is comparable to or better than that attainable at a 100 TeV proton-proton collider.
A detailed comparison of VBF's utility over $s$-channel annihilations in BSM searches was then summarized in section~\ref{sec:bsm_vbf}.
The results presented in this work are meant to provide a first glimpse of what could be achieved at a multi-TeV muon collider in VBF channels,
and certainly motivate further and more refined investigations.
We close by stressing that while we focus on the specific prospects of a muon collider, our conclusions hold equally for other lepton colliders.
\section{Precision electroweak measurements}\label{sec:eft}
In this section we explore the potential of a muon collider to probe new physics indirectly.
As it is not realistic to be exhaustive,
after summarizing the effective field theory formalism in which we work (section~\ref{sec:eft_intro}),
we select a few representative examples related to the Higgs boson (section~\ref{sec:eft_higgs}) and the top quark (section~\ref{sec:eft_top}).
\subsection{SMEFT formalism}\label{sec:eft_intro}
Undertaking precision measurements of SM observables is of utmost importance if nature features heavy resonances at mass scales that are just beyond the kinematic reach of
laboratory experiments.
Be it perturbative or non-perturbative, the dynamics of such new states could leave detectable imprints through their interactions with the SM particles.
This is especially the case for the heaviest SM particles if the new physics under consideration is related to the flavor sector or the spontaneous breaking of EW symmetry.
Generically, two broad classes of observables, defined in different regions of phase space, can be investigated.
The first are bulk, or inclusive, observables for which large statistics are available and even small deviations from the null (SM) hypothesis are detectable.
The second are tail, or exclusive, observables, where the effects of new physics can be significantly enhanced at high energies, for example through selection cuts, compensating for the lower statistics.
A simple yet powerful approach to interpret indirect searches for new, heavy particles in low-energy observables is the
SMEFT framework \cite{Grzadkowski:2010es,Aebischer:2017ugx,Brivio:2017btx}.
The formalism describes a large class of models featuring states that live above the EW scale and provides a consistent, quantum field theoretic description of deformations of SM interactions.
This is done while employing a minimal set of assumptions on the underlying, ultraviolet theory.
In SMEFT, new physics is parametrized through higher dimensional, i.e., irrelevant, operators that augment the unbroken SM Lagrangian,
yet preserve the fundamental gauge symmetries of the SM
by only admitting operators that are both built from SM fields and invariant under $\mathcal{G}_{\rm SM}=SU(3)_c\times SU(2)_L \times U(1)_Y$ gauge transformations.
Accidental symmetries of the SM, such as lepton and baryon number conservation, are automatically satisfied under certain stipulations~\cite{Kobach:2016ami,Helset:2019eyc}.
Additional global symmetries can also be imposed on the Lagrangian.
In this work we assume the flavor
symmetry,\footnote{The labels $l, e, d, u, q$ refer, respectively, to the left-handed lepton doublets, the right-handed leptons, the right-handed down-type quarks, the right-handed up-type quarks, and the left-handed quark doublets.}
$\mathcal{S}=U(3)_l \times U(3)_e \times U(3)_d \times U(2)_u \times U(2)_q$.
This helps reduce the number of independent degrees of freedom while simultaneously singling out the top quark as a window onto new physics.
\begin{table}
{\centering
\input{operators}
\caption{\label{tab:OP_DEF}
SMEFT operators at dimension-six relevant for the Higgs boson and the top quark in EW observables, in the so-called Warsaw basis~\cite{Grzadkowski:2010es},
and where a $U(3)^3\times U(2)^2$ flavor symmetry is assumed.
$Q,\,t$, and $b$ denote the third generation components of $q,\,u$, and $d$.
}
}
\end{table}
Under these assumptions, after neglecting the Weinberg operator at dimension five
and truncating the EFT expansion at dimension six, the SMEFT Lagrangian is
\begin{equation}
\mathcal{L}_{ SMEFT} = \mathcal{L}_{ SM} + \frac{1}{\Lambda^2} \sum C_i \mathcal{O}_i \, .
\end{equation}
Here, $C_i$ are the dimensionless Wilson coefficients of the dimension-six operators $\mathcal{O}_i$.
In the absence of additional symmetries, such as the flavor symmetry $\mathcal{S}$ defined above,
the number of independent $\mathcal{O}_i$ stands at 59 if one considers only one generation of fermions and
2499 with three generations.
In practice, one usually studies only a subset of operators in order to establish the sensitivity of a measurement.
Since we are mainly interested in the top quark and Higgs sectors, we consequently retain only operators that explicitly involve top or Higgs fields and affect EW observables.
The full list of operators that we consider is given in table~\ref{tab:OP_DEF}, where the following conventions are adopted:
\begin{align}
\phi^\dag {\overleftrightarrow D}_\mu \phi=\phi^\dag D_\mu\phi-(D_\mu\phi)^\dag\phi,
&\qquad
\phi^\dag \tau_{\sss K} {\overleftrightarrow D}^\mu \phi=
\phi^\dag \tau_{\sss K}D^\mu\phi-(D^\mu\phi)^\dag \tau_{\sss K}\phi,
\\
W^{\sss K}_{\mu\nu} = \partial_\mu W^{\sss K}_\nu
- \partial_\nu W^{\sss K}_\mu
+ g \epsilon_{\sss IJ}{}^{\sss K} \ W^{\sss I}_\mu W^{\sss J}_\nu,
&\qquad
B_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu, \\
D_\mu\phi = \left(\partial_\mu - i \frac{g}{2} \tau_{\sss K} W_\mu^{\sss K} - i\frac12 g^\prime B_\mu\right)\phi.
&\quad
\end{align}
Here, $\tau_{\sss K}$ denotes the Pauli matrices, and $\epsilon_{IJK}$ is antisymmetric and normalized to unity.
\begin{table}[t]
\begin{center}
\resizebox{\textwidth}{!}{
\hspace*{-0.5cm}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Operators} & \multicolumn{2}{c|}{ Limit on $C_i$ [${\rm TeV}^{-2}$]} & \multirow{2}{*}{Operators} & \multicolumn{2}{c|}{ Limit on $C_i$ [${\rm TeV}^{-2}$]}\\
\cline{2-3}\cline{5-6}
{} & Individual & Marginalized & {} & Individual & Marginalized \\
\hline
$\Op{\phi D}$ & [-0.021,0.0055]~\cite{Ellis:2018gqa} & [-0.45,0.50]~\cite{Ellis:2018gqa} &
$\Op{t \phi}$ & [-5.3,1.6]~\cite{Hartland:2019bjb} & [-60,10]~\cite{Hartland:2019bjb}\\
\hline
$\Op{\phi d}$ & [-0.78,1.44]~\cite{Ellis:2018gqa} & [-1.24,16.2]~\cite{Ellis:2018gqa} &
$\Op{tB}$ & [-7.09,4.68]~\cite{Buckley:2015lku} & $-$\\
\hline
$\Op{\phi B}$ & [-0.0033,0.0031]~\cite{Ellis:2018gqa} & [-0.13,0.21]~\cite{Ellis:2018gqa} &
$\Op{tW}$ & [-0.4,0.2]~\cite{Hartland:2019bjb} & [-1.8,0.9]~\cite{Hartland:2019bjb}\\
\hline
$\Op{\phi W}$ & [-0.0093,0.011]~\cite{Ellis:2018gqa} & [-0.50,0.40]~\cite{Ellis:2018gqa} &
$\Op{\phi Q}^{\sss (1)}$ & [-3.10,3.10]~\cite{Buckley:2015lku} & $-$\\
\hline
$\Op{\phi WB}$ & [-0.0051,0.0020]~\cite{Ellis:2018gqa} & [-0.17,0.33]~\cite{Ellis:2018gqa} &
$\Op{\phi Q}^{\sss (3)}$ & [-0.9,0.6]~\cite{Hartland:2019bjb} & [-5.5,5.8]~\cite{Hartland:2019bjb}\\
\hline
$\Op{W}$ & [-0.18,0.18]~\cite{Butter:2016cvz} & $-$ &
$\Op{\phi t}$ & [-6.4,7.3]~\cite{Hartland:2019bjb} & [-13,18]~\cite{Hartland:2019bjb}\\
\hline
{$\Op{\phi}$} & $-$ & $-$ &
{} & {} & {}\\
\hline
\end{tabular}
}
\end{center}
\caption{
Limits on the Wilson coefficients $C_i$ [TeV$^{-2}$] for the SMEFT operators listed in table~\ref{tab:OP_DEF}.}
\label{tab:OP_CONSTR}
\end{table}
In the following we perform a simple sensitivity study focusing on the Higgs self-couplings and the top quark's EW couplings.
In table~\ref{tab:OP_CONSTR} we summarize the current constraints on Wilson coefficients corresponding to the operators in table~\ref{tab:OP_DEF}.
\subsection{Higgs self-couplings at muon colliders}\label{sec:eft_higgs}
A precise determination of the Higgs boson's properties is one of the foremost priorities of the high-energy physics community~\cite{Strategy:2019vxc,EuropeanStrategyGroup:2020pow}.
At the moment, measurements of the Higgs's couplings to the heaviest fermions and gauge bosons are in full agreement with the SM predictions.
However, there exist several couplings that have yet to be measured, and in some cases bounds are only weakly constraining.
Among these are the Yukawa couplings to the first and second generation of fermions as well as the shape of the SM's scalar potential.
Accordingly, a determination of the Higgs's trilinear and quartic self-couplings, which are fully predicted in the SM,
would certainly help elucidate the EW symmetry breaking mechanism~\cite{Chung:2012vg} and its role in the thermal history of the universe.
Despite this motivation, measurements of the Higgs's self-interactions appear to be too challenging for the LHC, unless substantial deviations from the SM
exist~\cite{Sirunyan:2017guj, CMS:2018rig, CMS:2018ccd, ATLAS:2018otd, Aaboud:2018zhh, Aad:2019uzh, Aad:2019yxi, Sirunyan:2018iwt, CMS:2018dvu, ATLAS:2019pbo, ATL-PHYS-PUB-2014-019, ATL-PHYS-PUB-2017-001, Kim:2018cxf}.
As such, conclusively measuring the Higgs's properties is among the most compelling motivations for constructing
a lepton collider at a c.m.~energy of a few hundred GeV.\footnote{It is remarkable that a 100 m radius circular muon collider could reach this energy~\cite{Delahaye:2019omf}.}
The case for higher energies is also well-founded.
For example:
Higgs sensitivity studies for CLIC up to $\sqrt{s}=3$ TeV~\cite{Roloff:2018dqu, Vasquez:2019muw, Roloff:2019crr, Liu:2018peg, Maltoni:2018ttu, deBlas:2019rxi}
support the expectation that increasing collider energy provides additional leverage for precision measurements through VBF channels.
Indeed, as already shown in Fig.~\ref{fig:SMh},
VBF processes emerge as the dominant vehicles for $H, HH,$ and even $HHH$ production at high-energy lepton colliders and surpass $s$-channel processes below $\sqrt{s}=3$ TeV.
Likewise, at $\sqrt{s}=10$ TeV and with a benchmark luminosity of $\mathcal{L}=10 {\rm ~ab^{-1}}$, one anticipates $8\cdot 10^6$ Higgs bosons in the SM \cite{Delahaye:2019omf}.
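This count corresponds to an inclusive single-Higgs rate of roughly $\sigma_H \approx 8\cdot 10^6 / 10{\rm ~ab^{-1}} = 0.8{\rm ~pb}$.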
As backgrounds are expected to be under good control, multi-TeV muon colliders function as de facto Higgs factories.
Certainly, the limitations to extracting the Higgs's self-couplings at the LHC and $e^+e^-$ colliders motivate other opportunities, particularly those offered by muon colliders.
However, past muon collider studies on the Higgs have been limited in scope,
focusing largely on properties determination within the SM~\cite{Barger:1995hr,Han:2012rb,Greco:2016izi}
and
its minimal extensions~\cite{Barger:1995hr,Chakrabarty:2014pja,Buttazzo:2018qqp}.
Only recently have more robust, model-independent investigations been conducted~\cite{Ruhdorfer:2019utl,Chiesa:2020awd}.
Expanding on this work, we perform in this section a first exploratory study on determining the SM's full scalar potential in a model-independent fashion using SMEFT.
Within the SMEFT framework, three operators directly modify the Higgs potential:
\begin{equation}
\mathcal{O}_{\varphi}, \quad \mathcal{O}_{\varphi d}, \quad\text{and}\quad \mathcal{O}_{\varphi D}.
\label{eq:eft_higgs_dim6ops}
\end{equation}
The first contributes to the Higgs potential's cubic and quartic terms and shifts the field's ($\varphi$'s) vev $v$.
The latter two modify the Higgs boson's kinetic term and a field redefinition is necessary to recover the canonical normalization.
All of these operators give a contribution to VBF production of $H, HH,$ and $HHH$ through the following Lagrangian terms:
\begin{align}
&\mathcal{O}_{\varphi}= \left(\varphi^\dagger \varphi - \frac{v^2}{2}\right)^3 \supset v^3 H^3 + \frac{3}{2}v^2 H^4, \label{eq:op_expansion_varphi}\\
&\mathcal{O}_{\varphi d}= \left(\varphi^\dagger\varphi\right)\Box \left(\varphi^\dagger\varphi\right) \supset 2v \left(H\Box H^2 + H^2\Box H\right) + H^2 \Box H^2, \\
& \mathcal{O}_{\varphi D}= \left(\varphi^\dagger D _\mu \varphi\right)^\dagger\left(\varphi^\dagger D^\mu\varphi\right) \supset \frac{v}{2}H \partial_\mu H \partial^\mu H +\frac{H^2}{4} \partial_\mu H \partial^\mu H.
\label{eq:op_expansion}
\end{align}
For conciseness, we investigate only the impact of $\mathcal{O}_{\varphi}$ and $\mathcal{O}_{\varphi d}$ on prospective Higgs's self-coupling measurements.
We neglect $\mathcal{O}_{\varphi D}$ since it also modifies couplings to gauge bosons and hence is already well-constrained by precision EW measurements.
(See table~\ref{tab:OP_CONSTR} for details.)
In the following, we consider a high-energy $\mu^+\mu^-$ collider at a c.m.~energy of $\sqrt{s}=3, 14$, and 30 TeV,
with respective benchmark luminosities $\mathcal{L}=6, 20$, and 100 ${\rm ~ab^{-1}}$.
For the processes under consideration, we first discuss the impact of a single operator on inclusive cross sections while fixing all other higher dimensional Wilson coefficients to zero.
Within the SMEFT, the total cross section $(\sigma)$ of a process can be expressed by
\begin{equation}
\sigma = \sigma_{ SM} + \sum_i c_i \sigma_{ Int}^i + \sum_{i,j} c_{i,j} \sigma_{ Sq}^{i,j} \, .
\label{eq:eft_xsec_smeft}
\end{equation}
Here the $\sigma_{ Int}^i$ are the leading corrections in the $\Lambda$ power counting to the SM cross sections $( \sigma_{ SM})$
and are given by the interference between SM amplitudes and SMEFT amplitudes at $\mathcal{O}(\Lambda^{-2})$.
The $\sigma_{ Sq}^{i,j}$ corrections are the square contributions at $\mathcal{O}(\Lambda^{-4})$, and come purely from SMEFT amplitudes at $\mathcal{O}(\Lambda^{-2})$.
The indices $i,j$ run through the set of operators that directly affect the process.
We work under the assumption that the Wilson coefficients $C_i$ for operators in equation~\ref{eq:eft_higgs_dim6ops} are real.
This means that the coefficients in $\sigma$ are $c_i = C_i$ and $c_{i,j} = C_i C_j$.
As a na\"ive measure of the sensitivity to the dimension-six operators $\mathcal{O}_i$
and considering only one operator at the time, we define the ratio
\begin{equation}
R(c_i)\equiv \frac{\sigma}{\sigma_{ SM}}= 1+ c_i\frac{\sigma_{ Int}^i}{\sigma_{ SM}} + c_{i,i} \frac{\sigma_{ Sq}^{i,i}}{\sigma_{ SM}} = 1 + r_i + r_{i,i}.
\label{Eq:sens_ratio}
\end{equation}
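As an aside, the quadratic structure of equations~\ref{eq:eft_xsec_smeft} and \ref{Eq:sens_ratio} is simple to evaluate numerically.
The following minimal Python sketch does so for a single operator; the numerical inputs are placeholders rather than results of this work:
\begin{verbatim}
# Sketch: evaluate the SMEFT cross section and the ratio R(c_i) for a
# single operator, with c_i = C and c_{i,i} = C^2 for a real Wilson
# coefficient C [TeV^-2]. All numerical inputs are placeholders.

def sigma_smeft(C, sm, int_i, sq_ii):
    """sigma = sigma_SM + C*sigma_Int + C^2*sigma_Sq."""
    return sm + C * int_i + C**2 * sq_ii

def ratio(C, sm, int_i, sq_ii):
    """Sensitivity ratio R(c_i) = sigma/sigma_SM = 1 + r_i + r_{i,i}."""
    return sigma_smeft(C, sm, int_i, sq_ii) / sm

sm, int_i, sq_ii = 1.0, 0.3, 0.5   # placeholder components [fb]
for C in (-1.0, 0.0, 1.0):          # Wilson coefficient [TeV^-2]
    print(f"C = {C:+.1f} TeV^-2 -> R = {ratio(C, sm, int_i, sq_ii):.3f}")
\end{verbatim}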
\begin{figure}[t]
\centering\mbox{\subfigure[]{\includegraphics[width=.45\textwidth]{Plots/madMuon_HH_sens_cp_nocuts}}}
\centering\mbox{\subfigure[]{\includegraphics[width=.45\textwidth]{Plots/madMuon_HH_sens_cdp_nocuts}}}
\caption{Sensitivity to Higgs pair production from VBF
as a function of the Wilson coefficients (a) $C_{\varphi}$ and (b) $C_{\varphi d}$, at $\sqrt{s}=3$ TeV (red), 14 TeV (blue), and 30 TeV (green).}
\label{fig:WW_HH}
\end{figure}
In figures~\ref{fig:WW_HH} and \ref{fig:WW_HHH}, we respectively plot the sensitivity ratio, as defined in equation~\ref{Eq:sens_ratio},
for $HH$ and $HHH$ production from VBF in $\mu^+\mu^-$ collisions
as a function of Wilson coefficients for operators (a) $\mathcal{O}_{\varphi}$ and (b) $\mathcal{O}_{\varphi d}$,
for representative collider energies $\sqrt{s}=3$ (red), $14$ (blue) and $30{\rm ~TeV}$ (green).
Immediately, one sees that the two operators affect the ratio $R(c_i)=\sigma/\sigma_{\rm SM}$, and hence prospects for measuring the Higgs's self couplings, in qualitatively different ways.
To explore this, we first note that $\mathcal{O}_{\varphi}$ in equation~\ref{eq:op_expansion_varphi} only shifts the Higgs's trilinear and quartic couplings.
The operator does not generate an additional energy dependence in the squared matrix element,
apart from that which could be obtained by spoiling SM unitarity cancellations.
As a result, the highest sensitivity to $\mathcal{O}_{\varphi}$ is reached near the production threshold.
Increasing $\sqrt{s}$ actually results in losing sensitivity to $HH$ production.
Similarly, for $HHH$ production, no significant impact on the cross section ratio is observed as the collider energy increases,
only a gain in the total number of events stemming from an increasing production rate.
For the particular case of $HHH$ production at $\sqrt{s}=3{\rm ~TeV}$, the cross section is negligible and no measurement for this process can be undertaken.
Independent of shifts to $R(c_i)$, it is important to point out that the higher the event rate the more feasible it becomes to study differential distributions of the above processes.
Generically, an increased number of events allows us to more fully explore, and therefore exploit, regions of phase space that are more sensitive to BSM physics.
\begin{figure}[t]
\centering\mbox{\subfigure[]{\includegraphics[width=.45\textwidth]{Plots/madMuon_WW_HHH_cp}}}
\centering\mbox{\subfigure[]{\includegraphics[width=.45\textwidth]{Plots/madMuon_WW_HHH_cdp}}}
\caption{Same as figure~\ref{fig:WW_HH} but for triple Higgs production from VBF.}
\label{fig:WW_HHH}
\end{figure}
Contrary to $\mathcal{O}_{\varphi}$, the operator $\mathcal{O}_{\varphi d}$ introduces a kinematical $p^2$ dependence in interaction vertices.
As a consequence, the impact of $\mathcal{O}_{\varphi d}$ grows stronger and stronger as collider energy increases, potentially leading to a substantial gain in sensitivity.
The imprint of this behavior is visible in the fact that the $c_i$ interference term between the SM and new physics becomes negligible as probing energy goes up.
In this limit, the squared $c_{i,i}$ term dominates as na\"ively expected from power counting at higher energies.
This follows from the purely new physics contributions in SMEFT forcing $R(c_i)$ to grow at most as $(E/\Lambda)^4$,
while the linear $c_i$ contributions force $R(c_i)$ to grow at most as $(E/\Lambda)^2$.
Leaving aside questions of the EFT's validity when $(E/\Lambda)^4$ corrections exceed those at $(E/\Lambda)^2$,
our point is simply that sensitivity to $\mathcal{O}_{\varphi}$ and $\mathcal{O}_{\varphi d}$ is driven by complementary phase space regions.
As a final comment, we would like to note that while the study of individual SMEFT operators can give important and useful information,
in a realistic BSM scenario, multiple operators would simultaneously contribute to a given observable.
In this more complicated scenario, correlations and numerical cancellations among operators appear, and phenomenological interpretations become more nuanced, more difficult.
If we nevertheless put ourselves in the scenario where a measured cross section $(\sigma)$ is consistent with the SM,
then we can still define a simplified estimate of the experiment's constraining power.
In particular, we define the space of Wilson coefficients
that predicts a cross section indistinguishable from SM expectation at the 95\% confidence level (CL) by the following:
\begin{equation}
\frac{S}{\sqrt{B}} = \frac{|\mathcal{L} \cdot (\sigma - \sigma_{SM})|}{\sqrt{\mathcal{L} \cdot \sigma_{SM}}} \le 2 \, .
\label{eq:sig_back}
\end{equation}
Here $\sigma$ is the same SMEFT cross section as defined in equation~\ref{eq:eft_xsec_smeft}.
The number of background events $B$ is the SM expectation $(\sigma_{SM})$ at a given luminosity $\mathcal{L}$,
and the number of signal events $S$ is determined from the net difference between SMEFT and SM expectations.
\begin{figure}[t]
\centering\mbox{\subfigure[]{\includegraphics[width=.45\textwidth]{Plots/madMuon_2d_lim_3tev}}}
\centering\mbox{\subfigure[]{\includegraphics[width=.45\textwidth]{Plots/madMuon_2d_lim_14tev}}}
\caption{
Allowed Wilson coefficient space under hypothesized measurements of (a) $HH$ at $\sqrt{s}= 3$ TeV and (b) both $HH$ and $HHH$ at 14 TeV,
for when only linear $c_i$ corrections to cross sections are retained (red band) and when quadratic $c_{i,j}$ contributions are also included (blue band).
}
\label{fig:2d_lim}
\end{figure}
If we restrict ourselves to the two aforementioned operators,
then equation~\ref{eq:sig_back} identifies an annulus or a disk in the 2D parameter space of Wilson coefficients.
Hence, by combining observables one can gain constraining power by breaking such degeneracies.
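As a rough numerical illustration of this point, one can scan the two-dimensional coefficient plane and retain the points satisfying equation~\ref{eq:sig_back} for each channel separately.
The Python sketch below does so; all cross-section components are hypothetical placeholders, not predictions of this work:
\begin{verbatim}
import numpy as np

# Sketch: intersect the regions allowed by the S/sqrt(B) <= 2 condition
# for two channels (say, HH and HHH). Each comp tuple holds hypothetical
# placeholder cross-section components [fb]:
# (SM, int1, int2, sq11, sq22, sq12).

def xsec(c1, c2, comp):
    sm, i1, i2, s11, s22, s12 = comp
    return sm + c1*i1 + c2*i2 + c1**2*s11 + c2**2*s22 + c1*c2*s12

def allowed(c1, c2, comp, lumi):
    # 95% CL condition: |L (sigma - sigma_SM)| / sqrt(L sigma_SM) <= 2
    return np.abs(lumi*(xsec(c1, c2, comp) - comp[0])) \
           / np.sqrt(lumi*comp[0]) <= 2.0

comp_HH  = (1.0, 0.3, 0.8, 0.2, 2.0, 0.5)        # placeholders
comp_HHH = (0.01, 0.004, 0.02, 0.01, 0.3, 0.05)  # placeholders
lumi = 20e3                                       # 20 ab^-1 in fb^-1
c1, c2 = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-1, 1, 400))
both = allowed(c1, c2, comp_HH, lumi) & allowed(c1, c2, comp_HHH, lumi)
print(f"fraction of scanned plane allowed by both: {both.mean():.3f}")
\end{verbatim}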
To see this explicitly, we show in Fig.~\ref{fig:2d_lim} the 2D contour of allowed Wilson coefficients for $\mathcal{O}_{\varphi}$ and $\mathcal{O}_{\varphi d}$
at (a) $\sqrt{s}=3{\rm ~TeV}$ via $HH$ production and (b) $14{\rm ~TeV}$ via both $HH$ and $HHH$ production.
(As mentioned above, the $HHH$ rate at 3 TeV is insignificant and hence omitted here.)
Solutions to equation \ref{eq:sig_back} are provided under the assumption that only linear $(c_i)$ corrections to $\sigma$ are retained (red band)
as well as when quadratic $(c_{i,j})$ corrections are included (blue band).
We also report the projected, marginalized limits on the two Wilson coefficients in table~\ref{tab:2d_lim}.
Clearly, the lower energy machine leaves a much larger volume of parameter space unconstrained.
In the 3 TeV case, the absence of a second measurement leads to a flat direction in the linear case,
making it impossible to conclusively constrain the parameter space.
This represents a strong case for measuring triple Higgs production at lepton colliders in order to pin down the Higgs's self-couplings.
From this perspective, we argue that a $\sqrt{s}=14$ TeV lepton (muon or electron) collider would be ideal over lower energy scenarios.
Such a machine allows us to take advantage of both double and triple Higgs production,
and at last measure the SM's scalar potential.
It is important to point out that in order to have realistic assessments, one would need to perform a global study that includes multiple processes and operators together.
In particular, while $\mathcal{O}_{\varphi}$ affects only $HH$ and $HHH$ production, $\mathcal{O}_{\varphi d}$ also shifts the $HWW$ coupling.
This means that the total rate of single Higgs production is affected by $\mathcal{O}_{\varphi d}$, and therefore one can constrain the corresponding Wilson coefficient.
Even though the sensitivity to this operator is lower and does not grow with energy, the high statistics foreseen at multi-TeV lepton colliders are such that
it will be heavily constrained by the inclusive measurement of single Higgs production.
Assuming the aforementioned luminosities, we estimate the $95\%$ confidence level limits on $\mathcal{O}_{\varphi d}$ to be roughly $[-0.01, 0.01] {\rm ~TeV}^{-2}$ at 3 TeV and $[-0.004, 0.004] {\rm ~TeV}^{-2}$ at 14 TeV.
Nonetheless, we found it instructive to include this operator in the study, given the high sensitivity
(see figures~\ref{fig:WW_HH} and \ref{fig:WW_HHH}) caused by derivative couplings that lead to unitarity-violating effects.
Despite this, we note that stronger limits can be obtained from single Higgs production, and therefore only $\mathcal{O}_{\varphi}$ (and potentially dimension-eight operators) will be relevant for $HH$ and $HHH$ production.
In order to offer a comparison with other hypothetical future collider proposals, we quote here the projections from combined results at FCC-ee$_{240}$, FCC-ee$_{365}$, FCC-eh and FCC-hh, as reported in Ref.~\cite{deBlas:2019rxi}.
The first two are $e^+e^-$ colliders with $\mathcal{L}= 5$, $1.5{\rm ~ab^{-1}}$ at $\sqrt{s}=240$, $365{\rm ~GeV}$ respectively.
The third is an $e^\pm p$ collider with $\mathcal{L}= 2{\rm ~ab^{-1}}$ at $\sqrt{s}=3.5{\rm ~TeV}$,
while the last is a $pp$ collider with $\mathcal{L}= 30{\rm ~ab^{-1}}$ at $\sqrt{s}=100{\rm ~TeV}$.
Under these scenarios, the projected individual bounds at 68\% CL for operators we consider are
\begin{equation}
C_{\varphi} \sim [-0.79, 0.79] \textrm{ TeV$^{-2}$} \, \quad\text{and}\quad C_{\varphi d} \sim [-0.03, 0.03] \textrm{ TeV$^{-2}$} \, .
\end{equation}
At a $\sqrt{s}=14{\rm ~TeV}$ muon collider, the anticipated sensitivity to the individual operators at 68\% CL, from measuring
single Higgs production as well as double and triple Higgs production, is
\begin{equation}
C_{\varphi} \sim [-0.02, 0.02] \textrm{ TeV$^{-2}$} \, \quad\text{and}\quad C_{\varphi d} \sim [-0.002, 0.002] \textrm{ TeV$^{-2}$} \, .
\end{equation}
The difference is roughly a factor of $40$ for $C_{\varphi}$ and a factor of $15$ for $C_{\varphi d}$.
In the absence of $HH$ production, the results here are comparable to those reported elsewhere~\cite{Chiesa:2020awd}.
This na\"ive comparison again shows the potential of a high-energy lepton collider in studying EW physics, allowing us
to reach a precision that is certainly competitive with what is attainable at other proposed colliders.
\begin{table}[!t]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
& 3 TeV & 14 TeV \\
\hline
$C_{\varphi}$ & [-3.33, 0.65] & [-0.66, 0.23]\\
$C_{\varphi d}$ & [-1.31, 1.39] & [-0.17, 0.30] \\
\hline
\end{tabular}
\end{center}
\caption{Marginalized projected limits at 95\% confidence level on the Wilson coefficients in TeV$^{-2}$.}
\label{tab:2d_lim}
\end{table}
\subsection{Top electroweak couplings at muon colliders}\label{sec:eft_top}
Due to its ultra-heavy mass and complicated decay topologies,
the era of precision top quark physics has only recently begun in earnest at the LHC.
This is despite the particle's discovery decades ago
and rings particularly true for the quark's neutral EW interactions~\cite{Giammanco:2017xyn}.
For example:
The associated production channel $t\overline{t}Z$ was only first observed using the entirety of the LHC's Run I dataset~\cite{Aad:2015eua,Khachatryan:2015sha}.
Likewise, the single top channel $tZ$ was observed only for the first time during the Run II program~\cite{Aaboud:2017ylb,Sirunyan:2018zgs}.
And importantly, only recently has the direct observation of the $t\bar{t}H$ production process
confirmed that the top quark's Yukawa coupling to the Higgs boson is $\mathcal{O}(1)$~\cite{CMS:2018rbc, Sirunyan:2018mvw, Sirunyan:2018shy, Aaboud:2017jvq, Aaboud:2017rss}.
Since a precision program for measuring the top quark's EW couplings is still in its infancy, there exists a margin for $\mathcal{O}(10\%)$ deviations from SM expectations.
This makes it especially important to understand how best to measure these couplings, as searching for deviations could reveal new physics.
On this pretext, Ref.~\cite{Maltoni:2019aot} studied a class of $2 \to 2$ scattering processes involving the top quark and the EW sector within the SMEFT framework.
There, the authors performed a systematic analysis of unitarity-violating effects induced by higher dimensional operators.
By considering $2 \to 2$ scattering amplitudes embedded in physical processes at present and future colliders, specific processes were identified that exhibited a distinct sensitivity to new physics.
Among these processes, VBF at future lepton colliders stands out.
The Wilson coefficients belonging to the operators in table~\ref{tab:OP_DEF} that impact VBF processes and involve the top quark are not strongly constrained.
Hence, an improved measurement of these channels is important for the indirect tests of a plethora of BSM models.
In the context of a multi-TeV muon collider and following the proposal of Ref.~\cite{Henning:2018kys},
in this subsection we consider and compare the constraining potential of $2 \to 3$ processes on anomalous couplings of the top quark.
Even though such processes feature more complex Feynman diagram topologies and additional phase space suppression,
their utility within the SMEFT framework stems from also featuring
higher-point (higher-leg) contact interactions with a stronger power-law energy dependence at tree-level.
In addition, a larger number of diagrammatic topologies translates into more possibilities to insert dimension-six operators,
which, roughly speaking, may trigger larger deviations from the SM.
(Though arguably larger cancellations are also possible.)
Owing to understandable limitations, such as finite computing resources, such considerations have not been widely investigated before.
As an example, we consider the operator $\Op{t W}$ from table~\ref{tab:OP_DEF}.
For the case of $W^+ W^- \to t \bar{t}$ scattering,
this operator generates the four-point contact vertex
\begin{align}
\Op{t W} ~=~ i\big(\bar{Q}\sigma^{\mu\nu}\,\tau_{\sss I}\,t\big)\,
\tilde{\phi}\,W^I_{\mu\nu}
+ \text{H.c.} ~
\supset \bar{t} \sigma^{\mu\nu} t \, v \, W_{\mu} W_{\nu} \, + \text{H.c.}
\end{align}
Here, one has to pay a vev penalty of $(v/\Lambda)$, where the $v$ originates from the Higgs doublet $\varphi$,
and thereby makes the term effectively a dimension-five contact term.
On the other hand, by extending the final state with a Higgs field one can saturate the operator:
\begin{equation}
\Op{t W} \supset \bar{t} \sigma^{\mu\nu} t \, H \, W_{\mu} W_{\nu} \, + \text{H.c.}
\end{equation}
Remarkably, instead of $(v/\Lambda)$, one is ``penalized'' by a factor of $(E/\Lambda)$,
where the energy dependence originates from the three-body phase space volume.
This mechanism is rather generic and hence can be exploited for other operators and multiplicities
in order to maximize the energy growth of amplitudes,
and therefore the sensitivity to new physics.
For concreteness, we compare the $2\to2$ production of $t\bar{t}$ from VBF
to the $2 \to 3$ associated production of $t\bar{t}H$ and $t\bar{t} Z$ from VBF.
For each process we present in Fig.~\ref{Fig:radars} the
ratio coefficients $\vert r_i \vert$ and $r_{i,i}$ of $R(c_i)$ as defined in equation~\ref{Eq:sens_ratio}, in the compact, radar plot format.
More specifically, for several SMEFT operators (presented in the polar direction) we plot
(left) the absolute value of the interference term $r_i$ at $\mathcal{O}(\Lambda^{-2})$
and (right) the quadratic term $r_{i,i}$ at $\mathcal{O}(\Lambda^{-4})$ in the radial direction (in logarithmic scale).
We fix each Wilson coefficient to a representative value of $C_\mathcal{O}=1$ TeV$^{-2}$ and consider collider energies $\sqrt{s}=3{\rm ~TeV}$ (blue dots) and $\sqrt{s}=14{\rm ~TeV}$ (red dots).
Contours at $r_i,~r_{i,i}=1$ are bolded for clarity.
Also reported in the figure are the total cross sections [fb] predicted in the SM.
\begin{figure}[!t]
\centerline{\subfigure[]{\includegraphics[width=.925\textwidth]{Plots/madMuon_WW_TT_radar} \label{fig:radars_ttX}}}
\centerline{\subfigure[]{\includegraphics[width=.925\textwidth]{Plots/madMuon_WW_TTH_radar} \label{fig:radars_ttH}}}
\centerline{\subfigure[]{\includegraphics[width=.925\textwidth]{Plots/madMuon_WW_TTZ_radar} \label{fig:radars_ttZ}}}
\caption{Impact of dimension-six operators (polar direction) on
(left) the interference term $\vert r_i\vert $ and (right) the quadratic term $r_{i,i}$ (radial direction in logarithmic scale) from the ratio $R$ in equation~\ref{Eq:sens_ratio}
for the EW VBF $\to t \bar{t} (H/Z)$ processes at a lepton collider of $\sqrt{s}=3{\rm ~TeV}$ (blue dots) and $14{\rm ~TeV}$ (red dots),
assuming a Wilson coefficient of $1 {\rm ~TeV}^{-2}$.}
\label{Fig:radars}
\end{figure}
We observe in the $t\bar{t}$ case (Fig.~\ref{fig:radars_ttX}) that the sensitivity to the operators under consideration is somewhat marginal.
For both the linear (left) and quadratic (right) ratios, deviations reach at most $\mathcal{O}(10\%)$.
The exception is $\mathcal{O}_{tW}$, which features an $r_{i,i}$ term that can reach $\mathcal{O}(1-10)$ at $\sqrt{s}=3-14{\rm ~TeV}$.
For all operators, linear contributions do not vary appreciably when passing from a c.m.~energy of 3 TeV to 14 TeV.
On the other hand, the quadratic terms exhibit an overall growth, just not a dramatic one.
The smallness of $\vert r_i\vert$ contributions suggests that considering higher multiplicity processes, such as $t\overline{t}H$ and $t\overline{t}Z$, could prove more sensitive to new physics,
despite na\"ive phase space suppression.
Adding a Higgs boson (Fig.~\ref{fig:radars_ttH}) or a Z boson (Fig.~\ref{fig:radars_ttZ}) in the final state has a noticeable,
quantitative impact on the overall behavior of ratio coefficients in the radar plots.
When looking at the linear interference terms, it is surprising to see that many of the operators' contributions decrease when going to higher energies.
On the other hand, a sensitivity gain is unambiguous for all the operators in the quadratic case, which reach as much as $\mathcal{O}(100)$.
The behavior of interference is often more subtle to predict.
Since the interference is not positive definite, cancellations can and do readily take place depending on the specific phase space region that is considered.
In particular, we infer that at higher energies these cancellations are enhanced, leading effectively to a lower sensitivity at the inclusive level.
Generically, each operator and process has a cancellation pattern of its own,
which is also reinforced by the linear independence of SMEFT operators.
Hence, designing a single recipe for every operator to invert cancellations with the aim of fully exploiting the increased sensitivity to energy is complicated.
On the other hand, dedicated studies could lead to the discovery of a most sensitive (or a highly optimized)
phase space region for a specific set of operators, enhancing the possibility to detect new physics.
While being more difficult to measure,
these $2\to3$ processes offer an overall improvement to sensitivity with respect to $2\to2$ production of $t \bar{t}$.
This is both from the energy-growing perspective and from an absolute one.
In essence, our very preliminary study here suggests that having a multi-TeV muon collider would benefit us for at least two reasons:
(i) Due to phase space enhancements $(E/\Lambda)$,
a higher energy collider would allow us to take advantage of larger deviations from SM expectations, and hence higher sensitivity to SMEFT operators.
(ii) The growth in the inclusive VBF cross section would allow us to have enough statistics to precisely measure higher multiplicity final states
that would otherwise be infeasible even at $\sqrt{s}=3{\rm ~TeV}$.
For example: we compare the $\sim 100$ $t \bar{t} H$ events at 3 TeV to the $\sim 3000$ at 14 TeV,
assuming the benchmark luminosities considered ($\mathcal{L}=6{\rm ~ab^{-1}}$ and $20{\rm ~ab^{-1}}$, respectively).
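For reference, these event counts correspond to inclusive rates of roughly $100/6{\rm ~ab^{-1}} \approx 0.017{\rm ~fb}$ and $3000/20{\rm ~ab^{-1}} = 0.15{\rm ~fb}$, respectively.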
The program to precisely determine the top quark's EW interactions would therefore benefit greatly from a potential future muon collider
by allowing us to take into account new processes that could help break degeneracies among SMEFT operators and learn about the dynamics of EW symmetry breaking.
\section{Introduction}\label{sec:intro}
Standing out among the important results that the Large Hadron Collider (LHC) has thus far delivered are the discovery of the Higgs boson $(H)$ and the measurements of its properties.
On the other hand, long-awaited evidence of new physics based on theoretical arguments, such as the stabilization of the electroweak (EW) scale,
or on experimental grounds, such as dark matter and neutrino masses, has evaded our scrutiny.
Despite the fact that the LHC's physics program is far from over, and with Run III and the upgrade to the
high luminosity LHC (HL-LHC) already lined up,
the time has come for the high-energy community to assess what could be next in exploring the energy frontier.
Such a question, which has been the main theme of the activities built around the European Strategy Update for Particle Physics~\cite{Strategy:2019vxc,EuropeanStrategyGroup:2020pow}, is not an easy one:
Current physics and technology challenges are formidable.
The fact that we have no clear indication where the scale of new physics might reside hampers the definition of a clear target.
And depending on the properties of the new phenomena, either ``low-energy'' precision measurements or searching for new states in ``high-energy'' direct production may be the most sensitive and informative strategy to follow.
In any case, exploration of the energy frontier will require building a new collider.
So far, two main options have actively been discussed by the community:
a very energetic hadron collider with a center-of-mass (c.m.) energy of $\sqrt{s}=100$ TeV, and an $e^+e^-$ collider, at either high energy (up to a few TeV) or ultra-high luminosity.
These two classes have very different characteristics.
The former has a much higher discovery reach of new states,
while the latter is feasible on a shorter time scale and allows a precision-measurement campaign of the Higgs/EW sector.
Both avenues entail incredible investments, an intense research and development program, and formidable engineering capabilities.
However, as construction of such collider experiments will not start for at least another 15-20 years and will then require up to 20-40 more years of operation to achieve tentative physics targets,
the community has started to seriously consider other avenues.
This includes scenarios once thought too audacious or just impossible with even foreseeable technology.
In this context, both linear $e^+e^-$ and circular $\mu^+\mu^-$ machines running at energies of several-to-many TeV have recently experienced a boost of interest within the community.
In the former case, novel techniques based on plasma acceleration could potentially deliver up to several GeV/$m$ acceleration gradients, thereby reaching TeV scales on the km range~\cite{Gschwendtner:2015rni}.
An outstanding challenge in this case, however, is delivering the instantaneous luminosity needed to meet physics goals.
Accelerating muons, on the other hand, would allow one to merge the best of both hadron and $e^+e^-$ colliders~\cite{Palmer:1996gs,Ankenbrandt:1999cta}, i.e., a high energy reach on one side and a ``clean'' environment on the other.
Such a facility could possibly reach luminosities in the range of $L=10^{35}$ cm$^{-2}$ s$^{-1}$ (or 100 nb$^{-1}$ Hz)~\cite{Delahaye:2019omf} and, importantly, be hosted at preexisting laboratory sites and tunnel complexes.
These dream-like features are counterbalanced by a number of outstanding challenges, all of which eventually originate from a simple fact: muons are unstable and decay weakly into electrons and neutrinos.
Conceptual studies of muon colliders started decades ago and recently resulted in the Muon Accelerator Program (MAP) project \cite{Palmer:2014nza}.
In the MAP proposal, muons are produced as tertiary particles in the decays of pions,
which themselves are created by an intense proton beam impinging on a heavy-material target, as already achievable at accelerator-based muon and neutrino facilities.
The challenge is that muons from pion decays have relatively low energy but large transverse and longitudinal emittance. Hence, they must be ``cooled'' in order to achieve high beam luminosities.
More recently, a different approach to muon production has been proposed:
in the Low EMittance Muon Accelerator (LEMMA) scheme, muons are produced in $e^+ e^-$ annihilation near the threshold for $\mu^+ \mu^-$ pair creation \cite{Antonelli:2013mmk,Antonelli:2015nla}.
A novelty is that muon beams do not require cooling to reach high instantaneous luminosities.
This is because when a high-energy positron beam annihilates with electrons from a target the resulting muons are highly collimated and possess very small emittance.
Muons are then already highly boosted with $\gamma\sim 200$ and reach a lifetime of $\tau\sim 500\,\mu$s \cite{Antonelli:2013mmk}.
The low emittance of the muons may further allow high beam luminosities with a smaller number of muons per bunch.
This results in a lower level of expected beam-induced background, alleviating also potential radiation hazards due to neutrinos \cite{Bartosik:2019dzq}.
Given the recent interest and fast progress on how to overcome technological challenges, the most urgent mission becomes to clearly identify the reach and physics possibilities that such machines could offer.
Available studies for the CLIC $e^+ e^-$ linear collider at 3 TeV have been used to gauge the potential of a muon collider in the multi-TeV range.
Earlier, dedicated studies focusing mostly on processes arising from $\mu^+ \mu^-$ annihilation are
available~\cite{Barger:1995hr,Palmer:1996gs,Ankenbrandt:1999cta,Han:2012rb,Chakrabarty:2014pja,Greco:2016izi,Buttazzo:2018qqp,Delahaye:2019omf,Ruhdorfer:2019utl},
and indicate promising potential for finding new physics from direct searches as well as from indirect searches with precision measurements of EW physics.
The work here is motivated by the observation that at sufficiently high energies we expect EW vector boson fusion and scattering (collectively denoted as VBF)
to become the dominant production mode at a multi-TeV lepton collider.
While well-established for (heavy) Higgs production~\cite{Dawson:1984ta,Hikasa:1985ee,Altarelli:1987ue,Kilian:1995tr,Gunion:1998jc} and
more recently for the production of heavy singlet scalars~\cite{Buttazzo:2018qqp},
we anticipate that this behavior holds more broadly for all Standard Model (SM) final states relevant to studying the EW sector and/or the direct search of (not too heavy) new physics.
To this aim, we present a systematic exploration of SM processes featuring $W$, $Z$, $H$ bosons and top quarks $t$ in the final state.
We investigate and compare $s$-channel annihilation and VBF cross sections in high-energy, lepton-lepton collisions, quantifying the size and the growth of the latter with collider energy.
We consider the potential utility of precision SM measurements, focusing on a few representative examples, namely in the context of the SM effective field theory (SMEFT)~\cite{Grzadkowski:2010es,Aebischer:2017ugx,Brivio:2017btx}.
Finally, we consider the direct and indirect production of new, heavy states in a number of benchmark, beyond the SM (BSM) scenarios.
Having in mind the luminosity scenarios envisaged for a muon collider~\cite{Delahaye:2019omf},
we conclude that a multi-TeV lepton collider could offer a wide range of precision measurements of EW and Higgs couplings as well as sensitivity to new resonances beyond present experiments.
For example: a $\sqrt{s}=10$ TeV muon collider with an integrated luminosity of $\mathcal{L}=10\, {\rm ~ab^{-1}}$ would produce about $ 8\cdot 10^6$ Higgs bosons,
with about $ 3\cdot 10^4$ from pair production alone.
This provides direct access to the trilinear coupling of the Higgs~\cite{Delahaye:2019omf} and gives an excellent perspective on the quartic coupling~\cite{Chiesa:2020awd}.
This study is organized in the following manner:
In section~\ref{sec:setup} we briefly summarize our computational setup and SM inputs for reproducibility.
We then set the stage in section~\ref{sec:ppvsmuon} by presenting and critically evaluating simple methods to
estimate and compare the discovery potential of a hadron collider with that of a high-energy lepton collider.
In section~\ref{sec:sm} we present production cross sections for SM processes involving the Higgs bosons, top quark pairs, and EW gauge bosons in $\mu^+\mu^-$ collisions.
In particular, we report the total c.m.~energies at which cross sections for VBF processes, which grow as $\log {s}$,
overcome the corresponding $s$-channel, annihilation ones, which instead decrease as $1/{s}$.
In section~\ref{sec:eft} we consider the potential of a multi-TeV muon collider to facilitate precision measurements of EW processes.
We do this by exploring in detail limits that can be obtained in the context of the SMEFT by measuring $HH$ and $HHH$ production as well as final states involving the top quarks and weak bosons.
Section~\ref{sec:bsm} presents an overview on the possibilities for direct searches for new resonances at a multi-TeV muon collider,
comparing the reach with those attainable at hadron colliders.
We further investigate and compare the relative importance of VBF production in BSM searches in section~\ref{sec:bsm_vbf}.
We summarize our work in section~\ref{sec:conclusions}.
\section*{Acknowledgements}
FM, LM, and XZ would like to thank Mauro Chiesa, Roberto Franceschini, Barbara Mele and Fulvio Piccinini for many discussions on the physics of muon colliders. RR thanks Carlos Alishaan for helpful discussions.
This work has received funding from the European Union's Horizon 2020 research and innovation programme as part of the Marie Skłodowska-Curie Innovative Training Network MCnetITN3 (grant agreement no. 722104),
FNRS ``Excellence of Science'' EOS be.h Project No. 30820817.
The work of AC is supported by INFN research grant n. 20286/2018.
RR is supported under the UCLouvain fund ``MOVE-IN Louvain'' and acknowledges the contribution of the VBSCan COST Action CA16108.
Computational resources have been provided by the supercomputing facilities of the Universit\'e catholique de Louvain (CISM/UCL)
and the Consortium des \'Equipements de Calcul Intensif en F\'ed\'eration Wallonie Bruxelles (C\'ECI) funded by the Fond de la Recherche Scientifique de Belgique
(F.R.S.-FNRS) under convention 2.5020.11 and by the Walloon Region.
\section{Comparing proton colliders and muon colliders}\label{sec:ppvsmuon}
In trying to assess and compare qualitatively different colliders, it is instructive to first define a translatable measure of reach.
Therefore, in this section, we propose a simple methodology for comparing the reach of a hypothetical muon collider to what is attainable at a proton collider.
The obvious difference between the two classes of colliders is that protons are composite particles while muons are not.
This means that proton collisions involve the scattering of (primarily) QCD partons that carry only a fraction of the total available energy,
whereas muon collisions, up to radiative corrections, involve the scattering of particles carrying the total available energy.
Concretely, we investigate three process categories:
(\ref{sec:ppvsmuon_2to1}) the annihilation of initial-state particles (partons in the $pp$ case) into a single-particle
final state at a fixed final-state invariant mass $(\sqrt{\hat{s}})$;
(\ref{sec:ppvsmuon_2to2}) the two-particle final state analogue of this; and
(\ref{sec:ppvsmuon_vbf}) the scattering of weak gauge bosons.
\subsection{$2\to1$ annihilations}\label{sec:ppvsmuon_2to1}
Despite obvious differences, our aim is to compare the reach of muon and hadron colliders in a manner that is as model-independent as possible.
In all cases, we find it useful to formulate the comparison in terms of ``generalized parton luminosities''~\cite{Quigg:2009gg},
where a parton can be any particle in the initial state, be it a lepton, a QCD parton, or an EW boson.
In this language, the total, inclusive cross section for a given process in $pp$ collisions is
\begin{equation}
\sigma(p p \to X + \text{anything}) = \int_{\tau_0}^1 d\tau \sum_{ij} \Phi_{ij}(\tau, \mu_f) \, \hat{\sigma}(i j \to X) \, .
\end{equation}
Here $X$ is a generic final state with invariant mass $m_X=\sqrt{\hat{s}}=\sqrt{\tau s}$;
the parton-level cross section is given by $\hat{\sigma}(i j \to X)$ and is kinematically forbidden for $\tau<\tau_0 \equiv \min(m_X^2)/s$;
and for a c.m.~hadron collider energy $\sqrt{s}$, $\Phi_{ij}$ is the $ij$ parton luminosity, defined as
\begin{equation}\label{eq:parton_lumi_def}
\Phi_{ij}(\tau, \mu_f) \equiv \frac{1}{1 + \delta_{ij}} \int_{\tau}^1 \frac{d\xi}{\xi} \left[f_{i/p}(\xi, \mu_f) \, f_{j/p}\left(\frac{\tau}{\xi}, \mu_f\right)
+ (i \leftrightarrow j)
\right] \, .
\end{equation}
The $f_{i/p}(\xi,\mu_f)$ are the collinear PDFs for parton $i$ carrying a longitudinal momentum $p_z^i = \xi E_p$,
out of a hadron $p$ with momentum $p_z^p = E_p= \sqrt{s}/2$, when renormalization group (RG)-evolved to a collinear factorization scale $\mu_f$.
Where applicable, we set $\mu_f$ to half the partonic c.m.~energy, i.e., $\mu_f = \sqrt{\hat{s}}/2$.
The Kronecker $\delta_{ij}$ removes double counting of identical initial states in $i \leftrightarrow j$ beam symmetrization.
Given equation~\ref{eq:parton_lumi_def}, then for a muon collider process $\mu^+\mu^-\to Y$ and its cross section $\sigma_\mu$ at a fixed muon collider energy $\sqrt{s_\mu}$,
we define the ``equivalent proton collider energy'' as the corresponding $pp$ collider energy $\sqrt{s_p}$
such that the analogous hadron-collider process $pp\to Y$ has the same hadronic cross section $\sigma_p$.
That is, $\sqrt{s_\mu}$ and $\sqrt{s_p}$ such that $\sigma_p = \sigma_\mu$.
Now, for the case of a $1$-body final state $Y$ with mass $M=\sqrt{\hat{s}}$, we have
\begin{align}
\sigma_p(s_p) &= \int_{\tau_0}^1 d\tau ~ \sum_{ij} \Phi_{ij}(\tau, \mu_f) \, ~ [\hat{\sigma}_{ij}]_p \, \delta\left(\tau - \frac{M^2}{s_p}\right) \, , \\
\sigma_\mu(s_\mu) &= [\hat{\sigma}]_\mu \, ,
\end{align}
where $[\hat{\sigma}_{ij}]_p$ and $[\hat{\sigma}]_\mu$ are the characteristic partonic cross sections of the two collider processes.
For the $pp$ case, we make explicit the Dirac $\delta$ function arising from the $1$-body phase space measure.
For the $\mu^+\mu^-$ case, we assume that this is absorbed through, for example, the use of the narrow width approximation
and rescaling by a suitably chosen branching rate.
By construction, $s_\mu = \hat{s}=M^2$, since production can only happen on threshold.
Requiring that $\sigma_p = \sigma_\mu$ and evaluating the beam-threshold integral, we obtain
\begin{eqnarray}
[\hat{\sigma}]_\mu &=& \sigma_\mu(s_\mu) = \sigma_p(s_p) \\
&=& \sum_{ij} \Phi_{ij}\left(\frac{s_\mu}{s_p}, \mu_f\right) \times [\hat{\sigma}_{ij}]_p \approx
[\hat{\sigma}]_p \times \sum_{ij} \Phi_{ij}\left(\frac{s_\mu}{s_p}, \frac{\sqrt{s_\mu}}{2}\right) .
\label{eq:single_prod}
\end{eqnarray}
In the last step we assume that the $ij$-specific partonic cross section can be approximated by a universal,
$ij$-independent cross section $[\hat{\sigma}]_p$.
Crucially, in the luminosity function $\Phi(\tau,\mu_f)$, we identify the kinematic threshold as $\tau = s_\mu / s_p$,
and likewise the factorization scale as $\mu_f = \sqrt{s_\mu}/2$.
If one further assumes a relationship between the partonic cross sections,
then this identification allows us to write equation~\ref{eq:single_prod} as
\begin{equation}
\sum_{ij} \Phi_{ij}\left(\frac{s_\mu}{s_p}, \frac{\sqrt{s_\mu}}{2}\right) = \cfrac{[\hat{\sigma}]_\mu}{[\hat{\sigma}]_p } \equiv \frac{1}{\beta},
\label{eq:single_prod_beta}
\end{equation}
which can be solved\footnote{Explicitly, we use the \texttt{scipy} function \texttt{fsolve} to carry out a brute force computation of this transcendental equation.
We report a reasonable computation time on a {2-core personal laptop}.
\label{foot:scipy}}
numerically for $s_p$ as a function of $s_\mu$ and $\beta$.
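To make the procedure concrete, the following Python sketch solves equation~\ref{eq:single_prod_beta} with \texttt{fsolve}, using a toy, scale-independent $q\overline{q}$ luminosity in place of a fitted PDF set;
both the toy density and the resulting numbers are purely illustrative:
\begin{verbatim}
from scipy.integrate import quad
from scipy.optimize import fsolve

def f_toy(x):
    """Toy valence-like density (hypothetical, not a fitted PDF)."""
    return 0.5 * x**-0.5 * (1.0 - x)**3

def lumi_qqbar(tau):
    """Phi_{q qbar}(tau) from the luminosity definition, with
    delta_ij = 0 and identical toy densities for q and qbar
    (the i <-> j symmetrization doubles the integral)."""
    return quad(lambda x: 2*f_toy(x)*f_toy(tau/x)/x, tau, 1.0, limit=200)[0]

def equivalent_sp(sqrt_s_mu, beta):
    """Solve Phi(s_mu/s_p) = 1/beta for sqrt(s_p) [TeV]."""
    s_mu = sqrt_s_mu**2
    g = lambda sp: lumi_qqbar(s_mu / sp**2) - 1.0/beta
    return float(fsolve(g, x0=5.0*sqrt_s_mu)[0])   # linear-scaling guess

for s in (3.0, 10.0, 14.0):   # sqrt(s_mu) [TeV]
    print(f"sqrt(s_mu)={s:5.1f} TeV -> sqrt(s_p)~{equivalent_sp(s, 1.0):6.1f} TeV")
\end{verbatim}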
\begin{figure}[t]
\begin{center}
\subfigure[]{\includegraphics[width=.48\textwidth]{Plots/madMuon_p_vs_m_single_res_k_linear} \label{fig:p_vs_muon_1body}}
\subfigure[]{\includegraphics[width=.48\textwidth]{Plots/madMuon_p_vs_m_pair_prod_k_linear} \label{fig:p_vs_muon_2body}}
\end{center}
\caption{
The equivalent proton collider energy $\sqrt{s_p}$ [TeV]
required to reach the same beam-level cross section as a $\mu^+\mu^-$ collider with energy $\sqrt{s_\mu}$ [TeV]
for (a) $2\to1$ and (b) $2\to2$ parton-level process,
for benchmark scaling relationships between the parton-level cross sections $[\hat{\sigma}]_p$ and $[\hat{\sigma}]_\mu$
as well as for pair production of $\tilde{t}\overline{\tilde{t}}$ and $\tilde{\chi}^+\tilde{\chi}^-$ through their leading $2\to2$ production modes.
\label{fig:p_vs_muon}}
\end{figure}
For various benchmark assumptions $(\beta)$ on the partonic cross sections $[\hat{\sigma}]_p$ and $[\hat{\sigma}]_\mu$,
and for the parton luminosity configurations $ij=gg$ (red) and $ij=q\overline{q}$ (blue), where $q\in\{u,c,d,s\}$ is any light quark,
we plot in figure~\ref{fig:p_vs_muon_1body} the equivalent proton collider energy $\sqrt{s_p}$ as a function of $\sqrt{s_\mu}$, for a generic $2\to1$, neutral current process.
In particular, for each partonic configuration,
we consider the case where the $ij$ and $\mu^+\mu^-$ partonic rates are the same, i.e.,
when $\beta=1$ (solid line) in equation~\ref{eq:single_prod_beta}, as well as when $\beta=10$ (dash) and $\beta=100$ (dash-dot).
The purpose of these benchmarks is to cover various coupling regimes,
such as when $ij\to Y$ and $\mu^+\mu^-\to Y$ are governed by the same physics $(\beta=1)$
or when $ij\to Y$ is governed by, say, QCD but $\mu^+\mu^-\to Y$ by QED $(\beta=10)$.
Overall, we find several notable features.
First is the general expectation that a larger $pp$ collider energy is needed to achieve the same partonic cross section as a $\mu^+\mu^-$ collider.
This follows from the fact that $pp$ beam energies are distributed among many partons whereas $\mu^+\mu^-$ collider energies are effectively held by just two incoming partons.
Interestingly, we find a surprisingly simple \textit{linear} scaling between the two colliders for all $ij$ and $\beta$ combinations.
For the $ij=q\overline{q}$ configuration and equal partonic coupling strength, i.e., $\beta=1$, we report a scaling relationship of \confirm{$\sqrt{s_p}\sim 5\times \sqrt{s_\mu}$}.
Under the above assumptions, one would need a muon collider energy of $\sqrt{s_\mu} \sim 10~(20)~[30]{\rm ~TeV}$ to match the reach of a hadron collider with $\sqrt{s_p}\sim 50~(100)~[150]{\rm ~TeV}$.
Specifically for the $\sqrt{s_p}=14{\rm ~TeV}$ LHC and its potential upgrade to \confirm{$\sqrt{s_p}=28{\rm ~TeV}$, one needs $\sqrt{s_\mu} \sim 3{\rm ~TeV}$ and $5.5{\rm ~TeV}$,} respectively.
For the realistic case where the $\mu^+\mu^-$ dynamics is ultra weakly coupled but $pp$ dynamics is strong, i.e., $\beta=100$,
and proceeds through the $ij=gg$ partonic channel, we report a milder scaling of \confirm{$\sqrt{s_p}\sim 3.3\times \sqrt{s_\mu}$.}
This translates to needing a higher $ \sqrt{s_\mu}$ to achieve the same reach at a fixed $\sqrt{s_p}$.
For example: for \confirm{$\sqrt{s_p}=14~(28){\rm ~TeV}$, one requires instead $ \sqrt{s_\mu}\sim4.25~(8.5){\rm ~TeV}$}.
As a cautionary note, we stress that the precise numerical values of scaling ratios reported here are somewhat accidental and can
shift were one to assume alternative PDF sets or $\mu_f$.
The qualitative behavior, however, should remain.
\subsection{$2\to2$ annihilations}\label{sec:ppvsmuon_2to2}
Instead of comparing the two colliders' equivalent reach for $2\to1$ processes,
another possibility is to compare the reach for the pair production of heavy states.
Doing so accounts for the nontrivial opening of new phase space configurations and kinematic thresholds.
In the $2\to2$ case, we assume that the muon collider is optimally configured, i.e.,
that $\sqrt{s_\mu}$ is chosen slightly above threshold, where the production cross section is at its maximum.
For $pp$ collisions, the situation differs from the previous consideration in that
pair production cross sections do not occur at fixed $\hat s$ and, in general, are suppressed by $[\hat{\sigma}_{ij} ]_p\sim1/\hat{s}$, once far above threshold.
Hence, we make the approximation that the quantity $[\hat{\sigma}_{ij} \hat{s}]_p$ does not depend on $\sqrt{\hat{s}}$,
and recast beam-level cross sections in the following way:
\begin{align}
\sigma_p(s_p) &= \frac{1}{s_p}\int_{\tau_0}^1 d\tau \frac{1}{\tau} ~ \sum_{ij} \Phi_{ij}(\tau, \mu_f) ~ \, [\hat{\sigma}_{ij} \hat{s}]_p \, , \\
\sigma_\mu(s_\mu) &= \frac{1}{s_\mu}[\hat{\sigma} \hat{s}]_\mu\,.
\end{align}
Assuming again that $[\hat{\sigma}_{ij} ]_p$ can be approximated by the $ij$-independent $[\hat{\sigma}]_p$,
and making analogous identifications as in equation~\ref{eq:single_prod}, then after equating $ \sigma_\mu(s_\mu)=\sigma_p(s_p)$, we obtain
\begin{equation}\label{eq:pair_prod}
\frac{s_\mu}{s_p} ~ \int_{\frac{s_\mu}{s_p}}^1 d\tau ~ \frac{1}{\tau} ~ \sum_{ij} \Phi_{ij}\left(\tau, \frac{\sqrt{s_\mu}}{2}\right) ~=~ \frac{[\hat{\sigma} \hat{s}]_\mu}{[\hat{\sigma} \hat{s}]_p} ~\equiv~ \frac{1}{\beta} \, .
\end{equation}
Here, the parton luminosity $ij$ runs over the same configurations as in the $2\to1$ case and $\beta$ similarly
models the relationship between the (weighted) characteristic, partonic cross sections $[\hat{\sigma}\hat{s}]_p$ and $[\hat{\sigma}\hat{s}]_\mu$.
As in the previous case, we can solve equation~\ref{eq:pair_prod}
numerically (see footnote \ref{foot:scipy}) for the equivalent $pp$ collider energy $\sqrt{s_p}$ as a function of ${s_\mu}$ and $\beta$.
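Analogously, equation~\ref{eq:pair_prod} can be solved with a bracketing root finder; the short Python sketch below uses \texttt{brentq} with a hand-picked bracket, the same toy luminosity as in the $2\to1$ sketch above, and a representative $\beta=100$.
Again, all numbers are purely illustrative:
\begin{verbatim}
from scipy.integrate import quad
from scipy.optimize import brentq

f_toy = lambda x: 0.5 * x**-0.5 * (1.0 - x)**3    # same toy PDF as above
lumi  = lambda t: quad(lambda x: 2*f_toy(x)*f_toy(t/x)/x,
                       t, 1.0, limit=200)[0]

def equivalent_sp_pair(sqrt_s_mu, beta):
    """Solve tau0 * int_{tau0}^1 dtau Phi(tau)/tau = 1/beta,
    with tau0 = s_mu/s_p, for sqrt(s_p) [TeV]."""
    s_mu = sqrt_s_mu**2
    def g(sp):
        tau0 = s_mu / sp**2
        integral = quad(lambda t: lumi(t)/t, tau0, 1.0, limit=200)[0]
        return tau0 * integral - 1.0/beta
    # bracket tuned by hand for this toy luminosity and beta
    return brentq(g, 1.05*sqrt_s_mu, 60.0*sqrt_s_mu)

print(f"sqrt(s_mu)=10 TeV -> sqrt(s_p)~{equivalent_sp_pair(10.0, 100.0):.0f} TeV")
\end{verbatim}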
For the same benchmark assumptions on parton luminosities and partonic cross sections $[\hat{\sigma}]_p$ and $[\hat{\sigma}]_\mu$
as considered in figure~\ref{fig:p_vs_muon_1body}, we plot in figure~\ref{fig:p_vs_muon_2body}
the equivalent proton collider energy $\sqrt{s_p}$ [TeV] as a function of $\sqrt{s_\mu}$ [TeV], for a generic, $2\to2$, neutral current process.
For concreteness, we also consider the LO production of
\confirm{top squark pairs $\tilde{t}\overline{\tilde{t}}$ through QCD currents in $pp$ collisions but EW currents in $\mu^+\mu^-$ collisions,
as well as of chargino pairs $\tilde{\chi}^+\tilde{\chi}^-$ through EW currents.}
For these cases, we fix particle masses $M$ such that $2M$ constitutes 90\% of the total muon collider energy, i.e., $M=0.9\times\sqrt{s_\mu}/2$.
As in the previous case, we again observe that a much higher energy $pp$ collider exhibits the same reach as lower energy $\mu^+\mu^-$ colliders.
However, we find the scaling to be more drastic, with higher equivalent proton collider energies being reached for the same muon collider energies.
We attribute this difference to the fact that, while a spectrum of $\sqrt{\hat{s}}$ is sampled in $pp$ collisions,
pair production beyond threshold is kinematically suppressed; this is unlike $\mu^+\mu^-$ collisions where $\sqrt{\hat{s}}$ is fixed.
Remarkably, we also find that the scaling relationship between $\sqrt{s_p}$ and $\sqrt{s_\mu}$ for $2\to2$ processes retains its
linear behavior for all our representative cases.
In this measure of comparing colliders, we report a scaling relationship of \confirm{$\sqrt{s_p}\sim 22\times \sqrt{s_\mu}$}
for the $ij=q\overline{q}$ configuration and equal partonic coupling strength, i.e., $\beta=1$.
This indicates that a muon collider of \confirm{$\sqrt{s_\mu}\sim10~(20)~[30]{\rm ~TeV}$ has roughly the same reach
as a proton collider at $\sqrt{s_p}\sim220~(440)~[660]{\rm ~TeV}$.}
For the physics scenario where pair production is governed by weak (strong) dynamics in muon (proton), i.e., $\beta=100$,
we find very similar behavior for both the $ij=gg$ and $q\overline{q}$ parton configurations.
As in the $2\to1$ case, we report a smaller linear scaling of about \confirm{$\sqrt{s_p}\sim 5.5\times \sqrt{s_\mu}$},
indicating that the reach of a hypothetical muon collider of \confirm{$\sqrt{s}=2.5~(5)~[14]{\rm ~TeV}$ can match or exceed that of
proton colliders only up to $\sqrt{s_p}=14~(28)~[80]{\rm ~TeV}$.}
For the concrete cases of stop (dot) and chargino (diamond) pair production, we observe several additional trends in figure~\ref{fig:p_vs_muon_2body}.
Starkly, we see that the $\tilde{t}\overline{\tilde{t}}$ scaling is in good agreement with
the scenario where production is governed by ultra weak (strong) dynamics in muon (proton), i.e., $\beta=100$, for the $ij=q\overline{q}$ configuration.
The preferred agreement for $ij=q\overline{q}$ over $ij=gg$ follows from the production of high-mass states in $pp$ collisions being typically driven by $q\overline{q}$ annihilation,
where $q\in\{u,d\}$ is a valence quark.
For $\tilde{\chi}^+\tilde{\chi}^-$, we find poorer agreement with na\"ive scaling, with \confirm{$\sqrt{s_p}\sim 30\times \sqrt{s_\mu}$}.
This is about $\sim1.5\times$ the estimation of the $ij=q\overline{q}$ configuration with equal partonic coupling strength $(\beta=1)$.
We attribute this difference to the individual EW charges carried by elementary particles:
as the $\mu\mu Z$ coupling is suppressed, $\mu^+\mu^-\to\tilde{\chi}^+\tilde{\chi}^-$ is dominated by the $\gamma^*$ subchannel.
The $uuZ$ and $ddZ$ couplings in $q\overline{q}\to \tilde{\chi}^+\tilde{\chi}^-$, on the other hand, are more sizable, and interfere destructively with the $\gamma^*$ subchannel,
which itself is suppressed due to quarks' fractional electric charge.
This is unlike stop pair production since QCD and QED processes are less flavor dependent.
The disagreement is hence tied to a breakdown of the assumption that $[\hat{\sigma}_{ij} ]_p$ are $ij$-independent.
Nevertheless, we importantly find that our scaling relationships, as derived from equations~\ref{eq:single_prod_beta} and \ref{eq:pair_prod},
provide reliable, if not conservative, estimates for the equivalent $pp$ collider energy for a given $\mu^+\mu^-$ collider energy.
\subsection{Weak boson fusion}\label{sec:ppvsmuon_vbf}
We conclude this section by comparing the potential for EW VBF at high-energy $\mu^+\mu^-$ and $pp$ collider facilities.
As we will analyze in the following sections, one of the main features of a multi-TeV lepton collider is the increased relevance of VBF
over $s$-channel scattering as the total collider energy increases.
From this perspective, a muon collider could effectively be considered a ``weak boson collider''.
It is therefore interesting to compare the potential for VBF at a muon collider to that at a $pp$ collider.
To make this comparison, we find it useful to continue in the language of parton luminosities and employ the Effective $W$ Approximation (EWA)~\cite{Dawson:1984gx,Kane:1984bb},
which allows us to treat weak bosons on the same footing as QCD partons.
That is to say, it enables us to consistently define $V_\lambda V'_{\lambda'}$ parton luminosities in both $pp$ and $\mu^+\mu^-$ collisions.
The validity of the EWA as an extension of standard collinear factorization in non-Abelian gauge theories~\cite{Collins:2011zzd} has long been
discussed in literature~\cite{Cahn:1984tx,Willenbrock:1986cr,Dawson:1986tc,Altarelli:1987ue,Kunszt:1987tk}.
More recent investigations have focused on
reformulations that make power counting manifest~\cite{Borel:2012by,Wulzer:2013mza,Chen:2019dkx}
and matching prescriptions between the broken and unbroken phases of the EW theory~\cite{Chen:2016wkt,Chen:2017ekt,Cuomo:2019siu}.
Under the EWA, splitting functions are used to describe the likelihood of forward emissions of weak bosons off light, initial-state fermions.
In the notation of equation~\ref{eq:parton_lumi_def},
the helicity-dependent PDFs that describe the radiation
of a weak vector boson $V$ in helicity state $\lambda$ and carrying a longitudinal energy fraction $\xi$ from a fermion $a$ are~\cite{Dawson:1984gx,Kane:1984bb,Kunszt:1987tk}:
\begin{align}
f_{V_\lambda/a}(\xi, \mu_f, \lambda=\pm1) &= \frac{C}{16\pi^2} \frac{(g_V^a \mp g_A^a)^2 +(g_V^a \pm g_A^a)^2(1-\xi)^2}{\xi}\log{\left(\frac{\mu_f^2}{M_V^2}\right)},
\label{eq:ewa_VT}
\\
f_{V_0/a}(\xi, \mu_f) &= \frac{C}{4\pi^2} (g_V^{a~2} + g_A^{a~2})\left( \frac{1-\xi}{\xi}\right)\,.
\label{eq:ewa_V0}
\end{align}
Here $C,~g_V^a,$ and $g_A^a$ represent the appropriate weak gauge couplings of $a$, given by
\begin{eqnarray}
\text{for}~V=W :& C=\frac{g^2}{8}, \qquad & g_V^a=-g_A^a=1 \, ,
\\
\text{for}~V=Z :& C=\frac{g^2}{\cos^2{\theta_W}}, \quad & g_V^a=\frac{1}{2}\left(T^3_L\right)^a- Q^a\sin^2{\theta_W}, \quad g_A^a = -\frac{1}{2} \left(T^3_L\right)^a .
\end{eqnarray}
At this order, the PDFs do not describe QED charge inversion, i.e., $f_{W^\mp/\mu^\pm}=0+\mathcal{O}(g^2)$.
For simplicity, we further define the spin-averaged transverse parton distribution as
\begin{equation}
f_{V_T/a}(\xi, \mu_f) \equiv \cfrac{f_{V_{+1}/a}(\xi, \mu_f) + f_{V_{-1}/a}(\xi, \mu_f)}{2} \, .
\end{equation}
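To make these expressions concrete, the following sketch evaluates equations~\ref{eq:ewa_VT} and \ref{eq:ewa_V0} numerically; the inputs $\sin^2\theta_W\approx0.222$ and $M_W=80.4{\rm ~GeV}$ are illustrative assumptions.
\begin{verbatim}
import numpy as np

ALPHA, SW2 = 1.0/132.5, 0.222      # illustrative EW inputs
G2 = 4.0*np.pi*ALPHA/SW2           # g^2 = e^2/sin^2(theta_W)
MW, MZ = 80.4, 91.188              # boson masses [GeV]

def couplings(V, T3=-0.5, Q=-1.0):
    """(C, gV, gA) for emission off a fermion a (defaults: a = mu-)."""
    if V == "W":
        return G2/8.0, 1.0, -1.0
    cw2 = 1.0 - SW2
    return G2/cw2, 0.5*T3 - Q*SW2, -0.5*T3

def f_VT(xi, mu_f, lam, V="W", **fermion):
    """Transverse EWA PDF of eq. (ewa_VT); lam is +1 or -1."""
    C, gV, gA = couplings(V, **fermion)
    MV = MW if V == "W" else MZ
    num = (gV - lam*gA)**2 + (gV + lam*gA)**2*(1.0 - xi)**2
    return C/(16.0*np.pi**2)*num/xi*np.log(mu_f**2/MV**2)

def f_V0(xi, V="W", **fermion):
    """Longitudinal EWA PDF of eq. (ewa_V0); no log(mu_f) growth."""
    C, gV, gA = couplings(V, **fermion)
    return C/(4.0*np.pi**2)*(gV**2 + gA**2)*(1.0 - xi)/xi
\end{verbatim}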
For a lepton collider, the $V_\lambda V'_{\lambda'}$ luminosities $\Phi_{V_\lambda V'_{\lambda'}}(\tau,\mu_f)$ are defined as in equation~\ref{eq:parton_lumi_def},
but with the QCD parton PDFs $f_{i/p}$ replaced by the weak boson PDFs $f_{V_\lambda/a}$.
In particular, for $W^+_{\lambda_1} W^-_{\lambda_2}$ in $\mu^+\mu^-$ collisions for $\sqrt{s \tau}>2M_W$, we have
\begin{equation}
\Phi_{W^+_{\lambda_1} W^-_{\lambda_2} }(\tau, \mu_f) = \int_{\tau}^1 \frac{d\xi}{\xi} ~ f_{W_{\lambda_1} /\mu}\left(\xi, \mu_f\right) \, f_{W_{\lambda_2} /\mu}\left(\frac{\tau}{\xi}, \mu_f\right) \, .
\end{equation}
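Continuing the sketch above (and reusing its \texttt{f\_VT} and \texttt{f\_V0}), this is a one-dimensional convolution that can be evaluated by direct quadrature:
\begin{verbatim}
from scipy.integrate import quad

def f_W(lam, xi, mu_f):
    """W PDF of the muon for helicity lam in {+1, -1, 0}."""
    return f_V0(xi, "W") if lam == 0 else f_VT(xi, mu_f, lam, "W")

def Phi_WW(tau, mu_f, lam1, lam2):
    """W+W- luminosity in mu+mu- collisions for fixed helicities."""
    val, _ = quad(lambda xi: f_W(lam1, xi, mu_f)
                  * f_W(lam2, tau/xi, mu_f)/xi, tau, 1.0)
    return val

# e.g. sqrt(s) = 10 TeV and M_WW = 2 TeV: tau = 0.04, mu_f = M_WW/2
print(Phi_WW(0.04, 1000.0, 0, 0))
\end{verbatim}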
For the $pp$ case, the $V_\lambda V'_{\lambda'}$ luminosities are obtained after making the substitution
\begin{equation}
f_{i/p}(\xi,\mu_f) \to f_{V_\lambda/p} (\xi,\mu_f) = \sum_q \int^1_\xi \frac{dz}{z} f_{V_\lambda/q}(z,\mu_f) f_{q/p}\left(\frac{\xi}{z},\mu_f\right),
\end{equation}
which is essentially the EW gauge boson PDF of the proton.
The additional convolution accounts for the fact that $q$ in $p$ carries a variable momentum.
(For simplicity, we keep all $\mu_f$ the same as in equation~\ref{eq:parton_lumi_def}.)
The $V_\lambda V'_{\lambda'}$ luminosity at a scale $\tau$ in $pp$ collisions is then,
\begin{eqnarray}
\Phi_{V_\lambda V'_{\lambda'}}(\tau, \mu_f) &=& \frac{1}{1+\delta_{V_\lambda V'_{\lambda'}}} \int_\tau^1 \frac{d\xi}{\xi}\int_{\tau/\xi}^1 \frac{dz_1}{z_1}\int_{\tau/(\xi z_1)}^1 \frac{dz_2}{z_2} \sum_{q, q'}
\\
& &
f_{V_{\lambda}/q}(z_2)f_{V'_{\lambda'}/q'}(z_1)
\left[
f_{q/p}(\xi)f_{q'/p}\left(\frac{\tau}{\xi z_1 z_2}\right)
+
f_{q/p}\left(\frac{\tau}{\xi z_1 z_2}\right)f_{q'/p}(\xi)\right] \, .
\nonumber
\end{eqnarray}
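The proton-level ingredient $f_{V_\lambda/p}$ can be sketched in the same spirit; the snippet below assumes the LHAPDF~6 Python bindings with the NNPDF~3.0 LO set (see section~\ref{sec:setup}) and reuses the EWA splittings defined above, while the remaining nested quadratures of the triple convolution proceed analogously and are omitted for brevity.
\begin{verbatim}
import lhapdf                       # assumes the LHAPDF 6 bindings
from scipy.integrate import quad

pdf = lhapdf.mkPDF("NNPDF30_lo_as_0118", 0)

def f_Wp_proton(xi, mu_f, lam):
    """W+ PDF of the proton: EWA splitting convoluted with quark PDFs."""
    pids = [2, 4, -1, -3]           # u, c, dbar, sbar radiate a W+
    def integrand(z):
        x = xi/z
        fWq = f_V0(z, "W") if lam == 0 else f_VT(z, mu_f, lam, "W")
        fq = sum(pdf.xfxQ(pid, x, mu_f)/x for pid in pids)
        return fWq*fq/z
    val, _ = quad(integrand, xi, 1.0)
    return val
\end{verbatim}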
\begin{figure}[t]
\begin{center}
\subfigure[]{\includegraphics[width=.49\textwidth]{Plots/madMuon_Phi_W_muon_proton}\label{fig:p_vs_muon_VBF_comparison}}
\subfigure[]{\includegraphics[width=.49\textwidth]{Plots/madMuon_Phi_WW_vs_ZZ}\label{fig:p_vs_muon_VBF_wz}}
\end{center}
\caption{
(a)
As a function of fractional scattering scale $\sqrt{\tau}=M_{VV'}/\sqrt{s}$,
the (dimensionless) parton luminosities $\Phi$ for
$W_T^+ W_T^-$ (red),
$W_T^\pm W_0^\mp$ (green),
$W_0^+W_0^-$ (blue)
in both $pp$ (hatched shading) and $\mu^+\mu^-$ (solid shading) collisions.
(b) The same but for
$W_\lambda^+ W_{\lambda'}^-$ (solid shading) and
$Z_\lambda Z_{\lambda'}$ (hatched shading)
in $\mu^+\mu^-$ collisions with
$(\lambda,\lambda')= (T,T$) (red), $(0,T)+(T,0)$ (green), and $(0,0)$ (blue).
Band thickness corresponds to the $\mu_f$ dependency as quantified in the text.
}
\label{fig:p_vs_muon_VBF}
\end{figure}
As a function of fractional scattering scale $\sqrt{\tau}=M_{VV'}/\sqrt{s}$, where $\sqrt{s}$ is the total collider energy and $M_{VV'}$ is the $VV'$-system invariant mass,
we plot in figure~\ref{fig:p_vs_muon_VBF_comparison} the parton luminosities for
$W_T^+ W_T^-$ (red),
$W_T^\pm W_0^\mp$ (green),
$W_0^+W_0^-$ (blue)
in both $pp$ (hatched shading) and $\mu^+\mu^-$ (solid shading) collisions.
Due to our choice to set the collinear factorization scale $\mu_f$ to half the partonic c.m.~energy (see below equation \ref{eq:parton_lumi_def} for details),
the curves possess a (logarithmic) dependence on the collider energy.
To take this ambiguity/dependency into account, we plot the envelopes
for each parton luminosity spanned by varying $\sqrt{s}=14{\rm ~TeV}-200{\rm ~TeV}~(3{\rm ~TeV}-30{\rm ~TeV})$ for the proton (muon) case.
The precise ranges of $\sqrt{s}$ and $\sqrt{\tau}$ that we consider help ensure that
the partonic energy fraction is neither too small nor too large, and hence that the EWA remains reliable~\cite{Borel:2012by}.
We report that this ``uncertainty'' has little impact on our qualitative and quantitative assessments.
In figure~\ref{fig:p_vs_muon_VBF_comparison}, we find that for each helicity polarization configuration,
the $W_\lambda W_{\lambda'}$ luminosity in $\mu^+\mu^-$ collisions unambiguously exceeds the analogous luminosity in $pp$ collisions over the $\sqrt{\tau}$ considered.
At $\sqrt{\tau}=0.2~(0.5)~[0.8]$, we find that the $W_\lambda W_{\lambda'}$ luminosities at a muon collider are roughly \confirm{$10^2-10^3~(10^4-10^6)~[10^8-10^9]$}
times those of a proton collider.
Hence, for a fixed collider energy $\sqrt{s_\mu}=\sqrt{s_p}$, the likelihood of $WW$ scattering in $\mu^+\mu^-$ collisions is much higher than for $pp$ collisions.
We attribute this to several factors:
The first is that EW PDFs in proton beams emerge as a subdominant phenomenon in perturbation theory, whereas in muon beams they arise at lowest order.
Relatedly, as muons are point particles, they carry more energy than typical partons in a proton beam with the same beam energy.
This enables EW PDFs in $\mu^+\mu^-$ collisions to access smaller momentum fractions $\xi$, thereby accessing larger PDF enhancements at smaller $\xi$.
To further explore this hierarchy, we compare in figure~\ref{fig:p_vs_muon_VBF_wz}
the $\mu^+\mu^-$ collider luminosities for $W_\lambda^+W_{\lambda'}^-$ (solid shading) and $Z_{\lambda}Z_{\lambda'}$ (hatched shading) pairs,
for $(\lambda,\lambda')=(T,T)$ (red), $(0,T)+(T,0)$ (green), and $(0,0)$ (blue).
Globally, we see that the $WW$ and $ZZ$ luminosities exhibit a very similar shape dependence on $\sqrt{\tau}$,
which follows from the functional form of $f_{V/a}(\xi)$.
The normalization difference is due to the SU$(2)_L$ quantum number of muons, which results in the well-known suppression of $\mu\mu Z$ couplings in the SM.
Indeed, for $(\lambda,\lambda')=(0,0)$, we find that the ratio of luminosities exhibits the constant relationship
\begin{equation}
\cfrac{\Phi_{W_0W_0}(\tau)}{\Phi_{Z_0Z_0}(\tau)}\Bigg\vert_{\rm fixed~\tau} =
\left[\cfrac{\cos^2\theta_W}{\left(T^{3~\mu}_L- 2Q^\mu\sin^2{\theta_W}\right)^2 + \left(T^{3~\mu}_L\right)^2 }\right]^2
\approx
\cfrac{\cos^4\theta_W}{\left(T^{3~\mu}_L\right)^4 }
\approx 9.
\end{equation}
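As a numerical check (taking $\sin^2\theta_W\approx0.231$, $T^{3~\mu}_L=-1/2$, and $Q^\mu=-1$ for illustration),
\begin{equation*}
\left[\cfrac{1-0.231}{(-0.5+0.462)^2+0.25}\right]^2 = \left[\cfrac{0.769}{0.2514}\right]^2 \approx (3.06)^2 \approx 9.4,
\end{equation*}
consistent with the quoted factor of nine.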
While not shown, we report that the $W_{\lambda}Z_{\lambda'}$ luminosities also have similar shapes and are located roughly at the geometric average of the $WW$ and $ZZ$ curves.
Furthermore, due to gauge coupling universality, we
anticipate that the luminosity hierarchy observed between muon and proton colliders
extends to luminosities involving $\gamma$ and $Z$ bosons.
\section{Computational setup}\label{sec:setup}
We briefly summarize here our computational setup.
Throughout this work the evaluation of leading order matrix elements and phase space integration are handled numerically using the general purpose event generator {\sc MadGraph5\_aMC@NLO} ({\sc mg5amc}) v2.6.5~\cite{Alwall:2014hca}.
For SM interactions we use the default setup, which assumes the following EW inputs:
\begin{eqnarray}
G_F &=& 1.166390 \cdot 10^{-5} \textrm{ GeV}^{-2}, \quad \alpha_{EW}(M_Z) = 1/132.5,
\nonumber\\
M_Z &=& 91.188{\rm ~GeV}, \quad M_t = 173{\rm ~GeV}, \quad M_H = 125{\rm ~GeV}.
\end{eqnarray}
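For orientation, these inputs imply, at tree level and assuming the standard $(\alpha_{EW}, G_F, M_Z)$ input scheme, a derived weak mixing angle and $W$ mass that can be reproduced as follows:
\begin{verbatim}
import numpy as np

# Tree-level relation sin^2*cos^2 = pi*alpha/(sqrt(2)*G_F*M_Z^2),
# shown only to make the implied derived parameters explicit.
Gf, alpha, MZ = 1.166390e-5, 1.0/132.5, 91.188
A = np.pi*alpha/(np.sqrt(2.0)*Gf*MZ**2)
sw2 = 0.5*(1.0 - np.sqrt(1.0 - 4.0*A))
print(f"sin^2(theta_W) = {sw2:.4f}")            # -> 0.2223
print(f"M_W = {MZ*np.sqrt(1.0 - sw2):.2f} GeV") # -> 80.42 GeV
\end{verbatim}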
For relevant computations, we employ the NNPDF 3.0 LO parton distribution functions (PDFs) with $\alpha_s(M_Z)=0.118$~\cite{Ball:2014uwa},
as maintained using the LHAPDF 6 libraries~\cite{Buckley:2014ana}.
To gain confidence in our results, especially at very high energies where we find that phase space integration converges much more slowly,
we employ {\sc mg5amc}~v2.7.2, which includes a ``hard survey'' option for improved phase space sampling.
In addition, some SM results have been cross-checked with {\sc Whizard}~\cite{Kilian:2007gr} and in-house MC computations using matrix elements provided by {\sc Recola2} \cite{Denner:2017wsf}.
\section{Standard Model processes at muon colliders}\label{sec:sm}
In this section we investigate and present cross sections for
various EW boson and top quark final states of the form $X= n\, t \bar{t} + m\, V + k\, H $,
where $n$, $m$ and $k$ are integers that respectively denote the number of top quark pairs, weak vector bosons $V$, and Higgs bosons $H$.
One of our goals of this survey is to systematically compare $s$-channel annihilation processes with EW VBF production channels in $\mu^+\mu^-$ collisions,
and in particular identify the c.m.~energies at which VBF rates overtake $s$-channel ones.
Specifically, we consider VBF process $VV\to X$ as obtainable from a $\mu^+\mu^-$ initial state.
This consists of the sub-channels
$W^{+}W^{-}$ fusion (section~\ref{sec:sm_ww}),
$ZZ/Z\gamma/\gamma\gamma$ fusion (section~\ref{sec:sm_zz}),
and $W^{\pm}Z/W^\pm\gamma$ fusion (section~\ref{sec:sm_wz}):
\begin{center}
\begin{tabular}{ll}
$\mu^+\mu^-\to X\, {\nu}_{\mu}\overline{\nu}_{\mu}$ &($WW$\, \textrm{fusion}),\nonumber\\
$\mu^+\mu^-\to X\, \mu^+\mu^-$&($ZZ/Z\gamma/\gamma\gamma$\, \textrm{fusion}),\nonumber\\
$\mu^+\mu^-\to X\, \mu^\pm \overset{(-)}{\nu_\mu}$& ($WZ$\, \textrm{fusion}).\nonumber
\label{procs}
\end{tabular}
\end{center}
We also consider collisions with same-sign muon pairs, $\mu^+\mu^+$ (section~\ref{sec:sm_samesign}).
In this case, the $WZ$ and $ZZ$ modes give rise to the same final state $X$, up to charge multiplicities, at the same rate as $\mu^+\mu^-$ collisions.
The $W^+ W^+$ mode, on the other hand, opens truly new kinds of signatures
while possessing the same luminosity as $W^{+}W^{-}$ fusion reported in section~\ref{sec:ppvsmuon}.
Before presenting our survey, we briefly comment in section~\ref{sec:sm_technicalities} on a few technical details
related to simulating many-particle final states in multi-TeV lepton collisions.
\subsection{Technical nuances at high energies}\label{sec:sm_technicalities}
An important issue in this study involves the fact that the final states above also receive contributions from non-VBF processes,
like associated production of $X$ and a $W$ or $Z$ boson.
That is to say, from an $s$-channel process but with an additional $V$-strahlung emission that then splits into a lepton pair.
In general, these contributions interfere with VBF topologies at the amplitude level and are not all separately gauge-invariant subprocesses.
Therefore, in principle, they need to be considered together with VBF.
However, the $V$ boson decay contributions are dominated by regions of phase space where $V$ are on their mass shells.
In particular, at a lepton collider, where the initial-state energy of the collision is accurately known,
such resonant configurations can be experimentally distinguished from the non-resonant continuum.
In fact, the relative contributions of those resonant topologies
as well as
of their interference with the gauge-invariant VBF contributions
are small when far from the on-shell region,
i.e., where most of the VBF cross section is populated.
Therefore, in order to avoid double counting of results that we will present for $s$-channel processes, as well as to make computations more efficient,
we wish to remove contributions with instances of on-shell $Z\to\mu^+\mu^-$, $Z \to \nu \bar{\nu}$, and $W^- \to \mu^- \bar{\nu}$ decays.
Simply discarding the corresponding diagrams would in general break gauge invariance, and so we refrain from doing so.
A simple solution, adopted for instance in Ref.~\cite{Chiesa:2020awd}, is to impose a minimum on the invariant mass of final-state lepton pairs.
In this work, we adopt an even simpler prescription by simulating an initial state possessing a non-zero, net muon and electron flavor, i.e. $\mu^- e^+$ collisions.
In so doing, $s$-channel annihilations are forbidden and VBF channels are automatically retained.
We have checked for a few processes that the numerical differences
with scattering rates of the analogous $\mu^+ \mu^-\to X$ channels in the far off-shell region are small at high energies.
\begin{figure}[!t]
\centering\mbox{\subfigure[]{\includegraphics[width=0.45\textwidth]{Plots/madMuon_cs_energy_scan_ttx_interp_log} \label{fig:SMt_ttVX}}}
\centering\mbox{\subfigure[]{\includegraphics[width=0.45\textwidth]{Plots/madMuon_cs_energy_scan_ttxx_interp_log} \label{fig:SMt_ttVV}}}
\caption{$W^+W^-$ fusion (solid) and analogous $s$-channel annihilation (dashed) cross sections $\sigma$ [fb] for (a) $t\overline{t}X$ and (b) $t\overline{t}XX$ associated production as a function of collider energy $\sqrt{s}$ [TeV].}
\label{fig:SMt}
\end{figure}
\begin{figure}[!t]
\centering\mbox{\subfigure[]{\includegraphics[width=0.445\textwidth]{Plots/madMuon_cs_energy_scan_h_interp_log} \label{fig:SMh_hxxx}}}
\centering\mbox{\subfigure[]{\includegraphics[width=0.445\textwidth]{Plots/madMuon_cs_energy_scan_hh_interp_log} \label{fig:SMh_hhxx}}}\\
\centering\mbox{\subfigure[]{\includegraphics[width=0.445\textwidth]{Plots/madMuon_cs_energy_scan_hhh_interp_log} \label{fig:SMh_hhhx}}}
\centering\mbox{\subfigure[]{\includegraphics[width=0.445\textwidth]{Plots/madMuon_cs_energy_scan_nv_interp_log} \label{fig:SMh_vvvx}}}
\caption{
Same as figure~\ref{fig:SMt} but for (a) $HX$, (b) $HHX$, and (c) $HHHX$ associated production as well as (d) multiboson production.}
\label{fig:SMh}
\end{figure}
\begin{table}[!t]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{l||cc||cc||cc||cc}
\toprule
\toprule
\multirow{2}{*}{$\sigma$ [fb]} & \multicolumn{2}{c}{$\sqrt s=$ 1 TeV} & \multicolumn{2}{c}{$\sqrt s=$ 3 TeV} & \multicolumn{2}{c}{$\sqrt s=$ 14 TeV} & \multicolumn{2}{c}{$\sqrt s=$ 30 TeV}\\
&VBF&s-ch.&VBF&s-ch.&VBF&s-ch.&VBF&s-ch\\
\midrule
\midrule
$t \bar{t}$ & 4.3$\cdot 10^{-1}$ &1.7$\cdot 10^{2}$ & 5.1$\cdot 10^{0}$ &1.9$\cdot 10^{1}$ & 2.1$\cdot 10^{1}$ &8.8$\cdot 10^{-1}$ & 3.1$\cdot 10^{1}$ &1.9$\cdot 10^{-1}$ \\
\hline
$t \bar{t} Z$ & 1.6$\cdot 10^{-3}$ &4.6$\cdot 10^{0}$ & 1.1$\cdot 10^{-1}$ &1.6$\cdot 10^{0}$ & 1.3$\cdot 10^{0}$ &1.8$\cdot 10^{-1}$ & 2.8$\cdot 10^{0}$ &5.4$\cdot 10^{-2}$ \\
$t \bar{t} H$ & 2.0$\cdot 10^{-4}$ &2.0$\cdot 10^{0}$ & 1.3$\cdot 10^{-2}$ &4.1$\cdot 10^{-1}$ & 1.5$\cdot 10^{-1}$ &3.0$\cdot 10^{-2}$ & 3.1$\cdot 10^{-1}$ &7.9$\cdot 10^{-3}$ \\
\hline
$t \bar{t} W W$ & 4.8$\cdot 10^{-6}$ &1.4$\cdot 10^{-1}$ & 2.8$\cdot 10^{-3}$ &3.4$\cdot 10^{-1}$ & 1.1$\cdot 10^{-1}$ &1.3$\cdot 10^{-1}$ & 3.0$\cdot 10^{-1}$ &5.8$\cdot 10^{-2}$ \\
$t \bar{t} Z Z$ & 2.3$\cdot 10^{-6}$ &3.8$\cdot 10^{-2}$ & 1.4$\cdot 10^{-3}$ &5.1$\cdot 10^{-2}$ & 5.8$\cdot 10^{-2}$ &1.3$\cdot 10^{-2}$ & 1.7$\cdot 10^{-1}$ &5.4$\cdot 10^{-3}$ \\
$t \bar{t} H Z$ & 7.1$\cdot 10^{-7}$ &3.6$\cdot 10^{-2}$ & 3.5$\cdot 10^{-4}$ &3.0$\cdot 10^{-2}$ & 1.0$\cdot 10^{-2}$ &5.3$\cdot 10^{-3}$ & 2.7$\cdot 10^{-2}$ &1.9$\cdot 10^{-3}$ \\
$t \bar{t} H H$ & 7.2$\cdot 10^{-8}$ &1.4$\cdot 10^{-2}$ & 3.4$\cdot 10^{-5}$ &6.1$\cdot 10^{-3}$ & 6.4$\cdot 10^{-4}$ &5.4$\cdot 10^{-4}$ & 1.6$\cdot 10^{-3}$ &1.5$\cdot 10^{-4}$ \\
\hline
$t \bar{t} t \bar{t}\;\;(i)$ & 5.1$\cdot 10^{-8}$ &5.4$\cdot 10^{-4}$ & 6.8$\cdot 10^{-5}$ &6.7$\cdot 10^{-3}$ & 1.1$\cdot 10^{-3}$ &2.5$\cdot 10^{-3}$ & 2.1$\cdot 10^{-3}$ &1.0$\cdot 10^{-3}$ \\
$t \bar{t} t \bar{t}\;\;(ii)$ & 6.2$\cdot 10^{-9}$ &7.9$\cdot10^{-4}$ & 3.7$\cdot 10^{-5}$ &6.9$\cdot10^{-3}$& 1.7$\cdot 10^{-3}$ &2.3$\cdot10^{-3}$ & 4.7$\cdot 10^{-3}$ &9.0$\cdot10^{-4}$ \\
\midrule
\midrule
$H$ & 2.1$\cdot 10^{2}$ &- & 5.0$\cdot 10^{2}$ &- & 9.4$\cdot 10^{2}$ &- & 1.2$\cdot 10^{3}$ &- \\
$H H$ & 7.4$\cdot 10^{-2}$ &- & 8.2$\cdot 10^{-1}$ &- & 4.4$\cdot 10^{0}$ &- & 7.4$\cdot 10^{0}$ &- \\
$H H H$ & 3.7$\cdot 10^{-6}$ &- & 3.0$\cdot 10^{-4}$ &- & 7.1$\cdot 10^{-3}$ &- & 1.9$\cdot 10^{-2}$ &- \\
\hline
$H Z$ & 1.2$\cdot 10^{0}$ &1.3$\cdot 10^{1}$ & 9.8$\cdot 10^{0}$ &1.4$\cdot 10^{0}$ & 4.5$\cdot 10^{1}$ &6.3$\cdot 10^{-2}$ & 7.4$\cdot 10^{1}$ &1.4$\cdot 10^{-2}$ \\
$H H Z$ & 1.5$\cdot 10^{-4}$ &1.2$\cdot 10^{-1}$ & 9.4$\cdot 10^{-3}$ &3.3$\cdot 10^{-2}$ & 1.4$\cdot 10^{-1}$ &3.7$\cdot 10^{-3}$ & 3.3$\cdot 10^{-1}$ &1.1$\cdot 10^{-3}$ \\
$H H H Z$ & 1.5$\cdot 10^{-8}$ &4.1$\cdot 10^{-4}$ & 4.7$\cdot 10^{-6}$ &1.6$\cdot 10^{-4}$ & 1.9$\cdot 10^{-4}$ &1.6$\cdot 10^{-5}$ & 5.1$\cdot 10^{-4}$ &5.4$\cdot 10^{-6}$ \\
\hline
$H W W$ & 8.9$\cdot 10^{-3}$ &3.8$\cdot 10^{0}$ & 3.0$\cdot 10^{-1}$ &1.1$\cdot 10^{0}$ & 3.4$\cdot 10^{0}$ &1.3$\cdot 10^{-1}$ & 7.6$\cdot 10^{0}$ &4.1$\cdot 10^{-2}$ \\
$H H W W$ & 7.2$\cdot 10^{-7}$ &1.3$\cdot 10^{-2}$ & 2.3$\cdot 10^{-4}$ &1.1$\cdot 10^{-2}$ & 9.1$\cdot 10^{-3}$ &2.8$\cdot 10^{-3}$ & 2.9$\cdot 10^{-2}$ &1.2$\cdot 10^{-3}$ \\
\hline
$H Z Z$ & 2.7$\cdot 10^{-3}$ &3.2$\cdot 10^{-1}$ & 1.2$\cdot 10^{-1}$ &8.2$\cdot 10^{-2}$ & 1.6$\cdot 10^{0}$ &8.8$\cdot 10^{-3}$ & 3.7$\cdot 10^{0}$ &2.5$\cdot 10^{-3}$ \\
$H H Z Z$ & 2.4$\cdot 10^{-7}$ &1.5$\cdot 10^{-3}$ & 9.1$\cdot 10^{-5}$ &9.8$\cdot 10^{-4}$ & 3.9$\cdot 10^{-3}$ &2.5$\cdot 10^{-4}$ & 1.2$\cdot 10^{-2}$ &9.5$\cdot 10^{-5}$ \\
\midrule
\midrule
$W W$ &1.6$\cdot 10^{1}$&2.7$\cdot 10^{3}$&1.2$\cdot 10^{2}$&4.7$\cdot 10^{2}$&5.3$\cdot 10^{2}$&3.2$\cdot 10^{1}$&8.5$\cdot 10^{2}$&8.3$\cdot 10^{0}$\\
$Z Z$ &6.4$\cdot 10^{0}$&1.5$\cdot 10^{2}$&5.6$\cdot 10^{1}$&2.6$\cdot 10^{1}$&2.6$\cdot 10^{2}$&1.8$\cdot 10^{0}$&4.2$\cdot 10^{2}$&4.6$\cdot 10^{-1}$\\
\hline
$W W Z$ &1.1$\cdot 10^{-1}$&5.9$\cdot 10^{1}$&4.1$\cdot 10^{0}$&3.3$\cdot 10^{1}$&5.0$\cdot 10^{1}$&6.3$\cdot 10^{0}$&1.0$\cdot 10^{2}$&2.3$\cdot 10^{0}$\\
$Z Z Z$ &2.3$\cdot 10^{-2}$&9.3$\cdot 10^{-1}$&9.6$\cdot 10^{-1}$&3.5$\cdot 10^{-1}$&1.2$\cdot 10^{1}$&5.4$\cdot 10^{-2}$&2.7$\cdot 10^{1}$&1.9$\cdot 10^{-2}$\\
\bottomrule
\bottomrule
\end{tabular}
\caption{Same as figures \ref{fig:SMt} and \ref{fig:SMh} but tabulated for representative collider energies. For the $t\overline{t}t\overline{t}$ processes, scenario $(i)$ considers mixed EW-QCD production and $(ii)$ considers pure EW production.}
\label{tab:neutralSM}
\end{center}
\end{table}
A second technical issue that requires care is the treatment of unstable particles, and in particular the inclusion of fixed widths $(\Gamma)$ in Breit-Wigner propagators.
While formally suppressed by $\mathcal{O}\left(\Gamma/M\right)$ for resonances of mass $M$,
these terms can break gauge invariance
as well as spoil delicate unitarity cancellations at high energies.
Indeed, we find that these disruptions can grow with energy for some processes and spoil the correctness of our calculations.
A well-known solution is to consider the complex mass scheme~\cite{Denner:1999gp,Denner:2005fg}, an option that is available in {\sc mg5amc}~\cite{Alwall:2014hca}.
However, in this case, all unstable particles can only appear as internal states, not as external ones.
This implies that when modeling each particle in our final state $X$ we always must include a decay channel (or decay channel chain), complicating our work considerably.
We have therefore opted for the solution of simulating external, on-shell $W,Z,H,t$ with all widths set to zero.
In doing so, gauge invariance is automatically preserved.
Moreover, potential singularities in $W,Z,H,t$ propagators are also automatically regulated due to their mass differences.
\begin{table}[h!]
\begin{center}
\renewcommand*{\arraystretch}{1}
\begin{tabular}{l |cc || l |cc}
\toprule
\toprule
\multicolumn{1}{l}{} &$\sigma$ [fb]&$\sqrt s$ [TeV] & \multicolumn{1}{l}{} &$\sigma$ [fb]&$\sqrt s$ [TeV]\\
\hline\hline
$t\bar t$&$8.4\cdot 10^0$ &$4.5$ &$t\bar t ZZ$&$2.2\cdot 10^{-2}$ &$8.4$\\
$t\bar t Z$&$5.3\cdot 10^{-1}$ &$6.9$ &$t\bar t HZ$&$7.0\cdot 10^{-3}$ &$11$\\
$t\bar t H$&$7.6\cdot 10^{-2}$ &$8.2$ & $t\bar t HH$&$5.9\cdot 10^{-4}$ &$13$ \\
$t\bar t WW$&$1.2\cdot 10^{-1}$ &$15$ & $t\bar t t\bar t$&$1.6\cdot10^{-3}$ &$22$\\
\midrule
$HZ$&$4.3\cdot 10^0$ &$1.7$ &$HHWW$&$4.3\cdot 10^{-3}$ &$9.2$\\
$HHZ$&$2.1\cdot 10^{-2}$ &$4.2$ &$HZZ$&$9.4\cdot 10^{-2}$ &$2.7$ \\
$HHHZ$&$4.7\cdot 10^{-5}$ &$6.9$ & $HHZZ$&$5.9\cdot 10^{-4}$ &$5.7$\\
$HWW$&$6.6\cdot 10^{-1}$ &$4.5$ & &\\
\midrule
$WW$&$2.1\cdot 10^2$ &$4.8$ &$WWZ$ &$1.6\cdot 10^{1}$ &$6.2$\\
$ZZ$&$3.9\cdot 10^{1}$ &$2.4$ &$ZZZ$ &$4.8\cdot 10^{-1}$ &$2.3$\\
\bottomrule
\bottomrule
\end{tabular}
\caption{
The value of collider energy $\sqrt s$ [TeV] and the corresponding cross section $\sigma$ [fb] that satisfy $\sigma^{VBF}=\sigma^{s-ch.}$ for processes considered in figures \ref{fig:SMt} and \ref{fig:SMh}.
}
\label{tab:sameXS}
\end{center}
\end{table}
\subsection{$W^+ W^-$ fusion}\label{sec:sm_ww}
We begin our survey by considering the production of up to four heavy particles from $W^+W^-$ fusion (solid lines) and $s$-channel, $\mu^+\mu^-$ annihilation (dashed lines).
As a function of muon collider energy $(\sqrt{s})$ [TeV],
we plot cross sections $(\sigma)$ [fb]
in figure~\ref{fig:SMt} for (a) $t\overline{t}X$ and (b) $t\overline{t}XX$ associated production.
In figure~\ref{fig:SMh} we plot the same for (a) $HX$, (b) $HHX$, and (c) $HHHX$ associated production, and (d) multiboson production.
We summarize our findings in table~\ref{tab:neutralSM} for representative collider energies and processes.
To summarize the global picture: as expected from the different production mechanisms,
$s$-channel annihilation rates categorically decrease with collider energy, falling at least as fast as $\sigma\sim1/s$ once collider energies are far beyond kinematic threshold.
This is contrary to VBF processes, whose cross sections mildly increase with collider energy, growing as powers of $\log(s/M_W^2)$ in the high-energy limit.
Consequentially, we find that for all processes considered there is a $\sqrt{s}$ where VBF production overcomes $s$-channel production.
In table~\ref{tab:sameXS} we report this $\sqrt s$ and the corresponding $\sigma$ at which the $s$-channel and VBF cross sections are the same.
In general, the larger the final state multiplicity, the larger the value of $\sqrt s$ where the cross section curves cross.
A few more remarks are in order.
First, for processes involving a top quark pair, as shown in figure~\ref{fig:SMt},
the $s$-channel cross sections at lower energies of $\mathcal{O}(1)$ TeV are comparable to if not larger than those from VBF at $\mathcal{O}(30)$ TeV, i.e., the highest energy that we consider.
In terms of statistics only, $s$-channel annihilations at lower energies serve as a larger source of $t \bar t $ events.
Hence, one may wonder if there is any gain in going to higher $\sqrt{s}$.
This is addressed at length in section~\ref{sec:eft}.
Here it suffices to say that sensitivity to anomalous couplings greatly improves with increasing $\sqrt{s}$, in particular for VBF processes.
For example:
At lowest order, $\mu^+\mu^-\to \gamma^*/Z^* \to t\overline{t}$ is only sensitive to anomalous $ttZ/\gamma^*$ and $\mu\mu t t$ interactions;
the channel is insensitive, e.g., to unitarity cancellations in the Higgs sector.
This is unlike $W^+W^- \to t\overline{t}$, which is also sensitive to anomalous $WWH, ttH$, and $tWb$ couplings, including relative CP phases.
In addition, the VBF channel features a strong, non-Abelian gauge cancellation, and therefore probes anomalous contributions that are enhanced by energy factors.
A second interesting observation is the hierarchy of $t\overline{t}XX$ production from $W^+W^-$ fusion.
As seen in figure~\ref{fig:SMt_ttVV}, the rates for $t\overline{t}VV$ $(V=W,Z)$ between $\sqrt{s}=3-30{\rm ~TeV}$ systematically sit about an order of magnitude higher than $t\overline{t}HV$,
which in turn is another order of magnitude higher than $t\overline{t}HH$.
In fact, the $t\overline{t}HH$ rate sits just under the mixed EW-QCD $t\overline{t}t\overline{t}$ rate, despite being less phase space-suppressed.
We attribute the strong hierarchy to the relative minus signs among the top quark's Yukawa coupling, the Higgs boson's self-couplings, and the various weak gauge couplings,
which together lead to large destructive interference.
Third, for processes involving neutral bosons $H$ and/or $Z$ in the final state, VBF cross sections are systematically larger than $s$-channel ones already at collider energies of a few TeV.
This follows from the strong suppression of the $\ell\ell Z$ gauge coupling relative to the unsuppressed $\ell\nu W$ gauge interaction.
(Numerically, despite the additional power of $\alpha_W$ it carries, $WW$ fusion still outpaces $s$-channel rates governed by the suppressed vector and axial-vector couplings of electrically charged leptons to $Z$ bosons.)
Among the processes investigated, multi-Higgs production in VBF stands out.
For the specific cases of $HZ$ and $HHZ$ production in figures~\ref{fig:SMh_hxxx} and \ref{fig:SMh_hhxx},
we find that VBF already exceeds $s$-channel annihilation at $\sqrt{s}=2{\rm ~TeV}$ and $4{\rm ~TeV}$, respectively.
Lastly, the energy growth of VBF scattering rates is in general steeper for final states with larger particle multiplicities than for lower ones.
This is due to many reasons.
The first is that the increase in energy crucially opens phase space.
For example: $t\overline{t}WW$ and $t\overline{t}HH$ have kinematic thresholds of $M_{\min}\approx 0.5{\rm ~TeV}$ and $0.6{\rm ~TeV}$,
indicating that their VBF production rates are phase space-starved for $\sqrt{s}\lesssim2-3{\rm ~TeV}$.
The second relates to (collinear) logarithmic enhancements in processes with $t$-channel gauge bosons.
Final states with $m$ gauge bosons entail contributions from the exchange of $m$ $t$-channel gauge bosons.
At very high energies, such contributions become dominant and give rise to cross sections that behave at least as $\sigma \sim \log^m (s/M_V^2)$.
Even though this largest logarithm might not always be the dominant contribution,
we verify that the growth pattern as a function of final-state multiplicity matches this expectation and is clearly visible in the plotted curves.
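To illustrate this interplay, one can calibrate the two-parameter toy model $\sigma^{\rm VBF}\approx A\log^2(s/M_W^2)$ and $\sigma^{s\text{-ch.}}\approx B/s$ to a single column of table~\ref{tab:neutralSM}; the sketch below (a rough calibration to the $WW$ row at $\sqrt{s}=3{\rm ~TeV}$, not a fit to our full results) recovers the tabulated crossover of $\sqrt{s}\approx4.8{\rm ~TeV}$ to within roughly ten percent.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

MW2 = 0.0804**2                # M_W^2 in TeV^2
A = 1.2e2/np.log(9.0/MW2)**2   # from sigma_VBF(WW) at 3 TeV [fb]
B = 4.7e2*9.0                  # from sigma_s-ch(WW) at 3 TeV [fb TeV^2]
xing = brentq(lambda rs: A*np.log(rs**2/MW2)**2 - B/rs**2, 3.0, 30.0)
print(f"toy VBF/s-channel crossover: {xing:.1f} TeV")  # ~5 TeV
\end{verbatim}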
\subsection{$ZZ$, $Z\gamma$, and $\gamma\gamma$ fusion}\label{sec:sm_zz}
\begin{table}[!t]
\begin{center}
\renewcommand*{\arraystretch}{0.95}
\begin{tabular}{l||cccc}
\toprule
\toprule
$\sigma$ [fb] & $\sqrt s=$ 1 TeV & $\sqrt s=$ 3 TeV & $\sqrt s=$ 14 TeV & $\sqrt s=$ 30 TeV\\
\midrule
\midrule
$t \bar{t}$ & 1.0 $\cdot 10^{-1}$ & 1.1 $\cdot 10^{0}$ & 4.3 $\cdot 10^{0}$ & 6.2 $\cdot 10^{0}$ \\
\hline
$t \bar{t} Z$ & 1.2 $\cdot 10^{-4}$ & 6.7 $\cdot 10^{-3}$ & 5.2 $\cdot 10^{-2}$ & 8.5 $\cdot 10^{-2}$ \\
$t \bar{t} H$ & 5.3 $\cdot 10^{-5}$ & 2.8 $\cdot 10^{-3}$ & 2.7 $\cdot 10^{-2}$ & 5.0 $\cdot 10^{-2}$ \\
\hline
\hline
$H$ & 1.5 $\cdot 10^{1}$ & 3.8 $\cdot 10^{1}$ & 7.6 $\cdot 10^{1}$ & 9.6 $\cdot 10^{1}$ \\
$H H$ & 5.0 $\cdot 10^{-3}$ & 7.3 $\cdot 10^{-2}$ & 4.3 $\cdot 10^{-1}$ & 7.5 $\cdot 10^{-1}$ \\
$H H H$ & 3.6 $\cdot 10^{-7}$ & 3.1 $\cdot 10^{-5}$ & 8.4 $\cdot 10^{-4}$ & 2.3 $\cdot 10^{-3}$ \\
\hline
$H W W$ & 3.5 $\cdot 10^{-3}$ & 1.4 $\cdot 10^{-1}$ & 1.7 $\cdot 10^{0}$ & 3.8 $\cdot 10^{0}$ \\
$H Z Z$ & 2.5 $\cdot 10^{-5}$ & 4.9 $\cdot 10^{-4}$ & 3.6 $\cdot 10^{-3}$ & 5.9 $\cdot 10^{-3}$ \\
\hline
\hline
$W W$ & 2.2 $\cdot 10^{1}$ & 1.4 $\cdot 10^{2}$ & 5.2 $\cdot 10^{2}$ & 8.1 $\cdot 10^{2}$ \\
$Z Z$ & 1.2 $\cdot 10^{-1}$ & 4.0 $\cdot 10^{-1}$ & 7.4 $\cdot 10^{-1}$ & 8.0 $\cdot 10^{-1}$ \\
\bottomrule
\bottomrule
\end{tabular}
\caption{
SM cross sections [fb] for sample $ZZ/Z\gamma/\gamma\gamma$ fusion processes (with interference) in $\mu^+\mu^-$ collisions at representative collider energies [TeV].
}
\label{tab:ZZSM}
\end{center}
\end{table}
We continue our survey at a potential multi-TeV $\mu^+\mu^-$ facility by now exploring processes mediated through the neutral gauge bosons $Z$ and $\gamma$.
For a subset of final states considered in section~\ref{sec:sm_ww} for $W^+W^-$ fusion that can instead proceed through $ZZ$, $Z\gamma$, and $\gamma\gamma$ fusion,
we report in table~\ref{tab:ZZSM} cross sections [fb] for representative collider energies.
As described in section~\ref{sec:sm_technicalities} we do not remove diagrams by hand and include $\gamma/Z$ interference.
To regulate phase space singularities, a $p_T$ cut of 30 GeV is applied on outgoing charged leptons.
As foreseen from the scaling of the $ZZ$ luminosity in section~\ref{sec:ppvsmuon}, the cross sections for $ZZ/Z\gamma/\gamma\gamma$ fusion are smaller than for $WW$ by roughly an order of magnitude.
The exceptions to this are (i) $W^+W^-$ production, which is highly comparable to the $W^+W^-\to W^+W^-$ rate, and (ii) $ZZ$ production, which is about two orders of magnitude smaller than $W^+W^-\to ZZ$.
Despite being lower, these rates are not small enough to be neglected.
Indeed, $HH$ production already reaches $\sigma\sim5$ ab at $\sqrt{s}=1{\rm ~TeV}$ and grows to be as large as $\sigma\sim430~(750){\rm ~ab}$ at $\sqrt{s}=14~(30){\rm ~TeV}$.
Moreover, the presence of final-state charged leptons from $Z/\gamma$ splittings, for example, could be exploited to obtain a full reconstruction of the event.
For some particular channels it may also be useful to have charged lepton pairs to better identify a new resonance signal or increase sensitivity to an anomalous coupling.
A simple but important example that is applicable to both the SM and BSM is the production of invisible final states, for example the SM process $WW/ZZ \to H \to 4\nu$.
Whereas the $WW$ production mode would lead to a \textit{totally} invisible final state, the $ZZ$ mode gives a means to tag the process. Numerous BSM examples can also be constructed.
\subsection{$W Z$ and $W\gamma$ scattering}\label{sec:sm_wz}
\begin{table}[t!]
\begin{center}
\renewcommand*{\arraystretch}{0.95}
\begin{tabular}{l||cccccccc}
\toprule
\toprule
\multirow{1}{*}{$\sigma$ [fb]} & \multicolumn{1}{c}{$\sqrt s=$ 1 TeV} & \multicolumn{1}{c}{$\sqrt s=$ 3 TeV} & \multicolumn{1}{c}{$\sqrt s=$ 14 TeV} & \multicolumn{1}{c}{$\sqrt s=$ 30 TeV}\\
\midrule
\midrule
$W$&$9.9\cdot 10^2$& $2.4\cdot 10^3$& $4.6\cdot 10^3$& $5.7\cdot 10^3$\\
\hline
$WZ$&$5.8\cdot 10^0$& $5.0\cdot 10^1$& $2.3\cdot 10^2$& $3.7\cdot 10^2$\\
$WH$&$8.4\cdot 10^{-1}$& $7.2\cdot 10^0$& $3.3\cdot 10^1$& $5.5\cdot 10^1$\\
\hline
$WWW$&$1.4\cdot 10^{-1}$& $4.2\cdot 10^0$& $4.4\cdot 10^1$& $1.0\cdot 10^2$\\
$WZZ$&$1.8\cdot 10^{-2}$& $8.0\cdot 10^{-1}$& $1.0\cdot 10^1$& $2.3\cdot 10^1$\\
$WZH$&$1.7\cdot 10^{-3}$& $8.0\cdot 10^{-2}$& $1.1\cdot 10^0$& $2.5\cdot 10^0$\\
$WHH$&$9.5\cdot 10^{-5}$& $6.2\cdot 10^{-3}$& $9.7\cdot 10^{-2}$& $2.3\cdot 10^{-1}$\\
\hline\hline
$t\bar b$&$4.4\cdot 10^{-1}$& $2.9\cdot 10^0$& $9.5\cdot 10^0$& $1.3\cdot 10^1$\\
\hline
$t\bar b Z$&$1.3\cdot 10^{-3}$& $4.4\cdot 10^{-2}$& $4.1\cdot 10^{-1}$& $8.0\cdot 10^{-1}$\\
$t\bar b H$&$1.5\cdot 10^{-4}$& $6.6\cdot 10^{-3}$& $6.6\cdot 10^{-2}$& $1.3\cdot 10^{-1}$\\
$t\bar tW$&$1.0\cdot 10^{-3}$& $7.6\cdot 10^{-2}$& $9.0\cdot 10^{-1}$& $1.9\cdot 10^0$\\
\bottomrule
\bottomrule
\end{tabular}
\caption{Same as table~\ref{tab:ZZSM} but for $WZ/W\gamma$ fusion.}
\label{tab:chargedSM}
\end{center}
\end{table}
Turning away from final states with zero net electric charge, we now explore processes mediated by $W\gamma$ and $WZ$ fusion.
For several representative processes, we summarize in table~\ref{tab:chargedSM} their cross sections at our benchmark muon collider energies.
We apply a $p_T$ cut of 30 GeV on outgoing charged leptons to regulate phase space singularities.
Once again, following simple scaling arguments of the EWA luminosities in section~\ref{sec:ppvsmuon},
we expect and observe that cross sections here are somewhere between those of $WW$ and $ZZ$ fusion.
With the present VBF configuration, we find that the rates for $VVV$, $VVH$, and $VHH$ production (where $V=W/Z$) all exceed the $\sigma\sim1$ ab threshold at $\sqrt{s}=3{\rm ~TeV}$.
At $\sqrt{s}=1{\rm ~TeV}$, the $VHH$ processes are strongly phase space-suppressed.
At $\sqrt{s}=14{\rm ~TeV}$, we find that the $VVH$ and $VHH$ rates reach roughly the $\sigma\sim1~(0.1)$ fb level and more than double at $\sqrt{s}=30{\rm ~TeV}$.
Moreover, as the final states here are charged, the potential arises for qualitatively different signatures that cannot be produced via $s$-channel annihilations.
For example:
\confirm{
processes such as single $W$ production (with $\sigma\sim\mathcal{O}(1-5)$ pb),
single top quark (with $\sigma\sim\mathcal{O}(0.5-10)$ fb),
as well as
$WWW$ (with $\sigma\sim\mathcal{O}(0.1-100)$ fb)
}
all have appreciable cross sections for $\sqrt{s}=1-30{\rm ~TeV}$.
If one assumes $\mathcal{O}(1-100){\rm ~ab^{-1}}$ datasets, then in these cases,
interesting, ultra rare and ultra exclusive decay channels can be studied.
\subsection{$W^+ W^+$ fusion}\label{sec:sm_samesign}
Finally, we conclude our EW VBF survey by briefly exploring the case of same-sign muon collisions.
This setup allows the production of doubly charged final states and therefore, as we discuss in section~\ref{sec:bsm}, is a natural setup where one can study lepton number-violating processes~\cite{Cai:2017mow}.
For concreteness, we consider $\mu^+\mu^+$ collisions and in table~\ref{tab:doublychargedSM} present the cross sections for representative $VV$ and $VVH$ processes
at our benchmark collider energies.
We report that the production rates for $VV$ and $VVH$ are highly comparable to those for $W^+W^-$ fusion in table~\ref{tab:neutralSM}.
We anticipate this from CP invariance, which dictates that the $W^+W^-$ luminosity in $\mu^+\mu^-$ collisions is the same at lowest order
as the $W^+W^+$ luminosity in $\mu^+\mu^+$ collisions.
Differences between the two sets of rates originate from differences between the $W^+W^-\to X$ and analogous $W^+W^+\to X'$ matrix elements.
In $W^+W^+$ scattering, only $t$-channel exchanges of gauge and scalar bosons are allowed as there does not exist a doubly charged state in the EW sector.
In $W^+W^-$ scattering, these $t$-channel diagrams interfere (constructively and destructively) with allowed $s$-channel diagrams.
\begin{table}[t!]
\begin{center}
\renewcommand*{\arraystretch}{0.95}
\begin{tabular}{l||cccccccc}
\toprule
\toprule
\multirow{1}{*}{$\sigma$ [fb]} & \multicolumn{1}{c}{$\sqrt s=$ 1 TeV} & \multicolumn{1}{c}{$\sqrt s=$ 3 TeV} & \multicolumn{1}{c}{$\sqrt s=$ 14 TeV} & \multicolumn{1}{c}{$\sqrt s=$ 30 TeV}\\
\midrule
\midrule
$W^+ W^+$&2.2$\cdot 10^{1}$&1.4$\cdot 10^{2}$&5.6$\cdot 10^{2}$&9.0$\cdot 10^{2}$\\
$W^+ W^+ Z$&1.2$\cdot 10^{-1}$&4.2$\cdot 10^{0}$&4.9$\cdot 10^{1}$&1.1$\cdot 10^{2}$\\
$W^+ W^+ H$&9.3$\cdot 10^{-3}$&3.1$\cdot 10^{-1}$&3.7$\cdot 10^{0}$&8.5$\cdot 10^{0}$\\
\bottomrule
\bottomrule
\end{tabular}
\caption{SM cross sections [fb] for sample $W^+W^+$ fusion processes in $\mu^+\mu^+$ collisions at representative collider energies [TeV].}\label{tab:doublychargedSM}
\end{center}
\end{table}
\section{Introduction}
The streamline upwind Petrov-Galerkin formulations (SUPG) for solving the unsteady Navier-Stokes equation using the finite element method introduce an artificial diffusion stabilization term, weighted by $\tau_{\mathrm{SUPG}}$, to balance out the negative diffusion inherent to the Galerkin method~\cite{brooks1982streamline, shakib1991new, hughes1986new}.
Traditionally, the steady form of $\tau_{\mathrm{SUPG}}$ is derived from a 1D steady advection-diffusion model problem, such that the added diffusion to Galerkin's formulation is just enough to recover the exact solution, thereby eliminating numerical oscillations at higher element Peclet numbers~\cite{hughes1986new,tezduyar1991stabilized,shakib1989finite}.
While this steady form of $\tau_{\mathrm{SUPG}}$ works well for strongly advective steady-state flows, it is not readily applicable to time-varying flows, as it exhibits poor convergence behavior, particularly at smaller time-step sizes.
The traditional strategy to overcome this issue has been adding a time-step size dependent term to the definition of $\tau_{\mathrm{SUPG}}$ that dominates the contributions associated with the advection and diffusion~\cite{hughes1986new,shakib1989finite}.
This design, which has found widespread use for its excellent convergence characteristics, produces a solution dependency on the time-step size, such that the solution diverges as the time-step size is reduced (see Section~\ref{sec3}).
As a consequence, this conventional design of $\tau_{\mathrm{SUPG}}$ produces an overall inconsistent method that fails to converge to a unique solution as the time-step size approaches zero.
There has been some effort in the past to overcome the inconsistency issue associated with this design of $\tau_{\mathrm{SUPG}}$.
M. Hsu and others introduced an element-vector-based $\tau_{\mathrm{SUPG}}$ that uses the relative magnitude of terms in the Navier-Stokes equations~\cite{hsu2010improving}.
Their proposed formula still has a time-step size dependent term, which is identically zero when the solution is steady but has a contribution to $\tau_{\mathrm{SUPG}}$ if the solution changes in time.
R. Codina and others proposed a subscale-tracking approach that solves a time-dependent ordinary differential equation at each Gauss integration point to evolve $\tau$ in time~\cite{codina2007time}.
This formulation is proven stable and satisfies the global momentum conservation.
This method also eliminates the time-step size dependency for steady-state solutions.
However, the unsteady solution is still not fully consistent as the time-step size changes, especially when using higher-order elements.
The additional computational cost to solve an ordinary differential equation at each Gauss point is also not negligible.
In this study, we propose a new formulation for $\tau_{\mathrm{SUPG}}$ to eliminate the solution's dependency on the time-step size as it approaches zero.
More specifically, we replace the inverse of the time-step size in the definition of the $\tau_{\mathrm{SUPG}}$ with a measure of the flow frequency.
The motivation behind such formulation lies in the spectral formulation of the unsteady Stokes equation where the time-step size dependent parameter in the $\tau_{\mathrm{SUPG}}$ is replaced by the spectral mode number~\cite{esmaily2022stabilized}.
The same parameter, when expressed in a spatio-temporal setting, inspires the use of a flow-dependent time scale in the $\tau_{\mathrm{SUPG}}$ definition, hence motivating the present design.
The article is organized as follows: Section~\ref{sec2} contains the formulation of the stabilized finite element method for the Navier-Stokes equations, the motivation behind the proposed $\tau_{\mathrm{SUPG}}$, and its formulation.
In Section~\ref{sec3}, we present three cases to compare the present formulation against the conventional one: a pipe flow with steady boundary conditions, a 2-dimensional flow over a square, and a modified Blalock-Taussig shunt cardiac flow involving a complex geometry.
We will also analyze the convergence and computational cost of the present formulation.
We will conclude our study in Section~\ref{sec:con}.
\section{Formulation}
\label{sec2}
The Navier-Stokes equations for incompressible flows are stated as
\begin{align}
R_{\mathrm{M}} = \rho\left(\frac{\partial \matr{u}}{\partial t} + \matr{u}\cdot\boldsymbol\nabla\matr{u} - \matr{f}\right)-\boldsymbol\nabla\cdot\boldsymbol\sigma &= 0 \qquad \text{in }\Omega, \\
R_{\mathrm{C}} = \boldsymbol\nabla\cdot\matr{u} &= 0 \qquad \text{in }\Omega,
\end{align}
where $\rho$ is the density, $\matr{u}$ is the velocity, $\matr{f}$ is the external forcing, $\Omega$ is the fluid computational domain, and $\boldsymbol\sigma$ is the stress tensor where
\begin{align}
\boldsymbol\sigma(p,\matr{u}) &= -p\matr{I}+2\mu\boldsymbol\epsilon(\matr{u}), \\
\boldsymbol\epsilon(\matr{u}) &= \frac{1}{2}\left((\boldsymbol\nabla\matr{u})+(\boldsymbol\nabla\matr{u})^\intercal\right).
\end{align}
The Dirichlet and Neumann boundary conditions are defined as
\begin{align}
\matr{u} &= \matr{g} \;\; \mathrm{on \; \Gamma_g}, \\
\matr{n}\cdot\matr{\sigma} &= \matr{h} \;\; \mathrm{on \; \Gamma_h},
\end{align}
respectively, where $\mathrm{\Gamma_g}$ and $\mathrm{\Gamma_h}$ are subsets of the boundary $\Gamma$ where the Dirichlet and Neumann boundaries are prescribed, $\matr{n}$ is the outward unit normal vector, and $\matr{g}$ and $\matr{h}$ are the given Dirichlet and Neumann boundary conditions, respectively.
The SUPG/PSPG finite element formulation of the Navier-Stokes equations is stated as finding $\matr{u}^h \in S_\matr{u}^h$ and $p^h \in S_p^h$ such that for all $\matr{w}^h \in V_\matr{u}^h$ and $q^h \in V_p^h$,
\begin{multline}
\int_\Omega \matr{w}^h\cdot\rho\left(\frac{\partial\matr{u}^h}{\partial t}+\matr{u}^h\cdot\boldsymbol\nabla\matr{u}^h-\matr{f}\right)d\Omega + \int_\Omega\boldsymbol\epsilon\left(\matr{w}^h\right):\boldsymbol\sigma\left(p^h,\matr{u}^h\right)d\Omega -\int_{\Gamma^h}\matr{w}^h\cdot\matr{h}^h d\Gamma +\int_\Omega q^h\boldsymbol\nabla\cdot\matr{u}^h d\Omega \\
+\sum_{e=1}^{n_{el}}\int_{\Omega^e}\frac{1}{\rho}\tau_{\mathrm{SUPG}}\left(\rho\matr{u}^h\cdot\boldsymbol\nabla\matr{w}^h+\boldsymbol\nabla q^h\right)\cdot R_{\mathrm{M}}^h + \nu_{\mathrm{C}} \rho \nabla\cdot \matr{w}^h R_{\mathrm{C}}^h d\Omega= 0,
\end{multline}
where $S_\matr{u}^h$ and $S_p^h$ are the discrete solution spaces for the velocity and pressure, respectively, and $V_\matr{u}^h$ and $V_p^h$ are the finite-dimensional test function spaces for the velocity and pressure, respectively.
Traditionally, $\tau_{\mathrm{SUPG}}$ and $\nu_{\mathrm{C}}$ are given as
\begin{align}
\tau_{\mathrm{SUPG}} &= \left(\left(\frac{2}{\Delta t}\right)^2+\matr{u}^h \cdot \boldsymbol{\xi} \matr{u}^h+ C_{\mathrm{I}}\nu^2\boldsymbol{\xi}:\boldsymbol{\xi}\right)^{-\frac{1}{2}}, \label{eqn1}
\\
\nu_{\mathrm{C}} &= \left( \text{tr} \left(\boldsymbol{\xi} \right) \, \tau_{\mathrm{SUPG}} \right)^{-1},
\end{align}
where $\nu$ is the kinematic viscosity, $\Delta{t}$ is the time-step size, $\boldsymbol{\xi}$ is the covariant tensor obtained from the mapping between the physical and parent elements, and $C_{\mathrm{I}}$ is a shape-function-dependent constant, which is 3 in our study~\cite{shakib1989finite, bazilevs2007variational}.
The term involving $\nu_{\mathrm{C}}$, which appears in the residual-based variational multiscale method (RBVMS)~\cite{bazilevs2007variational}, may be removed from this formulation without affecting the results significantly.
One may verify that this formulation is inconsistent for steady flow given that $\tau_{\mathrm{SUPG}}$, hence the overall added diffusion, depends on the time-step size.
In the later sections, we will demonstrate that this inconsistency is not unique to steady-state flows and also occurs for unsteady flows.
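For reference, a minimal sketch of evaluating Equation~\ref{eqn1} at a single quadrature point is given below; the array passed as $\boldsymbol{\xi}$ is assumed to be the covariant element metric supplied by the element mapping.
\begin{verbatim}
import numpy as np

def tau_supg_conventional(u, xi, nu, dt, C_I=3.0):
    """Conventional SUPG parameter (Equation eqn1) at one Gauss point.
    u  : velocity vector at the point
    xi : covariant tensor from the physical-parent element mapping
    """
    term_t = (2.0/dt)**2
    term_a = u @ xi @ u
    term_d = C_I*nu**2*np.tensordot(xi, xi)  # xi : xi
    return 1.0/np.sqrt(term_t + term_a + term_d)

def nu_c(xi, tau):
    """Grad-div parameter nu_C = 1/(tr(xi)*tau)."""
    return 1.0/(np.trace(xi)*tau)
\end{verbatim}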
In an earlier study, we introduced a pressure-stabilized technique for solving the unsteady Stokes equations expressed in the frequency domain rather than the time domain~\cite{esmaily2022stabilized}.
The resulting complex-valued stabilization parameter was derived systematically by taking the divergence of the momentum equation and estimating the Laplacian in the diffusion term from the characteristic element size.
The modulus of that parameter is
\begin{equation}
\left|\tau\right| \propto \left(\omega^2 + \nu^2\boldsymbol{\xi}:\boldsymbol{\xi}\right)^{-1/2},
\label{eqn:tau1}
\end{equation}
where $\omega$ is the spectral mode appearing as a source term in the frequency formulation of the unsteady Stokes equation.
This spectral formulation of $\tau$ closely resembles the conventional definition of $\tau_{\mathrm{SUPG}}$ in Equation~\ref{eqn1} if $2/\Delta{t}$ is replaced by $\omega$.
The $\matr{u}^h\cdot\boldsymbol{\xi}\matr{u}^h$ term does not appear in Equation~\ref{eqn:tau1} as the advection term is not present in the unsteady Stokes equation.
Adding this term into Equation~\ref{eqn:tau1} and incorporating the elemental shape function constant, $C_{\mathrm{I}}$, results in
\begin{equation}
\tau_{\mathrm{SUPG}} = \left(\omega^2+\matr{u}^h \cdot \boldsymbol{\xi} \matr{u}^h + C_{\mathrm{I}} \nu^2\boldsymbol{\xi}:\boldsymbol{\xi}\right)^{-1/2},
\label{eqn:tauom}
\end{equation}
which is suitable to use in solving the Navier-Stokes equation.
The new definition of $\tau_{\mathrm{SUPG}}$ in Equation~\ref{eqn:tauom} becomes identical to the traditional formulation (Equation~\ref{eqn1}) if $\omega = 2/\Delta{t}$.
This value is close to the largest frequency representable by the time discretization, namely $\pi/\Delta{t}$, which would correspond to a velocity field $\matr{u}(t)$ that oscillates between consecutive time steps.
In practice, however, $\matr{u}(t)$ is a much smoother function, with a frequency content that peaks at a much smaller $\omega$ than $\pi/\Delta{t}$.
The distinction of these two frequencies inspires the proposed definition of $\tau_{\mathrm{SUPG}}$.
It is straightforward to evaluate Equation~\ref{eqn:tauom} in a spectral formulation as $\omega$ is the computed frequency and readily available as an independent parameter.
However, its adoption is not straightforward in a traditional time formulation, where $\omega$ does not appear as an independent parameter.
To overcome this issue, we have considered various options such as locally defining $\omega$ at each elemental Gauss point based on the local flow acceleration and velocity.
Such a definition, however, is observed to produce spurious results since it could produce widely varying values across the computational domain.
Ideally, $\omega$ must be defined so that the overall scheme remains stable.
Secondly, it must be extracted from physical variables, such as the velocity and acceleration, rather than the time-step size, so that $\tau_{\mathrm{SUPG}}$ converges to a unique quantity as the time-step size goes to zero.
Thirdly, $\omega$ must be defined so that it remains proportional to the acceleration term ($\frac{\partial{\matr{u}^h}}{\partial t}$) for convergence purposes.
Fourthly, it should be simple and cost-efficient to calculate.
A definition that satisfies all four criteria above is
\begin{equation}
\omega = \frac{\lVert\frac{\partial{\matr{u}^h}}{\partial t}\rVert_{L^2}}{\lVert\matr{u}^h\rVert_{L^2}},
\label{eqn2}
\end{equation}
where $\lVert \matr{f}\rVert^2_{L^2} = \int_\Omega \lVert\matr{f}\rVert^2 d\Omega$.
This formulation of $\omega$ is designed to go to zero as the flow reaches a steady state, where $\frac{\partial{\matr{u}^h}}{\partial t}$ goes to zero.
We will show in Section~\ref{sec3} that this formulation produces consistent results for both steady-state flows and unsteady flows as the time-step size goes to zero.
We will also show that the present formulation is relatively robust even though it increases the computational cost compared to the conventional method.
The evaluation of $\omega$ itself adds negligible overhead, as it is computed once per time step and reused at all Gauss quadrature points in the entire domain.
In practice, since the velocity field may be initialized from zero, we set $\omega = 2/\Delta{t}$ at the first time step, recovering the conventional definition of $\tau_{\text{SUPG}}$.
Note that since $\omega$ is a slowly varying parameter, we may estimate $\frac{\partial{\matr{u}}}{\partial t}$ and $\matr{u}$ in Equation~\ref{eqn2} using $\frac{\partial{\matr{u}^h}}{\partial t}$ and $\matr{u}^h$ from the previous time step rather than the current time step.
This choice, which is adopted in this study, has little effect on the overall stability of the solver.
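A sketch of the corresponding discrete evaluation is shown below; it assumes the acceleration and velocity fields are available at quadrature points with weights that include the Jacobian determinants, and it simply replaces $2/\Delta{t}$ in the conventional definition with the computed $\omega$.
\begin{verbatim}
import numpy as np

def omega_global(acc, vel, w_q, floor=1e-14):
    """Equation (eqn2) from the previous time step's fields.
    acc, vel : (n_q, n_dim) arrays of du/dt and u at quadrature points
    w_q      : quadrature weights times Jacobian determinants
    """
    num = np.sqrt(np.sum(w_q*np.sum(acc**2, axis=1)))
    den = np.sqrt(np.sum(w_q*np.sum(vel**2, axis=1)))
    return num/max(den, floor)   # guard the zero-velocity start-up

def tau_supg_present(u, xi, nu, omega, C_I=3.0):
    """Equation (tauom): the conventional 2/dt replaced by omega."""
    return 1.0/np.sqrt(omega**2 + u @ xi @ u
                       + C_I*nu**2*np.tensordot(xi, xi))
\end{verbatim}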
The simulations are performed using a validated in-house finite-element solver named Multi-physics finite-element solver (MUPFES)~\cite{moghadam2013modular,esmaily2015bi,esmaily2013new}.
A specialized iterative algorithm, preconditioner, and parallelization strategy are employed for an efficient and scalable solution of the linear system of equations that arises from the discretization of the SUPG formulation~\cite{esmaily2013new,esmaily2015bi,esmaily2015impact,marsden2015multiscale}.
The solver has been verified~\cite{steinman2013variability} and extensively employed for cardiovascular modeling~\cite{esmaily2012optimization,esmaily2015assisted,jia2021efficient}.
\section{Results}
\label{sec3}
Three cases were simulated using both the conventional formulation (Equation~\ref{eqn1}) and the present formulation (Equation~\ref{eqn2}) for $\tau_{\text{SUPG}}$: namely, a 3-dimensional pipe flow, a 2-dimensional flow over a square, and a modified Blalock-Taussig shunt cardiac flow~\cite{esmaily2012optimization}.
All of the simulations are initialized using $\matr{u} = 0$ and continued in time to reach cycle-to-cycle convergence or steady-state solutions.
These cases are selected to represent three stability classes of flow where 1) the boundary conditions and the solution are both steady, 2) the boundary conditions are steady but the solution is unsteady, in our case due to vortex shedding, and 3) the boundary condition and the solution are both unsteady.
For an apples-to-apples comparison, all parameters, except for the $\tau_{\text{SUPG}}$ definition, are kept the same when comparing the present formulation against its conventional counterpart.
All simulations are run on our computing cluster with 1280 available cores at a 2.4 GHz CPU clock rate.
\subsection{Constant flow rate pipe flow}
\label{sec:pipe}
This case involves a steady-state constant flow rate pipe flow, where the pressure drop between the pipe inlet and outlet is predicted using the present and conventional formulations of $\tau_{\text{SUPG}}$.
The pipe geometry has a length of 15 m and a radius of 1 m.
The flow rate is fixed at 10 m\textsuperscript{3}/s in the axial direction at the inlet.
The outlet is a zero Neumann boundary.
The dynamic viscosity is fixed at 1 Pa-s while three different densities, 1.57, 15.71, and 157.08 kg/m\textsuperscript{3}, are used, resulting in three Reynolds numbers, 10, 100, and 1000, respectively.
This way of controlling the Reynolds number ensures the analytical solution of the pressure drop stays constant.
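For reference, the analytical (Poiseuille) pressure drop used for normalization below evaluates, for these parameters, to
\begin{equation*}
\Delta{P}_{\mathrm{ref}} = \cfrac{8\mu L Q}{\pi R^4} = \cfrac{8\,(1)(15)(10)}{\pi\,(1)^4} \approx 382~\mathrm{Pa}.
\end{equation*}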
\begin{figure} [H]
\centering
\captionsetup{position=bottom}
\includegraphics[width=\textwidth]{om_pipe.eps}
\caption{The predicted pressure drop normalized by the analytical solution ($\Delta{P}/\Delta{P}_{\mathrm{ref}}$) for the pipe flow simulations as a function of the time-step size ($\Delta{t}$) for (a) the conventional formulation and (b) the present formulation. Three Reynolds numbers are used for these calculations: 10 (red circle), 100 (blue square), and 1,000 (black triangle).}
\label{fig:pipe}
\end{figure}
The mesh generated for this case contains $207,063$ tetrahedral elements.
Four different time-step sizes are used, $10^{-1}$, $10^{-2}$, $10^{-3}$, and $10^{-4}$ seconds, for each Reynolds number we considered.
The corresponding Courant numbers range from $5.2$ to $5.2\times10^{-3}$ based on the average element size indicating under-resolved to over-resolved time discretizations.
All cases were run in parallel with 32 cores simulated for five seconds.
The linear system is solved using the generalized minimal residual (GMRES) method with a tolerance of $10^{-2}$.
Nonlinear Newton-Raphson iterations are performed in each time step until the $l^2$-norm of the residual drops by 3.5 orders of magnitude relative to its value at the beginning of the time step.
Considering three Reynolds numbers, four time-step sizes, and two formulations, we ran a total of 24 simulations for this case.
All the results for this case are condensed in Figure~\ref{fig:pipe}, which shows the predicted pressure drop, normalized by that of the Poiseuille solution~\cite{sutera1993history}, as a function of the time-step size.
The solutions calculated using the conventional $\tau_{\mathrm{SUPG}}$ formulation diverge as the time-step size is reduced (Figure~\ref{fig:pipe}(a)).
The predicted pressure drop deviates from the analytical solution more rapidly as the Reynolds number increases, with the worst case being 3 times the analytical solution at a Reynolds number of $1,000$ and $\Delta{t} = 10^{-4}$.
This divergence is a result of the inconsistency of the conventional formulation that we discussed earlier in Section~\ref{sec2}.
Since the $\tau_{\text{SUPG}}$ terms act as an element-scale artificial diffusion, as $\tau_{\text{SUPG}}$ increases when the time-step size decreases, the numerically added viscous effect becomes non-negligible, which results in much higher pressure drops than physically expected at smaller time-step sizes.
This effect is further amplified at higher Reynolds numbers when the diffusion effect is much smaller than the advection.
\begin{figure}[H]
\centering
\includegraphics[width=0.475\textwidth]{pipe_om_val.eps}
\caption{Calculated $\omega(t)$ for the pipe flow using the present formulation of $\tau_{\text{SUPG}}$ as the simulation converges, for four different time-step sizes, $\Delta{t} = 10^{-1}$, $10^{-2}$, $10^{-3}$, and $10^{-4}$ seconds. Re = 10.}
\label{fig:pipeomval}
\end{figure}
On the other hand, the flow rate calculated using the present formulation is insensitive to the time-step size, a result that is predicted from a consistent formulation for steady-state flows.
In other words, the $\tau_{\text{SUPG}}$ terms become independent from the time-step size as the simulation converges to a steady state for the present formulation.
The present formulation predicted a pressure drop only $0.7\%$ above the analytical solution for the same case in which the conventional formulation produced its largest error (three times the analytical solution).
The numerical values of $\omega$ in the present formulation of $\tau_{\text{SUPG}}$ (Equation~\ref{eqn:tauom}) all converged to machine epsilon as the simulations reached steady state (Figure~\ref{fig:pipeomval}).
Since $\omega$ in the present formulation is numerically initialized from its conventional-formulation equivalent, the plot also shows that the conventional formulation corresponds to an $\omega$ nearly thirty orders of magnitude larger than the converged value of the present formulation.
The advantages of the present formulation, however, come at a cost of more linear solver iterations per time step (Figure~\ref{fig:pipeitr}).
As a result, the CPU hours when using the present formulation are on average twice that of the conventional formulation.
This increase in cost is acceptable considering the significant improvement in the solution's accuracy and consistency.
\begin{figure} []
{\captionsetup{position=bottom}
\centering
\includegraphics[width=0.7\textwidth]{pipe_itr.eps}
\caption{Average number of linear solver (GMRES) iterations per time step ($\bar{N}_{\mathrm{lsitr}}$) over the time-step size ($\Delta{t}$) for three Reynolds numbers: 10 (red circle), 100 (blue square), and 1,000 (black triangle) for the conventional formulation (solid line) and the present formulation (dot-dashed line) of $\tau_{\text{SUPG}}$.}
\label{fig:pipeitr}
}
\end{figure}
\subsection{Flow over a square}
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{mshsq.eps}
\caption{The mesh constructed for the flow over a square obstacle simulation.}
\label{fig:meshsq}
\end{figure}
In this case, we consider a two-dimensional unsteady flow over a square object that is subjected to a steady boundary condition.
The geometry and mesh used for this case are shown in Figure~\ref{fig:meshsq}.
The square has a side length of 1 m in a 12 m by 29.2 m fluid domain.
The square obstacle is centered vertically and placed at a distance of 5 m from the inlet on the left side of the domain.
Uniform horizontal flow with a velocity magnitude of 51.3 m/s is prescribed at the inlet, and the outlet is a zero Neumann boundary.
The top and bottom of the domain are both no-penetration boundaries ($u_y$ = 0 m/s) with zero traction in the horizontal direction.
The fluid has a density of $1.18 \times 10^{-3}$ kg/m\textsuperscript{3} and a dynamic viscosity of $1.82 \times 10^{-4}$ Pa-s.
The Reynolds number of this case is 332, which results in vortex shedding downstream of the obstacle.
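This value follows directly from the quantities above, taking the side length as the characteristic length:
\begin{verbatim}
rho, mu = 1.18e-3, 1.82e-4  # density and viscosity from above
U, D = 51.3, 1.0            # inlet velocity and side length
Re = rho * U * D / mu       # ~ 332.6, quoted as 332 above
\end{verbatim}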
The mesh generated for this case contains $28,502$ triangular elements.
Three different time-step sizes are considered, $10^{-3}$, $4 \times 10^{-4}$, and $10^{-4}$ seconds.
The GMRES tolerance is set to $5 \times 10^{-2}$ for this case.
The nonlinear Newton-Raphson iteration tolerance is set to $10^{-3.75}$.
The simulations are continued for 5 seconds to ensure solution convergence.
We used 16 cores to perform these calculations.
\begin{figure} [H]
\centering
\captionsetup{position=bottom}
\includegraphics[width=\textwidth]{om_square.eps}
\caption{Predicted lift on the obstacle ($F$\textsubscript{L}$(t)$) using $\Delta{t} = 10^{-3}$ (red circle), $4 \times 10^{-4}$ (blue square), and $10^{-4}$ (black triangle) seconds, where $\tau_{\text{SUPG}}$ is computed using (a) the conventional formulation and (b) the present formulation of $\tau_{\text{SUPG}}$. }
\label{fig:oversq}
\end{figure}
The lift exerted on the square obstacle during the last 0.5 seconds of the simulation is shown in Figure~\ref{fig:oversq}.
Consistent with what we observed with the pipe flow in Section~\ref{sec:pipe}, this case also shows significant improvement in the results when the present design of $\tau_{\text{SUPG}}$ is adopted.
The conventional formulation predicts a reduced amplitude and an increased frequency as the time-step size goes to zero.
This behavior is consistent with an increase in overall diffusion, which is caused by the growth of $\tau_{\text{SUPG}}$ as the time-step size goes to zero in the conventional formulation.
In contrast, the present formulation shows negligible dependency on $\Delta{t}$.
More specifically, the lift oscillation amplitude drops by 58\%, from 1.62 to 0.68, for the conventional formulation when the time-step size is reduced by a factor of ten, from $10^{-3}$ to $10^{-4}$ seconds.
On the other hand, the same change in the time-step size results in just a 0.008\% change in the amplitude using the present formulation of $\tau_{\text{SUPG}}$.
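The quoted reduction for the conventional formulation is simply
\begin{verbatim}
A_coarse, A_fine = 1.62, 0.68          # amplitudes from above
drop = (A_coarse - A_fine) / A_coarse  # = 0.58, i.e. the 58%
\end{verbatim}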
The contrast between the two methods can also be observed when comparing the pressure contours in Figure~\ref{fig:oversqcont}.
The snapshots shown in this figure are taken when the obstacle experiences a maximum lift.
The dependency and lack of dependency on the time-step size of the conventional and present formulations, respectively, are evident in this figure.
\begin{figure} [H]
{\captionsetup{position=bottom}
\centering
\includegraphics[width=\textwidth]{pressure_cont.eps}
\caption{Pressure contours for the flow over a square obstacle (Figure~\ref{fig:meshsq}), captured at maximum lift. (a) and (c) are the results obtained from the conventional formulation while (b) and (d) are the results obtained from the present formulation. (a) and (b) are obtained using $\Delta{t} = 10^{-3}$ and (c) and (d) using $\Delta{t} = 10^{-4}$.}
\label{fig:oversqcont}
}
\end{figure}
Similar to the steady pipe flow, the convergence of the solutions as the time-step size decreases is determined by the convergence of $\omega$.
With the present formulation, the value of $\omega$, although not steady, converges to identical profiles as the time-step size is reduced from $10^{-3}$ to $10^{-4}$ (Figure~\ref{fig:sqomval}), a feature the conventional formulation (Equation~\ref{eqn1}) lacks.
The average number of linear solver iterations per time step for the present formulation is roughly three times that of the conventional formulation, resulting in more than twice the average CPU hours.
This increase in cost for the present formulation should be considered acceptable given the consistency of the predictions.
\begin{figure}[H]
\centering
\includegraphics[width=0.475\textwidth]{sq_om_val.eps}
\caption{Calculated $\omega(t)$ for the flow over a square case using the present formulation of $\tau_{\text{SUPG}}$ after the simulations converge for two different time-step sizes, $\Delta{t} = 10^{-3}$ and $10^{-4}$.}
\label{fig:sqomval}
\end{figure}
\subsection{Modified Blalock-Taussig shunt}
This case is a real-life application: the simulation of cardiac blood flow in infants undergoing the modified Blalock-Taussig shunt procedure~\cite{esmaily2012optimization,esmaily2015assisted}.
The blood flow in this case can be partially chaotic in different sections of the circulation and at different times of the cardiac cycle.
The geometry contains multiple branches, the most significant of which are the ascending and descending aorta (Figure~\ref{fig:btshuntgeo}).
An unsteady flow is imposed at the ascending aorta, which is extracted from earlier multi-domain simulation results~\cite{jia2021efficient,jia2022characterization}.
Zero Neumann boundary conditions are imposed on all other branches.
The blood is assumed Newtonian with a density of $1.06$ g/cm\textsuperscript{3} and a dynamic viscosity of $0.04$ g/cm-s.
The Reynolds number ranges from 50 to 1300 depending on which branch is considered and the time in the cardiac cycle.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{btshunt.eps}
\caption{Mesh geometry and inlet condition for the modified Blalock-Taussig shunt simulation.}
\label{fig:btshuntgeo}
\end{figure}
The mesh contains $400,936$ tetrahedral elements.
Three time-step sizes are simulated, $2.5 \times 10^{-2}$, $2.5 \times 10^{-3}$, and $2.5 \times 10^{-4}$ seconds.
The GMRES linear solver and the Newton-Raphson iteration tolerances are set to $10^{-2}$ and $10^{-2.5}$, respectively.
All simulations are run in parallel with 288 processors.
Simulations are continued for at least six cardiac cycles (3 seconds) to ensure cycle-to-cycle convergence.
\begin{figure} [h]
\centering
\captionsetup{position=bottom}
\includegraphics[width=\textwidth]{om_mbts.eps}
\caption{The predicted flow rate through the descending aorta ($Q_{AoD}(t)$) using $\Delta{t} = 2.5 \times 10^{-2}$ (dotted), $2.5 \times 10^{-3}$ (red circle), and $2.5 \times 10^{-4}$ (blue square) seconds, when the computations are performed using (a) the conventional formulation and (b) the present formulation of $\tau_{\text{SUPG}}$. }
\label{fig:mbts}
\end{figure}
The flow rate at the descending aorta (Figure~\ref{fig:btshuntgeo}), which is a good representation of the flow behavior in the remaining branches, is plotted over one cardiac cycle in Figure~\ref{fig:mbts}.
The same trend observed in the previous two cases also appears in this case.
The conventional formulation produces a prediction that changes by approximately 20\% as the time-step size is reduced from $2.5 \times 10^{-2}$ to $2.5 \times 10^{-4}$ seconds.
The present formulation, on the contrary, produces consistent results that are independent of the time-step size, benefiting from a converging formulation of $\tau_{\text{SUPG}}$.
Interestingly, the computational cost for the present formulation in this case is only around 50\% more than that of the conventional formulation, both in terms of the number of linear solver iterations and CPU hours.
This indicates that the cost advantage of the conventional formulation over the present one may diminish as the simulation becomes more complex and the overall computational cost is no longer dominated solely by the convergence of the linear solver.
\section{Conclusion}
\label{sec:con}
A new design is proposed for the time constant that appears in the streamline upwind Petrov-Galerkin ($\tau_{\text{SUPG}}$) formulations.
The new $\tau_{\text{SUPG}}$, which uses a flow time-scale (rather than the time-step size, $\Delta{t}$) to account for the contribution of the acceleration term, successfully produces an overall stable technique that is consistent with regard to the time-step size.
As a result, the new method overcomes the historical limitation of the conventional formulation of $\tau_{\text{SUPG}}$, which results in solution divergence as the time-step size goes to zero.
The new definition of $\tau_{\text{SUPG}}$ (Equation~\ref{eqn:tauom}) is simple to implement.
The various cases considered in this study showed that the new formulation comes at the cost of an increased number of linear solver iterations.
In exchange, however, the results remain independent of the time-step size as it approaches zero.
\bibliographystyle{unsrt}
\label{sec:embeddingcnotzx}
The following basic identity is needed:
\begin{lemma} \cite[Lemma 19]{vilmart}
\label{lem:dim}
\label{lem:dim:proof}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (1, -0) {};
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (1, 1) {};
\node [style=zmul] (1) at (1, -0) {};
\node [style=xmul] (2) at (2, -0) {};
\node [style=xmul] (3) at (2, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (1);
\draw [style=simple] (0) to (3);
\end{pgfonlayer}
\end{tikzpicture}
\end{lemma}
\begin{proof}
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (1, -0) {};
\end{pgfonlayer}
\end{tikzpicture}
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (4, 1) {};
\node [style=xmul] (1) at (3, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (4, 1) {};
\node [style=xmul] (1) at (2, 1) {};
\node [style=xmul] (2) at (3, 1) {};
\node [style=xmul] (3) at (5, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (0);
\draw [style=simple, bend left=45, looseness=1.25] (0) to (2);
\draw [style=simple] (2) to (1);
\draw [style=simple, bend right=45, looseness=1.25] (0) to (2);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (4, 1) {};
\node [style=zmul] (1) at (2, 1) {};
\node [style=zmul] (2) at (3, 1) {};
\node [style=zmul] (3) at (5, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (0);
\draw [style=simple, bend left=45, looseness=1.25] (0) to (2);
\draw [style=simple] (2) to (1);
\draw [style=simple, bend right=45, looseness=1.25] (0) to (2);
\end{pgfonlayer}
\end{tikzpicture} & \ref{ZX.pi:L}\times 2\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (4, 1) {};
\node [style=zmul] (1) at (3, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (3, 1) {};
\node [style=zmul] (1) at (4, 1) {};
\node [style=zmul] (2) at (2, 1.5) {};
\node [style=zmul] (3) at (2, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw [style=simple, in=-153, out=0, looseness=1.00] (3) to (0);
\draw [style=simple, in=0, out=153, looseness=1.00] (0) to (2);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (3, 1) {};
\node [style=xmul] (1) at (4, 1) {};
\node [style=zmul] (2) at (2, 1.5) {};
\node [style=zmul] (3) at (2, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw [style=simple, in=-153, out=0, looseness=1.00] (3) to (0);
\draw [style=simple, in=0, out=153, looseness=1.00] (0) to (2);
\end{pgfonlayer}
\end{tikzpicture}& \ref{ZX.pi:L}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (1, 1) {};
\node [style=zmul] (1) at (1, -0) {};
\node [style=xmul] (2) at (2, -0) {};
\node [style=xmul] (3) at (2, 1) {};
\node [style=xmul] (4) at (3, 1) {};
\node [style=xmul] (5) at (4, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (1);
\draw [style=simple] (0) to (3);
\draw [style=simple] (5) to (4);
\draw [style=simple, bend right, looseness=1.25] (4) to (3);
\draw [style=simple, bend right, looseness=1.25] (3) to (4);
\end{pgfonlayer}
\end{tikzpicture}& \ref{ZX.pi:B.U}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (1, 1) {};
\node [style=zmul] (1) at (1, -0) {};
\node [style=xmul] (2) at (2, -0) {};
\node [style=xmul] (3) at (2, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (1);
\draw [style=simple] (0) to (3);
\end{pgfonlayer}
\end{tikzpicture}
\end{align*}
\end{proof}
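As a sanity check that is independent of the diagrammatic argument, both sides of Lemma~\ref{lem:dim} can be evaluated numerically. The conventions below are the standard matrix semantics of phase-free spiders and are an assumption on our part (the interpretation is fixed formally elsewhere): a one-legged phase-$0$ $Z$ spider is $|0\rangle + |1\rangle$, a one-legged phase-$0$ $X$ spider is $|+\rangle + |-\rangle = \sqrt{2}\,|0\rangle$, and a zero-legged phase-$0$ $X$ spider is the scalar $1 + e^{i0} = 2$.
\begin{verbatim}
import numpy as np

z1 = np.array([1.0, 1.0])                  # |0> + |1>
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
x1 = plus + minus                          # sqrt(2)|0>

dumbbell = z1 @ x1                         # one Z--X pair: sqrt(2)
assert np.isclose(2.0, dumbbell ** 2)      # Lemma: 2 = (sqrt 2)^2
\end{verbatim}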
We also need:
\begin{lemma}
\label{lem:cnottozx}
\begin{enumerate}[label=(\roman*)]
\item
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2, -1) {};
\node [style=rn] (1) at (0, -1) {};
\node [style=oplus] (2) at (1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (2);
\draw [style=simple] (2) to (1);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2, -1) {};
\node [style=rn] (1) at (0, -1) {};
\node [style=xmul] (2) at (1, -1) {$\pi$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw (2) to (1);
\end{pgfonlayer}
\end{tikzpicture}
$
\item
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (0, 0) {};
\node [style=none] (1) at (1, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (2, 3) {};
\node [style=none] (1) at (3, 3) {};
\node [style=xmul] (2) at (3, 4) {};
\node [style=zmul] (3) at (2, 4) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (1.center);
\draw [style=simple, bend right=45, looseness=1.25] (2) to (3);
\draw [style=simple, bend right=45, looseness=1.25] (3) to (2);
\draw [style=simple] (2) to (3);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{.5cm}
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (0, 0) {};
\node [style=none] (1) at (1, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (2, 3) {};
\node [style=none] (1) at (3, 3) {};
\node [style=xmul] (2) at (3, 4) {};
\node [style=zmul] (3) at (2, 4) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (1.center);
\draw [style=simple, bend right=45, looseness=1.25] (2) to (3);
\draw [style=simple, bend right=45, looseness=1.25] (3) to (2);
\draw [style=simple] (2) to (3);
\end{pgfonlayer}
\end{tikzpicture}
$
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}[label=(\roman*)]
\item
\begingroup
\allowdisplaybreaks
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2, -1) {};
\node [style=rn] (1) at (0, -1) {};
\node [style=oplus] (2) at (1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (2);
\draw [style=simple] (2) to (1);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2.5, -1) {};
\node [style=rn] (1) at (-0.5, -1) {};
\node [style=xmul] (2) at (2.5, -0) {$\pi$};
\node [style=xmul] (3) at (-0.5, -0) {$\pi$};
\node [style=zmul] (4) at (1, -0) {};
\node [style=xmul] (5) at (1, -1) {};
\node [style=zmul] (6) at (0.5, 2) {};
\node [style=xmul] (7) at (1.5, 2) {};
\node [style=zmul] (8) at (-0.5, 1) {};
\node [style=xmul] (9) at (0.5, 1) {};
\node [style=zmul] (10) at (1.5, 1) {};
\node [style=xmul] (11) at (2.5, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (5) to (0);
\draw [style=simple] (4) to (2);
\draw [style=simple] (1) to (5);
\draw [style=simple] (3) to (4);
\draw [style=simple] (5) to (4);
\draw [style=simple] (7) to (6);
\draw [style=simple, bend left=45, looseness=1.25] (9) to (8);
\draw [bend left=45, looseness=1.25] (8) to (9);
\draw (9) to (8);
\draw [style=simple, bend left=45, looseness=1.25] (11) to (10);
\draw [bend left=45, looseness=1.25] (10) to (11);
\draw (11) to (10);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3, -1) {};
\node [style=rn] (1) at (0, -1) {};
\node [style=xmul] (2) at (3, -0) {};
\node [style=xmul] (3) at (0, -0) {};
\node [style=zmul] (4) at (1.5, -0) {};
\node [style=xmul] (5) at (1.5, -1) {$\pi$};
\node [style=zmul] (6) at (0, 1) {};
\node [style=xmul] (7) at (1, 1) {};
\node [style=zmul] (8) at (2, 1) {};
\node [style=xmul] (9) at (3, 1) {};
\node [style=xmul] (10) at (2, 2) {};
\node [style=zmul] (11) at (1, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (5) to (0);
\draw [style=simple] (4) to (2);
\draw [style=simple] (1) to (5);
\draw [style=simple] (3) to (4);
\draw [style=simple] (5) to (4);
\draw [style=simple, bend left=45, looseness=1.25] (7) to (6);
\draw [bend left=45, looseness=1.25] (6) to (7);
\draw (7) to (6);
\draw [style=simple, bend left=45, looseness=1.25] (9) to (8);
\draw [bend left=45, looseness=1.25] (8) to (9);
\draw (9) to (8);
\draw [style=simple] (11) to (10);
\end{pgfonlayer}
\end{tikzpicture} & \ref{ZX.pi:IV}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3, -1) {};
\node [style=rn] (1) at (0, -1) {};
\node [style=xmul] (2) at (3, -0) {};
\node [style=xmul] (3) at (1, -0) {};
\node [style=xmul] (4) at (1, -1) {$\pi$};
\node [style=zmul] (5) at (0, 1) {};
\node [style=xmul] (6) at (1, 1) {};
\node [style=zmul] (7) at (2, 1) {};
\node [style=xmul] (8) at (3, 1) {};
\node [style=xmul] (9) at (2, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (4) to (0);
\draw [style=simple] (1) to (4);
\draw [style=simple, bend left=45, looseness=1.25] (6) to (5);
\draw [bend left=45, looseness=1.25] (5) to (6);
\draw (6) to (5);
\draw [style=simple, bend left=45, looseness=1.25] (8) to (7);
\draw [bend left=45, looseness=1.25] (7) to (8);
\draw (8) to (7);
\draw (4) to (3);
\draw (9) to (2);
\end{pgfonlayer}
\end{tikzpicture}& \ref{ZX.pi:B.U}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3, -1) {};
\node [style=rn] (1) at (0, -1) {};
\node [style=xmul] (2) at (1.5, -0) {};
\node [style=xmul] (3) at (1.5, -1) {$\pi$};
\node [style=zmul] (4) at (0, 1) {};
\node [style=xmul] (5) at (1, 1) {};
\node [style=zmul] (6) at (2, 1) {};
\node [style=xmul] (7) at (3, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (0);
\draw [style=simple] (1) to (3);
\draw [style=simple, bend left=45, looseness=1.25] (5) to (4);
\draw [bend left=45, looseness=1.25] (4) to (5);
\draw (5) to (4);
\draw [style=simple, bend left=45, looseness=1.25] (7) to (6);
\draw [bend left=45, looseness=1.25] (6) to (7);
\draw (7) to (6);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3, -1) {};
\node [style=rn] (1) at (0, -1) {};
\node [style=xmul] (2) at (1.5, -1) {$\pi$};
\node [style=zmul] (3) at (0, 1) {};
\node [style=xmul] (4) at (1, 1) {};
\node [style=zmul] (5) at (2, 1) {};
\node [style=xmul] (6) at (3, 1) {};
\node [style=zmul] (7) at (0, -0) {};
\node [style=xmul] (8) at (1, -0) {};
\node [style=zmul] (9) at (2, -0) {};
\node [style=xmul] (10) at (3, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (0);
\draw [style=simple] (1) to (2);
\draw [style=simple, bend left=45, looseness=1.25] (4) to (3);
\draw [bend left=45, looseness=1.25] (3) to (4);
\draw (4) to (3);
\draw [style=simple, bend left=45, looseness=1.25] (6) to (5);
\draw [bend left=45, looseness=1.25] (5) to (6);
\draw (6) to (5);
\draw [style=simple] (8) to (7);
\draw [style=simple] (10) to (9);
\end{pgfonlayer}
\end{tikzpicture}& \text{Lemma \ref{lem:dim}}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2, -1) {};
\node [style=rn] (1) at (0, -1) {};
\node [style=xmul] (2) at (1, -1) {$\pi$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw (2) to (1);
\end{pgfonlayer}
\end{tikzpicture}& \ref{ZX.pi:IV}\\
\end{align*}
\endgroup
\item
\begingroup
\allowdisplaybreaks
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (0, 0) {};
\node [style=none] (1) at (1, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
&:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (1, 0) {};
\node [style=onein] (1) at (-1, 0) {};
\node [style=oplus] (2) at (0, 0) {};
\node [style=dot] (3) at (0, 1) {};
\node [style=oneout] (4) at (1, 1) {};
\node [style=onein] (5) at (-1, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0.center) to (2);
\draw (2) to (1);
\draw [style=simple] (2) to (3);
\draw [style=simple] (3) to (4);
\draw [style=simple] (5) to (3);
\end{pgfonlayer}
\end{tikzpicture}\\
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (-1, 4) {$\pi$};
\node [style=xmul] (1) at (-1, 2.75) {$\pi$};
\node [style=xmul] (2) at (3, 4) {$\pi$};
\node [style=zmul] (3) at (1, 4) {};
\node [style=xmul] (4) at (1, 2.75) {};
\node [style=none] (5) at (3, 2.75) {};
\node [style=zmul] (6) at (-0.5, 5) {};
\node [style=xmul] (7) at (0.5, 5) {};
\node [style=zmul] (8) at (-0.5, 6) {};
\node [style=xmul] (9) at (0.5, 6) {};
\node [style=zmul] (10) at (1.5, 5) {};
\node [style=xmul] (11) at (2.5, 5) {};
\node [style=zmul] (12) at (1.5, 6) {};
\node [style=xmul] (13) at (2.5, 6) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (4) to (5.center);
\draw [style=simple] (4) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple] (3) to (0);
\draw [style=simple] (1) to (4);
\draw [style=simple] (6) to (7);
\draw [style=simple, bend right=45, looseness=1.25] (8) to (9);
\draw [style=simple, bend left=45, looseness=1.25] (8) to (9);
\draw [style=simple] (9) to (8);
\draw [style=simple, bend right=45, looseness=1.25] (10) to (11);
\draw [style=simple, bend left=45, looseness=1.25] (10) to (11);
\draw [style=simple] (11) to (10);
\draw [style=simple, bend right=45, looseness=1.25] (12) to (13);
\draw [style=simple, bend left=45, looseness=1.25] (12) to (13);
\draw [style=simple] (13) to (12);
\end{pgfonlayer}
\end{tikzpicture} \\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (-1, 4) {};
\node [style=xmul] (1) at (3, 4) {};
\node [style=zmul] (2) at (1, 4) {};
\node [style=zmul] (3) at (-0.5, 5) {};
\node [style=xmul] (4) at (0.5, 5) {};
\node [style=zmul] (5) at (-0.5, 6) {};
\node [style=xmul] (6) at (0.5, 6) {};
\node [style=zmul] (7) at (1.5, 5) {};
\node [style=xmul] (8) at (2.5, 5) {};
\node [style=zmul] (9) at (1.5, 6) {};
\node [style=xmul] (10) at (2.5, 6) {};
\node [style=xmul] (11) at (1, 2) {};
\node [style=xmul] (12) at (-1, 2) {$\pi$};
\node [style=none] (13) at (3, 2) {};
\node [style=xmul] (14) at (1, 3) {$\pi$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (1);
\draw [style=simple] (2) to (0);
\draw [style=simple] (3) to (4);
\draw [style=simple, bend right=45, looseness=1.25] (5) to (6);
\draw [style=simple, bend left=45, looseness=1.25] (5) to (6);
\draw [style=simple] (6) to (5);
\draw [style=simple, bend right=45, looseness=1.25] (7) to (8);
\draw [style=simple, bend left=45, looseness=1.25] (7) to (8);
\draw [style=simple] (8) to (7);
\draw [style=simple, bend right=45, looseness=1.25] (9) to (10);
\draw [style=simple, bend left=45, looseness=1.25] (9) to (10);
\draw [style=simple] (10) to (9);
\draw [style=simple] (11) to (13.center);
\draw [style=simple] (12) to (11);
\draw [style=simple] (11) to (14);
\draw [style=simple] (14) to (2);
\end{pgfonlayer}
\end{tikzpicture}& \ref{ZX.pi:PI}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (-1.5, 4) {};
\node [style=xmul] (1) at (3, 4) {};
\node [style=zmul] (2) at (1, 4) {};
\node [style=zmul] (3) at (-0.5, 5) {};
\node [style=xmul] (4) at (0.5, 5) {};
\node [style=zmul] (5) at (-0.5, 6) {};
\node [style=xmul] (6) at (0.5, 6) {};
\node [style=zmul] (7) at (1.5, 5) {};
\node [style=xmul] (8) at (2.5, 5) {};
\node [style=zmul] (9) at (1.5, 6) {};
\node [style=xmul] (10) at (2.5, 6) {};
\node [style=xmul] (11) at (1, 3) {};
\node [style=none] (12) at (3, 3) {};
\node [style=xmul] (13) at (-1.5, 3) {$\pi$};
\node [style=xmul] (14) at (-0.25, 3) {$\pi$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (1);
\draw [style=simple] (2) to (0);
\draw [style=simple] (3) to (4);
\draw [style=simple, bend right=45, looseness=1.25] (5) to (6);
\draw [style=simple, bend left=45, looseness=1.25] (5) to (6);
\draw [style=simple] (6) to (5);
\draw [style=simple, bend right=45, looseness=1.25] (7) to (8);
\draw [style=simple, bend left=45, looseness=1.25] (7) to (8);
\draw [style=simple] (8) to (7);
\draw [style=simple, bend right=45, looseness=1.25] (9) to (10);
\draw [style=simple, bend left=45, looseness=1.25] (9) to (10);
\draw [style=simple] (10) to (9);
\draw [style=simple] (11) to (12.center);
\draw [style=simple] (11) to (2);
\draw [style=simple] (11) to (14);
\draw [style=simple] (14) to (13);
\end{pgfonlayer}
\end{tikzpicture}& \ref{ZX.pi:PH}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (-1.5, 4) {};
\node [style=xmul] (1) at (0, 4) {};
\node [style=zmul] (2) at (-0.5, 5) {};
\node [style=xmul] (3) at (0.5, 5) {};
\node [style=zmul] (4) at (1.5, 5) {};
\node [style=xmul] (5) at (2.5, 5) {};
\node [style=zmul] (6) at (0.5, 6) {};
\node [style=xmul] (7) at (1.5, 6) {};
\node [style=xmul] (8) at (1, 3) {};
\node [style=none] (9) at (3, 3) {};
\node [style=xmul] (10) at (-1.5, 3) {$\pi$};
\node [style=xmul] (11) at (-0.25, 3) {$\pi$};
\node [style=xmul] (12) at (1, 4) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, bend right=45, looseness=1.25] (2) to (3);
\draw [style=simple, bend left=45, looseness=1.25] (2) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple, bend right=45, looseness=1.25] (4) to (5);
\draw [style=simple, bend left=45, looseness=1.25] (4) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple, bend right=45, looseness=1.25] (6) to (7);
\draw [style=simple, bend left=45, looseness=1.25] (6) to (7);
\draw [style=simple] (7) to (6);
\draw [style=simple] (8) to (9.center);
\draw [style=simple] (8) to (11);
\draw [style=simple] (11) to (10);
\draw [style=simple] (8) to (12);
\draw [style=simple] (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}& \ref{ZX.pi:B.U}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (-1.5, 4) {};
\node [style=xmul] (1) at (0, 4) {};
\node [style=zmul] (2) at (-0.5, 5) {};
\node [style=xmul] (3) at (0.5, 5) {};
\node [style=zmul] (4) at (1.5, 5) {};
\node [style=xmul] (5) at (2.5, 5) {};
\node [style=zmul] (6) at (0.5, 6) {};
\node [style=xmul] (7) at (1.5, 6) {};
\node [style=xmul] (8) at (1, 3) {};
\node [style=none] (9) at (3, 3) {};
\node [style=xmul] (10) at (1, 4) {};
\node [style=xmul] (11) at (-1, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, bend right=45, looseness=1.25] (2) to (3);
\draw [style=simple, bend left=45, looseness=1.25] (2) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple, bend right=45, looseness=1.25] (4) to (5);
\draw [style=simple, bend left=45, looseness=1.25] (4) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple, bend right=45, looseness=1.25] (6) to (7);
\draw [style=simple, bend left=45, looseness=1.25] (6) to (7);
\draw [style=simple] (7) to (6);
\draw [style=simple] (8) to (9.center);
\draw [style=simple] (8) to (10);
\draw [style=simple] (1) to (0);
\draw [style=simple] (11) to (8);
\end{pgfonlayer}
\end{tikzpicture}& \ref{ZX.pi:PP}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (1.25, 4) {};
\node [style=xmul] (1) at (2.75, 4) {};
\node [style=zmul] (2) at (0.5, 5.25) {};
\node [style=xmul] (3) at (1.5, 5.25) {};
\node [style=zmul] (4) at (2.5, 5.25) {};
\node [style=xmul] (5) at (3.5, 5.25) {};
\node [style=xmul] (6) at (1, 3) {};
\node [style=none] (7) at (3, 3) {};
\node [style=zmul] (8) at (1.5, 6.25) {};
\node [style=xmul] (9) at (2.5, 6.25) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, bend right=45, looseness=1.25] (2) to (3);
\draw [style=simple, bend left=45, looseness=1.25] (2) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple, bend right=45, looseness=1.25] (4) to (5);
\draw [style=simple, bend left=45, looseness=1.25] (4) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple] (6) to (7.center);
\draw [style=simple] (1) to (0);
\draw [style=simple, bend right=45, looseness=1.25] (8) to (9);
\draw [style=simple, bend left=45, looseness=1.25] (8) to (9);
\draw [style=simple] (9) to (8);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (0.5, 5.25) {};
\node [style=xmul] (1) at (1.5, 5.25) {};
\node [style=zmul] (2) at (2.5, 5.25) {};
\node [style=xmul] (3) at (3.5, 5.25) {};
\node [style=xmul] (4) at (1, 3) {};
\node [style=none] (5) at (3, 3) {};
\node [style=xmul] (6) at (2.5, 4) {};
\node [style=zmul] (7) at (1.5, 4) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, bend left=45, looseness=1.25] (0) to (1);
\draw [style=simple] (1) to (0);
\draw [style=simple, bend right=45, looseness=1.25] (2) to (3);
\draw [style=simple, bend left=45, looseness=1.25] (2) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple] (4) to (5.center);
\draw [style=simple] (6) to (7);
\draw [style=simple, bend right=45, looseness=1.25] (0) to (1);
\end{pgfonlayer}
\end{tikzpicture} & \text{Lemma \ref{lem:dim}},\ref{ZX.pi:IV}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (2, 3) {};
\node [style=none] (1) at (3, 3) {};
\node [style=xmul] (2) at (3, 4) {};
\node [style=zmul] (3) at (2, 4) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (1.center);
\draw [style=simple, bend right=45, looseness=1.25] (2) to (3);
\draw [style=simple, bend right=45, looseness=1.25] (3) to (2);
\draw [style=simple] (2) to (3);
\end{pgfonlayer}
\end{tikzpicture}
\end{align*}
\endgroup
\end{enumerate}
\end{proof}
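Both translations can be checked against the same standard matrix semantics used after Lemma~\ref{lem:dim} (again, an assumed convention): part (i) states that the image of the one-wire $\oplus$ is the Pauli $X$ (NOT) gate, and part (ii) states that the image of the $|0\rangle$ preparation is exactly $|0\rangle$, the three-edge dumbbell contributing the scalar $1/\sqrt{2}$.
\begin{verbatim}
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

# (i) one-input, one-output X spider with phase pi:
#     |+><+| - |-><-| = Pauli X = NOT
x_pi = np.outer(plus, plus) - np.outer(minus, minus)
assert np.allclose(x_pi, [[0, 1], [1, 0]])

# (ii) sqrt(2)|0> from the one-legged X spider, times the
#      scalar from a Z and an X spider joined by three edges
x1 = plus + minus                              # sqrt(2)|0>
z3 = np.zeros(8); z3[[0, 7]] = 1               # |000> + |111>
x3 = (np.kron(np.kron(plus, plus), plus)
      + np.kron(np.kron(minus, minus), minus)) # |+++> + |--->
assert np.allclose((z3 @ x3) * x1, [1.0, 0.0]) # exactly |0>
\end{verbatim}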
\begin{customlemma}{\ref{lem:cnotzxfunc}}
\label{lem:cnotzxfunc:proof}
The interpretation of $\mathsf{CNOT}$ into $\mathsf{ZX}_\pi$ is functorial.
\end{customlemma}
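Before giving the diagrammatic proof, we note that the first two axioms treated below have elementary numerical counterparts: \ref{CNOT.2} asserts that the controlled-not gate is an involution, and \ref{CNOT.1} that three alternating controlled-nots compose to the wire swap. A minimal check:
\begin{verbatim}
import numpy as np

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]])
CNOTf = SWAP @ CNOT @ SWAP   # control on the second wire

assert np.allclose(CNOT @ CNOT, np.eye(4))     # CNOT.2
assert np.allclose(CNOT @ CNOTf @ CNOT, SWAP)  # CNOT.1
\end{verbatim}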
\begin{proof}
We prove that each axiom holds:
\begin{description}
\item[\ref{CNOT.2}]
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -0) {};
\node [style=rn] (1) at (0, -1) {};
\node [style=rn] (2) at (3, -1) {};
\node [style=rn] (3) at (3, -0) {};
\node [style=dot] (4) at (1, -0) {};
\node [style=dot] (5) at (2, -0) {};
\node [style=oplus] (6) at (2, -1) {};
\node [style=oplus] (7) at (1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (6);
\draw [style=simple] (6) to (7);
\draw [style=simple] (7) to (1);
\draw [style=simple] (7) to (4);
\draw [style=simple] (4) to (5);
\draw [style=simple] (5) to (6);
\draw [style=simple] (3) to (5);
\draw [style=simple] (4) to (0);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -0) {};
\node [style=rn] (1) at (0, -1) {};
\node [style=rn] (2) at (3, -1) {};
\node [style=rn] (3) at (3, -0) {};
\node [style=zmul] (4) at (1, -0) {};
\node [style=zmul] (5) at (2, -0) {};
\node [style=xmul] (6) at (2, -1) {};
\node [style=xmul] (7) at (1, -1) {};
\node [style=xmul] (8) at (2, 0.75) {};
\node [style=zmul] (9) at (1, 0.75) {};
\node [style=xmul] (10) at (2, 1.5) {};
\node [style=zmul] (11) at (1, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (6);
\draw [style=simple] (6) to (7);
\draw [style=simple] (7) to (1);
\draw [style=simple] (7) to (4);
\draw [style=simple] (4) to (5);
\draw [style=simple] (5) to (6);
\draw [style=simple] (3) to (5);
\draw [style=simple] (4) to (0);
\draw [style=simple] (8) to (9);
\draw [style=simple] (10) to (11);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -0) {};
\node [style=rn] (1) at (5, -0) {};
\node [style=zmul] (2) at (1, -0) {};
\node [style=zmul] (3) at (2, -0.75) {};
\node [style=rn] (4) at (5, -1.5) {};
\node [style=rn] (5) at (0, -1.5) {};
\node [style=xmul] (6) at (4, -1.5) {};
\node [style=xmul] (7) at (3, -0.75) {};
\node [style=xmul] (8) at (4.25, 0.75) {};
\node [style=zmul] (9) at (3.25, 0.75) {};
\node [style=zmul] (10) at (0.75, 0.75) {};
\node [style=xmul] (11) at (1.75, 0.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (0);
\draw [style=simple] (4) to (6);
\draw [style=simple, in=180, out=15, looseness=1.00] (2) to (1);
\draw [style=simple, in=-37, out=180, looseness=1.00] (3) to (2);
\draw [style=simple, in=-150, out=0, looseness=0.75] (5) to (6);
\draw [style=simple, in=0, out=143, looseness=1.00] (6) to (7);
\draw [style=simple, bend right, looseness=1.25] (7) to (3);
\draw [style=simple, bend right, looseness=1.25] (3) to (7);
\draw (8) to (9);
\draw (11) to (10);
\end{pgfonlayer}
\end{tikzpicture} & \text{(co)associativity}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -0) {};
\node [style=rn] (1) at (5, -0) {};
\node [style=zmul] (2) at (1, -0) {};
\node [style=zmul] (3) at (2, -0.75) {};
\node [style=rn] (4) at (5, -1.5) {};
\node [style=rn] (5) at (0, -1.5) {};
\node [style=xmul] (6) at (3, -0.75) {};
\node [style=xmul] (7) at (4, -1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (0);
\draw [style=simple, in=180, out=15, looseness=1.00] (2) to (1);
\draw [style=simple, in=-37, out=180, looseness=1.00] (3) to (2);
\draw [style=simple] (4) to (7);
\draw [style=simple, in=-150, out=0, looseness=0.75] (5) to (7);
\draw [style=simple, in=0, out=143, looseness=1.00] (7) to (6);
\end{pgfonlayer}
\end{tikzpicture} & \ref{ZX.pi:B.H} \\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -0) {};
\node [style=rn] (1) at (2, -0) {};
\node [style=rn] (2) at (2, -1) {};
\node [style=rn] (3) at (0, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw [style=simple] (3) to (2);
\end{pgfonlayer}
\end{tikzpicture} & \text{(co)unitality}\\
&\mapsfrom
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -0) {};
\node [style=rn] (1) at (2, -0) {};
\node [style=rn] (2) at (2, -1) {};
\node [style=rn] (3) at (0, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw [style=simple] (3) to (2);
\end{pgfonlayer}
\end{tikzpicture}
\end{align*}
\item[\ref{CNOT.1}]
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (0, -0) {};
\node [style=dot] (1) at (2, -0) {};
\node [style=dot] (2) at (1, -1) {};
\node [style=oplus] (3) at (1, -0) {};
\node [style=oplus] (4) at (0, -1) {};
\node [style=oplus] (5) at (2, -1) {};
\node [style=rn] (6) at (3, -0) {};
\node [style=rn] (7) at (3, -1) {};
\node [style=rn] (8) at (-1, -0) {};
\node [style=rn] (9) at (-1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (6) to (1);
\draw (3) to (1);
\draw (3) to (0);
\draw (0) to (8);
\draw (4) to (0);
\draw (3) to (2);
\draw (5) to (1);
\draw (7) to (5);
\draw (5) to (2);
\draw (2) to (4);
\draw (4) to (9);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3, -0) {};
\node [style=rn] (1) at (3, -1) {};
\node [style=rn] (2) at (-1, -0) {};
\node [style=rn] (3) at (-1, -1) {};
\node [style=zmul] (4) at (1, -1) {};
\node [style=zmul] (5) at (2, -0) {};
\node [style=zmul] (6) at (0, -0) {};
\node [style=xmul] (7) at (2, -1) {};
\node [style=xmul] (8) at (0, -1) {};
\node [style=xmul] (9) at (1, -0) {};
\node [style=zmul] (10) at (0, 0.75) {};
\node [style=xmul] (11) at (2, 0.75) {};
\node [style=zmul] (12) at (0, 2.25) {};
\node [style=xmul] (13) at (2, 2.25) {};
\node [style=zmul] (14) at (0, 1.5) {};
\node [style=xmul] (15) at (2, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (6);
\draw [style=simple] (6) to (9);
\draw [style=simple] (9) to (5);
\draw [style=simple] (5) to (0);
\draw [style=simple] (1) to (7);
\draw [style=simple] (7) to (4);
\draw [style=simple] (4) to (8);
\draw [style=simple] (8) to (3);
\draw [style=simple] (8) to (6);
\draw [style=simple] (9) to (4);
\draw [style=simple] (7) to (5);
\draw [style=simple] (11) to (10);
\draw [style=simple] (13) to (12);
\draw [style=simple] (15) to (14);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3.25, -0) {};
\node [style=rn] (1) at (3.25, -1) {};
\node [style=rn] (2) at (-1.25, -0) {};
\node [style=rn] (3) at (-1.25, -1) {};
\node [style=zmul] (4) at (1.25, -1) {};
\node [style=zmul] (5) at (2.25, -0) {};
\node [style=zmul] (6) at (0, -1) {};
\node [style=xmul] (7) at (2.25, -1) {};
\node [style=xmul] (8) at (0, -0) {};
\node [style=xmul] (9) at (1.25, -0) {};
\node [style=zmul] (10) at (0, 0.75) {};
\node [style=xmul] (11) at (2.25, 0.75) {};
\node [style=zmul] (12) at (0, 2.25) {};
\node [style=xmul] (13) at (2.25, 2.25) {};
\node [style=zmul] (14) at (0, 1.5) {};
\node [style=xmul] (15) at (2.25, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=180, out=0, looseness=1.00] (2) to (6);
\draw [style=simple, in=180, out=0, looseness=1.00] (6) to (9);
\draw [style=simple, in=180, out=0, looseness=1.00] (9) to (5);
\draw [style=simple] (5) to (0);
\draw [style=simple] (1) to (7);
\draw [style=simple] (7) to (4);
\draw [style=simple, in=0, out=180, looseness=1.00] (4) to (8);
\draw [style=simple, in=0, out=180, looseness=1.00] (8) to (3);
\draw [style=simple] (8) to (6);
\draw [style=simple] (9) to (4);
\draw [style=simple] (7) to (5);
\draw [style=simple] (11) to (10);
\draw [style=simple] (13) to (12);
\draw [style=simple] (15) to (14);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3, -0) {};
\node [style=rn] (1) at (3, -1) {};
\node [style=rn] (2) at (0, -0) {};
\node [style=rn] (3) at (0, -1) {};
\node [style=zmul] (4) at (2, -0) {};
\node [style=xmul] (5) at (2, -1) {};
\node [style=xmul] (6) at (1, -1) {};
\node [style=zmul] (7) at (1, -0) {};
\node [style=zmul] (8) at (0.75, 0.75) {};
\node [style=xmul] (9) at (2.25, 0.75) {};
\node [style=zmul] (10) at (0.75, 1.5) {};
\node [style=xmul] (11) at (2.25, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (4) to (0);
\draw [style=simple] (1) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple, in=0, out=180, looseness=1.00] (6) to (2);
\draw [style=simple] (6) to (5);
\draw [style=simple] (7) to (4);
\draw [style=simple, in=0, out=180, looseness=1.00] (7) to (3);
\draw [style=simple] (7) to (6);
\draw [style=simple] (9) to (8);
\draw [style=simple] (11) to (10);
\end{pgfonlayer}
\end{tikzpicture} & \ref{ZX.pi:B.M}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2, -1) {};
\node [style=rn] (1) at (2, -0) {};
\node [style=rn] (2) at (0, -0) {};
\node [style=rn] (3) at (0, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=180, out=0, looseness=1.00] (3) to (1);
\draw [style=simple, in=0, out=180, looseness=1.00] (0) to (2);
\end{pgfonlayer}
\end{tikzpicture}\\
&\mapsfrom
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2, -1) {};
\node [style=rn] (1) at (2, -0) {};
\node [style=rn] (2) at (0, -0) {};
\node [style=rn] (3) at (0, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=180, out=0, looseness=1.00] (3) to (1);
\draw [style=simple, in=0, out=180, looseness=1.00] (0) to (2);
\end{pgfonlayer}
\end{tikzpicture}
\end{align*}
\item[\ref{CNOT.3}]
Immediate from the commutative spider theorem.
\item[\ref{CNOT.4}]
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (4, -0) {};
\node [style=rn] (1) at (6, -0) {};
\node [style=rn] (2) at (6, 1) {};
\node [style=zeroin] (3) at (4, 1) {};
\node [style=dot] (4) at (5, 1) {};
\node [style=oplus] (5) at (5, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (5);
\draw (5) to (0);
\draw (5) to (4);
\draw (4) to (2);
\draw (4) to (3);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-0.75, -0) {};
\node [style=rn] (1) at (2.25, -0) {};
\node [style=rn] (2) at (2.25, 1) {};
\node [style=zmul] (3) at (0.75, 1) {};
\node [style=xmul] (4) at (0.75, -0) {};
\node [style=xmul] (5) at (-0.75, 1) {};
\node [style=zmul] (6) at (1.25, 2) {};
\node [style=xmul] (7) at (2.25, 2) {};
\node [style=xmul] (8) at (0.25, 2) {};
\node [style=zmul] (9) at (-0.75, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (5) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple] (3) to (4);
\draw [style=simple] (4) to (1);
\draw [style=simple] (4) to (0);
\draw [style=simple] (6) to (7);
\draw [style=simple, bend right=45, looseness=1.25] (9) to (8);
\draw [bend right=45, looseness=1.25] (8) to (9);
\draw (9) to (8);
\end{pgfonlayer}
\end{tikzpicture} & \text{Lemma \ref{lem:cnottozx} (ii)}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -0) {};
\node [style=rn] (1) at (3, -0) {};
\node [style=rn] (2) at (3, 1) {};
\node [style=xmul] (3) at (1, -0) {};
\node [style=xmul] (4) at (2, 1) {};
\node [style=xmul] (5) at (1, 1) {};
\node [style=zmul] (6) at (1, 2) {};
\node [style=xmul] (7) at (2, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (1);
\draw [style=simple] (3) to (0);
\draw [style=simple] (4) to (2);
\draw [style=simple] (5) to (3);
\draw [bend left=45, looseness=1.25] (7) to (6);
\draw [bend left=45, looseness=1.25] (6) to (7);
\draw (7) to (6);
\end{pgfonlayer}
\end{tikzpicture}& \ref{ZX.pi:B.U}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1, -0) {};
\node [style=rn] (1) at (3, -0) {};
\node [style=rn] (2) at (3, 1) {};
\node [style=xmul] (3) at (2, 1) {};
\node [style=zmul] (4) at (1.5, 2) {};
\node [style=xmul] (5) at (2.5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (2);
\draw [bend left=45, looseness=1.25] (5) to (4);
\draw [bend left=45, looseness=1.25] (4) to (5);
\draw (5) to (4);
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture} \\
&\mapsfrom
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1, -0) {};
\node [style=rn] (1) at (2, -0) {};
\node [style=rn] (2) at (2, 1) {};
\node [style=zeroin] (3) at (1, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw (2) to (3);
\end{pgfonlayer}
\end{tikzpicture}& \text{Lemma \ref{lem:cnottozx} (ii)}
\end{align*}
\item[\ref{CNOT.5}]
Immediate from the commutative spider theorem.
\item[\ref{CNOT.6}]
Immediate, since the Frobenius algebra is special.
\item[\ref{CNOT.7}]
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (7, -0) {};
\node [style=oplus] (1) at (7, -1) {};
\node [style=rn] (2) at (8, -0) {};
\node [style=rn] (3) at (8, -1) {};
\node [style=rn] (4) at (6, -1) {};
\node [style=onein] (5) at (6, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (1);
\draw (1) to (4);
\draw (0) to (2);
\draw (1) to (0);
\draw (0) to (5);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (2, -1) {};
\node [style=rn] (1) at (3, -1) {};
\node [style=rn] (2) at (1, -1) {};
\node [style=zmul] (3) at (2, 0) {};
\node [style=xmul] (4) at (1, 0) {$\pi$};
\node [style=rn] (5) at (3, 0) {};
\node [style=zmul] (6) at (1.5, 1) {};
\node [style=xmul] (7) at (2.5, 1) {};
\node [style=zmul] (8) at (1.5, 2) {};
\node [style=xmul] (9) at (2.5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw [style=simple] (0) to (3);
\draw [style=simple] (2) to (0);
\draw [style=simple] (5) to (3);
\draw [style=simple] (6) to (7);
\draw (3) to (4);
\draw [style=simple, bend right=45, looseness=1.25] (8) to (9);
\draw [style=simple, bend right=45, looseness=1.25] (9) to (8);
\draw [style=simple] (8) to (9);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (2, -1) {};
\node [style=rn] (1) at (4.25, -1) {};
\node [style=rn] (2) at (1, -1) {};
\node [style=rn] (3) at (4.25, -0) {};
\node [style=xmul] (4) at (3.25, -0) {$\pi$};
\node [style=xmul] (5) at (2, 0) {$\pi$};
\node [style=zmul] (6) at (2.25, 1.25) {};
\node [style=xmul] (7) at (3.25, 1.25) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw [style=simple] (2) to (0);
\draw (4) to (3);
\draw (5) to (0);
\draw [style=simple, bend left=45, looseness=1.25] (7) to (6);
\draw [style=simple, bend left=45, looseness=1.25] (6) to (7);
\draw [style=simple] (7) to (6);
\end{pgfonlayer}
\end{tikzpicture}& \ref{ZX.pi:PI}, \ref{ZX.pi:B.U} \\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3, -1) {};
\node [style=rn] (1) at (1, -1) {};
\node [style=rn] (2) at (3, 0.25) {};
\node [style=xmul] (3) at (2, 0.25) {$\pi$};
\node [style=xmul] (4) at (2, -1) {$\pi$};
\node [style=zmul] (5) at (2, 1.25) {};
\node [style=xmul] (6) at (3, 1.25) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (2);
\draw (0) to (4);
\draw (4) to (1);
\draw [style=simple, bend right=45, looseness=1.25] (6) to (5);
\draw [style=simple, bend right=45, looseness=1.25] (5) to (6);
\draw [style=simple] (6) to (5);
\end{pgfonlayer}
\end{tikzpicture} & \ref{ZX.pi:PH}\\
&\mapsfrom
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (8, -0) {};
\node [style=rn] (1) at (8, -1) {};
\node [style=rn] (2) at (6, -1) {};
\node [style=onein] (3) at (6, -0) {};
\node [style=oplus] (4) at (7, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (1);
\draw (0) to (3);
\end{pgfonlayer}
\end{tikzpicture} & \text{Lemma \ref{lem:cnottozx} (i)}
\end{align*}
\item[\ref{CNOT.8}]
\begingroup
\allowdisplaybreaks
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (0, 1) {};
\node [style=dot] (1) at (1, -0) {};
\node [style=dot] (2) at (2, 1) {};
\node [style=oplus] (3) at (2, -0) {};
\node [style=oplus] (4) at (1, -1) {};
\node [style=oplus] (5) at (0, -0) {};
\node [style=rn] (6) at (3, 1) {};
\node [style=rn] (7) at (3, -0) {};
\node [style=rn] (8) at (3, -1) {};
\node [style=rn] (9) at (-1, -1) {};
\node [style=rn] (10) at (-1, -0) {};
\node [style=rn] (11) at (-1, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (6) to (2);
\draw [style=simple] (2) to (0);
\draw [style=simple] (0) to (11);
\draw [style=simple] (10) to (5);
\draw [style=simple] (5) to (1);
\draw [style=simple] (1) to (3);
\draw [style=simple] (3) to (7);
\draw [style=simple] (4) to (8);
\draw [style=simple] (4) to (9);
\draw [style=simple] (1) to (4);
\draw [style=simple] (3) to (2);
\draw [style=simple] (0) to (5);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (0, 1) {};
\node [style=zmul] (1) at (1, -0) {};
\node [style=zmul] (2) at (2, 1) {};
\node [style=xmul] (3) at (2, -0) {};
\node [style=xmul] (4) at (1, -1) {};
\node [style=xmul] (5) at (0, -0) {};
\node [style=rn] (6) at (3, 1) {};
\node [style=rn] (7) at (3, -0) {};
\node [style=rn] (8) at (3, -1) {};
\node [style=rn] (9) at (-1, -1) {};
\node [style=rn] (10) at (-1, -0) {};
\node [style=rn] (11) at (-1, 1) {};
\node [style=xmul] (12) at (0.5, 1.75) {};
\node [style=zmul] (13) at (-0.5, 1.75) {};
\node [style=xmul] (14) at (2.5, 1.75) {};
\node [style=zmul] (15) at (1.5, 1.75) {};
\node [style=xmul] (16) at (1.5, 2.5) {};
\node [style=zmul] (17) at (0.5, 2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (6) to (2);
\draw [style=simple] (2) to (0);
\draw [style=simple] (0) to (11);
\draw [style=simple] (10) to (5);
\draw [style=simple] (5) to (1);
\draw [style=simple] (1) to (3);
\draw [style=simple] (3) to (7);
\draw [style=simple] (4) to (8);
\draw [style=simple] (4) to (9);
\draw [style=simple] (1) to (4);
\draw [style=simple] (3) to (2);
\draw [style=simple] (0) to (5);
\draw [style=simple] (13) to (12);
\draw [style=simple] (15) to (14);
\draw [style=simple] (17) to (16);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (0, 1) {};
\node [style=zmul] (1) at (0.75, -0.25) {};
\node [style=zmul] (2) at (2, 1) {};
\node [style=xmul] (3) at (2, -0) {};
\node [style=xmul] (4) at (1, -1) {};
\node [style=xmul] (5) at (0.25, 0.25) {};
\node [style=rn] (6) at (3, 1) {};
\node [style=rn] (7) at (3, -0) {};
\node [style=rn] (8) at (3, -1) {};
\node [style=rn] (9) at (-1, -1) {};
\node [style=rn] (10) at (-1, -0) {};
\node [style=rn] (11) at (-1, 1) {};
\node [style=xmul] (12) at (0.5, 1.75) {};
\node [style=zmul] (13) at (-0.5, 1.75) {};
\node [style=xmul] (14) at (2.5, 1.75) {};
\node [style=zmul] (15) at (1.5, 1.75) {};
\node [style=xmul] (16) at (1.5, 2.5) {};
\node [style=zmul] (17) at (0.5, 2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (6) to (2);
\draw [style=simple] (2) to (0);
\draw [style=simple] (0) to (11);
\draw [style=simple, in=180, out=0, looseness=1.00] (10) to (5);
\draw [style=simple] (5) to (1);
\draw [style=simple, in=180, out=0, looseness=1.00] (1) to (3);
\draw [style=simple] (3) to (7);
\draw [style=simple] (4) to (8);
\draw [style=simple] (4) to (9);
\draw [style=simple, in=90, out=-90, looseness=1.00] (1) to (4);
\draw [style=simple] (3) to (2);
\draw [style=simple, in=90, out=-90, looseness=1.00] (0) to (5);
\draw [style=simple] (12) to (13);
\draw [style=simple] (14) to (15);
\draw [style=simple] (16) to (17);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (0, 1) {};
\node [style=zmul] (1) at (3, 1) {};
\node [style=xmul] (2) at (3, 0) {};
\node [style=xmul] (3) at (2, -1) {};
\node [style=rn] (4) at (4, 1) {};
\node [style=rn] (5) at (4, 0) {};
\node [style=rn] (6) at (4, -1) {};
\node [style=rn] (7) at (-1, -1) {};
\node [style=rn] (8) at (-1, 0) {};
\node [style=rn] (9) at (-1, 1) {};
\node [style=zmul] (10) at (0, 0) {};
\node [style=xmul] (11) at (1, -0.5) {};
\node [style=xmul] (12) at (2, 0) {};
\node [style=zmul] (13) at (1, 0.5) {};
\node [style=zmul] (14) at (0, 1.75) {};
\node [style=xmul] (15) at (1, 1.75) {};
\node [style=zmul] (16) at (2, 1.75) {};
\node [style=xmul] (17) at (3, 1.75) {};
\node [style=zmul] (18) at (0, 2.5) {};
\node [style=xmul] (19) at (1, 2.5) {};
\node [style=zmul] (20) at (2, 2.5) {};
\node [style=xmul] (21) at (3, 2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (4) to (1);
\draw [style=simple] (1) to (0);
\draw [style=simple] (0) to (9);
\draw [style=simple] (2) to (5);
\draw [style=simple] (3) to (6);
\draw [style=simple] (3) to (7);
\draw [style=simple] (2) to (1);
\draw [style=simple] (12) to (13);
\draw [style=simple] (13) to (11);
\draw [style=simple] (11) to (10);
\draw [style=simple] (10) to (12);
\draw [style=simple] (10) to (8);
\draw [style=simple] (13) to (0);
\draw [style=simple] (11) to (3);
\draw [style=simple] (12) to (2);
\draw [style=simple] (15) to (14);
\draw [style=simple] (17) to (16);
\draw [style=simple] (19) to (18);
\draw [style=simple] (21) to (20);
\end{pgfonlayer}
\end{tikzpicture}
& \ref{ZX.pi:B.M}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (0, 1) {};
\node [style=zmul] (1) at (3, 1) {};
\node [style=rn] (2) at (4, 1) {};
\node [style=rn] (3) at (4, 0) {};
\node [style=rn] (4) at (4, -1) {};
\node [style=rn] (5) at (-1, -1) {};
\node [style=rn] (6) at (-1, 0) {};
\node [style=rn] (7) at (-1, 1) {};
\node [style=zmul] (8) at (0, 0) {};
\node [style=xmul] (9) at (1.5, -1) {};
\node [style=xmul] (10) at (2, 0) {};
\node [style=zmul] (11) at (0, 1.75) {};
\node [style=xmul] (12) at (1, 1.75) {};
\node [style=xmul] (13) at (3, 1.75) {};
\node [style=zmul] (14) at (2, 1.75) {};
\node [style=zmul] (15) at (0, 2.75) {};
\node [style=xmul] (16) at (1, 2.75) {};
\node [style=xmul] (17) at (3, 2.75) {};
\node [style=zmul] (18) at (2, 2.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (1);
\draw [style=simple] (1) to (0);
\draw [style=simple] (0) to (7);
\draw [style=simple] (9) to (8);
\draw [style=simple] (8) to (10);
\draw [style=simple] (8) to (6);
\draw [style=simple] (10) to (1);
\draw [style=simple] (3) to (10);
\draw [style=simple] (5) to (9);
\draw [style=simple] (9) to (4);
\draw [style=simple] (10) to (0);
\draw [style=simple] (9) to (0);
\draw [style=simple] (12) to (11);
\draw [style=simple] (13) to (14);
\draw [style=simple] (16) to (15);
\draw [style=simple] (17) to (18);
\end{pgfonlayer}
\end{tikzpicture}
\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (1.75, 1) {};
\node [style=rn] (1) at (4.25, 1) {};
\node [style=rn] (2) at (4.25, 0) {};
\node [style=rn] (3) at (4.25, -1) {};
\node [style=rn] (4) at (-0.75, -1) {};
\node [style=rn] (5) at (-0.75, 0) {};
\node [style=rn] (6) at (-0.75, 1) {};
\node [style=zmul] (7) at (0.25, 0) {};
\node [style=xmul] (8) at (1.25, -1) {};
\node [style=xmul] (9) at (3, 0) {};
\node [style=zmul] (10) at (0, 1.75) {};
\node [style=xmul] (11) at (1, 1.75) {};
\node [style=xmul] (12) at (3.5, 1.75) {};
\node [style=zmul] (13) at (2.5, 1.75) {};
\node [style=zmul] (14) at (0, 2.75) {};
\node [style=xmul] (15) at (1, 2.75) {};
\node [style=xmul] (16) at (3.5, 2.75) {};
\node [style=zmul] (17) at (2.5, 2.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (6);
\draw [style=simple] (8) to (7);
\draw [style=simple] (7) to (9);
\draw [style=simple] (7) to (5);
\draw [style=simple] (2) to (9);
\draw [style=simple] (4) to (8);
\draw [style=simple] (8) to (3);
\draw [style=simple, bend right] (9) to (0);
\draw [style=simple] (8) to (0);
\draw [style=simple] (1) to (0);
\draw [style=simple, bend left] (9) to (0);
\draw [style=simple] (11) to (10);
\draw [style=simple] (12) to (13);
\draw [style=simple] (15) to (14);
\draw [style=simple] (16) to (17);
\end{pgfonlayer}
\end{tikzpicture}
\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (2, 1) {};
\node [style=rn] (1) at (3, 1) {};
\node [style=rn] (2) at (3, -0) {};
\node [style=rn] (3) at (3, -1) {};
\node [style=rn] (4) at (-1, -1) {};
\node [style=rn] (5) at (-1, -0) {};
\node [style=rn] (6) at (-1, 1) {};
\node [style=zmul] (7) at (0, -0) {};
\node [style=xmul] (8) at (2, -0) {};
\node [style=xmul] (9) at (0, -1) {};
\node [style=zmul] (10) at (1, 1) {};
\node [style=xmul] (11) at (1, -1) {};
\node [style=zmul] (12) at (-0.5, 1.75) {};
\node [style=xmul] (13) at (0.5, 1.75) {};
\node [style=zmul] (14) at (1.5, 1.75) {};
\node [style=xmul] (15) at (2.5, 1.75) {};
\node [style=xmul] (16) at (0.5, 2.5) {};
\node [style=zmul] (17) at (-0.5, 2.5) {};
\node [style=zmul] (18) at (1.5, 2.5) {};
\node [style=xmul] (19) at (2.5, 2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (6);
\draw [style=simple] (7) to (8);
\draw [style=simple] (7) to (5);
\draw [style=simple] (2) to (8);
\draw [style=simple, bend right, looseness=1.00] (8) to (0);
\draw [style=simple] (1) to (0);
\draw [style=simple, bend left, looseness=1.00] (8) to (0);
\draw [style=simple] (4) to (9);
\draw [style=simple] (9) to (7);
\draw [style=simple] (9) to (3);
\draw [style=simple] (11) to (10);
\draw [style=simple] (13) to (12);
\draw [style=simple] (15) to (14);
\draw [style=simple] (16) to (17);
\draw [style=simple] (19) to (18);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2, 1) {};
\node [style=rn] (1) at (2, 0) {};
\node [style=rn] (2) at (2, -1) {};
\node [style=rn] (3) at (-1, -1) {};
\node [style=rn] (4) at (-1, 0) {};
\node [style=rn] (5) at (-1, 1) {};
\node [style=zmul] (6) at (0, 0) {};
\node [style=xmul] (7) at (0, -1) {};
\node [style=zmul] (8) at (1, 1) {};
\node [style=xmul] (9) at (1, -1) {};
\node [style=zmul] (10) at (-1, 2) {};
\node [style=xmul] (11) at (0, 2) {};
\node [style=zmul] (12) at (1, 2) {};
\node [style=xmul] (13) at (2, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (6) to (4);
\draw [style=simple] (3) to (7);
\draw [style=simple] (7) to (6);
\draw [style=simple] (7) to (2);
\draw [style=simple] (9) to (8);
\draw [style=simple] (1) to (6);
\draw [style=simple] (5) to (8);
\draw [style=simple] (8) to (0);
\draw [style=simple] (11) to (10);
\draw [style=simple] (13) to (12);
\end{pgfonlayer}
\end{tikzpicture}
& \ref{ZX.pi:B.H}\\
&\mapsfrom
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2, 1) {};
\node [style=rn] (1) at (2, -0) {};
\node [style=rn] (2) at (2, -1) {};
\node [style=rn] (3) at (-1, -1) {};
\node [style=rn] (4) at (-1, -0) {};
\node [style=rn] (5) at (-1, 1) {};
\node [style=dot] (6) at (0, -0) {};
\node [style=oplus] (7) at (0, -1) {};
\node [style=dot] (8) at (1, 1) {};
\node [style=oplus] (9) at (1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (6) to (4);
\draw [style=simple] (3) to (7);
\draw [style=simple] (7) to (6);
\draw [style=simple] (7) to (2);
\draw [style=simple] (9) to (8);
\draw [style=simple] (1) to (6);
\draw [style=simple] (5) to (8);
\draw [style=simple] (8) to (0);
\end{pgfonlayer}
\end{tikzpicture}
\end{align*}
\endgroup
\item[\ref{CNOT.9}]
\begingroup
\allowdisplaybreaks
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (2, -3) {};
\node [style=none] (1) at (4, -3) {};
\node [style=onein] (2) at (2.5, -2) {};
\node [style=zeroout] (3) at (3.5, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (0.center);
\draw (2) to (3);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (2, -3) {};
\node [style=none] (1) at (4, -3) {};
\node [style=xmul] (2) at (3, -2) {$\pi$};
\node [style=zmul] (3) at (2.5, -1) {};
\node [style=xmul] (4) at (3.5, -1) {};
\node [style=zmul] (5) at (2.5, -0) {};
\node [style=xmul] (6) at (3.5, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (0.center);
\draw [style=simple, bend left=45, looseness=1.25] (4) to (3);
\draw [style=simple, bend left=45, looseness=1.25] (3) to (4);
\draw [style=simple] (4) to (3);
\draw [style=simple, bend left=45, looseness=1.25] (6) to (5);
\draw [style=simple, bend left=45, looseness=1.25] (5) to (6);
\draw [style=simple] (6) to (5);
\end{pgfonlayer}
\end{tikzpicture} & \text{Lemma \ref{lem:cnottozx} (ii)}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (2.5, -2) {$\pi$};
\node [style=none] (1) at (5, -3) {};
\node [style=zmul] (2) at (3, -3) {};
\node [style=xmul] (3) at (4, -3) {};
\node [style=none] (4) at (0, -3) {};
\node [style=zmul] (5) at (2, -3) {};
\node [style=xmul] (6) at (1, -3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (3);
\draw (2) to (5);
\draw (6) to (4.center);
\end{pgfonlayer}
\end{tikzpicture}& \ref{ZX.pi:ZO}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (2.5, -2) {$\pi$};
\node [style=none] (1) at (5, -3) {};
\node [style=xmul] (2) at (4, -3) {};
\node [style=none] (3) at (0, -3) {};
\node [style=xmul] (4) at (1, -3) {};
\node [style=zmul] (5) at (2, -3) {};
\node [style=zmul] (6) at (2, -4) {};
\node [style=xmul] (7) at (3, -3) {};
\node [style=xmul] (8) at (3, -4) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (2);
\draw (4) to (3.center);
\draw [style=simple] (7) to (5);
\draw [style=simple] (6) to (8);
\end{pgfonlayer}
\end{tikzpicture} & \text{Lemma \ref{lem:dim}}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (4, -3) {};
\node [style=xmul] (1) at (3, -3) {};
\node [style=none] (2) at (1, -3) {};
\node [style=xmul] (3) at (2, -3) {};
\node [style=xmul] (4) at (1, -2) {$\pi$};
\node [style=xmul] (5) at (2, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0.center) to (1);
\draw (3) to (2.center);
\draw (5) to (4);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (5, -3) {};
\node [style=xmul] (1) at (4, -3) {};
\node [style=none] (2) at (1, -3) {};
\node [style=xmul] (3) at (1, -2) {$\pi$};
\node [style=zmul] (4) at (2, -2.5) {};
\node [style=xmul] (5) at (3, -2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0.center) to (1);
\draw [in=-150, out=-3, looseness=1.00] (2.center) to (4);
\draw [in=0, out=153, looseness=1.00] (4) to (3);
\draw (4) to (5);
\end{pgfonlayer}
\end{tikzpicture} & \ref{ZX.pi:B.U}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (4, -3) {};
\node [style=xmul] (1) at (3, -3) {};
\node [style=none] (2) at (1, -3) {};
\node [style=xmul] (3) at (3, -1.75) {};
\node [style=xmul] (4) at (2, -3) {$\pi$};
\node [style=xmul] (5) at (2, -1.75) {$\pi$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0.center) to (1);
\draw (4) to (2.center);
\draw (5) to (3);
\end{pgfonlayer}
\end{tikzpicture} & \text{By symmetry}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (4.25, -3) {};
\node [style=none] (1) at (0.75, -3) {};
\node [style=xmul] (2) at (1.75, -3) {$\pi$};
\node [style=xmul] (3) at (3.25, -3) {$\pi$};
\node [style=xmul] (4) at (2.5, -2) {$\pi$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (1.center);
\draw (0.center) to (3);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (4.25, -3) {};
\node [style=none] (1) at (0.75, -3) {};
\node [style=xmul] (2) at (1.75, -3) {$\pi$};
\node [style=xmul] (3) at (3.25, -3) {$\pi$};
\node [style=xmul] (4) at (2.5, -2) {$\pi$};
\node [style=zmul] (5) at (1, -1) {};
\node [style=xmul] (6) at (2, -1) {};
\node [style=zmul] (7) at (3, -1) {};
\node [style=xmul] (8) at (4, -1) {};
\node [style=zmul] (9) at (3, -0) {};
\node [style=xmul] (10) at (4, -0) {};
\node [style=zmul] (11) at (1, -0) {};
\node [style=xmul] (12) at (2, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (1.center);
\draw (0.center) to (3);
\draw [style=simple, bend left=45, looseness=1.25] (6) to (5);
\draw [style=simple, bend left=45, looseness=1.25] (5) to (6);
\draw [style=simple] (6) to (5);
\draw [style=simple, bend left=45, looseness=1.25] (8) to (7);
\draw [style=simple, bend left=45, looseness=1.25] (7) to (8);
\draw [style=simple] (8) to (7);
\draw [style=simple, bend left=45, looseness=1.25] (10) to (9);
\draw [style=simple, bend left=45, looseness=1.25] (9) to (10);
\draw [style=simple] (10) to (9);
\draw [style=simple, bend left=45, looseness=1.25] (12) to (11);
\draw [style=simple, bend left=45, looseness=1.25] (11) to (12);
\draw [style=simple] (12) to (11);
\end{pgfonlayer}
\end{tikzpicture} & \ref{ZX.pi:ZO} \\
&\mapsfrom
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (4.5, -3) {};
\node [style=onein] (1) at (2.5, -2) {};
\node [style=zeroout] (2) at (3.5, -2) {};
\node [style=none] (3) at (1.5, -3) {};
\node [style=onein] (4) at (3.5, -3) {};
\node [style=oneout] (5) at (2.5, -3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (2);
\draw (0.center) to (4);
\draw (5) to (3.center);
\end{pgfonlayer}
\end{tikzpicture} & \text{Lemma \ref{lem:cnottozx} (ii)}
\end{align*}
\endgroup
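As an informal semantic check (assuming the standard matrix interpretation), both sides of \ref{CNOT.9} denote the zero map: the detached state--effect pair composes to the scalar $\langle 0|1\rangle = 0$, so the equation reads
\[
\langle 0|1\rangle\cdot \mathrm{id} \;=\; 0 \;=\; \langle 0|1\rangle\cdot |1\rangle\langle 1| .
\]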
\end{description}
\end{proof}
\subsection{Proof of Lemma \ref{lem:cantthink}}
\label{sec:completeness}
\label{lem:cantthink:proof}
\begin{customlemma}{\ref{lem:cantthink}}\
\begin{enumerate}[label=(\roman*)]
\item
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (5, 0) {};
\node [style=xmul] (1) at (4, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=map] (0) at (4.5, 0) {$\sqrt{2}$};
\end{pgfonlayer}
\end{tikzpicture}
$
\item
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (5, 0) {};
\node [style=xmul] (1) at (4, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [bend right=45, looseness=1.25] (0) to (1);
\draw [bend left=45, looseness=1.25] (0) to (1);
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (3, 2) {};
\node [style=h] (1) at (4, 2) {};
\node [style=zeroout] (2) at (5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (1);
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
$
\item
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (3, 2) {};
\node [style=h] (1) at (4, 2) {};
\node [style=zeroout] (2) at (5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (1);
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (5, 0) {};
\node [style=xmul] (1) at (4, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [bend right=45, looseness=1.25] (0) to (1);
\draw [bend left=45, looseness=1.25] (0) to (1);
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
$
\end{enumerate}
\end{customlemma}
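As an informal check (assuming the usual unnormalised spider semantics; the derivations below are purely equational), the scalar in (i) evaluates to
\[
(\langle 0| + \langle 1|)\,(|{+}\rangle + |{-}\rangle) \;=\; (\langle 0| + \langle 1|)\,\sqrt{2}\,|0\rangle \;=\; \sqrt{2},
\]
and the triple-edge scalar in (ii) and (iii) to
\[
(\langle{+}{+}{+}| + \langle{-}{-}{-}|)\,(|000\rangle + |111\rangle) \;=\; \frac{1+1+1-1}{2\sqrt{2}} \;=\; \frac{1}{\sqrt{2}} \;=\; \langle 0|H|0\rangle,
\]
in agreement with the right-hand sides.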
\begin{proof}\
\begin{enumerate}[label=(\roman*)]
\item
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (5, 0) {};
\node [style=xmul] (1) at (4, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (0, 0) {};
\node [style=zeroout] (1) at (2, 0) {};
\node [style=h] (2) at (1, 0) {};
\node [style=map] (3) at (0.5, 1) {$\sqrt{2}$};
\node [style=map] (4) at (1.5, 1) {$\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw (1) to (2);
\end{pgfonlayer}
\end{tikzpicture} \\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=map] (0) at (4.5, 0) {$\sqrt{2}$};
\end{pgfonlayer}
\end{tikzpicture} & \ref{cnoth:H.S}
\end{align*}
\item
\begingroup
\allowdisplaybreaks
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (5, 0) {};
\node [style=xmul] (1) at (4, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [bend right=45, looseness=1.25] (0) to (1);
\draw [bend left=45, looseness=1.25] (0) to (1);
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=fanout] (0) at (3.5, 2) {};
\node [style=fanout] (1) at (4.5, 2.5) {};
\node [style=fanin] (2) at (6.5, 2.5) {};
\node [style=fanout] (3) at (7.5, 2) {};
\node [style=h] (4) at (5.5, 3) {};
\node [style=h] (5) at (5.5, 2) {};
\node [style=h] (6) at (5.5, 1) {};
\node [style=h] (7) at (8.5, 2) {};
\node [style=h] (8) at (2.5, 2) {};
\node [style=zeroin] (9) at (1.5, 2) {};
\node [style=zeroout] (10) at (9.5, 2) {};
\node [style=map] (11) at (4.5, 4) {$\sqrt{2}$};
\node [style=map] (12) at (6.5, 4) {$\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (10) to (7);
\draw (7) to (3);
\draw (3) to (2);
\draw [in=0, out=150, looseness=1.00] (2) to (4);
\draw [in=27, out=180, looseness=1.00] (4) to (1);
\draw (1) to (0);
\draw (0) to (8);
\draw (8) to (9);
\draw [in=180, out=-27, looseness=0.75] (0) to (6);
\draw [in=-153, out=0, looseness=0.75] (6) to (3);
\draw [in=0, out=-153, looseness=1.00] (2) to (5);
\draw [in=180, out=-27, looseness=1.00] (1) to (5);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (5.5, 2) {};
\node [style=h] (1) at (5.5, 1) {};
\node [style=h] (2) at (5.5, 0) {};
\node [style=h] (3) at (8.5, 2) {};
\node [style=h] (4) at (2.5, 2) {};
\node [style=zeroin] (5) at (1.5, 2) {};
\node [style=zeroout] (6) at (9.5, 2) {};
\node [style=map] (7) at (4.5, 3) {$\sqrt{2}$};
\node [style=map] (8) at (6.5, 3) {$\sqrt{2}$};
\node [style=dot] (9) at (4.5, 2) {};
\node [style=dot] (10) at (3.5, 2) {};
\node [style=dot] (11) at (6.5, 2) {};
\node [style=dot] (12) at (7.5, 2) {};
\node [style=oplus] (13) at (3.5, 1) {};
\node [style=oplus] (14) at (4.5, 0) {};
\node [style=oplus] (15) at (7.5, 1) {};
\node [style=oplus] (16) at (6.5, 0) {};
\node [style=zeroout] (17) at (8.5, 1) {};
\node [style=zeroout] (18) at (7.5, 0) {};
\node [style=zeroin] (19) at (2.5, 1) {};
\node [style=zeroin] (20) at (3.5, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (6) to (3);
\draw (4) to (5);
\draw (18) to (16);
\draw (16) to (2);
\draw (2) to (14);
\draw (14) to (20);
\draw (19) to (13);
\draw (13) to (1);
\draw (1) to (15);
\draw (15) to (17);
\draw (15) to (12);
\draw (11) to (16);
\draw (14) to (9);
\draw (10) to (13);
\draw (4) to (10);
\draw (10) to (9);
\draw (9) to (0);
\draw (0) to (11);
\draw (11) to (12);
\draw (12) to (3);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (8.5, 2) {};
\node [style=h] (1) at (2.5, 2) {};
\node [style=zeroin] (2) at (1.5, 2) {};
\node [style=zeroout] (3) at (9.5, 2) {};
\node [style=map] (4) at (4.5, 3) {$\sqrt{2}$};
\node [style=map] (5) at (6.5, 3) {$\sqrt{2}$};
\node [style=dot] (6) at (3.5, 2) {};
\node [style=dot] (7) at (6.5, 2) {};
\node [style=dot] (8) at (7.5, 2) {};
\node [style=oplus] (9) at (3.5, 1) {};
\node [style=oplus] (10) at (7.5, 1) {};
\node [style=oplus] (11) at (6.5, 0) {};
\node [style=zeroout] (12) at (8.5, 1) {};
\node [style=zeroout] (13) at (7.5, 0) {};
\node [style=zeroin] (14) at (2.5, 1) {};
\node [style=zeroin] (15) at (3.5, 0) {};
\node [style=dot] (16) at (5.5, 0) {};
\node [style=oplus] (17) at (5.5, 2) {};
\node [style=h] (18) at (4.5, 0) {};
\node [style=h] (19) at (4.5, 1) {};
\node [style=h] (20) at (4.5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (0);
\draw (1) to (2);
\draw (13) to (11);
\draw (14) to (9);
\draw (10) to (12);
\draw (10) to (8);
\draw (7) to (11);
\draw (6) to (9);
\draw (1) to (6);
\draw (7) to (8);
\draw (8) to (0);
\draw (17) to (16);
\draw (16) to (11);
\draw (16) to (18);
\draw (18) to (15);
\draw (9) to (19);
\draw (19) to (10);
\draw (7) to (17);
\draw (17) to (20);
\draw (20) to (6);
\end{pgfonlayer}
\end{tikzpicture} & \ref{cnoth:H.F}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (8.5, 2) {};
\node [style=h] (1) at (2.5, 2) {};
\node [style=zeroin] (2) at (1.5, 2) {};
\node [style=zeroout] (3) at (9.5, 2) {};
\node [style=map] (4) at (4.5, 3) {$\sqrt{2}$};
\node [style=map] (5) at (6.5, 3) {$\sqrt{2}$};
\node [style=dot] (6) at (3.5, 2) {};
\node [style=dot] (7) at (7.5, 2) {};
\node [style=oplus] (8) at (3.5, 1) {};
\node [style=oplus] (9) at (7.5, 1) {};
\node [style=zeroout] (10) at (8.5, 1) {};
\node [style=zeroout] (11) at (7.5, 0) {};
\node [style=zeroin] (12) at (2.5, 1) {};
\node [style=zeroin] (13) at (3.5, 0) {};
\node [style=h] (14) at (4.5, 0) {};
\node [style=h] (15) at (4.5, 1) {};
\node [style=h] (16) at (4.5, 2) {};
\node [style=oplus] (17) at (6.5, 2) {};
\node [style=dot] (18) at (6.5, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (0);
\draw (1) to (2);
\draw (12) to (8);
\draw (9) to (10);
\draw (9) to (7);
\draw (6) to (8);
\draw (1) to (6);
\draw (7) to (0);
\draw (14) to (13);
\draw (8) to (15);
\draw (15) to (9);
\draw (16) to (6);
\draw (17) to (18);
\draw [in=180, out=0, looseness=1.00] (14) to (17);
\draw (17) to (7);
\draw (11) to (18);
\draw [in=0, out=180, looseness=1.00] (18) to (16);
\end{pgfonlayer}
\end{tikzpicture} & \ref{CNOT.1}, \ref{CNOT.2}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (9.5, 2) {};
\node [style=h] (1) at (2.5, 2) {};
\node [style=zeroin] (2) at (1.5, 2) {};
\node [style=zeroout] (3) at (10.5, 2) {};
\node [style=map] (4) at (3.5, 3) {$\sqrt{2}$};
\node [style=map] (5) at (8.5, 3) {$\sqrt{2}$};
\node [style=dot] (6) at (3.5, 2) {};
\node [style=dot] (7) at (8.5, 2) {};
\node [style=oplus] (8) at (3.5, 1) {};
\node [style=oplus] (9) at (8.5, 1) {};
\node [style=zeroout] (10) at (9.5, 1) {};
\node [style=zeroout] (11) at (5.5, 2) {};
\node [style=zeroin] (12) at (2.5, 1) {};
\node [style=zeroin] (13) at (6.5, 2) {};
\node [style=h] (14) at (7.5, 2) {};
\node [style=h] (15) at (6, 1) {};
\node [style=h] (16) at (4.5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (0);
\draw (1) to (2);
\draw (12) to (8);
\draw (9) to (10);
\draw (9) to (7);
\draw (6) to (8);
\draw (1) to (6);
\draw (7) to (0);
\draw (14) to (13);
\draw (8) to (15);
\draw (15) to (9);
\draw (16) to (6);
\draw (7) to (14);
\draw (11) to (16);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (3.5, 2) {};
\node [style=zeroout] (1) at (8.5, 2) {};
\node [style=map] (2) at (4.5, 3) {$\sqrt{2}$};
\node [style=map] (3) at (7.5, 3) {$\sqrt{2}$};
\node [style=zeroout] (4) at (5.5, 2) {};
\node [style=zeroin] (5) at (6.5, 2) {};
\node [style=h] (6) at (6, 1) {};
\node [style=zeroin] (7) at (2.5, 1) {};
\node [style=h] (8) at (3.5, 1) {};
\node [style=zeroout] (9) at (9.5, 1) {};
\node [style=h] (10) at (8.5, 1) {};
\node [style=oplus] (11) at (4.5, 2) {};
\node [style=dot] (12) at (4.5, 1) {};
\node [style=oplus] (13) at (7.5, 2) {};
\node [style=dot] (14) at (7.5, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (9) to (10);
\draw (8) to (7);
\draw (4) to (11);
\draw (0) to (11);
\draw (11) to (12);
\draw (12) to (6);
\draw (12) to (8);
\draw (13) to (14);
\draw (1) to (13);
\draw (13) to (5);
\draw (6) to (14);
\draw (14) to (10);
\end{pgfonlayer}
\end{tikzpicture} & \ref{cnoth:H.F}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=map] (0) at (4, 2) {$\sqrt{2}$};
\node [style=map] (1) at (8, 2) {$\sqrt{2}$};
\node [style=zeroout] (2) at (5.5, 1.5) {};
\node [style=zeroin] (3) at (6.5, 1.5) {};
\node [style=h] (4) at (6, 0.5) {};
\node [style=zeroin] (5) at (2.5, 1) {};
\node [style=h] (6) at (3.5, 1) {};
\node [style=zeroout] (7) at (9.5, 1) {};
\node [style=h] (8) at (8.5, 1) {};
\node [style=fanout] (9) at (4.5, 1) {};
\node [style=fanin] (10) at (7.5, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (7) to (8);
\draw (6) to (5);
\draw (8) to (10);
\draw [in=0, out=153, looseness=1.00] (10) to (3);
\draw [in=0, out=-162, looseness=1.00] (10) to (4);
\draw [in=180, out=-18, looseness=1.00] (9) to (4);
\draw (9) to (6);
\draw [in=180, out=27, looseness=1.00] (9) to (2);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=map] (0) at (2.5, 2) {$\sqrt{2}$};
\node [style=map] (1) at (8, 2) {$\sqrt{2}$};
\node [style=zeroout] (2) at (3.5, 1) {};
\node [style=zeroin] (3) at (7.5, 1) {};
\node [style=h] (4) at (5.5, 1) {};
\node [style=zeroin] (5) at (1.5, 1) {};
\node [style=h] (6) at (2.5, 1) {};
\node [style=zeroout] (7) at (9.5, 1) {};
\node [style=h] (8) at (8.5, 1) {};
\node [style=zeroin] (9) at (4.5, 1) {};
\node [style=zeroout] (10) at (6.5, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (7) to (8);
\draw (6) to (5);
\draw (8) to (3);
\draw (10) to (4);
\draw (4) to (9);
\draw (2) to (6);
\end{pgfonlayer}
\end{tikzpicture} & \text{\cite[Lemma B.0.2]{thesis}} \\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=map] (0) at (4.75, 4) {$\sqrt{2}$};
\node [style=map] (1) at (6.25, 4) {$\sqrt{2}$};
\node [style=map] (2) at (4, 3) {$1/\sqrt{2}$};
\node [style=map] (3) at (5.5, 3) {$1/\sqrt{2}$};
\node [style=map] (4) at (7, 3) {$1/\sqrt{2}$};
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=map] (0) at (4, 3) {$1/\sqrt{2}$};
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.S}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (3, 2) {};
\node [style=h] (1) at (4, 2) {};
\node [style=zeroout] (2) at (5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (1);
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
\end{align*}
\endgroup
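The scalar bookkeeping in the final steps is simply $\sqrt{2}\cdot\sqrt{2}\cdot(1/\sqrt{2})^{3} = 1/\sqrt{2}$, the value of $\langle 0|H|0\rangle$ noted in the remark above.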
\item
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (3, 2) {};
\node [style=h] (1) at (4, 2) {};
\node [style=zeroout] (2) at (5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (1);
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (4, 2) {};
\node [style=xmul] (1) at (2.25, 2) {};
\node [style=xmul] (2) at (5.75, 2) {};
\node [style=xmul] (3) at (3.5, 3.25) {};
\node [style=zmul] (4) at (2.5, 3.25) {};
\node [style=xmul] (5) at (5.5, 3.25) {};
\node [style=zmul] (6) at (4.5, 3.25) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (0);
\draw [style=simple] (1) to (0);
\draw [style=simple, bend left=45, looseness=1.25] (3) to (4);
\draw [style=simple, bend left=45, looseness=1.25] (4) to (3);
\draw [style=simple] (3) to (4);
\draw [style=simple, bend left=45, looseness=1.25] (5) to (6);
\draw [style=simple, bend left=45, looseness=1.25] (6) to (5);
\draw [style=simple] (5) to (6);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (3.5, 3.25) {};
\node [style=zmul] (1) at (2.5, 3.25) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, bend left=45, looseness=1.25] (0) to (1);
\draw [style=simple, bend left=45, looseness=1.25] (1) to (0);
\draw [style=simple] (0) to (1);
\end{pgfonlayer}
\end{tikzpicture} & \ref{cnoth:H.S}
\end{align*}
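Numerically (under the same informal semantics), the components removed in the last step contribute $2\,\langle 0|H|0\rangle \cdot \tfrac{1}{\sqrt{2}} = 1$, so exactly one triple-edge scalar of value $1/\sqrt{2}$ remains, in agreement with (ii).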
\end{enumerate}
\end{proof}
\subsection{Proof of Lemma \ref{lem:fisfunc}}
\label{lem:fisfunc:proof}
We show that these interpretations are functors:
\begin{customlemma}{\ref{lem:fisfunc}}
$F:\mathsf{CNOT}+H \to \mathsf{ZX}_\pi$ is a strict \dag-symmetric monoidal functor.
\end{customlemma}
\begin{proof}
The preservation of the \dag-symmetric monoidal structure is immediate.
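Explicitly, this amounts to the identities
\[
F(g\circ f) = F(g)\circ F(f), \qquad F(f\otimes g) = F(f)\otimes F(g), \qquad F(f^\dagger) = F(f)^\dagger,
\]
together with preservation of identities and of the symmetry, all of which hold on the nose because $F$ is defined generator-by-generator.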
As the restriction of $F$ to $\mathsf{CNOT}$ is already a functor, it suffices to show that the images under $F$ of \ref{cnoth:H.I}, \ref{cnoth:H.F}, \ref{cnoth:H.U}, \ref{cnoth:H.L}, \ref{cnoth:H.S} and \ref{cnoth:H.Z} hold in $\mathsf{ZX}_\pi$.
\begin{description}
\item[\ref{cnoth:H.I}] Immediate.
\item[\ref{cnoth:H.F}]
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -0) {};
\node [style=rn] (1) at (4, -0) {};
\node [style=oplus] (2) at (2, -0) {};
\node [style=h] (3) at (3, -0) {};
\node [style=h] (4) at (1, -0) {};
\node [style=h] (5) at (1, -1) {};
\node [style=dot] (6) at (2, -1) {};
\node [style=h] (7) at (3, -1) {};
\node [style=rn] (8) at (4, -1) {};
\node [style=rn] (9) at (0, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (4);
\draw [style=simple] (4) to (2);
\draw [style=simple] (2) to (3);
\draw [style=simple] (3) to (1);
\draw [style=simple] (7) to (6);
\draw [style=simple] (2) to (6);
\draw [style=simple] (9) to (5);
\draw [style=simple] (5) to (6);
\draw [style=simple] (7) to (8);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, 0) {};
\node [style=rn] (1) at (4, 0) {};
\node [style=xmul] (2) at (2, 0) {};
\node [style=h] (3) at (3, 0) {};
\node [style=h] (4) at (1, 0) {};
\node [style=h] (5) at (1, -1) {};
\node [style=zmul] (6) at (2, -1) {};
\node [style=h] (7) at (3, -1) {};
\node [style=rn] (8) at (4, -1) {};
\node [style=rn] (9) at (0, -1) {};
\node [style=xmul] (10) at (2.5, 1) {};
\node [style=zmul] (11) at (1.5, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (4);
\draw [style=simple] (4) to (2);
\draw [style=simple] (2) to (3);
\draw [style=simple] (3) to (1);
\draw [style=simple] (7) to (6);
\draw [style=simple] (2) to (6);
\draw [style=simple] (9) to (5);
\draw [style=simple] (5) to (6);
\draw [style=simple] (7) to (8);
\draw [style=simple] (10) to (11);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, 0) {};
\node [style=rn] (1) at (4, 0) {};
\node [style=xmul] (2) at (2, 0) {};
\node [style=h] (3) at (1, -3) {};
\node [style=zmul] (4) at (2, -3) {};
\node [style=rn] (5) at (4, -3) {};
\node [style=h] (6) at (3, -3) {};
\node [style=rn] (7) at (0, -3) {};
\node [style=h] (8) at (2, -2) {};
\node [style=h] (9) at (2, -1) {};
\node [style=h] (10) at (1, 0) {};
\node [style=h] (11) at (3, 0) {};
\node [style=zmul] (12) at (1.5, 1) {};
\node [style=xmul] (13) at (2.5, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (6) to (4);
\draw [style=simple] (7) to (3);
\draw [style=simple] (3) to (4);
\draw [style=simple] (6) to (5);
\draw [style=simple] (1) to (11);
\draw [style=simple] (11) to (2);
\draw [style=simple] (2) to (10);
\draw [style=simple] (10) to (0);
\draw [style=simple] (2) to (9);
\draw [style=simple] (9) to (8);
\draw [style=simple] (8) to (4);
\draw (13) to (12);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2, -2) {};
\node [style=rn] (1) at (4, -2) {};
\node [style=xmul] (2) at (3, -3) {};
\node [style=zmul] (3) at (3, -2) {};
\node [style=rn] (4) at (4, -3) {};
\node [style=rn] (5) at (2, -3) {};
\node [style=zmul] (6) at (2.5, -4) {};
\node [style=xmul] (7) at (3.5, -4) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (3);
\draw [style=simple] (3) to (0);
\draw [style=simple] (3) to (2);
\draw [style=simple] (2) to (5);
\draw [style=simple] (2) to (4);
\draw [style=simple] (6) to (7);
\end{pgfonlayer}
\end{tikzpicture}\\
&\mapsfrom
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1, 0) {};
\node [style=rn] (1) at (3, 0) {};
\node [style=dot] (2) at (2, 0) {};
\node [style=oplus] (3) at (2, -1) {};
\node [style=rn] (4) at (3, -1) {};
\node [style=rn] (5) at (1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (3);
\draw (4) to (3);
\draw (3) to (5);
\draw (0) to (2);
\draw (2) to (1);
\end{pgfonlayer}
\end{tikzpicture}
\end{align*}
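This is the diagrammatic form of the familiar matrix identity (recorded here only as an informal check)
\[
(H\otimes H)\circ \mathrm{CNOT} \circ (H\otimes H) \;=\; \sigma\circ \mathrm{CNOT} \circ \sigma,
\]
where $\sigma$ denotes the symmetry: conjugating by Hadamards exchanges the control and the target.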
\item[\ref{cnoth:H.U}]
Immediate.
\item[\ref{cnoth:H.L}]
Immediate.
\item[\ref{cnoth:H.S}]
Immediate.
\item[\ref{cnoth:H.Z}]
\begingroup
\allowdisplaybreaks
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (6, -0) {};
\node [style=zeroout] (1) at (7, -0) {};
\node [style=none] (2) at (5.5, -1) {};
\node [style=none] (3) at (7.5, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw [style=simple] (3.center) to (2.center);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (5.5, -1) {};
\node [style=none] (1) at (7.5, -1) {};
\node [style=xmul] (2) at (5.5, -0) {$\pi$};
\node [style=xmul] (3) at (7.5, -0) {};
\node [style=zmul] (4) at (5, 1) {};
\node [style=xmul] (5) at (6, 1) {};
\node [style=zmul] (6) at (7, 1) {};
\node [style=xmul] (7) at (8, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1.center) to (0.center);
\draw [style=simple] (3) to (2);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (5) to (4);
\draw [style=simple, in=150, out=30, looseness=1.25] (4) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (7) to (6);
\draw [style=simple, in=150, out=30, looseness=1.25] (6) to (7);
\draw [style=simple] (7) to (6);
\end{pgfonlayer}
\end{tikzpicture} & \text{Lemma \ref{lem:cnottozx} (ii)}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (5.5, -1) {};
\node [style=none] (1) at (7.5, -1) {};
\node [style=xmul] (2) at (6.5, -0) {$\pi$};
\node [style=zmul] (3) at (5, 1) {};
\node [style=xmul] (4) at (6, 1) {};
\node [style=zmul] (5) at (7, 1) {};
\node [style=xmul] (6) at (8, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1.center) to (0.center);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (4) to (3);
\draw [style=simple, in=150, out=30, looseness=1.25] (3) to (4);
\draw [style=simple] (4) to (3);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (6) to (5);
\draw [style=simple, in=150, out=30, looseness=1.25] (5) to (6);
\draw [style=simple] (6) to (5);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (4, -1) {};
\node [style=xmul] (1) at (6.5, -0) {$\pi$};
\node [style=zmul] (2) at (4, -0) {};
\node [style=xmul] (3) at (5, -0) {};
\node [style=zmul] (4) at (8, -0) {};
\node [style=xmul] (5) at (9, -0) {};
\node [style=none] (6) at (9, -1) {};
\node [style=xmul] (7) at (5, -1) {};
\node [style=xmul] (8) at (8, -1) {};
\node [style=zmul] (9) at (6, -1) {};
\node [style=zmul] (10) at (7, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-30, out=-150, looseness=1.25] (3) to (2);
\draw [style=simple, in=150, out=30, looseness=1.25] (2) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (5) to (4);
\draw [style=simple, in=150, out=30, looseness=1.25] (4) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple] (6.center) to (8);
\draw [style=simple] (7) to (0.center);
\draw [style=simple] (10) to (9);
\end{pgfonlayer}
\end{tikzpicture} & \ref{ZX.pi:ZO}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (6, -1) {};
\node [style=xmul] (1) at (7.5, 2) {$\pi$};
\node [style=zmul] (2) at (6, -0) {};
\node [style=xmul] (3) at (7, -0) {};
\node [style=zmul] (4) at (8, -0) {};
\node [style=xmul] (5) at (9, -0) {};
\node [style=none] (6) at (9, -1) {};
\node [style=xmul] (7) at (7, -1) {};
\node [style=xmul] (8) at (8, -1) {};
\node [style=zmul] (9) at (6, 1) {};
\node [style=xmul] (10) at (7, 1) {};
\node [style=xmul] (11) at (9, 1) {};
\node [style=zmul] (12) at (8, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-30, out=-150, looseness=1.25] (3) to (2);
\draw [style=simple, in=150, out=30, looseness=1.25] (2) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (5) to (4);
\draw [style=simple, in=150, out=30, looseness=1.25] (4) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple] (6.center) to (8);
\draw [style=simple] (10) to (9);
\draw [style=simple] (7) to (0.center);
\draw [style=simple] (11) to (12);
\end{pgfonlayer}
\end{tikzpicture} & \text{Lemma \ref{lem:dim}}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (4, -1) {};
\node [style=xmul] (1) at (7.5, 2) {$\pi$};
\node [style=zmul] (2) at (6, -0) {};
\node [style=xmul] (3) at (7, -0) {};
\node [style=zmul] (4) at (8, -0) {};
\node [style=xmul] (5) at (9, -0) {};
\node [style=none] (6) at (11, -1) {};
\node [style=xmul] (7) at (7, -1) {};
\node [style=xmul] (8) at (8, -1) {};
\node [style=zmul] (9) at (6, 1) {};
\node [style=xmul] (10) at (7, 1) {};
\node [style=xmul] (11) at (9, 1) {};
\node [style=zmul] (12) at (8, 1) {};
\node [style=xmul] (13) at (5, 1) {};
\node [style=zmul] (14) at (4, -0) {};
\node [style=xmul] (15) at (5, -0) {};
\node [style=zmul] (16) at (4, 1) {};
\node [style=xmul] (17) at (11, -0) {};
\node [style=zmul] (18) at (10, 1) {};
\node [style=xmul] (19) at (11, 1) {};
\node [style=zmul] (20) at (10, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-30, out=-150, looseness=1.25] (3) to (2);
\draw [style=simple, in=150, out=30, looseness=1.25] (2) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (5) to (4);
\draw [style=simple, in=150, out=30, looseness=1.25] (4) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple] (6.center) to (8);
\draw [style=simple] (10) to (9);
\draw [style=simple] (7) to (0.center);
\draw [style=simple] (11) to (12);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (15) to (14);
\draw [style=simple, in=150, out=30, looseness=1.25] (14) to (15);
\draw [style=simple] (15) to (14);
\draw [style=simple] (13) to (16);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (17) to (20);
\draw [style=simple, in=150, out=30, looseness=1.25] (20) to (17);
\draw [style=simple] (17) to (20);
\draw [style=simple] (19) to (18);
\end{pgfonlayer}
\end{tikzpicture} & \text{\ref{ZX.pi:IV}}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (4, -1) {};
\node [style=xmul] (1) at (7.5, 2) {$\pi$};
\node [style=zmul] (2) at (6, -0) {};
\node [style=xmul] (3) at (7, -0) {};
\node [style=zmul] (4) at (8, -0) {};
\node [style=xmul] (5) at (9, -0) {};
\node [style=none] (6) at (11, -1) {};
\node [style=xmul] (7) at (7, -1) {};
\node [style=xmul] (8) at (8, -1) {};
\node [style=zmul] (9) at (5, 1) {};
\node [style=xmul] (10) at (6, 1) {};
\node [style=xmul] (11) at (9.75, 1) {};
\node [style=zmul] (12) at (8.75, 1) {};
\node [style=zmul] (13) at (4, -0) {};
\node [style=xmul] (14) at (5, -0) {};
\node [style=xmul] (15) at (11, -0) {};
\node [style=zmul] (16) at (10, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-30, out=-150, looseness=1.25] (3) to (2);
\draw [style=simple, in=150, out=30, looseness=1.25] (2) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (5) to (4);
\draw [style=simple, in=150, out=30, looseness=1.25] (4) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple] (6.center) to (8);
\draw [style=simple] (10) to (9);
\draw [style=simple] (7) to (0.center);
\draw [style=simple] (11) to (12);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (14) to (13);
\draw [style=simple, in=150, out=30, looseness=1.25] (13) to (14);
\draw [style=simple] (14) to (13);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (15) to (16);
\draw [style=simple, in=150, out=30, looseness=1.25] (16) to (15);
\draw [style=simple] (15) to (16);
\end{pgfonlayer}
\end{tikzpicture} & \text{\ref{ZX.pi:ZO}}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (8, -1) {};
\node [style=xmul] (1) at (9.5, 1) {$\pi$};
\node [style=none] (2) at (11, -1) {};
\node [style=xmul] (3) at (9, -1) {};
\node [style=xmul] (4) at (10, -1) {};
\node [style=zmul] (5) at (8, -0) {};
\node [style=xmul] (6) at (9, -0) {};
\node [style=xmul] (7) at (11, -0) {};
\node [style=zmul] (8) at (10, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2.center) to (4);
\draw [style=simple] (3) to (0.center);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (6) to (5);
\draw [style=simple, in=150, out=30, looseness=1.25] (5) to (6);
\draw [style=simple] (6) to (5);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (7) to (8);
\draw [style=simple, in=150, out=30, looseness=1.25] (8) to (7);
\draw [style=simple] (7) to (8);
\end{pgfonlayer}
\end{tikzpicture} & {\ref{ZX.pi:IV}}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (9.5, 1) {$\pi$};
\node [style=xmul] (1) at (9, -1) {};
\node [style=xmul] (2) at (10, -1) {};
\node [style=zmul] (3) at (8, -0) {};
\node [style=xmul] (4) at (9, -0) {};
\node [style=xmul] (5) at (11, -0) {};
\node [style=zmul] (6) at (10, -0) {};
\node [style=none] (7) at (6, -1) {};
\node [style=none] (8) at (13, -1) {};
\node [style=xmul] (9) at (7, -1) {$\pi$};
\node [style=xmul] (10) at (8, -1) {$\pi$};
\node [style=xmul] (11) at (12, -1) {$\pi$};
\node [style=xmul] (12) at (11, -1) {$\pi$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-30, out=-150, looseness=1.25] (4) to (3);
\draw [style=simple, in=150, out=30, looseness=1.25] (3) to (4);
\draw [style=simple] (4) to (3);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (5) to (6);
\draw [style=simple, in=150, out=30, looseness=1.25] (6) to (5);
\draw [style=simple] (5) to (6);
\draw [style=simple] (8.center) to (11);
\draw [style=simple] (11) to (12);
\draw [style=simple] (12) to (2);
\draw [style=simple] (1) to (10);
\draw [style=simple] (10) to (9);
\draw [style=simple] (9) to (7.center);
\end{pgfonlayer}
\end{tikzpicture} & {\ref{ZX.pi:PP}}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (9.5, 1) {$\pi$};
\node [style=xmul] (1) at (9.75, -1) {};
\node [style=zmul] (2) at (8, -0) {};
\node [style=xmul] (3) at (9, -0) {};
\node [style=xmul] (4) at (11, -0) {};
\node [style=zmul] (5) at (10, -0) {};
\node [style=none] (6) at (7, -1) {};
\node [style=none] (7) at (11.75, -1) {};
\node [style=xmul] (8) at (8, -1) {$\pi$};
\node [style=xmul] (9) at (10.75, -1) {$\pi$};
\node [style=xmul] (10) at (9, -2) {};
\node [style=xmul] (11) at (8, -2) {$\pi$};
\node [style=zmul] (12) at (7, -2) {};
\node [style=xmul] (13) at (9, -1) {};
\node [style=xmul] (14) at (10.75, -2) {$\pi$};
\node [style=zmul] (15) at (11.75, -2) {};
\node [style=xmul] (16) at (9.75, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-30, out=-150, looseness=1.25] (3) to (2);
\draw [style=simple, in=150, out=30, looseness=1.25] (2) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (4) to (5);
\draw [style=simple, in=150, out=30, looseness=1.25] (5) to (4);
\draw [style=simple] (4) to (5);
\draw [style=simple] (7.center) to (9);
\draw [style=simple] (8) to (6.center);
\draw [style=simple] (10) to (11);
\draw [style=simple] (13) to (8);
\draw [style=simple] (11) to (12);
\draw [style=simple] (9) to (1);
\draw [style=simple] (16) to (14);
\draw [style=simple] (14) to (15);
\end{pgfonlayer}
\end{tikzpicture}& {\ref{ZX.pi:ZO}}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (9.5, 1) {$\pi$};
\node [style=zmul] (1) at (8, -0) {};
\node [style=xmul] (2) at (9, -0) {};
\node [style=xmul] (3) at (11, -0) {};
\node [style=zmul] (4) at (10, -0) {};
\node [style=none] (5) at (8, -1) {};
\node [style=none] (6) at (11, -1) {};
\node [style=xmul] (7) at (9, -1) {$\pi$};
\node [style=xmul] (8) at (10, -1) {$\pi$};
\node [style=xmul] (9) at (10.5, -2) {};
\node [style=xmul] (10) at (9.5, -2) {$\pi$};
\node [style=zmul] (11) at (8.5, -2) {};
\node [style=xmul] (12) at (9.5, -3) {$\pi$};
\node [style=zmul] (13) at (8.5, -3) {};
\node [style=xmul] (14) at (10.5, -3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-30, out=-150, looseness=1.25] (2) to (1);
\draw [style=simple, in=150, out=30, looseness=1.25] (1) to (2);
\draw [style=simple] (2) to (1);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (3) to (4);
\draw [style=simple, in=150, out=30, looseness=1.25] (4) to (3);
\draw [style=simple] (3) to (4);
\draw [style=simple] (6.center) to (8);
\draw [style=simple] (7) to (5.center);
\draw [style=simple] (9) to (10);
\draw [style=simple] (10) to (11);
\draw [style=simple] (14) to (12);
\draw [style=simple] (12) to (13);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (9.5, 1) {$\pi$};
\node [style=zmul] (1) at (8, -0) {};
\node [style=xmul] (2) at (9, -0) {};
\node [style=xmul] (3) at (11, -0) {};
\node [style=zmul] (4) at (10, -0) {};
\node [style=none] (5) at (8, -1) {};
\node [style=none] (6) at (11, -1) {};
\node [style=xmul] (7) at (9, -1) {$\pi$};
\node [style=xmul] (8) at (10, -1) {$\pi$};
\node [style=xmul] (9) at (11, -2) {};
\node [style=xmul] (10) at (10, -2) {$\pi$};
\node [style=xmul] (11) at (10, -3) {$\pi$};
\node [style=xmul] (12) at (11, -3) {};
\node [style=zmul] (13) at (8, -2.5) {};
\node [style=xmul] (14) at (9, -2.5) {};
\node [style=zmul] (15) at (9, -4) {};
\node [style=xmul] (16) at (10, -4) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-30, out=-150, looseness=1.25] (2) to (1);
\draw [style=simple, in=150, out=30, looseness=1.25] (1) to (2);
\draw [style=simple] (2) to (1);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (3) to (4);
\draw [style=simple, in=150, out=30, looseness=1.25] (4) to (3);
\draw [style=simple] (3) to (4);
\draw [style=simple] (6.center) to (8);
\draw [style=simple] (7) to (5.center);
\draw [style=simple] (9) to (10);
\draw [style=simple] (12) to (11);
\draw [style=simple, in=-27, out=180, looseness=1.00] (11) to (14);
\draw [style=simple] (14) to (13);
\draw [style=simple, in=180, out=27, looseness=1.00] (14) to (10);
\draw [style=simple] (16) to (15);
\end{pgfonlayer}
\end{tikzpicture} & {\ref{ZX.pi:B.U}}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (9.5, 1) {$\pi$};
\node [style=zmul] (1) at (8, -0) {};
\node [style=xmul] (2) at (9, -0) {};
\node [style=xmul] (3) at (11, -0) {};
\node [style=zmul] (4) at (10, -0) {};
\node [style=none] (5) at (8, -1) {};
\node [style=none] (6) at (11, -1) {};
\node [style=xmul] (7) at (9, -1) {$\pi$};
\node [style=xmul] (8) at (10, -1) {$\pi$};
\node [style=xmul] (9) at (10, -2) {$\pi$};
\node [style=xmul] (10) at (10, -3) {};
\node [style=zmul] (11) at (8, -2.5) {};
\node [style=xmul] (12) at (9, -2.5) {};
\node [style=zmul] (13) at (9, -4) {};
\node [style=xmul] (14) at (10, -4) {};
\node [style=xmul] (15) at (12, -2) {};
\node [style=xmul] (16) at (11, -2) {$\pi$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-30, out=-150, looseness=1.25] (2) to (1);
\draw [style=simple, in=150, out=30, looseness=1.25] (1) to (2);
\draw [style=simple] (2) to (1);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (3) to (4);
\draw [style=simple, in=150, out=30, looseness=1.25] (4) to (3);
\draw [style=simple] (3) to (4);
\draw [style=simple] (6.center) to (8);
\draw [style=simple] (7) to (5.center);
\draw [style=simple] (12) to (11);
\draw [style=simple, in=180, out=27, looseness=1.00] (12) to (9);
\draw [style=simple] (14) to (13);
\draw [style=simple] (15) to (16);
\draw [style=simple] (16) to (9);
\draw [style=simple, in=-27, out=180, looseness=1.00] (10) to (12);
\end{pgfonlayer}
\end{tikzpicture}& {\ref{ZX.pi:PH}}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (9.5, 1) {$\pi$};
\node [style=zmul] (1) at (8, -0) {};
\node [style=xmul] (2) at (9, -0) {};
\node [style=xmul] (3) at (11, -0) {};
\node [style=zmul] (4) at (10, -0) {};
\node [style=none] (5) at (8, -1) {};
\node [style=none] (6) at (11, -1) {};
\node [style=xmul] (7) at (9, -1) {$\pi$};
\node [style=xmul] (8) at (10, -1) {$\pi$};
\node [style=xmul] (9) at (10.5, -1.75) {};
\node [style=xmul] (10) at (10.5, -2.75) {};
\node [style=zmul] (11) at (8.5, -2.25) {};
\node [style=xmul] (12) at (9.5, -2.25) {};
\node [style=zmul] (13) at (9, -3.75) {};
\node [style=xmul] (14) at (10, -3.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-30, out=-150, looseness=1.25] (2) to (1);
\draw [style=simple, in=150, out=30, looseness=1.25] (1) to (2);
\draw [style=simple] (2) to (1);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (3) to (4);
\draw [style=simple, in=150, out=30, looseness=1.25] (4) to (3);
\draw [style=simple] (3) to (4);
\draw [style=simple] (6.center) to (8);
\draw [style=simple] (7) to (5.center);
\draw [style=simple] (12) to (11);
\draw [style=simple, in=180, out=27, looseness=1.00] (12) to (9);
\draw [style=simple] (14) to (13);
\draw [style=simple, in=-27, out=180, looseness=1.00] (10) to (12);
\end{pgfonlayer}
\end{tikzpicture} & {\ref{ZX.pi:PP}}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (9.5, 1) {$\pi$};
\node [style=zmul] (1) at (8, -0) {};
\node [style=xmul] (2) at (9, -0) {};
\node [style=xmul] (3) at (11, -0) {};
\node [style=zmul] (4) at (10, -0) {};
\node [style=none] (5) at (8, -1) {};
\node [style=none] (6) at (11, -1) {};
\node [style=xmul] (7) at (9, -1) {$\pi$};
\node [style=xmul] (8) at (10, -1) {$\pi$};
\node [style=zmul] (9) at (9, -2) {};
\node [style=xmul] (10) at (10, -2) {};
\node [style=zmul] (11) at (9, -3) {};
\node [style=xmul] (12) at (10, -3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-30, out=-150, looseness=1.25] (2) to (1);
\draw [style=simple, in=150, out=30, looseness=1.25] (1) to (2);
\draw [style=simple] (2) to (1);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (3) to (4);
\draw [style=simple, in=150, out=30, looseness=1.25] (4) to (3);
\draw [style=simple] (3) to (4);
\draw [style=simple] (6.center) to (8);
\draw [style=simple] (7) to (5.center);
\draw [style=simple] (10) to (9);
\draw [style=simple] (12) to (11);
\end{pgfonlayer}
\end{tikzpicture} & {\ref{ZX.pi:ZO}}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (9.75, 1) {$\pi$};
\node [style=xmul] (1) at (11.5, -0) {};
\node [style=zmul] (2) at (10.5, -0) {};
\node [style=none] (3) at (8, -1) {};
\node [style=none] (4) at (11.5, -1) {};
\node [style=xmul] (5) at (9, -1) {$\pi$};
\node [style=xmul] (6) at (10.5, -1) {$\pi$};
\node [style=xmul] (7) at (9, -0) {};
\node [style=zmul] (8) at (8, -0) {};
\node [style=xmul] (9) at (9, -2) {};
\node [style=zmul] (10) at (8, -2) {};
\node [style=zmul] (11) at (10.5, -2) {};
\node [style=xmul] (12) at (11.5, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-30, out=-150, looseness=1.25] (1) to (2);
\draw [style=simple, in=150, out=30, looseness=1.25] (2) to (1);
\draw [style=simple] (1) to (2);
\draw [style=simple] (4.center) to (6);
\draw [style=simple] (5) to (3.center);
\draw [style=simple, in=150, out=30, looseness=1.25] (8) to (7);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (7) to (8);
\draw [style=simple] (7) to (8);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (9) to (10);
\draw [style=simple, in=150, out=30, looseness=1.25] (10) to (9);
\draw [style=simple] (9) to (10);
\draw [style=simple, in=-30, out=-150, looseness=1.25] (12) to (11);
\draw [style=simple, in=150, out=30, looseness=1.25] (11) to (12);
\draw [style=simple] (12) to (11);
\end{pgfonlayer}
\end{tikzpicture}\\
&\mapsfrom
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (6, -0) {};
\node [style=zeroout] (1) at (7, -0) {};
\node [style=none] (2) at (5, -1) {};
\node [style=none] (3) at (8, -1) {};
\node [style=onein] (4) at (7, -1) {};
\node [style=oneout] (5) at (6, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw [style=simple] (3.center) to (4);
\draw [style=simple] (5) to (2.center);
\end{pgfonlayer}
\end{tikzpicture} & \text{Lemma \ref{lem:cnottozx} (ii)}
\end{align*}
\endgroup
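As with \ref{CNOT.9}, an informal semantic check: the detached scalar on both sides of \ref{cnoth:H.Z} is $\langle 0|1\rangle = 0$, so both sides denote the zero map.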
\end{description}
\end{proof}
\subsection{Proof of Lemma \ref{lem:gisfunc}}
\label{lem:gisfunc:proof}
\begin{customlemma}{\ref{lem:gisfunc}}\
$G:\mathsf{ZX}_\pi\to\mathsf{CNOT}+H$ is a strict \dag-symmetric monoidal functor.
\end{customlemma}
\begin{proof}
We verify that the image under $G$ of each axiom of $\mathsf{ZX}_\pi$ holds in $\mathsf{CNOT}+H$:
\begin{description}
\item[\ref{ZX.pi:PI}]
This follows by naturality of $\Delta$ in $\mathsf{CNOT}$.
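Concretely, writing $\mathsf{NOT}$ for the single-wire $\oplus$ gate (notation used only in these remarks), the relevant naturality square is copying commuting with negation:
\[
\Delta\circ \mathsf{NOT} \;=\; (\mathsf{NOT}\otimes \mathsf{NOT})\circ \Delta .
\]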
\item[\ref{ZX.pi:B.U}]
This follows by naturality of $\Delta$ in $\mathsf{CNOT}$ and \ref{cnoth:H.S}.
\item[\ref{ZX.pi:H2}]
This follows immediately from \ref{cnoth:H.I}.
\item[\ref{ZX.pi:IV}]
This follows immediately from \ref{cnoth:H.S}.
\item[\ref{ZX.pi:PP}]
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (-3, -0) {};
\node [style=none] (1) at (0.25, -0) {};
\node [style=zmul] (2) at (-2, -0) {$\pi$};
\node [style=zmul] (3) at (-0.75, -0) {$\pi$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (3);
\draw (3) to (2);
\draw (2) to (0.center);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (-2, -2) {};
\node [style=h] (1) at (0, -2) {};
\node [style=h] (2) at (1, -2) {};
\node [style=h] (3) at (3, -2) {};
\node [style=none] (4) at (-3, -2) {};
\node [style=none] (5) at (4, -2) {};
\node [style=oplus] (6) at (-1, -2) {};
\node [style=oplus] (7) at (2, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (4.center) to (0);
\draw (0) to (6);
\draw (6) to (1);
\draw (1) to (2);
\draw (2) to (7);
\draw (7) to (3);
\draw (3) to (5.center);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (0, -2) {};
\node [style=h] (1) at (3, -2) {};
\node [style=none] (2) at (-1, -2) {};
\node [style=none] (3) at (4, -2) {};
\node [style=oplus] (4) at (1, -2) {};
\node [style=oplus] (5) at (2, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2.center) to (0);
\draw (0) to (4);
\draw (5) to (1);
\draw (1) to (3.center);
\draw (4) to (5);
\end{pgfonlayer}
\end{tikzpicture} & \ref{cnoth:H.I}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (2, -2) {};
\node [style=h] (1) at (3, -2) {};
\node [style=none] (2) at (1, -2) {};
\node [style=none] (3) at (4, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2.center) to (0);
\draw (1) to (3.center);
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (1, -2) {};
\node [style=none] (1) at (3, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0.center) to (1.center);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.I}\\
&\mapsfrom
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (1, -2) {};
\node [style=none] (1) at (3, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0.center) to (1.center);
\end{pgfonlayer}
\end{tikzpicture}
\end{align*}
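Reading circuits left to right, the derivation is the one-line computation
\[
H\,\mathsf{NOT}\,H\,H\,\mathsf{NOT}\,H \;=\; H\,\mathsf{NOT}\,\mathsf{NOT}\,H \;=\; H\,H \;=\; \mathrm{id},
\]
where the inner and outer cancellations are \ref{cnoth:H.I} and the middle step is the involutivity of the single-wire $\oplus$ gate.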
\item[\ref{ZX.pi:B.H}]
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (3, 0) {};
\node [style=xmul] (1) at (4, 0) {};
\node [style=rn] (2) at (5, 0) {};
\node [style=rn] (3) at (2, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (1);
\draw [bend right=45, looseness=1.25] (1) to (0);
\draw [bend right=45, looseness=1.25] (0) to (1);
\draw (0) to (3);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-1, -1) {};
\node [style=rn] (1) at (4, -2) {};
\node [style=h] (2) at (3, -2) {};
\node [style=h] (3) at (1, -1) {};
\node [style=h] (4) at (1, -2) {};
\node [style=zeroout] (5) at (3, -1) {};
\node [style=zeroin] (6) at (-1, -2) {};
\node [style=oplus] (7) at (2, -1) {};
\node [style=oplus] (8) at (0, -2) {};
\node [style=dot] (9) at (0, -1) {};
\node [style=dot] (10) at (2, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (2);
\draw [style=simple] (0) to (9);
\draw [style=simple] (9) to (3);
\draw [style=simple] (3) to (7);
\draw [style=simple] (7) to (5);
\draw [style=simple] (1) to (10);
\draw [style=simple] (10) to (7);
\draw [style=simple] (3) to (4);
\draw [style=simple] (4) to (10);
\draw [style=simple] (4) to (8);
\draw [style=simple] (8) to (9);
\draw [style=simple] (8) to (6);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-1, -1) {};
\node [style=rn] (1) at (3, -2) {};
\node [style=zeroin] (2) at (-1, -2) {};
\node [style=oplus] (3) at (1, -2) {};
\node [style=oplus] (4) at (0, -2) {};
\node [style=dot] (5) at (0, -1) {};
\node [style=h] (6) at (2, -1) {};
\node [style=zeroout] (7) at (3, -1) {};
\node [style=dot] (8) at (1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (5);
\draw [style=simple] (4) to (5);
\draw [style=simple] (4) to (2);
\draw [style=simple] (1) to (3);
\draw [style=simple] (3) to (4);
\draw [style=simple] (5) to (8);
\draw [style=simple] (8) to (3);
\draw [style=simple] (7) to (6);
\draw [style=simple] (6) to (8);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.F}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-1, -1) {};
\node [style=rn] (1) at (3, -1) {};
\node [style=zeroin] (2) at (2, -1) {};
\node [style=h] (3) at (0, -1) {};
\node [style=zeroout] (4) at (1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (4) to (3);
\draw [style=simple] (3) to (0);
\draw [style=simple] (2) to (1);
\end{pgfonlayer}
\end{tikzpicture}& \ref{CNOT.2}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-1, -1) {};
\node [style=rn] (1) at (3, -1) {};
\node [style=zeroin] (2) at (2, -1) {};
\node [style=h] (3) at (0, -1) {};
\node [style=zeroout] (4) at (1, -1) {};
\node [style=map] (5) at (0, 0) {$\sqrt{2}$};
\node [style=map] (6) at (2.5, 0) {$\sqrt{2}$};
\node [style=map] (7) at (2.5, 1) {$1/\sqrt{2}$};
\node [style=map] (8) at (0, 1) {$1/\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (4) to (3);
\draw [style=simple] (3) to (0);
\draw [style=simple] (2) to (1);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.Z}\\
&\mapsfrom
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (3, -0) {};
\node [style=rn] (1) at (2, -0) {};
\node [style=rn] (2) at (5, -0) {};
\node [style=xmul] (3) at (4, -0) {};
\node [style=xmul] (4) at (4, 1) {};
\node [style=zmul] (5) at (5, 1) {};
\node [style=zmul] (6) at (3, 1) {};
\node [style=xmul] (7) at (2, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\draw (2) to (3);
\draw [bend left, looseness=1.25] (5) to (4);
\draw [bend left, looseness=1.25] (4) to (5);
\draw (5) to (4);
\draw [bend left, looseness=1.25] (6) to (7);
\draw [bend left, looseness=1.25] (7) to (6);
\draw (6) to (7);
\end{pgfonlayer}
\end{tikzpicture} & \text{Lemma \ref{lem:cantthink} (ii)}\times 2
\end{align*}
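Semantically (an informal check under the standard interpretation), both sides of \ref{ZX.pi:B.H} send a basis state $|x\rangle$ to $\tfrac{1}{\sqrt{2}}\,|x\oplus x\rangle = \tfrac{1}{\sqrt{2}}\,|0\rangle$; that is, both equal $|0\rangle\langle{+}|$, which is precisely the state--effect pair $|0\rangle$ and $\langle 0|H$ appearing in the penultimate line.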
\item[\ref{ZX.pi:B.M}]
\begingroup
\allowdisplaybreaks
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (7, -2) {};
\node [style=zmul] (1) at (7, -2.75) {};
\node [style=xmul] (2) at (8.75, -2) {};
\node [style=xmul] (3) at (8.75, -2.75) {};
\node [style=rn] (4) at (6, -2) {};
\node [style=rn] (5) at (6, -2.75) {};
\node [style=rn] (6) at (9.75, -2) {};
\node [style=rn] (7) at (9.75, -2.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (7) to (3);
\draw (3) to (0);
\draw (2) to (1);
\draw [bend right, looseness=1.00] (1) to (3);
\draw [bend right, looseness=1.00] (2) to (0);
\draw (0) to (4);
\draw (5) to (1);
\draw (2) to (6);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=fanout] (0) at (-0.25, -1.75) {};
\node [style=fanout] (1) at (-0.25, -3.25) {};
\node [style=fanin] (2) at (2.5, -1.75) {};
\node [style=fanin] (3) at (2.5, -3.25) {};
\node [style=h] (4) at (1.25, -1) {};
\node [style=h] (5) at (1.25, -4) {};
\node [style=h] (6) at (3.5, -1.75) {};
\node [style=h] (7) at (3.5, -3.25) {};
\node [style=rn] (8) at (4.5, -1.75) {};
\node [style=rn] (9) at (4.5, -3.25) {};
\node [style=rn] (10) at (-1.5, -1.75) {};
\node [style=rn] (11) at (-1.5, -3.25) {};
\node [style=h] (12) at (1.25, -2.75) {};
\node [style=h] (13) at (1.25, -2.25) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=180, out=-34, looseness=1.00] (1) to (5);
\draw [style=simple, in=180, out=34, looseness=1.00] (0) to (4);
\draw [style=simple, in=143, out=0, looseness=1.00] (4) to (2);
\draw [style=simple] (2) to (6);
\draw [style=simple] (6) to (8);
\draw [style=simple] (0) to (10);
\draw [style=simple] (11) to (1);
\draw [style=simple] (3) to (7);
\draw [style=simple] (7) to (9);
\draw [style=simple, in=0, out=-143, looseness=1.00] (3) to (5);
\draw [style=simple, in=180, out=27, looseness=1.00] (1) to (13);
\draw [style=simple, in=0, out=-153, looseness=1.00] (2) to (13);
\draw [style=simple, in=0, out=153, looseness=1.00] (3) to (12);
\draw [style=simple, in=-27, out=180, looseness=1.00] (12) to (0);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=fanin] (0) at (2.5, -1.75) {};
\node [style=fanin] (1) at (2.5, -3.25) {};
\node [style=h] (2) at (1.25, -1) {};
\node [style=h] (3) at (1.25, -4) {};
\node [style=h] (4) at (3.5, -1.75) {};
\node [style=h] (5) at (3.5, -3.25) {};
\node [style=rn] (6) at (4.5, -1.75) {};
\node [style=rn] (7) at (4.5, -3.25) {};
\node [style=rn] (8) at (-1.5, -1) {};
\node [style=rn] (9) at (-1.5, -3) {};
\node [style=h] (10) at (1.25, -3) {};
\node [style=h] (11) at (1.25, -2) {};
\node [style=dot] (12) at (0, -1) {};
\node [style=dot] (13) at (0, -3) {};
\node [style=oplus] (14) at (0, -4) {};
\node [style=oplus] (15) at (0, -2) {};
\node [style=zeroin] (16) at (-1.5, -4) {};
\node [style=zeroin] (17) at (-1.5, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=143, out=0, looseness=1.00] (2) to (0);
\draw [style=simple] (0) to (4);
\draw [style=simple] (4) to (6);
\draw [style=simple] (1) to (5);
\draw [style=simple] (5) to (7);
\draw [style=simple, in=0, out=-143, looseness=1.00] (1) to (3);
\draw [style=simple, in=0, out=-153, looseness=1.00] (0) to (11);
\draw [style=simple, in=0, out=153, looseness=1.00] (1) to (10);
\draw [style=simple] (8) to (12);
\draw [style=simple] (2) to (12);
\draw [style=simple] (17) to (15);
\draw [style=simple, in=180, out=0, looseness=1.00] (15) to (10);
\draw [style=simple, in=0, out=180, looseness=1.00] (11) to (13);
\draw [style=simple] (13) to (9);
\draw [style=simple] (16) to (14);
\draw [style=simple] (14) to (3);
\draw [style=simple] (14) to (13);
\draw [style=simple] (15) to (12);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=fanin] (0) at (2, -1.75) {};
\node [style=fanin] (1) at (2, -3.25) {};
\node [style=h] (2) at (3, -1.75) {};
\node [style=h] (3) at (3, -3.25) {};
\node [style=rn] (4) at (4, -1.75) {};
\node [style=rn] (5) at (4, -3.25) {};
\node [style=oplus] (6) at (0, -1) {};
\node [style=rn] (7) at (-2.25, -1) {};
\node [style=dot] (8) at (0, -2) {};
\node [style=zeroin] (9) at (-2.25, -2) {};
\node [style=h] (10) at (-1, -2) {};
\node [style=h] (11) at (-1, -1) {};
\node [style=oplus] (12) at (0, -3) {};
\node [style=dot] (13) at (0, -4) {};
\node [style=zeroin] (14) at (-2.25, -4) {};
\node [style=h] (15) at (-1, -4) {};
\node [style=rn] (16) at (-2.25, -3) {};
\node [style=h] (17) at (-1, -3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (2);
\draw [style=simple] (2) to (4);
\draw [style=simple] (1) to (3);
\draw [style=simple] (3) to (5);
\draw [style=simple] (6) to (8);
\draw [style=simple] (12) to (13);
\draw [style=simple] (14) to (15);
\draw [style=simple] (13) to (15);
\draw [style=simple, in=0, out=-159, looseness=1.00] (1) to (13);
\draw [style=simple, in=-148, out=0, looseness=1.00] (12) to (0);
\draw [style=simple, in=0, out=148, looseness=1.00] (1) to (8);
\draw [style=simple] (8) to (10);
\draw [style=simple] (10) to (9);
\draw [style=simple] (7) to (11);
\draw [style=simple] (11) to (6);
\draw [style=simple, in=159, out=0, looseness=1.00] (6) to (0);
\draw [style=simple] (12) to (17);
\draw [style=simple] (17) to (16);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.F}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (3, -1.75) {};
\node [style=h] (1) at (3, -3.25) {};
\node [style=rn] (2) at (4, -1.75) {};
\node [style=rn] (3) at (4, -3.25) {};
\node [style=rn] (4) at (-2.25, -1) {};
\node [style=zeroin] (5) at (-2.25, -2) {};
\node [style=h] (6) at (-1, -2) {};
\node [style=h] (7) at (-1, -1) {};
\node [style=zeroin] (8) at (-2.25, -4) {};
\node [style=h] (9) at (-1, -4) {};
\node [style=rn] (10) at (-2.25, -3) {};
\node [style=h] (11) at (-1, -3) {};
\node [style=fanin] (12) at (1, -1.75) {};
\node [style=fanin] (13) at (1, -3.25) {};
\node [style=oplus] (14) at (2, -1.75) {};
\node [style=dot] (15) at (2, -3.25) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (2);
\draw [style=simple] (1) to (3);
\draw [style=simple] (8) to (9);
\draw [style=simple] (6) to (5);
\draw [style=simple] (4) to (7);
\draw [style=simple] (11) to (10);
\draw [style=simple] (14) to (15);
\draw [style=simple, in=0, out=159, looseness=1.00] (12) to (7);
\draw [style=simple, in=0, out=-148, looseness=1.00] (12) to (11);
\draw [style=simple, in=148, out=0, looseness=1.00] (6) to (13);
\draw [style=simple, in=-159, out=0, looseness=1.00] (9) to (13);
\draw [style=simple] (13) to (15);
\draw [style=simple] (15) to (1);
\draw [style=simple] (14) to (0);
\draw [style=simple] (14) to (12);
\end{pgfonlayer}
\end{tikzpicture}& \text{$\Delta$ natural in $\mathsf{CNOT}$}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (3, -0) {};
\node [style=h] (1) at (3, -1) {};
\node [style=rn] (2) at (4, -0) {};
\node [style=rn] (3) at (4, -1) {};
\node [style=rn] (4) at (-3.25, 0.75) {};
\node [style=h] (5) at (-2, 0.75) {};
\node [style=zeroin] (6) at (0, -1) {};
\node [style=h] (7) at (1, -1) {};
\node [style=rn] (8) at (-3.25, -0.75) {};
\node [style=h] (9) at (-2, -0.75) {};
\node [style=fanin] (10) at (0, -0) {};
\node [style=oplus] (11) at (2, -0) {};
\node [style=dot] (12) at (2, -1) {};
\node [style=map] (13) at (1, 1) {$1/\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (2);
\draw [style=simple] (1) to (3);
\draw [style=simple] (6) to (7);
\draw [style=simple] (4) to (5);
\draw [style=simple] (9) to (8);
\draw [style=simple] (11) to (12);
\draw [style=simple, in=0, out=159, looseness=1.00] (10) to (5);
\draw [style=simple, in=0, out=-148, looseness=1.00] (10) to (9);
\draw [style=simple] (12) to (1);
\draw [style=simple] (11) to (0);
\draw [style=simple] (11) to (10);
\draw [style=simple] (12) to (7);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.U}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3, -0) {};
\node [style=rn] (1) at (3, -1) {};
\node [style=rn] (2) at (-3.25, 0.75) {};
\node [style=h] (3) at (-2, 0.75) {};
\node [style=zeroin] (4) at (1, -1) {};
\node [style=rn] (5) at (-3.25, -0.75) {};
\node [style=h] (6) at (-2, -0.75) {};
\node [style=fanin] (7) at (0, -0) {};
\node [style=h] (8) at (1, -0) {};
\node [style=dot] (9) at (2, -0) {};
\node [style=oplus] (10) at (2, -1) {};
\node [style=map] (11) at (1, 1) {$1/\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (3);
\draw [style=simple] (6) to (5);
\draw [style=simple, in=0, out=159, looseness=1.00] (7) to (3);
\draw [style=simple, in=0, out=-148, looseness=1.00] (7) to (6);
\draw [style=simple] (9) to (10);
\draw [style=simple] (8) to (9);
\draw [style=simple] (9) to (0);
\draw [style=simple] (1) to (10);
\draw [style=simple] (10) to (4);
\draw [style=simple] (8) to (7);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.F}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (2.75, -0.75) {};
\node [style=rn] (1) at (1.5, -0.75) {};
\node [style=fanin] (2) at (4.75, 0) {};
\node [style=rn] (3) at (1.5, 0.75) {};
\node [style=h] (4) at (2.75, 0.75) {};
\node [style=fanout] (5) at (7.25, -0) {};
\node [style=rn] (6) at (8.75, 0.75) {};
\node [style=rn] (7) at (8.75, -0.75) {};
\node [style=h] (8) at (6, -0) {};
\node [style=map] (9) at (6, 1) {$1/\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (4);
\draw [style=simple] (0) to (1);
\draw [style=simple, in=0, out=159, looseness=1.00] (2) to (4);
\draw [style=simple, in=0, out=-148, looseness=1.00] (2) to (0);
\draw [style=simple] (2) to (8);
\draw [style=simple] (8) to (5);
\draw [style=simple, in=180, out=27, looseness=1.00] (5) to (6);
\draw [style=simple, in=-27, out=180, looseness=1.00] (7) to (5);
\end{pgfonlayer}
\end{tikzpicture}\\
&\mapsfrom
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1.5, -0.75) {};
\node [style=rn] (1) at (1.5, 0.75) {};
\node [style=rn] (2) at (5.5, 0.75) {};
\node [style=rn] (3) at (5.5, -0.75) {};
\node [style=xmul] (4) at (2.75, -0) {};
\node [style=zmul] (5) at (4.25, -0) {};
\node [style=zmul] (6) at (3, 1) {};
\node [style=xmul] (7) at (4, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [in=-31, out=180, looseness=1.00] (3) to (5);
\draw [in=180, out=31, looseness=1.00] (5) to (2);
\draw (5) to (4);
\draw [in=0, out=149, looseness=1.00] (4) to (1);
\draw [in=-149, out=0, looseness=1.00] (0) to (4);
\draw [bend right, looseness=1.25] (7) to (6);
\draw [bend right, looseness=1.25] (6) to (7);
\draw (7) to (6);
\end{pgfonlayer}
\end{tikzpicture} & \text{Lemma \ref{lem:cantthink} (ii)}
\end{align*}
\endgroup
\item[\ref{ZX.pi:L}]
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, 1) {};
\node [style=rn] (1) at (-2, 2) {};
\node [style=zmul] (2) at (-1, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [in=90, out=0, looseness=1] (1) to (2);
\draw [in=0, out=-90, looseness=1] (2) to (0);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, 1.75) {};
\node [style=rn] (1) at (-2, 1.25) {};
\node [style=fanin] (2) at (-1, 1.5) {};
\node [style=h] (3) at (-0.5, 1.5) {};
\node [style=zeroout] (4) at (0, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [bend right=15, looseness=1.00] (2) to (0);
\draw [bend right=15, looseness=1.00] (1) to (2);
\draw (4) to (3);
\draw (3) to (2);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-3, 1.25) {};
\node [style=rn] (1) at (-3, 1.75) {};
\node [style=fanin] (2) at (-1, 1.5) {};
\node [style=h] (3) at (-0.5, 1.5) {};
\node [style=zeroout] (4) at (0, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [in=3, out=120, looseness=1.00] (2) to (0);
\draw [in=-123, out=-3, looseness=1.00] (1) to (2);
\draw (4) to (3);
\draw (3) to (2);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-1.25, 1.25) {};
\node [style=rn] (1) at (-1.25, 1.75) {};
\node [style=zeroout] (2) at (0.25, 1.25) {};
\node [style=zeroout] (3) at (0.25, 1.75) {};
\node [style=dot] (4) at (-0.75, 1.25) {};
\node [style=oplus] (5) at (-0.75, 1.75) {};
\node [style=h] (6) at (-0.25, 1.25) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (6) to (2);
\draw (6) to (4);
\draw (4) to (0);
\draw (4) to (5);
\draw (5) to (3);
\draw (5) to (1);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroout] (0) at (0, 1.25) {};
\node [style=zeroout] (1) at (0, 1.75) {};
\node [style=rn] (2) at (-2, 1.75) {};
\node [style=rn] (3) at (-2, 1.25) {};
\node [style=dot] (4) at (-1, 1.75) {};
\node [style=oplus] (5) at (-1, 1.25) {};
\node [style=h] (6) at (-0.5, 1.75) {};
\node [style=h] (7) at (-1.5, 1.75) {};
\node [style=h] (8) at (-1.5, 1.25) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (4) to (5);
\draw (0) to (5);
\draw (5) to (8);
\draw (8) to (3);
\draw (2) to (7);
\draw (7) to (4);
\draw (4) to (6);
\draw (6) to (1);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.F}\\
&\mapsfrom
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, 1) {};
\node [style=rn] (1) at (-2, 2) {};
\node [style=xmul] (2) at (-1, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [in=90, out=0, looseness=1] (1) to (2);
\draw [in=0, out=-90, looseness=1] (2) to (0);
\end{pgfonlayer}
\end{tikzpicture}
\end{align*}
\item[\ref{ZX.pi:ZO}]
\begingroup
\allowdisplaybreaks
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (-0.5, -1) {$\pi$};
\node [style=none] (1) at (-1, -2) {};
\node [style=none] (2) at (0, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2.center) to (1.center);
\end{pgfonlayer}
\end{tikzpicture}
&\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (-3.5, -2) {};
\node [style=none] (1) at (3.5, -2) {};
\node [style=fanout] (2) at (-1.25, -1) {};
\node [style=fanin] (3) at (1.25, -1) {};
\node [style=h] (4) at (-2, -1) {};
\node [style=h] (5) at (-2.5, -1) {};
\node [style=h] (6) at (-3, -1) {};
\node [style=h] (7) at (-0.5, -0.5) {};
\node [style=h] (8) at (0, -0.5) {};
\node [style=h] (9) at (0.5, -0.5) {};
\node [style=h] (10) at (0.25, -1.5) {};
\node [style=h] (11) at (-0.25, -1.5) {};
\node [style=h] (12) at (2, -1) {};
\node [style=h] (13) at (2.5, -1) {};
\node [style=h] (14) at (3, -1) {};
\node [style=zeroin] (15) at (-3.5, -1) {};
\node [style=zeroout] (16) at (3.5, -1) {};
\node [style=map] (17) at (-1.5, 0.5) {$\sqrt{2}$};
\node [style=map] (19) at (1.5, 0.5) {$\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (16) to (14);
\draw (14) to (13);
\draw (13) to (12);
\draw (12) to (3);
\draw [in=0, out=135, looseness=1.00] (3) to (9);
\draw (9) to (8);
\draw (8) to (7);
\draw [in=45, out=180, looseness=1.00] (7) to (2);
\draw (2) to (4);
\draw (4) to (5);
\draw (5) to (6);
\draw (6) to (15);
\draw [in=180, out=-34, looseness=1.00] (2) to (11);
\draw (11) to (10);
\draw [in=-146, out=0, looseness=1.00] (10) to (3);
\draw (1.center) to (0.center);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (-2.5, -2) {};
\node [style=none] (1) at (2.5, -2) {};
\node [style=fanout] (2) at (-1, -1) {};
\node [style=fanin] (3) at (1, -1) {};
\node [style=h] (4) at (-1.75, -1) {};
\node [style=h] (5) at (0, -0.5) {};
\node [style=h] (6) at (1.75, -1) {};
\node [style=zeroin] (7) at (-2.5, -1) {};
\node [style=zeroout] (8) at (2.5, -1) {};
\node [style=map] (9) at (1.75, 0.5) {$\sqrt{2}$};
\node [style=map] (10) at (-1.75, 0.5) {$\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (6) to (3);
\draw (2) to (4);
\draw (1.center) to (0.center);
\draw (6) to (8);
\draw [in=0, out=153, looseness=1.00] (3) to (5);
\draw [in=27, out=180, looseness=1.00] (5) to (2);
\draw [bend right, looseness=1.25] (2) to (3);
\draw (4) to (7);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.I}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (-2.5, -2) {};
\node [style=none] (1) at (-1, -2) {};
\node [style=zeroin] (2) at (-2.5, -1.5) {};
\node [style=map] (3) at (-2.25, -0.75) {$\sqrt{2}$};
\node [style=map] (4) at (-1.25, -0.75) {$\sqrt{2}$};
\node [style=oneout] (5) at (-1, -1.5) {};
\node [style=map] (6) at (-1.75, 0.25) {$1/\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (0.center);
\draw (5) to (2);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (-2.5, -2) {};
\node [style=none] (1) at (-1, -2) {};
\node [style=zeroin] (2) at (-2.5, -1.5) {};
\node [style=map] (3) at (-1.75, -0.75) {$\sqrt{2}$};
\node [style=oneout] (4) at (-1, -1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (0.center);
\draw (4) to (2);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.S}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (-1, -2) {};
\node [style=zeroin] (1) at (-2.5, -1.5) {};
\node [style=map] (3) at (-2.25, -0.75) {$\sqrt{2}$};
\node [style=oneout] (5) at (-1, -1.5) {};
\node [style=none] (6) at (-2.5, -2) {};
\node [style=oneout] (7) at (-2, -2) {};
\node [style=onein] (8) at (-1.5, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (5) to (1);
\draw (0.center) to (8);
\draw (6.center) to (7);
\end{pgfonlayer}
\end{tikzpicture}& \ref{CNOT.9}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (-2.5, -1.5) {};
\node [style=map] (1) at (-1.5, -0.75) {$\sqrt{2}$};
\node [style=none] (2) at (-2.5, -2) {};
\node [style=oneout] (3) at (-2, -2) {};
\node [style=onein] (4) at (-1.5, -2) {};
\node [style=none] (5) at (-0.5, -2) {};
\node [style=oneout] (6) at (-0.5, -1.5) {};
\node [style=dot] (7) at (-1, -1.5) {};
\node [style=oplus] (8) at (-1, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2.center) to (3);
\draw (5.center) to (8);
\draw (8) to (4);
\draw (8) to (7);
\draw (7) to (6);
\draw (7) to (0);
\end{pgfonlayer}
\end{tikzpicture}& \ref{CNOT.7}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (-2.5, -1.5) {};
\node [style=map] (1) at (-1.5, -0.75) {$\sqrt{2}$};
\node [style=none] (2) at (-2.5, -2) {};
\node [style=none] (3) at (-0.5, -2) {};
\node [style=oneout] (4) at (-0.5, -1.5) {};
\node [style=zeroin] (5) at (-1, -2) {};
\node [style=zeroout] (6) at (-1.5, -2) {};
\node [style=oplus] (7) at (-2, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (6) to (7);
\draw (7) to (2.center);
\draw (0) to (4);
\draw (3.center) to (5);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (-4, -1.5) {};
\node [style=none] (1) at (-4, -2) {};
\node [style=none] (2) at (0, -2) {};
\node [style=oneout] (3) at (0, -1.5) {};
\node [style=zeroin] (4) at (-0.5, -2) {};
\node [style=zeroout] (5) at (-1, -2) {};
\node [style=map] (6) at (-3, -0.75) {$\sqrt{2}$};
\node [style=h] (7) at (-1.5, -2) {};
\node [style=h] (8) at (-3.5, -2) {};
\node [style=h] (9) at (-2.5, -2) {};
\node [style=oplus] (10) at (-3, -2.5) {};
\node [style=oplus] (11) at (-2, -2.5) {};
\node [style=dot] (12) at (-2, -2) {};
\node [style=dot] (13) at (-3, -2) {};
\node [style=zeroout] (14) at (-1.5, -2.5) {};
\node [style=zeroin] (15) at (-3.5, -2.5) {};
\node [style=map] (16) at (-1, -0.75) {$\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (3);
\draw (2.center) to (4);
\draw (5) to (7);
\draw (12) to (7);
\draw (12) to (9);
\draw (9) to (13);
\draw (13) to (8);
\draw (8) to (1.center);
\draw (15) to (10);
\draw (10) to (13);
\draw (10) to (11);
\draw (11) to (14);
\draw (11) to (12);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.L}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (-4, -1.5) {};
\node [style=none] (1) at (-4, -2) {};
\node [style=oneout] (2) at (4, -1.5) {};
\node [style=map] (3) at (-1, -0.75) {$\sqrt{2}$};
\node [style=h] (4) at (-3.5, -2) {};
\node [style=map] (5) at (1, -0.75) {$\sqrt{2}$};
\node [style=oplus] (6) at (-2, -2.5) {};
\node [style=oplus] (7) at (1, -2.5) {};
\node [style=zeroin] (8) at (-2.5, -2.5) {};
\node [style=dot] (9) at (1, -2) {};
\node [style=dot] (10) at (-2, -2) {};
\node [style=oneout] (11) at (-3, -2) {};
\node [style=onein] (12) at (-2.5, -2) {};
\node [style=h] (13) at (-0.5, -2) {};
\node [style=oneout] (14) at (-1.5, -2) {};
\node [style=onein] (15) at (-1, -2) {};
\node [style=oneout] (16) at (0, -2) {};
\node [style=onein] (17) at (0.5, -2) {};
\node [style=none] (18) at (4, -2) {};
\node [style=zeroout] (19) at (1.5, -2.5) {};
\node [style=zeroin] (20) at (3.5, -2) {};
\node [style=zeroout] (21) at (3, -2) {};
\node [style=h] (22) at (2.5, -2) {};
\node [style=oneout] (23) at (1.5, -2) {};
\node [style=onein] (24) at (2, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw (4) to (1.center);
\draw (8) to (6);
\draw (6) to (10);
\draw (6) to (7);
\draw (7) to (9);
\draw (11) to (4);
\draw (10) to (12);
\draw (14) to (10);
\draw (13) to (15);
\draw (16) to (13);
\draw (17) to (9);
\draw (18.center) to (20);
\draw (21) to (22);
\draw (7) to (19);
\draw (22) to (24);
\draw (23) to (9);
\end{pgfonlayer}
\end{tikzpicture}& \ref{CNOT.9}\times 4\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (-4, -1.5) {};
\node [style=none] (1) at (-4, -2) {};
\node [style=oneout] (2) at (1, -1.5) {};
\node [style=map] (3) at (-2.5, -0.75) {$\sqrt{2}$};
\node [style=h] (4) at (-3.5, -2) {};
\node [style=map] (5) at (-0.5, -0.75) {$\sqrt{2}$};
\node [style=oneout] (6) at (-3, -2) {};
\node [style=h] (7) at (-2, -2) {};
\node [style=onein] (8) at (-2.5, -2) {};
\node [style=oneout] (9) at (-1.5, -2) {};
\node [style=none] (10) at (1, -2) {};
\node [style=zeroin] (11) at (0.5, -2) {};
\node [style=zeroout] (12) at (0, -2) {};
\node [style=h] (13) at (-0.5, -2) {};
\node [style=onein] (14) at (-1, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw (4) to (1.center);
\draw (6) to (4);
\draw (7) to (8);
\draw (9) to (7);
\draw (10.center) to (11);
\draw (12) to (13);
\draw (13) to (14);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=map] (0) at (-2.5, -0.75) {$\sqrt{2}$};
\node [style=map] (1) at (-0.5, -0.75) {$\sqrt{2}$};
\node [style=oneout] (2) at (-3.25, -2) {};
\node [style=onein] (3) at (-2.75, -2) {};
\node [style=oneout] (4) at (-0.75, -2) {};
\node [style=none] (5) at (2.25, -2) {};
\node [style=zeroin] (6) at (1.75, -2) {};
\node [style=zeroout] (7) at (1.25, -2) {};
\node [style=h] (8) at (0.75, -2) {};
\node [style=onein] (9) at (-0.25, -2) {};
\node [style=oplus] (10) at (0.25, -2) {};
\node [style=dot] (11) at (0.25, -1.5) {};
\node [style=dot] (12) at (-1.25, -1.5) {};
\node [style=dot] (13) at (-2.25, -1.5) {};
\node [style=dot] (14) at (-3.75, -1.5) {};
\node [style=oneout] (15) at (2.25, -1.5) {};
\node [style=h] (16) at (-1.75, -2) {};
\node [style=oplus] (17) at (-1.25, -2) {};
\node [style=oplus] (18) at (-2.25, -2) {};
\node [style=h] (19) at (-4.25, -2) {};
\node [style=zeroin] (20) at (-4.75, -1.5) {};
\node [style=none] (21) at (-4.75, -2) {};
\node [style=oplus] (22) at (-3.75, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (5.center) to (6);
\draw (7) to (8);
\draw (15) to (11);
\draw (11) to (12);
\draw (12) to (13);
\draw (13) to (14);
\draw (8) to (10);
\draw (10) to (9);
\draw (16) to (18);
\draw (18) to (3);
\draw (18) to (13);
\draw (17) to (12);
\draw (4) to (17);
\draw (17) to (16);
\draw (10) to (11);
\draw (19) to (21.center);
\draw (22) to (2);
\draw (22) to (19);
\draw (22) to (14);
\draw (14) to (20);
\end{pgfonlayer}
\end{tikzpicture}& \ref{CNOT.7}\times 4\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=map] (0) at (-2.25, -0.75) {$\sqrt{2}$};
\node [style=map] (1) at (-0.75, -0.75) {$\sqrt{2}$};
\node [style=none] (2) at (0.75, -2) {};
\node [style=zeroin] (3) at (0.25, -2) {};
\node [style=oneout] (4) at (0.75, -1.5) {};
\node [style=h] (5) at (-3.75, -2) {};
\node [style=zeroin] (6) at (-4.25, -1.5) {};
\node [style=none] (7) at (-4.25, -2) {};
\node [style=map] (8) at (-2.25, -2) {$1/\sqrt{2}$};
\node [style=map] (9) at (-0.75, -2) {$1/\sqrt{2}$};
\node [style=zeroout] (10) at (-3.25, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2.center) to (3);
\draw (5) to (7.center);
\draw (4) to (6);
\draw (10) to (5);
\end{pgfonlayer}
\end{tikzpicture}& \ref{CNOT.3}\times 4\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (-2.25, -2) {};
\node [style=zeroin] (1) at (-2.75, -2) {};
\node [style=oneout] (2) at (-2.25, -1.5) {};
\node [style=h] (3) at (-3.75, -2) {};
\node [style=zeroin] (4) at (-4.25, -1.5) {};
\node [style=none] (5) at (-4.25, -2) {};
\node [style=zeroout] (6) at (-3.25, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0.center) to (1);
\draw (3) to (5.center);
\draw (2) to (4);
\draw (6) to (3);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.S}\times 2\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (1.5, -1.75) {};
\node [style=map] (1) at (0.25, -2.5) {$\sqrt{2}$};
\node [style=none] (2) at (-1, -1.75) {};
\node [style=h] (3) at (-0.5, -1.75) {};
\node [style=zeroout] (4) at (0, -1.75) {};
\node [style=zeroin] (5) at (1, -1.75) {};
\node [style=map] (6) at (-0.75, -2.5) {$\sqrt{2}$};
\node [style=map] (7) at (1.25, -2.5) {$\sqrt{2}$};
\node [style=oneout] (8) at (1, -1) {};
\node [style=zeroin] (9) at (-0.5, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0.center) to (5);
\draw (4) to (3);
\draw (3) to (2.center);
\draw (8) to (9);
\end{pgfonlayer}
\end{tikzpicture}& \text{As before}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (1.5, -1.75) {};
\node [style=map] (1) at (-0.5, -0.25) {$\sqrt{2}$};
\node [style=map] (2) at (1, -0.25) {$\sqrt{2}$};
\node [style=none] (3) at (-1, -1.75) {};
\node [style=h] (4) at (-0.5, -1.75) {};
\node [style=zeroout] (5) at (0, -1.75) {};
\node [style=zeroin] (6) at (1, -1.75) {};
\node [style=map] (7) at (-0.5, -2.5) {$\sqrt{2}$};
\node [style=map] (8) at (1.25, -2.5) {$\sqrt{2}$};
\node [style=map] (9) at (0.25, 0.75) {$1/\sqrt{2}$};
\node [style=oneout] (10) at (1, -1) {};
\node [style=zeroin] (11) at (-0.5, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0.center) to (6);
\draw (5) to (4);
\draw (4) to (3.center);
\draw (10) to (11);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.S}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (3.5, -2) {};
\node [style=fanout] (1) at (-1.25, -1) {};
\node [style=fanin] (2) at (1.25, -1) {};
\node [style=h] (3) at (-2, -1) {};
\node [style=h] (4) at (-2.5, -1) {};
\node [style=h] (5) at (-3, -1) {};
\node [style=h] (6) at (-0.5, -0.5) {};
\node [style=h] (7) at (0, -0.5) {};
\node [style=h] (8) at (0.5, -0.5) {};
\node [style=h] (9) at (0.25, -1.5) {};
\node [style=h] (10) at (-0.25, -1.5) {};
\node [style=h] (11) at (2, -1) {};
\node [style=h] (12) at (2.5, -1) {};
\node [style=h] (13) at (3, -1) {};
\node [style=zeroin] (14) at (-3.5, -1) {};
\node [style=zeroout] (15) at (3.5, -1) {};
\node [style=map] (16) at (-1.5, 0.5) {$\sqrt{2}$};
\node [style=map] (17) at (1.5, 0.5) {$\sqrt{2}$};
\node [style=none] (18) at (-3.5, -2) {};
\node [style=h] (19) at (-3, -2) {};
\node [style=zeroout] (20) at (-2.5, -2) {};
\node [style=zeroin] (21) at (3, -2) {};
\node [style=map] (22) at (-1.5, -2.5) {$\sqrt{2}$};
\node [style=map] (23) at (1.5, -2.5) {$\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (15) to (13);
\draw (13) to (12);
\draw (12) to (11);
\draw (11) to (2);
\draw [in=0, out=135, looseness=1.00] (2) to (8);
\draw (8) to (7);
\draw (7) to (6);
\draw [in=45, out=180, looseness=1.00] (6) to (1);
\draw (1) to (3);
\draw (3) to (4);
\draw (4) to (5);
\draw (5) to (14);
\draw [in=180, out=-34, looseness=1.00] (1) to (10);
\draw (10) to (9);
\draw [in=-146, out=0, looseness=1.00] (9) to (2);
\draw (0.center) to (21);
\draw (20) to (19);
\draw (19) to (18.center);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.L}\\
&\mapsfrom
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (0.5, -1) {$\pi$};
\node [style=none] (1) at (-1, -2) {};
\node [style=none] (2) at (2, -2) {};
\node [style=zmul] (3) at (0, -2) {};
\node [style=xmul] (4) at (1, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2.center) to (4);
\draw (3) to (1.center);
\end{pgfonlayer}
\end{tikzpicture}
\end{align*}
\endgroup
\item[Classical structure:] Remark that rules \ref{cnoth:H.U} and \ref{cnoth:H.S} complete the semi-Frobenius structure to the appropriate classical structure.
\end{description}
\end{proof}
\subsection{Proof of Proposition \ref{prop:fginv}}
\label{prop:fginv:proof}
\begin{customproposition}{\ref{prop:fginv}}\
$\mathsf{CNOT}+H\xrightarrow{F} \mathsf{ZX}_\pi$ and $\mathsf{ZX}_\pi\xrightarrow{G} \mathsf{CNOT}+H$ are inverses.
\end{customproposition}
\begin{proof}\
\begin{enumerate}
\item First, we show that $G;F=1$ .
We only prove the cases for the generators $\mathsf{cnot}$ and $|1\>$ as the claim follows trivially for the Hadamard gate and by symmetry for $\<1|$:
\begin{description}
\item[For $|1\>$:]
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, -0) {};
\node [style=none] (1) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
&\xmapsto{F}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (0, 1) {$\pi$};
\node [style=xmul] (1) at (1, 2) {};
\node [style=zmul] (2) at (0, 2) {};
\node [style=none] (3) at (1, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (3.center);
\draw [bend left=45, looseness=1.25] (1) to (2);
\draw [bend left=45, looseness=1.25] (2) to (1);
\draw (1) to (2);
\end{pgfonlayer}
\end{tikzpicture}\\
&\xmapsto{G}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, 1) {};
\node [style=none] (1) at (1, 1) {};
\node [style=map] (2) at (-1, 2) {$\sqrt{2}$};
\node [style=map] (3) at (1, 2) {$1/\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1.center);
\end{pgfonlayer}
\end{tikzpicture}& \text{Lemma \ref{lem:cantthink} (ii)}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, 1) {};
\node [style=none] (1) at (1, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1.center);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.S}
\end{align*}
\item[For $\mathsf{cnot}$:]
\begingroup
\allowdisplaybreaks
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (0, -0) {};
\node [style=oplus] (1) at (0, -1) {};
\node [style=rn] (2) at (1, -0) {};
\node [style=rn] (3) at (1, -1) {};
\node [style=rn] (4) at (-1, -0) {};
\node [style=rn] (5) at (-1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (0);
\draw [style=simple] (0) to (4);
\draw [style=simple] (5) to (1);
\draw [style=simple] (1) to (3);
\draw [style=simple] (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
&\xmapsto{F}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2, 0) {};
\node [style=rn] (1) at (2, -1) {};
\node [style=rn] (2) at (-1, 0) {};
\node [style=rn] (3) at (-1, -1) {};
\node [style=zmul] (4) at (0, -0.25) {};
\node [style=xmul] (5) at (1, -0.75) {};
\node [style=zmul] (6) at (1, 1) {};
\node [style=xmul] (7) at (0, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=180, out=-30, looseness=1.00] (5) to (1);
\draw [style=simple, in=-45, out=150, looseness=1.00] (5) to (4);
\draw [style=simple, in=0, out=180, looseness=1.00] (4) to (2);
\draw [style=simple, in=-173, out=0, looseness=1.00] (3) to (5);
\draw [style=simple, in=180, out=15, looseness=1.00] (4) to (0);
\draw [style=simple] (7) to (6);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3.75, 0) {};
\node [style=rn] (1) at (3.75, -1.25) {};
\node [style=rn] (2) at (-1, 0) {};
\node [style=rn] (3) at (-1, -1.5) {};
\node [style=zmul] (4) at (0, -0.25) {};
\node [style=zmul] (5) at (1.75, -1.25) {};
\node [style=h] (6) at (2.75, -1.25) {};
\node [style=h] (7) at (0.25, -1.5) {};
\node [style=h] (8) at (0.75, -0.75) {};
\node [style=xmul] (9) at (1, 1) {};
\node [style=zmul] (10) at (2, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=0, out=180, looseness=1.00] (4) to (2);
\draw [style=simple, in=180, out=15, looseness=1.00] (4) to (0);
\draw (4) to (8);
\draw [in=150, out=-27, looseness=1.00] (8) to (5);
\draw (5) to (6);
\draw (6) to (1);
\draw [in=0, out=-171, looseness=1.00] (5) to (7);
\draw (7) to (3);
\draw (10) to (9);
\end{pgfonlayer}
\end{tikzpicture}\\
&
\xmapsto{G}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2, 0) {};
\node [style=rn] (1) at (2, -1) {};
\node [style=rn] (2) at (-3, -2) {};
\node [style=rn] (3) at (-3, -1) {};
\node [style=zeroin] (4) at (-3, 0) {};
\node [style=zeroout] (5) at (1, -2) {};
\node [style=dot] (6) at (0, -1) {};
\node [style=oplus] (7) at (0, -2) {};
\node [style=oplus] (8) at (-2, 0) {};
\node [style=dot] (9) at (-2, -1) {};
\node [style=h] (10) at (-1, -2) {};
\node [style=h] (11) at (-1, -1) {};
\node [style=h] (12) at (1, -1) {};
\node [style=map] (13) at (-0.5, 1) {$\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (12);
\draw (12) to (6);
\draw (6) to (11);
\draw (11) to (9);
\draw (9) to (3);
\draw (2) to (10);
\draw (10) to (7);
\draw (7) to (5);
\draw (7) to (6);
\draw (8) to (9);
\draw (4) to (8);
\draw (8) to (0);
\end{pgfonlayer}
\end{tikzpicture}& \text{Lemma \ref{lem:cantthink} (i)}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1, 0) {};
\node [style=rn] (1) at (1, -1) {};
\node [style=rn] (2) at (-3, -2) {};
\node [style=rn] (3) at (-3, -1) {};
\node [style=zeroin] (4) at (-3, 0) {};
\node [style=dot] (5) at (-1, -2) {};
\node [style=oplus] (6) at (-1, -1) {};
\node [style=oplus] (7) at (-2, 0) {};
\node [style=dot] (8) at (-2, -1) {};
\node [style=h] (9) at (0, -2) {};
\node [style=zeroout] (10) at (1, -2) {};
\node [style=map] (11) at (-1, 1) {$\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (8) to (3);
\draw (6) to (5);
\draw (7) to (8);
\draw (4) to (7);
\draw (7) to (0);
\draw [style=simple] (8) to (6);
\draw [style=simple] (6) to (1);
\draw [style=simple] (10) to (9);
\draw [style=simple] (9) to (5);
\draw [style=simple] (5) to (2);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.F}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1, 0) {};
\node [style=rn] (1) at (1, -1) {};
\node [style=rn] (2) at (-3, -2) {};
\node [style=rn] (3) at (-3, -1) {};
\node [style=zeroin] (4) at (-3, 0) {};
\node [style=h] (5) at (0, -2) {};
\node [style=zeroout] (6) at (1, -2) {};
\node [style=oplus] (7) at (-2, -1) {};
\node [style=dot] (8) at (-2, -2) {};
\node [style=oplus] (9) at (0, 0) {};
\node [style=dot] (10) at (0, -1) {};
\node [style=oplus] (11) at (-1, 0) {};
\node [style=dot] (12) at (-1, -2) {};
\node [style=map] (13) at (-1, 1) {$\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (6) to (5);
\draw (7) to (8);
\draw (9) to (10);
\draw [style=simple] (10) to (7);
\draw (11) to (12);
\draw [style=simple] (2) to (12);
\draw [style=simple] (12) to (8);
\draw [style=simple] (8) to (5);
\draw [style=simple] (1) to (10);
\draw [style=simple] (0) to (9);
\draw [style=simple] (9) to (11);
\draw [style=simple] (11) to (4);
\draw [style=simple] (3) to (7);
\end{pgfonlayer}
\end{tikzpicture}& \ref{CNOT.9}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=oplus] (0) at (2, 0) {};
\node [style=rn] (1) at (3, 0) {};
\node [style=rn] (2) at (3, -1) {};
\node [style=dot] (3) at (2, -1) {};
\node [style=zeroout] (4) at (3, -2) {};
\node [style=zeroin] (5) at (-3, 0) {};
\node [style=oplus] (6) at (-2, -1) {};
\node [style=dot] (7) at (-2, -2) {};
\node [style=rn] (8) at (-3, -1) {};
\node [style=rn] (9) at (-3, -2) {};
\node [style=h] (10) at (1, 0) {};
\node [style=h] (11) at (-1, 0) {};
\node [style=h] (12) at (-1, -2) {};
\node [style=oplus] (13) at (0, -2) {};
\node [style=dot] (14) at (0, 0) {};
\node [style=map] (15) at (0, 1) {$\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (3);
\draw [style=simple] (2) to (3);
\draw [style=simple] (1) to (0);
\draw (6) to (7);
\draw [style=simple] (8) to (6);
\draw [style=simple] (12) to (7);
\draw [style=simple] (7) to (9);
\draw [style=simple] (0) to (10);
\draw [style=simple] (11) to (5);
\draw [style=simple] (6) to (3);
\draw [style=simple] (10) to (14);
\draw [style=simple] (14) to (11);
\draw [style=simple] (12) to (13);
\draw [style=simple] (13) to (4);
\draw [style=simple] (13) to (14);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.F}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=oplus] (0) at (2, -0) {};
\node [style=rn] (1) at (3, -0) {};
\node [style=rn] (2) at (3, -1) {};
\node [style=dot] (3) at (2, -1) {};
\node [style=oplus] (4) at (-2, -1) {};
\node [style=dot] (5) at (-2, -2) {};
\node [style=rn] (6) at (-3, -1) {};
\node [style=rn] (7) at (-3, -2) {};
\node [style=h] (8) at (1, -0) {};
\node [style=h] (9) at (-1, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (3);
\draw [style=simple] (2) to (3);
\draw [style=simple] (1) to (0);
\draw (4) to (5);
\draw [style=simple] (6) to (4);
\draw [style=simple] (9) to (5);
\draw [style=simple] (5) to (7);
\draw [style=simple] (0) to (8);
\draw [style=simple] (4) to (3);
\draw [style=simple, in=0, out=180, looseness=0.75] (8) to (9);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.U}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=oplus] (0) at (-1, -2) {};
\node [style=rn] (1) at (1, -1) {};
\node [style=rn] (2) at (1, -2) {};
\node [style=dot] (3) at (-1, -1) {};
\node [style=oplus] (4) at (-2, -1) {};
\node [style=dot] (5) at (-2, -2) {};
\node [style=rn] (6) at (-3, -1) {};
\node [style=rn] (7) at (-3, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (3);
\draw [style=simple, in=0, out=180, looseness=1.00] (2) to (3);
\draw [style=simple, in=0, out=180, looseness=1.00] (1) to (0);
\draw (4) to (5);
\draw [style=simple] (6) to (4);
\draw [style=simple] (5) to (7);
\draw [style=simple] (4) to (3);
\draw [style=simple] (5) to (0);
\end{pgfonlayer}
\end{tikzpicture}& \ref{cnoth:H.I}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=oplus] (0) at (-1, -2) {};
\node [style=dot] (1) at (-1, -1) {};
\node [style=oplus] (2) at (-2, -1) {};
\node [style=dot] (3) at (-2, -2) {};
\node [style=rn] (4) at (-3, -1) {};
\node [style=rn] (5) at (-3, -2) {};
\node [style=rn] (6) at (3, -2) {};
\node [style=rn] (7) at (3, -1) {};
\node [style=oplus] (8) at (0, -1) {};
\node [style=dot] (9) at (0, -2) {};
\node [style=oplus] (10) at (1, -1) {};
\node [style=dot] (11) at (1, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\draw (2) to (3);
\draw [style=simple] (4) to (2);
\draw [style=simple] (3) to (5);
\draw [style=simple] (2) to (1);
\draw [style=simple] (3) to (0);
\draw (8) to (9);
\draw (10) to (11);
\draw [style=simple, in=0, out=180, looseness=1.00] (7) to (11);
\draw [style=simple, in=0, out=180, looseness=1.00] (6) to (10);
\draw [style=simple] (10) to (8);
\draw [style=simple] (8) to (1);
\draw [style=simple] (0) to (9);
\draw [style=simple] (9) to (11);
\end{pgfonlayer}
\end{tikzpicture}& \ref{CNOT.2}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-3, -1) {};
\node [style=rn] (1) at (-3, -2) {};
\node [style=rn] (2) at (1, -2) {};
\node [style=rn] (3) at (1, -1) {};
\node [style=oplus] (4) at (-1, -1) {};
\node [style=dot] (5) at (-1, -2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (4) to (5);
\draw [style=simple, in=0, out=180, looseness=1.00] (3) to (5);
\draw [style=simple, in=0, out=180, looseness=1.00] (2) to (4);
\draw [style=simple, in=0, out=180, looseness=1.00] (4) to (1);
\draw [style=simple, in=180, out=0, looseness=1.00] (0) to (5);
\end{pgfonlayer}
\end{tikzpicture}& \ref{CNOT.1}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, -1) {};
\node [style=rn] (1) at (-2, -2) {};
\node [style=rn] (2) at (0, -2) {};
\node [style=rn] (3) at (0, -1) {};
\node [style=oplus] (4) at (-1, -2) {};
\node [style=dot] (5) at (-1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (4) to (5);
\draw [style=simple, in=0, out=180, looseness=1.00] (3) to (5);
\draw [style=simple, in=0, out=180, looseness=1.00] (2) to (4);
\draw [style=simple, in=0, out=180, looseness=1.00] (4) to (1);
\draw [style=simple, in=180, out=0, looseness=1.00] (0) to (5);
\end{pgfonlayer}
\end{tikzpicture}
\end{align*}
\endgroup
\end{description}
\item It is trivial to observe, on the other hand, that $G;F=1$.
\end{enumerate}
\end{proof}
\section{The identities of $\mathsf{TOF}$}
\label{sec:tof}
$\mathsf{TOF}$ is the PROP generated by the ancillary bits $|1\>$ and $\<1|$ as well as the Toffoli gate:
\[
|1\> :=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (3, -0) {};
\node [style=rn] (1) at (5, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{1cm}
\<1| :=
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (3, -0) {};
\node [style=rn] (1) at (5, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{1cm}
\mathsf{tof} :=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3, -0) {};
\node [style=rn] (1) at (3, 1) {};
\node [style=rn] (2) at (5, 1) {};
\node [style=rn] (3) at (5, -0) {};
\node [style=oplus] (4) at (4, -0) {};
\node [style=dot] (5) at (4, 1) {};
\node [style=dot] (6) at (4, 2) {};
\node [style=rn] (7) at (3, 2) {};
\node [style=rn] (8) at (5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (4);
\draw (4) to (0);
\draw (5) to (4);
\draw (5) to (2);
\draw (5) to (1);
\draw (6) to (8);
\draw (6) to (7);
\draw [style=simple] (6) to (5);
\end{pgfonlayer}
\end{tikzpicture}
\]
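For orientation, we recall the standard behaviour of these generators on computational basis states (a well-known fact, recorded here only as a reading aid and not as part of the presentation): $|1\>$ prepares, and $\<1|$ postselects on, the basis state $1$, while the Toffoli gate negates its target precisely when both controls are set. Writing the control wires first,
\[
\mathsf{tof} : |a\>\otimes|b\>\otimes|c\> \mapsto |a\>\otimes|b\>\otimes|c\oplus ab\>, \qquad a,b,c\in\{0,1\}.
\]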
The controlled-not gate arises as a derived generator:
$$
\mathsf{cnot} =
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3, -0) {};
\node [style=rn] (1) at (3, 1) {};
\node [style=rn] (2) at (5, 1) {};
\node [style=rn] (3) at (5, -0) {};
\node [style=oplus] (4) at (4, -0) {};
\node [style=dot] (5) at (4, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (4);
\draw (4) to (0);
\draw (5) to (4);
\draw (5) to (2);
\draw (5) to (1);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3, -0) {};
\node [style=rn] (1) at (3, 1) {};
\node [style=rn] (2) at (5, 1) {};
\node [style=rn] (3) at (5, -0) {};
\node [style=oplus] (4) at (4, -0) {};
\node [style=dot] (5) at (4, 1) {};
\node [style=dot] (6) at (4, 2) {};
\node [style=onein] (7) at (3, 2) {};
\node [style=oneout] (8) at (5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (4);
\draw (4) to (0);
\draw (5) to (4);
\draw (5) to (2);
\draw (5) to (1);
\draw (6) to (8);
\draw (6) to (7);
\draw [style=simple] (6) to (5);
\end{pgfonlayer}
\end{tikzpicture}
$$
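As a hedged sketch of why this works (the authoritative definitions are the diagrammatic ones above and those of Section \ref{sec:cnot}): the control wire of $\mathsf{cnot}$ passes through unchanged, so preparing it in $|1\>$ and postselecting it on $\<1|$ succeeds deterministically and leaves only the negation on the target; in the same spirit one may take
\[
\mathsf{not} := (|1\> \otimes 1)\,;\,\mathsf{cnot}\,;\,(\<1| \otimes 1),
\qquad
|0\> := |1\>\,;\,\mathsf{not}.
\]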
The not gate and the $|0\>$ ancillary bits are derived just as in Section \ref{sec:cnot}. These generators must satisfy the identities given in the following figure:
\begin{figure}[H]
\label{fig:tof}
\noindent
\scalebox{1.0}{%
\vbox{%
\begin{mdframed}
\begin{multicols}{2}
\begin{enumerate}[label={\bf [TOF.\arabic*]}, ref={\bf [TOF.\arabic*]}, wide = 0pt, leftmargin = 2em]
\item
\label{TOF.1}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 0.5000002) {};
\node [style=oplus] (2) at (0.5000002, -0) {};
\node [style=dot] (3) at (0.5000002, 0.5000002) {};
\node [style=dot] (4) at (0.5000002, 1) {};
\node [style=onein] (5) at (0, 1) {};
\node [style=nothing] (6) at (1, 1) {};
\node [style=nothing] (7) at (1, 0.5000002) {};
\node [style=nothing] (8) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (5) to (4);
\draw (4) to (6);
\draw (7) to (3);
\draw (1) to (3);
\draw (0) to (2);
\draw (2) to (8);
\draw (2) to (3);
\draw (3) to (4);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 0) {};
\node [style=nothing] (1) at (0, 0.5) {};
\node [style=oplus] (2) at (0.5, 0) {};
\node [style=dot] (3) at (0.5, 0.5) {};
\node [style=onein] (4) at (0.5, 1) {};
\node [style=nothing] (5) at (1, 1) {};
\node [style=nothing] (6) at (1, 0.5) {};
\node [style=nothing] (7) at (1, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (3);
\draw (0) to (2);
\draw (2) to (3);
\draw (6) to (3);
\draw (4) to (5);
\draw (2) to (7);
\end{pgfonlayer}
\end{tikzpicture}
\text{, }
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 0.5000002) {};
\node [style=oplus] (2) at (0.5000002, -0) {};
\node [style=dot] (3) at (0.5000002, 0.5000002) {};
\node [style=dot] (4) at (0.5000002, 1) {};
\node [style=onein] (5) at (0, 1) {};
\node [style=nothing] (6) at (1, 1) {};
\node [style=nothing] (7) at (1, 0.5000002) {};
\node [style=nothing] (8) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (5) to (4);
\draw (4) to (6);
\draw (7) to (3);
\draw (1) to (3);
\draw (0) to (2);
\draw (2) to (8);
\draw (2) to (3);
\draw (3) to (4);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 0) {};
\node [style=nothing] (1) at (0, 0.5) {};
\node [style=oplus] (2) at (0.5, 0) {};
\node [style=dot] (3) at (0.5, 0.5) {};
\node [style=onein] (4) at (0.5, 1) {};
\node [style=nothing] (5) at (1, 1) {};
\node [style=nothing] (6) at (1, 0.5) {};
\node [style=nothing] (7) at (1, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (3);
\draw (0) to (2);
\draw (2) to (3);
\draw (6) to (3);
\draw (4) to (5);
\draw (2) to (7);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.2}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0.25, 1.25) {};
\node [style=nothing] (1) at (0.25, 0.75) {};
\node [style=nothing] (2) at (1.75, 1.75) {};
\node [style=nothing] (3) at (1.75, 1.25) {};
\node [style=nothing] (4) at (1.75, 0.75) {};
\node [style=dot] (5) at (1, 1.75) {};
\node [style=dot] (6) at (1, 1.25) {};
\node [style=oplus] (7) at (1, 0.7500001) {};
\node [style=zeroin] (8) at (0.25, 1.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (5) to (2);
\draw (3) to (6);
\draw (6) to (0);
\draw (1) to (7);
\draw (7) to (4);
\draw (7) to (6);
\draw (6) to (5);
\draw (8) to (5);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1.25) {};
\node [style=nothing] (1) at (0, 0.7500001) {};
\node [style=nothing] (2) at (0.75, 1.75) {};
\node [style=nothing] (3) at (0.75, 1.25) {};
\node [style=nothing] (4) at (0.75, 0.75) {};
\node [style=zeroin] (5) at (0, 1.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (5) to (2);
\draw (0) to (3);
\draw (1) to (4);
\end{pgfonlayer}
\end{tikzpicture}
\text{, }
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0.25, 1.25) {};
\node [style=nothing] (1) at (0.25, 0.75) {};
\node [style=nothing] (2) at (1.75, 1.75) {};
\node [style=nothing] (3) at (1.75, 1.25) {};
\node [style=nothing] (4) at (1.75, 0.75) {};
\node [style=dot] (5) at (1, 1.75) {};
\node [style=dot] (6) at (1, 1.25) {};
\node [style=oplus] (7) at (1, 0.7500001) {};
\node [style=zeroin] (8) at (0.25, 1.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (5) to (2);
\draw (3) to (6);
\draw (6) to (0);
\draw (1) to (7);
\draw (7) to (4);
\draw (7) to (6);
\draw (6) to (5);
\draw (8) to (5);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1.25) {};
\node [style=nothing] (1) at (0, 0.7500001) {};
\node [style=nothing] (2) at (0.75, 1.75) {};
\node [style=nothing] (3) at (0.75, 1.25) {};
\node [style=nothing] (4) at (0.75, 0.75) {};
\node [style=zeroin] (5) at (0, 1.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (5) to (2);
\draw (0) to (3);
\draw (1) to (4);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.3}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 0.5000001) {};
\node [style=nothing] (1) at (0, -0) {};
\node [style=nothing] (2) at (0, 1) {};
\node [style=nothing] (3) at (0, 1.5) {};
\node [style=nothing] (4) at (0, 2) {};
\node [style=dot] (5) at (0.5000002, 1.5) {};
\node [style=oplus] (6) at (0.5000002, 1) {};
\node [style=oplus] (7) at (1, 1) {};
\node [style=dot] (8) at (1, 0.5000002) {};
\node [style=dot] (9) at (0.5000002, 2) {};
\node [style=dot] (10) at (1, -0) {};
\node [style=nothing] (11) at (1.5, 0.5000001) {};
\node [style=nothing] (12) at (1.5, 1.5) {};
\node [style=nothing] (13) at (1.5, 2) {};
\node [style=nothing] (14) at (1.5, 0) {};
\node [style=nothing] (15) at (1.5, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (4) to (9);
\draw (9) to (13);
\draw (3) to (5);
\draw (5) to (12);
\draw (2) to (6);
\draw (6) to (7);
\draw (7) to (15);
\draw (0) to (8);
\draw (8) to (11);
\draw (1) to (10);
\draw (10) to (14);
\draw (10) to (8);
\draw (8) to (7);
\draw (6) to (5);
\draw (5) to (9);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 0.5000001) {};
\node [style=nothing] (1) at (0, -0) {};
\node [style=nothing] (2) at (0, 1) {};
\node [style=nothing] (3) at (0, 1.5) {};
\node [style=nothing] (4) at (0, 2) {};
\node [style=dot] (5) at (1, 1.5) {};
\node [style=dot] (6) at (0.5000002, 0.5000002) {};
\node [style=dot] (7) at (1, 2) {};
\node [style=dot] (8) at (0.5000002, -0) {};
\node [style=nothing] (9) at (1.5, 0.5000001) {};
\node [style=nothing] (10) at (1.5, 1.5) {};
\node [style=nothing] (11) at (1.5, 2) {};
\node [style=nothing] (12) at (1.5, 0) {};
\node [style=nothing] (13) at (1.5, 1) {};
\node [style=oplus] (14) at (1, 1) {};
\node [style=oplus] (15) at (0.5000002, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (4) to (7);
\draw (7) to (11);
\draw (3) to (5);
\draw (5) to (10);
\draw (0) to (6);
\draw (6) to (9);
\draw (1) to (8);
\draw (8) to (12);
\draw (8) to (6);
\draw (5) to (7);
\draw (2) to (15);
\draw (15) to (14);
\draw (14) to (13);
\draw (14) to (5);
\draw (6) to (15);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.4}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 0.5000001) {};
\node [style=nothing] (1) at (0, -0) {};
\node [style=nothing] (2) at (0, 1) {};
\node [style=nothing] (3) at (0, 1.5) {};
\node [style=nothing] (4) at (0, 2) {};
\node [style=dot] (5) at (0.5000002, 1.5) {};
\node [style=dot] (6) at (0.5000002, 1) {};
\node [style=dot] (7) at (1, 1) {};
\node [style=dot] (8) at (1, 0.5000002) {};
\node [style=oplus] (9) at (0.5000002, 2) {};
\node [style=oplus] (10) at (1, -0) {};
\node [style=nothing] (11) at (1.5, 0.5000001) {};
\node [style=nothing] (12) at (1.5, 1.5) {};
\node [style=nothing] (13) at (1.5, 2) {};
\node [style=nothing] (14) at (1.5, 0) {};
\node [style=nothing] (15) at (1.5, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (4) to (9);
\draw (9) to (13);
\draw (3) to (5);
\draw (5) to (12);
\draw (2) to (6);
\draw (6) to (7);
\draw (7) to (15);
\draw (0) to (8);
\draw (8) to (11);
\draw (1) to (10);
\draw (10) to (14);
\draw (10) to (8);
\draw (8) to (7);
\draw (6) to (5);
\draw (5) to (9);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 0.5000001) {};
\node [style=nothing] (1) at (0, -0) {};
\node [style=nothing] (2) at (0, 1) {};
\node [style=nothing] (3) at (0, 1.5) {};
\node [style=nothing] (4) at (0, 2) {};
\node [style=dot] (5) at (1, 1.5) {};
\node [style=dot] (6) at (0.5000002, 0.5000002) {};
\node [style=oplus] (7) at (1, 2) {};
\node [style=oplus] (8) at (0.5000002, -0) {};
\node [style=nothing] (9) at (1.5, 0.5000001) {};
\node [style=nothing] (10) at (1.5, 1.5) {};
\node [style=nothing] (11) at (1.5, 2) {};
\node [style=nothing] (12) at (1.5, 0) {};
\node [style=nothing] (13) at (1.5, 1) {};
\node [style=dot] (14) at (1, 1) {};
\node [style=dot] (15) at (0.5000002, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (4) to (7);
\draw (7) to (11);
\draw (3) to (5);
\draw (5) to (10);
\draw (0) to (6);
\draw (6) to (9);
\draw (1) to (8);
\draw (8) to (12);
\draw (8) to (6);
\draw (5) to (7);
\draw (2) to (15);
\draw (15) to (14);
\draw (14) to (13);
\draw (14) to (5);
\draw (6) to (15);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.5}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1) {};
\node [style=nothing] (1) at (0, 0.5000002) {};
\node [style=nothing] (2) at (0, 1.5) {};
\node [style=nothing] (3) at (0, 2) {};
\node [style=nothing] (4) at (1.5, 1) {};
\node [style=nothing] (5) at (1.5, 1.5) {};
\node [style=nothing] (6) at (1.5, 2) {};
\node [style=nothing] (7) at (1.5, 0.5000002) {};
\node [style=oplus] (8) at (0.5000002, 2) {};
\node [style=oplus] (9) at (1, 0.5000002) {};
\node [style=dot] (10) at (0.5000002, 1.5) {};
\node [style=dot] (11) at (0.5000002, 1) {};
\node [style=dot] (12) at (1, 1.5) {};
\node [style=dot] (13) at (1, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (8);
\draw (8) to (6);
\draw (5) to (12);
\draw (12) to (10);
\draw (10) to (2);
\draw (0) to (11);
\draw (11) to (13);
\draw (13) to (4);
\draw (7) to (9);
\draw (9) to (1);
\draw (10) to (11);
\draw (10) to (8);
\draw (12) to (13);
\draw (13) to (9);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1) {};
\node [style=nothing] (1) at (0, 0.5000002) {};
\node [style=nothing] (2) at (0, 1.5) {};
\node [style=nothing] (3) at (0, 2) {};
\node [style=nothing] (4) at (1.5, 1) {};
\node [style=nothing] (5) at (1.5, 1.5) {};
\node [style=nothing] (6) at (1.5, 2) {};
\node [style=nothing] (7) at (1.5, 0.5000002) {};
\node [style=oplus] (8) at (1, 2) {};
\node [style=dot] (9) at (1, 1.5) {};
\node [style=dot] (10) at (1, 1) {};
\node [style=oplus] (11) at (0.5000002, 0.5000002) {};
\node [style=dot] (12) at (0.5000002, 1) {};
\node [style=dot] (13) at (0.5000002, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (9) to (10);
\draw (9) to (8);
\draw (13) to (12);
\draw (12) to (11);
\draw (3) to (8);
\draw (8) to (6);
\draw (5) to (9);
\draw (9) to (13);
\draw (13) to (2);
\draw (0) to (12);
\draw (12) to (10);
\draw (10) to (4);
\draw (7) to (11);
\draw (11) to (1);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.5.5}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1) {};
\node [style=nothing] (1) at (0, 1.5) {};
\node [style=nothing] (2) at (0, 2) {};
\node [style=nothing] (3) at (1.5, 1) {};
\node [style=nothing] (4) at (1.5, 1.5) {};
\node [style=nothing] (5) at (1.5, 2) {};
\node [style=nothing] (6) at (1.5, 0.5000002) {};
\node [style=oplus] (7) at (0.5, 0.4999999) {};
\node [style=dot] (8) at (1, 1.5) {};
\node [style=dot] (9) at (1, 1) {};
\node [style=dot] (10) at (0.5, 1) {};
\node [style=oplus] (11) at (1, 0.4999999) {};
\node [style=nothing] (12) at (0, 0.4999999) {};
\node [style=dot] (13) at (0.5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (8) to (1);
\draw (0) to (9);
\draw (9) to (10);
\draw (10) to (3);
\draw (6) to (7);
\draw (8) to (9);
\draw (10) to (7);
\draw (12) to (11);
\draw (11) to (7);
\draw (8) to (4);
\draw (9) to (11);
\draw (10) to (13);
\draw (13) to (5);
\draw (13) to (2);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1) {};
\node [style=nothing] (1) at (0, 1.5) {};
\node [style=nothing] (2) at (0, 2) {};
\node [style=nothing] (3) at (1.5, 1) {};
\node [style=nothing] (4) at (1.5, 1.5) {};
\node [style=nothing] (5) at (1.5, 2) {};
\node [style=nothing] (6) at (1.5, 0.5000002) {};
\node [style=oplus] (7) at (1, 0.5000002) {};
\node [style=dot] (8) at (0.5000002, 1.5) {};
\node [style=dot] (9) at (0.5000002, 1) {};
\node [style=dot] (10) at (1, 1) {};
\node [style=oplus] (11) at (0.5, 0.4999999) {};
\node [style=nothing] (12) at (0, 0.4999999) {};
\node [style=dot] (13) at (1, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (8) to (1);
\draw (0) to (9);
\draw (9) to (10);
\draw (10) to (3);
\draw (6) to (7);
\draw (8) to (9);
\draw (10) to (7);
\draw (12) to (11);
\draw (11) to (7);
\draw (8) to (4);
\draw (9) to (11);
\draw (10) to (13);
\draw (13) to (5);
\draw (13) to (2);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.6}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 0.5000001) {};
\node [style=nothing] (2) at (2.5, 0.5000001) {};
\node [style=nothing] (3) at (2.5, -0) {};
\node [style=zeroout] (4) at (2.5, -0.5000001) {};
\node [style=oplus] (5) at (2, -0.5000001) {};
\node [style=dot] (6) at (2, -0) {};
\node [style=dot] (7) at (0.5000001, 0.5000001) {};
\node [style=oplus] (8) at (0.5000001, -0.5000001) {};
\node [style=zeroout] (9) at (0.9999999, -0.5000001) {};
\node [style=onein] (10) at (0, -0.5000001) {};
\node [style=onein] (11) at (1.5, -0.5000001) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (7);
\draw (7) to (2);
\draw (3) to (6);
\draw (6) to (0);
\draw (8) to (9);
\draw (8) to (7);
\draw (5) to (4);
\draw (5) to (6);
\draw (10) to (8);
\draw (11) to (5);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 0.5000001) {};
\node [style=nothing] (2) at (0.9999999, 0.5000001) {};
\node [style=nothing] (3) at (0.9999999, -0) {};
\node [style=dot] (4) at (0.5000001, 0.5000001) {};
\node [style=dot] (5) at (0.5000001, -0) {};
\node [style=onein] (6) at (0, -0.5000001) {};
\node [style=zeroout] (7) at (0.9999999, -0.5000001) {};
\node [style=oplus] (8) at (0.5000001, -0.5000001) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (4);
\draw (4) to (2);
\draw (3) to (5);
\draw (5) to (0);
\draw (6) to (8);
\draw (8) to (7);
\draw (8) to (5);
\draw (5) to (4);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.7}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (1.5, -0) {};
\node [style=onein] (2) at (0.4, 0.5) {};
\node [style=zeroout] (3) at (1.1, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (3);
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (1.5, -0) {};
\node [style=onein] (2) at (0.4, 0.5) {};
\node [style=zeroout] (3) at (1.1, 0.5) {};
\node [style=onein] (4) at (1, -0) {};
\node [style=oneout] (5) at (0.5000002, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (3);
\draw (5) to (0);
\draw (4) to (1);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.8}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, -0) {};
\node [style=oneout] (1) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -0) {};
\node [style=rn] (1) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.9}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1.75) {};
\node [style=nothing] (1) at (0, 1.25) {};
\node [style=nothing] (2) at (0, 0.7500001) {};
\node [style=dot] (3) at (0.4999999, 1.75) {};
\node [style=dot] (4) at (0.5000001, 1.25) {};
\node [style=oplus] (5) at (0.5000001, 0.7500001) {};
\node [style=dot] (6) at (0.9999999, 1.75) {};
\node [style=oplus] (7) at (1, 0.7500001) {};
\node [style=dot] (8) at (1, 1.25) {};
\node [style=nothing] (9) at (1.5, 1.25) {};
\node [style=nothing] (10) at (1.5, 0.7500001) {};
\node [style=nothing] (11) at (1.5, 1.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (3);
\draw (1) to (4);
\draw (2) to (5);
\draw (3) to (4);
\draw (4) to (5);
\draw (6) to (8);
\draw (8) to (7);
\draw (3) to (6);
\draw (6) to (11);
\draw (4) to (8);
\draw (8) to (9);
\draw (5) to (7);
\draw (7) to (10);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1.75) {};
\node [style=nothing] (1) at (0, 1.25) {};
\node [style=nothing] (2) at (0, 0.7500001) {};
\node [style=nothing] (3) at (1.5, 1.25) {};
\node [style=nothing] (4) at (1.5, 0.7500001) {};
\node [style=nothing] (5) at (1.5, 1.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (5);
\draw (1) to (3);
\draw (2) to (4);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.10}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 0.5000002) {};
\node [style=nothing] (2) at (0, 1) {};
\node [style=nothing] (3) at (0, 1.5) {};
\node [style=dot] (4) at (0.5000002, 1) {};
\node [style=dot] (5) at (0.5000002, 0.5000002) {};
\node [style=oplus] (6) at (0.5000002, -0) {};
\node [style=dot] (7) at (1, 1.5) {};
\node [style=oplus] (8) at (1, 0.5000002) {};
\node [style=dot] (9) at (1, 1) {};
\node [style=dot] (10) at (1.5, 1) {};
\node [style=oplus] (11) at (1.5, 0) {};
\node [style=dot] (12) at (1.5, 0.5000002) {};
\node [style=nothing] (13) at (2, 1.5) {};
\node [style=nothing] (14) at (2, 0.5000002) {};
\node [style=nothing] (15) at (2, 1) {};
\node [style=nothing] (16) at (2, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (4) to (5);
\draw (5) to (6);
\draw (7) to (9);
\draw (9) to (8);
\draw (10) to (12);
\draw (12) to (11);
\draw (3) to (7);
\draw (7) to (13);
\draw (15) to (10);
\draw (10) to (9);
\draw (9) to (4);
\draw (4) to (2);
\draw (1) to (5);
\draw (5) to (8);
\draw (8) to (12);
\draw (12) to (14);
\draw (16) to (11);
\draw (11) to (6);
\draw (6) to (0);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 0.5000002) {};
\node [style=nothing] (2) at (0, 1) {};
\node [style=nothing] (3) at (0, 1.5) {};
\node [style=nothing] (4) at (1.5, 1.5) {};
\node [style=nothing] (5) at (1.5, 0.5000002) {};
\node [style=nothing] (6) at (1.5, 1) {};
\node [style=nothing] (7) at (1.5, -0) {};
\node [style=dot] (8) at (0.5000002, 1.5) {};
\node [style=dot] (9) at (0.5000002, 1) {};
\node [style=dot] (10) at (1, 1.5) {};
\node [style=dot] (11) at (1, 1) {};
\node [style=oplus] (12) at (1, 0.5000002) {};
\node [style=oplus] (13) at (0.5000002, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (8);
\draw (8) to (10);
\draw (10) to (4);
\draw (6) to (11);
\draw (11) to (9);
\draw (9) to (2);
\draw (1) to (12);
\draw (12) to (5);
\draw (7) to (13);
\draw (13) to (0);
\draw (13) to (9);
\draw (9) to (8);
\draw (10) to (11);
\draw (11) to (12);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.11}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 0.5000002) {};
\node [style=nothing] (2) at (0, 1) {};
\node [style=nothing] (3) at (0, 1.5) {};
\node [style=nothing] (4) at (2, 0.5) {};
\node [style=nothing] (5) at (2, -0) {};
\node [style=dot] (6) at (0.5000002, 1.5) {};
\node [style=dot] (7) at (1, 1) {};
\node [style=dot] (8) at (1, 0.5000002) {};
\node [style=oplus] (9) at (0.5000002, 1) {};
\node [style=oplus] (10) at (1, -0) {};
\node [style=nothing] (11) at (2, 1.5) {};
\node [style=nothing] (12) at (2, 1) {};
\node [style=oplus] (13) at (1.5, 1) {};
\node [style=dot] (14) at (1.5, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (6) to (9);
\draw (7) to (8);
\draw (8) to (10);
\draw (0) to (10);
\draw (10) to (5);
\draw (4) to (8);
\draw (8) to (1);
\draw (2) to (9);
\draw (9) to (7);
\draw (6) to (3);
\draw (6) to (14);
\draw (14) to (11);
\draw (12) to (13);
\draw (13) to (7);
\draw (13) to (14);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 1) {};
\node [style=nothing] (2) at (0, 0.5000001) {};
\node [style=nothing] (3) at (0, 1.5) {};
\node [style=dot] (4) at (0.5000001, 1.5) {};
\node [style=dot] (5) at (0.5000001, 0.5000001) {};
\node [style=oplus] (6) at (0.5000001, -0) {};
\node [style=nothing] (7) at (1.5, 0.5) {};
\node [style=nothing] (8) at (1.5, 1) {};
\node [style=nothing] (9) at (1.5, 1.5) {};
\node [style=nothing] (10) at (1.5, -0) {};
\node [style=dot] (11) at (1, 1) {};
\node [style=dot] (12) at (1, 0.5000001) {};
\node [style=oplus] (13) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (4);
\draw (2) to (5);
\draw (6) to (0);
\draw (6) to (5);
\draw (5) to (4);
\draw (11) to (1);
\draw (5) to (12);
\draw (12) to (7);
\draw (10) to (13);
\draw (13) to (6);
\draw (13) to (12);
\draw (12) to (11);
\draw (4) to (9);
\draw (11) to (8);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.12}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 0.5000001) {};
\node [style=nothing] (1) at (0, -0) {};
\node [style=nothing] (2) at (0, 1) {};
\node [style=nothing] (3) at (0, 1.5) {};
\node [style=nothing] (4) at (2, 0.5000001) {};
\node [style=nothing] (5) at (2, 1.5) {};
\node [style=nothing] (6) at (2, 0) {};
\node [style=nothing] (7) at (2, 1) {};
\node [style=dot] (8) at (0.5000001, 1.5) {};
\node [style=dot] (9) at (0.5000001, 1) {};
\node [style=oplus] (10) at (0.5000001, 0.5000001) {};
\node [style=oplus] (11) at (1, -0) {};
\node [style=dot] (12) at (1, 1) {};
\node [style=dot] (13) at (1, 0.5000001) {};
\node [style=oplus] (14) at (1.5, 0.5000001) {};
\node [style=dot] (15) at (1.5, 1.5) {};
\node [style=dot] (16) at (1.5, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (8) to (9);
\draw (9) to (10);
\draw (12) to (13);
\draw (13) to (11);
\draw (15) to (16);
\draw (16) to (14);
\draw (3) to (8);
\draw (8) to (15);
\draw (15) to (5);
\draw (7) to (16);
\draw (16) to (12);
\draw (12) to (9);
\draw (9) to (2);
\draw (0) to (10);
\draw (10) to (13);
\draw (13) to (14);
\draw (14) to (4);
\draw (6) to (11);
\draw (11) to (1);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 0.5000001) {};
\node [style=nothing] (1) at (0, -0) {};
\node [style=nothing] (2) at (0, 1) {};
\node [style=nothing] (3) at (0, 1.5) {};
\node [style=nothing] (4) at (1.5, 0.5000001) {};
\node [style=nothing] (5) at (1.5, 1.5) {};
\node [style=nothing] (6) at (1.5, -0) {};
\node [style=nothing] (7) at (1.5, 1) {};
\node [style=dot] (8) at (1, 1) {};
\node [style=dot] (9) at (1, 0.5000001) {};
\node [style=dot] (10) at (0.5000001, 1.5) {};
\node [style=dot] (11) at (0.5000001, 1) {};
\node [style=oplus] (12) at (0.5000001, -0) {};
\node [style=oplus] (13) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (8) to (9);
\draw (3) to (10);
\draw (10) to (5);
\draw (2) to (11);
\draw (11) to (8);
\draw (8) to (7);
\draw (0) to (9);
\draw (9) to (4);
\draw (1) to (12);
\draw (12) to (13);
\draw (13) to (6);
\draw (13) to (9);
\draw (12) to (11);
\draw (11) to (10);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.13}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 1) {};
\node [style=nothing] (2) at (0, 0.5000001) {};
\node [style=nothing] (3) at (0, 1.5) {};
\node [style=nothing] (4) at (2, -0) {};
\node [style=dot] (5) at (0.5000001, 1.5) {};
\node [style=dot] (6) at (0.5000001, 1) {};
\node [style=dot] (7) at (1, 0.5000001) {};
\node [style=oplus] (8) at (0.5000001, 0.5000001) {};
\node [style=oplus] (9) at (1, -0) {};
\node [style=nothing] (10) at (2, 0.5000001) {};
\node [style=nothing] (11) at (2, 1.5) {};
\node [style=nothing] (12) at (2, 1) {};
\node [style=oplus] (13) at (1.5, 0.5) {};
\node [style=dot] (14) at (1.5, 1) {};
\node [style=dot] (15) at (1.5, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (5) to (3);
\draw (6) to (1);
\draw (2) to (8);
\draw (8) to (7);
\draw (4) to (9);
\draw (9) to (0);
\draw (8) to (6);
\draw (6) to (5);
\draw (9) to (7);
\draw (5) to (15);
\draw (15) to (11);
\draw (12) to (14);
\draw (14) to (6);
\draw (7) to (13);
\draw (13) to (10);
\draw (13) to (14);
\draw (14) to (15);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 1) {};
\node [style=nothing] (2) at (0, 0.5000001) {};
\node [style=nothing] (3) at (0, 1.5) {};
\node [style=dot] (4) at (0.5000001, 1.5) {};
\node [style=dot] (5) at (0.5000001, 1) {};
\node [style=oplus] (6) at (0.5000001, -0) {};
\node [style=nothing] (7) at (1.5, 0.5) {};
\node [style=nothing] (8) at (1.5, 1) {};
\node [style=nothing] (9) at (1.5, -0) {};
\node [style=nothing] (10) at (1.5, 1.5) {};
\node [style=dot] (11) at (1, 0.4999998) {};
\node [style=oplus] (12) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (6);
\draw (1) to (5);
\draw (4) to (3);
\draw (4) to (5);
\draw (5) to (6);
\draw (11) to (12);
\draw (12) to (9);
\draw (12) to (6);
\draw (2) to (11);
\draw (4) to (10);
\draw (8) to (5);
\draw (11) to (7);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.14}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 0.5000002) {};
\node [style=nothing] (2) at (2, 0.5000002) {};
\node [style=nothing] (3) at (2, -0) {};
\node [style=oplus] (4) at (0.5000002, -0) {};
\node [style=oplus] (5) at (1.5, -0) {};
\node [style=oplus] (6) at (1, 0.5000002) {};
\node [style=dot] (7) at (1.5, 0.5000002) {};
\node [style=dot] (8) at (1, -0) {};
\node [style=dot] (9) at (0.5000002, 0.5000002) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (9);
\draw (9) to (6);
\draw (6) to (7);
\draw (7) to (2);
\draw (3) to (5);
\draw (5) to (8);
\draw (8) to (4);
\draw (4) to (0);
\draw (4) to (9);
\draw (8) to (6);
\draw (5) to (7);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 0.5000002) {};
\node [style=nothing] (2) at (1, 0.5000002) {};
\node [style=nothing] (3) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [in=180, out=0, looseness=1.25] (1) to (3);
\draw [in=180, out=0, looseness=1.25] (0) to (2);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.15}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1.75) {};
\node [style=nothing] (1) at (0, 1.25) {};
\node [style=nothing] (2) at (0, 0.7500001) {};
\node [style=nothing] (3) at (2, 1.75) {};
\node [style=nothing] (4) at (2, 1.25) {};
\node [style=nothing] (5) at (2, 0.7500001) {};
\node [style=dot] (6) at (1, 1.75) {};
\node [style=dot] (7) at (1, 1.25) {};
\node [style=oplus] (8) at (1, 0.7500001) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (6);
\draw (6) to (3);
\draw (4) to (7);
\draw (7) to (1);
\draw (2) to (8);
\draw (8) to (5);
\draw (8) to (7);
\draw (7) to (6);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1.75) {};
\node [style=nothing] (1) at (0, 1.25) {};
\node [style=nothing] (2) at (0, 0.7500001) {};
\node [style=dot] (3) at (1, 1.75) {};
\node [style=dot] (4) at (1, 1.25) {};
\node [style=oplus] (5) at (1, 0.7500001) {};
\node [style=nothing] (6) at (2, 1.75) {};
\node [style=nothing] (7) at (2, 1.25) {};
\node [style=nothing] (8) at (2, 0.7500001) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [in=180, out=0, looseness=1.25] (0) to (4);
\draw [in=180, out=0, looseness=1.25] (4) to (6);
\draw [in=180, out=0, looseness=1.25] (3) to (7);
\draw [in=0, out=180, looseness=1.25] (3) to (1);
\draw (2) to (5);
\draw (5) to (8);
\draw (3) to (4);
\draw (4) to (5);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{TOF.16}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 0.5) {};
\node [style=nothing] (2) at (0, 1.5) {};
\node [style=nothing] (3) at (0, 2) {};
\node [style=nothing] (4) at (0.9999999, 1.5) {};
\node [style=nothing] (5) at (0.9999999, 0.5) {};
\node [style=zeroin] (6) at (0.9999999, 0.9999999) {};
\node [style=oplus] (7) at (1.5, 0.9999999) {};
\node [style=oplus] (8) at (2.5, 0.9999999) {};
\node [style=dot] (9) at (2, 0.9999999) {};
\node [style=dot] (10) at (2, 0.5) {};
\node [style=dot] (11) at (1.5, 1.5) {};
\node [style=dot] (12) at (1.5, 2) {};
\node [style=dot] (13) at (2.5, 1.5) {};
\node [style=dot] (14) at (2.5, 2) {};
\node [style=oplus] (15) at (2, -0) {};
\node [style=nothing] (16) at (3, 1.5) {};
\node [style=nothing] (17) at (3, 0.5) {};
\node [style=zeroout] (18) at (3, 0.9999999) {};
\node [style=nothing] (19) at (4, -0) {};
\node [style=nothing] (20) at (4, 2) {};
\node [style=nothing] (21) at (4, 0.5) {};
\node [style=nothing] (22) at (4, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (12);
\draw (12) to (14);
\draw (14) to (20);
\draw (22) to (16);
\draw (16) to (13);
\draw (13) to (11);
\draw (11) to (4);
\draw (4) to (2);
\draw (1) to (5);
\draw (5) to (10);
\draw (10) to (17);
\draw (17) to (21);
\draw (19) to (15);
\draw (15) to (0);
\draw (15) to (10);
\draw (10) to (9);
\draw (11) to (7);
\draw (11) to (12);
\draw (14) to (13);
\draw (8) to (13);
\draw (6) to (7);
\draw (7) to (9);
\draw (9) to (8);
\draw (18) to (8);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0.2500001, -0) {};
\node [style=nothing] (1) at (0.2500001, 0.5) {};
\node [style=nothing] (2) at (0.2500001, 1.5) {};
\node [style=nothing] (3) at (0.2500001, 2) {};
\node [style=zeroin] (4) at (0.9999999, 0.9999999) {};
\node [style=oplus] (5) at (1.5, 0.9999999) {};
\node [style=oplus] (6) at (2.5, 0.9999999) {};
\node [style=dot] (7) at (2, 0.9999999) {};
\node [style=dot] (8) at (2, 0.5) {};
\node [style=dot] (9) at (1.5, 1.5) {};
\node [style=dot] (10) at (1.5, 2) {};
\node [style=dot] (11) at (2.5, 1.5) {};
\node [style=dot] (12) at (2.5, 2) {};
\node [style=oplus] (13) at (2, -0) {};
\node [style=zeroout] (14) at (3, 0.9999999) {};
\node [style=nothing] (15) at (3.75, -0) {};
\node [style=nothing] (16) at (3.75, 2) {};
\node [style=nothing] (17) at (3.75, 0.5) {};
\node [style=nothing] (18) at (3.75, 1.5) {};
\node [style=nothing] (19) at (1.25, 1.5) {};
\node [style=nothing] (20) at (2.75, 1.5) {};
\node [style=nothing] (21) at (1.25, 0.5) {};
\node [style=nothing] (22) at (2.75, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (10);
\draw (10) to (12);
\draw (12) to (16);
\draw (11) to (9);
\draw (15) to (13);
\draw (13) to (0);
\draw (13) to (8);
\draw (8) to (7);
\draw (9) to (5);
\draw (9) to (10);
\draw (12) to (11);
\draw (6) to (11);
\draw (4) to (5);
\draw (5) to (7);
\draw (7) to (6);
\draw (6) to (14);
\draw [in=0, out=180, looseness=1.00] (19) to (1);
\draw [in=0, out=180, looseness=1.00] (21) to (2);
\draw (19) to (9);
\draw (21) to (8);
\draw (8) to (22);
\draw [in=180, out=0, looseness=1.00] (22) to (18);
\draw [in=0, out=180, looseness=1.00] (17) to (20);
\draw (20) to (11);
\end{pgfonlayer}
\end{tikzpicture}
$}
\end{enumerate}
\end{multicols}
\
\end{mdframed}
}}
\caption{The identities of $\mathsf{TOF}$}
\label{fig:TOF}
\end{figure}
\section*{Errata}
The author would like to apologize for giving an incorrect proof of Corollary 4.9 in the previous preprint version: contrary to what was claimed there, the scalar $\sqrt 2$ cannot be removed.
\section{Background}
The angle-free fragment of the $\mathsf{ZX}$ calculus---describing the interaction of the $Z$ and $X$ observables, Hadamard gate and $\pi$ phases---is known to be complete for (pure) real stabilizer circuits (stabilizer circuits with real coefficients) \cite{realzx}.
In \cite[Section 4.1]{realzx}, it is shown that real stabilizer circuits are generated by the controlled-$Z$ gate, the $Z$ gate, the Hadamard gate, $|0\>$ state preparations and $\<0|$ post-selected measurements. Therefore, real stabilizer circuits can also be generated by the controlled-not gate, Hadamard gate, $|1\>$ state preparations and $\<1|$ post-selected measurements.
Although the Hadamard gate, controlled-not gate and computational ancillary bits are derivable in the angle-free fragment of the ZX-calculus, the identities are not given in terms of circuit relations involving these gates: instead, they are given in terms of identities, such as spider fusion, which do not preserve the causal structure of being a circuit. Therefore, circuit simplification usually involves a circuit reconstruction step at the end.
We provide a complete set of {\em circuit identities} for the category generated by the controlled-not gate, the Hadamard gate, the state preparation $|1\>$, the post-selected measurement $\<1|$ and the scalar $\sqrt 2$. The completeness is proven by performing a translation to and from the angle-free fragment of the ZX-calculus; however, in contrast to the ZX-calculus, we structure the identities so that they preserve the causal structure of circuits. Although the axiomatization we describe is just as computationally expressive as the angle-free fragment of the ZX-calculus, it provides a high-level language for real stabilizer circuits; and hopefully, alongside the circuit axiomatization for Toffoli circuits, it will lead to a high-level language for Toffoli+Hadamard circuits.
The compilation of quantum programming languages can involve multiple intermediate steps before an optimized physical-level circuit is produced, and optimization can be performed at various levels of granularity \cite{maslov2017basic}.
At the coarsest level of granularity, classical oracles and subroutines are synthesized using generalized controlled-not gates.
Next, these generalized controlled-not gates are decomposed into cascading Toffoli gates, and so on.
Eventually, these gates are decomposed into fault-tolerant 1- and 2-qubit gates.
The ZX-calculus has proven to be successful for this fine-grained optimization, in particular at reducing $T$ counts \cite{190310477}. This optimization is performed in three steps: first, a translation is performed, turning the circuit into spiders and phases. Second, the spider fusion laws, Hopf laws, bialgebra laws and so on are applied to reduce the number of nodes/phases, transforming the circuit into a simpler form resembling an undirected, labeled graph without a global causal structure. Finally, an optimized circuit is re-extracted from this undirected graph.
In order to extract circuits at the end, for example, \cite{zxsimp,duncan2010rewriting} use a property of graphs called gFlow.
Using only circuit relations, in contrast, \cite{Fagan} were able to reduce 2-qubit Clifford circuits to minimal forms in Quantomatic.
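To illustrate the three-step pipeline described above, the following sketch uses the pyzx library employed in \cite{190310477}; it is only indicative, and the exact function names are an assumption which may differ between versions:
\begin{verbatim}
import pyzx as zx

circuit = zx.Circuit.load("circuit.qasm")  # hypothetical input file
g = circuit.to_graph()              # step 1: translate to spiders/phases
zx.simplify.full_reduce(g)          # step 2: spider fusion, Hopf, etc.
optimized = zx.extract_circuit(g)   # step 3: re-extract a causal circuit
\end{verbatim}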
Toffoli+Hadamard quantum circuits, as opposed to the ZX-calculus, are more suitably a language for classical oracles, and are thus appropriate for coarse-grained optimization. In the controlled-not+Hadamard subfragment which we discuss in this paper, on the other hand, one can only produce oracles for affine Boolean functions---which is, of course, computationally very weak.
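(Concretely, such oracles compute only affine maps $x\mapsto Ax\oplus b$ over $\mathbb{Z}_2$; for instance, $\mathsf{cnot}$ computes $(x,y)\mapsto (x, x\oplus y)$ and $\mathsf{not}$ computes $x\mapsto x\oplus 1$.)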
The eventual goal, however, is to use the complete axiomatization of controlled-not+Hadamard circuits given in this paper, together with the axiomatization of Toffoli circuits provided in \cite{tof}, to provide a complete set of identities for the approximately universal \cite{tofh} fragment of Toffoli+Hadamard circuits. In this fragment, indeed, all oracles for classical Boolean functions can be constructed \cite{tof, aaronson}.
In Section \ref{section:tof}, we discuss how this circuit axiomatization of controlled-not+Hadamard circuits could potentially lead to one for Toffoli+Hadamard circuits.
Toffoli+Hadamard circuits also easily accommodate the notion of quantum control.
This is useful for implementing circuits corresponding to the conditional execution of various subroutines, as discussed in \cite[Section 2.4.3]{gilesprogramming} and \cite{hanersoftware}.
In the fragment which we discuss in this paper, however, we cannot control all unitaries: namely, circuits containing controlled-not gates cannot be controlled. Again, the eventual goal is to extend the axiomatizations of cnot+Hadamard and Toffoli circuits to Toffoli+Hadamard circuits, where there is no such limitation.
In the ZX-calculus, by contrast, this notion of control is highly unnatural. One would likely have to appeal to the triangle gate, as discussed in \cite{ngcompleteness,triangle,vilmart}.
\section{The controlled not gate}
\label{sec:cnot}
Recall that $\mathsf{CNOT}$ is the PROP generated by the ancillary bits $|1\>$ and $\<1|$ as well as the controlled not gate:
\[
|1\> :=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (3, -0) {};
\node [style=rn] (1) at (5, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{1cm}
\<1| :=
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (3, -0) {};
\node [style=rn] (1) at (5, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{1cm}
\mathsf{cnot} :=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (3, -0) {};
\node [style=rn] (1) at (3, 1) {};
\node [style=rn] (2) at (5, 1) {};
\node [style=rn] (3) at (5, -0) {};
\node [style=oplus] (4) at (4, -0) {};
\node [style=dot] (5) at (4, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (4);
\draw (4) to (0);
\draw (5) to (4);
\draw (5) to (2);
\draw (5) to (1);
\end{pgfonlayer}
\end{tikzpicture}
\]
Where ``gaps'' are drawn between $\mathsf{cnot}$ gates and $\mathsf{cnot}$ gates are drawn upside down to suppress symmetry maps:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 3) {};
\node [style=nothing] (1) at (0, 2.5) {};
\node [style=dot] (2) at (0.5000001, 2.5) {};
\node [style=oplus] (3) at (0.5, 3) {};
\node [style=nothing] (4) at (1, 2.5) {};
\node [style=nothing] (5) at (1, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (2);
\draw (3) to (0);
\draw (2) to (4);
\draw (5) to (3);
\draw (3) to (2);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (-0.25, 1.5) {};
\node [style=nothing] (1) at (-0.25, 2.5) {};
\node [style=dot] (2) at (0.5000001, 2.5) {};
\node [style=oplus] (3) at (0.5000001, 2) {};
\node [style=nothing] (4) at (1.25, 2.5) {};
\node [style=nothing] (5) at (1.25, 1.5) {};
\node [style=nothing] (6) at (-0.25, 2) {};
\node [style=nothing] (7) at (0.5000001, 1.5) {};
\node [style=nothing] (8) at (1.25, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (2);
\draw [in=0, out=180, looseness=1.00] (3) to (0);
\draw (3) to (2);
\draw [in=180, out=0, looseness=1.00] (6) to (7);
\draw (2) to (4);
\draw [in=0, out=180, looseness=1.00] (5) to (3);
\draw [in=180, out=0, looseness=1.00] (7) to (8);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{.5cm}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0.25, 3) {};
\node [style=nothing] (1) at (0.25, 2.5) {};
\node [style=oplus] (2) at (0.5000001, 2.5) {};
\node [style=dot] (3) at (0.5, 3) {};
\node [style=nothing] (4) at (0.75, 2.5) {};
\node [style=nothing] (5) at (0.75, 3) {};
\node [style=nothing] (6) at (1.75, 3) {};
\node [style=nothing] (7) at (1.75, 2.5) {};
\node [style=nothing] (8) at (-0.75, 3) {};
\node [style=nothing] (9) at (-0.75, 2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (2);
\draw (3) to (0);
\draw (2) to (4);
\draw (5) to (3);
\draw (3) to (2);
\draw [style=simple, in=0, out=180, looseness=1.00] (6) to (4);
\draw [style=simple, in=180, out=0, looseness=1.00] (5) to (7);
\draw [style=simple, in=0, out=180, looseness=1.00] (0) to (9);
\draw [style=simple, in=180, out=0, looseness=1.00] (8) to (1);
\end{pgfonlayer}
\end{tikzpicture}
$$
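Concretely, under the standard interpretation into matrices (which is not part of the axiomatization, but may help to fix intuition), these generators denote:
$$
|1\> \mapsto
\begin{bmatrix}
0\\
1
\end{bmatrix}
\hspace*{.5cm}
\<1| \mapsto
\begin{bmatrix}
0 & 1
\end{bmatrix}
\hspace*{.5cm}
\mathsf{cnot} \mapsto
\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{bmatrix}
$$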
These gates must satisfy the identities given in Figure \ref{fig:CNOT}:
\begin{figure}[H]
\noindent
\scalebox{1.0}{%
\vbox{%
\begin{mdframed}
\begin{multicols}{2}
\begin{enumerate}[label={\bf [CNOT.\arabic*]}, ref={\bf [CNOT.\arabic*]}, wide = 0pt, leftmargin = 2em]
\item
\label{CNOT.1}
{\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 0) {};
\node [style=nothing] (1) at (0, .5) {};
%
\node [style=oplus] (2) at (.5, 0) {};
\node [style=dot] (3) at (.5, .5) {};
%
\node [style=dot] (4) at (1, 0) {};
\node [style=oplus] (5) at (1, .5) {};
%
\node [style=oplus] (6) at (1.5, 0) {};
\node [style=dot] (7) at (1.5, .5) {};
%
\node [style=nothing] (8) at (2, 0) {};
\node [style=nothing] (9) at (2, .5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (8);
\draw [style=simple] (1) to (9);
%
\draw [style=simple] (2) to (3);
\draw [style=simple] (4) to (5);
\draw [style=simple] (6) to (7);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 0.5000002) {};
\node [style=nothing] (2) at (1, 0.5000002) {};
\node [style=nothing] (3) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [in=180, out=0, looseness=1.25] (1) to (3);
\draw [in=180, out=0, looseness=1.25] (0) to (2);
\end{pgfonlayer}
\end{tikzpicture}$}
\item
\label{CNOT.2}
\hfil{
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 0) {};
\node [style=nothing] (1) at (0, .5) {};
%
\node [style=oplus] (2) at (.5, 0) {};
\node [style=dot] (3) at (.5, .5) {};
%
\node [style=oplus] (6) at (1, 0) {};
\node [style=dot] (7) at (1, .5) {};
%
\node [style=nothing] (8) at (1.5, 0) {};
\node [style=nothing] (9) at (1.5, .5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (8);
\draw [style=simple] (1) to (9);
%
\draw [style=simple] (2) to (3);
\draw [style=simple] (6) to (7);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 0) {};
\node [style=nothing] (1) at (0, .5) {};
%
\node [style=nothing] (3) at (1.5, 0) {};
\node [style=nothing] (4) at (1.5, .5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (3);
\draw [style=simple] (1) to (4);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{CNOT.3}
\hfil{
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1) {};
\node [style=nothing] (1) at (0, .5) {};
\node [style=nothing] (2) at (0, 0) {};
%
\node [style=oplus] (3) at (.75, 1) {};
\node [style=dot] (4) at (.75, .5) {};
\node [style=dot] (5) at (1.25, .5) {};
\node [style=oplus] (6) at (1.25, 0) {};
%
\node [style=nothing] (7) at (2, 1) {};
\node [style=nothing] (8) at (2, .5) {};
\node [style=nothing] (9) at (2, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (7);
\draw [style=simple] (1) to (8);
\draw [style=simple] (2) to (9);
%
\draw [style=simple] (3) to (4);
\draw [style=simple] (5) to (6);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1) {};
\node [style=nothing] (1) at (0, .5) {};
\node [style=nothing] (2) at (0, 0) {};
%
\node [style=oplus] (3) at (1.25, 1) {};
\node [style=dot] (4) at (1.25, .5) {};
\node [style=dot] (5) at (.75, .5) {};
\node [style=oplus] (6) at (.75, 0) {};
%
\node [style=nothing] (7) at (2, 1) {};
\node [style=nothing] (8) at (2, .5) {};
\node [style=nothing] (9) at (2, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (7);
\draw [style=simple] (1) to (8);
\draw [style=simple] (2) to (9);
%
\draw [style=simple] (3) to (4);
\draw [style=simple] (5) to (6);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{CNOT.4}
\hfil$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, .5) {};
\node [style=nothing] (1) at (0, 0) {};
%
\node [style=dot] (2) at (.5, .5) {};
\node [style=oplus] (3) at (.5, 0) {};
%
\node [style=nothing] (4) at (1, .5) {};
\node [style=nothing] (5) at (1, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (4);
\draw [style=simple] (1) to (5);
\draw [style=simple] (2) to (3);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, .5) {};
\node [style=nothing] (1) at (0, 0) {};
%
\node [style=dot] (2) at (.5, .5) {};
\node [style=oplus] (3) at (.5, 0) {};
%
\node [style=oneout] (4) at (1, .5) {};
%
\node [style=nothing] (5) at (2, 0) {};
\node [style=onein] (6) at (1.5, .5) {};
\node [style=nothing] (7) at (2, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (4);
\draw [style=simple] (1) to (5);
\draw [style=simple] (2) to (3);
\draw [style=simple] (6) to (7);
\end{pgfonlayer}
\end{tikzpicture}
$
\hspace*{1.15cm}\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, .5) {};
\node [style=nothing] (1) at (0, 0) {};
%
\node [style=dot] (2) at (.5, .5) {};
\node [style=oplus] (3) at (.5, 0) {};
%
\node [style=oneout] (4) at (1, .5) {};
\node [style=nothing] (5) at (1, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (4);
\draw [style=simple] (1) to (5);
\draw [style=simple] (2) to (3);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=oneout] (0) at (2, .5) {};
\node [style=nothing] (1) at (2, 0) {};
%
\node [style=dot] (2) at (1.5, .5) {};
\node [style=oplus] (3) at (1.5, 0) {};
%
\node [style=onein] (4) at (1, .5) {};
%
\node [style=nothing] (5) at (0, 0) {};
\node [style=oneout] (6) at (.5, .5) {};
\node [style=nothing] (7) at (0, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (4);
\draw [style=simple] (1) to (5);
\draw [style=simple] (2) to (3);
\draw [style=simple] (6) to (7);
\end{pgfonlayer}
\end{tikzpicture}
$
\item
\label{CNOT.5}
\hfil{
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1) {};
\node [style=nothing] (1) at (0, .5) {};
\node [style=nothing] (2) at (0, 0) {};
%
\node [style=dot] (3) at (.75, 1) {};
\node [style=oplus] (4) at (.75, .5) {};
\node [style=oplus] (5) at (1.25, .5) {};
\node [style=dot] (6) at (1.25, 0) {};
%
\node [style=nothing] (7) at (2, 1) {};
\node [style=nothing] (8) at (2, .5) {};
\node [style=nothing] (9) at (2, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (7);
\draw [style=simple] (1) to (8);
\draw [style=simple] (2) to (9);
%
\draw [style=simple] (3) to (4);
\draw [style=simple] (5) to (6);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1) {};
\node [style=nothing] (1) at (0, .5) {};
\node [style=nothing] (2) at (0, 0) {};
%
\node [style=dot] (3) at (1.25, 1) {};
\node [style=oplus] (4) at (1.25, .5) {};
\node [style=oplus] (5) at (.75, .5) {};
\node [style=dot] (6) at (.75, 0) {};
%
\node [style=nothing] (7) at (2, 1) {};
\node [style=nothing] (8) at (2, .5) {};
\node [style=nothing] (9) at (2, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (7);
\draw [style=simple] (1) to (8);
\draw [style=simple] (2) to (9);
%
\draw [style=simple] (3) to (4);
\draw [style=simple] (5) to (6);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{CNOT.6}
\hfil{
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, 0) {};
\node [style=oneout] (1) at (1, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, 0) {};
\node [style=rn] (1) at (1, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{CNOT.7}
\hfil$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, 1) {};
\node [style=onein] (1) at (0, .5) {};
\node [style=nothing] (2) at (0, 0) {};
%
\node [style=dot] (3) at (.5, 1) {};
\node [style=oplus] (4) at (.5, .5) {};
\node [style=dot] (5) at (1, .5) {};
\node [style=oplus] (6) at (1, 0) {};
%
\node [style=oneout] (7) at (1, 1) {};
\node [style=nothing] (8) at (1.5, .5) {};
\node [style=nothing] (9) at (1.5, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (7);
\draw [style=simple] (1) to (8);
\draw [style=simple] (2) to (9);
%
\draw [style=simple] (3) to (4);
\draw [style=simple] (5) to (6);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, 1) {};
\node [style=onein] (1) at (0, .5) {};
\node [style=nothing] (2) at (0, 0) {};
%
\node [style=dot] (3) at (.5, 1) {};
\node [style=oplus] (4) at (.5, .5) {};
%
\node [style=oneout] (7) at (1, 1) {};
\node [style=nothing] (8) at (1.5, .5) {};
\node [style=nothing] (9) at (1.5, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (7);
\draw [style=simple] (1) to (8);
\draw [style=simple] (2) to (9);
%
\draw [style=simple] (3) to (4);
\end{pgfonlayer}
\end{tikzpicture}
$
\hspace*{1.15cm}\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=oneout] (0) at (1.5, 1) {};
\node [style=oneout] (1) at (1.5, .5) {};
\node [style=nothing] (2) at (1.5, 0) {};
%
\node [style=dot] (3) at (1, 1) {};
\node [style=oplus] (4) at (1, .5) {};
\node [style=dot] (5) at (.5, .5) {};
\node [style=oplus] (6) at (.5, 0) {};
%
\node [style=onein] (7) at (.5, 1) {};
\node [style=nothing] (8) at (0, .5) {};
\node [style=nothing] (9) at (0, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (7);
\draw [style=simple] (1) to (8);
\draw [style=simple] (2) to (9);
%
\draw [style=simple] (3) to (4);
\draw [style=simple] (5) to (6);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=oneout] (0) at (1.5, 1) {};
\node [style=oneout] (1) at (1.5, .5) {};
\node [style=nothing] (2) at (1.5, 0) {};
%
\node [style=dot] (3) at (1, 1) {};
\node [style=oplus] (4) at (1, .5) {};
%
\node [style=onein] (7) at (.5, 1) {};
\node [style=nothing] (8) at (0, .5) {};
\node [style=nothing] (9) at (0, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (7);
\draw [style=simple] (1) to (8);
\draw [style=simple] (2) to (9);
%
\draw [style=simple] (3) to (4);
\end{pgfonlayer}
\end{tikzpicture}
$
\item
\label{CNOT.8}
\hfil{
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1) {};
\node [style=nothing] (1) at (0, .5) {};
\node [style=nothing] (2) at (0, 0) {};
%
\node [style=dot] (3) at (.5, 1) {};
\node [style=oplus] (4) at (.5, .5) {};
\node [style=dot] (5) at (1, .5) {};
\node [style=oplus] (6) at (1, 0) {};
\node [style=dot] (7) at (1.5, 1) {};
\node [style=oplus] (8) at (1.5, .5) {};
%
\node [style=nothing] (9) at (2, 1) {};
\node [style=nothing] (10) at (2, .5) {};
\node [style=nothing] (11) at (2, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (9);
\draw [style=simple] (1) to (10);
\draw [style=simple] (2) to (11);
%
\draw [style=simple] (3) to (4);
\draw [style=simple] (5) to (6);
\draw [style=simple] (7) to (8);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1) {};
\node [style=nothing] (1) at (0, .5) {};
\node [style=nothing] (2) at (0, 0) {};
%
\node [style=dot] (5) at (.5, .5) {};
\node [style=oplus] (6) at (.5, 0) {};
\node [style=dot] (7) at (1, 1) {};
\node [style=oplus] (8) at (1, 0) {};
%
\node [style=nothing] (9) at (1.5, 1) {};
\node [style=nothing] (10) at (1.5, .5) {};
\node [style=nothing] (11) at (1.5, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (9);
\draw [style=simple] (1) to (10);
\draw [style=simple] (2) to (11);
%
\draw [style=simple] (5) to (6);
\draw [style=simple] (7) to (8);
\end{pgfonlayer}
\end{tikzpicture}
$}
\item
\label{CNOT.9}
\hfil{
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, 1) {};
\node [style=onein] (1) at (0, .5) {};
\node [style=nothing] (2) at (0, 0) {};
%
\node [style=dot] (3) at (.5, 1) {};
\node [style=oplus] (4) at (.5, .5) {};
%
\node [style=oneout] (7) at (1, 1) {};
\node [style=oneout] (8) at (1, .5) {};
\node [style=nothing] (9) at (1, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (7);
\draw [style=simple] (1) to (8);
\draw [style=simple] (2) to (9);
%
\draw [style=simple] (3) to (4);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, 1) {};
\node [style=onein] (1) at (0, .5) {};
\node [style=nothing] (2) at (0, 0) {};
%
\node [style=dot] (3) at (1, 1) {};
\node [style=oplus] (4) at (1, .5) {};
%
\node [style=oneout] (7) at (2, 1) {};
\node [style=oneout] (8) at (2, .5) {};
\node [style=nothing] (9) at (2, 0) {};
%
\node [style=oneout] (10) at (0.75, 0) {};
\node [style=onein] (11) at (1.25, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (7);
\draw [style=simple] (1) to (8);
\draw [style=simple] (2) to (10);
\draw [style=simple] (11) to (9);
%
\draw [style=simple] (3) to (4);
\end{pgfonlayer}
\end{tikzpicture}
$}
\end{enumerate}
\end{multicols}
\
\end{mdframed}
}}
\caption{The identities of $\mathsf{CNOT}$}
\label{fig:CNOT}
\end{figure}
Where the not gate and $|0\>$ ancillary bits are derived:
$$
\mathsf{not}=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, 3) {};
\node [style=none] (1) at (1, 3) {};
\node [style=oplus] (2) at (0.5, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1.center) to (2);
\draw [style=simple] (2) to (0.center);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, 3) {};
\node [style=none] (1) at (1, 3) {};
\node [style=oplus] (2) at (0.5, 3) {};
\node [style=onein] (3) at (0, 3.5) {};
\node [style=oneout] (4) at (1, 3.5) {};
\node [style=dot] (5) at (0.5, 3.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1.center) to (2);
\draw [style=simple] (2) to (0.center);
\draw [style=simple] (2) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple] (5) to (3);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{.5cm}
|0\>=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (0, 3) {};
\node [style=none] (1) at (1, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, 3) {};
\node [style=none] (1) at (1, 3) {};
\node [style=oplus] (2) at (0.5, 3) {};
\node [style=onein] (3) at (0, 3.5) {};
\node [style=oneout] (4) at (1, 3.5) {};
\node [style=dot] (5) at (0.5, 3.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1.center) to (2);
\draw [style=simple] (2) to (0.center);
\draw [style=simple] (2) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple] (5) to (3);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{.5cm}
\<0|=
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (0, 3) {};
\node [style=none] (1) at (1, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, 3) {};
\node [style=none] (1) at (1, 3) {};
\node [style=oplus] (2) at (0.5, 3) {};
\node [style=onein] (3) at (0, 3.5) {};
\node [style=oneout] (4) at (1, 3.5) {};
\node [style=dot] (5) at (0.5, 3.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1.center) to (2);
\draw [style=simple] (2) to (0.center);
\draw [style=simple] (2) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple] (5) to (3);
\end{pgfonlayer}
\end{tikzpicture}
$$
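As a sanity check outside of the calculus, these derivations can be verified under the standard matrix interpretation. The following sketch (assuming the convention that the top wire is the first tensor factor) confirms that the derived $\mathsf{not}$ gate denotes the matrix $X$:
\begin{verbatim}
import numpy as np

ket1 = np.array([[0.], [1.]])    # |1> preparation
bra1 = ket1.T                    # <1| post-selection
I2 = np.eye(2)
cnot = np.array([[1, 0, 0, 0],   # control on the first (top) wire
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# not := (<1| (x) I) . cnot . (|1> (x) I)
derived_not = np.kron(bra1, I2) @ cnot @ np.kron(ket1, I2)
assert np.allclose(derived_not, np.array([[0, 1], [1, 0]]))
\end{verbatim}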
Where there is a pseudo-Frobenius structure (non-unital classical structure) generated by:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, 3) {};
\node [style=fanout] (1) at (1, 3) {};
\node [style=none] (2) at (2, 3.5) {};
\node [style=none] (3) at (2, 2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-34, out=180, looseness=1.00] (3.center) to (1);
\draw [style=simple] (1) to (0.center);
\draw [style=simple, in=180, out=34, looseness=1.00] (1) to (2.center);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, 3) {};
\node [style=none] (1) at (1, 3) {};
\node [style=none] (2) at (1, 2.5) {};
\node [style=dot] (3) at (0.5, 3) {};
\node [style=zeroin] (4) at (0, 2.5) {};
\node [style=oplus] (5) at (0.5, 2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2.center) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple] (0.center) to (3);
\draw [style=simple] (3) to (1.center);
\draw [style=simple] (5) to (3);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{.5cm}
\text{and}
\hspace*{.5cm}
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, 3) {};
\node [style=fanout] (1) at (1, 3) {};
\node [style=none] (2) at (2, 3.5) {};
\node [style=none] (3) at (2, 2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-34, out=180, looseness=1.00] (3.center) to (1);
\draw [style=simple] (1) to (0.center);
\draw [style=simple, in=180, out=34, looseness=1.00] (1) to (2.center);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, 3) {};
\node [style=none] (1) at (1, 3) {};
\node [style=none] (2) at (1, 2.5) {};
\node [style=dot] (3) at (0.5, 3) {};
\node [style=zeroin] (4) at (0, 2.5) {};
\node [style=oplus] (5) at (0.5, 2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2.center) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple] (0.center) to (3);
\draw [style=simple] (3) to (1.center);
\draw [style=simple] (5) to (3);
\end{pgfonlayer}
\end{tikzpicture}
$$
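Under the standard matrix interpretation, the comultiplication above is simply the copy map on the computational basis; for $x \in \{0,1\}$:
$$
|x\> \;\mapsto\; \mathsf{cnot}(|x\>\otimes|0\>) \;=\; |x\>\otimes|x\>
$$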
There is the following completeness result:
\begin{theorem}
$\mathsf{CNOT}$ is discrete-inverse equivalent to the category of affine partial isomorphisms between finite-dimensional $\mathbb{Z}_2$ vector spaces, and thus, is complete.
\end{theorem}
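For example, under this correspondence the $\mathsf{cnot}$ gate is interpreted as the total affine isomorphism $(x,y)\mapsto (x, x\oplus y)$ of $\mathbb{Z}_2^2$, whereas $|1\>$ is interpreted as the point $1$ and $\<1|$ as the partial map defined only at $1$.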
\section{Stabilizer quantum mechanics and the angle-free ZX-calculus}
In this section, we briefly describe the well known fragment of quantum mechanics known as stabilizer quantum mechanics. In particular, we focus on the real fragment of stabilizer quantum mechanics, and describe a complete axiomatization thereof called the angle-free ZX-calculus. Stabilizer quantum mechanics is very well studied; a good reference from a categorical perspective is given in \cite{backensthesis}.
\begin{definition}
The {\bf Pauli matrices} are the complex matrices:
$$
X:=
\begin{bmatrix}
0 & 1\\
1 & 0
\end{bmatrix}
\hspace*{.5cm}
Y:=
\begin{bmatrix}
0 & -i\\
i & 0
\end{bmatrix}
\hspace*{.5cm}
Z:=
\begin{bmatrix}
1 & 0\\
0 & -1
\end{bmatrix}
$$
The {\bf Pauli group} on $n$ qubits is the closure of the set:
$$P_n:=\{\lambda a_1\otimes\cdots\otimes a_n | \lambda \in \{\pm1,\pm i\}, a_i \in \{I_2,X,Y,Z\}\}$$
under matrix multiplication.
The {\bf stabilizer group} $S_{|\phi\>}$ of a quantum state $|\phi\>$ is the group of operators for which $|\phi\>$ is a $+1$ eigenvector.
A state on $n$ qubits is a {\bf stabilizer state} in case it is stabilized by a subgroup of $P_n$ of order $2^n$.
\end{definition}
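For example, the Bell state $\frac{1}{\sqrt 2}(|00\>+|11\>)$ is a stabilizer state: it is stabilized by the order-$4$ subgroup of $P_2$ generated by $X\otimes X$ and $Z\otimes Z$, namely
$$
\{I\otimes I,\ X\otimes X,\ -Y\otimes Y,\ Z\otimes Z\}
$$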
\begin{definition}
The {\bf Clifford group} on $n$ qubits is the group of unitary operators which act on the Pauli group $P_n$ by conjugation:
$$
C_n:=\{U \in \mathsf{U}(2^n) |\forall p \in P_n, UpU^{-1} \in P_n \}
$$
\end{definition}
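For example, the Hadamard gate $H:=\frac{1}{\sqrt 2}\begin{bmatrix}1 & 1\\ 1 & -1\end{bmatrix}$ is in $C_1$: conjugation by $H$ interchanges $X$ and $Z$, and sends $Y$ to $-Y$.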
There is an algebraic description of stabilizer states:
\begin{lemma}
All $n$-qubit stabilizer states have the form $C|0\>^{\otimes n}$, for some member $C$ of the Clifford group on $n$ qubits.
\end{lemma}
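For example, the Bell state considered above arises in this way: $\frac{1}{\sqrt 2}(|00\>+|11\>) = \mathsf{cnot}\,(H\otimes I_2)|00\>$, where both $\mathsf{cnot}$ and $H\otimes I_2$ are Clifford.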
Indeed, we also consider a subgroup of $C_n$:
\begin{definition}
The {\bf real Clifford group} on $n$ qubits is the subgroup of the Clifford group consisting of operators with real entries, i.e.:
$$
C_n^{re}:=\{ U \in C_n | \bar{U} = U\}
$$
So that an $n$-qubit {\bf real stabilizer state} is a state of the form $C|0\>^{\otimes n}$ for some real Clifford operator $C$.
\end{definition}
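For example, $H$, $Z$ and the controlled-$Z$ gate are real Clifford operators, whereas the phase gate $S:=\mathrm{diag}(1,i)$ is Clifford but not real.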
We say that a {\bf (real) stabilizer circuit} is a (real) Clifford composed with state preparations and measurements in the computational basis.
\label{sec:ZX}
The {\bf ZX-calculus} is a collection of calculi describing the interaction of the complementary Frobenius algebras corresponding to the Pauli $Z$ and $X$ observables and their phases. The first iteration of the ZX-calculus was described in \cite{coecke2011interacting}.
\begin{definition}
\label{def:frob}
A {\bf Frobenius algebra} in a monoidal category is a $5$-tuple:
$$(A,\zmul,\zmulu,\zcomul,\zcomulu)$$
such that $(A,\zmul,\zmulu)$ is a monoid and $(A,\zcomul,\zcomulu)$ is a comonoid:
$$
\begin{tikzpicture}[yscale=-1,xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (-2, 0.5) {};
\node [style=none] (1) at (-1, -0) {};
\node [style=zmul] (2) at (-1, 1) {};
\node [style=none] (3) at (-3, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-27, out=180, looseness=1.00] (1.center) to (0);
\draw [style=simple, in=180, out=27, looseness=1.00] (0) to (2.center);
\draw [style=simple] (3.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (-2, 0.5) {};
\node [style=none] (1) at (-1, -0) {};
\node [style=zmul] (2) at (-1, 1) {};
\node [style=none] (3) at (-3, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-27, out=180, looseness=1.00] (1.center) to (0);
\draw [style=simple, in=180, out=27, looseness=1.00] (0) to (2.center);
\draw [style=simple] (3.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (-2, 0.5) {};
\node [style=none] (1) at (-3, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0.center) to (1.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (-2, 0.5) {};
\node [style=none] (1) at (-1, -0) {};
\node [style=zmul] (2) at (-1, 1) {};
\node [style=none] (3) at (-3, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-27, out=180, looseness=1.00] (1.center) to (0);
\draw [style=simple, in=180, out=27, looseness=1.00] (0) to (2.center);
\draw [style=simple] (3.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}[yscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (-2, 0.5) {};
\node [style=none] (1) at (-1, -0) {};
\node [style=zmul] (2) at (-1, 1) {};
\node [style=none] (3) at (-3, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-27, out=180, looseness=1.00] (1.center) to (0);
\draw [style=simple, in=180, out=27, looseness=1.00] (0) to (2.center);
\draw [style=simple] (3.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
$$
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (0, -0) {};
\node [style=none] (1) at (1, -0) {};
\node [style=none] (2) at (-2, 1) {};
\node [style=none] (3) at (-2, -0) {};
\node [style=none] (4) at (-2, -0.75) {};
\node [style=zmul] (5) at (-1, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-159, out=0, looseness=1.00] (4.center) to (0);
\draw [style=simple] (0) to (1.center);
\draw [style=simple, in=0, out=153, looseness=1.00] (0) to (5);
\draw [style=simple, in=0, out=153, looseness=1.00] (5) to (2.center);
\draw [style=simple, in=-153, out=0, looseness=1.00] (3.center) to (5);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}[yscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (0, -0) {};
\node [style=none] (1) at (1, -0) {};
\node [style=none] (2) at (-2, 1) {};
\node [style=none] (3) at (-2, -0) {};
\node [style=none] (4) at (-2, -0.75) {};
\node [style=zmul] (5) at (-1, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-159, out=0, looseness=1.00] (4.center) to (0);
\draw [style=simple] (0) to (1.center);
\draw [style=simple, in=0, out=153, looseness=1.00] (0) to (5);
\draw [style=simple, in=0, out=153, looseness=1.00] (5) to (2.center);
\draw [style=simple, in=-153, out=0, looseness=1.00] (3.center) to (5);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{.5cm}
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (0, -0) {};
\node [style=none] (1) at (1, -0) {};
\node [style=none] (2) at (-2, 1) {};
\node [style=none] (3) at (-2, -0) {};
\node [style=none] (4) at (-2, -0.75) {};
\node [style=zmul] (5) at (-1, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-159, out=0, looseness=1.00] (4.center) to (0);
\draw [style=simple] (0) to (1.center);
\draw [style=simple, in=0, out=153, looseness=1.00] (0) to (5);
\draw [style=simple, in=0, out=153, looseness=1.00] (5) to (2.center);
\draw [style=simple, in=-153, out=0, looseness=1.00] (3.center) to (5);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}[yscale=-1, xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (0, -0) {};
\node [style=none] (1) at (1, -0) {};
\node [style=none] (2) at (-2, 1) {};
\node [style=none] (3) at (-2, -0) {};
\node [style=none] (4) at (-2, -0.75) {};
\node [style=zmul] (5) at (-1, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-159, out=0, looseness=1.00] (4.center) to (0);
\draw [style=simple] (0) to (1.center);
\draw [style=simple, in=0, out=153, looseness=1.00] (0) to (5);
\draw [style=simple, in=0, out=153, looseness=1.00] (5) to (2.center);
\draw [style=simple, in=-153, out=0, looseness=1.00] (3.center) to (5);
\end{pgfonlayer}
\end{tikzpicture}
$$
And the Frobenius law holds:
\begin{description}
\item[{[F]}]
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (4, -0) {};
\node [style=zmul] (1) at (3, 1) {};
\node [style=rn] (2) at (5, 1) {};
\node [style=rn] (3) at (5, -0) {};
\node [style=rn] (4) at (2, -0) {};
\node [style=rn] (5) at (2, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=150, out=-30, looseness=1.00] (1) to (0);
\draw [style=simple, in=180, out=30, looseness=1.00] (1) to (2);
\draw [style=simple] (3) to (0);
\draw [style=simple, in=0, out=-150, looseness=1.00] (0) to (4);
\draw [style=simple] (5) to (1);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (5, 1) {};
\node [style=rn] (1) at (5, -0) {};
\node [style=rn] (2) at (2, -0) {};
\node [style=rn] (3) at (2, 1) {};
\node [style=zmul] (4) at (3, 0.5) {};
\node [style=zmul] (5) at (4, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-27, out=180, looseness=1.00] (1) to (5);
\draw [style=simple, in=180, out=27, looseness=1.00] (5) to (0);
\draw [style=simple] (5) to (4);
\draw [style=simple, in=0, out=153, looseness=1.00] (4) to (3);
\draw [style=simple, in=0, out=-153, looseness=1.00] (4) to (2);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (4, -0) {};
\node [style=zmul] (1) at (3, 1) {};
\node [style=rn] (2) at (5, 1) {};
\node [style=rn] (3) at (5, -0) {};
\node [style=rn] (4) at (2, -0) {};
\node [style=rn] (5) at (2, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=150, out=-30, looseness=1.00] (1) to (0);
\draw [style=simple, in=180, out=30, looseness=1.00] (1) to (2);
\draw [style=simple] (3) to (0);
\draw [style=simple, in=0, out=-150, looseness=1.00] (0) to (4);
\draw [style=simple] (5) to (1);
\end{pgfonlayer}
\end{tikzpicture}
$
\end{description}
A Frobenius algebra is {\bf special} if
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (2, 3) {};
\node [style=zmul] (1) at (3, 3) {};
\node [style=rn] (2) at (4, 3) {};
\node [style=rn] (3) at (1, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (0);
\draw [style=simple, bend left=45, looseness=1.25] (0) to (1);
\draw [style=simple, bend left=45, looseness=1.25] (1) to (0);
\draw [style=simple] (1) to (2);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2, 3) {};
\node [style=rn] (1) at (1, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
$$
and {\bf commutative} if the underlying monoid and comonoid are commutative and cocommutative:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (-2, 0.5) {};
\node [style=none] (1) at (-1, -0) {};
\node [style=none] (2) at (-1, 1) {};
\node [style=none] (3) at (-3, 0.5) {};
\node [style=none] (4) at (0, 1) {};
\node [style=none] (5) at (0, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-27, out=180, looseness=1.00] (1.center) to (0);
\draw [style=simple, in=180, out=27, looseness=1.00] (0) to (2.center);
\draw [style=simple] (3.center) to (0);
\draw [style=simple, in=180, out=0, looseness=0.75] (1.center) to (4.center);
\draw [style=simple, in=0, out=180, looseness=0.75] (5.center) to (2.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (-2, 0.5) {};
\node [style=none] (1) at (-1, -0) {};
\node [style=none] (2) at (-1, 1) {};
\node [style=none] (3) at (-3, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-27, out=180, looseness=1.00] (1.center) to (0);
\draw [style=simple, in=180, out=27, looseness=1.00] (0) to (2.center);
\draw [style=simple] (3.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{.5cm}
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (-2, 0.5) {};
\node [style=none] (1) at (-1, -0) {};
\node [style=none] (2) at (-1, 1) {};
\node [style=none] (3) at (-3, 0.5) {};
\node [style=none] (4) at (0, 1) {};
\node [style=none] (5) at (0, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-27, out=180, looseness=1.00] (1.center) to (0);
\draw [style=simple, in=180, out=27, looseness=1.00] (0) to (2.center);
\draw [style=simple] (3.center) to (0);
\draw [style=simple, in=180, out=0, looseness=0.75] (1.center) to (4.center);
\draw [style=simple, in=0, out=180, looseness=0.75] (5.center) to (2.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (-2, 0.5) {};
\node [style=none] (1) at (-1, -0) {};
\node [style=none] (2) at (-1, 1) {};
\node [style=none] (3) at (-3, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-27, out=180, looseness=1.00] (1.center) to (0);
\draw [style=simple, in=180, out=27, looseness=1.00] (0) to (2.center);
\draw [style=simple] (3.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
$$
A {\bf \dag-Frobenius algebra} $(A,\zmul,\zmulu)$ is a Frobenius algebra of the form
$$(A,\zmul,\zmulu,\zmul^\dag,\zmulu^\dag)$$
That is to say, the monoid and comonoid are daggers of each other.
Special commutative \dag-Frobenius algebras are called {\bf classical structures}.
A non-(co)unital special commutative \dag-Frobenius algebra is called a {\bf semi-Frobenius algebra}. Semi-Frobenius algebras are used to construct a weak product structure for inverse categories such as $\mathsf{CNOT}$ and $\mathsf{TOF}$.
\end{definition}
However, we are interested in a simple fragment of the ZX-calculus, namely the angle-free calculus for real stabilizer circuits, $\mathsf{ZX}_\pi$, described in \cite{realzx} (slightly modified to account for scalars):
\begin{definition}
\label{def:ZX.pi}
Let $\mathsf{ZX}_\pi $ denote the $\dag$-compact closed PROP with generators:
$$
\zmul \hspace*{.5cm} \zmulu \hspace*{.5cm} \zcomul \hspace*{.5cm} \zcomulu \hspace*{.5cm} \hadamard
$$
such that
$$( \zmul, \zmulu, \zcomul, \zcomulu)$$
is a classical structure, corresponding to the $Z$ basis,
and the following identities also hold up to swapping colours:
\begin{figure}[H]
\noindent
\scalebox{1.0}{%
\vbox{%
\begin{mdframed}
\begin{multicols}{2}
\begin{description}
\item[{[PP]}]
\namedlabel{ZX.pi:PP}{\bf [PP]}
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (8, -0) {$\pi$};
\node [style=xmul] (1) at (9.5, -0) {$\pi$};
\node [style=none] (2) at (10.5, -0) {};
\node [style=none] (3) at (7, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\draw (2.center) to (1);
\draw (0) to (3.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (8, -0) {};
\node [style=rn] (1) at (9.5, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
$
\item[{[PI]}]
\namedlabel{ZX.pi:PI}{\bf [PI]}
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (4.25, 2.25) {};
\node [style=rn] (1) at (0.75, 3) {};
\node [style=zmul] (2) at (3, 3) {};
\node [style=xmul] (3) at (1.5, 3) {$\pi$};
\node [style=rn] (4) at (4.25, 3.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple, in=180, out=-37, looseness=1.00] (2) to (0);
\draw [style=simple, in=37, out=180, looseness=1.00] (4) to (2);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (4.5, 2.25) {};
\node [style=rn] (1) at (1, 3) {};
\node [style=rn] (2) at (4.5, 3.75) {};
\node [style=zmul] (3) at (2, 3) {};
\node [style=xmul] (4) at (3.5, 3.75) {$\pi$};
\node [style=xmul] (5) at (3.5, 2.25) {$\pi$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (3);
\draw [style=simple] (5) to (0);
\draw [style=simple] (4) to (2);
\draw [style=simple, in=37, out=180, looseness=1.00] (4) to (3);
\draw [style=simple, in=-37, out=180, looseness=1.00] (5) to (3);
\end{pgfonlayer}
\end{tikzpicture}$
\hfil$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0.75, 2.25) {};
\node [style=rn] (1) at (4.25, 3) {};
\node [style=zmul] (2) at (2, 3) {};
\node [style=xmul] (3) at (3.5, 3) {$\pi$};
\node [style=rn] (4) at (0.75, 3.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple, in=0, out=-143, looseness=1.00] (2) to (0);
\draw [style=simple, in=143, out=0, looseness=1.00] (4) to (2);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1, 2.25) {};
\node [style=rn] (1) at (4.5, 3) {};
\node [style=rn] (2) at (1, 3.75) {};
\node [style=zmul] (3) at (3.5, 3) {};
\node [style=xmul] (4) at (2, 3.75) {$\pi$};
\node [style=xmul] (5) at (2, 2.25) {$\pi$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (3);
\draw [style=simple] (5) to (0);
\draw [style=simple] (4) to (2);
\draw [style=simple, in=143, out=0, looseness=1.00] (4) to (3);
\draw [style=simple, in=-143, out=0, looseness=1.00] (5) to (3);
\end{pgfonlayer}
\end{tikzpicture}$
\item[{[B.U']}]
\namedlabel{ZX.pi:B.U}{\bf [B.U']}
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (5, 4.5) {};
\node [style=xmul] (1) at (3, 4) {};
\node [style=zmul] (2) at (4, 4) {};
\node [style=rn] (3) at (5, 3.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-27, out=180, looseness=1.00] (3) to (2);
\draw [style=simple] (2) to (1);
\draw [style=simple, in=180, out=27, looseness=1.00] (2) to (0);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (5.25, 4.5) {};
\node [style=xmul] (1) at (4, 4.5) {};
\node [style=rn] (2) at (5.25, 3.75) {};
\node [style=xmul] (3) at (4, 3.75) {};
\node [style=xmul] (4) at (4, 5.5) {};
\node [style=zmul] (5) at (5, 5.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (2);
\draw [style=simple] (0) to (1);
\draw [style=simple, in=-45, out=-135, looseness=1.25] (5) to (4);
\draw [style=simple, in=135, out=45, looseness=1.25] (4) to (5);
\draw [style=simple] (5) to (4);
\end{pgfonlayer}
\end{tikzpicture}
$
\hfil
$
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (5, 4.5) {};
\node [style=xmul] (1) at (3, 4) {};
\node [style=zmul] (2) at (4, 4) {};
\node [style=rn] (3) at (5, 3.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-27, out=180, looseness=1.00] (3) to (2);
\draw [style=simple] (2) to (1);
\draw [style=simple, in=180, out=27, looseness=1.00] (2) to (0);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (5.25, 4.5) {};
\node [style=xmul] (1) at (4, 4.5) {};
\node [style=rn] (2) at (5.25, 3.75) {};
\node [style=xmul] (3) at (4, 3.75) {};
\node [style=xmul] (4) at (4, 5.5) {};
\node [style=zmul] (5) at (5, 5.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (2);
\draw [style=simple] (0) to (1);
\draw [style=simple, in=-45, out=-135, looseness=1.25] (5) to (4);
\draw [style=simple, in=135, out=45, looseness=1.25] (4) to (5);
\draw [style=simple] (5) to (4);
\end{pgfonlayer}
\end{tikzpicture}
$
\item[{[B.H']}]
\namedlabel{ZX.pi:B.H}{\bf [B.H']}
\hfil$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (5, 5) {};
\node [style=xmul] (1) at (4, 5) {};
\node [style=zmul] (2) at (3, 5) {};
\node [style=rn] (3) at (2, 5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (1);
\draw [style=simple, bend right=45, looseness=1.25] (1) to (2);
\draw [style=simple] (3) to (2);
\draw [style=simple, bend right=45, looseness=1.25] (2) to (1);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (2, 4.5) {};
\node [style=zmul] (1) at (3, 4.5) {};
\node [style=rn] (2) at (5, 4.5) {};
\node [style=xmul] (3) at (4, 4.5) {};
\node [style=xmul] (4) at (2, 5.5) {};
\node [style=zmul] (5) at (3, 5.5) {};
\node [style=zmul] (6) at (5, 5.5) {};
\node [style=xmul] (7) at (4, 5.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (1);
\draw [style=simple] (2) to (3);
\draw [style=simple, in=-45, out=-135, looseness=1.25] (5) to (4);
\draw [style=simple, in=135, out=45, looseness=1.25] (4) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple, in=-45, out=-135, looseness=1.25] (6) to (7);
\draw [style=simple, in=135, out=45, looseness=1.25] (7) to (6);
\draw [style=simple] (6) to (7);
\end{pgfonlayer}
\end{tikzpicture}$
\item[{[B.M']}]
\namedlabel{ZX.pi:B.M}{\bf [B.M']}
\hfil$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, 3) {};
\node [style=xmul] (1) at (-1, 3) {};
\node [style=xmul] (2) at (-1, 2) {};
\node [style=zmul] (3) at (1, 3) {};
\node [style=zmul] (4) at (1, 2) {};
\node [style=rn] (5) at (2, 2) {};
\node [style=rn] (6) at (-2, 2) {};
\node [style=rn] (7) at (2, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (6) to (2);
\draw [style=simple, bend right, looseness=1.00] (2) to (4);
\draw [style=simple] (4) to (5);
\draw [style=simple] (3) to (2);
\draw [style=simple] (4) to (1);
\draw [style=simple, bend left, looseness=1.00] (1) to (3);
\draw [style=simple] (0) to (1);
\draw [style=simple] (7) to (3);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-1, 3) {};
\node [style=rn] (1) at (2, 2) {};
\node [style=rn] (2) at (-1, 2) {};
\node [style=xmul] (3) at (1, 2.5) {};
\node [style=zmul] (4) at (0, 2.5) {};
\node [style=rn] (5) at (2, 3) {};
\node [style=xmul] (6) at (1, 3.75) {};
\node [style=zmul] (7) at (0, 3.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=0, out=165, looseness=1.00] (4) to (0);
\draw [style=simple, in=-165, out=0, looseness=1.00] (2) to (4);
\draw [style=simple] (3) to (4);
\draw [style=simple, in=-15, out=180, looseness=1.00] (1) to (3);
\draw [style=simple, in=180, out=15, looseness=1.00] (3) to (5);
\draw [style=simple, in=-45, out=-135, looseness=1.25] (6) to (7);
\draw [style=simple, in=45, out=135, looseness=1.25] (6) to (7);
\draw [style=simple] (7) to (6);
\end{pgfonlayer}
\end{tikzpicture}$
\item[{[H2]}]
\namedlabel{ZX.pi:H2}{\bf [H2]}
\hfil$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, 3) {};
\node [style=rn] (1) at (1, 3) {};
\node [style=h] (2) at (-1, 3) {};
\node [style=h] (3) at (0, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple] (2) to (0);
\end{pgfonlayer}
\end{tikzpicture}=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, 3) {};
\node [style=rn] (1) at (1, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
$
\item[{[ZO]}]
\namedlabel{ZX.pi:ZO}{\bf [ZO]}
\hfil$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (2) at (4, 1) {$\pi$};
\node [style=rn] (3) at (5, -0) {};
\node [style=rn] (4) at (3, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (4);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (2) at (4.5, 1) {$\pi$};
\node [style=rn] (3) at (3, -0) {};
\node [style=rn] (4) at (6, -0) {};
\node [style=zmul] (5) at (4, -0) {};
\node [style=xmul] (6) at (5, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (4) to (6);
\draw [style=simple] (5) to (3);
\end{pgfonlayer}
\end{tikzpicture}
$
\item[{[IV]}]
\namedlabel{ZX.pi:IV}{\bf [IV]}
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (0, 2.75) {};
\node [style=xmul] (1) at (2, 2.75) {};
\node [style=xmul] (2) at (2, 1.75) {};
\node [style=zmul] (3) at (0, 1.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (2);
\draw [bend right, looseness=1.25] (1) to (0);
\draw [bend right, looseness=1.25] (0) to (1);
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, 2.75) {};
\node [style=rn] (1) at (2, 2.75) {};
\node [style=rn] (2) at (2, 1.75) {};
\node [style=rn] (3) at (0, 1.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\end{pgfonlayer}
\end{tikzpicture}
$
\item[{[L]}]
\namedlabel{ZX.pi:L}{\bf [L]}
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, 1) {};
\node [style=rn] (1) at (-2, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [bend left=90, looseness=2.50] (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, 1) {};
\node [style=rn] (1) at (-2, 2) {};
\node [style=xmul] (2) at (-1, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [in=90, out=0, looseness=1] (1) to (2);
\draw [in=0, out=-90, looseness=1] (2) to (0);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, 1) {};
\node [style=rn] (1) at (-2, 2) {};
\node [style=zmul] (2) at (-1, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [in=90, out=0, looseness=1] (1) to (2);
\draw [in=0, out=-90, looseness=1] (2) to (0);
\end{pgfonlayer}
\end{tikzpicture}
$
\hfil$
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, 1) {};
\node [style=rn] (1) at (-2, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [bend left=90, looseness=2.50] (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, 1) {};
\node [style=rn] (1) at (-2, 2) {};
\node [style=xmul] (2) at (-1, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [in=90, out=0, looseness=1] (1) to (2);
\draw [in=0, out=-90, looseness=1] (2) to (0);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, 1) {};
\node [style=rn] (1) at (-2, 2) {};
\node [style=zmul] (2) at (-1, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [in=90, out=0, looseness=1] (1) to (2);
\draw [in=0, out=-90, looseness=1] (2) to (0);
\end{pgfonlayer}
\end{tikzpicture}
$
\item[{[S1]}]
\namedlabel{ZX.pi:S1}{\bf [S1]}
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (4, -0) {$\pi$};
\node [style=rn] (1) at (3, -0) {};
\node [style=rn] (2) at (5, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (0);
\draw [style=simple] (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (4.5, 1) {};
\node [style=rn] (1) at (3, -0) {};
\node [style=rn] (2) at (6, -0) {};
\node [style=zmul] (3) at (4.5, -0) {};
\node [style=zmul] (4) at (3.75, -0.75) {};
\node [style=xmul] (5) at (5.25, -0.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=0, out=30, looseness=1.25] (3) to (0);
\draw [style=simple, in=150, out=180, looseness=1.25] (0) to (3);
\draw [style=simple] (2) to (3);
\draw [style=simple] (3) to (1);
\draw (5) to (4);
\end{pgfonlayer}
\end{tikzpicture}
$
\item[{[S2]}]
\namedlabel{ZX.pi:S2}{\bf [S2]}
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1, 2) {$\vdots$};
\node [style=xmul] (1) at (0, 2) {$\alpha$};
\node [style=rn] (2) at (-1, 2) {$\vdots$};
\node [style=rn] (3) at (1, 3) {};
\node [style=rn] (4) at (1, 1) {};
\node [style=rn] (5) at (-1, 1) {};
\node [style=rn] (6) at (-1, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=135, out=0, looseness=1.00] (6) to (1);
\draw [style=simple, in=180, out=45, looseness=1.00] (1) to (3);
\draw [style=simple, in=-45, out=180, looseness=1.00] (4) to (1);
\draw [style=simple, in=0, out=-135, looseness=1.00] (1) to (5);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1.5, 2) {$\vdots$};
\node [style=zmul] (1) at (0, 2) {$\alpha$};
\node [style=rn] (2) at (2, 1) {};
\node [style=rn] (3) at (2, 3) {};
\node [style=rn] (4) at (-2, 1) {};
\node [style=rn] (5) at (-2, 3) {};
\node [style=rn] (6) at (-1.5, 2) {$\vdots$};
\node [style=h] (7) at (1, 3) {};
\node [style=h] (8) at (1, 1) {};
\node [style=h] (9) at (-1, 1) {};
\node [style=h] (10) at (-1, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (7);
\draw [style=simple] (10) to (5);
\draw [style=simple] (4) to (9);
\draw [style=simple] (2) to (8);
\draw [style=simple, in=135, out=0, looseness=1.00] (10) to (1);
\draw [style=simple, in=180, out=45, looseness=1.00] (1) to (7);
\draw [style=simple, in=-45, out=180, looseness=1.00] (8) to (1);
\draw [style=simple, in=0, out=-135, looseness=1.00] (1) to (9);
\end{pgfonlayer}
\end{tikzpicture}
$
\item[{[S3]}]
\namedlabel{ZX.pi:S3}{\bf [S3]}
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1, 2) {$\vdots$};
\node [style=zmul] (1) at (0, 2) {$0$};
\node [style=rn] (2) at (-1, 2) {$\vdots$};
\node [style=rn] (3) at (1, 3) {};
\node [style=rn] (4) at (1, 1) {};
\node [style=rn] (5) at (-1, 1) {};
\node [style=rn] (6) at (-1, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=135, out=0, looseness=1.00] (6) to (1);
\draw [style=simple, in=180, out=45, looseness=1.00] (1) to (3);
\draw [style=simple, in=-45, out=180, looseness=1.00] (4) to (1);
\draw [style=simple, in=0, out=-135, looseness=1.00] (1) to (5);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1.5, 2) {$\vdots$};
\node [style=zmul] (1) at (0, 2) {$$};
\node [style=rn] (2) at (2, 1) {};
\node [style=rn] (3) at (2, 3) {};
\node [style=rn] (4) at (-2, 1) {};
\node [style=rn] (5) at (-2, 3) {};
\node [style=rn] (6) at (-1.5, 2) {$\vdots$};
\node [style=rn] (7) at (1, 3) {};
\node [style=rn] (8) at (1, 1) {};
\node [style=rn] (9) at (-1, 1) {};
\node [style=rn] (10) at (-1, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (7);
\draw [style=simple] (10) to (5);
\draw [style=simple] (4) to (9);
\draw [style=simple] (2) to (8);
\draw [style=simple, in=135, out=0, looseness=1.00] (10) to (1);
\draw [style=simple, in=180, out=45, looseness=1.00] (1) to (7);
\draw [style=simple, in=-45, out=180, looseness=1.00] (8) to (1);
\draw [style=simple, in=0, out=-135, looseness=1.00] (1) to (9);
\end{pgfonlayer}
\end{tikzpicture}
$
\end{description}
\end{multicols}
\
\end{mdframed}
}}
\caption{The identities of $\mathsf{ZX}_\pi$ (where $\alpha \in \{0,\pi\}$)}
\label{fig:ZXPI}
\end{figure}
The last three axioms are actually definitions, which simplify the presentation of $\mathsf{ZX}_\pi$.
Note that the axioms of a classical structure are omitted from this box to save space.
These axioms imply that the black and white Frobenius algebras are complementary, with the antipode being the identity.
This category has a canonical \dag-functor, as all of the stated axioms are horizontally symmetric; it is also $\dag$-compact closed.
This category embeds into $\mathsf{FHilb}$: the black Frobenius algebra corresponds to the Pauli $Z$ basis; the white Frobenius algebra corresponds to the Pauli $X$ basis; the gate $\hadamard$ corresponds to the Hadamard gate; and $\zpi$ and $\xpi$ correspond to the $Z$ and $X$ $\pi$-phase-shifts respectively. In particular, the $X$ $\pi$-phase-shift is the not gate.
\end{definition}
Because the $\pi$-phases are given by \ref{ZX.pi:S3}, it is immediate, by the commutative spider theorem, that they are phase shifts:
\begin{description}
\item[{[PH]}]
\namedlabel{ZX.pi:PH}{\bf [PH]}
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (8, -0) {$\pi$};
\node [style=xmul] (1) at (9, -0) {};
\node [style=none] (2) at (7, -0) {};
\node [style=none] (3) at (10, 0.5) {};
\node [style=none] (4) at (10, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-27, out=180, looseness=1.00] (4.center) to (1);
\draw [style=simple] (1) to (0);
\draw [style=simple] (0) to (2.center);
\draw [style=simple, in=180, out=27, looseness=1.00] (1) to (3.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (9, 0) {};
\node [style=none] (1) at (8, 0) {};
\node [style=none] (2) at (10.25, -0.5) {};
\node [style=none] (3) at (10.25, 0.5) {};
\node [style=xmul] (4) at (10.25, -0.5) {$\pi$};
\node [style=none] (5) at (11.25, -0.5) {};
\node [style=none] (6) at (11.25, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=27, out=180, looseness=1.00] (3.center) to (0);
\draw [style=simple, in=180, out=-27, looseness=1.00] (0) to (2.center);
\draw [style=simple] (6.center) to (3.center);
\draw [style=simple] (2.center) to (5.center);
\draw [style=simple] (0) to (1.center);
\end{pgfonlayer}
\end{tikzpicture}
$
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (9, -0) {$\pi$};
\node [style=xmul] (1) at (8, -0) {};
\node [style=none] (2) at (10, -0) {};
\node [style=none] (3) at (7, 0.5) {};
\node [style=none] (4) at (7, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=-153, out=0, looseness=1.00] (4.center) to (1);
\draw [style=simple] (1) to (0);
\draw [style=simple] (0) to (2.center);
\draw [style=simple, in=0, out=153, looseness=1.00] (1) to (3.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (10.25, 0) {};
\node [style=none] (1) at (11.25, 0) {};
\node [style=none] (2) at (9, -0.5) {};
\node [style=none] (3) at (9, 0.5) {};
\node [style=xmul] (4) at (9, -0.5) {$\pi$};
\node [style=none] (5) at (8, -0.5) {};
\node [style=none] (6) at (8, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=153, out=0, looseness=1.00] (3.center) to (0);
\draw [style=simple, in=0, out=-153, looseness=1.00] (0) to (2.center);
\draw [style=simple] (6.center) to (3.center);
\draw [style=simple] (2.center) to (5.center);
\draw [style=simple] (0) to (1.center);
\end{pgfonlayer}
\end{tikzpicture}
$
\end{description}
In bra-ket notation, a black spider from $n$ to $m$ with angle $\theta$ is interpreted as follows in $\mathsf{FHilb}$ (the $n$ inputs contribute the bras and the $m$ outputs the kets):
$$|0\>^{ \otimes m}\< 0|^{ \otimes n}+e^{ i \theta}|1\>^{ \otimes m}\< 1|^{ \otimes n}$$
and a white spider from $n$ to $m$ with angle $\theta$ is interpreted as follows in $\mathsf{FHilb}$:
$$|+\>^{ \otimes m}\< +|^{ \otimes n}+e^{ i \theta}|-\>^{ \otimes m}\< -|^{ \otimes n}$$
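These interpretations are easy to check concretely. The following NumPy sketch is our own aid (the helper names are ours, not from \cite{realzx}); it assumes the standard qubit encoding of $\mathsf{FHilb}$, so that a spider from $n$ to $m$ becomes a $2^m \times 2^n$ matrix:
\begin{verbatim}
# A minimal sketch, assuming the standard qubit encoding:
# a spider from n to m is realized as a 2^m x 2^n matrix.
import numpy as np

ket0 = np.array([[1.], [0.]]); ket1 = np.array([[0.], [1.]])
ketp = (ket0 + ket1) / np.sqrt(2); ketm = (ket0 - ket1) / np.sqrt(2)

def power(v, k):                 # k-fold tensor power of a column vector
    out = np.array([[1.]])
    for _ in range(k):
        out = np.kron(out, v)
    return out

def z_spider(n, m, theta=0.0):   # |0>^m <0|^n + e^{i theta} |1>^m <1|^n
    return (power(ket0, m) @ power(ket0, n).T
            + np.exp(1j * theta) * power(ket1, m) @ power(ket1, n).T)

def x_spider(n, m, theta=0.0):   # the same formula in the |+>, |-> basis
    return (power(ketp, m) @ power(ketp, n).T
            + np.exp(1j * theta) * power(ketm, m) @ power(ketm, n).T)

assert np.allclose(x_spider(1, 1, np.pi), [[0, 1], [1, 0]])  # X pi-phase: not
assert np.allclose(z_spider(1, 1, np.pi), np.diag([1, -1]))  # Z pi-phase: Z
assert np.isclose(x_spider(0, 0, np.pi).item(), 0)  # floating pi node: zero
\end{verbatim}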
Note that the controlled-not gate has a succinct representation in $\mathsf{ZX}_\pi$ (this can be verified by calculation):
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2.25, 2) {};
\node [style=rn] (1) at (-2.25, 3) {};
\node [style=rn] (2) at (1.25, 3) {};
\node [style=rn] (3) at (1.25, 2) {};
\node [style=zmul] (4) at (-0.5, 3) {};
\node [style=xmul] (5) at (-0.5, 2) {};
\node [style=zmul] (6) at (-1.25, 4.25) {};
\node [style=xmul] (7) at (0.25, 4.25) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (5) to (4);
\draw [style=simple] (4) to (1);
\draw [style=simple] (4) to (2);
\draw [style=simple, in=0, out=180, looseness=1.00] (3) to (5);
\draw [style=simple] (5) to (0);
\draw [style=simple] (7) to (6);
\end{pgfonlayer}
\end{tikzpicture}
$$
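Continuing the NumPy sketch above (and still only as a sanity check), the diagram evaluates to the controlled-not gate exactly: composing the two spiders produces $\frac{1}{\sqrt 2}\,\mathsf{cnot}$, and the floating $Z$--$X$ edge contributes the missing $\sqrt 2$:
\begin{verbatim}
I2 = np.eye(2)
edge = (x_spider(1, 0) @ z_spider(0, 1)).item()   # Z--X edge: sqrt(2)
diagram = edge * np.kron(I2, x_spider(2, 1)) @ np.kron(z_spider(1, 2), I2)
CNOT = np.array([[1,0,0,0], [0,1,0,0], [0,0,0,1], [0,0,1,0]])
assert np.allclose(diagram, CNOT)
\end{verbatim}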
This means that $\mathsf{ZX}_\pi$ contains all of the generators of the real Clifford group. Furthermore, the following is known:
\begin{theorem}\cite{realzx} \label{thm:zxcomplete}
$\mathsf{ZX}_\pi$ is complete for real stabilizer states.
\end{theorem}
The original presentation of $\mathsf{ZX}_\pi$ in \cite{realzx} did not account for scalars; instead, circuits were identified up to an invertible scalar, and the zero scalar was ignored entirely. The original completeness result of \cite{realzx} is therefore not as strong as Theorem \ref{thm:zxcomplete}. In particular, the original calculus does not embed into $\mathsf{Mat}_\mathbb{C}$, as its relations are not sound there. For example, the following map is interpreted as $\sqrt 2$, not $1$, in $\mathsf{Mat}_\mathbb{C}$:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (3, 2) {};
\node [style=zmul] (1) at (4, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
$$
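Indeed, in the NumPy encoding sketched earlier:
\begin{verbatim}
assert np.isclose((x_spider(1, 0) @ z_spider(0, 1)).item(), np.sqrt(2))
\end{verbatim}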
Later, \cite{scalars,removingstar} showed that by rescaling certain axioms to make them sound, and by adding Axioms \ref{ZX.pi:IV} and \ref{ZX.pi:ZO}, this fragment of the ZX-calculus is also complete for scalars. The properly scaled axioms have all been collected in Figure \ref{fig:ZXPI}.
\section{Embedding \texorpdfstring{$\mathsf{CNOT}$}{CNOT} into \texorpdfstring{$\mathsf{ZX}_\pi$}{the real stabilizer fragment of the ZX-calculus} }
Consider the interpretation of $\mathsf{CNOT}$ into $\mathsf{ZX}_\pi$, sending:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -0) {};
\node [style=rn] (1) at (2, -0) {};
\node [style=oplus] (2) at (1, -1) {};
\node [style=dot] (3) at (1, -0) {};
\node [style=rn] (4) at (2, -1) {};
\node [style=rn] (5) at (0, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (3);
\draw [style=simple] (0) to (3);
\draw [style=simple] (3) to (1);
\draw [style=simple] (4) to (2);
\draw [style=simple] (2) to (5);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-2, 2) {};
\node [style=rn] (1) at (-2, 3) {};
\node [style=rn] (2) at (1, 3) {};
\node [style=rn] (3) at (1, 2) {};
\node [style=zmul] (4) at (-0.5, 3) {};
\node [style=xmul] (5) at (-0.5, 2) {};
\node [style=zmul] (6) at (-1.25, 4.25) {};
\node [style=xmul] (7) at (0.25, 4.25) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (5) to (4);
\draw [style=simple] (4) to (1);
\draw [style=simple] (4) to (2);
\draw [style=simple, in=0, out=180, looseness=1.00] (3) to (5);
\draw [style=simple] (5) to (0);
\draw [style=simple] (7) to (6);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{.75cm}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, 0) {};
\node [style=none] (1) at (1, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (1, -1) {$\pi$};
\node [style=rn] (1) at (2, -1) {};
\node [style=zmul] (2) at (1, -0) {};
\node [style=xmul] (3) at (2, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw [bend left=45, looseness=1.25] (3) to (2);
\draw [bend left=45, looseness=1.25] (2) to (3);
\draw (3) to (2);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{.75cm}
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0, 0) {};
\node [style=none] (1) at (1, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=xmul] (0) at (2, -1) {$\pi$};
\node [style=rn] (1) at (1, -1) {};
\node [style=zmul] (2) at (1, -0) {};
\node [style=xmul] (3) at (2, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw [bend left=45, looseness=1.25] (3) to (2);
\draw [bend left=45, looseness=1.25] (2) to (3);
\draw (3) to (2);
\end{pgfonlayer}
\end{tikzpicture}
$$
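As a quick soundness check (using the NumPy helpers from before; this is not part of the proof of functoriality), the image of the $|1\>$ ancillary bit is the $X$ $\pi$ state weighted by the three-edge $Z$--$X$ loop, which evaluates to $1/\sqrt 2$:
\begin{verbatim}
loop = (x_spider(3, 0) @ z_spider(0, 3)).item()  # 3-edge Z--X loop: 1/sqrt(2)
assert np.isclose(loop, 1 / np.sqrt(2))
assert np.isclose(edge * loop, 1.0)              # cf. the [IV] axiom
assert np.allclose(loop * x_spider(0, 1, np.pi), ket1)  # the image of |1>
\end{verbatim}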
We explicitly prove that this interpretation is functorial.
\begin{lemma}\label{lem:cnotzxfunc}
The interpretation of $\mathsf{CNOT}$ into $\mathsf{ZX}_\pi$ is functorial.
\end{lemma}
\begin{proof}
See Appendix \ref{sec:embeddingcnotzx}, Lemma \ref{lem:cnotzxfunc:proof}
\end{proof}
Because the standard interpretations of $\mathsf{CNOT}$ and $\mathsf{ZX}_\pi$ into $\mathsf{Mat}_\mathbb{C}$ are faithful and make the following diagram of strict \dag-symmetric monoidal functors commute, the interpretation $\mathsf{CNOT}\to\mathsf{ZX}_\pi$ is itself faithful:
$$
\xymatrix{
\mathsf{CNOT} \ar@{ >->}[dr] \ar[d]\\
\mathsf{ZX}_\pi \ar@{ >->}[r] & \mathsf{Mat}_\mathbb{C}
}
$$
\section{Extending \texorpdfstring{$\mathsf{CNOT}$}{CNOT} to \texorpdfstring{$\mathsf{ZX}_\pi$}{the real stabilizer fragment of the ZX-calculus}}
As opposed to the ZX-calculus, the identities of $\mathsf{CNOT}$ are given in terms of {\em circuit relations}. When applying rules of the ZX-calculus, circuits can be transformed into intermediary representations in which the circuit structure, and hence the flow of information, is lost. Several authors have found complete circuit relations for various fragments of quantum computing. Notably, Selinger found a complete set of identities for Clifford circuits (stabilizer circuits without ancillary bits) \cite{selinger2015generators}. Similarly, Amy et al. found a complete set of identities for CNOT-dihedral circuits (without ancillary bits) \cite{1701.00140}.
In this section, we provide a complete set of circuit relations for {\em real} stabilizer circuits (although circuits can have norms greater than 1).
We show that $\mathsf{CNOT}$ embeds into $\mathsf{ZX}_\pi$, and we complete $\mathsf{CNOT}$ to $\mathsf{ZX}_\pi$ by adding the Hadamard gate and the scalar $\sqrt 2$ as generators, along with five relations.
\begin{definition}
\label{def:cnoth}
Let $\mathsf{CNOT}+H$ denote the PROP freely generated by the generators and axioms of $\mathsf{CNOT}$, together with two additional generators, the Hadamard gate and the scalar $\sqrt{2}$:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, 0) {};
\node [style=none] (1) at (1, 0) {};
\node [style=h] (2) at (0.5, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1.center) to (2);
\draw (2) to (0.center);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{1cm}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=map] (1) at (-0.75, -0) {$\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\end{pgfonlayer}
\end{tikzpicture}
$$
satisfying the following identities:
\begin{figure}[H]
\noindent
\scalebox{1.0}{%
\vbox{%
\begin{mdframed}
\begin{multicols}{2}
\begin{description}
\item[{[H.I]}]
\namedlabel{cnoth:H.I}{\bf [H.I]}
\label{H.I}
\hfil$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -0) {};
\node [style=h] (1) at (1, -0) {};
\node [style=h] (2) at (2, -0) {};
\node [style=rn] (3) at (3, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (3);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, 0) {};
\node [style=rn] (1) at (3, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
$
\item[{[H.F]}]
\namedlabel{cnoth:H.F}{\bf [H.F]}
\hfil$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1, -0.5) {};
\node [style=rn] (1) at (3, -0.5) {};
\node [style=oplus] (2) at (2, -0.5) {};
\node [style=h] (3) at (2.5, -0.5) {};
\node [style=h] (4) at (1.5, -0.5) {};
\node [style=h] (5) at (1.5, -1) {};
\node [style=dot] (6) at (2, -1) {};
\node [style=h] (7) at (2.5, -1) {};
\node [style=rn] (8) at (3, -1) {};
\node [style=rn] (9) at (1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (4);
\draw [style=simple] (4) to (2);
\draw [style=simple] (2) to (3);
\draw [style=simple] (3) to (1);
\draw [style=simple] (7) to (6);
\draw [style=simple] (2) to (6);
\draw [style=simple] (9) to (5);
\draw [style=simple] (5) to (6);
\draw [style=simple] (7) to (8);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0.5, 0) {};
\node [style=rn] (1) at (1.5, 0) {};
\node [style=oplus] (2) at (1, -0.5) {};
\node [style=dot] (3) at (1, 0) {};
\node [style=rn] (4) at (1.5, -0.5) {};
\node [style=rn] (5) at (0.5, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (3);
\draw [style=simple] (0) to (3);
\draw [style=simple] (3) to (1);
\draw [style=simple] (4) to (2);
\draw [style=simple] (2) to (5);
\end{pgfonlayer}
\end{tikzpicture}
$
\item[{[H.L]}]
\namedlabel{cnoth:H.L'}{\bf [H.L]}
\hfil$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (5) at (-1.25, -1.5) {};
\node [style=zeroout] (6) at (3.25, -1.5) {};
\node [style=rn] (7) at (3.25, -1) {};
\node [style=rn] (8) at (-1.25, -1) {};
\node [style=h] (11) at (1, -1.5) {};
\node [style=h] (12) at (-0.5, -1.5) {};
\node [style=h] (13) at (2.5, -1.5) {};
\node [style=dot] (14) at (0.25, -1.5) {};
\node [style=dot] (15) at (1.75, -1.5) {};
\node [style=oplus] (16) at (0.25, -1) {};
\node [style=oplus] (17) at (1.75, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (17) to (16);
\draw (16) to (14);
\draw (17) to (15);
\draw (15) to (14);
\draw (7) to (17);
\draw (16) to (8);
\draw (5) to (12);
\draw (12) to (14);
\draw (15) to (13);
\draw (13) to (6);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, 0) {};
\node [style=rn] (1) at (2, 0) {};
\node [style=oplus] (2) at (1, 0) {};
\node [style=zeroin] (3) at (0, -0.5) {};
\node [style=zeroout] (4) at (2, -0.5) {};
\node [style=h] (5) at (1, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (2);
\draw [style=simple] (2) to (1);
\draw [style=simple] (4) to (5);
\draw [style=simple] (5) to (3);
\end{pgfonlayer}
\end{tikzpicture}
$
\item[{[H.Z]}]
\namedlabel{cnoth:H.Z'}{\bf [H.Z]}
\hfil
\vspace*{.8cm}
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0.25, 0.5) {};
\node [style=zeroout] (1) at (1.25, 0.5) {};
\node [style=map] (2) at (-1, 0.5) {$\sqrt 2$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0.25, 0.5) {};
\node [style=zeroout] (1) at (1.25, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
$
\item[{[H.S]}]
\namedlabel{cnoth:H.S}{\bf [H.S]}
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=map] (0) at (0, 0) {$\sqrt{2}$};
\node [style=zeroin] (1) at (1.5, -0) {};
\node [style=zeroout] (2) at (3.5, -0) {};
\node [style=h] (3) at (2.5, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (3);
\draw [style=simple] (3) to (1);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, 0) {};
\node [style=none] (1) at (0.5, 0) {};
\end{pgfonlayer}
\end{tikzpicture}
$
\end{description}
\end{multicols}
\
\end{mdframed}
}}
\caption{The identities of $\mathsf{CNOT}+H$ (in addition to the identities of $\mathsf{CNOT}$)}
\label{fig:CNOTH}
\end{figure}
\end{definition}
The inverse of $\sqrt{2}$ is given the alias:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=map] (0) at (0.25, 0) {$1/\sqrt{2}$};
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (0, 0) {};
\node [style=zeroout] (1) at (1, 0) {};
\node [style=h] (2) at (0.5, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw (1) to (2);
\end{pgfonlayer}
\end{tikzpicture}
$$
\ref{cnoth:H.I} states that the Hadamard gate is self-inverse.
\ref{cnoth:H.F} reflects the fact that conjugating the controlled-not gate with Hadamards exchanges the control and operating bits.
\ref{cnoth:H.Z'} states that $\sqrt{2}$ composed with the zero matrix is again the zero matrix.
\ref{cnoth:H.S} makes $\sqrt{2}$ and $1/\sqrt{2}$ inverse to each other.
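Each of these axioms can be spot-checked against the standard interpretation in $\mathsf{Mat}_\mathbb{C}$. The following sketch reuses the NumPy helpers and the \verb|CNOT| matrix from the earlier sketches:
\begin{verbatim}
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
assert np.allclose(H @ H, np.eye(2))                                # [H.I]
CNOT_rev = np.array([[1,0,0,0], [0,0,0,1], [0,0,1,0], [0,1,0,0]])
assert np.allclose(np.kron(H, H) @ CNOT @ np.kron(H, H), CNOT_rev)  # [H.F]
inv_root2 = (ket0.T @ H @ ket0).item()          # the alias <0|H|0> = 1/sqrt(2)
assert np.isclose(np.sqrt(2) * inv_root2, 1.0)                      # [H.S]
\end{verbatim}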
\ref{cnoth:H.L'} can be restated to resemble \ref{ZX.pi:S1}:
\begin{lemma} $ $
\begin{description}
\item[{[H.L']}]
\namedlabel{cnoth:H.L}{\bf [H.L']}
\hfil$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (1, -1) {};
\node [style=dot] (1) at (0.25, -1) {};
\node [style=dot] (2) at (1.75, -1) {};
\node [style=oplus] (3) at (0.25, -1.5) {};
\node [style=oplus] (4) at (1.75, -1.5) {};
\node [style=zeroin] (5) at (-0.5, -1.5) {};
\node [style=zeroout] (6) at (2.5, -1.5) {};
\node [style=rn] (7) at (3.25, -1) {};
\node [style=rn] (8) at (-1.25, -1) {};
\node [style=h] (9) at (2.5, -1) {};
\node [style=h] (10) at (-0.5, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (0);
\draw (0) to (2);
\draw (6) to (4);
\draw (4) to (3);
\draw (3) to (5);
\draw (3) to (1);
\draw (4) to (2);
\draw (7) to (9);
\draw (9) to (2);
\draw (1) to (10);
\draw (10) to (8);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, 0) {};
\node [style=rn] (1) at (2, 0) {};
\node [style=oplus] (2) at (1, 0) {};
\node [style=zeroin] (3) at (0, -0.5) {};
\node [style=zeroout] (4) at (2, -0.5) {};
\node [style=h] (5) at (1, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (2);
\draw [style=simple] (2) to (1);
\draw [style=simple] (4) to (5);
\draw [style=simple] (5) to (3);
\end{pgfonlayer}
\end{tikzpicture}
$
\end{description}
\end{lemma}
\begin{proof}
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (1, -0.5) {};
\node [style=rn] (7) at (4, -1) {};
\node [style=rn] (8) at (-2, -1) {};
\node [style=fanout] (9) at (0, -1) {};
\node [style=fanin] (10) at (2, -1) {};
\node [style=h] (11) at (3, -1) {};
\node [style=h] (12) at (-1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (9) to (8);
\draw [in=-180, out=30] (9) to (0);
\draw [in=150, out=0] (0) to (10);
\draw (10) to (7);
\draw [in=330, out=210, looseness=1.25] (10) to (9);
\end{pgfonlayer}
\end{tikzpicture}
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (1, -1) {};
\node [style=rn] (7) at (4, -0.5) {};
\node [style=rn] (8) at (-2, -0.5) {};
\node [style=fanout] (9) at (0, -0.5) {};
\node [style=fanin] (10) at (2, -0.5) {};
\node [style=h] (11) at (3, -0.5) {};
\node [style=h] (12) at (-1, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (9) to (8);
\draw [in=180, out=-30] (9) to (0);
\draw [in=-150, out=0] (0) to (10);
\draw (10) to (7);
\draw [in=-330, out=-210, looseness=1.25] (10) to (9);
\end{pgfonlayer}
\end{tikzpicture} & \text{$\Delta$ is commutative}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (7) at (4, -0.5) {};
\node [style=rn] (8) at (-2, -0.5) {};
\node [style=dot] (9) at (0, -0.5) {};
\node [style=dot] (10) at (2, -0.5) {};
\node [style=h] (11) at (1, -1.5) {};
\node [style=oplus] (12) at (0, -1.5) {};
\node [style=oplus] (13) at (2, -1.5) {};
\node [style=zeroin] (14) at (-1, -1.5) {};
\node [style=zeroout] (15) at (3, -1.5) {};
\node [style=h] (16) at (3, -0.5) {};
\node [style=h] (17) at (-1, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (9) to (8);
\draw (10) to (7);
\draw (10) to (9);
\draw (14) to (12);
\draw (12) to (9);
\draw (12) to (11);
\draw (11) to (13);
\draw (13) to (10);
\draw (13) to (15);
\end{pgfonlayer}
\end{tikzpicture}
\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (7) at (4, -0.5) {};
\node [style=rn] (8) at (-2, -0.5) {};
\node [style=h] (11) at (1, -1.5) {};
\node [style=zeroin] (14) at (-1.75, -1.5) {};
\node [style=dot] (16) at (0, -1.5) {};
\node [style=dot] (17) at (2, -1.5) {};
\node [style=oplus] (18) at (0, -0.5) {};
\node [style=oplus] (19) at (2, -0.5) {};
\node [style=h] (20) at (3, -1.5) {};
\node [style=h] (23) at (-1, -1.5) {};
\node [style=zeroout] (24) at (4, -1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (17) to (16);
\draw (18) to (16);
\draw (19) to (17);
\draw (8) to (7);
\draw (24) to (14);
\end{pgfonlayer}
\end{tikzpicture}
& \text{\ref{cnoth:H.F}, \ref{cnoth:H.I}}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, 0) {};
\node [style=rn] (1) at (2, 0) {};
\node [style=oplus] (2) at (1, 0) {};
\node [style=zeroin] (3) at (0, -1) {};
\node [style=zeroout] (4) at (2, -1) {};
\node [style=h] (5) at (1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (2);
\draw [style=simple] (2) to (1);
\draw [style=simple] (4) to (5);
\draw [style=simple] (5) to (3);
\end{pgfonlayer}
\end{tikzpicture} & \text{\ref{cnoth:H.L'}}\\
\end{align*}
\end{proof}
\ref{cnoth:H.Z'} can be restated in slightly different terms:
\begin{lemma} $ $
\begin{description}
\item[{[H.Z']}]
\namedlabel{cnoth:H.Z}{\bf [H.Z']}
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0.25, 0.5) {};
\node [style=zeroout] (1) at (1.25, 0.5) {};
\node [style=zeroout] (2) at (1.25, 0) {};
\node [style=h] (3) at (0.75, 0) {};
\node [style=zeroin] (4) at (0.25, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\draw (2) to (3);
\draw (3) to (4);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=onein] (0) at (0.25, 0.5) {};
\node [style=zeroout] (1) at (1.25, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
$
\end{description}
\end{lemma}
\begin{proof}
Immediate by \ref{cnoth:H.S} and \ref{cnoth:H.Z'}.
\end{proof}
There is a derived identity showing that the Frobenius structure identified with the inverse products of $\mathsf{CNOT}$ is unital:
\begin{lemma} $ $
\begin{description}
\item[{[H.U]}]
\namedlabel{cnoth:H.U}{\bf [H.U]}
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroout] (1) at (3, 0) {};
\node [style=h] (2) at (2.5, 0) {};
\node [style=rn] (3) at (4.5, -1) {};
\node [style=map] (7) at (3.75, 0) {$\sqrt 2$};
\node [style=fanout] (8) at (1.5, -0.5) {};
\node [style=rn] (9) at (0.75, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (1);
\draw [in=30, out=180] (2) to (8);
\draw (8) to (9);
\draw [in=-180, out=-30, looseness=0.75] (8) to (3);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0.5, 0) {};
\node [style=rn] (1) at (2.5, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (1) at (2.25, 0) {};
\node [style=h] (2) at (2.75, 0) {};
\node [style=rn] (3) at (0.75, -1) {};
\node [style=map] (7) at (1.5, 0) {$\sqrt 2$};
\node [style=fanin] (8) at (3.75, -0.5) {};
\node [style=rn] (9) at (4.5, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (1);
\draw [in=150, out=0] (2) to (8);
\draw (8) to (9);
\draw [in=0, out=-150, looseness=0.75] (8) to (3);
\end{pgfonlayer}
\end{tikzpicture}
$
\end{description}
\end{lemma}
\begin{proof}
\begin{align*}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroout] (1) at (3, 0) {};
\node [style=h] (2) at (2.5, 0) {};
\node [style=rn] (3) at (4.5, -1) {};
\node [style=map] (7) at (3.75, 0) {$\sqrt 2$};
\node [style=fanout] (8) at (1.5, -0.5) {};
\node [style=rn] (9) at (0.75, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (1);
\draw [in=30, out=180] (2) to (8);
\draw (8) to (9);
\draw [in=-180, out=-30, looseness=0.75] (8) to (3);
\end{pgfonlayer}
\end{tikzpicture}
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1.5, 0) {};
\node [style=zeroout] (1) at (3, 0) {};
\node [style=h] (2) at (2.5, 0) {};
\node [style=rn] (3) at (4.5, -0.5) {};
\node [style=zeroin] (4) at (1.5, -0.5) {};
\node [style=oplus] (5) at (2, -0.5) {};
\node [style=dot] (6) at (2, 0) {};
\node [style=map] (7) at (3.75, 0) {$\sqrt 2$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (1);
\draw [style=simple] (0) to (6);
\draw [style=simple] (6) to (2);
\draw [style=simple] (5) to (4);
\draw [style=simple] (5) to (3);
\draw [style=simple] (5) to (6);
\end{pgfonlayer}
\end{tikzpicture}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1.5, -0.5) {};
\node [style=zeroout] (1) at (3, -0.5) {};
\node [style=h] (2) at (2.5, -0.5) {};
\node [style=rn] (3) at (4.5, 0) {};
\node [style=zeroin] (4) at (1.5, 0) {};
\node [style=oplus] (5) at (2, 0) {};
\node [style=dot] (6) at (2, -0.5) {};
\node [style=map] (7) at (3.75, -0.5) {$\sqrt 2$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2) to (1);
\draw [style=simple] (0) to (6);
\draw [style=simple] (6) to (2);
\draw [style=simple] (5) to (4);
\draw [style=simple] (5) to (3);
\draw [style=simple] (5) to (6);
\end{pgfonlayer}
\end{tikzpicture}
& \text{$\Delta$ is commutative}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1, 0) {};
\node [style=zeroin] (1) at (1, -0.5) {};
\node [style=zeroout] (2) at (2.5, -0.5) {};
\node [style=rn] (3) at (3, 0) {};
\node [style=oplus] (4) at (2, 0) {};
\node [style=dot] (5) at (2, -0.5) {};
\node [style=h] (6) at (2.5, 0) {};
\node [style=h] (7) at (1.5, 0) {};
\node [style=h] (8) at (1.5, -0.5) {};
\node [style=map] (9) at (2, -1.25) {$\sqrt 2$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (8);
\draw [style=simple] (8) to (5);
\draw [style=simple] (5) to (4);
\draw [style=simple] (0) to (3);
\draw [style=simple] (2) to (5);
\end{pgfonlayer}
\end{tikzpicture}
& \ref{cnoth:H.I},\ref{cnoth:H.F}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1, 0) {};
\node [style=zeroin] (1) at (1.25, -0.5) {};
\node [style=zeroout] (2) at (2.25, -0.5) {};
\node [style=rn] (3) at (2.5, 0) {};
\node [style=h] (4) at (2, 0) {};
\node [style=h] (5) at (1.5, 0) {};
\node [style=h] (6) at (1.75, -0.5) {};
\node [style=map] (7) at (1.75, -1.25) {$\sqrt 2$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (6);
\draw [style=simple] (0) to (3);
\draw [style=simple] (2) to (6);
\draw [style=simple] (4) to (5);
\end{pgfonlayer}
\end{tikzpicture}
& \ref{CNOT.7}\\
&= \begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1, 0) {};
\node [style=zeroin] (1) at (1.25, -0.5) {};
\node [style=zeroout] (2) at (2.25, -0.5) {};
\node [style=rn] (3) at (2.5, 0) {};
\node [style=h] (6) at (1.75, -0.5) {};
\node [style=map] (7) at (1.75, -1.25) {$\sqrt 2$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (6);
\draw [style=simple] (0) to (3);
\draw [style=simple] (2) to (6);
\end{pgfonlayer}
\end{tikzpicture}
& \ref{cnoth:H.I}\\
&=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (1, 0) {};
\node [style=rn] (3) at (2.5, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (3);
\end{pgfonlayer}
\end{tikzpicture}
& \ref{cnoth:H.S}
\end{align*}
\end{proof}
\subsection{The completeness of \texorpdfstring{$\mathsf{CNOT}+H$}{CNOT+H}}
We construct functors in both directions between $\mathsf{CNOT}+H$ and $\mathsf{ZX}_\pi$ and show that they are mutually inverse.
\begin{definition}
Let $F:\mathsf{CNOT}+H \to \mathsf{ZX}_\pi$ be the extension of the interpretation $\mathsf{CNOT} \to \mathsf{ZX}_\pi$ which takes:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=map] (0) at (1, 3) {$\sqrt{2}$};
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (-1, 4.25) {};
\node [style=xmul] (1) at (0, 4.25) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{1cm}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-1, -1) {};
\node [style=rn] (1) at (1, -1) {};
\node [style=h] (2) at (0, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw [style=simple] (1) to (2);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-1, -1) {};
\node [style=rn] (1) at (1, -1) {};
\node [style=h] (2) at (0, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw [style=simple] (1) to (2);
\end{pgfonlayer}
\end{tikzpicture}
$$
Let $G:\mathsf{ZX}_\pi\to \mathsf{CNOT}+H $ be the interpretation sending:
$$
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -1) {};
\node [style=rn] (1) at (2, -0.25) {};
\node [style=zmul] (2) at (1, -1) {};
\node [style=rn] (3) at (2, -1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (2);
\draw [style=simple, in=180, out=37, looseness=1.00] (2) to (1);
\draw [style=simple, in=180, out=-27, looseness=1.00] (2) to (3);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -1) {};
\node [style=rn] (1) at (2, -0.25) {};
\node [style=fanout] (2) at (1, -1) {};
\node [style=rn] (3) at (2, -1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (2);
\draw [style=simple, in=180, out=37, looseness=1.00] (2) to (1);
\draw [style=simple, in=180, out=-27, looseness=1.00] (2) to (3);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{1cm}
\begin{tikzpicture}[xscale=-1]
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -1) {};
\node [style=zmul] (1) at (1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=h] (0) at (0, 0) {};
\node [style=zeroin] (1) at (-1, 0) {};
\node [style=rn] (2) at (1, 0) {};
\node [style=map] (3) at (0, 1) {$\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{1cm}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -1) {};
\node [style=rn] (1) at (2, -0.25) {};
\node [style=zmul] (2) at (1, -1) {};
\node [style=rn] (3) at (2, -1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (2);
\draw [style=simple, in=180, out=37, looseness=1.00] (2) to (1);
\draw [style=simple, in=180, out=-27, looseness=1.00] (2) to (3);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -1) {};
\node [style=rn] (1) at (2, -0.25) {};
\node [style=fanout] (2) at (1, -1) {};
\node [style=rn] (3) at (2, -1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (2);
\draw [style=simple, in=180, out=37, looseness=1.00] (2) to (1);
\draw [style=simple, in=180, out=-27, looseness=1.00] (2) to (3);
\end{pgfonlayer}
\end{tikzpicture}
\hspace*{1cm}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (0, -1) {};
\node [style=zmul] (1) at (1, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-1, -1) {};
\node [style=h] (1) at (0, -1) {};
\node [style=zeroout] (2) at (1, -1) {};
\node [style=map] (3) at (0, 0) {$\sqrt{2}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (0) to (2);
\end{pgfonlayer}
\end{tikzpicture}
$$
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-1, -1) {};
\node [style=rn] (1) at (1, -1) {};
\node [style=h] (2) at (0, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw [style=simple] (1) to (2);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=rn] (0) at (-1, -1) {};
\node [style=rn] (1) at (1, -1) {};
\node [style=h] (2) at (0, -1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw [style=simple] (1) to (2);
\end{pgfonlayer}
\end{tikzpicture}
$$
\end{definition}
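Before checking functoriality, one can sanity-check the less obvious assignments of $G$ in $\mathsf{Mat}_\mathbb{C}$: the $Z$ unit and counit are indeed Hadamard-conjugated computational ancillary bits weighted by $\sqrt 2$ (again a sketch with our earlier NumPy helpers):
\begin{verbatim}
assert np.allclose(np.sqrt(2) * (H @ ket0), z_spider(0, 1))    # |0> + |1>
assert np.allclose(np.sqrt(2) * (ket0.T @ H), z_spider(1, 0))  # <0| + <1|
\end{verbatim}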
The scalars $\sqrt{2}^n$ are taken to their chosen representatives by $F$ and $G$:
\begin{lemma}\
\label{lem:cantthink}
\begin{enumerate}[label=(\roman*)]
\item
$
\hfil
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (5, 0) {};
\node [style=xmul] (1) at (4, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (1);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=map] (0) at (4.5, 0) {$\sqrt{2}$};
\end{pgfonlayer}
\end{tikzpicture}
$
\item
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (5, 0) {};
\node [style=xmul] (1) at (4, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [bend right=45, looseness=1.25] (0) to (1);
\draw [bend left=45, looseness=1.25] (0) to (1);
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (3, 2) {};
\node [style=h] (1) at (4, 2) {};
\node [style=zeroout] (2) at (5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (1);
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
$
\item
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zeroin] (0) at (3, 2) {};
\node [style=h] (1) at (4, 2) {};
\node [style=zeroout] (2) at (5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (1);
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
\mapsto
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=zmul] (0) at (5, 0) {};
\node [style=xmul] (1) at (4, 0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [bend right=45, looseness=1.25] (0) to (1);
\draw [bend left=45, looseness=1.25] (0) to (1);
\draw (1) to (0);
\end{pgfonlayer}
\end{tikzpicture}
$
\end{enumerate}
\end{lemma}
\begin{proof}
See Lemma \ref{lem:cantthink:proof}
\end{proof}
This lemma makes it easier to show that $F$ and $G$ are functors:
\begin{lemma}
\label{lem:fisfunc}
$F:\mathsf{CNOT}+H \to \mathsf{ZX}_\pi$ is a strict \dag-symmetric monoidal functor.
\end{lemma}
\begin{proof}
See Appendix \ref{sec:completeness}, Lemma \ref{lem:fisfunc:proof}
\end{proof}
In the other direction:
\begin{lemma}
\label{lem:gisfunc}
$G:\mathsf{ZX}_\pi\to\mathsf{CNOT}+H $ is a strict \dag-symmetric monoidal functor.
\end{lemma}
\begin{proof}
See Lemma \ref{lem:gisfunc:proof}
\end{proof}
Next, we show that these two functors are mutually inverse:
\begin{proposition}
\label{prop:fginv}
$\mathsf{CNOT}+H\xrightarrow{F} \mathsf{ZX}_\pi$ and $\mathsf{ZX}_\pi\xrightarrow{G} \mathsf{CNOT}+H$ are inverses.
\end{proposition}
\begin{proof}
See Proposition \ref{prop:fginv:proof}
\end{proof}
Because all of the axioms of $\mathsf{CNOT}+H$ and $\mathsf{ZX}_\pi$ satisfy the same ``horizontal symmetry,'' we can conclude not only that the two categories are isomorphic, but moreover:
\begin{theorem}
$\mathsf{CNOT}+H$ and $\mathsf{ZX}_\pi$ are strictly $\dag$-symmetric monoidally isomorphic.
\end{theorem}
\section{Towards the Toffoli gate plus the Hadamard gate}
\label{section:tof}
Recall the PROP $\mathsf{TOF}$, generated by the ancillary bits $|1\>$ and $\<1|$ (depicted graphically as in $\mathsf{CNOT}$) as well as the Toffoli gate:
\[ \mathsf{tof} :=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, -0) {};
\node [style=nothing] (1) at (0, 0.5000001) {};
\node [style=nothing] (2) at (0, 1) {};
\node [style=nothing] (3) at (2, 1) {};
\node [style=nothing] (4) at (2, 0.5000001) {};
\node [style=nothing] (5) at (2, -0) {};
\node [style=dot] (6) at (1, 1) {};
\node [style=dot] (7) at (1, 0.5000001) {};
\node [style=oplus] (8) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (6);
\draw (6) to (3);
\draw (4) to (7);
\draw (7) to (1);
\draw (0) to (8);
\draw (8) to (5);
\draw (8) to (7);
\draw (6) to (7);
\end{pgfonlayer}
\end{tikzpicture}
\]
The axioms are given in Figure \ref{fig:TOF}, which we have put in Appendix \ref{sec:tof}.
Recall that we have:
\begin{theorem}
$\mathsf{TOF}$ is discrete-inverse equivalent to the category of partial isomorphisms between finite powers of the two element set, and thus, is complete.
\end{theorem}
By \cite{aaronson}, we have that the Toffoli gate is universal for classical reversible computing, therefore $\mathsf{TOF}$ is a complete set of identities for the universal fragment of classical computing.
However, the category is clearly not universal for quantum computing. Surprisingly, adding the Hadamard gate as a generator yields a category which captures an approximately universal fragment of quantum computing \cite{tofh}.
Thus, one would hope that the completeness of $\mathsf{CNOT}+H$ could be used to give a complete set of identities for a category $\mathsf{TOF}+H$.
Although we have not found such a complete set of identities, the identity \ref{cnoth:H.F} can easily be extended to an identity that characterizes the commutativity of a multiply controlled-$Z$ gate. This could possibly facilitate a two-way translation to and from the ZH-calculus \cite{zh}, like the one we performed between $\mathsf{CNOT}+H$ and $\mathsf{ZX}_\pi$. This would foreseeably be much easier than a translation to or from one of the universal fragments of the ZX-calculus because, despite the recent simplifications of the Toffoli gate in terms of the triangle, the triangle itself does not have a simple representation in terms of the Toffoli gate, the Hadamard gate and computational ancillary bits \cite{vilmart}.
If we conjugate the not gate ($X$ gate) with Hadamard gates, we get the $Z$ gate:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, -0) {};
\node [style=none] (1) at (2, -0) {};
\node [style=dot] (2) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1.center) to (2);
\draw [style=simple] (2) to (0.center);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, -0) {};
\node [style=none] (1) at (4, -0) {};
\node [style=oplus] (2) at (2, -0) {};
\node [style=h] (3) at (3, -0) {};
\node [style=h] (4) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1.center) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple] (2) to (4);
\draw [style=simple] (4) to (0.center);
\end{pgfonlayer}
\end{tikzpicture}
$$
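At the level of the underlying matrices this is the familiar conjugation identity (a quick sanity check outside the graphical calculus, using the standard basis conventions):
$$
HXH \;=\; \frac{1}{2}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix} \;=\; \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix} \;=\; Z.
$$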
Furthermore, if we conjugate the operating bit of the controlled-not gate with Hadamard gates, we get the controlled-$Z$ gate:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, -0) {};
\node [style=dot] (1) at (1, -0) {};
\node [style=none] (2) at (2, -0) {};
\node [style=none] (3) at (0, -0.5) {};
\node [style=dot] (4) at (1, -0.5) {};
\node [style=none] (5) at (2, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2.center) to (1);
\draw [style=simple] (1) to (0.center);
\draw [style=simple] (5.center) to (4);
\draw [style=simple] (4) to (3.center);
\draw [style=simple] (4) to (1);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, 0.5) {};
\node [style=none] (1) at (4, 0.5) {};
\node [style=oplus] (2) at (2, 0.5) {};
\node [style=h] (3) at (3, 0.5) {};
\node [style=h] (4) at (1, 0.5) {};
\node [style=none] (5) at (0, 1) {};
\node [style=none] (6) at (4, 1) {};
\node [style=dot] (7) at (2, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1.center) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple] (2) to (4);
\draw [style=simple] (4) to (0.center);
\draw [style=simple] (2) to (7);
\draw [style=simple] (7) to (6.center);
\draw [style=simple] (7) to (5.center);
\end{pgfonlayer}
\end{tikzpicture}
$$
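Both of these conjugation identities are easy to verify numerically. The following minimal Python/NumPy snippet is our own sanity check, not part of the calculus; it assumes the control sits on the first tensor factor and the operating bit on the second:
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]])                 # not gate
Z = np.diag([1.0, -1.0])                       # Z gate
I = np.eye(2)

CNOT = np.eye(4)[:, [0, 1, 3, 2]]              # control on the first qubit
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

assert np.allclose(H @ X @ H, Z)
# conjugating the operating (target) bit of cnot with Hadamards gives cz
assert np.allclose(np.kron(I, H) @ CNOT @ np.kron(I, H), CZ)
\end{verbatim}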
Because the flow of information in the controlled-$Z$ gate is undirected in the sense that:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, -0) {};
\node [style=dot] (1) at (1, -0) {};
\node [style=none] (2) at (2, -0) {};
\node [style=none] (3) at (0, -0.5) {};
\node [style=dot] (4) at (1, -0.5) {};
\node [style=none] (5) at (2, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (2.center) to (1);
\draw [style=simple] (1) to (0.center);
\draw [style=simple] (5.center) to (4);
\draw [style=simple] (4) to (3.center);
\draw [style=simple] (4) to (1);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0, -0.5) {};
\node [style=dot] (1) at (1, -0) {};
\node [style=none] (2) at (2, -0.5) {};
\node [style=none] (3) at (0, -0) {};
\node [style=dot] (4) at (1, -0.5) {};
\node [style=none] (5) at (2, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=0, out=180, looseness=1.00] (2.center) to (1);
\draw [style=simple, in=0, out=180, looseness=1.00] (1) to (0.center);
\draw [style=simple, in=0, out=180, looseness=1.00] (5.center) to (4);
\draw [style=simple, in=0, out=180, looseness=1.00] (4) to (3.center);
\draw [style=simple] (4) to (1);
\end{pgfonlayer}
\end{tikzpicture}
$$
this motivates the identity $\ref{cnoth:H.F}$ of $\mathsf{CNOT}+H$:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0.5, 0.5) {};
\node [style=none] (1) at (3.5, 0.5) {};
\node [style=oplus] (2) at (2, 0.5) {};
\node [style=h] (3) at (2.75, 0.5) {};
\node [style=h] (4) at (1.25, 0.5) {};
\node [style=none] (5) at (0.5, 1) {};
\node [style=none] (6) at (3.5, 1) {};
\node [style=dot] (7) at (2, 1) {};
\node [style=h] (8) at (1.25, 1) {};
\node [style=h] (9) at (2.75, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1.center) to (3);
\draw [style=simple] (3) to (2);
\draw [style=simple] (2) to (4);
\draw [style=simple] (4) to (0.center);
\draw [style=simple] (2) to (7);
\draw [style=simple] (7) to (6.center);
\draw [style=simple] (7) to (5.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0.5, 0.5) {};
\node [style=none] (1) at (3.5, 0.5) {};
\node [style=none] (2) at (0.5, 1) {};
\node [style=none] (3) at (3.5, 1) {};
\node [style=dot] (4) at (2, 1) {};
\node [style=h] (5) at (1.25, 1) {};
\node [style=h] (6) at (2.75, 1) {};
\node [style=dot] (7) at (2, 0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (4) to (3.center);
\draw [style=simple] (4) to (2.center);
\draw [style=simple] (1.center) to (7);
\draw [style=simple] (7) to (4);
\draw [style=simple] (7) to (0.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0.25, -0.5) {};
\node [style=dot] (1) at (1, -0) {};
\node [style=none] (2) at (1.75, -0.5) {};
\node [style=none] (3) at (0.25, -0) {};
\node [style=dot] (4) at (1, -0.5) {};
\node [style=none] (5) at (1.75, -0) {};
\node [style=h] (6) at (1.75, -0) {};
\node [style=h] (7) at (0.25, -0) {};
\node [style=none] (8) at (2.5, -0.5) {};
\node [style=none] (9) at (-0.75, -0.5) {};
\node [style=none] (10) at (2.5, -0) {};
\node [style=none] (11) at (-0.75, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=0, out=180, looseness=1.00] (2.center) to (1);
\draw [style=simple, in=0, out=180, looseness=1.00] (1) to (0.center);
\draw [style=simple, in=0, out=180, looseness=1.00] (5.center) to (4);
\draw [style=simple, in=0, out=180, looseness=1.00] (4) to (3.center);
\draw [style=simple] (4) to (1);
\draw [style=simple] (8.center) to (2.center);
\draw [style=simple] (5.center) to (10.center);
\draw [style=simple] (0.center) to (9.center);
\draw [style=simple] (11.center) to (3.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (0) at (0.25, -0) {};
\node [style=dot] (1) at (1, -0) {};
\node [style=none] (2) at (1.75, -0) {};
\node [style=none] (3) at (0.25, -0.5) {};
\node [style=dot] (4) at (1, -0.5) {};
\node [style=none] (5) at (1.75, -0.5) {};
\node [style=h] (6) at (1.75, -0.5) {};
\node [style=h] (7) at (0.25, -0.5) {};
\node [style=none] (8) at (2.75, -0.5) {};
\node [style=none] (9) at (-0.75, -0.5) {};
\node [style=none] (10) at (2.75, -0) {};
\node [style=none] (11) at (-0.75, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=0, out=180, looseness=1.00] (2.center) to (1);
\draw [style=simple, in=0, out=180, looseness=1.00] (1) to (0.center);
\draw [style=simple, in=0, out=180, looseness=1.00] (5.center) to (4);
\draw [style=simple, in=0, out=180, looseness=1.00] (4) to (3.center);
\draw [style=simple] (4) to (1);
\draw [style=simple, in=0, out=180, looseness=1.00] (8.center) to (2.center);
\draw [style=simple, in=180, out=0, looseness=1.00] (5.center) to (10.center);
\draw [style=simple, in=0, out=180, looseness=1.00] (0.center) to (9.center);
\draw [style=simple, in=180, out=0, looseness=1.00] (11.center) to (3.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (1, -0) {};
\node [style=none] (1) at (2.5, -0.5) {};
\node [style=none] (2) at (2.5, -0) {};
\node [style=h] (3) at (2.5, -0.5) {};
\node [style=none] (4) at (3.5, -0) {};
\node [style=none] (5) at (3.5, -0.5) {};
\node [style=h] (6) at (-0.5, -0.5) {};
\node [style=none] (7) at (-1.5, -0.5) {};
\node [style=none] (8) at (-0.5, -0) {};
\node [style=none] (9) at (-0.5, -0.5) {};
\node [style=none] (10) at (-1.5, -0) {};
\node [style=h] (11) at (0.25, -0.5) {};
\node [style=h] (12) at (1.75, -0.5) {};
\node [style=oplus] (13) at (1, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple, in=0, out=180, looseness=1.00] (5.center) to (2.center);
\draw [style=simple, in=180, out=0, looseness=1.00] (1.center) to (4.center);
\draw [style=simple, in=0, out=180, looseness=1.00] (8.center) to (7.center);
\draw [style=simple, in=180, out=0, looseness=1.00] (10.center) to (9.center);
\draw [style=simple] (1.center) to (12);
\draw [style=simple] (12) to (13);
\draw [style=simple] (13) to (11);
\draw [style=simple] (11) to (6);
\draw [style=simple] (8.center) to (0);
\draw [style=simple] (0) to (2.center);
\draw [style=simple] (13) to (0);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (1, -0.5) {};
\node [style=none] (1) at (1.75, -0) {};
\node [style=none] (2) at (1.75, -0.5) {};
\node [style=none] (3) at (0.25, -0.5) {};
\node [style=none] (4) at (0.25, -0) {};
\node [style=oplus] (5) at (1, -0) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (5) to (0);
\draw [style=simple] (2.center) to (0);
\draw [style=simple] (0) to (3.center);
\draw [style=simple] (4.center) to (5);
\draw [style=simple] (5) to (1.center);
\end{pgfonlayer}
\end{tikzpicture}
$$
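As a matrix-level sanity check of \ref{cnoth:H.F} (again a sketch under the same conventions as above, not part of the calculus), conjugating both wires of the controlled-not gate with Hadamards exchanges the roles of the control and operating bits:
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.eye(4)[:, [0, 1, 3, 2]]         # control on the first qubit
CNOT_SWAP = np.eye(4)[:, [0, 3, 2, 1]]    # control on the second qubit
HH = np.kron(H, H)

assert np.allclose(HH @ CNOT @ HH, CNOT_SWAP)
\end{verbatim}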
We can continue this, so that by conjugating the operating bit of the Toffoli gate with Hadamard gates, we obtain a doubly controlled-$Z$ gate. This suggests the following (sound) identity:
\begin{description}
\item[{[H.F']}]
\namedlabel{tofh:H.F}{\bf [H.F']}
\hfil
$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (1, -1) {};
\node [style=none] (1) at (3, -1) {};
\node [style=none] (2) at (-1, -1) {};
\node [style=oplus] (3) at (1, -1.5) {};
\node [style=dot] (4) at (1, -0.5) {};
\node [style=h] (5) at (0, -1.5) {};
\node [style=h] (6) at (2, -1.5) {};
\node [style=h] (7) at (2, -0.5) {};
\node [style=h] (8) at (0, -0.5) {};
\node [style=none] (9) at (3, -0.5) {};
\node [style=none] (10) at (3, -1.5) {};
\node [style=none] (11) at (-1, -0.5) {};
\node [style=none] (12) at (-1, -1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (0);
\draw [style=simple] (1.center) to (0);
\draw [style=simple] (0) to (2.center);
\draw [style=simple] (0) to (4);
\draw [style=simple] (10.center) to (6);
\draw [style=simple] (6) to (3);
\draw [style=simple] (3) to (5);
\draw [style=simple] (5) to (12.center);
\draw [style=simple] (11.center) to (8);
\draw [style=simple] (8) to (4);
\draw [style=simple] (4) to (7);
\draw [style=simple] (7) to (9.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (1, 1.5) {};
\node [style=none] (1) at (2, 1.5) {};
\node [style=none] (2) at (0, 1.5) {};
\node [style=oplus] (3) at (1, 2) {};
\node [style=dot] (4) at (1, 1) {};
\node [style=none] (5) at (2, 1) {};
\node [style=none] (6) at (2, 2) {};
\node [style=none] (7) at (0, 1) {};
\node [style=none] (8) at (0, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (0);
\draw [style=simple] (0) to (4);
\draw [style=simple] (8.center) to (3);
\draw [style=simple] (3) to (6.center);
\draw [style=simple] (1.center) to (0);
\draw [style=simple] (0) to (2.center);
\draw [style=simple] (7.center) to (4);
\draw [style=simple] (4) to (5.center);
\end{pgfonlayer}
\end{tikzpicture}
$
\end{description}
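At the matrix level, \ref{tofh:H.F} asserts that conjugating the operating bit and the outer control bit of the Toffoli gate with Hadamards exchanges those two wires. A minimal NumPy sanity check (our own verification aid, assuming qubits are ordered top to bottom with the top wire as the most significant bit):
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

TOF = np.eye(8)[:, [0, 1, 2, 3, 4, 5, 7, 6]]       # controls q1,q2; target q3
TOF_SWAP = np.eye(8)[:, [0, 1, 2, 7, 4, 5, 6, 3]]  # controls q2,q3; target q1
HIH = np.kron(H, np.kron(I, H))

assert np.allclose(HIH @ TOF @ HIH, TOF_SWAP)
\end{verbatim}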
Along with \ref{TOF.15}, the identity \ref{tofh:H.F} entails:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=oplus] (0) at (1, -1.5) {};
\node [style=h] (1) at (0, -1.5) {};
\node [style=h] (2) at (2, -1.5) {};
\node [style=none] (3) at (3, -1.5) {};
\node [style=none] (4) at (-1, -1.5) {};
\node [style=h] (5) at (0, -1) {};
\node [style=none] (6) at (3, -1) {};
\node [style=none] (7) at (-1, -1) {};
\node [style=dot] (8) at (1, -0.5) {};
\node [style=none] (9) at (-1, -0.5) {};
\node [style=dot] (10) at (1, -1) {};
\node [style=h] (11) at (2, -1) {};
\node [style=none] (12) at (3, -0.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3.center) to (2);
\draw [style=simple] (2) to (0);
\draw [style=simple] (0) to (1);
\draw [style=simple] (1) to (4.center);
\draw [style=simple] (12.center) to (8);
\draw [style=simple] (8) to (9.center);
\draw [style=simple] (8) to (10);
\draw [style=simple] (7.center) to (5);
\draw [style=simple] (5) to (10);
\draw [style=simple] (10) to (11);
\draw [style=simple] (11) to (6.center);
\draw [style=simple] (0) to (10);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (1, 1) {};
\node [style=none] (1) at (2, 1.5) {};
\node [style=none] (2) at (0, 1.5) {};
\node [style=oplus] (3) at (1, 1.5) {};
\node [style=dot] (4) at (1, 0.5) {};
\node [style=none] (5) at (2, 0.5) {};
\node [style=none] (6) at (2, 1) {};
\node [style=none] (7) at (0, 0.5) {};
\node [style=none] (8) at (0, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (0);
\draw [style=simple] (0) to (4);
\draw [style=simple, in=180, out=0, looseness=1.00] (8.center) to (3);
\draw [style=simple, in=180, out=0, looseness=1.00] (3) to (6.center);
\draw [style=simple, in=0, out=180, looseness=1.00] (1.center) to (0);
\draw [style=simple, in=0, out=180, looseness=1.00] (0) to (2.center);
\draw [style=simple, in=180, out=0, looseness=1.00] (7.center) to (4);
\draw [style=simple] (4) to (5.center);
\end{pgfonlayer}
\end{tikzpicture}
$$
Thus, we can unambiguously represent the doubly controlled-$Z$ gate as:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (1, 1.5) {};
\node [style=none] (1) at (2, 1.5) {};
\node [style=none] (2) at (0, 1.5) {};
\node [style=dot] (3) at (1, 2) {};
\node [style=dot] (4) at (1, 1) {};
\node [style=none] (5) at (2, 1) {};
\node [style=none] (6) at (2, 2) {};
\node [style=none] (7) at (0, 1) {};
\node [style=none] (8) at (0, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (3) to (0);
\draw [style=simple] (0) to (4);
\draw [style=simple] (8.center) to (3);
\draw [style=simple] (3) to (6.center);
\draw [style=simple] (1.center) to (0);
\draw [style=simple] (0) to (2.center);
\draw [style=simple] (7.center) to (4);
\draw [style=simple] (4) to (5.center);
\end{pgfonlayer}
\end{tikzpicture}
$$
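Numerically, this diagram denotes the diagonal matrix $\mathrm{diag}(1,1,1,1,1,1,1,-1)$. A quick NumPy check of the defining equation (again our own verification aid, same conventions as before):
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

TOF = np.eye(8)[:, [0, 1, 2, 3, 4, 5, 7, 6]]    # Toffoli gate
CCZ = np.diag([1.0, 1, 1, 1, 1, 1, 1, -1])      # doubly controlled-Z
IIH = np.kron(np.eye(4), H)

# conjugating the operating bit of tof with Hadamards gives ccz
assert np.allclose(IIH @ TOF @ IIH, CCZ)
\end{verbatim}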
Indeed, the identity \ref{tofh:H.F} entails a more general form for the generalized controlled-not gate with two or more controls. Recall the definition of a multiply controlled-not gate in \cite{tof}:
\begin{definition}\cite[Definition 5.1]{tof}
\label{definition:Generalizedcnot}
For every $n\in\mathbb{N}$, define the controlled-not gate $\mathsf{cnot}_n:n+1\to n+1$ inductively by:
\begin{itemize}
\item
For the base cases, let $\mathsf{cnot}_0:=\mathsf{not}$, $\mathsf{cnot}_1:=\mathsf{cnot}$ and $\mathsf{cnot}_2:=\mathsf{tof}$.
\item
For all $n \in \mathbb{N}$ such that $n\geq 2$:
\[
\mathsf{cnot}_{n+1} \equiv
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 1) {};
\node [style=nothing] (1) at (0, 1.5) {};
\node [style=nothing] (2) at (0, 2.5) {};
\node [style=dot] (3) at (0.5000001, 1.5) {};
\node [style=dot] (4) at (0.5000001, 2.5) {};
\node [style=nothing] (5) at (1, 2.5) {};
\node [style=nothing] (6) at (1, 1) {};
\node [style=nothing] (7) at (1, 1.5) {};
\node [style=nothing] (8) at (0.5000001, 1.75) {};
\node [style=nothing] (9) at (0.5000001, 2.25) {};
\node [style=oplus] (10) at (0.5000001, 1) {};
\node [style=nothing] (11) at (0, 3) {};
\node [style=nothing] (12) at (1, 3) {};
\node [style=dot] (13) at (0.5000001, 3) {};
\node [style=nothing] (14) at (-0.5000001, 2.5) {};
\node [style=nothing] (15) at (-0.5000001, 1.5) {};
\node [style=nothing] (16) at (-1.25, 2) {$n$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (4);
\draw (3) to (1);
\draw (3) to (8);
\draw (9) to (4);
\draw (3) to (7);
\draw (5) to (4);
\draw (6) to (10);
\draw (10) to (0);
\draw (11) to (13);
\draw (13) to (12);
\draw (10) to (3);
\draw (13) to (4);
\draw [style={densely dotted}] (9) to (8);
\draw [style={decorate, decoration={brace,amplitude=1pt}, xshift={-4 pt}, yshift={0 pt}}] (15) to (14);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (-0.7500001, 1) {};
\node [style=nothing] (1) at (-0.7500001, 1.5) {};
\node [style=dot] (2) at (0.5000001, 1.5) {};
\node [style=dot] (3) at (0.5000001, 2.5) {};
\node [style=nothing] (4) at (1.75, 1) {};
\node [style=nothing] (5) at (1.75, 1.5) {};
\node [style=nothing] (6) at (0.5000001, 1.75) {};
\node [style=nothing] (7) at (0.5000001, 2.25) {};
\node [style=oplus] (8) at (0.5000001, 1) {};
\node [style=oplus] (9) at (0, 2.5) {};
\node [style=oplus] (10) at (1, 2.5) {};
\node [style=zeroin] (11) at (-0.5000001, 2.5) {};
\node [style=zeroout] (12) at (1.5, 2.5) {};
\node [style=dot] (13) at (0, 3) {};
\node [style=dot] (14) at (0, 3.5) {};
\node [style=dot] (15) at (1, 3) {};
\node [style=dot] (16) at (1, 3.5) {};
\node [style=nothing] (17) at (1.75, 3) {};
\node [style=nothing] (18) at (1.75, 3.5) {};
\node [style=nothing] (19) at (-0.7500001, 3.5) {};
\node [style=nothing] (20) at (-0.7500001, 3) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (1);
\draw (2) to (6);
\draw (7) to (3);
\draw (2) to (5);
\draw (4) to (8);
\draw (8) to (0);
\draw (8) to (2);
\draw (19) to (14);
\draw (14) to (16);
\draw (16) to (18);
\draw (17) to (15);
\draw (15) to (13);
\draw (13) to (20);
\draw (9) to (13);
\draw (13) to (14);
\draw (11) to (9);
\draw (9) to (3);
\draw (3) to (10);
\draw (10) to (12);
\draw (10) to (15);
\draw (15) to (16);
\draw [style={densely dotted}] (7) to (6);
\end{pgfonlayer}
\end{tikzpicture}
\]
\end{itemize}
\end{definition}
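Semantically, $\mathsf{cnot}_n$ is the permutation of basis states that flips the target bit exactly when all $n$ control bits are set. The following NumPy sketch gives this denotation directly, sidestepping the ancillary bit of the inductive definition above; it is an illustration of the semantics only, with our own choice of bit ordering:
\begin{verbatim}
import numpy as np

def cnot_n(n):
    # matrix of cnot_n on n+1 bits: controls are the n high bits,
    # the target is the least significant bit
    dim = 2 ** (n + 1)
    M = np.zeros((dim, dim))
    for i in range(dim):
        if i >> 1 == 2 ** n - 1:    # all n control bits are set
            M[i ^ 1, i] = 1         # flip the target bit
        else:
            M[i, i] = 1             # otherwise act as the identity
    return M

TOF = np.eye(8)[:, [0, 1, 2, 3, 4, 5, 7, 6]]
assert np.allclose(cnot_n(2), TOF)   # cnot_2 is the Toffoli gate
\end{verbatim}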
Recall that $\mathsf{cnot}_n$ gates can be decomposed into other $\mathsf{cnot}_n$ gates in the following fashion:
\begin{proposition}\cite[Proposition 5.3 (i)]{tof}
$\mathsf{cnot}_{n+k}$ gates can be zipped and unzipped:
\[
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (1.25, 0.9999999) {};
\node [style=dot] (1) at (1.25, 2) {};
\node [style=nothing] (2) at (1.25, 1.25) {};
\node [style=nothing] (3) at (1.25, 1.75) {};
\node [style=dot] (4) at (1.25, -0) {};
\node [style=nothing] (5) at (1.25, 0.7499999) {};
\node [style=nothing] (6) at (1.25, 0.2500001) {};
\node [style=oplus] (7) at (1.25, -0.5000001) {};
\node [style=nothing] (8) at (2.5, 2) {};
\node [style=nothing] (9) at (2.5, 0.9999999) {};
\node [style=nothing] (10) at (0, 0.9999999) {};
\node [style=nothing] (11) at (0, 2) {};
\node [style=nothing] (12) at (2.5, -0) {};
\node [style=nothing] (13) at (0, -0) {};
\node [style=nothing] (14) at (2.5, -0.5000001) {};
\node [style=nothing] (15) at (0, -0.5000001) {};
\node [style=nothing] (16) at (-0.5, 1) {};
\node [style=nothing] (17) at (-0.5, 2) {};
\node [style=nothing] (18) at (-1, 1.5) {$n$};
\node [style=nothing] (19) at (-1, 0.5) {$k$};
\node [style=nothing] (20) at (-0.5, -0) {};
\node [style=nothing] (21) at (-0.5, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw (3) to (1);
\draw (4) to (6);
\draw (7) to (4);
\draw (0) to (10);
\draw (11) to (1);
\draw (13) to (4);
\draw (4) to (12);
\draw (14) to (7);
\draw (7) to (15);
\draw (1) to (8);
\draw (9) to (0);
\draw (0) to (5);
\draw [style={densely dotted}] (3) to (2);
\draw [style={densely dotted}] (5) to (6);
\draw [style={decorate, decoration={brace,amplitude=1pt}, xshift={-4 pt}, yshift={0 pt}}] (16) to (17);
\draw [style={decorate, decoration={brace,amplitude=1pt}, xshift={-4 pt}, yshift={0 pt}}] (20) to (21);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (0.75, 1.5) {};
\node [style=dot] (1) at (0.75, 2.5) {};
\node [style=nothing] (2) at (0.75, 1.75) {};
\node [style=nothing] (3) at (0.75, 2.25) {};
\node [style=oplus] (4) at (0.75, 0.9999999) {};
\node [style=dot] (5) at (1.75, 2.5) {};
\node [style=dot] (6) at (1.75, 1.5) {};
\node [style=nothing] (7) at (1.75, 2.25) {};
\node [style=oplus] (8) at (1.75, 0.9999999) {};
\node [style=nothing] (9) at (1.75, 1.75) {};
\node [style=dot] (10) at (1.25, -0) {};
\node [style=dot] (11) at (1.25, 0.9999999) {};
\node [style=nothing] (12) at (1.25, 0.75) {};
\node [style=nothing] (13) at (1.25, 0.2500001) {};
\node [style=oplus] (14) at (1.25, -0.5000001) {};
\node [style=nothing] (15) at (2.5, 2.5) {};
\node [style=nothing] (16) at (2.5, 1.5) {};
\node [style=nothing] (17) at (0, 1.5) {};
\node [style=nothing] (18) at (0, 2.5) {};
\node [style=zeroin] (19) at (0.2500001, 0.9999999) {};
\node [style=zeroout] (20) at (2.25, 0.9999999) {};
\node [style=nothing] (21) at (2.5, -0) {};
\node [style=nothing] (22) at (0, -0) {};
\node [style=nothing] (23) at (2.5, -0.5000001) {};
\node [style=nothing] (24) at (0, -0.5000001) {};
\node [style=nothing] (25) at (3, 2.5) {};
\node [style=nothing] (26) at (3, 1.5) {};
\node [style=nothing] (27) at (3.5, 2) {$n$};
\node [style=nothing] (28) at (3, 1) {};
\node [style=nothing] (29) at (3, -0) {};
\node [style=nothing] (30) at (3.5, 0.5) {$k$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw (3) to (1);
\draw (4) to (0);
\draw (6) to (9);
\draw (7) to (5);
\draw (8) to (6);
\draw (10) to (13);
\draw (12) to (11);
\draw (14) to (10);
\draw (15) to (5);
\draw (6) to (16);
\draw (0) to (17);
\draw (18) to (1);
\draw (19) to (4);
\draw (4) to (11);
\draw (11) to (8);
\draw (8) to (20);
\draw (6) to (0);
\draw (1) to (5);
\draw (22) to (10);
\draw (10) to (21);
\draw (23) to (14);
\draw (14) to (24);
\draw [style={densely dotted}] (3) to (2);
\draw [style={densely dotted}] (7) to (9);
\draw [style={densely dotted}] (12) to (13);
\draw [style={decorate, decoration={brace,amplitude=1pt}, xshift={-4 pt}, yshift={0 pt}}] (25) to (26);
\draw [style={decorate, decoration={brace,amplitude=1pt}, xshift={-4 pt}, yshift={0 pt}}] (28) to (29);
\end{pgfonlayer}
\end{tikzpicture}
\]
\end{proposition}
Therefore, we can derive that:
\begin{lemma}
\label{lem:h.f}
\ref{tofh:H.F} entails:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (1, 2) {};
\node [style=dot] (1) at (1, 2.5) {};
\node [style=oplus] (2) at (1, 1.5) {};
\node [style=none] (3) at (3, 2.5) {};
\node [style=none] (4) at (-1, 2.5) {};
\node [style=dot] (5) at (1, 3) {};
\node [style=none] (6) at (-1, 3) {};
\node [style=none] (7) at (3, 3) {};
\node [style=h] (8) at (2, 2) {};
\node [style=h] (9) at (2, 1.5) {};
\node [style=h] (10) at (0, 2) {};
\node [style=h] (11) at (0, 1.5) {};
\node [style=none] (12) at (3, 2) {};
\node [style=none] (13) at (3, 1.5) {};
\node [style=none] (14) at (-1, 2) {};
\node [style=none] (15) at (-1, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw [style=simple] (0) to (2);
\draw [style=simple] (4.center) to (1);
\draw [style=simple] (1) to (3.center);
\draw [style=simple] (6.center) to (5);
\draw [style=simple] (5) to (7.center);
\draw [style={densely dotted}] (1) to (5);
\draw [style=simple] (13.center) to (9);
\draw [style=simple] (9) to (2);
\draw [style=simple] (2) to (11);
\draw [style=simple] (11) to (15.center);
\draw [style=simple] (14.center) to (10);
\draw [style=simple] (10) to (0);
\draw [style=simple] (0) to (8);
\draw [style=simple] (8) to (12.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (1, 2) {};
\node [style=dot] (1) at (1, 2.5) {};
\node [style=oplus] (2) at (1, 1.5) {};
\node [style=none] (3) at (2, 2.5) {};
\node [style=none] (4) at (0, 2.5) {};
\node [style=dot] (5) at (1, 3) {};
\node [style=none] (6) at (0, 3) {};
\node [style=none] (7) at (2, 3) {};
\node [style=none] (8) at (2, 1.5) {};
\node [style=none] (9) at (2, 2) {};
\node [style=none] (10) at (0, 1.5) {};
\node [style=none] (11) at (0, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw [style=simple] (0) to (2);
\draw [style=simple] (4.center) to (1);
\draw [style=simple] (1) to (3.center);
\draw [style=simple] (6.center) to (5);
\draw [style=simple] (5) to (7.center);
\draw [style={densely dotted}] (1) to (5);
\draw [style=simple, in=0, out=180, looseness=1.00] (9.center) to (2);
\draw [style=simple, in=0, out=180, looseness=1.00] (2) to (11.center);
\draw [style=simple, in=180, out=0, looseness=1.00] (10.center) to (0);
\draw [style=simple, in=0, out=180, looseness=1.00] (8.center) to (0);
\end{pgfonlayer}
\end{tikzpicture}
$$
\end{lemma}
\begin{proof}
From the zipper lemma and \ref{tofh:H.F}, we have:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (1, 2) {};
\node [style=dot] (1) at (1, 2.5) {};
\node [style=oplus] (2) at (1, 1.5) {};
\node [style=none] (3) at (3, 2.5) {};
\node [style=none] (4) at (-1, 2.5) {};
\node [style=dot] (5) at (1, 3) {};
\node [style=none] (6) at (-1, 3) {};
\node [style=none] (7) at (3, 3) {};
\node [style=h] (8) at (2, 2) {};
\node [style=h] (9) at (2, 1.5) {};
\node [style=h] (10) at (0, 2) {};
\node [style=h] (11) at (0, 1.5) {};
\node [style=none] (12) at (3, 2) {};
\node [style=none] (13) at (3, 1.5) {};
\node [style=none] (14) at (-1, 2) {};
\node [style=none] (15) at (-1, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (0);
\draw [style=simple] (0) to (2);
\draw [style=simple] (4.center) to (1);
\draw [style=simple] (1) to (3.center);
\draw [style=simple] (6.center) to (5);
\draw [style=simple] (5) to (7.center);
\draw [style={densely dotted}] (1) to (5);
\draw [style=simple] (13.center) to (9);
\draw [style=simple] (9) to (2);
\draw [style=simple] (2) to (11);
\draw [style=simple] (11) to (15.center);
\draw [style=simple] (14.center) to (10);
\draw [style=simple] (10) to (0);
\draw [style=simple] (0) to (8);
\draw [style=simple] (8) to (12.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (7, 2.5) {};
\node [style=oplus] (1) at (6, 1) {};
\node [style=oplus] (2) at (7, 2) {};
\node [style=oplus] (3) at (5, 2) {};
\node [style=dot] (4) at (5, 2.5) {};
\node [style=h] (5) at (5, 1.5) {};
\node [style=dot] (6) at (6, 2) {};
\node [style=dot] (7) at (7, 3) {};
\node [style=none] (8) at (8, 1.5) {};
\node [style=dot] (9) at (6, 1.5) {};
\node [style=h] (10) at (7, 1.5) {};
\node [style=dot] (11) at (5, 3) {};
\node [style=h] (12) at (7, 1) {};
\node [style=h] (13) at (5, 1) {};
\node [style=none] (14) at (8, 3) {};
\node [style=none] (15) at (4, 2.5) {};
\node [style=none] (16) at (4, 3) {};
\node [style=none] (17) at (4, 1.5) {};
\node [style=none] (18) at (8, 1) {};
\node [style=zeroout] (19) at (8, 2) {};
\node [style=zeroin] (20) at (4, 2) {};
\node [style=none] (21) at (4, 1) {};
\node [style=none] (22) at (8, 2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style={densely dotted}] (4) to (11);
\draw [style=simple] (4) to (3);
\draw [style={densely dotted}] (0) to (7);
\draw [style=simple] (0) to (2);
\draw [style=simple] (18.center) to (12);
\draw [style=simple] (12) to (1);
\draw [style=simple] (1) to (13);
\draw [style=simple] (13) to (21.center);
\draw [style=simple] (17.center) to (5);
\draw [style=simple] (5) to (9);
\draw [style=simple] (9) to (10);
\draw [style=simple] (10) to (8.center);
\draw [style=simple] (1) to (9);
\draw [style=simple] (9) to (6);
\draw [style=simple] (6) to (2);
\draw [style=simple] (2) to (19);
\draw [style=simple] (6) to (3);
\draw [style=simple] (3) to (20);
\draw [style=simple] (22.center) to (0);
\draw [style=simple] (0) to (4);
\draw [style=simple] (4) to (15.center);
\draw [style=simple] (16.center) to (11);
\draw [style=simple] (11) to (7);
\draw [style=simple] (7) to (14.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=dot] (0) at (7, 2.5) {};
\node [style=oplus] (1) at (6, 1) {};
\node [style=oplus] (2) at (7, 2) {};
\node [style=oplus] (3) at (5, 2) {};
\node [style=dot] (4) at (5, 2.5) {};
\node [style=dot] (5) at (6, 2) {};
\node [style=dot] (6) at (7, 3) {};
\node [style=none] (7) at (8, 1) {};
\node [style=dot] (8) at (6, 1.5) {};
\node [style=dot] (9) at (5, 3) {};
\node [style=none] (10) at (8, 3) {};
\node [style=none] (11) at (4, 2.5) {};
\node [style=none] (12) at (4, 3) {};
\node [style=none] (13) at (4, 1) {};
\node [style=none] (14) at (8, 1.5) {};
\node [style=zeroout] (15) at (8, 2) {};
\node [style=zeroin] (16) at (4, 2) {};
\node [style=none] (17) at (4, 1.5) {};
\node [style=none] (18) at (8, 2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style={densely dotted}] (4) to (9);
\draw [style=simple] (4) to (3);
\draw [style={densely dotted}] (0) to (6);
\draw [style=simple] (0) to (2);
\draw [style=simple] (1) to (8);
\draw [style=simple] (8) to (5);
\draw [style=simple] (5) to (2);
\draw [style=simple] (2) to (15);
\draw [style=simple] (5) to (3);
\draw [style=simple] (3) to (16);
\draw [style=simple] (18.center) to (0);
\draw [style=simple] (0) to (4);
\draw [style=simple] (4) to (11.center);
\draw [style=simple] (12.center) to (9);
\draw [style=simple] (9) to (6);
\draw [style=simple] (6) to (10.center);
\draw [style=simple, in=0, out=180, looseness=1.00] (14.center) to (1);
\draw [style=simple, in=180, out=0, looseness=1.00] (17.center) to (1);
\draw [style=simple, in=180, out=0, looseness=1.00] (8) to (7.center);
\draw [style=simple, in=0, out=180, looseness=1.00] (8) to (13.center);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=oplus] (0) at (6, -0.25) {};
\node [style=dot] (1) at (6, 0.75) {};
\node [style=none] (2) at (7, -0.25) {};
\node [style=dot] (3) at (6, 0.25) {};
\node [style=dot] (4) at (6, 1.25) {};
\node [style=none] (5) at (7, 1.25) {};
\node [style=none] (6) at (5, 0.75) {};
\node [style=none] (7) at (5, 1.25) {};
\node [style=none] (8) at (5, -0.25) {};
\node [style=none] (9) at (7, 0.25) {};
\node [style=none] (10) at (5, 0.25) {};
\node [style=none] (11) at (7, 0.75) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style={densely dotted}] (1) to (4);
\draw [style=simple] (0) to (3);
\draw [style=simple] (1) to (6.center);
\draw [style=simple] (7.center) to (4);
\draw [style=simple, in=0, out=180, looseness=1.00] (9.center) to (0);
\draw [style=simple, in=180, out=0, looseness=1.00] (10.center) to (0);
\draw [style=simple, in=180, out=0, looseness=1.00] (3) to (2.center);
\draw [style=simple, in=0, out=180, looseness=1.00] (3) to (8.center);
\draw [style=simple] (1) to (3);
\draw [style=simple] (11.center) to (1);
\draw [style=simple] (4) to (5.center);
\end{pgfonlayer}
\end{tikzpicture}
$$
\end{proof}
Recall from \cite[Corollary 5.4]{tof} that control-wires of $\mathsf{cnot}_n$ gates can be permuted in the following sense:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (0, 0) {};
\node [style=nothing] (1) at (0, 0.5) {};
\node [style=nothing] (2) at (1, 0) {};
\node [style=nothing] (3) at (1, 0.5) {};
\node [style=dot] (4) at (0.5, 0.5) {};
\node [style=oplus] (5) at (0.5, 0) {};
\node [style=nothing] (6) at (1, 1.5) {};
\node [style=dot] (7) at (0.5, 1.5) {};
\node [style=nothing] (8) at (0, 1.5) {};
\node [style=nothing] (9) at (0.5, 1.25) {};
\node [style=nothing] (10) at (0.5, 0.75) {};
\node [style=nothing] (11) at (0, 3) {};
\node [style=nothing] (12) at (0.5, 2.75) {};
\node [style=nothing] (13) at (1, 3) {};
\node [style=dot] (14) at (0.5, 3) {};
\node [style=nothing] (15) at (1, 2) {};
\node [style=nothing] (16) at (0, 2) {};
\node [style=nothing] (17) at (0.5, 2.25) {};
\node [style=dot] (18) at (0.5, 2) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (4);
\draw (4) to (3);
\draw (2) to (5);
\draw (5) to (0);
\draw (5) to (4);
\draw (7) to (8);
\draw (7) to (6);
\draw (9) to (7);
\draw (10) to (4);
\draw (11) to (14);
\draw (12) to (14);
\draw (13) to (14);
\draw (18) to (16);
\draw (18) to (17);
\draw (18) to (15);
\draw (7) to (18);
\draw [style={densely dotted}] (12) to (17);
\draw [style={densely dotted}] (9) to (10);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (-0.75, 0) {};
\node [style=nothing] (1) at (-0.75, 0.5) {};
\node [style=nothing] (2) at (1.75, 0) {};
\node [style=nothing] (3) at (1.75, 0.5) {};
\node [style=dot] (4) at (0.5, 0.5) {};
\node [style=oplus] (5) at (0.5, 0) {};
\node [style=dot] (6) at (0.5, 1.5) {};
\node [style=nothing] (7) at (0.5, 1.25) {};
\node [style=nothing] (8) at (0.5, 0.75) {};
\node [style=nothing] (9) at (-0.75, 3) {};
\node [style=nothing] (10) at (0.5, 2.75) {};
\node [style=nothing] (11) at (1.75, 3) {};
\node [style=dot] (12) at (0.5, 3) {};
\node [style=nothing] (13) at (0.5, 2.25) {};
\node [style=dot] (14) at (0.5, 2) {};
\node [style=nothing] (15) at (1.75, 1.5) {};
\node [style=nothing] (16) at (1.75, 2) {};
\node [style=nothing] (17) at (-0.75, 2) {};
\node [style=nothing] (18) at (-0.75, 1.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (1) to (4);
\draw (4) to (3);
\draw (2) to (5);
\draw (5) to (0);
\draw (5) to (4);
\draw (7) to (6);
\draw (8) to (4);
\draw (9) to (12);
\draw (10) to (12);
\draw (11) to (12);
\draw (14) to (13);
\draw [in=180, out=0, looseness=1.00] (14) to (15);
\draw [in=180, out=0, looseness=1.00] (6) to (16);
\draw [in=0, out=180, looseness=1.00] (14) to (18);
\draw [in=0, out=180, looseness=1.00] (6) to (17);
\draw (6) to (14);
\draw [style={densely dotted}] (10) to (13);
\draw [style={densely dotted}] (7) to (8);
\end{pgfonlayer}
\end{tikzpicture}
$$
Therefore, by this observation and Lemma \ref{lem:h.f} we have:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (-1, 3) {};
\node [style=nothing] (1) at (3, 3) {};
\node [style=dot] (2) at (1, 3) {};
\node [style=oplus] (3) at (1, 2.5) {};
\node [style=nothing] (4) at (3, 3.5) {};
\node [style=dot] (5) at (1, 3.5) {};
\node [style=nothing] (6) at (-1, 3.5) {};
\node [style=dot] (7) at (1, 4) {};
\node [style=dot] (8) at (1, 4.5) {};
\node [style=nothing] (9) at (3, 4.5) {};
\node [style=nothing] (10) at (-1, 4.5) {};
\node [style=dot] (11) at (1, 5) {};
\node [style=nothing] (12) at (3, 5) {};
\node [style=nothing] (13) at (-1, 5) {};
\node [style=nothing] (14) at (3, 2.5) {};
\node [style=nothing] (15) at (-1, 2.5) {};
\node [style=nothing] (16) at (3, 4) {};
\node [style=nothing] (17) at (-1, 4) {};
\node [style=h] (18) at (2, 4) {};
\node [style=h] (19) at (0, 4) {};
\node [style=h] (20) at (0, 2.5) {};
\node [style=h] (21) at (2, 2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (0) to (2);
\draw (2) to (1);
\draw (3) to (2);
\draw (5) to (6);
\draw (5) to (4);
\draw (5) to (7);
\draw (10) to (8);
\draw (9) to (8);
\draw (13) to (11);
\draw (12) to (11);
\draw [style=simple] (7) to (8);
\draw [style=simple] (14) to (21);
\draw [style=simple] (21) to (3);
\draw [style=simple] (3) to (20);
\draw [style=simple] (20) to (15);
\draw [style=simple] (17) to (19);
\draw [style=simple] (19) to (7);
\draw [style=simple] (7) to (18);
\draw [style=simple] (18) to (16);
\draw [style={densely dotted}] (8) to (11);
\draw [style={densely dotted}] (5) to (2);
\end{pgfonlayer}
\end{tikzpicture}
=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (-1, 3) {};
\node [style=nothing] (1) at (3, 3) {};
\node [style=dot] (2) at (1, 3) {};
\node [style=oplus] (3) at (1, 2.5) {};
\node [style=nothing] (4) at (3, 3.5) {};
\node [style=dot] (5) at (1, 3.5) {};
\node [style=nothing] (6) at (-1, 3.5) {};
\node [style=dot] (7) at (1, 4) {};
\node [style=dot] (8) at (1, 4.5) {};
\node [style=nothing] (9) at (3, 4.5) {};
\node [style=nothing] (10) at (-1, 4.5) {};
\node [style=dot] (11) at (1, 5) {};
\node [style=nothing] (12) at (3, 5) {};
\node [style=nothing] (13) at (-1, 5) {};
\node [style=nothing] (14) at (3, 4) {};
\node [style=nothing] (15) at (-1, 4) {};
\node [style=nothing] (16) at (3, 2.5) {};
\node [style=nothing] (17) at (-1, 2.5) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (3) to (2);
\draw (5) to (7);
\draw (10) to (8);
\draw (9) to (8);
\draw (13) to (11);
\draw (12) to (11);
\draw [style=simple] (7) to (8);
\draw [style={densely dotted}] (8) to (11);
\draw [style={densely dotted}] (5) to (2);
\draw (5) to (6);
\draw (0) to (2);
\draw (2) to (1);
\draw (5) to (4);
\draw [style=simple, in=0, out=180, looseness=1.00] (16) to (7);
\draw [style=simple, in=0, out=180, looseness=1.00] (7) to (17);
\draw [style=simple, in=180, out=0, looseness=1.00] (15) to (3);
\draw [style=simple, in=180, out=0, looseness=1.00] (3) to (14);
\end{pgfonlayer}
\end{tikzpicture}
$$
It follows that the multiply controlled-$Z$ gate can be unambiguously defined as follows:
$$
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (-0.5, 1) {};
\node [style=nothing] (1) at (-0.5, 1.5) {};
\node [style=nothing] (2) at (-0.5, 2.5) {};
\node [style=dot] (3) at (0.5000001, 1.5) {};
\node [style=dot] (4) at (0.5000001, 2.5) {};
\node [style=nothing] (5) at (1.5, 2.5) {};
\node [style=nothing] (6) at (1.5, 1) {};
\node [style=nothing] (7) at (1.5, 1.5) {};
\node [style=nothing] (8) at (0.5000001, 1.75) {};
\node [style=nothing] (9) at (0.5000001, 2.25) {};
\node [style=dot] (10) at (0.5000001, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (4);
\draw (3) to (1);
\draw (3) to (8);
\draw (9) to (4);
\draw (3) to (7);
\draw (5) to (4);
\draw (6) to (10);
\draw (10) to (0);
\draw (10) to (3);
\draw [style=densely dotted] (9) to (8);
\end{pgfonlayer}
\end{tikzpicture}
:=
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=nothing] (0) at (-1, 1) {};
\node [style=nothing] (1) at (-1, 1.5) {};
\node [style=nothing] (2) at (-1, 2.5) {};
\node [style=dot] (3) at (0.5000001, 1.5) {};
\node [style=dot] (4) at (0.5000001, 2.5) {};
\node [style=nothing] (5) at (2, 2.5) {};
\node [style=nothing] (6) at (2, 1) {};
\node [style=nothing] (7) at (2, 1.5) {};
\node [style=nothing] (8) at (0.5000001, 1.75) {};
\node [style=nothing] (9) at (0.5000001, 2.25) {};
\node [style=oplus] (10) at (0.5000001, 1) {};
\node [style=h] (11) at (1.25, 1) {};
\node [style=h] (12) at (-0.25, 1) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw (2) to (4);
\draw (3) to (1);
\draw (3) to (8);
\draw (9) to (4);
\draw (3) to (7);
\draw (5) to (4);
\draw (6) to (10);
\draw (10) to (0);
\draw (10) to (3);
\draw [style=densely dotted] (9) to (8);
\end{pgfonlayer}
\end{tikzpicture}
$$
\section*{Acknowledgement}
The author would like to thank Robin Cockett, Jean-Simon Lemay and John van de Wetering for useful discussions.
\nocite{duncaninteracting}
\nocite{bonchiinteracting}
\nocite{gottesmanstabilizer}
\section*{Results}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{fig1}
\caption{Imitation on a square lattice fails to sustain cooperation and extortion. Depicted are the stationary frequencies of surviving strategies in dependence on the strength of the social dilemma $b$. It can be observed that for sufficiently small values of $b$ only $WSLS$ survive. As $b$ increases, the pure $WSLS$ phase first gives way to a narrow two-strategy $WSLS+D$ phase, which then transforms into the three-strategy $WSLS+TFT+D$ phase. The emergence of these three different phases is a direct consequence of dominance relations between the three involved strategies, which are schematically depicted in the bottom frame for the respective values of $b$ from left to right. Arrows show the direction of invasion between strategies.}
\label{imitate}
\end{figure}
Before turning to the main results obtained with myopic best response updating, we present in Fig.~\ref{imitate} the evolutionary outcomes obtained via imitation on a square lattice. If imitation is the basis of strategy updating, then neither cooperators nor extortioners can survive, and this regardless of the strength of the social dilemma and the strength of exploitation. Since extortioners always die out, the composition of the final state is actually completely independent of $\chi$. We have used $\chi=1.5$ for the presented results, but the value influences only the time needed for relaxation towards the final stable solution. Starting with $b \geq 1$ (we show results from $b=1.5$ onwards for clarity with regards to the subsequent phase transitions), the completely dominant strategy is $WSLS$. At the other end of the interval of $b$, we have a stable three-strategy $WSLS+TFT+D$ phase, which is sustained by cyclic dominance. In between, we have a narrow two-strategy $WSLS+D$ phase, which terminates immediately after $D$ reach dominance.
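For concreteness, a minimal sketch of one elementary imitation step on the square lattice is given below. The pairwise Fermi comparison with noise $K$ and the precomputed $5\times 5$ payoff matrix are our assumptions here (the precise update rule and parametrization belong to the Methods section), so the snippet should be read as illustrative only:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def accumulated_payoff(lat, x, y, payoff):
    # payoff of player (x, y) against its four nearest neighbours,
    # with periodic boundary conditions
    L = lat.shape[0]
    s = lat[x, y]
    return sum(payoff[s, lat[nx % L, ny % L]]
               for nx, ny in ((x + 1, y), (x - 1, y),
                              (x, y + 1), (x, y - 1)))

def imitation_update(lat, payoff, K=0.1):
    # one elementary step: a randomly chosen player compares payoffs with
    # a randomly chosen neighbour and adopts its strategy with the
    # (assumed) Fermi probability
    L = lat.shape[0]
    x, y = rng.integers(L, size=2)
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    dx, dy = moves[rng.integers(4)]
    nx, ny = (x + dx) % L, (y + dy) % L
    p_x = accumulated_payoff(lat, x, y, payoff)
    p_n = accumulated_payoff(lat, nx, ny, payoff)
    if rng.random() < 1.0 / (1.0 + np.exp((p_x - p_n) / K)):
        lat[x, y] = lat[nx, ny]
\end{verbatim}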
This dependence on $b$ can be understood by considering the relations among the surviving strategies, as summarized in the bottom frame of Fig.~\ref{imitate}. For small values of $b$ (left), $WSLS$ dominate both $D$ and $TFT$. The latter also dominate $D$, but their superior status in this relationship has no effect on the final state. For high values of $b$ (right), the direction of invasion between $WSLS$ and $D$ changes compared to the low $b$ case, while the other two relations remain unchanged. Consequently, instead of a pure $WSLS$ phase, we have a three-strategy $WSLS+TFT+D$ phase, where $WSLS$ invade $TFT$, $TFT$ invade $D$, and $D$ invade $WSLS$ to close the loop of dominance. It is worth emphasizing that this solution is impossible in a well-mixed population for all $b<2$.
\begin{figure}
\centering
\includegraphics[width=8.4cm]{fig2}
\caption{The coexistence of defectors and players adopting the win-stay-lose-shift strategy in case of imitation on a square lattice. Depicted is the time evolution of the frequency of defectors $f_D$ as obtained for $b=1.7$, $1.734$, $1.736$, $1.738$, $1.739$ and $1.741$ from bottom to top. The time courses provide insight into the competition for space within the narrow two-strategy $WSLS+D$ phase that can be observed in Fig.~\ref{imitate}. At $b=1.741$ defectors come to dominate the whole population, but their dominance is immediately overthrown in favor of the three-strategy $WSLS+TFT+D$ phase that is sustained by cyclic dominance. The used linear size of the square lattice is $L=1000$. Note that the time scale is logarithmic.}
\label{time}
\end{figure}
In a narrow interval between the pure $WSLS$ phase and the cyclic $WSLS+TFT+D$ phase, we have the situation depicted in the middle of the bottom frame of Fig.~\ref{imitate}, where unlike for small and high values of $b$, the relation between $WSLS$ and $D$ enables their coexistence in a structured population. As for small values of $b$, here too $TFT$ can invade $D$, but this is without effect on the final outcome. The stable two-strategy coexistence is illustrated in Fig.~\ref{time}, where we show how $WSLS$ and $D$ compete for space over time for different values of $b$. The larger the value of $b$, the smaller the fraction of the population that is occupied by $WSLS$ in the stationary state. Interestingly, when $b$ is large enough for $D$ to fully eliminate $WSLS$, the complete dominance of defectors is prevented by the presence of $TFT$, who become viable via a second-order continuous phase transition. From this point onwards, the cyclic dominance $WSLS \to TFT \to D \to WSLS$ starts working until the end of the interval of $b$, as depicted in the main panel of Fig.~\ref{imitate}.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{fig3}
\caption{Myopic best response updating in structured populations stabilizes extortion and cooperation. Depicted are the stationary frequencies of surviving strategies in dependence on the strength of the social dilemma $b$, as obtained for the strength of extortion $\chi=1.5$ on the square lattice (top), the random regular graph (middle), and the scale-free network (bottom).
It can be observed that players adopting the $WSLS$ strategy dominate for sufficiently small values of $b$ on homogeneous interaction networks (top and middle), but as $b$ increases or if the interaction network is heterogeneous (bottom), the pure $WSLS$ phase gives way to a stable five-strategy $WSLS+D+E_{\chi}+TFT+C$ phase. (\textit{continues on next page})}
\label{myopic}
\end{figure}
Overall, extortion is unable to capitalize on structured interactions if the strategy updating is governed by imitation or a birth-death rule (results not shown), and in fact this is in full qualitative agreement with the results obtained in well-mixed populations \cite{adami_ncom13, hilbe_pnas13}. In the realm of evolutionary games, extortioners do not do well against cooperative strategies like $C$, $TFT$ and $WSLS$. They may thrive for a short period of time, but as soon as extortion becomes widespread, it is more profitable to cooperate, which ultimately renders extortion evolutionary unstable.
Myopic strategy updating, on the other hand, can sustain very different evolutionary outcomes as it allows players to adopt strategies that are not necessarily present in their interaction neighborhood. In fact, strategies need not be present in the population at all, as long as they are an option for the players to choose randomly when it is their turn to perhaps change their strategy. Nevertheless, we emphasize that myopic best response updating is different from mutation, because each individual strategy change is still driven by the payoff difference, as described by Eq.~\ref{myop}. Results presented in Fig.~\ref{myopic} obtained on the square lattice (top) and the random regular graph (middle) show that for sufficiently small values of $b$ the final state is the same as under imitation dynamics. Players adopting $WSLS$ dominate completely from $b=1$ onwards (as in Fig.~\ref{imitate}, we show results for $b \geq 1.5$ only). At a critical value of $b$, however, a second-order continuous phase transition rather unexpectedly leads to the stable coexistence of all five competing strategies. A similar diversity of strategies prevails on heterogeneous interaction networks, as illustrated by the results obtained on a scale-free network shown in the bottom panel of Fig.~\ref{myopic}. Myopic best response updating is thus able to stabilize extortion in structured populations. Perhaps even more surprisingly, as the strength of the social dilemma increases, the two cooperative strategies $C$ and $TFT$ become viable as well. This outcome is rather independent of the structure of the interaction network.
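The contrast with imitation is easiest to see in code. Below is a minimal sketch of one elementary myopic step: the focal player draws a random candidate strategy, which need not be present in its neighborhood (or even in the population), and switches with a probability driven by the payoff difference. The Fermi form with noise $K$ is our assumed reading of Eq.~\ref{myop}, and the $5\times 5$ payoff matrix is again taken as given:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def myopic_update(lat, payoff, K=0.1):
    # one elementary myopic best-response step on an L x L lattice with
    # periodic boundaries; the candidate strategy is drawn uniformly at
    # random and may be absent from the neighbourhood
    L, n_strat = lat.shape[0], payoff.shape[0]
    x, y = rng.integers(L, size=2)
    neigh = (lat[(x + 1) % L, y], lat[(x - 1) % L, y],
             lat[x, (y + 1) % L], lat[x, (y - 1) % L])
    old = lat[x, y]
    new = rng.integers(n_strat)  # may equal `old` in this simplified sketch
    p_old = sum(payoff[old, s] for s in neigh)
    p_new = sum(payoff[new, s] for s in neigh)  # hypothetical payoff
    if rng.random() < 1.0 / (1.0 + np.exp((p_old - p_new) / K)):
        lat[x, y] = new
\end{verbatim}
As usual in such simulations, one full Monte Carlo step (MCS) then consists of $L^2$ elementary updates, so that each player has a chance to revise its strategy once on average.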
\begin{figure}\renewcommand{\thefigure}{3}
\caption{(\textit{continues from previous page}) Here defectors emerge and coarsen spontaneously because for sufficiently large values of $b$ their payoff becomes larger than that of clustered $WSLS$ players. The emergence of defectors immediately opens the door to the survival of extortioners and $TFT$ players, which both emerge by chance and spread by means of neutral drift. Lastly, with the emergence of extortioners and $TFT$ players cooperators become viable as well, thus forming the stable five-strategy phase. The latter is virtually unaffected by different values of $\chi$, as demonstrated in Fig.~\ref{chi}. Importantly, the described coexistence of the competing strategies is a universal behavior that can be observed in structured populations regardless of the properties of the interaction network, and even across the whole span of $b$ values, as illustrated in the bottom panel. Characteristic snapshots depicting the described key stages of the evolutionary process are presented in Fig.~\ref{snaps}.}
\end{figure}
Since extortioners survive for sufficiently high values of $b$, the strength of extortion $\chi$ might play a role too, but as evidenced by the results presented in Fig.~\ref{chi}, this role is in fact very minor. As the value of $\chi$ increases, the extortioners become slightly more common at the expense of $TFT$ and $C$ players, but overall this does not affect the evolutionary stability of extortion and cooperation. Compared to our previous results presented in \cite{szolnoki_pre14}, where we have studied the three-strategy variant of the game without $TFT$ and $WSLS$ players, the role of $\chi$ is less significant here mainly because the stationary frequency of extortioners is much smaller. The fact that their frequency is much smaller, however, is a direct consequence of the presence of the two additional cooperative strategies ($TFT$ and $WSLS$), which in turn highlights the generally subordinate role of extortion compared to cooperation in evolutionary games. The latter was emphasized already in \cite{adami_ncom13, hilbe_pnas13}, as well as by the results presented in Fig.~\ref{imitate} above. Also contributing to the minor role of $\chi$ is that the emergence of extortioners is in fact a second-order effect, as we will explain next.
\begin{figure}\renewcommand{\thefigure}{4}
\centering
\includegraphics[width=8.5cm]{fig4}
\caption{The strength of extortion has a negligible impact on the stationary frequencies of competing strategies, and it does not affect the evolutionary stability of extortion and cooperation. Depicted are the stationary frequencies of surviving strategies in dependence on the strength of extortion $\chi$, as obtained for the social dilemma strength $b=2$ on a square lattice. It can be observed that the variations of all frequencies are small. Expectedly, larger values of $\chi$ favor extortion. The neutral drift of $TFT$ players therefore becomes slightly less prolific, which in turn also slightly decreases the frequency of cooperators. Interestingly, the stationary frequencies of strategies at $b=2$ and their $\chi$-dependency are practically indistinguishable for the square lattice and the random regular graph. This further highlights the irrelevance of the structure of the interaction network under myopic best response updating, and thus also the universality of the presented results.}
\label{chi}
\end{figure}
To understand why $E_{\chi}$, $TFT$ and $C$ emerge as $b$ increases, it is instructive to consider the erosion of the pure $WSLS$ phase on the square lattice, as illustrated in Fig.~\ref{snaps}. For a sufficiently high value of $b$ defectors emerge and start to coarsen spontaneously because their payoff becomes competitive with the payoff of aggregated $WSLS$ players. The emergence of the $D$ phase, however, paves the way for the emergence of all the other strategies. Namely, both $E_{\chi}$ and $TFT$ are neutral against $D$, and thus they may emerge by chance and spread via neutral drift. As $E_{\chi}$ accumulate locally, $C$ become viable too because their payoff is higher. The emergence of $C$ is helped further (or at least not hindered) by $TFT$, who are neutral with $C$. During this unexpected chain of strategy invasions, defection and extortion thus emerge as catalysts of unconditional cooperation. Effectively, the defectors act as a Trojan horse for all the other strategies, while subsequently the extortioners act as a Trojan horse for cooperation. Evidently, the spreading of $C$, which utilizes the neutral drift of $E_\chi$, will be controlled by defectors and $WSLS$ players, who can strike back since their presence in place of an extortioner may yield a higher payoff in a predominantly cooperative neighborhood. This, however, will again be only temporary, since the described elementary invasions are bound to recur, thus assuring the stability of the $WSLS+D+E_{\chi}+TFT+C$ phase.
\begin{figure}\renewcommand{\thefigure}{5}
\centering
\includegraphics[width=8.5cm]{fig5}
\caption{Characteristic time evolution of the spatial distribution of the five competing strategies on a square lattice. The evolution starts from the full $WSLS$ phase (not shown), using $b=1.8$ and $\chi=1.5$. At $MCS=5$ (leftmost panel), first defectors start emerging because their payoff is comparable with $WSLS$ players. Soon thereafter, at $MCS=10$ (second panel from left), first extortioners and $TFT$ players emerge. Both have neutral relations with the defectors, and thus their emergence and spreading are due to chance and neutral drift. At $MCS=30$, as soon as locally the number of extortioners becomes sufficiently large, cooperators emerge as well due to their higher payoffs, and their spreading is additionally supported by the $TFT$ players. The recurrence of these elementary processes eventually spreads a stable mixture of all five strategies across the whole population, as depicted in the rightmost panel that was taken at $MCS=100$. The color encoding of the strategies is the same as used in Figs.~\ref{myopic} and \ref{chi}. For clarity with regards to individual players and their strategies, we have used a small square lattice with linear size $L=40$.}
\label{snaps}
\end{figure}
An important lesson learned from the presented results in Fig.~\ref{snaps} is that although extortion can be as counterproductive as defection, it is still less destructive. For an unconditional cooperator it never pays to stick with the strategy if surrounded by defectors, but it may be the best option among extortioners. Cooperators are of course happiest among other cooperators, but in the presence of extortioners they can still attain a positive payoff, and this is much better than nothing or a negative value in the presence of defectors. It is worth emphasizing that this argument is valid independently of the properties of the interaction network, as the described chain of strategy invasions emerges in all the structured populations that we have considered.
\section*{Discussion}
We have shown that even if the set of competing strategies is extended to encompass, besides unconditional cooperators, defectors and extortioners \cite{szolnoki_pre14}, also the tit-for-tat strategy and the win-stay-lose-shift strategy, the imitation dynamics in structured populations is still unable to render extortion evolutionary stable. For sufficiently small values of $b$ only players adopting the win-stay-lose-shift strategy survive, while beyond a threshold value a stable three-strategy phase consisting of defectors, tit-for-tat and win-stay-lose-shift players emerges. Since extortioners never survive, the strength of exploitation $\chi$ is without effect. These results agree with those reported previously for sizable isolated well-mixed populations \cite{hilbe_pnas13}, and they highlight the severe challenges that extortioners face when vying for survival in the realm of evolutionary games where players are able to imitate strategies that are performing better \cite{adami_ncom13}.
If the evolution is governed by myopic best response updating, however, the outcomes are significantly different from those obtained via imitation. We have shown that for sufficiently large values of $b$ the complete dominance of win-stay-lose-shift players is broken as soon as defectors emerge and start coarsening. Subsequently, within the homogeneous domains of defectors, extortion becomes viable too via the same mechanism as we have described before in \cite{szolnoki_pre14}. In particular, extortioners and defectors are neutral, and hence the former can emerge by chance and spread via neutral drift. Yet as soon as extortioners emerge, cooperators can finally emerge as well, because in competition with the former they are superior. In this evolutionary scenario, defection and extortion thus act as the most surprising catalysts of unconditional cooperation in structured populations. Moreover, we have shown that the coexistence of all competing strategies occurs across the whole interval of $b$ values if a heterogeneous (scale-free) network describes the interactions among players. Because of this unlikely path towards cooperation, we conclude that defectors and extortioners effectively play the role of a Trojan horse for cooperators. Interestingly, similar transient roles of extortionate behavior were recently reported in the realm of well-mixed populations when studying the adaptive dynamics of extortion and compliance \cite{hilbe_pone13b}. Moreover, after the emergence and coarsening of defectors, in the presently studied game the tit-for-tat players also become viable as they are likewise neutral, and can thus spread via neutral drift just like extortioners. In recurrence, these evolutionary processes give rise to a stable five-strategy phase that is hardly affected by the strength of exploitation $\chi$, and it is also robust to the population size and the structure of the interaction network.
Taken together, these results thus have a high degree of universality and highlight the relevance of coarsening, the emergence of role-separating strategy distributions (which manifests as checkerboard ordering on regular graphs), and best response updating in evolutionary games. The latter is especially important, as it appears to be an integral part of human behavior \cite{matsui_jet92, blume_l_geb93, ellison_econm93}. From the more pragmatic point of view, best response updating conveys to the players an ability to explore the space of available strategies even if they are not present in their immediate neighborhood or even in the population as a whole, and by doing so, such updating dynamics opens up the door to the most counterintuitive evolutionary outcomes. Similarly to kin competition, the presented results also highlight the other side of network reciprocity. Namely, it does not only support cooperative behavior by means of clustering, but it also reveals the consequences of bad decisions -- defectors and extortioners become weak when surrounded by their like. From this point of view, it is understandable and indeed expected that structured populations, if anything, hinder the successful evolution of extortion under imitation. The surprising positive role of extortioners becomes apparent only under best response updating, where the threatening loom of widespread defection is drifted away by the lesser evil to eventually introduce more constructive cooperative strategies.
\section*{Methods}
We adopt the same game parametrization as Hilbe et al. \cite{hilbe_pnas13}. Accordingly, the payoff matrix for the five competing strategies is\\ \\
\centerline{\begin{tabular}{r|c c c c c}
& $TFT$ & $WSLS$ & $E_\chi$ & $all \,C$ & $all \,D$ \\
\hline
$TFT$ & $\frac{1}{2}$ &$\frac{1}{2}$ &$0$ &$1$ &$0$\\
$WSLS$ & $\frac{1}{2}$ &$1$ &$\frac{(2b-1)\chi}{3b-2+(3b-1)\chi}$ &$\frac{b+1}{2}$ &$\frac{1-b}{2}$\\
$E_\chi$ & $0$ &$\frac{(2b-1)\chi}{3b-2+(3b-1)\chi}$ &$0$ &$\frac{(2b-1)\chi}{b-1+b\chi}$ &0 \\
$all \,C$ & $1$ &$\frac{2-b}{2}$ &$\frac{2b-1}{b-1+b\chi}$ &$1$ &$1-b$ \\
$all \,D$ & $0$ &$\frac{b}{2}$ &$0$ &$b$ &$0$ \\
\end{tabular}}\\ \\ \\
where $b$ is the benefit to the other player provided by each cooperator at the cost $c$, and $\chi$ determines the surplus of the extortioner in relation to the surplus of the other player. Moreover, we use $b-c=1$, thus having $b>1$ and $\chi>1$ as the two main parameters. The former determines the strength of the social dilemma, while the latter determines just how strongly strategy $E_\chi$ exploits cooperators. A direct comparison of the extortioner strategy with the other strategies reveals that $E_\chi$ is neutral with unconditional defectors and players adopting the $TFT$ strategy. The latter, however, may beat $E_\chi$ if they are surrounded by other $TFT$ players. Similar relations hold for the competition between $E_\chi$ and $WSLS$ players. While the latter receive the same income from a direct interaction, they do gain more if the neighbors also adopt the $WSLS$ strategy. It is also worth noting that the payoffs between $C$ and $D$ constitute the so-called donation game, which is an important special case of the iterated prisoner's dilemma game with all the original properties retained \cite{brede_pone13}.
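For illustration, the payoff matrix above can be implemented as a simple function of $b$ and $\chi$. The following is a minimal sketch (the strategy ordering follows the table above; the function name is our own choice):
\begin{verbatim}
import numpy as np

def payoff_matrix(b, chi):
    # Row player's payoff against the column player, ordered as
    # TFT, WSLS, E_chi, all C, all D (matching the table above).
    e_w = (2*b - 1)*chi / (3*b - 2 + (3*b - 1)*chi)  # E_chi vs. WSLS
    e_c = (2*b - 1)*chi / (b - 1 + b*chi)            # E_chi vs. all C
    c_e = (2*b - 1) / (b - 1 + b*chi)                # all C vs. E_chi
    return np.array([
        [0.5, 0.5,      0.0, 1.0,      0.0],
        [0.5, 1.0,      e_w, (b+1)/2,  (1-b)/2],
        [0.0, e_w,      0.0, e_c,      0.0],
        [1.0, (2-b)/2,  c_e, 1.0,      1-b],
        [0.0, b/2,      0.0, b,        0.0],
    ])
\end{verbatim}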
We predominantly consider a $L \times L$ square lattice with periodic boundary conditions as the simplest interaction network to describe a structured population. To demonstrate the robustness of our findings, we also use a random regular graph and the scale-free network with the same average degree, which is likely somewhat more apt to describe realistic social and technological networks \cite{barabasi_s99}. We have used population sizes from $10^4$ up to $10^6$ players to avoid finite-size effects.
Unless stated differently, for example to illustrate a specific invasion process as in Fig.~\ref{snaps}, we use random initial conditions such that all five strategies are uniformly distributed across the network. We carry out Monte Carlo simulations comprising the following elementary steps. First, a randomly selected player $x$ with strategy $s_x$ acquires its payoff $p_x$ by playing the game with its $k$ neighbors, as specified by the underlying interaction network. Next, player $x$ changes its strategy $s_x$ to $s_x^{\prime}$ with the probability
\begin{equation}
q(s_x \to s_x^{\prime}) =\frac{1}{1+\exp[(p_x-p_x^{\prime})/K]}\, ,
\label{myop}
\end{equation}
where $p_x^{\prime}$ is the payoff of the same player if adopting strategy $s_x^{\prime}$ within the same neighborhood, and $K=0.05$ quantifies a small uncertainty that is related to the strategy adoption process \cite{szabo_pr07}. The strategy $s_x^{\prime}$ should of course be different from $s_x$, and it is drawn randomly from the remaining four strategies. Such strategy updating is known as the myopic best response rule \cite{matsui_jet92}.
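To make the elementary step concrete, a minimal Python sketch of one myopic best-response update on a periodic square lattice might look as follows (it assumes the \texttt{payoff\_matrix} function sketched above; variable names are ours):
\begin{verbatim}
import numpy as np

def myopic_step(lattice, A, K=0.05, rng=None):
    # One elementary step: a random player compares its current payoff
    # with the payoff of a randomly drawn alternative strategy.
    rng = rng or np.random.default_rng()
    L = lattice.shape[0]
    x, y = rng.integers(L, size=2)
    s = lattice[x, y]
    neighbors = [lattice[(x+1) % L, y], lattice[(x-1) % L, y],
                 lattice[x, (y+1) % L], lattice[x, (y-1) % L]]
    p_old = sum(A[s, n] for n in neighbors)
    s_new = rng.choice([t for t in range(A.shape[0]) if t != s])
    p_new = sum(A[s_new, n] for n in neighbors)
    # Eq. (1): adopt s_new with probability 1/(1 + exp((p_old - p_new)/K)).
    if rng.random() < 1.0 / (1.0 + np.exp((p_old - p_new) / K)):
        lattice[x, y] = s_new
\end{verbatim}
One full Monte Carlo step then simply repeats this elementary step $L^2$ times, so that each player is, on average, selected once.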
We also consider the more traditional strategy imitation, where player $x$ imitates the strategy of a randomly selected neighbor $y$, only that $p_x^{\prime}$ in Eq.~\ref{myop} is replaced by $p_y$ \cite{szabo_pr07}, as well as death-birth updating as described for example in \cite{ohtsuki_jtb06}. Regardless of the applied strategy updating rule, we let the system evolve towards the stationary state where the average frequency of strategies becomes time independent. We measure time in full Monte Carlo steps ($MCS$), during which each player is given a chance to change its strategy once on average.
\begin{acknowledgments}
This research was supported by the Hungarian National Research Fund (Grant K-101490), TAMOP-4.2.2.A-11/1/KONV-2012-0051, and the Slovenian Research Agency (Grants J1-4055 and P5-0027).
\end{acknowledgments}
\section{Introduction}
Depth estimation has shown great significance in many real-world applications, including robotic manipulation\cite{tremblay2018deep}, augmented reality \cite{tang20193d,marchand2015pose}, and autonomous driving \cite{manhardt2019roi,wu20196d}. However, it suffers from bottlenecks in high-velocity motion circumstances, hindered by blurred images from traditional low frame rate cameras \cite{hu2021optical}. To deal with high-velocity motion, spike cameras are designed to capture the images at high frame rate \cite{dong2019efficient,zhu2019retina}. Since spike cameras can capture the pixel-wise luminance intensity at high frame rate, spike depth estimation is an ideal solution to depth estimation in high-velocity motion \cite{zhu2020retina}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm, height=6.5cm]{Graphs/introv5.pdf
\end{center}
\caption{We demonstrate the advantage of the spike camera when dealing with fast-moving objects for driving depth estimation. The first row shows the motion blur (yellow dotted circle) from a traditional RGB camera. The second row shows that such motion blur brings inaccurate depth estimation for high-speed objects.
The third row demonstrates the performance decrease of RGB depth estimation compared with spike depth estimation. Therefore, we introduce the spike camera and our proposed UGDF to solve this problem.}
\label{fig:0}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm, height=4cm]{Graphs/introv6.pdf
\end{center}
\caption{ We use the binocular spike data as the input and train the monocular and stereo depth estimation models separately. By analyzing the predictions of monocular and stereo models, we find they have different accuracies at different depth ranges. This motivates us to fuse the predictions in a dual-task depth estimation architecture.}
\label{fig:1}
\end{figure}
Although there are plenty of traditional works on monocular depth estimation \cite{Xu_2018_CVPR,Ramamonjisoa_2020_CVPR,lee2019monocular,Ramamonjisoa_2019_ICCV,fu2018deep,godard2017unsupervised} and stereo depth estimation \cite{mayer2016large,kendall2017end,Khamis_2018_ECCV,Chabra_2019_CVPR,guo2019group}\cite{liu2022local}, it is still very challenging to apply them to spike depth estimation since spike data lacks reliable photometric consistency. In order to solve this problem, we first analyze the pros and cons of monocular and stereo depth estimation. On the one hand, monocular depth estimation is inherently ill-posed and mainly depends on the semantic knowledge of features. Therefore, it is robust to disparity error and achieves better results at long range. On the other hand, stereo depth estimation compares local patch pairs to obtain the optimal disparity. Therefore, it obtains better results at close range and performs worse at long range. As shown in Figure 2, we conduct an analysis of monocular and stereo spike depth estimation. This motivates us to fuse the monocular and stereo predictions for spike depth estimation, alleviating the problem of lacking reliable photometric consistency.
In this paper, we propose a novel Uncertainty-Guided Depth Fusion (UGDF) framework to fuse the predictions of monocular and stereo spike depth estimation. Instead of training the monocular and stereo models separately, UGDF introduces a depth estimation architecture for dual tasks with a joint training strategy. This architecture includes two components. The first component is a shared encoder, which learns a feature representation to build the stereo cost volume and the monocular depth regression. The second component consists of two parallel branches for the monocular and stereo depth estimation tasks. For the monocular branch, the decoder consists of three upsampling blocks. For the stereo branch, we utilize a 3D hourglass-shaped convolution to aggregate the disparity-dimension features of the 4D cost volume \cite{chang2018pyramid}. To fuse the predictions of both branches, instead of naive linear fusion, we introduce a novel adaptive uncertainty-guided fusion approach. Different from occlusion-aware fusion \cite{chen2021revealing}, which only exploits the knowledge from the stereo branch, we adopt regression uncertainty formulations \cite{zhou2021sub} to measure the performance of the monocular and stereo branches. Guided by the uncertainty maps, we fuse the reliable predictions of the monocular and stereo branches, taking advantage of both tasks for the final estimation.
In addition, we contribute a spike-depth dataset named CitySpike20K, which consists of 20K paired samples, for spike depth estimation. We demonstrate the great advantages of spike camera for high-velocity depth estimation on CitySpike20K. Extensive experiments are conducted to demonstrate the good performance of our framework compared with state-of-the-art monocular and stereo baselines.
Our contributions can be concluded as follows:
\begin{itemize}
\item We propose a novel Uncertainty-Guided Depth Fusion framework to fuse the predictions of monocular and stereo spike depth estimation, alleviating the problem of lacking reliable photometric consistency for spike data.
\item We introduce a dual-task depth estimation architecture along with a joint training strategy. To the best of our knowledge, we are the first to fuse dual tasks for spike depth estimation.
\item We contribute a spike dataset named CitySpike20K, which contains 20K spike-depth pairs, to demonstrate the advantages of spike camera over traditional cameras on high-velocity depth estimation.
\item We conduct extensive experiments to evaluate the advantages of our method against existing monocular and stereo baselines.
\end{itemize}
\section{Related Work}
In this section, we investigate and reviewed recent works that are related to ours concerned with frame-based and event-based vision for depth estimation.
\subsection{Monocular and Stereo Depth Estimation}
Monocular and stereo methods are two parallel mainstreams in the development of depth estimation algorithms.
One of the earliest works that inspired recent trends in monocular depth estimation was introduced by Eigen et al.\cite{eigen2014depth}. This work proposed a novel two-stage architecture with coarse-scale and fine-scale networks, defining depth estimation as a pixel-wise regression problem. Similar to the semantic segmentation task, one popular design for monocular depth estimation is the encoder-decoder structure with CNNs\cite{Xu_2018_CVPR,Ramamonjisoa_2020_CVPR,lee2019monocular,Ramamonjisoa_2019_ICCV,fu2018deep,godard2017unsupervised} or transformers\cite{ranftl2021vision,yang2021transformer}. In the encoding stage, the encoder captures context information and learns a global representation. In the decoding stage, the network establishes a coupling between context texture and depth information with ground-truth full supervision or self-supervision\cite{godard2017unsupervised,guizilini20203d,lyu2020hr}. Innovations have also been made in the regression style\cite{fu2018deep,bhat2021adabins,roy2016monocular} for more efficient representations of depth information. A recent study shows great potential in combining monocular depth estimation as an auxiliary task for semantic segmentation\cite{Jiao_2018_ECCV,hoyer2021three}.
Stereo depth estimation shows a quite different design strategy from monocular methods. Early works concentrate mainly on stereo matching for left and right stereo pairs\cite{4270415,7989227}. After deep learning was applied to this task, the stereo depth estimation pipeline came to contain three main steps: (1) feature extraction, (2) cost aggregation, and (3) disparity/depth regression. Thanks to 3D convolution and the proposal of the 3D cost volume\cite{mayer2016large,kendall2017end}, the whole pipeline can be constructed end-to-end\cite{Khamis_2018_ECCV,Chabra_2019_CVPR}. PSMNet\cite{chang2018pyramid} concatenates left-right features into a cost volume and performs hourglass-shaped 3D convolutions for aggregation. GwcNet\cite{guo2019group} uses a correlation formulation to divide the cost volume into groups, which decreases computation while improving prediction results. In addition, self-supervised methods also achieve competitive performance\cite{Zhou_2017_ICCV}.
Recent studies show a new trend of rethinking the connection between monocular and stereo depth estimation\cite{Tosi_2019_CVPR,chen2021revealing}. Besides, left-right consistency has been a main cue for unsupervised monocular depth estimation\cite{godard2017unsupervised}.
\subsection{Depth Estimation for Event-based Vision}
Compared to standard frame-based cameras, biologically-inspired event-based sensors capture visual information with low latency and minimal redundancy. The Dynamic Vision Sensor (DVS) is a representative event-based camera. Compared with frame-based cameras, DVS is capable of capturing fast-moving objects. Zhu et al.\cite{zhu2018multivehicle} provide a 100Hz DVS dataset containing depth ground truth. They also propose time-synchronized event disparity volumes in \cite{zhu2018realtime} to handle DVS data for stereo matching. Similar research\cite{zhu2019unsupervised} uses discretized event volumes to supervise monocular optical flow and depth estimation without labels. Another work\cite{ranccon2021stereospike} adopts a spiking neural network to estimate event depth.
\subsection{Spike Camera and Its Visual Applications}
The spike camera is also a kind of novel bio-inspired camera. Distinct from frame-based cameras and dynamic vision sensors, the spike camera mimics the retina to record natural scenes by continuous-time spikes\cite{yu2020toward,zhu2019retina}. \cite{9181055} develops a new image reconstruction approach for the retina-inspired spike camera to recover high-speed motion scenes. \cite{Zheng_2021_CVPR,Zhao_2021_CVPR,zheng2021high,zhu2021neuspike} use spiking or regular convolutional neural networks to reconstruct high-quality and high-speed images, as perceived by human vision, from spike streams. Spike vision shows obvious advantages in capturing high-speed moving objects or scenes, so it provides new solutions to some long-standing problems in the field of machine vision. In this paper, we propose a novel method for high-quality spike depth estimation by fusing monocular and stereo depth estimation.
\begin{figure*}[t]
\includegraphics[width=17.2cm, height=7.2cm]{Graphs/framework.pdf}
\centering
\caption{\textbf{Illustration of the network architecture}. The network consists of three major modules. Processed spike data pairs are sent into spike encoders, which contain 3 downsampling layers, for initial representation (a). Monocular and stereo branches deal with these features, and output depth and disparity respectively (b1, b2). A final uncertainty-guided fusion is performed to aggregate monocular and stereo results (c).}
\label{fig:3}
\end{figure*}
\section{Proposed Method}
In this section, we present our uncertainty-guided depth fusion framework (UGDF), which exploits the complementary strengths of the stereo and monocular tasks on spike data. The whole framework is illustrated in Fig.~\ref{fig:3} and consists of four components.
\subsection{Spike Data Analysis}
\label{sec3.1}
For spike camera, natural lights are captured by photoreceptors and converted to voltage under the integration of time series $t$. Once the voltage at a certain sensing unit reaches a threshold $\Theta$, a one-bit spike is fired and the voltage is reset to zero at the same time~\cite{9181055}.
\begin{equation}
S(i, j, t) = \left\{
\begin{array}{l}
1 , \quad \int_{t0_{i,j}^{pre}}^{t} I(i, j) \, dt \ge \Theta\\
0 , \quad \int_{t0_{i,j}^{pre}}^{t} I(i, j) \, dt < \Theta
\end{array}
\right.
\end{equation}
The above formula reveals the basic working pipeline of the spike camera, where $I(i,j)$ represents the luminance of pixel $(i,j)$, and $t0^{pre}_{i,j}$ represents the time at which the last spike was fired at pixel $(i,j)$. An ideal analog-to-digital conversion (ADC) process would be continuous in time, but such a process does not exist due to the inherent limitations of digital circuits. Even so, the spike camera is still able to generate much denser, stream-like frames than RGB cameras, at a maximum frequency of 40000Hz\cite{Zhao_2021_ICCV,zhu2020retina, Zhao_2021_CVPR}. Suppose we have an $H \times W$ receptive field; the camera then outputs an $H \times W$ binary spike frame at a certain moment, and as time goes on, high-frequency spike frames are produced. However, directly performing depth estimation on spike frames remains challenging. On the one hand, the high contrast of 1-bit spike data makes it difficult to distinguish local context information. On the other hand, different light intensities in the scene cause different frequencies of spike generation. So in practice, we stack the spike data within a fixed-size time window into a multi-channel tensor. For example, we take spike frames from 100 consecutive time steps and concatenate them along the time dimension into $100 \times H \times W$ voxels, which then become the inputs to our designed networks.
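As an illustration of this integrate-and-fire model and of the voxelization we use, the following Python sketch simulates binary spike frames from a luminance sequence and stacks 100 consecutive frames into an input voxel (the threshold value and variable names are illustrative assumptions):
\begin{verbatim}
import numpy as np

def simulate_spikes(luminance, theta=1.0):
    # luminance: [T, H, W] per-pixel light intensity at each time step.
    # Integrate intensity over time; fire a 1-bit spike and reset the
    # accumulator whenever it reaches the threshold theta.
    T, H, W = luminance.shape
    acc = np.zeros((H, W))
    spikes = np.zeros((T, H, W), dtype=np.uint8)
    for t in range(T):
        acc += luminance[t]
        fired = acc >= theta
        spikes[t][fired] = 1
        acc[fired] = 0.0
    return spikes

# Stack 100 consecutive binary frames into one multi-channel input voxel:
# voxel = spikes[t0:t0 + 100]   # shape [100, H, W]
\end{verbatim}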
\subsection{UGDF Framework}
\label{sec3.2}
We propose a simple yet efficient network that includes a shared spike-encoder and a spike-decoder with two branches.
First, we build a neuromorphic encoding module to extract spiking features in both the time domain and the frequency domain. The spike voxel $V \in \mathbb{Z}^{100\times H\times W}$ is split into spike sequences $V_s = \{ v_1, v_2, ..., v_s \} $ by a fixed-length time window $n$, where $s = \lfloor 100/n \rfloor$ and $v_s\in \mathbb{Z}^{n\times H\times W}$. Then, the spike sequences are fed into a ConvRNN to extract temporal connections. Meanwhile, an FFT operation is performed on the spike voxel $V$ to extract global information in the frequency domain.
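A minimal PyTorch-style sketch of this encoding step is given below; the concrete ConvRNN cell, channel sizes, and the way the frequency summary is merged are illustrative assumptions rather than the exact configuration:
\begin{verbatim}
import torch
import torch.nn as nn

class NeuromorphicEncoding(nn.Module):
    # Split a [B, 100, H, W] spike voxel into sequences of width n,
    # run a simple convolutional recurrence over them, and add a
    # frequency-domain summary obtained with an FFT over time.
    def __init__(self, n=24, hidden=16):
        super().__init__()
        self.n, self.hidden = n, hidden
        self.cell = nn.Conv2d(n + hidden, hidden, 3, padding=1)  # ConvRNN cell
        self.out = nn.Conv2d(hidden + 1, hidden, 1)

    def forward(self, v):                        # v: [B, 100, H, W]
        v = v.float()
        B, T, H, W = v.shape
        h = v.new_zeros(B, self.hidden, H, W)
        for t0 in range(0, T - self.n + 1, self.n):  # trailing frames dropped
            chunk = v[:, t0:t0 + self.n]             # [B, n, H, W]
            h = torch.tanh(self.cell(torch.cat([chunk, h], dim=1)))
        freq = torch.fft.rfft(v, dim=1).abs().mean(dim=1, keepdim=True)
        return self.out(torch.cat([h, freq], dim=1))
\end{verbatim}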
We use a shared deep encoder to learn a representation for building the stereo cost volume and the monocular depth regression, as shown in part (a) of Figure 3. We adopt MobileNetV3\cite{howard2019searching} as our encoder to make a trade-off between computation cost and model performance. The encoder contains 3 downsampling stages, and the final feature map size is $B \times 256 \times \frac{H}{8} \times \frac{W}{8}$.
As shown in parts (b1) and (b2) of Figure 3, two parallel branches stretch away for monocular and stereo depth estimation and serve as the spike-decoder. Different from previous works, we fuse monocular and stereo depth estimation in one workflow at a multi-task level. In the stereo branch, we exploit the common ground of the monocular and stereo tasks in learning a global context representation. We therefore take advantage of the shared encoder module and concatenate the unary features (the codings obtained from the spike encoder) to build a 4D cost volume ($ 256 \times \textit{Max-Disp.} \times \frac{H}{8} \times \frac{W}{8}$) for stereo disparity regression, where $\textit{Max-Disp.}$ represents the maximum disparity level to regress. Then, inspired by \cite{chang2018pyramid}, we perform a 3D hourglass-shaped convolution to aggregate the disparity-dimension features of the 4D cost volume. We stack three 3D hourglasses, each of which contains two blocks with \textit{$3\times3\times3$} kernel size and 2-stride 3D convolution, and two blocks with \textit{$3\times3\times3$} kernel size and 2-stride 3D transposed convolution. The disparity map is regressed at the final 3D convolution stage via a \textit{soft-argmin} operation\cite{kendall2017end}:
\begin{equation}
\textit{soft-argmin:}=\sum_{d=0}^{Disp^{*}_{max}}{d \times \gamma{(-c_d)}}
\end{equation}
where $\gamma{(\cdot)}$ represents the soft-max operation along the disparity dimension, $c_d$ is the predicted cost for disparity $d$, and $Disp^{*}_{max}$ is the length of the disparity dimension of the output features. The final disparity is thus a probability-weighted average. In the training phase, the disparity outputs of all three 3D hourglasses are involved in the loss function, and the last of the three outputs is used for evaluation.
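In code, the soft-argmin of Eq.~(2) reduces to a softmax-weighted expectation over the disparity dimension; a minimal PyTorch sketch (assuming the cost tensor layout below) is:
\begin{verbatim}
import torch
import torch.nn.functional as F

def soft_argmin(cost):
    # cost: [B, D, H, W] matching costs; lower cost = better disparity.
    prob = F.softmax(-cost, dim=1)                        # gamma(-c_d)
    disp = torch.arange(cost.shape[1], device=cost.device).float()
    return (prob * disp.view(1, -1, 1, 1)).sum(dim=1)     # [B, H, W]
\end{verbatim}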
In the monocular branch, the right unary features (coding) are sent to a decoding block for depth estimation. The decoder consists of three upsampling blocks; each block contains one bilinear interpolation module and two convolutional layers along with batch normalization and Mish activation. The output layer is a $1\times1$ convolution which squeezes the features to two channels, one of which is used for depth estimation and the other for uncertainty allocation, as described in the next subsection.
\subsection{Uncertainty Guided Fusion}
\label{sec3.3}
Inspired by SUB-Depth\cite{zhou2021sub}, we assume the distribution over the output of either branch can be modeled as an exponential-family distribution such as the Laplace distribution or the Gaussian distribution. The stereo branch and the monocular branch adopt the same approach; we use the monocular branch as an illustrative example. Given a dataset with left and right spike frames and corresponding depth ground truth $(x_l, x_r, y_l, y_r)$, we let our monocular branch output the mean $\hat{y}$ and scale $\sigma$ of the posterior probability distribution $p(y_r|\hat{y_r}, x_r)$. We use the Laplace distribution:
\begin{equation}
p(y_r|\hat{y_r}, x_r) = \frac{1}{2\sigma}\exp\left(\frac{-|\hat{y_r}-y_r|}{\sigma}\right)
\end{equation}
Taking the logarithm gives the log-likelihood:
\begin{equation}
\log p(y_r|\hat{y_r}, x_r) = -\log(\sigma) - \frac{|\hat{y_r}-y_r|}{\sigma} + \mathrm{const.}
\end{equation}
So, according to maximum a posteriori estimation, an uncertainty loss can be formulated as:
\begin{equation}
loss_{unc.} = \log(\sigma) + \frac{|\hat{y_r}-y_r|}{\sigma}
\label{eq:unc}
\end{equation}
We can minimize this loss function to obtain a maximum a posteriori estimate of the monocular depth $\hat{y}$. The uncertainty coefficient $\sigma_m$ and the predicted depth $\hat{y}$, which are the outputs of part (b1), are regressed from the same decoder at the same time, so $\sigma_m$ can be seen as the prediction uncertainty of the monocular depth estimation task. Similar to the monocular branch, a lightweight CNN is added behind the probability map of the stereo branch and regresses the uncertainty coefficient $\sigma_s$ after a sigmoid activation.
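Eq.~(5) translates directly into a few lines of code. A common numerical-stability trick, shown here as an assumption rather than the exact implementation, is to let the network predict $\log\sigma$ instead of $\sigma$:
\begin{verbatim}
import torch

def uncertainty_loss(pred, target, log_sigma):
    # Eq. (5): log(sigma) + |pred - target| / sigma, averaged over pixels.
    sigma = log_sigma.exp()
    return (log_sigma + (pred - target).abs() / sigma).mean()
\end{verbatim}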
We notice that the monocular branch outperforms the stereo branch in farther regions, while the stereo branch is better at predicting closer regions. We therefore want the fusion to combine the advantages of both branches, and define a distance threshold:
\begin{equation}
\sigma_{dis.} = D_{max} \frac{e^{2(\sigma_m - \sigma_s)} }{1 + e^{2(\sigma_m - \sigma_s)}}
\label{eq:dis}
\end{equation}
\begin{figure}[t!]
\includegraphics[width=0.45\textwidth]{Graphs/nmencode.pdf}
\centering
\caption{Neuromorphic encoding module. We use a Conv-RNN to extract local information, and meanwhile a Fast Fourier Transform is applied to the whole spike voxel to extract global information.}
\label{fig:5}
\end{figure}
With estimated uncertainty, an uncertainty-guided fusion mask $F$ can be defined as:
\begin{equation}
F_i = \left\{
\begin{array}{l}
0 , \hat{D}_{mono.} \leq \sigma_{dis.}\\
1 , \hat{D}_{mono.} > \sigma_{dis.}
\end{array}
\right.
\end{equation}
where $i$ denotes the $i$-th element of the mask, and $\sigma_{dis.}$ is the adaptive uncertainty-guided threshold of Eq.~(\ref{eq:dis}).
To take advantage of both branches, we fuse the monocular depth prediction $\hat{D}_{mono.}$ and the stereo depth prediction $\hat{D}_{ster.}$, exploiting the complementary advantages of the monocular and stereo models. Instead of directly performing a linear addition of the two outputs, we fuse them in a more effective uncertainty-guided way:
\begin{equation}
\hat{D_f} = F \odot \hat{D}_{mono.} + (1 - F) \odot \hat{D}_{ster.}
\end{equation}
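Putting Eqs.~(6)--(8) together, the fusion step can be sketched as follows (a minimal sketch; it assumes normalized depths, so $D_{max}$ defaults to 1.0):
\begin{verbatim}
import torch

def uncertainty_guided_fusion(d_mono, d_ster, sigma_m, sigma_s, d_max=1.0):
    # Eq. (6): adaptive distance threshold from the two uncertainty maps;
    # note that exp(2x) / (1 + exp(2x)) equals sigmoid(2x).
    sigma_dis = d_max * torch.sigmoid(2.0 * (sigma_m - sigma_s))
    # Eq. (7): binary mask; use the monocular prediction beyond the threshold.
    mask = (d_mono > sigma_dis).float()
    # Eq. (8): elementwise fusion of the two branch outputs.
    return mask * d_mono + (1.0 - mask) * d_ster
\end{verbatim}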
\subsection{UGDF Loss Functions}
\label{sec3.4}
We present training strategies for the baseline network without fusion and for UGDF with fusion. The training loss of the baseline network consists of the stereo disparity regression loss \textit{$loss_{disp.}$}, which uses a smooth-L1 loss under the supervision of generated disparity labels, and the monocular depth estimation loss \textit{$loss_{depth.}$}, which is supervised by the depth ground truth. The baseline loss $loss_{base}$ is shown below:\\
\begin{equation}
loss_{base.} = loss_{disp.} + loss_{depth.}
\end{equation}
in which \textit{$loss_{disp.}$} and \textit{$loss_{depth.}$} are shown as below:
\begin{equation}
loss_{disp.}(d^{*}, \hat{d}) = \frac{1}{N}\sum_{i=1}^{M}\sum_{j=1}^{N} \alpha_i \cdot \mathrm{smooth}_{L1}(d^{*}_{j}, \hat{d}_{i,j})
\end{equation}
\begin{equation}
loss_{depth.}(D, \hat{D}) = \frac{1}{n}\sum_i{c_i}^2 - \frac{1}{n^2}(\sum_i{c_i})^2 + \eta
\end{equation}
where $\eta = 0.1$ and $c_i = \log(D_i) - \log(\hat{D_i})$. $N$ is the size of the data and $M=3$ is the number of stacked 3D hourglasses, with $\left\{ \alpha_1, \alpha_2, \alpha_3 \right\} = \left\{ 0.5, 0.7, 1.0 \right\}$. $\hat{D}$ and $D$ represent the predicted depth and the depth ground truth respectively. Similarly, $\hat{d}$ denotes the predicted disparity and $d^{*}$ the generated disparity ground truth.
The training loss of UGDF augments the baseline losses with two uncertainty terms, the monocular branch uncertainty \textit{$loss_{mono\_unc.}$} and the stereo branch uncertainty \textit{$loss_{ster\_unc.}$}. The whole UGDF loss \textit{$loss_{ugdf.}$} is shown as:
\begin{equation}
loss_{ugdf.} = loss_{base.} + loss_{ster\_unc.} + loss_{mono\_unc.}
\label{eq:ugde}
\end{equation}
where \textit{$loss_{mono\_unc.}$} and \textit{$loss_{ster\_unc.}$} follow Eq.~\ref{eq:unc}, and the fused depth $\hat{D_f}$ is obtained from the two branches as described in the previous subsection. Note that the depth of the stereo branch is converted from disparity using the intrinsic parameters of the camera. The training details of the baseline and of UGDF are the same; further training details are presented in Section 5.
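For clarity, the overall objective of Eq.~(13) can be sketched as below, reusing the \texttt{uncertainty\_loss} helper from above (the output and label names are our own assumptions):
\begin{verbatim}
import torch
import torch.nn.functional as F

def ugdf_loss(outputs, labels):
    # outputs: stereo disparities (3 hourglass outputs), monocular depth,
    # stereo depth, and two log-uncertainty maps; labels: disparity/depth GT.
    alphas = [0.5, 0.7, 1.0]
    loss_disp = sum(a * F.smooth_l1_loss(d, labels["disp"])
                    for a, d in zip(alphas, outputs["disps"]))
    c = torch.log(labels["depth"]) - torch.log(outputs["depth_mono"])
    loss_depth = (c ** 2).mean() - c.mean() ** 2 + 0.1
    loss_unc_m = uncertainty_loss(outputs["depth_mono"], labels["depth"],
                                  outputs["log_sigma_m"])
    loss_unc_s = uncertainty_loss(outputs["depth_ster"], labels["depth"],
                                  outputs["log_sigma_s"])
    return loss_disp + loss_depth + loss_unc_m + loss_unc_s
\end{verbatim}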
\section{Spike-Depth Dataset: CitySpike20K}
This section introduces different aspects of the dataset we propose. The dataset includes RGB scenes and their corresponding spike frames and depth maps. All these data are generated by a simulated spike camera in a Unity3D virtual environment. The dataset describes 10 sequences of city street scenes, containing 5 day scenes and 5 night scenes.
Our proposed CitySpike20K dataset provides a depth estimation benchmark for spike data. The scenes are created via a simulated spike camera, recording a fast-moving car in a city street scene at a frequency of 1000Hz. The resolution of the recorded data is $1024 \times 768$. We also build a depth field for these scenes and store it as 24-bit depth maps. The ground truth provides 0.3--1000m absolute depth. In addition, we provide the focal length $f$ and baseline length $base_{len}$ of the stereo camera in the supplement. We also convert depth $D$ to disparity $disp$ with the function $disp. = \frac{f*base_{len}}{D}$. A visualization of our dataset is shown in the \textbf{supplementary file}. Besides, in terms of sensor collaboration, we provide 842 pairs of RGB images from regular stereo cameras, dense spike frames from a stereo Vidar, as well as depth maps from stereo depth cameras; the three kinds of data are organized in one-to-one correspondence. In addition, we provide a demo sequence of 40000Hz spike data, recording a 91km/h car driving in a city street. This demo is for evaluating depth estimation algorithms loaded with high-frequency spike data.
\section{Experiments}
In this section, we conduct extensive experiments to show the advantages of UGDF. We evaluate UGDF by comparing it with state-of-the-art and classic depth estimation methods that have shown great performance on RGB depth datasets such as KITTI\cite{Uhrig2017THREEDV} and NYUD-V2\cite{Silberman:ECCV12}. We also conduct comprehensive ablation studies to evaluate the contribution of each component in the last subsection. Due to space limitations, some details of the experiments and results are provided in the supplementary materials.
\begin{table*}[t]
\centering
\scriptsize
\caption{Quantitative results on the CitySpike20K (described as CS20K below) \textbf{validation} set. We make a comparison with GwcNet\cite{guo2019group}, CFNet\cite{shen2021cfnet}, and PSMNet\cite{chang2018pyramid}. The evaluation metrics are as introduced in Appendix I.}
\begin{tabular}{c|c|c|c|ccccccc}
\toprule
Dataset & Method & Approach & Modality & \multicolumn{1}{l}{Abs\_Rel↓}\cellcolor{lightgray} & \multicolumn{1}{l}{RMSE ↓} & \multicolumn{1}{l}{Sq\_Rel ↓} & \multicolumn{1}{l}{RMSE\_log ↓} & \multicolumn{1}{l}{a1 ↑} & \multicolumn{1}{l}{a2 ↑} & \multicolumn{1}{l}{a3 ↑} \\
\midrule
& PSMNet & Ster.& RGB & 0.4564 & 15.484 & 12.990 & 0.734 & 0.469 & 0.668 &0.743 \\
CS20K & GwcNet & Ster. & RGB & 0.419 & 19.724 & 9.753 & 0.632 & 0.469 & 0.685& 0.767 \\
& CFnet & Ster. & RGB & 0.4038 & 14.928 & 8.870 & 0.437 & 0.593 & 0.677&0.786 \\
\midrule
CS20K & \textbf{UGDF}(Ours) & Fusion & Spike & \textbf{0.2282 } & \textbf{11.075 } & \textbf{4.699} & \textbf{0.305} & \textbf{0.754} & \textbf{0.879}&\textbf{0.942} \\
\bottomrule
\end{tabular}%
\label{tab:1}%
\end{table*}%
\begin{table*}[t]
\centering
\scriptsize
\caption{Quantitative results on the CitySpike20K \textbf{test} set. We add two monocular algorithms as baselines: DPT\cite{ranftl2021vision} and UNet\cite{ronneberger2015u}.}
\begin{tabular}{c|c|c|c|ccccccc}
\toprule
Dataset & Method & Approach & Modality & \multicolumn{1}{l}{Abs\_Rel↓} & \multicolumn{1}{l}{RMSE ↓} \cellcolor{lightgray} & \multicolumn{1}{l}{Sq\_Rel ↓} & \multicolumn{1}{l}{RMSE\_log ↓} & \multicolumn{1}{l}{a1 ↑} & \multicolumn{1}{l}{a2 ↑} & \multicolumn{1}{l}{a3 ↑} \\
\midrule
& UNet & Mono.& RGB & 0.3612 & 19.217 & 6.981 & 0.502 & 0.569 & 0.765 & 0.893 \\
& DPT & Mono.& RGB & 0.249& 13.641 & 4.349 & 0.379 & 0.632 & 0.817 & 0.925 \\
CS20K & PSMNet & Ster.& RGB & 0.4341& 16.294 & 9.247 & 0.840 & 0.411 & 0.626 & 0.712 \\
& GwcNet & Ster. & RGB & 0.3931 & 18.680 & 8.745 & 0.577 & 0.492 & 0.704& 0.787 \\
& CFnet & Ster. & RGB & 0.3825 & 13.794 & 7.925 & 0.496 & 0.467 & 0.723& 0.836 \\
\midrule
CS20K & \textbf{UGDF}(Ours) & Fusion & Spike & \textbf{0.1997 } & \textbf{10.953 } & \underline{4.879} & \underline{0.412} & \textbf{0.790} & \textbf{0.888}&\textbf{0.945} \\
\bottomrule
\end{tabular}%
\label{tab:2}%
\end{table*}%
\subsection{Implementation Details}
\label{sec:5.1}
We train our proposed UGDF network on spike-depth pairs, including stereo spike frames and right depth-ground-truth.
The whole training phase contains 200 epochs and takes about 16 hours with the batch size of 4 on two NVIDIA-Tesla P100 GPUs, for $256 \times 512$ resolution spiking frames.
We utilize 24-bit 0-1000m absolute depth ground-truth to supervise training for the monocular branch. We normalize depth ground truth $D$ to $D^{*} \in (0,1)$, with the function $ D^{*} = D / 1000 $. Meanwhile, the disparity is transformed from depth with camera intrinsics.
As for optimization, we use the Adam optimizer with $(\beta_1, \beta_2) = (0.9, 0.999)$. We set an initial learning rate of 1e-3 and decay it to 0.33e-3 at epoch 35 for a smoother optimization process.
In this section, we compare UGDF against the state-of-the-art and classic depth estimation methods on CitySpike20K dataset.
\textbf{Data processing.} Our proposed dataset contains 20K frames of spike-depth pairs. We split 7 of the 10 sequences for training, 2 sequences for testing, and 1 sequence for validation. All the data used for training, validation, and testing is sampled every 100 time-stamps to form spike voxels. We thus obtain 140 training pieces, and 40 and 20 pieces for testing and validating our framework, respectively. To emphasize the advantage of spike data, we use blurred 30fps RGB frames from our dataset to train the RGB-based baseline methods, giving 571 training pieces, 142 testing pieces, and 111 validation pieces for the baseline methods.
\textbf{Baseline methods.} To demonstrate the effectiveness of UGDF, we compare it with state-of-the-art and classic depth estimation methods on our proposed CitySpike20K dataset. For monocular methods, we choose the classical UNet \cite{ronneberger2015u} and DPT\cite{ranftl2021vision}. UNet has been demonstrated to be a successful design for semantic segmentation\cite{ronneberger2015u, baheti2020eff} and image reconstruction\cite{Chen_2021_CVPR, chen2021hinet}; we adopt its proposed structure and evaluate it on our spike depth estimation task. For DPT, we use ViT-B/16 as the backbone and $224\times224$ as the input resolution. PSMNet\cite{chang2018pyramid} uses subtraction and concatenation to build a 3D cost volume. GwcNet\cite{guo2019group} proposes group-wise correlation to reduce computation while conducting 3D convolution. CFNet\cite{shen2021cfnet} employs a variance-based uncertainty estimation to adaptively search the disparity space.
\begin{figure*}[t]
\centering
\subfigure[RGB Frame]{\includegraphics[width=0.15\textwidth]{Graphs/rgb.pdf}}
\subfigure[Spike Frame]{\includegraphics[width=0.15\textwidth]{Graphs/spike.pdf}}
\subfigure[Mono. result]{\includegraphics[width=0.15\textwidth]{Graphs/mono.pdf}}
\subfigure[Ster. result]{\includegraphics[width=0.15\textwidth]{Graphs/ster.pdf}}
\subfigure[Fusion result]{\includegraphics[width=0.15\textwidth]{Graphs/fusion.pdf}}
\subfigure[RGB result]{\includegraphics[width=0.15\textwidth]{Graphs/blur12.pdf}}\\
\caption{Visualization of depth estimation on CitySpike20K. (a) shows the 30Hz RGB data, and (b) shows one spike frame from a spike voxel. (c)--(e) are the output results of our method, and (f) is the UNet output for RGB depth estimation.}
\label{fig:data_distribution}
\end{figure*}
\textbf{Main results and analysis on CS20K.} Tables 1 and 2 show quantitative results in comparison with the RGB methods. We experiment with classic monocular depth estimation works as well as stereo depth estimation methods. We can see that, thanks to the uncertainty-guided fusion, our result achieves the top performance among all the methods. Compared with the best monocular method, UGDF reduces the error by $4.93\%$ and $2.688$ in terms of the Abs\_Rel and RMSE metrics, respectively. Compared with the stereo methods, we also gain improvements on all metrics. We show a qualitative comparison in Figure 5. As can be seen, our method achieves better depth estimation compared with the blurred-RGB-based methods. Other visualization results are in our supplement.
\begin{table*}[t]
\centering
\scriptsize
\caption{Quantitative results on the Spike-Real \textbf{test} set. Our UGDF framework still improves over the two individual branches.}
\begin{tabular}{c|c|c|c|ccccccc}
\toprule
Dataset & Method & Approach & Modality & \multicolumn{1}{l}{Abs\_Rel↓} \cellcolor{lightgray} & \multicolumn{1}{l}{RMSE ↓} & \multicolumn{1}{l}{Sq\_Rel ↓} & \multicolumn{1}{l}{RMSE\_log ↓} & \multicolumn{1}{l}{a1 ↑} & \multicolumn{1}{l}{a2 ↑} & \multicolumn{1}{l}{a3 ↑} \\
\midrule
Real & PSMNet & Ster. & Image & 0.3743 & 2.228 & 0.413 & 0.843 & 0.451 & 0.703 & 0.838 \\ \midrule
& & Ster. & & 0.2722 & 1.264 & 0.376 & 0.348 & 0.581 & 0.819 & 0.906\\
Real & \textbf{UGDF}(Ours) & Mono. & Spike & 0.4037 & 1.552 & 1.017 & 0.382 & 0.528 & 0.796& 0.889 \\
& & Fusion & & \textbf{0.2693} & 1.237 & 0.413 & 0.374 & 0.533 & 0.795 & 0.899 \\
\bottomrule
\end{tabular}%
\label{tab:3}%
\end{table*}%
\begin{table}[h]
\centering
\caption{Ablating the fusion effectiveness on CitySpike20K. Our uncertainty-guided fusion is highly efficient and gains improvements over both branches.}
\begin{tabular}{c|c|ccc}
\toprule
Split & Branch & \multicolumn{1}{l}{Abs\_Rel} & \multicolumn{1}{l}{Sq\_Rel } & a1 \\
\midrule
Valid & Mono. & 0.3302 \textbf{(0.102↓)}& 12.759 & 0.738 \\
& Ster. & 0.2543 (0.026↓) & 3.995 & 0.613 \\
& E. Fusion & 0.2652 (0.037↓)& 5.712 & 0.706 \\
& U. Fusion & 0.2282 & 4.699 & 0.754 \\
\midrule
Test & Mono. & 0.2944 \textbf{(0.095↓)} & 12.508 & 0.779 \\
& Ster. & 0.2118 (0.012↓) & 3.780 & 0.753 \\
& E. Fusion & 0.2347 (0.035↓) & 4.018 & 0.761 \\
&U. Fusion. & 0.1997 & 4.897 & 0.791 \\
\bottomrule
\end{tabular}%
\label{tab:4}%
\end{table
\begin{table}[h!]
\begin{center}
\small
\caption{Ablation results on the test split of CS20K for the window width of neuromorphic encoding. The runtime statistics are measured on an RTX 2080Ti GPU for a single forward pass of the network with a batch size of 1.}
\begin{tabular}{c|c|ccccc}
\toprule
Width & Time & \multicolumn{5}{c}{\textbf{Metrics}} \\
 & (ms) & Abs\_Rel & Sq\_Rel & a1 & a2 & a3 \\
\midrule
8 & 5.3 &0.2301 & 5.146 & 0.756 & 0.877 & 0.942 \\
16 & 2.8 &0.2143 & 4.699 & 0.764 & 0.892 & 0.946 \\
24 & 2.1 &0.1997 & 4.879 & 0.790 & 0.888 & 0.945 \\
32 & 1.6 &0.2552 & 5.612 & 0.726 & 0.876 & 0.934 \\
\bottomrule
\end{tabular}
\end{center}
\label{tab:5}
\end{table}
\textbf{Evaluation on the Spike-Real Set}
We also train and evaluate our network on a dataset captured by a real-world stereo Vidar in a series of outdoor scenes. The dataset contains 40 sequences of outdoor scenes, and we split 33 sequences for training and 7 sequences for testing. Table 3 and Figure 6 show the results evaluated on its test set.
\subsection{Ablation Study}
\label{sec:5.3}
We carry out ablation experiments from two aspects: the first is to explore the effect of different choices of time-window width, and the second is to verify the effectiveness of the uncertainty-guided fusion design.
\textbf{Effectiveness of Uncertainty-Guided Fusion}
We conduct experiments to verify the effectiveness of the fusion jointly guided by monocular and stereo uncertainty. In order to demonstrate its benefits, we first compare it with a linear additive ensemble fusion. To be specific, we make a uniform linear addition of the monocular and stereo estimation results, denoted as E. Fusion in Table 4. As can be seen, the linear additive fusion is inferior to our uncertainty-guided fusion. In addition, we report the improvement gap between the fusion results and the two individual branches. We can see the advantages of our UGDF framework: first, it combines the advantages of both stereo and monocular estimation; second, it brings substantial improvements rather than the compromising fusion of the ensemble style.
\textbf{Time-Window Width of Neuromorphic Encoding}
In our framework, we apply a neuromorphic encoding method to effectively extract features from spike data. As described in Section 3.2, we chunk the spike voxel into spike sequences with a time window of width 24 to obtain better local representations. The sequences are then sent into the Conv-RNN to extract temporal connections between different sequences. Theoretically, applying a smaller time window is beneficial for extracting local connections between spike sequences, yet it increases the convergence and inference time on hardware. We therefore set different widths of the encoding time window, train the whole network for 200 epochs, and evaluate on the test set of CitySpike20K. The results are shown in Table 5, and we can see that 24 turns out to be the optimal choice of time-window width, trading off inference time against precision.
\section{Conclusion}
In this paper, we propose an uncertainty-guided depth fusion framework (UGDF) for spike data, consisting of four modules: a neuromorphic encoding module, a spike encoder, a spike decoder for the monocular and stereo tasks, and an uncertainty-guided fusion part. The main motivation of our work is that monocular depth estimation models and stereo models show different advantages when making predictions on spike data, so it is critical to explore an effective fusion method that leverages the advantages of both tasks. Different from previous works, we fuse the monocular and stereo depth prediction results according to individual adaptive uncertainty estimations. We also generate a spike dataset for depth estimation, CitySpike20K, which contains 20K paired spike-depth data, along with its technical details and evaluation metrics. We demonstrate the efficiency of spike data when applied to fast-moving circumstances. Extensive experiments are conducted to validate the effectiveness of our proposed UGDF. We hope this paper can inspire future works on spike-data depth estimation.
\section*{Appendix I: Proposed Dataset: CitySpike20K}
\subsection*{Introduction and Visualization}
We propose CitySpike20K, a spike-depth dataset to help explore depth estimation algorithms for the spike camera. The dataset is generated by Unity3D and contains 10 sequences, 5 of which are day scenes and 5 of which are night scenes. In the dataset, the frequency of the spike data and the corresponding depth GTs is 1000Hz. Besides, we supply 30Hz RGB images for each scene as well as 1000Hz RGB images aligned with the spike data.
To fully simulate city environments, we add moving automobiles and dynamic traffic lights, setting 5--10 moving automobiles, including buses, cars, vans, and trucks, for each scene. Figure 6 gives a visualization of CitySpike20K, which contains RGB frames, spike data, and depth maps. Specifically, we split scene03 and scene07 for testing, scene09 for validation, and the others for training.
\begin{figure*}[t]
\includegraphics[width=13cm]{supplementary/dataset.pdf}
\centering
\vspace{-5pt}
\caption{ A visualization of our proposed CitySpike20K dataset. We generate it with the Unity3D engine and simulate a vivid city environment along with dense depth maps and spike data.}
\label{fig:dataset}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=16.5cm]{supplementary/prediction1.pdf}
\centering
\vspace{-5pt}
\caption{More prediction results on the CitySpike20K dataset. As can be seen, the stereo estimation results and the monocular estimation results are fused effectively by our framework.}
\label{fig:pred-cs20k}
\end{figure*}
\begin{figure}[h]
\includegraphics[width=8cm]{supplementary/real-vsi.pdf}
\centering
\vspace{-5pt}
\caption{A visualization for Spike-Real dataset and prediction results from its test set.}
\end{figure}
\begin{table*}[t]
\centering
\footnotesize
\caption{Quantitative results on the Spike-Kitti \textbf{test} set. Our UGDF framework still improves over the two individual branches.}
\begin{tabular}{c|c|c|c|ccccccc}
\toprule
Dataset & Method & Approach & Modality & \multicolumn{1}{l}{Abs\_Rel↓} \cellcolor{lightgray} & \multicolumn{1}{l}{RMSE ↓} & \multicolumn{1}{l}{Sq\_Rel ↓} & \multicolumn{1}{l}{RMSE\_log ↓} & \multicolumn{1}{l}{a1 ↑} & \multicolumn{1}{l}{a2 ↑} & \multicolumn{1}{l}{a3 ↑} \\
\midrule
& & Ster. & & 0.1250 & 4.283 & 0.717 & 0.188 & 0.830 & 0.957 & 0.986\\
Spike-Kitti& \textbf{UGDF}(Ours) & Mono. & Spike & 0.1706 & 5.067 & 1.127 & 0.242 & 0.753 & 0.910& 0.968 \\
& & Fusion & & \textbf{0.1247} & 4.281 & 0.721 & 0.189 & 0.829 & 0.957 & 0.985 \\
\bottomrule
\end{tabular}%
\label{tab:6}%
\end{table*}%
\begin{table*}[t]
\centering
\footnotesize
\caption{Quantitative results on CitySpike20K-demo. We make a comparison with DORN\cite{fu2018deep}, GwcNet\cite{guo2019group}, CFNet\cite{shen2021cfnet}, StereoNet\cite{Khamis_2018_ECCV}, PSMNet\cite{chang2018pyramid}, and GANet\cite{Zhang_2019_CVPR}. The evaluation metrics are as introduced above.}
\begin{tabular}{c|c|c|ccccccc}
\toprule
Dataset & Method & Approach & \multicolumn{1}{l}{Abs\_Rel↓} & \multicolumn{1}{l}{RMSE ↓} & \multicolumn{1}{l}{Sq\_Rel ↓} & \multicolumn{1}{l}{RMSE\_log ↓} & \multicolumn{1}{l}{a1 ↑} & \multicolumn{1}{l}{a2 ↑} & \multicolumn{1}{l}{a3 ↑} \\
\midrule
& UNet\cite{ronneberger2015u} & Mono. & 0.2518 & \underline{23.993} & 9.008 & 0.357 & 0.68 & \underline{0.896} & 0.932 \\
demo & DORN\cite{fu2018deep} & Mono. & 0.3857 & 25.258 & 10.691 & 0.438 & 0.409 & 0.841& 0.917\\
& Eigen\cite{eigen2014depth} & Mono.& 0.4262 & 25.154 & 20.363 & 0.459 & 0.542 & 0.800& 0.893 \\
\midrule
& GC-Net\cite{kendall2017end} & Ster. &0.2350& 37.158 & 12.743 & 0.401 & 0.614 & 0.809 & 0.868\\
& GwcNet\cite{guo2019group} & Ster. & \underline{0.1880} & 24.152 & 7.469 & \textbf{0.304} & \underline{0.757} & 0.895& \underline{0.953} \\
& CFnet\cite{shen2021cfnet} & Ster. & 0.2281 & 25.905 & \textbf{5.557} & 0.397 & 0.610 & 0.847&0.926 \\
demo & StereoNet\cite{Khamis_2018_ECCV} & Ster.& 0.2890 & 50.765 & 19.772 & 0.690 & 0.563& 0.727& 0.823\\
& PSMNet\cite{chang2018pyramid} & Ster. & 0.1886 & 28.496 & \underline{7.354} & 0.340& 0.723 & 0.887 &0.941 \\
& GANet-1\cite{Zhang_2019_CVPR} & Ster.&0.3270 & 49.068 & 19.505 & 0.865 & 0.586 & 0.764 & 0.851 \\
& GANet\cite{Zhang_2019_CVPR} & Ster.&0.2963 & 47.202 & 17.598 & 0.714 & 0.576 & 0.771 & 0.857 \\
\midrule
demo & \textbf{Ours } & Fusion & \textbf{0.1715 } & \textbf{22.793 } & 11.217 & \underline{0.306} & \textbf{0.791} & \textbf{0.928}&\textbf{0.961} \\
\bottomrule
\end{tabular}%
\label{tab:7}%
\end{table*}%
\begin{figure*}[h]
\includegraphics[width=16.5cm]{supplementary/vis-kitti.pdf}
\centering
\vspace{-5pt}
\caption{Visualization of prediction results on the Spike-Kitti dataset. As seen, the monocular branch provides smoother and more accurate results in farther regions, and the stereo branch makes sharper and cleaner predictions for closer regions.}
\end{figure*}
\subsection*{Evaluation Metric}
We conduct experiments to evaluate the effectiveness of supervised depth estimation models on CitySpike20K. Our evaluation metrics for depth estimation are described as follows:
Given an estimated depth map $\hat{D}$ and its corresponding ground truth $D$, with $N = H \times W$, $Abs\_Rel$ is quantified as:
\begin{equation}
Abs\_Rel = \frac{1}{N}\sum_{i=1}^{N}\frac{|D_i - \hat{D_i}|}{D_i}
\end{equation}
and RMSE is defined as:
\begin{equation}
RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}||D_i - \hat{D_i}||^{2}}
\end{equation}
We also introduce the $RMSE\_log$ metric:
\begin{equation}
RMSE\_log = \sqrt{\frac{1}{N}\sum_{i=1}^{N}||log(\hat{D_i})-log(D_i)||^{2}}
\end{equation}
and the $Sq\_Rel$ metric:
\begin{equation}
Sq\_Rel = \frac{1}{N}\sum_{i=1}^{N}\frac{||D_i - \hat{D_i}||^{2}}{D_i}
\end{equation}
The above metrics measure output errors from different statistical aspects, weighting the distance between predictions and ground-truth labels; lower values mean better model performance. The metrics below evaluate whether predictions are accurate within a certain ratio of the ground truth, and higher values mean better performance. Note that $j\in \{1,2,3\}$:
\begin{equation}
a_j\ \mathrm{accuracy}:\ \%\ \text{ of }\ D_i\ \text{ s.t. }\ \max\left(\frac{\hat{D_i}}{D_i},\frac{D_i}{\hat{D_i}}\right)=\delta<T=1.25^{j}
\end{equation}
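For reference, all of these metrics can be computed with a straightforward NumPy sketch such as the following:
\begin{verbatim}
import numpy as np

def depth_metrics(pred, gt):
    # pred, gt: arrays of predicted and ground-truth depth (same shape).
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    a1, a2, a3 = (np.mean(ratio < 1.25 ** j) for j in (1, 2, 3))
    return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3
\end{verbatim}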
\begin{figure*}[t]
\includegraphics[width=13.0cm]{supplementary/precision-test.pdf}
\centering
\caption{Accuracy statistics on CitySpike20K test set. The green lines and blue lines represent the monocular and stereo accuracies respectively.}
\label{fig:acc-test}
\end{figure*}
\begin{figure*}[h]
\includegraphics[width=13.0cm]{supplementary/precision-val.pdf}
\centering
\caption{Accuracy statistics on CitySpike20K validation set. }
\label{fig:acc-val}
\end{figure*}
\section*{Appendix II: Performance on other datasets}
\subsection*{Real-Dataset}
As described in our submitted paper, we also evaluate our framework on a dataset captured by a real-world stereo spike camera in a series of outdoor scenes. The dataset contains 40 sequences, each of which includes 3--6 $[400\times250\times400]$ spike voxels in the format $[T\times H \times W]$. We split 33 sequences for training and 7 for testing.
\subsection*{Kitti}
To demonstrate that our UGDF framework still works in real-world scenes, we carry out an experiment on a Spike-Kitti dataset. To convert Kitti\cite{geiger2013vision} from the RGB modality to the spike modality, we first perform $128\times$ frame interpolation using XVFI\cite{sim2021xvfi}. Then we use a simulated-Vidar script to generate spike data from the RGB Kitti images, forming spike voxels in the format $(128\times 375\times 1242)$, where 128 represents the time dimension and $(375\times1242)$ is the original size of the Kitti RGB images. We apply the same neuromorphic encoding as designed for the CitySpike20K dataset in our submitted paper. As mentioned above, we design this experiment to further explore the effectiveness of our fusion strategy. We train our framework for 50 epochs on 4 RTX-2080Ti GPUs. Figure 9 provides a group of visualizations of our outputs on the validation set.
\subsection*{CitySpike20K-demo}
In addition to the 10 sequences of 1000Hz spike data we provide in the CitySpike20K dataset, we also supply a 40000Hz demo to simulate real spike data as closely as possible. The demo contains 60K paired data and records a 1.5-second video of a fast-driving car in a city street. Different from the main paper, we use this demo to evaluate the performance of models loaded with spike data. Considering that existing methods for monocular or stereo depth estimation are mostly based on 3-channel RGB data, we change the input channels of the models to the time-window width of the applied spike sequences, i.e., 32 as we adopted. We use the first half of the demo for training and the second half for testing.
\section*{Appendix III: Statistics to Support Our Motivation }
Two clues inspire our motivation. The first is that the spike camera has unique advantages in dealing with fast-moving circumstances when performing the depth estimation task. The second is that the monocular strategy and the stereo strategy show distinct advantages for depth estimation when loaded with spike data. We supply statistical results to support the second motivation. On the CitySpike20K dataset, we compute the a1, a2, and a3 accuracies in different depth intervals according to the depth GT while evaluating our network. We transform the stereo disparities into depths, count the a1, a2, a3 accuracies for the two branches respectively with the same metrics, and plot them in one coordinate system. Figure 10 shows the statistical results on the test set and Figure 11 shows the results on the validation set. As seen, the stereo branch suffers a great accuracy decrease in far regions, while the monocular branch still maintains a certain reliability. Conversely, the stereo branch is more stable and accurate than the monocular branch in closer regions.
\bibliographystyle{unsrt}
|
\section{Introduction}
\subsection*{Special points}
Let \( S = \Sh_K(\mathbf{G}, X) \) be a Shimura variety.
Then \( S \) has a canonical model over the reflex field \( E_\mathbf{G} = E(\mathbf{G}, X) \).
According to the definition of a canonical model, for every special point \( s \in S \) with reflex field \( E(s) \), the Galois group \( \Gal(\overline \bQ/E(s)) \) acts on the Hecke orbit of \( s \) via a reciprocity morphism \cite[2.2.5]{Del79}.
In particular, if \( \tau \in \Gal(\overline \bQ/E(s)) \), then \( \tau(s) \) is also a special point.
In general \( E(s) \) is a non-trivial extension of \( E_\mathbf{G} \), so this raises the following question.
\begin{question} \label{q:conj-sp-pt}
If \( \tau \in \Gal(\overline \bQ/E_\mathbf{G}) \) does not fix \( E(s) \), is \( \tau(s) \) still a special point?
\end{question}
Langlands \cite{Lan79} stated a conjecture on the conjugation of Shimura varieties which implies the existence of canonical models (the construction of a canonical model, assuming the conjecture of Langlands, was completed in \cite{MS82b}).
Subsequently Borovoi \cite{Bor84} and Milne \cite{Mil83} proved the conjecture of Langlands.
This construction of canonical models gives a positive answer to \cref{q:conj-sp-pt}.
\subsection*{Special subvarieties}
We may ask a similar question about special subvarieties of dimension greater than zero.
By definition, a \defterm{special subvariety} of \( S \) is a geometrically irreducible component of the translate by a Hecke correspondence \( T_g \) of the image of a Shimura morphism \( [f] \colon \Sh_{K_\mathbf{H}}(\mathbf{H}, X_\mathbf{H}) \to \Sh_K(\mathbf{G}, X) \).
The Shimura variety \( \Sh_{K_\mathbf{H}}(\mathbf{H}, X_\mathbf{H}) \) has a canonical model over \( E_\mathbf{H} = E(\mathbf{H}, X_\mathbf{H}) \).
By \cite[Cor.~5.4]{Del71}, the morphism \( [f] \) is defined over the compositum \( E_\mathbf{G} E_\mathbf{H} \).
Consequently if \( Z \) is a component of the image of \( T_g \circ [f] \), then for every \( \tau \in \Gal(\overline \bQ/E_\mathbf{G} E_\mathbf{H}) \), \( \tau(Z) \) is again a geometrically irreducible component of the image of \( T_g \circ [f] \) and so \( \tau(Z) \) is a special subvariety of \( S \).
This leads to the following question.
\begin{question} \label{q:conj-sp-subvar}
If \( \tau \in \Gal(\overline \bQ/E_\mathbf{G}) \) does not fix \( E_\mathbf{G} E_\mathbf{H} \), is \( \tau(Z) \) still a special subvariety?
\end{question}
The conjecture of Langlands tells us that \( \tau \Sh_{K_\mathbf{H}}(\mathbf{H}, X_\mathbf{H}) \) is isomorphic to another Shimura variety, but it does not immediately tell us that the morphism \( \tau[f] \colon \tau \Sh_{K_\mathbf{H}}(\mathbf{H}, X_\mathbf{H}) \to \Sh_K(\mathbf{G}, X) \) is a Shimura morphism.
A positive answer to \cref{q:conj-sp-subvar} can be obtained from the proof of \cite[Lemma~9.5]{MS82b}.
Thus the answers to the above questions are implicit in \cite{MS82b} but they are not explicitly stated there.
The first goal of this paper is to explain enough of the machinery of \cite{MS82b} to answer \cref{q:conj-sp-pt,q:conj-sp-subvar}.
\subsection*{Complexity}
The second goal of the paper is to prove that all Galois conjugates of a special point have the same complexity.
The complexity of a special point is a quantity used in studying questions of unlikely intersections such as the André--Oort conjecture.
The complexity of special points in general Shimura varieties was first used by Ullmo and Yafaev \cite{UY14}.
For a precise definition, we use a generalisation of \cite[Definition~10.1]{DR}.
We have generalised the definition slightly: \cite{DR} considered only a single geometrically irreducible component of a Shimura variety, and therefore could always choose \( g=1 \) (in the notation of the definition below).
We need to explicitly account for~\( g \) so that the complexity is well-defined for special points in all components of the Shimura variety.
Let \( s \) be a special point in \( \Sh_K(\mathbf{G}, X) \).
The complexity of~\( s \) is defined as follows.
\begin{enumerate}
\item Write \( s = [h,g]_K \) for some \( h \in X \) and \( g \in \mathbf{G}(\mathbb{A}_f) \).
\item Let \( \mathbf{M} \) be the Mumford--Tate group of \( h \).
This is a \( \mathbb{Q} \)-torus in \( \mathbf{G} \).
\item Let \( K_\mathbf{M}^m \) be the maximal compact subgroup of \( \mathbf{M}(\mathbb{A}_f) \).
There is a unique maximal compact subgroup because \( \mathbf{M} \) is a torus.
\item Let \( K_\mathbf{M} = gKg^{-1} \cap \mathbf{M}(\mathbb{A}_f) \).
This is a compact subgroup, so contained in \( K_\mathbf{M}^m \).
\item Let \( D_\mathbf{M} \) be the absolute value of the discriminant of the splitting field of~\( \mathbf{M} \).
\item Let \( \Delta(s) = \max\{D_\mathbf{M}, \, [K_\mathbf{M}^m : K_\mathbf{M}]\} \).
\end{enumerate}
The \defterm{complexity} of \( s \) is \( \Delta(s) \).
If we make a different choice of \( (h',g') \in X \times \mathbf{G}(\mathbb{A}_f) \) lifting \( s \), then \( h' = \gamma h \) for some \( \gamma \in \mathbf{G}(\mathbb{Q}) \) such that \( g'^{-1} \gamma g \in K \).
Writing \( \mathbf{M}' = \MT(h') \) and \( K_{\mathbf{M}'} = g' K g'^{-1} \cap \mathbf{M}'(\mathbb{A}_f) \), we get \( \mathbf{M}' = \gamma \mathbf{M} \gamma^{-1} \) and \( K_{\mathbf{M}'} = \gamma K_{\mathbf{M}} \gamma^{-1} \).
Hence \( \Delta(s) \) is independent of the choice of \( (h,g) \).
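For orientation, we sketch a standard example. Let \( \Sh_K(\mathbf{G}, X) \) be the modular curve, with \( \mathbf{G} = \mathbf{GL}_2 \), \( X = \mathcal{H}_1^\pm \) and \( K = \mathbf{GL}_2(\hat{\mathbb{Z}}) \). A special point \( s \) corresponds to an elliptic curve with CM by an order \( \mathcal{O} \) of conductor \( f \) in an imaginary quadratic field \( F \). Then \( \mathbf{M} \cong \Res_{F/\mathbb{Q}} \mathbb{G}_m \), whose splitting field is \( F \), so \( D_\mathbf{M} = |\mathrm{disc}(F)| \); for a suitable choice of \( (h,g) \) one finds \( K_\mathbf{M}^m = \hat{\mathcal{O}}_F^\times \) and \( K_\mathbf{M} = \hat{\mathcal{O}}^\times \), so that \( [K_\mathbf{M}^m : K_\mathbf{M}] \) is of size roughly \( f \). Hence \( \Delta(s) \) is polynomially comparable to \( |\mathrm{disc}(\mathcal{O})| = f^2 \, |\mathrm{disc}(F)| \).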
Our main result on complexity is the following.
\begin{theorem} \label{galois-complexity}
Let \( S = \Sh_K(\mathbf{G}, X) \) be the canonical model of a Shimura variety over the reflex field \( E_\mathbf{G} = E(\mathbf{G}, X) \).
Let \( s \in S \) be a special point.
Then for every \( \tau \in \Gal(\overline \bQ/E_\mathbf{G}) \), we have \( \Delta(\tau(s)) = \Delta(s) \).
\end{theorem}
The proof of \cref{galois-complexity} uses details of Milne and Shih's construction of descent data for Shimura varieties.
As with the answers to \cref{q:conj-sp-pt,q:conj-sp-subvar}, the theorem has a simpler proof when restricted to \( \tau \) fixing \( E(s) \): see \cite[p.~156]{Daw15}.
\subsection*{Motivation: unlikely intersections}
The motivation for this paper came from work on unlikely intersections in collaboration with Christopher Daw.
One aim of the paper is to give full proofs of certain claims in \cite{DR} which are well-known to experts but for which either no proof appears in the literature,
or the proof can be found only within the proof of a larger result and the claim used in \cite{DR} is never explicitly stated.
In particular, the last paragraph of \cite[p.~1869]{DR} claims that the answer to \cref{q:conj-sp-subvar} is positive.
The same paragraph also claims that
\[ \Delta(\sigma(Z)) = \Delta(Z) \]
where \( Z \) is a special subvariety of a Shimura variety component \( S \), \( \sigma \) is an element of \( \Gal(\overline \bQ/F) \) (where \( F \) is a field of definition for \( S \)), and \( \Delta \) is defined as
\[ \Delta(Z) = \max \{ \deg(Z), \min\{ \Delta(P) : P \in Z \text{ is a special point} \} \}. \]
The equality \( \Delta(\sigma(Z)) = \Delta(Z) \) is an immediate corollary of \cref{galois-complexity}.
The results of this paper will also be used in a forthcoming paper of Daw and the author on unlikely intersections with Hecke--facteur families.
\subsection*{Outline of paper}
Sections~\ref{sec:svs}--\ref{sec:langlands} recall various known facts and definitions, in order to make our terminology and notation clear and to gather together in one place all the facts we will use.
Section~\ref{sec:svs} consists of definitions of Shimura varieties and associated objects.
Most of this is standard, except our use of the term ``Shimura pro-variety'' for the inverse limit of the system of Shimura varieties associated with a given Shimura datum.
Section~\ref{sec:serre-taniyama} outlines key facts about the Serre and Taniyama groups from \cite{Lan79} and \cite{MS82a}.
Section~\ref{sec:langlands} states the conjecture of Langlands on conjugation of Shimura varieties, which is central to all the results of the paper.
It also explains the construction of the twisted group \( {^{\tau,h}}\gG \) which appears in this conjecture, based on \cite{Lan79} and \cite{MS82b}.
In section~\ref{sec:conjugation} we prove that \cref{q:conj-sp-pt,q:conj-sp-subvar} have positive answers.
This is a simple application of the conjecture and construction in section~\ref{sec:langlands}.
Finally in section~\ref{sec:conj-complexity} we prove \cref{galois-complexity} on the complexity of Galois conjugates of special points.
This depends on further details of the construction from \cite{MS82b} as well as a lemma on morphisms of Shimura pro-varieties, which we prove.
\subsection*{Acknowledgements}
The questions discussed in this paper arose during collaboration with Christopher Daw. I am grateful to him for extensive discussion of these questions and for carefully reading drafts of this paper. I am also grateful to Andrei Yafaev for helpful discussions.
I would like to thank the anonymous referee for careful reading of the paper and suggesting some improvements to the clarity of explanations.
\section{Preliminaries: Shimura varieties} \label{sec:svs}
We recall various definitions associated with Shimura varieties, in order to fix terminology and notation.
A \defterm{Shimura datum} is a pair \( (\mathbf{G}, X) \) where \( \mathbf{G} \) is a connected reductive \( \mathbb{Q} \)-algebraic group and \( X \) is a \( \mathbf{G}(\mathbb{R}) \)-conjugacy class in \( \Hom(\mathbb{S}, \mathbf{G}_\mathbb{R}) \) such that each \( h \in X \) satisfies the following axioms \cite[2.1.1.1--2.1.1.3]{Del79}.
(Here \( \mathbb{S} \) denotes the Deligne torus \( \Res_{\mathbb{C}/\mathbb{R}} \mathbb{G}_m \).)
\begin{enumerate}
\item The Hodge structure on \( \Lie(\mathbf{G}_\mathbb{R}) \) induced by \( h \) (via the adjoint representation of \( \mathbf{G} \)) has type \( \{ (-1,1), (0,0), (1,-1) \} \).
\item The involution \( \operatorname{Int} h(i) \) is a Cartan involution of the adjoint group \( \mathbf{G}^\mathrm{ad}_\mathbb{R} \).
\item \( \mathbf{G}^\mathrm{ad} \) has no \( \mathbb{Q} \)-simple factor on which the projection of \( h \) is trivial.
\end{enumerate}
These axioms imply that \( X \) is a finite disjoint union of Hermitian symmetric domains \cite[Cor.~1.1.17]{Del79}.
Given a Shimura datum \( (\mathbf{G}, X) \) and a compact open subgroup \( K \subset \mathbf{G}(\mathbb{A}_f) \), we can form a quasi-projective complex variety \( \Sh_K(\mathbf{G}, X) \) whose \( \mathbb{C} \)-points are
\[ \Sh_K(\mathbf{G}, X)(\mathbb{C}) = \mathbf{G}(\mathbb{Q}) \mathop\backslash X \times \mathbf{G}(\mathbb{A}_f) \mathop/ K. \]
Here \( \mathbf{G}(\mathbb{Q}) \) acts diagonally on \( X \times \mathbf{G}(\mathbb{A}_f) \) on the left, while \( K \) acts only on \( \mathbf{G}(\mathbb{A}_f) \) on the right.
We call \( \Sh_K(\mathbf{G}, X) \) a \defterm{Shimura variety}.
We write \( [h,g]_K \) for the complex point of \( \Sh_K(\mathbf{G}, X) \) which is represented by \( (h,g) \in X \times \mathbf{G}(\mathbb{A}_f) \).
\medskip
If \( K' \subset K \), then there is a finite morphism \( \Sh_{K'}(\mathbf{G}, X) \to \Sh_K(\mathbf{G}, X) \).
Thus the Shimura varieties \( \Sh_K(\mathbf{G}, X) \) form a projective system as \( K \) varies over compact open subgroups of \( \mathbf{G}(\mathbb{A}_f) \).
The inverse limit of this system is a scheme over \( \mathbb{C} \), not of finite type, which we denote \( \Sh(\mathbf{G}, X) \).
Its \( \mathbb{C} \)-points are given by
\begin{equation} \label{eqn:spv-c-pts}
\Sh(\mathbf{G}, X)(\mathbb{C}) = \mathbf{G}(\mathbb{Q}) \mathop\backslash X \times \mathbf{G}(\mathbb{A}_f) \mathop/ \overline{\mathbf{Z}(\mathbb{Q})}.
\end{equation}
Here \( \mathbf{Z} \) denotes the centre of \( \mathbf{G} \) and \( \overline{\mathbf{Z}(\mathbb{Q})} \) is the closure of \( \mathbf{Z}(\mathbb{Q}) \) in the adelic topology on \( \mathbf{Z}(\mathbb{A}_f) \).
Again \( \mathbf{G}(\mathbb{Q}) \) acts diagonally on \( X \times \mathbf{G}(\mathbb{A}_f) \) on the left, while \( \overline{\mathbf{Z}(\mathbb{Q})} \) acts only on \( \mathbf{G}(\mathbb{A}_f) \) on the right.
Note that \( \mathbf{Z}(\mathbb{Q}) \) acts trivially on both \( X \) and \( \mathbf{G}(\mathbb{A}_f) / \overline{\mathbf{Z}(\mathbb{Q})} \), so \eqref{eqn:spv-c-pts} is equivalent to the description in \cite[Prop.~2.1.10]{Del79}:
\[ \Sh(\mathbf{G}, X)(\mathbb{C}) = \raisebox{-0.3em}{\( \bigl( \mathbf{G}(\mathbb{Q}) \mathop/ \mathbf{Z}(\mathbb{Q}) \bigr) \)} \bigm\backslash \raisebox{0.3em}{\( X \times \bigl( \mathbf{G}(\mathbb{A}_f) \mathop/ \overline{\mathbf{Z}(\mathbb{Q})} \bigr) \)}. \]
We call \( \Sh(\mathbf{G}, X) \) a \defterm{Shimura pro-variety}.
We write \( [h,g] \) for the complex point of \( \Sh(\mathbf{G}, X) \) which is represented by \( (h,g) \in X \times \mathbf{G}(\mathbb{A}_f) \).
\medskip
If \( g \in \mathbf{G}(\mathbb{A}_f) \), write \( T_g \colon \Sh(\mathbf{G}, X) \to \Sh(\mathbf{G}, X) \) for the morphism of pro-varieties
\[ T_g [h,q] = [h,qg]. \]
This gives a right action of \( \mathbf{G}(\mathbb{A}_f) \) on \( \Sh(\mathbf{G}, X) \).
The morphisms \( T_g \) are known as \defterm{Hecke operators}.
The orbit of any point in \( \Sh(\mathbf{G}, X) \) under the action of \( \mathbf{G}(\mathbb{A}_f) \) is called a \defterm{Hecke orbit}.
Let \( K \) be a compact open subgroup of \( \mathbf{G}(\mathbb{A}_f) \) and \( g \in \mathbf{G}(\mathbb{A}_f) \).
The Hecke operator \( T_g \colon \Sh(\mathbf{G}, X) \to \Sh(\mathbf{G}, X) \) induces a morphism of varieties
\[ \cdot g \colon \Sh_{K \cap gKg^{-1}}(\mathbf{G}, X) \to \Sh_{g^{-1}Kg \cap K}(\mathbf{G}, X). \]
Hence the following diagram defines a correspondence on \( \Sh_K(\mathbf{G}, X) \):
\[ \xymatrix{
\Sh_{K \cap gKg^{-1}}(\mathbf{G}, X) \ar[d] \ar[r]^{\cdot g}
& \Sh_{g^{-1}Kg \cap K}(\mathbf{G}, X) \ar[d]
\\ \Sh_K(\mathbf{G}, X)
& \Sh_K(\mathbf{G}, X)
} \]
We call this correspondence a \defterm{Hecke correspondence} and also denote it by \( T_g \).
A \defterm{morphism of Shimura data} \( (\mathbf{G}_1, X_1) \to (\mathbf{G}_2, X_2) \) is a homomorphism of \( \mathbb{Q} \)-algebraic groups \( f \colon \mathbf{G}_1 \to \mathbf{G}_2 \) such that composition with \( f \) maps \( X_1 \) into \( X_2 \).
A morphism of Shimura data induces a morphism of pro-varieties
\[ [f] \colon \Sh(\mathbf{G}_1, X_1) \to \Sh(\mathbf{G}_2, X_2). \]
If \( K_1 \subset \mathbf{G}_1(\mathbb{A}_f) \) and \( K_2 \subset \mathbf{G}_2(\mathbb{A}_f) \) are compact open subgroups such that \( f(K_1) \subset K_2 \), then \( f \) also induces a morphism
\[ [f] \colon \Sh_{K_1}(\mathbf{G}_1, X_1) \to \Sh_{K_2}(\mathbf{G}_2, X_2). \]
We call either of these induced morphisms \( [f] \) a \defterm{Shimura morphism}.
A \defterm{Shimura subdatum} of \( (\mathbf{G}, X) \) is a Shimura datum \( (\mathbf{H}, X_\mathbf{H}) \) where \( \mathbf{H} \subset \mathbf{G} \) and \( X_\mathbf{H} \subset X \), with the inclusions \( \mathbf{H} \hookrightarrow \mathbf{G} \) and \( X_\mathbf{H} \hookrightarrow X \) being compatible.
The inclusion of Shimura data induces a morphism of Shimura pro-varieties \( \Sh(\mathbf{H}, X_\mathbf{H}) \to \Sh(\mathbf{G}, X) \), which is a closed immersion by \cite[Prop.~1.15]{Del71}.
We call the image of \( \Sh(\mathbf{H}, X_\mathbf{H}) \to \Sh(\mathbf{G}, X) \) a \defterm{Shimura sub-pro-variety} of \( \Sh(\mathbf{G}, X) \).
\medskip
We will next recall Deligne's definition of a canonical model of a Shimura pro-variety.
Before this we need to recall certain other definitions.
For any point \( h \in X \), the \defterm{Mumford--Tate group} of \( h \) is the smallest \( \mathbb{Q} \)-algebraic subgroup \( \MT(h) \subset \mathbf{G} \) such that \( h \) factors through \( \MT(h)_\mathbb{R} \).
The \defterm{generic Mumford--Tate group} of \( (\mathbf{G}, X) \) is the smallest \( \mathbb{Q} \)-algebraic subgroup \( \MT(X) \subset \mathbf{G} \) such that \emph{every} \( h \in X \) factors through \( \MT(X)_\mathbb{R} \).
A point \( h \in X \) is said to be \defterm{Hodge generic} if \( \MT(h) = \MT(X) \).
It is well-known that every Shimura datum contains Hodge generic points; see for example the proof of \cite[Prop.~7.5]{Del72}.
A \defterm{pre-special point} is a point \( h \in X \) for which \( \MT(h) \) is commutative.
Mumford--Tate groups are always reductive, so this implies that \( \MT(h) \) is a torus.
A \defterm{special point} is a point \( [h,g] \in \Sh(\mathbf{G}, X) \) or \( [h,g]_K \in \Sh_K(\mathbf{G}, X) \) such that \( h \) is pre-special (note that this is independent of the choice of lift \( (h,g) \)).
\medskip
Let \( \mu \colon \mathbb{G}_{m,\mathbb{C}} \to \mathbb{S}_\mathbb{C} \) denote the cocharacter given by \( \mu(z) = (z,1) \) (identifying \( \mathbb{S}(\mathbb{C}) \) with \( \mathbb{C}^\times \times \mathbb{C}^\times \)).
If \( h \in X \), then the \( \mathbf{G}(\mathbb{C}) \)-conjugacy class of \( h \circ \mu \colon \mathbb{S}_\mathbb{C} \to \mathbf{G}_\mathbb{C} \) is defined over a number field.
We call the field of definition of this conjugacy class (inside \( \mathbb{C} \)) the \defterm{reflex field} of the Shimura datum \( (\mathbf{G}, X) \) and write it \( E(\mathbf{G}, X) \).
If \( h \in X \) is a pre-special point with Mumford--Tate group \( \mathbf{M} \subset \mathbf{G} \), then the pair \( (\mathbf{M}, \{h\}) \) is a Shimura datum.
We define \( E(h) \), the \defterm{reflex field} of \( h \), to be the reflex field of \( (\mathbf{M}, \{h\}) \).
Note that \( E(\mathbf{G}, X) \subset E(h) \).
The complex points of \( \Sh(\mathbf{M}, \{h\}) \) form a pro-finite set.
Deligne \cite[2.2.4]{Del79} defined an action of \( \Gal(\overline \bQ/E(h)) \) on \( \Sh(\mathbf{M}, \{h\})(\mathbb{C}) \)
by means of a reciprocity morphism \( \Gal(\overline \bQ/E(h)) \to \mathbf{M}(\mathbb{A}_f) / \overline{\mathbf{M}(\mathbb{Q})} \).
The pro-variety \( \Sh(\mathbf{M}, \{h\}) \) has a unique model over \( E(h) \) for which the Galois action is the same as the reciprocity action.
A \defterm{canonical model} of a Shimura pro-variety \( \Sh(\mathbf{G}, X) \) is a scheme \( M \) over \( E(\mathbf{G}, X) \) equipped with a right action of \( \mathbf{G}(\mathbb{A}_f) \) and a \( \mathbf{G}(\mathbb{A}_f) \)-equivariant isomorphism \( M \times_{E(\mathbf{G}, X)} \mathbb{C} \to \Sh(\mathbf{G}, X) \) such that
\begin{enumerate}[(a)]
\item the special points in \( M \) are defined over \( \overline \bQ \);
\item for each pre-special point \( h \in X \), the morphism \( \Sh(\MT(h), \{h\}) \to \Sh(\mathbf{G}, X) \) induced by the inclusion \( \MT(h) \hookrightarrow \mathbf{G} \) is defined over \( E(h) \).
\end{enumerate}
According to \cite[Cor.~5.5]{Del71}, a Shimura pro-variety has at most one canonical model (up to unique isomorphism).
According to \cite[Cor.~5.4]{Del71}, if \( \Sh(\mathbf{G}_1, X_1) \) and \( \Sh(\mathbf{G}_2, X_2) \) have canonical models, then any Shimura morphism \( \Sh(\mathbf{G}_1, X_1) \to \Sh(\mathbf{G}_2, X_2) \) is defined over the compositum of the reflex fields \( E(\mathbf{G}_1, X_1).E(\mathbf{G}_2, X_2) \).
Deligne (\cite{Del71} and \cite{Del79}) established the existence of canonical models for a large class of Shimura pro-varieties, namely those of ``abelian type'', starting from the fact that the moduli space of principally polarised abelian varieties of dimension~\( g \) is defined over \( \mathbb{Q} \) and this gives a canonical model for \( \Sh(\mathbf{GSp}_{2g}, \mathcal{H}_g^\pm) \).
Milne and Shih \cite{MS82b} proved that a conjecture of Langlands implies the existence of canonical models for all Shimura pro-varieties.
Borovoi \cite{Bor84} and Milne \cite{Mil83} then proved the conjecture of Langlands using a result of Kazhdan \cite{Kaz83}, completing the proof of the existence of canonical models.
\section{The Serre and Taniyama groups} \label{sec:serre-taniyama}
We recall some facts about the Serre and Taniyama groups which are required in order to set up the conjecture of Langlands.
The Serre group \( \mathcal{S} \) is a pro-algebraic torus over \( \mathbb{Q} \) (i.e.\ an inverse limit of a projective system of \( \mathbb{Q} \)-tori) which can be thought of as the ``universal Mumford--Tate group of a \( \mathbb{Q} \)-rational polarisable Hodge structure of CM type.''
More formally, \( \mathcal{S} \) is the Tannakian group of the category of \( \mathbb{Q} \)-rational polarisable Hodge structures of CM type (with the obvious forgetful fibre functor to \( \mathbb{Q} \)-vector spaces).
An explicit construction of \( \mathcal{S} \) is described in \cite[\S 1]{MS82a}, as well as the construction of a canonical Hodge parameter \( h_{\mathrm{can}} \colon \mathbb{S} \to \mathcal{S}_\mathbb{R} \).
We shall need the following universal property of \( (\mathcal{S}, h_{\mathrm{can}}) \).
\begin{lemma} \label{serre-universal}
For every \( \mathbb{Q} \)-algebraic torus \( \mathbf{T} \) and every \( h \colon \mathbb{S} \to \mathbf{T}_\mathbb{R} \), if the weight homomorphism \( h \circ w \colon \mathbb{G}_{m,\mathbb{R}} \to \mathbf{T}_\mathbb{R} \) is defined over \( \mathbb{Q} \), then there exists a unique homomorphism of pro-\( \mathbb{Q} \)-algebraic tori \( \rho \colon \mathcal{S} \to \mathbf{T} \) such that \( h = \rho \circ h_{\mathrm{can}} \).
\end{lemma}
Here \( w \colon \mathbb{G}_{m,\mathbb{R}} \to \mathbb{S} \) denotes the morphism given on \( \mathbb{R} \)-points by the inclusion \( \mathbb{R}^\times \to \mathbb{C}^\times \).
The condition that \( h \circ w \) is defined over \( \mathbb{Q} \) is equivalent to \cite[(1.1)]{MS82a}.
The universal property determines \( (\mathcal{S}, h_{\mathrm{can}}) \) up to unique isomorphism.
\medskip
The Taniyama group \( \mathcal{T} \) is an extension of \( \Gal(\overline \bQ/\mathbb{Q}) \) by \( \mathcal{S} \) which was defined by Langlands \cite[\S 5]{Lan79}.
According to \cite{Del82}, it is isomorphic to the Tannakian group of the category of absolute Hodge CM motives over \( \mathbb{Q} \).
The Taniyama group comes with an exact sequence
\[ 1 \longrightarrow \mathcal{S} \longrightarrow \mathcal{T} \overset{\pi}{\longrightarrow} \Gal(\overline \bQ/\mathbb{Q}) \longrightarrow 1. \]
This is an exact sequence of pro-\( \mathbb{Q} \)-algebraic groups if we make \( \Gal(\overline \bQ/\mathbb{Q}) \) into a pro-\( \mathbb{Q} \)-group by regarding it as a limit of finite groups \( \Gal(L/\mathbb{Q}) \) and declaring that every point of these finite sets is a \( \mathbb{Q} \)-point.
The Taniyama group is also equipped with a splitting of the exact sequence over \( \mathbb{A}_f \):
\[ \spl \colon \Gal(\overline \bQ/\mathbb{Q}) \to \mathcal{T}(\mathbb{A}_f). \]
For any \( \tau \in \Gal(\overline \bQ/\mathbb{Q}) \), \( \pi^{-1}(\tau) \subset \mathcal{T} \) is a pro-\( \mathbb{Q} \)-variety.
Letting \( \mathcal{S} \) act on \( \pi^{-1}(\tau) \) by multiplication on the right, we get a right \( \mathcal{S} \)-torsor which we denote \( {^\tau \mathcal{S}} \) (in choosing the right action, we are following \cite[Remark~2.9]{MS82a}).
Thanks to \( \spl \), this \( \mathcal{S} \)-torsor is split over \( \mathbb{A}_f \).
\section{The conjecture of Langlands} \label{sec:langlands}
Let \( (\mathbf{G}, X) \) be a Shimura datum.
In order to give a conjectural description of the Galois conjugates of \( \Sh(\mathbf{G}, X) \), Langlands \cite[\S 6]{Lan79} constructed the following objects for each pre-special point \( h \in X \) and each \( \tau \in \Gal(\overline \bQ/\mathbb{Q}) \):
\begin{enumerate}[(i)]
\item a Shimura datum \( ({^{\tau,h}}\gG, {^{\tau,h}}X) \);
\item a pre-special point \( {^\tau h} \in {^{\tau,h}}X \);
\item a continuous group isomorphism \( \theta_{\tau,h} \colon \mathbf{G}(\mathbb{A}_f) \to {^{\tau,h}}\gG(\mathbb{A}_f) \).
\end{enumerate}
The construction depends on the chosen pre-special point \( h \in X \).
In order to explain how the resulting Shimura pro-varieties are related when we vary \( h \),
Langlands also constructed an isomorphism of pro-\( \mathbb{C} \)-varieties
\[ \phi(\tau;h',h) \colon \Sh({^{\tau,h}}\gG, {^{\tau,h}}X) \to \Sh({^{\tau,h'}}\gG, {^{\tau,h'}}X) \]
for each pair of pre-special points \( h, h' \in X \).
\begin{remark}
Our notation is based on \cite{MS82b}, which differs slightly from the notation in \cite{Lan79} in the positioning of superscripts.
We always explicitly include the dependence on \( h \) in our notation, while both \cite{Lan79} and \cite{MS82b} frequently omit it.
We label various objects with the Hodge parameter \( h \colon \mathbb{S} \to \mathbf{G}_\mathbb{R} \), while \cite{Lan79} and \cite{MS82b} use the cocharacter \( \mu \colon \mathbb{G}_{m,\mathbb{C}} \to \mathbf{G}_\mathbb{C} \); since \( h \in X \) and \( \mu \) determine each other, this does not matter.
The isomorphism of adelic groups \( \theta_{\tau,h} \) is not given a name in \cite{Lan79} or \cite{MS82b}, being denoted simply by \( g \mapsto g^\tau \) or \( g \mapsto {^\tau g} \) respectively, but we have found it convenient to name it explicitly.
\end{remark}
Before outlining the construction of these objects, we shall first state Conjecture~C of Langlands and discuss its consequences for canonical models.
The original statement of this conjecture was at \cite[pp.~232--233]{Lan79}.
Alternative formulations can be found at \cite[p.~311]{MS82b} and \cite[\S 2]{Bor82}.
The conjecture was proved by Borovoi \cite{Bor84} (completed in \cite{Bor87}) and Milne \cite[Thm.~7.1]{Mil83}.
\begin{theorem} \label{conjC}
Let \( (\mathbf{G}, X) \) be a Shimura datum and let \( \tau \in \Aut(\mathbb{C}) \).
\begin{enumerate}[(a)]
\item For every pre-special point \( h \in X \), there is an isomorphism of pro-\( \mathbb{C} \)-varieties
\[ \phi_{\tau,h} \colon \tau \Sh(\mathbf{G}, X) \to \Sh({^{\tau,h}}\gG, {^{\tau,h}}X) \]
such that
\begin{enumerate}[(i)]
\item \( \phi_{\tau,h}(\tau[h,1]) = [{^\tau h}, 1] \); and
\item the diagram
\vspace{-6pt}
\[ \renewcommand{\labelstyle}{\textstyle}
\xymatrix@C+3pc@R+1pc{
\tau \Sh(\mathbf{G}, X) \ar[r]^{\tau T_g} \ar[d]^{\phi_{\tau,h}}
& \tau \Sh(\mathbf{G}, X) \ar[d]^{\phi_{\tau,h}}
\\ \Sh({^{\tau,h}}\gG, {^{\tau,h}}X) \ar[r]^{T_{\theta_{\tau,h}(g)}}
& \Sh({^{\tau,h}}\gG, {^{\tau,h}}X)
} \]
commutes for every \( g \in \mathbf{G}(\mathbb{A}_f) \).
In other words,
\[ \phi_{\tau,h}(\tau(T_g(s))) = T_{\theta_{\tau,h}(g)}(\phi_{\tau,h}(\tau(s))) \]
for all \( s \in \Sh(\mathbf{G}, X) \) and \( g \in \mathbf{G}(\mathbb{A}_f) \).
\end{enumerate}
\medskip
\item For every pair of pre-special points \( h, h' \in X \), the following diagram commutes:
\[
\renewcommand{\labelstyle}{\textstyle}
\xymatrix@C+2pc@R+1pc{
\tau \Sh(\mathbf{G}, X) \ar[r]^-{\phi_{\tau,h}} \ar[rd]_{\phi_{\tau,h'}}
& \Sh({^{\tau,h}}\gG, {^{\tau,h}}X) \ar[d]^{\phi(\tau;h',h)}
\\& \Sh({^{\tau,h'}}\gG, {^{\tau,h'}}X).
} \]
\end{enumerate}
\end{theorem}
Note that the isomorphism \( \phi_{\tau,h} \) in \cref{conjC}(a) is unique because (a)(i) and~(ii) specify what it does on the Hecke orbit of \( [h,1] \), which is dense in \( \Sh(\mathbf{G}, X) \).
If \( \tau \in \Aut(\mathbb{C}) \) fixes \( E := E(\mathbf{G}, X) \), then Milne and Shih \cite[Remark~4.13 and Prop.~7.8]{MS82b} construct an isomorphism of pro-\( \mathbb{C} \)-varieties
\[ \phi(\tau;h) \colon \Sh(\mathbf{G}, X) \to \Sh({^{\tau,h}}\gG, {^{\tau,h}}X) \]
such that
\begin{enumerate}[(i)]
\item \( \phi(\tau;h) \circ T_g = T_{\theta_{\tau,h}(g)} \circ \phi(\tau;h) \) \label{phi-tau-h}
for all \( g \in \mathbf{G}(\mathbb{A}_f) \);
\item \( \phi(\tau;h',h) = \phi(\tau;h') \circ \phi(\tau;h)^{-1} \)
for all pre-special points \( h, h' \in X \).
\end{enumerate}
The isomorphism \( \phi(\tau;h) \) has the form \( [f_1] \circ T_{\beta_1} \) for some isomorphism of Shimura data \( f_1 \colon (\mathbf{G}, X) \to ({^{\tau,h}}\gG, {^{\tau,h}}X) \) and some element \( \beta_1 \in \mathbf{G}(\mathbb{A}_f) \).
(Note that \( \beta_1 \) and \( f_1 \) depend on choices made during the construction, but \( \phi(\tau;h) \) does not.)
Following \cite[p.~233]{Lan79}, Milne and Shih show that the morphisms
\begin{equation} \label{eqn:psit}
\psi_\tau = \phi(\tau;h)^{-1} \circ \phi_{\tau,h} \colon \tau \Sh(\mathbf{G}, X) \to \Sh(\mathbf{G}, X)
\end{equation}
form a descent datum (for any pre-special point~\( h \in X \)).
For a proof that this descent datum is effective, see \cite{Mil99}.
Hence there exists a model \( M(\mathbf{G}, X) \) for \( \Sh(\mathbf{G}, X) \) over \( E(\mathbf{G}, X) \) which splits this descent datum.
In other words, \( M(\mathbf{G}, X) \) is a pro-variety over \( E(\mathbf{G}, X) \) with an isomorphism \( \iota \colon M(\mathbf{G}, X) \times_E \mathbb{C} \to \Sh(\mathbf{G}, X) \) such that the following diagram commutes for every \( \tau \in \Aut(\mathbb{C}/E) \):
\begin{equation} \label{eqn:descent}
\vcenter{\xymatrix{
& M(\mathbf{G}, X) \times_E \mathbb{C} \ar[ld]_{\tau \iota} \ar[rd]^{\iota}
\\ \tau \Sh(\mathbf{G}, X) \ar[rr]^{\psi_\tau} \ar[rd]_{\phi_{\tau,h}}
&& \Sh(\mathbf{G}, X) \ar[ld]^{\phi(\tau;h)}
\\& \Sh({^{\tau,h}}\gG, {^{\tau,h}}X)
}}
\end{equation}
Because \( \psi_\tau \circ \tau T_g = T_g \circ \psi_\tau \) for all \( g \in \mathbf{G}(\mathbb{A}_f) \) and \( \tau \in \Aut(\mathbb{C}/E) \), the right action of \( \mathbf{G}(\mathbb{A}_f) \) on \( \Sh(\mathbf{G}, X) \) also descends to \( M(\mathbf{G}, X) \).
The model \( M(\mathbf{G}, X) \) is canonical by \cite[Prop.~7.14]{MS82b}.
\medskip
Now we shall outline the construction of the Shimura datum \( ({^{\tau,h}}\gG, {^{\tau,h}}X) \), following the description at \cite[p.~310]{MS82b}.
Let \( \mathbf{T} \) be a maximal \( \mathbb{Q} \)-torus in \( \mathbf{G} \) such that \( h \) factors through \( \mathbf{T}_\mathbb{R} \).
Let \( \mathbf{T}^\mathrm{ad} \) denote the image of \( \mathbf{T} \) in \( \mathbf{G}^\mathrm{ad} \) and let \( h^\mathrm{ad} \colon \mathbb{S} \to \mathbf{T}^\mathrm{ad}_\mathbb{R} \) be the composition of \( \mathbf{T} \to \mathbf{T}^\mathrm{ad} \) with~\( h \).
Now \( \mathbf{T}^\mathrm{ad}(\mathbb{R}) \) is compact by axiom \cite[2.1.1.2]{Del79}, so the weight homomorphism \( h^\mathrm{ad} \circ w \colon \mathbb{G}_{m,\mathbb{R}} \to \mathbf{T}^\mathrm{ad}_\mathbb{R} \) is trivial and \textit{a fortiori} defined over \( \mathbb{Q} \).
Hence we can apply the universal property of the Serre group (\cref{serre-universal}) to get a homomorphism of pro-\( \mathbb{Q} \)-algebraic groups \( \rho_h \colon \mathcal{S} \to \mathbf{T}^\mathrm{ad} \) such that \( h^\mathrm{ad} = \rho_h \circ h_{\mathrm{can}} \).
Composing \( \rho_h \) with the inclusion \( \mathbf{T}^\mathrm{ad} \to \mathbf{G}^\mathrm{ad} \), we get a homomorphism \( \mathcal{S} \to \mathbf{G}^\mathrm{ad} \).
Now \( \mathbf{G}^\mathrm{ad} \) acts on \( \mathbf{G} \) by inner automorphisms, so we get a left action of \( \mathcal{S} \) on \( \mathbf{G} \) defined over \( \mathbb{Q} \).
This action is independent of the choice of maximal torus \( \mathbf{T} \) because it factors through \( \MT(h^\mathrm{ad}) \).
We define \( {^{\tau,h}}\gG \) to be the twist of \( \mathbf{G} \) (with the left action of \( \mathcal{S} \) via \( \rho_h \) which we just described) by the right \( \mathcal{S} \)-torsor \( {^\tau \mathcal{S}} = \pi^{-1}(\tau) \) which we defined in section~\ref{sec:serre-taniyama}:
\[ {^{\tau,h}}\gG = {^\tau \mathcal{S}} \times^\mathcal{S} \mathbf{G}. \]
As remarked in \cite{MS82b}, using \cite[Remarks~2.9, 3.18]{MS82a}, there is an isomorphism \( {^{\tau,h}}\gG_L \cong \mathbf{G}_L \) where \( L \) is a splitting field for \( \mathbf{T} \).
\begin{remark} \label{thT}
Because \( \mathcal{S} \) acts on \( \mathbf{G} \) by inner automorphisms and the action factors through \( \mathbf{T}^\mathrm{ad} \), the action is trivial on \( \mathbf{T} \).
Hence \( {^\tau \mathcal{S}} \times^\mathcal{S} \mathbf{T} = \mathbf{T} \).
Thus \( \mathbf{T} = {^{\tau,h}}\gT \) is naturally a subtorus of \( {^{\tau,h}}\gG \) and \( h \colon \mathbb{S} \to \mathbf{T}_\mathbb{R} \) induces \( {^\tau h} \colon \mathbb{S} \to {^{\tau,h}}\gG_\mathbb{R} \).
\end{remark}
Let \( {^{\tau,h}}X \) be the \( {^{\tau,h}}\gG(\mathbb{R}) \)-conjugacy class of \( {^\tau h} \).
Langlands showed that \( ({^{\tau,h}}\gG, {^{\tau,h}}X) \) is a Shimura datum \cite[p.~231]{Lan79}.
By construction, \( {^\tau h} \) factors through the \( \mathbb{Q} \)-torus \( {^{\tau,h}}\gT \), so it is a pre-special point.
Recall that the Taniyama group comes with an adelic splitting \( \spl \colon \Gal(\overline \bQ/\mathbb{Q}) \to \mathcal{T}(\mathbb{A}_f) \).
Since \( \spl(\tau) \in {^\tau \mathcal{S}}(\mathbb{A}_f) \), we can define a continuous group isomorphism \( \theta_{\tau,h} \colon \mathbf{G}(\mathbb{A}_f) \to {^{\tau,h}}\gG(\mathbb{A}_f) = {^\tau \mathcal{S}}(\mathbb{A}_f) \times^\mathcal{S} \mathbf{G}(\mathbb{A}_f) \) by
\[ \theta_{\tau,h}(g) = \spl(\tau).g. \]
Since \( \mathcal{S} \) acts trivially on \( \mathbf{T} \), we have \( \spl(\tau).g = g \) for all \( g \in \mathbf{T}(\mathbb{A}_f) \) and thus \( \theta_{\tau,h} \) restricts to the identity on \( \mathbf{T}(\mathbb{A}_f) = {^{\tau,h}}\gT(\mathbb{A}_f) \).
\begin{lemma} \label{th-subdatum}
Let \( (\mathbf{H}, X_\mathbf{H}) \) be a Shimura subdatum of \( (\mathbf{G}, X) \).
Let \( h \in X_\mathbf{H} \subset X \) be a pre-special point.
Then \( ({^{\tau,h}}\gH, {^{\tau,h}}X_\gH) \) is a Shimura subdatum of \( ({^{\tau,h}}\gG, {^{\tau,h}}X) \) and \( \theta_{\tau,h,\mathbf{H}} \colon \mathbf{H}(\mathbb{A}_f) \to {^{\tau,h}}\gH(\mathbb{A}_f) \) is the restriction of \( \theta_{\tau,h,\mathbf{G}} \colon \mathbf{G}(\mathbb{A}_f) \to {^{\tau,h}}\gG(\mathbb{A}_f) \).
\end{lemma}
\begin{proof}
Choose maximal \( \mathbb{Q} \)-tori \( \mathbf{T}_\mathbf{G} \subset \mathbf{G} \) and \( \mathbf{T}_\mathbf{H} \subset \mathbf{H} \) such that \( \mathbf{T}_\mathbf{H} \subset \mathbf{T}_\mathbf{G} \) and \( h \) factors through \( \mathbf{T}_{\mathbf{H},\mathbb{R}} \).
Letting \( \mathbf{T}_\mathbf{G}^\mathrm{ad} = \mathbf{T}_\mathbf{G}/Z(\mathbf{G}) \subset \mathbf{G}^\mathrm{ad} \) and \( \mathbf{T}_\mathbf{H}^\mathrm{ad} = \mathbf{T}_\mathbf{H}/Z(\mathbf{H}) \subset \mathbf{H}^\mathrm{ad} \), we have the following commutative diagram of \( \mathbb{Q} \)-tori:
\[ \xymatrix@R-2pt{
\MT(h) \ar@{->>}[r] \ar@{^{(}->}[d]
& \MT(h) / Z(\mathbf{G}) \cap \MT(h) \ar@{->>}[r] \ar@{^{(}->}[d]
& \MT(h) / Z(\mathbf{H}) \cap \MT(h) \ar@{^{(}->}[d]
\\ \mathbf{T}_\mathbf{H} \ar@{->>}[r] \ar@{^{(}->}[d]
& \mathbf{T}_\mathbf{H} / Z(\mathbf{G}) \cap \mathbf{H} \ar@{->>}[r] \ar@{^{(}->}[d]
& \mathbf{T}_\mathbf{H}^\mathrm{ad}
\\ \mathbf{T}_\mathbf{G} \ar@{->>}[r]
& \mathbf{T}_\mathbf{G}^\mathrm{ad}
} \]
The image of \( h \) in \( \MT(h) / Z(\mathbf{G}) \cap \MT(h) \) has trivial weight, so by \cref{serre-universal} it factors as \( \rho_{h,\MT} \circ h_{\mathrm{can}} \) for some \( \rho_{h,\MT} \colon \mathcal{S} \to \MT(h) / Z(\mathbf{G}) \cap \MT(h) \).
The homomorphisms \( \rho_{h,\mathbf{G}} \colon \mathcal{S} \to \mathbf{T}_\mathbf{G}^\mathrm{ad} \) and \( \rho_{h,\mathbf{H}} \colon \mathcal{S} \to \mathbf{T}_\mathbf{H}^\mathrm{ad} \) used to construct \( {^{\tau,h}}\gG \) and \( {^{\tau,h}}\gH \) respectively both factor through \( \rho_{h,\MT} \).
Hence the action of \( \mathcal{S} \) on \( \mathbf{H} \) coming from \( h \) is the restriction of the action of \( \mathcal{S} \) on \( \mathbf{G} \) coming from \( h \).
Consequently
\[ {^{\tau,h}}\gH = {^\tau\mathcal{S}} \times^\mathcal{S} \mathbf{H} \subset {^\tau\mathcal{S}} \times^\mathcal{S} \mathbf{G} = {^{\tau,h}}\gG. \]
Furthermore \( {^\tau h} \) is the same whether we construct it using \( \mathbf{G} \) or \( \mathbf{H} \).
Consequently \( ({^{\tau,h}}\gH, {^{\tau,h}}X_\gH) \) is a Shimura subdatum of \( ({^{\tau,h}}\gG, {^{\tau,h}}X) \).
To see that \( \theta_{\tau,h,\mathbf{H}} \) is the restriction of \( \theta_{\tau,h,\mathbf{G}} \) to \( \mathbf{H} \), simply note that both maps have the form \( g \mapsto \spl(\tau).g \).
\end{proof}
\section{Conjugation of special points and special subvarieties} \label{sec:conjugation}
In this section, we use \cref{conjC}{} and the construction in \cref{sec:langlands} to obtain positive answers to \cref{q:conj-sp-pt,q:conj-sp-subvar}.
\begin{theorem}
Let \( S \) be the canonical model of a Shimura variety \( \Sh_K(\mathbf{G}, X) \).
Let \( s \in S(\overline \bQ) \) be a special point.
Then for every \( \tau \in \Gal(\overline \bQ/E(\mathbf{G}, X)) \), the Galois conjugate \( \tau(s) \) is a special point of \( S \).
\end{theorem}
\begin{proof}
By \eqref{eqn:descent}, we have
\begin{equation} \label{eqn:phi-iota}
\phi(\tau;h) \circ \iota(\tau(s)) = \phi_{\tau,h} \circ (\tau\iota)(\tau(s)) = \phi_{\tau,h}(\tau(\iota(s))).
\end{equation}
By construction, \( \phi(\tau;h) \) is the composition of a Hecke operator with a Shimura isomorphism.
Hence \( \phi(\tau;h)^{-1} \) maps special points to special points.
So in order to show that \( \tau(s) \) is special, it suffices to show that
\( \phi_{\tau,h}(\tau(\iota(s))) \) is special.
Write \( \iota(s) = [h,g]_K \in \Sh_K(\mathbf{G}, X)(\mathbb{C}) \).
Using \cref{conjC}(a) (ii) then~(i) gives
\begin{align}
\phi_{\tau,h}(\tau[h,g])
& = T_{\theta_{\tau,h}(g)} \circ \phi_{\tau,h}(\tau[h,1]) \notag
\\& = T_{\theta_{\tau,h}(g)} ([{^\tau h}, 1])
= [{^\tau h}, \theta_{\tau,h}(g)]. \label{eqn:phi-hg}
\end{align}
By construction, \( {^\tau h} \) is pre-special, so \( \phi_{\tau,h}(\tau[h,g]) \) is special.
\end{proof}
\begin{theorem} \label{sp-subpro}
Let \( (\mathbf{G}, X) \) be a Shimura datum.
Let \( \Sh(\mathbf{H}, X_\mathbf{H}) \) be a Shimura sub-pro-variety of \( \Sh(\mathbf{G}, X) \).
Then for every \( \tau \in \Gal(\overline \bQ/E(\mathbf{G}, X)) \), \( \tau \Sh(\mathbf{H}, X_\mathbf{H}) \) is also a Shimura sub-pro-variety of \( \Sh(\mathbf{G}, X) \).
\end{theorem}
\begin{proof}
Choose a pre-special point \( h \in X_\mathbf{H} \subset X \).
Let
\begin{gather*}
\phi_{\tau,h,\mathbf{G}} \colon \tau \Sh(\mathbf{G}, X) \to \Sh({^{\tau,h}}\gG, {^{\tau,h}}X),
\\ \phi_{\tau,h,\mathbf{H}} \colon \tau \Sh(\mathbf{H}, X_\mathbf{H}) \to \Sh({^{\tau,h}}\gH, {^{\tau,h}}X_\gH)
\end{gather*}
be the isomorphisms of \cref{conjC}{}(a) for \( (\mathbf{G}, X) \) and \( (\mathbf{H}, X_\mathbf{H}) \) respectively.
By \eqref{eqn:phi-iota} and the subsequent remark, it suffices to show that \( \phi_{\tau,h,\mathbf{G}}(\tau \Sh(\mathbf{H}, X_\mathbf{H})) \) is a Shimura sub-pro-variety of \( \Sh({^{\tau,h}}\gG, {^{\tau,h}}X) \).
By \cref{th-subdatum}, \( ({^{\tau,h}}\gH, {^{\tau,h}}X_\gH) \) is a Shimura subdatum of \( ({^{\tau,h}}\gG, {^{\tau,h}}X) \) so \( \Sh({^{\tau,h}}\gH, {^{\tau,h}}X_\gH) \) is a Shimura sub-pro-variety of \( \Sh({^{\tau,h}}\gG, {^{\tau,h}}X) \).
Thus it suffices to show that the following diagram commutes.
\begin{equation} \label{inclusion-commutes}
\vcenter{\xymatrix@C+2pc{
\tau \Sh(\mathbf{H}, X_\mathbf{H}) \ar[r]^{\phi_{\tau,h,\mathbf{H}}} \ar@{^{(}->}[d]
& \Sh({^{\tau,h}}\gH, {^{\tau,h}}X_\gH) \ar@{^{(}->}[d]
\\ \tau \Sh(\mathbf{G}, X) \ar[r]^{\phi_{\tau,h,\mathbf{G}}}
& \Sh({^{\tau,h}}\gG, {^{\tau,h}}X).
}}
\end{equation}
After translating notation, this is precisely the assertion in the proof of \cite[Lemma~9.5]{MS82b} that \( \phi_{\tau,h,\mathbf{G}} \) maps \( \tau \Sh(\mathbf{H}, X_\mathbf{H}) \) into \( \Sh({^{\tau,h}}\gH, {^{\tau,h}}X_\gH) \).
For completeness, we prove this assertion.
Using equation~\eqref{eqn:phi-hg} for both \( \mathbf{G} \) and~\( \mathbf{H} \) and the fact that \( \theta_{\tau,h,\mathbf{G}} \) restricts to \( \theta_{\tau,h,\mathbf{H}} \), we get the following for every \( g \in \mathbf{H}(\mathbb{A}_f) \):
\begin{align*}
\phi_{\tau,h,\mathbf{G}}(\tau[h,g])
= [{^\tau h}, \theta_{\tau,h,\mathbf{G}}(g)]
= [{^\tau h}, \theta_{\tau,h,\mathbf{H}}(g)]
= \phi_{\tau,h,\mathbf{H}}(\tau[h,g]).
\end{align*}
Since \( \{ \tau[h,g] : g \in \mathbf{H}(\mathbb{A}_f) \} \) is dense in \( \tau \Sh(\mathbf{H}, X_\mathbf{H}) \), this shows that \eqref{inclusion-commutes} commutes.
\end{proof}
\begin{corollary}
Let \( S \) be the canonical model of a Shimura variety \( \Sh_K(\mathbf{G}, X) \).
Let \( Z \subset S \) be a special subvariety, defined over \( \overline \bQ \).
Then for every \( \tau \in \Gal(\overline \bQ/E(\mathbf{G}, X)) \), \( \tau(Z) \) is also a special subvariety of \( S \).
\end{corollary}
\begin{proof}
By definition, \( Z \) is a geometrically irreducible component of the image of \( T_g \circ [f] \) for some morphism of Shimura data \( f \colon (\mathbf{H}, X_\mathbf{H}) \to (\mathbf{G}, X) \) and some Hecke correspondence \( T_g \) on \( \Sh_K(\mathbf{G}, X) \).
The Hecke correspondence \( T_g \) is defined over \( E(\mathbf{G}, X) \) and every component of the image of a special subvariety under a Hecke correspondence is special.
Hence it suffices to show that each component of the image of \( \tau[f] \colon \Sh_{K_\mathbf{H}}(\mathbf{H}, X_\mathbf{H}) \to \Sh_K(\mathbf{G}, X) \) is a special subvariety.
By \cite[Prop.~4.3]{Pin05}, we may assume that \( f \) is injective.
Then \( \tau \Sh(\mathbf{H}, X_\mathbf{H}) \) is a Shimura sub-pro-variety of \( \Sh(\mathbf{G}, X) \) by \cref{sp-subpro}.
Projecting down to finite level, we deduce that every geometrically irreducible component of the image of \( \tau \Sh_{K_\mathbf{H}}(\mathbf{H}, X_\mathbf{H}) \) in \( \Sh_K(\mathbf{G}, X) \) is indeed a special subvariety.
\end{proof}
\section{Conjugation and complexity} \label{sec:conj-complexity}
We conclude the paper by proving \cref{galois-complexity}.
We first need the following lemma on morphisms of Shimura varieties.
\begin{lemma} \label{zeta}
Let \( f \colon (\mathbf{G}_1, X_1) \to (\mathbf{G}_2, X_2) \) be an isomorphism of Shimura data and let \( \beta \in \mathbf{G}_1(\mathbb{A}_f) \).
Define a morphism of pro-varieties by
\[ \phi = [f] \circ T_\beta \colon \Sh(\mathbf{G}_1, X_1) \to \Sh(\mathbf{G}_2, X_2). \]
Let \( \theta \colon \mathbf{G}_1(\mathbb{A}_f) \to \mathbf{G}_2(\mathbb{A}_f) \) be the continuous group homomorphism
\( \theta(g) = f(\beta^{-1} g \beta) \).
Let \( \theta' \colon \mathbf{G}_1(\mathbb{A}_f) \to \mathbf{G}_2(\mathbb{A}_f) \) be any continuous group homomorphism.
Then
\[ \phi \circ T_g = T_{\theta'(g)} \circ \phi \text{ for all } g \in \mathbf{G}_1(\mathbb{A}_f) \]
if and only if \( \theta' \) has the form \( \theta'(g) = \zeta(g) \theta(g) \) for some continuous homomorphism \( \zeta \colon \mathbf{G}_1(\mathbb{A}_f) \to \overline{\mathbf{Z}_2(\mathbb{Q})} \),
where \( \mathbf{Z}_2 \) denotes the centre of \( \mathbf{G}_2 \).
\end{lemma}
\begin{proof}
Let \( \zeta(g) = \theta'(g) \theta(g)^{-1} \).
For every \( h \in X_1 \) and \( a, g \in \mathbf{G}_1(\mathbb{A}_f) \), we can calculate
\begin{gather*}
T_{\theta'(g)} \phi([h,a]) = T_{\theta'(g)} [f] T_\beta([h,a]) = [f_*(h), f(a\beta) \theta'(g)] = [f_*(h), f(a\beta) \zeta(g) \theta(g)],
\\ \phi T_g([h,a]) = [f] T_\beta T_g([h,a]) = [f_*(h), f(ag\beta)] = [f_*(h), f(a\beta) \theta(g)].
\end{gather*}
Since \( T_{\theta(g)} \) is invertible, we deduce that \( \phi \circ T_g = T_{\theta'(g)} \circ \phi \) if and only if
\begin{equation} \label{eqn:equiv}
[f_*(h), f(a\beta)] = [f_*(h), f(a\beta) \zeta(g)] \text{ for all } h \in X_1, a \in \mathbf{G}_1(\mathbb{A}_f).
\end{equation}
From the double quotient description~\eqref{eqn:spv-c-pts} of \( \Sh(\mathbf{G}_2, X_2)(\mathbb{C}) \), we see immediately that, if the image of \( \zeta \) lies in \( \overline{\mathbf{Z}_2(\mathbb{Q})} \), then \eqref{eqn:equiv} holds.
Now for the converse.
Assume that \eqref{eqn:equiv} holds for every \( g \in \mathbf{G}_1(\mathbb{A}_f) \).
Choose a Hodge generic point \( h \in X_1 \).
Thanks to \eqref{eqn:spv-c-pts} and \eqref{eqn:equiv}, for every \( a,g \in \mathbf{G}_1(\mathbb{A}_f) \), there exist \( \gamma(a,g) \in \mathbf{G}_2(\mathbb{Q}) \) and \( \nu(a,g) \in \overline{\mathbf{Z}_2(\mathbb{Q})} \) such that
\begin{align}
\gamma(a,g) \, f_*(h) &= f_*(h), \label{mu-fh}
\\ \gamma(a,g) \, f(a\beta) \, \nu(a,g) &= f(a\beta) \zeta(g). \label{mu-fg}
\end{align}
According to \cite[Lemma~2.2]{UY13}, the intersection of \( \mathbf{G}_2(\mathbb{Q}) \) with the stabiliser of \( f_*(h) \) in \( \mathbf{G}_2(\mathbb{R}) \) is equal to the \( \mathbb{Q} \)-points of the centraliser of \( \MT(f_*(h)) \), that is,
\begin{equation} \label{eqn:stab-q}
\mathbf{G}_2(\mathbb{Q}) \cap \Stab_{\mathbf{G}_2(\mathbb{R})}(f_*(h)) = Z_{\mathbf{G}_2}(\MT(f_*(h)))(\mathbb{Q}).
\end{equation}
Since \( h \) is Hodge generic in \( X_1 \) and \( f \) is an isomorphism, \( f_*(h) \) is Hodge generic in~\( X_2 \).
By the axiom \cite[2.1.1.3]{Del79}, \( \mathbf{G}_2^\mathrm{der} \subset \MT(X_2) \).
Hence
\begin{equation} \label{eqn:centralisers}
Z_{\mathbf{G}_2}(\MT(f_*(h))) = Z_{\mathbf{G}_2}(\MT(X_2)) \subset Z_{\mathbf{G}_2}(\mathbf{G}_2^\mathrm{der}).
\end{equation}
Since \( \mathbf{G}_2 \) is reductive, \( Z_{\mathbf{G}_2}(\mathbf{G}_2^\mathrm{der}) = \mathbf{Z}_2 \).
Hence \eqref{mu-fh}, \eqref{eqn:stab-q} and \eqref{eqn:centralisers} imply that \( \gamma(a,g) \in \mathbf{Z}_2(\mathbb{Q}) \).
Since \( \gamma(a,g) \) and \( \nu(a,g) \) both lie in \( \overline{\mathbf{Z}_2(\mathbb{Q})} \subset \mathbf{Z}_2(\mathbb{A}_f) \), \eqref{mu-fg} gives
\[ \gamma(a,g) \nu(a,g) = \zeta(g). \]
We conclude that \( \zeta(g) \) lies in \( \overline{\mathbf{Z}_2(\mathbb{Q})} \) and that \( \zeta \) is a group homomorphism.
\end{proof}
The following proposition is the main step in the proof of \cref{galois-complexity}.
\begin{proposition} \label{galois-special-mt}
Let \( (\mathbf{G}, X) \) be a Shimura datum and let \( [h,g] \in \Sh(\mathbf{G}, X) \) be a special point.
For every \( \tau \in \Gal(\overline \bQ/E(\mathbf{G}, X)) \),
if \( \tau[h,g] = [h_\tau,g_\tau] \in \Sh(\mathbf{G}, X) \), then
there exist:
\begin{enumerate}[(a)]
\item an isomorphism of \( \mathbb{Q} \)-tori \( f_2 \colon \MT(h_\tau) \to \MT(h) \);
\item an automorphism \( \alpha \) of the topological group \( \MT(h)(\mathbb{A}_f) \) such that
\( \alpha \circ f_2 \) maps \( g_\tau K g_\tau^{-1} \cap \MT(h_\tau)(\mathbb{A}_f) \) onto \( gKg^{-1} \cap \MT(h)(\mathbb{A}_f) \) for each compact open subgroup \( K \subset \mathbf{G}(\mathbb{A}_f) \).
\end{enumerate}
\end{proposition}
\begin{proof}
By \eqref{eqn:phi-iota} and \eqref{eqn:phi-hg}, we have
\[ \phi(\tau;h)[h_\tau,g_\tau] = \phi_{\tau,h}(\tau [h, g]) = [{^\tau h}, \theta_{\tau,h}(g)] \text{ in } \Sh({^{\tau,h}}\gG, {^{\tau,h}}X). \]
Write \( \phi(\tau;h) = [f_1] \circ T_{\beta_1} \) as in \cite[Prop.~7.8]{MS82b}.
Recalling the double coset description \eqref{eqn:spv-c-pts} of \( \Sh({^{\tau,h}}\gG, {^{\tau,h}}X)(\mathbb{C}) \), we get
\begin{gather}
f_{1*}(h_\tau) = \gamma \, {^\tau h}, \label{f-h}
\\ f_1(g_\tau \beta_1) = \gamma \theta_{\tau,h}(g)z, \label{f-g}
\end{gather}
for some \( \gamma \in {^{\tau,h}}\gG(\mathbb{Q}) \) and \( z \in \overline{{^{\tau,h}}\gZ(\mathbb{Q})} \).
Thanks to \eqref{f-h}, \( f_2 := \Inn(\gamma^{-1}) \circ f_1 \) restricts to an isomorphism of \( \mathbb{Q} \)-tori
\[ \MT(h_\tau) \to \MT({^\tau h}) = \MT(h). \]
This proves (a).
Define \( \theta_{\tau,h}' \colon \mathbf{G}(\mathbb{A}_f) \to {^{\tau,h}}\gG(\mathbb{A}_f) \) by
\[ \theta_{\tau,h}'(g') = f_1(\beta_1^{-1} g' \beta_1). \]
Using \eqref{f-g}, we can calculate
\begin{align} \label{eqn:f2}
f_2(g_\tau K g_\tau^{-1})
& = \gamma^{-1} f_1(g_\tau \beta_1) \, f_1(\beta_1^{-1} K \beta_1) \, f_1(\beta_1^{-1} g_\tau^{-1}) \gamma \notag
\\& = \theta_{\tau,h}(g) z \, \theta_{\tau,h}'(K) \, z^{-1} \theta_{\tau,h}(g)^{-1}.
\end{align}
Because \( \phi(\tau;h) \) satisfies property~(i) from p.~\pageref{phi-tau-h}, we can apply \cref{zeta} to establish that
\( \theta_{\tau,h}(g) = \zeta(g) \theta_{\tau,h}'(g) \) for some \( \zeta(g) \in \overline{{^{\tau,h}}\gZ(\mathbb{Q})} \).
Substituting this into~\eqref{eqn:f2} and using the fact that \( z \) and \( \zeta(g) \) are in \( \overline{{^{\tau,h}}\gZ(\mathbb{Q})} \subset {^{\tau,h}}\gZ(\mathbb{A}_f) \), we get
\begin{equation} \label{eqn:f2-theta}
f_2(g_\tau K g_\tau^{-1}) = \theta_{\tau,h}'(gKg^{-1}).
\end{equation}
Let \( \mathbf{M} = \MT(h) \subset \mathbf{G} \) and \( {^{\tau,h}}\gM = \MT({^\tau h}) \subset {^{\tau,h}}\gG \).
Let \( \mathbf{T} \subset \mathbf{G} \) denote a maximal \( \mathbb{Q} \)-torus such that \( h \) factors through \( \mathbf{T}_\mathbb{R} \) and let \( {^{\tau,h}}\gT = {^\tau\mathcal{S}} \times^\mathcal{S} \mathbf{T} \subset {^{\tau,h}}\gG \).
Since \( \mathcal{S} \) acts trivially on~\( \mathbf{T} \), twisting by \( {^\tau\mathcal{S}} \) gives a bijection between \( \mathbb{Q} \)-algebraic subgroups of \( \mathbf{T} \) and \( \mathbb{Q} \)-algebraic subgroups of \( {^{\tau,h}}\gT \), as explained in \cref{thT}.
Hence \( {^{\tau,h}}\gM = {^\tau\mathcal{S}} \times^\mathcal{S} \mathbf{M} \).
We claim that \( \theta'_{\tau,h} \) restricts to an isomorphism \( \mathbf{M}(\mathbb{A}_f) \to {^{\tau,h}}\gM(\mathbb{A}_f) \).
In order to establish this, we look into the definition of \( f_1 \) and \( \beta_1 \) at \cite[p.~328--329]{MS82b}.
Choose any element \( a(\tau) \in {^\tau\mathcal{S}}(\overline \bQ) \) and define an isomorphism \( f \colon \mathbf{G}_{\overline \bQ} \to {^{\tau,h}}\gG_{\overline \bQ} \) by
\[ f(g) = a(\tau).g. \]
Let \( L \) be a number field over which \( f \) is defined.
Milne and Shih use the Taniyama group to construct an element
\[ \tilde{\beta}(\tau,h) \in \mathbf{T}(\mathbb{A}_f \otimes_\mathbb{Q} L). \]
The cocycle \( \tilde\beta(\tau,h)^{-1} \cdot \sigma\tilde\beta(\tau,h) \) becomes trivial in \( \mathrm{H}^1(\mathbb{Q}, \mathbf{G}) \), and hence it is the coboundary of some element \( v \in \mathbf{G}(\overline \bQ) \).
Define \( f_1 \colon \mathbf{G} \to {^{\tau,h}}\gG \) and \( \beta_1 \in \mathbf{G}(\mathbb{A}_f) \) by
\begin{gather*}
f_1 = f \circ \Inn(v^{-1}),
\\ \beta_1 = \tilde\beta(\tau,h) \, v^{-1}.
\end{gather*}
From these descriptions of \( f_1 \) and \( \beta_1 \), we can read off
\begin{equation} \label{eqn:theta-beta}
\theta_{\tau,h}'(g') = f(\tilde\beta(\tau,h)^{-1} \, g' \, \tilde\beta(\tau,h))
\end{equation}
(as elements of \( {^{\tau,h}}\gG(\mathbb{A}_f \otimes_\mathbb{Q} L) \)) for all \( g' \in \mathbf{G}(\mathbb{A}_f) \).
Since \( \tilde\beta(\tau,h) \in \mathbf{T}(\mathbb{A}_f \otimes_\mathbb{Q} L) \) and \( \mathbf{T} \) is a torus containing \( \mathbf{M} \), \( \tilde\beta(\tau,h) \) commutes with \( \mathbf{M}(\mathbb{A}_f) \).
Furthermore, since \( f \) is defined as twisting by an element of \( {^\tau\mathcal{S}}(\overline \bQ) \), it maps \( \mathbf{M} \) to \( {^{\tau,h}}\gM \).
Thus \eqref{eqn:theta-beta} shows that \( \theta_{\tau,h}' \) maps \( \mathbf{M}(\mathbb{A}_f) \) into \( {^{\tau,h}}\gM(\mathbb{A}_f \otimes_\mathbb{Q} L) \).
By definition, \( \theta_{\tau,h}' \) maps \( \mathbf{G}(\mathbb{A}_f) \) into \( {^{\tau,h}}\gG(\mathbb{A}_f) \), so in fact it maps \( \mathbf{M}(\mathbb{A}_f) \) into \( {^{\tau,h}}\gM(\mathbb{A}_f) \).
By the same argument, \( \theta_{\tau,h}'^{-1} \) maps \( {^{\tau,h}}\gM(\mathbb{A}_f) \) into \( \mathbf{M}(\mathbb{A}_f) \).
Thus \( \theta_{\tau,h}' \) restricts to an isomorphism \( \mathbf{M}(\mathbb{A}_f) \to {^{\tau,h}}\gM(\mathbb{A}_f) \).
Composing \( \theta_{\tau,h}'^{-1} \) with the identification \( {^{\tau,h}}\gM(\mathbb{A}_f) = \mathbf{M}(\mathbb{A}_f) \) gives an automorphism \( \alpha \) of \( \mathbf{M}(\mathbb{A}_f) \).
Now \eqref{eqn:f2-theta} shows that
\[ f_2(g_\tau K g_\tau^{-1}) \cap \mathbf{M}(\mathbb{A}_f) = \alpha^{-1}(gKg^{-1} \cap \mathbf{M}(\mathbb{A}_f)). \]
Thus \( \alpha \) is the automorphism required for~(b).
\end{proof}
\begin{corollary*} (\cref{galois-complexity})
Let \( S = \Sh_K(\mathbf{G}, X) \) be the canonical model of a Shimura variety over the reflex field \( E_\mathbf{G} = E(\mathbf{G}, X) \).
Let \( s \in S \) be a special point.
Then for every \( \tau \in \Gal(\overline \bQ/E_\mathbf{G}) \), we have \( \Delta(\tau(s)) = \Delta(s) \).
\end{corollary*}
\begin{proof}
Choose \( h \in X \) and \( g \in \mathbf{G}(\mathbb{A}_f) \) such that \( s = [h,g]_K \).
Let \( \tau[h,g] = [h_\tau, g_\tau] \in \Sh(\mathbf{G}, X) \).
Then \( \tau(s) = [h_\tau, g_\tau]_K \).
We write \( \mathbf{M} = \MT(h) \), \( D_\mathbf{M} \), \( K_\mathbf{M}^m \), \( K_\mathbf{M} \) as in the definition of complexity of \( s \), and \( \mathbf{M}_\tau = \MT(h_\tau) \), \( D_{\mathbf{M}_\tau} \), \( K_{\mathbf{M}_\tau}^m \), \( K_{\mathbf{M}_\tau} \) for the analogous objects attached to \( \tau(s) \).
\Cref{galois-special-mt}(a) implies that the discriminants \( D_\mathbf{M} \) and \( D_{\mathbf{M}_\tau} \) are equal.
Let \( f_2 \colon \mathbf{M}_\tau \to \mathbf{M} \) and \( \alpha \colon \mathbf{M}(\mathbb{A}_f) \to \mathbf{M}(\mathbb{A}_f) \) be as in \cref{galois-special-mt}.
Since both \( f_2 \) and \( \alpha \) induce isomorphisms of topological groups on the adelic points, \( \alpha \circ f_2 \) maps \( K_{\mathbf{M}_\tau}^m \) to \( K_\mathbf{M}^m \).
By \cref{galois-special-mt}(b), \( \alpha \circ f_2 \) maps
\[ K_{\mathbf{M}_\tau} = g_\tau K g_\tau^{-1} \cap \mathbf{M}_\tau(\mathbb{A}_f) \]
onto \( K_\mathbf{M} = gKg^{-1} \cap \mathbf{M}(\mathbb{A}_f) \).
Consequently
\[ [K_{\mathbf{M}_\tau}^m:K_{\mathbf{M}_\tau}] = [K_\mathbf{M}^m:K_\mathbf{M}]. \]
The definition of complexity now tells us that \( \Delta(\tau(s)) = \Delta(s) \).
\end{proof}
\bibliographystyle{amsalpha}
\section{...}
We study jet-pairs produced in hard partonic collisions during the early stages of ultrarelativistic heavy ion collisions.
In order to study ultrarelativistic nuclear collisions, we first describe di-jet production in $p$-$p$ collisions, where we assume that no QGP medium is formed, and later extend this description to include medium effects.
In our study of di-jet observables we focus on distributions of jet-pairs in the transverse plane, orthogonal to the collision axis of the incoming nucleons.
In this case the momentum components of the incoming hard partons transverse to the collision axis cannot be neglected, since they contribute considerably to the deviations from pure back-to-back emission of the di-jet pairs.
Thus, we describe the di-jet production cross section $\sigma_{hard}$ using the following factorization formula
\begin{eqnarray}\label{LO_kt-factorisation}
\frac{d \sigma_{hard}}{d y_1 d y_2 d^2q_{1T} d^2q_{2T}} &=&
\int \frac{d^2 k_{1T}}{\pi} \frac{d^2 k_{2T}}{\pi}
\frac{1}{16 \pi^2 (x_1 x_2 s)^2} \; \overline{ | {\cal M}^{\mathrm{off-shell}}_{g^* g^* \to g g} |^2}
\\
&& \times \; \delta^{2} \left( \vec{k}_{1T} + \vec{k}_{2T}
- \vec{q}_{1T} - \vec{q}_{2T} \right) \;
{\cal F}_g(x_1,k_{1T}^2,\mu_{F}^2) \; {\cal F}_g(x_2,k_{2T}^2,\mu_{F}^2) \; \nonumber ,
\end{eqnarray}
where ${\cal F}_g(x_i,k_{iT}^2,\mu_{F}^2)$ (for $i=1$, $2$) are unintegrated parton distribution functions at factorization scale $\mu_F$, which give the distributions of the transverse momenta $k_{iT}$ and of the momentum fractions $x_i$ along the collision axis.
The momentum fractions $x_i$, the rapidities $y_i$ and the transverse momenta $q_{iT}$ of the outgoing particles are related to one another as
\begin{equation*}
x_1 = \frac{q_{1T}}{\sqrt{s}}\exp( y_1) + \frac{q_{2T}}{\sqrt{s}}\exp( y_2)\,, \qquad
x_2 = \frac{q_{1T}}{\sqrt{s}}\exp(-y_1) + \frac{q_{2T}}{\sqrt{s}}\exp(-y_2)\,.
\end{equation*}
${\cal M}^{\mathrm{off-shell}}_{g^* g^* \to g g}$ is the matrix element for the collision of off-shell (virtual) incoming gluons into on-shell outgoing gluons.
We obtained our numerical results for $p$-$p$ collisions via an implementation of Eq.~(\ref{LO_kt-factorisation}) in the Monte-Carlo program {\sf KATIE}.
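For orientation, the kinematic relations above translate directly into code; a minimal sketch in Python (independent of {\sf KATIE}, whose actual implementation we do not reproduce):
\begin{verbatim}
import math

def dijet_x_fractions(q1T, y1, q2T, y2, sqrt_s):
    # Longitudinal momentum fractions x1, x2 of the incoming partons
    # for a 2 -> 2 process with transverse momenta q1T, q2T and
    # rapidities y1, y2 (the relations quoted in the text).
    x1 = (q1T * math.exp(+y1) + q2T * math.exp(+y2)) / sqrt_s
    x2 = (q1T * math.exp(-y1) + q2T * math.exp(-y2)) / sqrt_s
    return x1, x2

# e.g. two 30 GeV jets at y = +/- 1.5 at sqrt(s) = 2760 GeV
print(dijet_x_fractions(30.0, 1.5, 30.0, -1.5, 2760.0))
\end{verbatim}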
For the description of di-jet production in heavy-ion collisions, the probability density for di-jet production can be approximated as factorizing into a density for the initial hard scattering and a density for the jet-medium interactions, since these two subprocesses involve largely different scales of momentum transfer.
We again describe the hard scattering using the factorization formula Eq.~(\ref{LO_kt-factorisation}).
We consider an in-medium jet-evolution for which, under the assumption that transverse momentum transfers from the medium to the jet particles are small, the following evolution equation was found in~\cite{Blaizot:2014rla} for distributions of the leading jet particle
\begin{equation}
\begin{aligned}
\frac{\partial}{\partial t} D(\tilde{x},\mathbf{l},t) = & \: \frac{1}{t^*} \int_0^1 dz\, {\cal K}(z) \left[\frac{1}{z^2}\sqrt{\frac{z}{\tilde{x}}}\, D\left(\frac{\tilde{x}}{z},\frac{\mathbf{l}}{z},t\right)\theta(z-\tilde{x})
- \frac{z}{\sqrt{\tilde{x}}}\, D(\tilde{x},\mathbf{l},t) \right] \\
+& \int \frac{d^2\mathbf{q}}{(2\pi)^2} \,C(\mathbf{q})\, D(\tilde{x},\mathbf{l}-\mathbf{q},t),
\end{aligned}
\label{eq:ktee1}
\end{equation}
with
$
\frac{1}{t^*} = \frac{\alpha_s N_c}{\pi}\sqrt{\frac{\hat{q}}{E}}$
defined via the quenching parameter $\hat{q}$ and the energy of the incoming parton $E$.
$D(\tilde{x},\mathbf{l},t)$ is defined as an energy density with $\tilde{x}$ the fraction of energy $E$ that the jet-particle retains and $\mathbf{l}$ its momentum component orthogonal to the momentum of the incoming particle.
The splitting kernel takes the form
\begin{equation}
{\cal K}(z) = \frac{\left[f(z)\right]^{5/2}}{\left[z(1-z)\right]^{3/2}},
\quad f(z) = 1 - z + z^2,
\qquad 0 \leq z \leq 1.
\label{eq:kernel1}
\end{equation}
The scattering kernel is
\begin{equation}
C(\mathbf{q}) = w(\mathbf{q}) - \delta(\mathbf{q}) \int d^2\mathbf{q'}\,w(\mathbf{q'})\,.
\label{eq:Cq}
\end{equation}
We consider here a scattering off medium particles of a form~\cite{Aurenche:2002pd} such that
\begin{equation}
w(\mathbf{q}) = \frac{16\pi^2\alpha_s^2N_cn}{\mathbf{q}^2(\mathbf{q}^2+m_D^2)}\,,
\label{eq:wq2}
\end{equation}
with $m_D$ the medium quasi-particle Debye mass.
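For illustration, the magnitude of $\mathbf{q}$ can be drawn from $w(\mathbf{q})$ by exact inverse-transform sampling, since under this density the variable $u=\ln[\mathbf{q}^2/(\mathbf{q}^2+m_D^2)]$ is uniformly distributed. The following Python sketch is ours, not the actual {\tt MINCAS} algorithm; the infrared cutoff \texttt{qmin} is an assumption, needed because $w$ diverges as $\mathbf{q}^2 \to 0$:
\begin{verbatim}
import math, random

def sample_q(mD, qmin, qmax):
    """Draw a 2D momentum transfer with density
    w(q) ~ 1/(q^2 (q^2 + mD^2)) for qmin <= |q| <= qmax.
    u = ln(q^2/(q^2+mD^2)) is uniform under this density."""
    def u_of(q):
        return math.log(q * q / (q * q + mD * mD))
    u = random.uniform(u_of(qmin), u_of(qmax))
    t = math.exp(u)                           # = q^2 / (q^2 + mD^2)
    q = math.sqrt(mD * mD * t / (1.0 - t))
    phi = random.uniform(0.0, 2.0 * math.pi)  # isotropic azimuth
    return q * math.cos(phi), q * math.sin(phi)
\end{verbatim}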
After integration of Eq.~(\ref{eq:ktee1}) over the transverse momenta, one obtains
\begin{equation}
\begin{aligned}
\frac{\partial}{\partial t} D(\tilde{x},t) = & \: \frac{1}{t^*} \int_0^1 dz\, {\cal K}(z) \left[\sqrt{\frac{z}{\tilde{x}}}\, D\left(\frac{\tilde{x}}{z},t\right)\theta(z-\tilde{x})
- \frac{z}{\sqrt{\tilde{x}}}\, D(\tilde{x},t) \right] \,.
\end{aligned}
\label{eq:qee1}
\end{equation}
It was shown in~\cite{Kutak:2018dim} that both Eqs.~(\ref{eq:ktee1}) and~(\ref{eq:qee1}) can be written as integral equations, which can be solved numerically via a Monte-Carlo algorithm, as implemented in the {\tt MINCAS} program~\cite{Kutak:2018dim}.
In our approach~\cite{vanHameren:2019xtr} we obtained the four-momenta of the gluons produced in the hard collisions via the {\sf KATIE} program and then the changes in gluon momenta due to in-medium evolution via the {\tt MINCAS} program.
We parametrize the medium as follows: $\hat{q}=0.29$~GeV$^2$/fm, $n=0.08$~GeV$^3$, $m_D=0.61$~GeV. These parameters are estimates for a medium of constant temperature $T=250$~MeV (cf.~\cite{vanHameren:2019xtr} for further explanations). The particles evolve over a time of $t_L=5$~fm/c in the medium.
We obtained numerical results for the azimuthal angular decorrelations $\frac{dN}{d\Delta\phi}$ ($N$ is the number of di-jets; $\Delta \phi$ is the difference of the azimuthal angles of the two outgoing jet-momenta).
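Computing this observable amounts to histogramming the azimuthal opening angle of each jet pair; a minimal Python sketch (our own notation):
\begin{verbatim}
import math

def delta_phi(phi1, phi2):
    """Azimuthal opening angle of a jet pair, folded into [0, pi]."""
    d = abs(phi1 - phi2) % (2.0 * math.pi)
    return 2.0 * math.pi - d if d > math.pi else d

def dN_ddphi(pairs, nbins=32):
    """Histogram dN/d(Delta phi) from a list of (phi1, phi2) pairs."""
    width = math.pi / nbins
    hist = [0.0] * nbins
    for phi1, phi2 in pairs:
        i = min(int(delta_phi(phi1, phi2) / width), nbins - 1)
        hist[i] += 1.0 / width
    return hist
\end{verbatim}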
We compared the following three cases
\begin{enumerate}
\item A case without jet-medium interactions, referred to as the "vacuum case", where the outgoing di-jet momenta were obtained via the {\sf KATIE} algorithm
\item A case with jet-medium interactions, where the in-medium jet-evolution follows Eq.~(\ref{eq:ktee1}). We refer to this case as "non-Gaussian $k_T$ broadening"
\item
A case referred to as "Gaussian $k_T$ broadening", where the distributions of the momentum fractions $\tilde{x}$ follow Eq.~(\ref{eq:qee1}),
while the transverse momentum components $\mathbf{l}$ are selected from a Gaussian distribution $P(||\mathbf{l}||)$ (a sampling sketch is given after this list), i.e.:
$
P(||\mathbf{l}||)=\frac{1}{\sqrt{2\pi\hat{q}t_L}}\;
\exp\left(-\frac{\mathbf{l}^2}{2\hat{q}t_L}\right).
$
\end{enumerate}
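For the Gaussian broadening of case 3, the transverse kick can be generated componentwise; a minimal Python sketch, assuming (as the exponent above implies) that each Cartesian component of $\mathbf{l}$ carries variance $\hat{q}t_L$:
\begin{verbatim}
import math, random

def gaussian_kick(qhat, tL):
    """Transverse momentum (lx, ly) after Gaussian broadening;
    each component is normal with variance qhat*tL, matching
    exp(-l^2/(2 qhat tL)) with l^2 = lx^2 + ly^2.
    Units: qhat in GeV^2/fm, tL in fm  ->  l in GeV."""
    sigma = math.sqrt(qhat * tL)
    return random.gauss(0.0, sigma), random.gauss(0.0, sigma)

# medium parameters of the text: qhat = 0.29 GeV^2/fm, tL = 5 fm
print(gaussian_kick(0.29, 5.0))
\end{verbatim}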
For jet-pairs produced in collisions at $\sqrt{s_{\rm NN}}=2.76$~TeV, where jets are emitted in directions with rapidities $1.2<|y|<2.1$, the left panel of Fig.~\ref{fig1} shows results for $\frac{dN}{d\Delta\phi}$.
There, the cases with jet-medium interactions are considerably suppressed.
Differences in shapes of the cases are examined in the right panel of Fig.~\ref{fig1}, which shows the $\frac{dN}{d\Delta\phi}$ distributions for all three cases divided by their respective maxima.
The vacuum case behaves similarly to the case of Gaussian $k_T$ broadening; the latter actually yields an even slightly narrower distribution than the former.
However, the distribution for non-Gaussian $k_T$ broadening is considerably broader.
We studied di-jet production in heavy ion collisions by using a Monte-Carlo approach which accounts for the momentum components of the colliding partons transverse to the beam axis via unintegrated parton densities and an in-medium jet evolution that follows Eqs.~(\ref{eq:ktee1}) and~(\ref{eq:qee1}) from~\cite{Blaizot:2014rla}, with both scattering off medium particles and coherent medium-induced radiation.
Some shortcomings of our approach are that it does not include quark jets, that the leading jet-gluon momenta are taken as the jet momenta, and that we do not include bremsstrahlung emission in vacuum.
Thus, we do not use our approach for comparison with experiment, but rather to qualitatively study the influence of in-medium interactions on di-jet observables.
We obtained phenomenological results for the azimuthal angular decorrelations $\frac{dN}{d\Delta\phi}$.
We conclude that the obtained distribution for di-jets in a QGP-medium is considerably suppressed compared to di-jet production without a medium, and also considerably broader. The main reason for the broadening is the deviation of the $k_T$ distributions from Gaussian behavior.
%
\begin{figure}
\includegraphics[scale=0.4,clip=true,trim=20 5 45 0]{DPHI_2760.pdf}
\includegraphics[scale=0.4,clip=true,trim=20 5 45 0]{DPHI_2760_norm.pdf}
\caption{Left panel: results for azimuthal angular decorrelations for di-jets produced in nuclear collisions at $\sqrt{s_{\rm NN}}=2.76$~TeV.
Right panel: same curves, but now normalized to their respective maxima.}
\label{fig1}
\end{figure}
\section*{Acknowledgements}
The research is supported by
Polish National Science Centre grant no. DEC-2017/27/B/ST2/01985.
\bibliographystyle{JHEP}
|
2,869,038,156,889 | arxiv | \section{Introduction}
\label{sec:introduction}
The study of the origin and structure of clusters of galaxies occupies
a central place in the current efforts to understand the origin and
evolution of the Universe. In particular, the determination of the
mass distribution of clusters can prove crucial for verifying the existence of dark matter in the Universe and eventually to determine its abundance and
dynamical evolution.\\
\noindent
Cluster masses have traditionally been derived through the virial analysis
of the velocity dispersion of cluster galaxies, with the assumption of
dynamical equilibrium
\citep[eg.][]{1998ApJ...506...45G,2001ApJ...548...79G}, and/or from the X-ray temperature of the hot
intracluster gas, assuming hydrostatic equilibrium \citep[][and references therein]{2002ARA&A..40..539R}.
Since using these methods one has to
assume dynamical or hydrostatic equilibrium, these mass
estimates are affected by our ignorance of the dynamical state of the cluster. Weak lensing analysis offers a unique opportunity to determine the cluster
mass distribution without such assumptions on its equilibrium, as the effect
is due to the gravitational deflection of the light that is dependent solely on the
distribution of matter. In particular, one application of weak lensing
analysis is the detection of the weak shear around galaxy clusters,
yielding an estimate of the total cluster mass and allowing a full
mass reconstruction of mainly low ($0.2\!\la\!z\!\la\!0.5$) redshift clusters \citep{2004ApJ...604..596C, 2005A&A...434..433B, 2006A&A...451..395C}. As a possible target for a weak lensing
analysis the galaxy cluster \mbox{ABCG$\,$209}{} is particularly interesting, because the photometric and
evolutionary properties of its galaxy populations have been already
thoroughly studied
\citep{2004A&A...424...79M,2003A&A...397..431M,2004A&A...425..783H}.\\
\noindent
In this paper we present the weak lensing mass reconstruction of the
galaxy cluster \mbox{ABCG$\,$209}{} at $z=0.21$ \citep[][and references therein]{2003A&A...408...57M}
using archival wide-field $R$-band imaging. \mbox{ABCG$\,$209}{}
is a rich \citep[richness class R=3,][]{1989ApJS...70....1A}, X-ray
luminous
\citep[L$_{\mathrm{X}}$\,(0.1-2.4\,keV)$ \sim 14\;h_{50}^{-2}\;10^{44}\,$erg.$s^{-1}$, ][]{1996MNRAS.281..799E}, moderately hot
\citep[T$_{\mathrm{X}}\sim10$ keV,][]{1998MNRAS.301..328R} and massive cluster \citep{2002ApJS..139..313D,2003A&A...408...57M}. It is
characterized by the presence of substructures, which is shown by an elongation and asymmetry in the X-ray emission maps, with
two main clumps \citep{1998MNRAS.301..328R}. Moreover, the young dynamical state is indicated by the
presence of a radio halo
\citep{1999NewA....4..141G, 2006astro.ph.10271V}, which has been suggested to
be the result of a recent cluster merger, through the acceleration of
relativistic particles by the merger shocks \citep{2002IAUS..199..133F}.\\
The plan of the paper is as follows. In section 2 we will briefly
review the data reduction procedures. In section 3 we will present the software
pipeline which we have developed to perform this analysis,
based on the KSB+ algorithm \citep{1995ApJ...449..460K,
1997ApJ...475...20L}. There exist many variants of this algorithm
\citep[see][for a presentation and comparison of some
implementations]{2006MNRAS.368.1323H,2006astro.ph..8643M}, so we will describe in some
detail our particular implementation (the OACt pipeline). In section 4 we
will describe the preparation of the galaxy catalogue, and the technique
adopted to extract the shear information. Section 5 will be devoted to
presenting the mass estimates and a mass aperture reconstruction obtained
from the shear maps. In section 6 we will compare the mass maps with
the galaxy distributions and the X-ray maps. Section 7 will present
the conclusions.
\section{Data description}
\label{sec:datadescription}
A detailed description of the data and reduction techniques has been
given elsewhere \citep{2004A&A...425..783H}, so here we will only
summarize the main steps. The data were obtained from the Canada-France-Hawaii telescope (CFHT)
science archive (PI J.-P. Kneib), and comprise a wide-field R-band image centered on the cluster. The observations were made on 14-16
November 1999, using the CFHT12K mosaic camera, an instrument made up of
12 $4096\times2048$ CCDs, set at the prime focus of the 3.6-m CFHT. The
CCDs have a pixel scale of $0.206^{\prime\prime}$, resulting in a total
field of view of $42\times28~{\rm arcmin}^{2}$, corresponding to
$8.6\times5.7~h^{-2}_{70}~{\rm Mpc}^{2}$ at the cluster redshift. The
total exposure time is 7200 s, made up of twelve 600s exposures, jittered to cover
the gaps between the CCDs.\\
\noindent
Standard IRAF tools were used to bias-subtract the images, using bias
exposures and the overscan regions of each CCD, before flat-fielding using
a superflat made up of all science images from the same observing run,
registering, and co-adding the images. The resultant images have a median
seeing of $0.73^{\prime\prime}$.\\
\noindent
The photometric calibration was performed in the Johnson-Kron-Cousins
photometric system, using observations of $\sim$300 secondary standard
stars ($14<$R$<17$) in fields 6 and 7 of Galadi-Enriquez et al. (2000),
resulting in a zero-point uncertainty of 0.005 magnitude.\\
\noindent
For the weak lensing analysis we have masked: (i) saturated stars and
hot pixels; (ii) a $1^{\prime}$ (ie. $300\,$pixels) border all
around the field, where the point-spread function (hereafter PSF) is too
complex to be properly modelled (eg. concave and/or varying too
rapidly on small scales); (iii) CCD gaps (ie. area covered by
several CCD on stacked images), where the PSF is also too complex.\\
\noindent
The data set also contains B-band imaging of the field, with significantly poorer seeing and depth than the R-band image. On this B-band image, the bright star density is too low to interpolate the PSF, so it is not used to compute the shape parameters, but rather to help distinguish between cluster, foreground and background galaxies.
Using an algorithm which takes into account the R-band magnitude, B-R colour and the local
density, \citet{2004A&A...425..783H} attribute to each galaxy a
probability of belonging to the cluster. Given the lack of spectroscopic
information, this probability is the most accurate information we have
to discriminate among cluster and field galaxies.
\section{Weak gravitational shear estimate}
\label{sec:theory}
We extract the shear signal from observed polarisation of background
galaxies, corrected for the effects of the PSF, via the standard KSB+ method \citep{1995ApJ...449..460K, 1997ApJ...475...20L}, improved as described in the following sections.
We have blind tested our pipeline on the Shear-TEsting-Program (STEP)
simulated data \citep{2006MNRAS.368.1323H,2006astro.ph..8643M}, where
altogether 16 different weak lensing pipelines have been tested and compared.
For reasonable PSFs, as the one sampled by the images presented in
this paper turned out to be, the shear we measure $\boldsymbol{\gamma}^{\rm
meas.}$ is a linear function of the true (ie. simulated) one $\boldsymbol{\gamma}^{\rm true}$:
\begin{equation}
\label{eq:biasdef}
\boldsymbol{\gamma}^{\rm meas.} = (m+1) \boldsymbol{\gamma}^{\rm true} + \boldsymbol{C}
\end{equation}
where $m$ is the calibration bias and $\boldsymbol{C}$ is a systematic effect, mainly describing the anisotropy of PSF residuals. Both $m$ and $\boldsymbol{C}$ depend on the PSF, but contain a constant factor intrinsic to the pipeline, which can be subtracted. After subtraction, one has:
\begin{equation}
\label{eq:biasmeasured}
\begin{array}{lll}
-0.05 & < m < & 0\\
-0.02 & < |\boldsymbol{C}| < & 0.02
\end{array}
\end{equation}
For the current data set these systematics are well below statistical
errors. These statistical errors are mostly due to the intrinsic distribution of galaxy polarisations and to the background galaxy density.\\
\noindent
We present in sections \ref{sec:formalism} and \ref{sec:ksb} the basic
KSB+ formalism, and in section \ref{sec:oact} the details specific to
the OACt pipeline.
\subsection{Polarisation as a shear estimator}
\label{sec:formalism}
The observed polarisation of a galaxy offers an estimate of the local
gravitational shear, and can be defined using the weighted quadrupole
moments of the brightness distribution \citep{1995ApJ...449..460K}:
\begin{equation}
\label{eq:quadrupoles}
Q_{ij} = \frac{\int \, d^2\theta \, W(\theta) \,
I(\theta) \, \theta_i
\theta_j} {\int d^2\theta \, W(\theta) \,I(\theta) },
\end{equation}
where $I$ is the surface brightness of the object, $\theta$ is the angular
distance from the object center and $W$ is a window function, whose
introduction is necessary
to reduce the shot noise to a reasonable level at large distances from
the centroid. Note that $Q_{ij}$ is symmetric in its indices, so $Q_{12}=Q_{21}$. Using the weighted quadrupoles, we define the complex polarisation $\boldsymbol{e}=e_1+i\,e_2$ as:
\begin{equation}
\label{sec:defpolarisation}
\left(
\begin{array}{c}
e_1\\
e_2
\end{array}
\right)
= \frac{1}{Q_{11} + Q_{22}}
\left(
\begin{array}{c}
Q_{11} - Q_{22}\\
2Q_{12}
\end{array}
\right)
\end{equation}
For an elliptical object and a constant window function $W(\theta )=1$,
$\boldsymbol{e}$ is simply related to the axis ratio $\beta$. Defining
a position angle $\theta$ of the major axis, measured
counter-clockwise from the $x$ axis, one obtains:
\begin{equation}
\label{eq:elliparam}
\left(
\begin{array}{c}
e_1\\
e_2
\end{array}
\right)
= \frac{1-\beta^2}{1+\beta^2}
\left(
\begin{array}{c}
\cos 2\theta\\
\sin 2\theta
\end{array}
\right)
\end{equation}
In the weak lensing limit (defined by: $|\boldsymbol{\gamma}| \ll 1$, where $\boldsymbol{\gamma} = \gamma_1 + i\gamma_2$ is the complex shear field), and assuming that the real (ie. unobserved) polarisation distribution has a null average, $\boldsymbol{\gamma}$ is directly related to the average observed polarisation, $\boldsymbol{\gamma}\approx\boldsymbol{e}/2$.
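To make these definitions concrete, the following Python sketch (ours, not the OACt code; whole-pixel sums are used here instead of the sub-pixel summation adopted by the pipeline) evaluates the weighted quadrupoles and the polarisation on a postage-stamp image:
\begin{verbatim}
import numpy as np

def polarisation(img, xc, yc, rg):
    """Weighted quadrupole moments and polarisation (e1, e2) of
    a postage-stamp image, with a circular Gaussian window of
    standard deviation rg centred on (xc, yc), in pixels."""
    y, x = np.indices(img.shape, dtype=float)
    dx, dy = x - xc, y - yc
    w = np.exp(-(dx**2 + dy**2) / (2.0 * rg**2))
    norm = np.sum(w * img)
    q11 = np.sum(w * img * dx * dx) / norm
    q22 = np.sum(w * img * dy * dy) / norm
    q12 = np.sum(w * img * dx * dy) / norm
    e1 = (q11 - q22) / (q11 + q22)
    e2 = 2.0 * q12 / (q11 + q22)
    return e1, e2
\end{verbatim}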
\subsection{The KSB method}
\label{sec:ksb}
The current KSB+ method is the result of a series of successive
improvements \citep{1997ApJ...475...20L,1998ApJ...504..636H} of the
original method proposed by \citet{1995ApJ...449..460K}. It provides a
gravitational shear estimate by first-order subtraction of the PSF
smearing and shearing from the galaxy polarisation.
The 2D vector KSB+ shear estimator of a single galaxy
\mbox{$\widehat{\boldsymbol{\gamma}}$} is given by:
\begin{equation}
\label{eq:shearestimator}
\widehat{\gamma}_\alpha = \left(P^\gamma \right)^{-1}_{\alpha
\beta} \left[e_{\beta} - P^{\rm
sm}_{\beta\mu}q_{\mu}\right].
\end{equation}
where we have adopted the standard convention on the summation rule of
indices, and $\boldsymbol{q}$ is the anisotropic component of the PSF, $\boldsymbol{P^{\rm sm}}$ is the smear polarisability tensor,
and $\boldsymbol{P^\gamma}$ is the pre-seeing shear polarisability
tensor, the latter being defined as:
\begin{equation}
\label{eq:pgammadef}
\boldsymbol{P^\gamma} = \boldsymbol{P^{\rm sh}}-\left(\boldsymbol{P^{\rm sm}}_{\rm PSF}\right)^{-1}\cdot\boldsymbol{P^{\rm sh}}_{\rm PSF}\cdot\boldsymbol{P^{\rm sm}}
\end{equation}
Here $\boldsymbol{P^{\rm sh}}$ is the shear polarisability tensor, and
the subscript ``PSF'' signals that the quantity is computed for the PSF. In equation \ref{eq:shearestimator}, $\boldsymbol{q}$ and
$\boldsymbol{P^\gamma}$ depend on the PSF and are estimated from the
images of the surrounding stars.\\
\noindent
The actual prescription to estimate $\boldsymbol{q}$,
$\boldsymbol{P^\gamma}$ and the PSF-subscripted tensors, the choice of the window function $W$ in equation \ref{eq:quadrupoles}, the algorithm of pixelised summations, and finally the approximations, vary from one implementation of the method to another. Our algorithm is described in the following section.
\subsection{The OACt pipeline}
\label{sec:oact}
\subsubsection{Smoothing radius, significance, window function, centroid and summation algorithm}
We determine the significance $\nu$ and the optimal smoothing
radius $r_g$ of each object.
$\nu$ and $r_g$ are defined as usual in weak lensing: when convolving
the image with a Mexican-hat filter, the radius $r_g$ is the smoothing
radius for which the object has the best signal-to-noise ratio, and
the significance $\nu$ is this best signal-to-noise ratio.\\
\noindent
The window function $W$ of \mbox{equation \ref{eq:quadrupoles}} is
taken to be a circular Gaussian centered on the weighted centroid,
having a standard deviation equal to $r_g$. The weighted centroid, computed iteratively, is the point for which weighted dipoles are equal to zero:
\begin{equation}
\label{eq:dipoles}
\int \, d^2\theta \, W(\theta) \, I(\theta) \, \theta_i = 0
\end{equation}
Due to the finite size of pixels, all the integrals are replaced by
discrete sums with steps of $0.25\,$pixel in x and y directions,
truncated at a distance of $4r_g$. The flux at a given position is the
linear interpolation of the flux in the four nearest pixels, and the
background flux in a pixel is estimated by \textsc{se}xtractor (more details on
the actual parameters adopted are given in \mbox{section \ref{sec:alldet}}).
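A minimal Python sketch of the centroid iteration (ours; whole-pixel sums for brevity, and the convergence tolerance is our assumption):
\begin{verbatim}
import numpy as np

def weighted_centroid(img, x0, y0, rg, niter=10, tol=1e-3):
    """Iterate the window centre until the weighted dipoles
    vanish, i.e. until the centre coincides with the weighted
    barycentre of the light distribution."""
    xc, yc = float(x0), float(y0)
    y, x = np.indices(img.shape, dtype=float)
    for _ in range(niter):
        w = np.exp(-((x - xc)**2 + (y - yc)**2) / (2.0 * rg**2))
        norm = np.sum(w * img)
        xn = np.sum(w * img * x) / norm
        yn = np.sum(w * img * y) / norm
        if abs(xn - xc) < tol and abs(yn - yc) < tol:
            return xn, yn
        xc, yc = xn, yn
    return xc, yc
\end{verbatim}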
\subsubsection{Shape computation}
\label{sec:shapecomputation}
We compute the ellipticities, smear polarisability tensors and shear
polarisability tensors of background galaxies as described above (section \ref{sec:ksb}) .
We do the same for stars, except that we do not use the smoothing
radii $r_g$ of stars themselves, but we perform the calculation for
every single $0.1\,$pixel-bin of $r_g$ in the range
$0.95<r_g<9.05$. This bin width of $0.1\,$pixel is chosen to be much
smaller than the accuracy of the measured $r_g$ (this accuracy is of
few tenths of pixel), and the range is given by extremum values of
$r_g$ in usual data. This gives a total of 81 $r_g$ bins.\\
\noindent
After computation of the shape parameters and shear estimator, we discard the objects for which: \mbox{(i) the} standard deviation on the centroid is larger than $0.2\,$pixel; \mbox{(ii) the} polarisation $|\boldsymbol{e}|$ is greater than 1; \mbox{(iii) the} shear estimator $|\boldsymbol{\gamma}|$ is greater than 2; \mbox{(iv) the} trace of the $\boldsymbol{P^\gamma}$ tensor is lower than 0.2. For the current data set, this corresponds to 5\% of the background galaxies. Even on noisier data sets this usually corresponds to less than 10\%.
\subsubsection{Subtracting the smearing and shearing effects of the PSF}
The corrective factors of a given galaxy are all computed in the $r_g$
bin corresponding to the $r_g$ of the galaxy itself. In other words,
we interpolate PSF properties from stars in every single $r_g$ bin
independently for each galaxy.
In all the $\boldsymbol{P^{\rm sm*}}$ and $\boldsymbol{P^{\rm sh*}}$
tensors of stars, the off-diagonal terms are completely dominated by
shot-noise, and typically they are more than one order of magnitude
smaller than the diagonal terms. For this reason, following similar
implementations \citep{2000ApJ...532..88H}, we systematically neglect these off-diagonal terms.
Also the \mbox{non-diagonal} terms of the tensor $\boldsymbol{P^\gamma}$ are
extremely noisy, and we then approximate this tensor
as a scalar equal to half its trace.
These approximations imply that the vector $\boldsymbol{q}$ of
equation \ref{eq:shearestimator} (ie. the anisotropic component of the
PSF) is the interpolation of:
\begin{equation}
\label{eq:qoactdef}
q_\alpha = \frac{e_\alpha^*}{P^{\rm sm*}_{\alpha\alpha}}
\end{equation}
and :
\begin{equation}
\label{eq:pgammaoactdef}
P^\gamma = \frac{1}{2}\sum_\alpha\left(P^{\rm sh}_{\alpha\alpha} - \frac{P^{\rm sh*}_{\alpha\alpha}}{P^{\rm sm*}_{\alpha\alpha}}P^{\rm sm}_{\alpha\alpha}\right)
\end{equation}
Finally, we compute the shear estimator following this simplified version of equation \ref{eq:shearestimator}:
\begin{equation}
\label{eq:gammaoactdef}
\widehat{\gamma}_\alpha = \frac{1}{P^\gamma}\left[e_{\alpha} - \sum_i P^{\rm sm}_{\alpha i} q_i\right].
\end{equation}
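In code, this simplified correction reads as follows (a Python sketch; the PSF quantities are assumed to be already interpolated at the galaxy position, as described in the next section):
\begin{verbatim}
def shear_estimator(e, P_sm, P_sh, q, r_star):
    """OACt-style shear estimate for one galaxy.
    e      : (e1, e2) observed polarisation
    P_sm   : 2x2 smear polarisability of the galaxy
    P_sh   : 2x2 shear polarisability of the galaxy
    q      : (q1, q2) PSF anisotropy interpolated at the galaxy
    r_star : (r1, r2) interpolated ratios P^sh*_aa / P^sm*_aa"""
    # scalar pre-seeing shear polarisability: half-trace
    P_gamma = 0.5 * sum(P_sh[a][a] - r_star[a] * P_sm[a][a]
                        for a in (0, 1))
    # anisotropy subtraction, then division by P_gamma
    g1 = (e[0] - P_sm[0][0] * q[0] - P_sm[0][1] * q[1]) / P_gamma
    g2 = (e[1] - P_sm[1][0] * q[0] - P_sm[1][1] * q[1]) / P_gamma
    return g1, g2
\end{verbatim}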
\subsubsection{Mapping PSF properties}
\label{sec:mappingpsf}
From the star catalogues, we interpolate the 4 ratios:
$(e_\alpha^*\,/\,P^{\rm sm*}_{\alpha\alpha})$ and $(P^{\rm
sh*}_{\alpha \alpha}\,/\,P^{\rm sm*}_{\alpha \alpha})$ (with $\alpha=1$ or 2) over the entire
region (where the asterisks refer to parameters measured on
stars). The former two terms give $\boldsymbol{q}$ (equation
\ref{eq:qoactdef}) and the latter two give $\boldsymbol{P^\gamma}$
through equation \ref{eq:pgammaoactdef}. For each term, we fit a
2-dimensional, 2-degree polynomial independently on each CCD (and also independently in each $r_g$ bin).\\
\noindent
As we will see in the following, in order to keep a
relatively high star density, we use a
rather permissive star selection criterion. As a consequence, our star
catalogue is contaminated by small galaxies. In order to reject these small galaxies when fitting PSF properties, the fits are iterative: after the first fit is performed, we
reject all objects with at least one residual at more than $3\sigma$,
and we continue to iteratively perform new fits until
convergence. Typically, this procedure converges after 2 or 3 iterations.
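A minimal Python sketch of such an iterative $3\sigma$-clipped polynomial fit (ours, not the pipeline code; \texttt{x}, \texttt{y}, \texttt{z} are NumPy arrays of star positions and of the quantity to be interpolated):
\begin{verbatim}
import numpy as np

def clipped_poly_fit(x, y, z, nclip=3.0, max_iter=10):
    """Fit z(x, y) with a 2-degree 2D polynomial, iteratively
    rejecting points with residuals beyond nclip standard
    deviations (e.g. small galaxies contaminating the star list)."""
    A = np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y])
    keep = np.ones(len(z), dtype=bool)
    for _ in range(max_iter):
        coeffs, *_ = np.linalg.lstsq(A[keep], z[keep], rcond=None)
        res = z - A @ coeffs
        new_keep = np.abs(res) < nclip * np.std(res[keep])
        if np.array_equal(new_keep, keep):
            break        # converged, typically in 2-3 iterations
        keep = new_keep
    return coeffs, keep
\end{verbatim}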
\section{Detection of the weak lensing signal}
\label{sec:polarisationmeasurements}
To extract the weak shear information from the reduced and calibrated CHFT12k images presented in section \ref{sec:datadescription}, we first build the star-field and the background-galaxy-field catalogues. Background galaxies contain the weak lensing signal, smeared and sheared by the PSF, while stars are measures of the local PSF.
We first detect all the objects within the field and then select those
relevant for weak lensing. Stars are selected according to their sizes
and their magnitudes, while the background galaxies are selected by
cross-checking our relevant object catalogue with the galaxy catalogue
of \citet{2004A&A...425..783H}. The latter contains all galaxies
within the field and assigns to each a probability of belonging to the field rather than to the cluster itself. All these steps are described in section \ref{sec:alldet}.
We then compute shape parameters of stars and background galaxies (section \ref{sec:shapecomputation}), using the stars to map the PSF (as described in section \ref{sec:mappingpsf}), and finally compute a shear estimator for each background galaxy.
\subsection{Star and background galaxy catalogues}
\label{sec:alldet}
We detect all the objects on the image using \textsc{se}xtractor, with very low thresholds. We also get a large proportion ($\sim 50\%$) of spurious detections but these are rejected later. We demand detected objects to have at least 5 pixels above $1.5\,\sigma$, where $\sigma$ is the standard deviation of the local background (ie. \textsc{detect\_thresh} and \textsc{analysis\_thresh} set to 1.5, \textsc{detect\_minarea} set to 5), where the local background is estimated with keywords \textsc{back\_size} set to 70 and \textsc{back\_filterize} set to 5.
\textsc{se}xtractor is also used
to measure the flux and the half-light-radius $r_h$ of each object.
We then compute $r_g$ and $\nu$, as described in section \ref{sec:oact}.
We then successively remove from the catalogues:
\mbox{(i) the} objects with $\nu <3$; \mbox{(ii) the} objects with at
least one neighbor nearer than $3(r_g+r_g(\textrm{neighbor}))$;
\mbox{(iii) the} objects with at least one pixel belonging to the
masked area within an aperture of $3\,r_g$ (the masked area is
described in section \ref{sec:datadescription}); \mbox{(iv) the}
objects with $r_g$ lower than the local smoothing radius of
stars. Finally, we compute the shapes of the remaining objects and
apply shape cuts, as described in section
\ref{sec:shapecomputation}. After having performed these steps, we
have a catalogue of $\sim 30000$ detections, containing all the
weak-lensing-relevant background galaxies but also stars, cluster
galaxies and a large proportion of spurious events due to the low
thresholds used with \textsc{se}xtractor.\\
\noindent
We build a star catalogue with a loose selection based on 5 parameters:
the significance $\nu$; the magnitude $R$; the surface brightness in
the central pixel $R_{\rm max}$; the half-light-radius $r_h$, and the
smoothing radius $r_g$. We demand: \mbox{$\nu>10$}; \mbox{$R\le
24.0$}; \mbox{$R+2.6\le R_{\rm max}\le R+3.3$};
\mbox{$1.9<\frac{r_h(R)}{1\,{\rm pixel}}<2.5$};
\mbox{$1.3<\frac{r_g(R)}{1\,{\rm pixel}}<1.7$}. This returns
$1588\,$objects (\mbox{$\sim 1.5\,$objects.arcmin$^{-2}$}) uniformly
distributed in the field. We define this catalogue {\em loose} because
$\nu$, $r_g$ and $r_h$ of stars show variations of $\sim 30\%$
through the field with the PSF (while we apply these constant
cuts). Thus, these cuts result in a star catalogue which contains
$20\,-\,45\%$ of non-stellar detections, most of them being small
galaxies. These fake stars are iteratively rejected during the PSF
property fits, as described in section \ref{sec:mappingpsf}. At the
end of iterations, we are left with \mbox{$\sim 0.9\,$stars.arcmin$^{-2}$}, as shown in figure \ref{fig:starellmapr}.\\
\noindent
We select background galaxies from the remaining objects in two
steps. First, we reject fake detections and cluster galaxies by
cross-checking the remaining objects with the catalogue of
\citet{2004A&A...425..783H}, which includes all the galaxies within the
field, together with their accurate photometry and the probability for
each galaxy of belonging to the field rather than to the cluster itself
(see section \ref{sec:datadescription}). We include in our final
catalogue those galaxies identified in the catalogue of
\citet{2004A&A...425..783H} marked as having a probability larger than 80\% to belong to the field. Second, we reject most of the foreground galaxies with the cut $R>21$ (since the R-band magnitude is the only information we have at this stage). In order to reject too faint background galaxies for which the shape parameters are not reliable, we cut the $\sim 3\%$ faintest galaxies for which $R>25.5$.\\
\noindent
These cuts are optimised to give a background galaxy catalogue with low foreground contamination, while keeping almost all the relevant
background galaxies. They are illustrated in figure
\ref{fig:stargalsepr}, where the star sequence is clearly visible in
red, while background galaxies are in blue. The final background
galaxy catalogue contains $16708\,$galaxies (\mbox{$16.7\,$galaxies.arcmin$^{-2}$}).
When combining individual galaxy shears to produce shear maps, mass
reconstructions or density profile fits (see sections
\ref{sec:shearmaps} and \ref{sec:massdistrib}), we weight galaxies
according to their significance, as proposed by
\citet{2006A&A...451..395C}: we do not consider objects with $\nu<5$,
while for $\nu>5$ the weight is set equal to ${\rm
min}(\nu\,;\,40)$. In other words, for a given galaxy $i$, the
weight $w_i$ is defined as:
\begin{equation}
\label{eq:weight}
\begin{array}{lll}
w_i = 0 & {\rm ; if } & \nu_i<5\\
w_i = \nu_i & {\rm ; if } & 5\le\nu_i\le 40\\
w_i = 40 & {\rm ; if } & \nu_i>40
\end{array}
\end{equation}
The weighted galaxy number is $\Sigma_{i}w_{i}/{\rm max}(w_{i}) =
\Sigma_{i} w_{i}/40 = 8816$ (\mbox{$8.8\,$weighted galaxies.arcmin$^{-2}$}).
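This weighting scheme is a one-line transcription in code:
\begin{verbatim}
def weight(nu):
    """Galaxy weight as a function of detection significance."""
    return 0.0 if nu < 5.0 else min(nu, 40.0)
\end{verbatim}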
\subsection{Testing the PSF subtraction}
\label{sec:testingpsfsub}
Figure \ref{fig:starellmapr} shows the star polarisations over the
field before and after the anisotropic correction (ie. the subtraction
of $\sum_i P^{\rm sm}_{\alpha i} q_i$ in equation
\ref{eq:gammaoactdef}). One can notice the lack of large scale
correlations due to the PSF anisotropies, after the subtraction. The
correction also reduces the average amplitude and anisotropy of the PSF, as
is clear from Figure \ref{fig:epsstarsbeforeafter1}. The corrected
distribution of ellipticities is more isotropic, and also has a smaller
dispersion.
\subsection{Combining the individual shears, building shear maps}
\label{sec:shearmaps}
To get a shear map, we divide the field into $17\times 11$ square cells
with an overlap of 50\% (ie. 50\% of the galaxies in one cell belong only to this cell, while
the remaining 50\% belong also to at least one of the 8 neighbouring
cells). In each cell we average
$\boldsymbol{\gamma}$ according to the weighting scheme described in
section \ref{sec:alldet}. The resulting shear map is shown in Figure
\ref{fig:shearmaps}. One can see a characteristic pattern of increased
tangential shear, which coincides with the central region of the
cluster, as defined by the optical distribution of galaxies. This
visual impression is confirmed by the mass aperture map, as we will
see in the following sections.
\clearpage
\begin{figure}
\centering
\includegraphics[scale=0.5]{Rmax.vs.R.eps}
\includegraphics[scale=0.5]{R.vs.rg.eps}
\caption{
Top panel: central surface brightness $R_{\rm max}$ against magnitude $R$; bottom panel: magnitude $R$ against optimal smoothing radius $r_g$, for all objects detected in the image. Objects selected as possible stars are in red, those selected as possible background galaxies are in blue. Other objects are in black.
}
\label{fig:stargalsepr}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.5]{epsstarsmapR-rg1.5-070127.eps}
\includegraphics[scale=0.5]{epsstarsmapR_after-rg1.5-070127.eps}
\caption{Star polarisations in the field. Top panel: before any
correction (ie. measured polarisations). Bottom panel: after the
anisotropic correction. In both panels, the vectors are oriented
along the major axis of the ellipsoid, their length being
proportional to the polarisation:
\mbox{$|\boldsymbol{e}|=\sqrt{(Q_{11}-Q_{22})^2+4\,Q_{12}^2}\,/\,(Q_{11}+Q_{22})$}.
The scale shown in the upper right corners is
\mbox{$|\boldsymbol{e}|=0.03$}. The straight lines show the (masked) regions covered by several unstacked images.}
\label{fig:starellmapr}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.7]{epsstarsbeaf_R-061120.eps}
\includegraphics[scale=0.7]{absepsstarsbeaf_R-061219.eps}
\caption{Distribution of stellar polarisations. Upper panel: $e_1$ against $e_2$ of stars before (black) and after (red) the anisotropic correction. Bottom panel: $|\boldsymbol{e}|$ distribution (same colours).
}
\label{fig:epsstarsbeforeafter1}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.9]{shearmap_R-061104.eps}
\caption{Shear map of the whole field obtained with an overlap of
50\% (ie. 50\% of the field area is accounted for in more than one cell). The scale is given by the thick line in the upper right corner: $\gamma = 0.05$.}
\label{fig:shearmaps}
\end{center}
\end{figure*}
\clearpage
\section{Mass distribution}
\label{sec:massdistrib}
There are quite a few different methods to deduce the mass
distribution from the individual shears of background galaxies \citep[see eg.][for a
review]{2001PhR...340..291B}. A class of these methods makes use
of the 2D {\em smoothed} shear maps to get the projected surface density
distribution: examples of this class are the {\em mass
reconstruction} method \citep{1995A&A...294..411S, 1995A&A...297..287S,
2001A&A...374..740S}, or the {\em mass aperture} technique
\citep{1996MNRAS.283..837S}. These methods suffer from the {\em sheet
mass degeneracy}: they provide a reliable reconstruction of the surface density, but this
is known only up to an additive constant. The alternative is to assume
{\em a priori} a given mass profile, so that the degeneracy is removed,
then attempt a parametric fitting. These latter methods are useful to get an
estimate of the total mass of the cluster. Both methods suffer from
possible systematic effects, for instance the contamination from
background sources.\\
\noindent
In section~\ref{sec:modelfitting}, we fit two different parametric
spherical density profiles: a Singular Isothermal Sphere
(SIS) and a Navarro-Frenk-White (NFW) \citep{1996ApJ...462..563N}. In section \ref{sec:massap}, we map the mass distribution using mass aperture statistics.
In order to derive actual values of the mass from the parametric shear fits, we have to know the critical surface density:
\begin{equation}
\label{eq:sigmacritdef}
\Sigma_{\rm crit} = \frac{c^2}{4\pi G} \frac{\left\langle D_{\rm{ls}} / D_s \right\rangle^{- 1}}{D_{\rm{l}}}
\end{equation}
where $D_{\rm ls}$, $D_{\rm l}$ and $D_{\rm s}$ are the lens-source,
observer-lens and observer-source distances respectively. However, we do not
have any redshifts of the background galaxies, so we approximate
their redshift distribution by a single median value (ie. we adopt a
source plane approximation). As described in section \ref{sec:alldet}, we restrict the background galaxies to the range \mbox{$21 < R < 25.5$}: in this magnitude
range, we can assume the median redshift to be $z \sim 1$ \citep{2004A&A...422..407G}. Then the critical density at the redshift of \mbox{ABCG$\,$209}{} ($z=0.21 $) becomes:
\begin{equation}
\label{eq:sigmacritval}
\begin{array}{lll}
\Sigma_{\rm crit} & \simeq & 1.62\times 10^{14}\,{\rm M}_\odot .{\rm arcmin}^{-2}\\
& \simeq & 3.91\times 10^{15}\,{\rm M}_\odot.{\rm Mpc}^{-2}
\end{array}
\end{equation}
for the basic $\Lambda$CDM cosmological model derived from the 3-year WMAP data \citep{2006astro.ph..3449S}:
\mbox{$\Omega_{M} = 0.27$}, \mbox{$\Omega_{\Lambda} = 0.73$},
\mbox{${\rm H}_0 = 70\,$km.sec$^{-1}$.Mpc$^{-1}$} and \mbox{$w=-1$}.\\
\subsection{Parametric model fitting}
\label{sec:modelfitting}
For both profiles we have performed a $\chi^{2}$ minimization of the
tangential shear:
\begin{equation}
\label{eq:chisqdef}
\chi^{2} = \sum_{i=1}^{N} w_{i}\left(\gamma^{T}_{i} - \gamma^{T}_{\rm model}(r_{i})\right)^{2}
\end{equation}
where $\gamma^T$ is the tangential shear (ie. the
shear projected along the direction orthogonal to the line connecting
the galaxy position to the cluster center), $w_{i}$ is the weight (as
defined in equation~\ref{eq:weight}) and $r_i$ is the distance to the
cluster center of mass. The latter is defined as the point for which the signal-to-noise of the $\gamma^T$ profile attains its maximum value:
\begin{equation}
\label{eq:masscenterdef}
\frac{S}{N}(\alpha ,\delta )= \sum_{i=1}^N w_i\,\gamma^T_i(\alpha ,\delta )
\end{equation}
By maximising eq.~\ref{eq:masscenterdef}, we find the center at \mbox{$\alpha=1$h$31$m$51.6s$} ; $\delta=-13^\circ 36^\prime$,
offset by about $36^{\prime\prime}$ from the cD galaxy, as shown in figure \ref{fig:galdist}.\\
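A minimal Python sketch of this grid search (ours; it uses the usual sign convention $\gamma^T=-(\gamma_1\cos 2\phi+\gamma_2\sin 2\phi)$ for the tangential shear):
\begin{verbatim}
import numpy as np

def tangential_shear(g1, g2, x, y, xc, yc):
    """Tangential shear component of galaxies at (x, y) with
    respect to a candidate centre (xc, yc)."""
    phi = np.arctan2(y - yc, x - xc)
    return -(g1 * np.cos(2.0 * phi) + g2 * np.sin(2.0 * phi))

def best_centre(g1, g2, x, y, w, grid):
    """Return the grid point maximising S/N = sum_i w_i gammaT_i."""
    scores = [(np.sum(w * tangential_shear(g1, g2, x, y, xc, yc)),
               (xc, yc)) for xc, yc in grid]
    return max(scores)[1]
\end{verbatim}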
\noindent
For the NFW fit we have either assumed that
each galaxy provides an estimate of the underlying shear field (so
the summation in equation~\ref{eq:chisqdef} extends over the full sample), or we
have binned the data and minimised over the averages in the bins. We
have verified that both procedures give compatible results.\\
\noindent
We exclude from the fits galaxies lying within an inner central
region of the cluster, where the weak lensing approximation ($\kappa <
1$) is not everywhere valid, and outside an outer region, where the
statistics are very weak.\\
\noindent
The actual fitted region lies within the following bounds: \mbox{$1\arcmin < \theta < 10\arcmin$} (\mbox{$0.2\,{\rm Mpc} < R < 2\,{\rm Mpc}$}) around the cluster center of mass.
\subsubsection{Fitting a NFW profile}
The NFW profile has often been used as a good fit of numerically
simulated halos \citep{1995MNRAS.275...56N,1997ApJ...490..493N}.
Although this fit was originally made only for simulated halos in
standard CDM models, it turned out to be a good fit for
halos which formed in $\Lambda$CDM models.
The mass density of the NFW profile is described by:
\begin{eqnarray}
\rho(r) & = & \frac{\delta_{\rm c} \rho_{\rm c}}{(r/r_{\rm
s})(1+r/r_{\rm s})^2} \\
\mbox{where:} \,\,\, \delta_{\rm c} & = & \frac{200}{3} \frac{c^3}{\ln(1+c)-c/(1+c)} \\
\mbox{and:} \,\,\,\rho_{\rm c} & = &
\frac{3 H^2(z)}{8 \pi G}\,.
\end{eqnarray}
In the equations above $r$ is the distance to the cluster center, $r_{\rm s}$ is the scale radius, $H(z)$ the Hubble parameter at the
redshift of the cluster and \mbox{$c = r_{200}/r_{\rm s}$} is the
concentration parameter, where $r_{200}$ is the virial radius.
The NFW density profile is
shallower than the SIS profile near the center but steeper in the outer parts. The total mass inside the radius $R$, shown on figure \ref{fig:fittedmass}, is:
\begin{equation}
\label{eq:mrnfw}
M(<R) = 4\pi\,\delta_c\,\rho_c\,r_s^3\,\left(\ln (1+R/r_s) -1 + \frac{1}{1+R/r_s} \right)
\end{equation}
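For orientation, equation \ref{eq:mrnfw} can be evaluated directly; in the Python sketch below the values of $G$ and of $H(z\!=\!0.21)$ are our own inputs for the adopted cosmology:
\begin{verbatim}
import math

G = 4.301e-9   # gravitational constant, Mpc (km/s)^2 / Msun

def m_nfw(R, rs, c, Hz):
    """NFW mass (Msun) inside radius R (Mpc), for scale radius
    rs (Mpc), concentration c and Hubble parameter Hz (km/s/Mpc)."""
    rho_c = 3.0 * Hz**2 / (8.0 * math.pi * G)     # Msun / Mpc^3
    delta_c = (200.0 / 3.0) * c**3 \
              / (math.log(1.0 + c) - c / (1.0 + c))
    u = R / rs
    return (4.0 * math.pi * delta_c * rho_c * rs**3
            * (math.log(1.0 + u) - u / (1.0 + u)))

# best-fit values of the text, with H(z=0.21) ~ 77 km/s/Mpc:
print(m_nfw(1.81, 0.50, 3.4, 77.0))  # ~7e14 Msun, in the ballpark
                                     # of the quoted M200
\end{verbatim}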
Exact expressions for the tangential shear due to this mass distribution are given by
\citet{2000ApJ...534...34W}. It has two
highly correlated degrees of freedom: $r_s$ and $c$. From a \mbox{non-linear}
\mbox{least-squares} fit (Levenberg-Marquardt) we obtain:
\begin{equation}
\label{eq:resultnfwfit}
\begin{array}{lll}
c & = & 3.4^{+3.1}_{-1.6}\\
r_{\rm s} & = & 0.50^{+0.60}_{-0.25}\,{\rm Mpc}\\
\end{array}
\end{equation}
This corresponds to virial radius $r_{200}$ and mass $M_{200}$:
\begin{equation}
\begin{array}{lll}
r_{200} & = & 1.81^{+0.30}_{-0.26}\,{\rm Mpc}\\
M_{200} & = & 7.7^{+4.3}_{-2.7}\,10^{14}\,{\rm M}_\odot
\end{array}
\end{equation}
\subsubsection{Fitting a SIS profile}
For a SIS model, the density and shear profiles are respectively given by:
\begin{eqnarray}
\rho(r) & = & \frac{\sigma^2}{2\pi G r^2} \\
\kappa(\theta) & = & \gamma^T(\theta) = \frac{4\pi\sigma_v^2}{c^2}\frac{D_{\rm ls}}{D_{\rm s}}\frac{1}{2 \theta}
\end{eqnarray}
where $\sigma_v$ is the velocity dispersion, the only free parameter
in this model. The total mass inside the radius $R$ (shown on figure \ref{fig:fittedmass}) is:
\begin{equation}
\label{eq:mrsis}
M(<R) = \frac{2}{G}\,\sigma_v^2\,R
\end{equation}
A linear least-squares fit provides:
\begin{equation}
\label{eq:resultsisfit}
\sigma_v = 924\pm 84\,{\rm km.s}^{-1}
\end{equation}
The previous analysis from \citet{2002ApJS..139..313D} gives a
significantly lower value (\mbox{$\sigma_v = 680_{-130}
^{+120}\,$km.s$^{-1}$}), but the authors adopt a less restrictive
criterion to eliminate cluster galaxies before the weak lensing
analysis, so their weak lensing signal, and consequently $\sigma_v$,
could be underestimated.
Although the total mass of a SIS model diverges, we can nevertheless show
the integrated mass inside a given radius $R$ (Figure
\ref{fig:fittedmass}). At the virial radius $R=r_{200}\sim
1.8\,$Mpc, as provided by the NFW profile fit (previous section), the
mass inside the radius is almost the same both for SIS and NFW profiles: \mbox{$M(<R)= M_{200}\sim 7.7\times 10^{14}M_\odot$}.\\
\noindent
In Figure \ref{fig:fittedmass} we show the total mass inside the
radius $R$, according to the two different fits we find. Note that the $\pm 1\sigma$ region of NFW is much larger than in the SIS case. This is a direct consequence of equations \ref{eq:mrnfw} and \ref{eq:mrsis}: in the SIS model $M(<R)$ is proportional to $\sigma_v^2$ and so the confidence region has a width of $\sim\pm 20\%$, while in the NFW model $M(<R)$ depends on $\delta_cr_s^3f(r_s,R)$. This makes the $\pm 1\sigma$ region to have a minimum $\sim\pm 50\%$ width for $R\sim 1.3\,$Mpc.
\begin{figure*}
\centering
\includegraphics[scale=0.7]{mass_sis_nfw-070126.eps}
\caption{
Total mass inside the radius $R$ for the two models SIS (in blue) and NFW (in red).
Dashed lines are the 66\% confidence level regions.
The black vertical lines show the virial radius $R_{200}=1.81^{+0.30}_{-0.26}\,{\rm Mpc}$ given by the NFW fit: straight line for the best fit and dashed lines for the 66\% confidence level region. The black cross (in the upper right corner) shows the measure of $M(<R_{vir})$ and $R_{vir}$ by \cite{2003A&A...397..431M} from internal dynamics, discussed in section \ref{sec:compargaldistrib}.
}
\label{fig:fittedmass}
\end{figure*}
\subsection{Mass aperture statistics}
\label{sec:massap}
The mass within an aperture radius $r$ can be obtained directly from the shear
using the {\em aperture densitometry} statistics $\zeta$, defined as
in \citet{1994ApJ...437...56F}:
\begin{eqnarray}
\label{eq:massap1}
\zeta (R_1, R_2) & = & \bar{\kappa} (R_1) - \bar{\kappa} (R_1, R_2)\\
& = & \frac{2}{1 - R_1^2 / R_2^2} \int^{R_2}_{R_1}\left\langle \gamma_T \right\rangle_{r}\,{\rm d} \ln r
\end{eqnarray}
The mass inside the annulus defined by $(R_1, R_2)$ is connected to the
above quantity by: $M_{\rm ap} = \pi r^2 \zeta \Sigma_{\rm
crit}$. This is in fact a lower limit on the true mass, but it depends only on the tangential shear. Thus, it is not
affected by residual B-modes
and gives a non-parametric representation of the matter density at a given point.
Following the approach of \citet{1996MNRAS.283..837S}, we build the
(2$\,$D) mass aperture statistics by computing the aperture densitometry at each point of a grid on the field. The mass density $M_{\rm ap}$ at a given point is given by:
\begin{equation}
\label{eq:massap2}
M_{\rm ap} = \frac{\Sigma_i\,\gamma_{Ti}\,w_i\,Q_i}{\Sigma_i\,w_i\,Q_i}
\end{equation}
where the sum extends over all the background galaxies and $Q$ is a
smoothing-weighting function. We use the window function $Q$ that maximises
the total signal-to-noise ratio (S/N) for a SIS \citep{1996MNRAS.283..837S}:
\begin{equation}
\label{eq:qdef}
\begin{array}{lclr}
Q(x) & = & 6\pi\,x^2\times(1 - x^2) & ; \; x<1\\
& = & 0 & ; \; x\ge 1
\end{array}
\end{equation}
where: $x=r/r_{\rm an}$, $r$ being the distance to the center of the
annulus, and \mbox{$r_{\rm an}=5\,$arcmin} (equivalent to \mbox{$1500\,$pixels}
or \mbox{$\sim 1\,$Mpc}) is the external radius of the annulus. The only
free parameter, ie. the filter scale $r_{\rm an}$, is chosen as a compromise
between different needs. On one side, $r_{\rm an}$ needs to be as small as
possible to avoid to wash out small scale structures. On the other
hand the annulus needs to be large enough to encompass a significant
number of background galaxies (typically $1000$) everywhere on the field, in order to get a good and spatially stable S/N.
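A minimal Python sketch of the map evaluation at a single grid point (ours; the overall normalisation of $Q$ cancels in the ratio of equation \ref{eq:massap2}):
\begin{verbatim}
import numpy as np

def Q_filter(x):
    """SIS-optimised aperture filter, with x = r / r_an."""
    return np.where(x < 1.0, 6.0 * np.pi * x**2 * (1.0 - x**2), 0.0)

def m_ap(xc, yc, x, y, g1, g2, w, r_an):
    """Weighted aperture-mass estimate at grid point (xc, yc)."""
    dx, dy = x - xc, y - yc
    r = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx)
    gT = -(g1 * np.cos(2.0 * phi) + g2 * np.sin(2.0 * phi))
    WQ = w * Q_filter(r / r_an)
    return np.sum(gT * WQ) / np.sum(WQ)
\end{verbatim}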
Figure \ref{fig:galdist} shows the mass aperture density map of the
\mbox{$\sim 100\,$arcmin$^2$} patch centered on \mbox{ABCG$\,$209} , together with the cluster center of mass
defined by equation \ref{eq:masscenterdef}. The mass
aperture statistics show the elongated structure of the mass
distribution. The mass distribution is far from the circular symmetry
as assumed by SIS and NFW profile fits. This explains why these fits give only a rough estimate of the total mass, as discussed in section \ref{sec:conclusions}.
\section{Comparisons with other observations}
\subsection{Comparison with the galaxy distribution}
\label{sec:compargaldistrib}
Figure~\ref{fig:galdist} shows the galaxy distribution (shown as the
grayscale-filled black contours) as
represented by the surface density of $R<23.0$ cluster galaxies
\citep[ie. after correcting for background/foreground galaxies, see][]{2004A&A...425..783H}. The galaxy
distribution is strongly elongated in the same SE-NW axis as observed
for the weak-lensing reconstructed projected mass distribution. The center of
the galaxy distribution appears somewhat offset to the NW by about
$1^\prime$ with respect
to the center of mass obtained from the weak lensing reconstruction.
There is also a substructure about $5^\prime$ to the North of the
cluster center, which could be connected with the observed
substructure in the reconstructed mass distribution that extends
northwards from the central mass concentration.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.85]{comp_gal.eps}
\caption{Comparison of the weak lensing mass reconstruction with the
galaxy distribution for \mbox{ABCG$\,$209} . The black contours represent the isodensity
contours from the M$_{ap}$ statistics, corresponding to mass
densities $5\times10^{13}{\rm M}_{\sun}$\,arcmin$^{-2}$
(thin dot-dashed curve) to $1\times10^{14}{\rm M}_{\sun}$\,arcmin$^{-2}$
(thick solid curve), with each contour separated by
$1\times10^{13}{\rm M}_{\sun}$\,arcmin$^{-2}$. The cluster center of mass as
defined by Equation 14 is indicated by the red diamond.
The grayscale filled
black contours represent the surface density of $R<23.0$ galaxies
(background-corrected), corresponding respectively to 0 (dashed
curve) 2 (solid), 3.3, 5,
7.5, 10.0 and 12.5 cluster \mbox{galaxies.arcmin$^{-2}$}.}
\label{fig:galdist}
\end{center}
\end{figure*}
The internal dynamics of the cluster was studied by \citet{2003A&A...397..431M} through a spectroscopic survey of 112 cluster members. A
high value of the line-of-sight velocity dispersion was found, with
\mbox{$\sigma_{\nu}=1250^{+84}_{-98}$\,km\,s$^{-1}$} after removing
seven interlopers. Assuming dynamical equilibrium, this value of $\sigma_\nu$ leads to a
virial radius of \mbox{$R_{vir}\sim 3.28\pm0.55\,h_{70}^{-1}\,$Mpc} and a virial mass of \mbox{$M(<R_{vir})=3.02^{+0.86}_{-0.89}\times 10^{15}\,h_{70}^{-1}\,{\rm M}_\odot$} in a $\Lambda$CDM model,
with $\Omega_{m}$=0.27 and $\Omega_{\Lambda}$=0.73. We report this value
in Figure \ref{fig:fittedmass}, for a direct comparison with the results of the NFW and SIS analyses showed in this paper. The
difference in the estimates of the virial mass and the virial radius
obtained by the kinematics and weak lensing could be due to the presence
of substructures which results in an overestimate of the velocity
dispersion. Another source of uncertainties in the kinematics could be the
anisotropy parameter of the cluster velocity distribution. On the other
hand, the lensing values could be biased because of the elongation of the
cluster mass distribution (see section \ref{sec:conclusions}).
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{comp_xmm.eps}}
\caption{Comparison of the weak lensing mass reconstruction with the
X-ray emission for \mbox{ABCG$\,$209} . The red contours represent the isodensity
contours from the M$_{ap}$ statistics, as in Fig.~\ref{fig:galdist}. The black
grayscale-filled contours represent the X-ray emission based on the
XMM imaging. The contours are logarithmically spaced, adjacent
contours indicating a factor two change in flux density.}
\label{fig:xmm}
\end{figure*}
Evidence in favour of the cluster undergoing a dynamical evolution is
found in the form of a velocity gradient acting along a SE-NW axis,
which is the same preferential direction found from the elongation in
the spatial distribution of galaxies, as well as that
of the cD galaxy. There is also significant deviation of the velocity
distribution from a Gaussian, with evidence for two secondary clumps
at \mbox{$z=0.199$} and \mbox{$z=0.215$}, which appear spatially
segregated from the main cluster. These all indicate that \mbox{ABCG$\,$209}{} is
undergoing strong dynamic evolution with the merging of two or more
sub-clumps along the SE-NW direction.
\subsection{Comparison to X-ray emission}
The X-ray data are taken from the XMM science archive (Prop \#8423,
PI. J.-P. Kneib, see \citet{2003SPIE.4851..208M} for an analysis). The observations were made in Jan 2001, with an
exposure time of 20\,ksec. The EPIC MOS1, MOS2 and pn images were
combined over the temperature range \mbox{$0.5 - 12\,$keV} and the resulting spatial
distribution of the X-ray emission is shown in Fig.~\ref{fig:xmm} by the
grayscale-filled contours. The X-ray emission is centered on the cD
galaxy (\mbox{$\alpha=$1h31m52.5s}, \mbox{$\delta=-13^\circ 36^\prime 40^{\prime\prime}$}, \mbox{$z=0.2097$}), making
it slightly offset \mbox{($36^{\prime\prime}$)} from the center of mass determined from the weak
lensing by equation \ref{eq:masscenterdef}. The X-ray emission is
elongated along the same SE-NW direction as seen for the weak lensing reconstructed
mass distribution, the emission being most extended
towards the NW. There is no evidence of excess X-ray emission from the
substructure seen in the weak-lensing reconstruction \mbox{$\sim3^\prime$}
to the North of the cluster center.\\
\noindent
From an analysis of a 10\,ksec Chandra ACIS-I (0.3--10\,keV) X-ray observation of the
cluster, \citet{2003A&A...397..431M} obtained a best-fitting temperature of
\mbox{T$_{X}=10.2_{-1.2}^{+1.4}$\,keV} which, assuming $\beta_{spec}=1$, would
correspond to \mbox{$\sigma_v\sim1300\,$km.s$^{-1}$}, and is consistent with the high value of
$L_X(0.1-2.4\,$keV)$ \sim 14\;h_{50}^{-2}\;10^{44}\,$erg.$s^{-1}$ \citep{1996MNRAS.281..799E}. This value of the
velocity dispersion produces a virial mass estimate \mbox{of $3.3\times 10^{15}\,{\rm M}_\odot$}.\\
\noindent The mass estimates obtained through our weak lensing analysis are lower than those based on the X-ray temperature and
galaxy velocity dispersions. In a weak lensing analysis of 35 X-ray
luminous clusters at \mbox{$0.15<z<0.30$}, \citet{2006ApJ...653..954D} finds a large
scatter in the relation between the weak lensing mass estimates and
the X-ray luminosity, producing a mass uncertainty of \mbox{$\sigma_{M}=0.44$\,dex}. In a weak lensing study of 24 X-ray luminous
clusters at \mbox{$0.05<z<0.31$}, \citet{2004ApJ...613...95C} found that on average the mass estimates
based on X-ray temperatures and velocity dispersions were 13--27\%
higher than those from the weak lensing analysis. In particular they
found the discrepancy to be much greater for the most massive clusters
(\mbox{$T_{X}>8$\,keV} or \mbox{$\sigma_{\nu}>1122$\,km.s$^{-1}$}), where the mass
excesses from the X-ray temperatures or velocity dispersions were
40--75\%. They found that the discrepancy was largest for the two
clusters with the largest X-ray temperatures \mbox{($T_{X}\sim13$\,keV)} and
velocity dispersions, which were known to be undergoing a merging
event, and are far from equilibrium. The high X-ray temperatures would
then be probably due to recent shocks, and the high velocity
dispersions due to substructures and the complex dynamical situation.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{comp_b-r.eps}}
\caption{Comparison of the weak lensing mass reconstruction with the
mean colours of galaxies across \mbox{ABCG$\,$209} . The red contours represent the isodensity
contours from the M$_{ap}$ statistics, as in Fig.~\ref{fig:galdist}.
The coloured contours indicate the mean $B-R$ colour of the $R<21$ cluster
galaxy population (after statistically correcting for field
contamination) as a function of spatial position.}
\label{colours}
\end{figure*}
\subsection{Comparison with Galaxy Colours}
The star-formation history of galaxies is known to correlate strongly
with their local environment. In Figure~\ref{colours} we compare the
mass distribution with the mean \mbox{$B-R$} colour of the \mbox{$R<21$} cluster
galaxy population as a function of spatial position. Each galaxy is
weighted according to the probability that it belongs to the cluster,
and then the mean colour of cluster galaxies calculated as a function
of spatial position using an adaptive kernel method. This analysis is
described in detail in \citet{2004A&A...425..783H}: the mean galaxy colours (and hence their star-formation
histories) are strongly correlated with the dynamical state of the
cluster, with an alignment of the colours with the main SE-NW
axis. The region with the reddest mean galaxy colours, and hence
the oldest stellar populations, is found at the cluster center of
mass. There are also two regions of red galaxies outside the cluster core that are aligned with the dark
matter distribution, confirming that galaxy evolution is strongly
dependent on the hierarchical build up of clusters through mergers.
Given the uncertain effects cluster dynamics have on the X-ray
emission and galaxy velocity distributions, maps of the mass
distribution based on weak lensing analyses provide an important tool
for understanding the relation between galaxy evolution and the
underlying dark matter distribution \citep{2004MNRAS.347L..73G}.
\section{Conclusions}
\label{sec:conclusions}
We have performed a weak lensing analysis of the galaxy cluster
\mbox{ABCG$\,$209}{} through a new implementation of the KSB+ algorithm (the OACt
pipeline), and we have also performed a mass reconstruction using Mass
Aperture and parametric statistics. We clearly find a measurable weak lensing signal, and the comparison with optical and X-ray data for this
cluster brings some interesting conclusions.
\noindent First, the centers of the X-ray emission, dark matter, and galaxy
distribution all appear offset from one another, with the center of
mass found from the weak lensing analysis lying between that of the
X-ray and galaxy
distributions, and all the three centers of mass aligned on the main SE-NW
axis of the cluster. Such an effect is seen for the more extreme
Bullet cluster \citep{2004ApJ...604..596C}, and appears to reflect the
different responses of the gas and dark matter components to the merger,
bringing a further hint of a cluster merging scenario for \mbox{ABCG$\,$209} .
\noindent Second, we confirm that \mbox{ABCG$\,$209}{} is a massive cluster,
although the mass estimated by weak lensing is lower than the
estimates obtained by \citet{2003A&A...397..431M} from the analysis of
the dynamical properties of the galactic population (assuming the
dynamical equilibrium). On the weak lensing side, there are two
sources of error not taken into account in this
analysis, that could explain this discrepancy. First, the 2D-mass
distribution of the cluster is not circular and this fit of a
circular profile is possibly not accurate. Second, we have not taken
into account the uncertainty on the critical surface density
(equations \ref{eq:sigmacritdef} and \ref{eq:sigmacritval}), which has
been computed according to the single-source plane approximation at
$z=1$. Considering these uncertainties, the agreement we find among the
different mass estimates should be regarded as satisfactory.
\section*{Acknowledgments}
The work of S. Paulin-Henriksson, V. Antonuccio-Delogu and U. Becciani
has been partially supported through the EC {\em Transfer of
Knowledge} Marie Curie grant
MTKD-CT-002995, project: {\em COSMOCT}, EC-VI Framework Programme for
R\&D.
C. P. Haines acknowledges the financial supports provided
through the European Community's Human Potential Program, under
contract HPRN-CT-2002-0031 SISCO. This work is partially supported by
the Italian Ministry of Education, University and Research (MIUR)
grant COFIN200420323 and by the INAF grant ``PRIN 2005''.
S. Paulin and V. Antonuccio-Delogu would like to thank N. Kaiser
for the online version of IMCAT, A. R\'efr\'egier and D. Clowe for useful suggestions.
\bibliographystyle{aa}
|
2,869,038,156,890 | arxiv | \section{Introduction}
\label{sec:introduction}
Supersymmetry (SUSY) \cite{1} provides a solution to the gauge hierarchy
problem if it breaks dynamically \cite{2}. However, the general
supersymmetric extension of the Standard Model (SM) suffers from the
flavor changing neutral current (FCNC) problem \cite{3}. The SUSY FCNC
should be suppressed by certain specific mass patterns of the sleptons
and squarks. The sfermion mass pattern depends on the underlying
physics of SUSY breaking. For example, one popular choice for the
sfermion mass matrix is universality in flavor space. Such a
mass matrix can result from the gauge mediated SUSY breaking (GMSB)
scenario \cite{4,5}. Here gauge means the SM gauge interactions. Another
inspiring choice is that the first two generation sfermions are very heavy
around $(10-100)$ TeV, while the third generation sfermions are at the weak
scale \cite{6}. This kind of model is often referred to as effective SUSY.
It can be realized in the GMSB scenario \cite{7}, or in that where an
anomalous U(1) mediates SUSY breaking \cite{8}. One point in this case
is that the first two generations and the third generation are treated
differently. For example, they may be in different representations of some
new gauge interactions mediating SUSY breaking.
In this work, we consider a sfermion mass pattern which looks opposite to
that of the effective SUSY. It is that the first two generations with the
same chirality are degenerate with masses around the weak scale, and the
third generation is super heavy. The SUSY FCNC is also suppressed in this
case \cite{3}.\footnote{There are other viable alternatives. For example,
only the squarks satisfy this mass pattern, while the slepton mass
matrices follow universality.} In fact, this pattern is not new. It
could be understood by a U(2) symmetry between the first two generations
in the supergravity scenario \cite{9}. In this paper an alternative
origin of it will be discussed. We note that the above mass pattern can
also result from a supersymmetric topcolor model with GMSB. Here
gauge (G) means the gauge interactions of the topcolor model.
Topcolor models \cite{10} were proposed for a dynamical understanding of
the third generation quark masses. The basic idea is that the third
generation (at least the top quark) undergoes a super strong interaction
which results in a top quark condensation. The condensation gives the top
quark a large mass, and the bottom quark mainly gets its mass due to the
instanton effect of the topcolor interactions. This top quark
condensation contributes only a small part of the electroweak symmetry
breaking (EWSB). The Higgs mechanism may be introduced for the EWSB.
Therefore, the idea of the topcolor can be generalized into SUSY models
naturally. In the GMSB scenario, if the strong topcolor gauge
interaction involves the full third generation, and the first two
generations participate universally in a weaker gauge interaction, the
above-described sfermion mass pattern will be generated.
Note that the decoupling of the third generation scalars is a consistent
choice in the supersymmetric topcolor models. Because the third
generation quarks obtain dynamical masses, the Yukawa couplings are always
small.
The whole physical picture is as follows. At the energy scale of about
$10^6$ GeV, SUSY breaking occurs in a secluded sector. It is mediated to
the observable sector through the gauge interactions. The scale of the
messengers is around $10^7$ GeV. The topcolor scale is around $(1-10)$
TeV. Below this scale, the gauge symmetries break into that of the SM.
The resultant sparticle spectrum of the observable sector is the following.
Besides the squarks, the gauginos of the super strong interaction are around
$100$ TeV. The gauginos of the weaker gauge interaction are at about the
weak scale. The Higgs bosons for the topcolor symmetry breaking are as
heavy as $100$ TeV, and the Higgs bosons for the EWSB at the weak scale.
By integrating out the heavy fields above $1$ TeV or so, the
effective theory is the ordinary (two-Higgs-doublets) topcolor model with
the weak scale gauginos, Higgsinos and the first two generation squarks
with degeneracy.
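For orientation, we summarize the hierarchy of scales quoted above (our
tabulation of the numbers already given in the text):
\begin{center}
\begin{tabular}{ll}
SUSY breaking scale & $\sim 10^6$ GeV \\
messenger scale & $\sim 10^7$ GeV \\
gauginos of the super strong interaction, topcolor Higgs bosons, & \\
\quad third generation squarks & $\sim 100$ TeV \\
topcolor scale & $(1-10)$ TeV \\
gauginos of the weaker interactions, first two generation squarks, & \\
\quad EWSB Higgs bosons & weak scale
\end{tabular}
\end{center}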
This paper is organized as follows. The topcolor model is briefly
reviewed in the next section. The supersymmetric extension of the topcolor
model within the framework of the GMSB is described in Sec. III. Summary
and discussions are presented in Sec. IV.
\section{Brief Review of the Topcolor Model}
\label{sec:review}
In this paper, we consider the topcolor model which, at the scale about
$(1-10)$ TeV, has interactions \cite{10}
SU(3)$_1\times$SU(3)$_2\times$U(1)$_{Y_1}\times$U(1)$_{Y_2}\times$SU(2)$_L$.
The fermions are assigned
(SU(3)$_1$, SU(3)$_2$, U(1)$_{Y_1}$, U(1)$_{Y_2}$) quantum numbers as
follows,
\begin{eqnarray}
\label{1}
(t, b)_L&\sim& (3\;, 1\;, \frac{1}{3} \;, 0) \; ,~~~
(t, b)_R\sim (3\;, 1\;, (\frac{4}{3} \;, -\frac{2}{3}) \;, 0)\;,
\nonumber\\
(\nu_{\tau}\;, \tau)_L&\sim& (1\;, 1\;, -1\;, 0) \; ,~~~
\tau_R\sim (1\;, 1\;, -2\;, 0)\;,\nonumber\\
(u\;, d)_L\;, (c\;, s)_L&\sim& (1\;, 3\;, 0\;, \frac{1}{3})\;,~~~
(u\;, d)_R\;, (c\;, s)_R\sim (1\;, 3\;, 0\;, (\frac{4}{3}\;,
-\frac{2}{3}))\;, \nonumber\\
(\nu\;, l)_L(l=e, \mu)&\sim& (1\;, 1\;, 0\;, -1) \; ,~~~
l_R\sim (1\;, 1\;, 0\;, -2)\; .
\end{eqnarray}
The topcolor symmetry breaks spontaneously, SU(3)$_1\times$SU(3)$_2\to$
SU(3)$_{\rm QCD}$ and U(1)$_{Y_1}\times$U(1)$_{Y_2}\to$U(1)$_Y$, through a
scalar field $\phi(3\;, \bar{3}\;, \frac{1}{3}\;, -\frac{1}{3})$ which
develops a vacuum expectation value (VEV). The SU(3)$_1\times$U(1)$_{Y_1}$
interactions are assumed to be strong, driving the formation of a top quark
condensate but disallowing a bottom quark condensate. The bottom quark
mainly gets its mass due to the SU(3)$_1$ instanton effect. The $\tau$
lepton does not condense.
\section{Supersymmetric Topcolor Model}
\label{sec:susyt}
In the supersymmetric extension, the gauge symmetries of the above topcolor
model remain unchanged. The particle content is given below. In addition
to the superpartners of the particles described in the last section, some
elementary Higgs superfields are introduced. The breaking of the topcolor
symmetry needs one pair of the Higgs superfields $\Phi_1$ and $\Phi_2$.
And the EWSB requires another pair of the Higgs superfields $H_u$ and
$H_d$, like in the ordinary supersymmetric SM. Their quantum numbers under
the
SU(3)$_1\times$SU(3)$_2\times$U(1)$_{Y_1}\times$U(1)$_{Y_2}\times$SU(2)$_L$
are
\begin{eqnarray}
\label{2}
\Phi_1(3\;, \bar{3}\;, \frac{1}{3}\;, -\frac{1}{3}\;, 0)\;, &~~~&
\Phi_2(\bar{3}\;, 3\;, -\frac{1}{3}\;, \frac{1}{3}\;, 0)\;; \nonumber\\
H_u(1\;, 1\;, 0\;, 1\;, 2)\;, &~~~& H_d(1\;, 1\;, 0\;, -1\;, 2)\;.
\end{eqnarray}
The messenger sector is introduced as
\begin{eqnarray}
\label{3}
S_1\;, S'_1 &=& (1\;, 1\;, 1\;, 0\;, 2)\;, ~~~
\bar{S_1}\;, \bar{S'_1} = (1\;, 1\;, -1\;, 0\;, 2)\;, \nonumber\\
T_1\;, T'_1 &=& (3\;, 1\;, -\frac{2}{3}\;, 0\;, 1)\;, ~~~
\bar{T_1}\;, \bar{T'_1} = (\bar{3}\;, 1\;, \frac{2}{3}\;, 0\;, 1)\;,
\end{eqnarray}
and
\begin{eqnarray}
\label{4}
S_2\;, S'_2 &=& (1\;, 1\;, 0\;, 1\;, 2)\;, ~~~
\bar{S_2}\;, \bar{S'_2} = (1\;, 1\;, 0\;, -1\;, 2)\;, \nonumber\\
T_2\;, T'_2 &=& (1\;, 3\;, 0\;, -\frac{2}{3}\;, 1)\;, ~~~
\bar{T_2}\;, \bar{T'_2} = (1\;, \bar{3}\;, 0\;, \frac{2}{3}\;, 1)\;.
\end{eqnarray}
Compared to Ref. \cite{4}, we have introduced an extra set of messengers
so as to mediate the SUSY breaking to both SU(3)$_1\times$U(1)$_{Y_1}$ and
SU(3)$_2\times$U(1)$_{Y_2}$. Furthermore, there are three gauge-singlet
superfields, $X$, $Y$ and $Z$. $Y$ is responsible for the SUSY breaking,
$X$ is related to the EWSB, and $Z$ to the topcolor symmetry breaking.
The superpotential is written as follows,
\begin{eqnarray}
\label{5}
{\cal W}&=&m_1(\bar{S'_1}S_1+S'_1\bar{S_1})+m_2(\bar{T'_1}T_1
+T'_1\bar{T_1})+m_3S_1\bar{S_1}+m_4T_1\bar{T_1} \nonumber\\
&&m_1'(\bar{S'_2}S_2+S'_2\bar{S_2})+m_2'(\bar{T'_2}T_2
+T'_2\bar{T_2})+m_3'S_2\bar{S_2}+m_4'T_2\bar{T_2} \nonumber\\
&&+Y(\lambda_1S_1\bar{S_1}+\lambda_2T_1\bar{T_1}
+\lambda_1'S_2\bar{S_2}+\lambda_2'T_2\bar{T_2}-\mu_1^2) \nonumber\\
&&+\lambda_3X(H_uH_d-\mu_2^2)+\lambda_4Z[{\rm Tr}(\Phi_1\Phi_2)-\mu_3^2]
\; ,
\end{eqnarray}
where the Yukawa interactions are not written. It is required that
$m_3^{(\prime)}/m_4^{(\prime)}\not=\lambda_1^{(\prime)}/
\lambda_2^{(\prime)}$ so that the terms proportional to $m_3^{(\prime)}$
and $m_4^{(\prime)}$ cannot be eliminated by a shift in $Y$.
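To see this explicitly (a short check of the statement above): under a
shift $Y\to Y+c$ the superpotential couplings transform as
\begin{equation*}
m_3\to m_3+c\,\lambda_1\,,\qquad m_4\to m_4+c\,\lambda_2\,,
\end{equation*}
so a single constant $c$ can cancel both terms only if
$m_3/m_4=\lambda_1/\lambda_2$, and similarly for the primed quantities.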
The model conserves the number of $S_i$-type and $T_i$-type ($i=1\;, 2$)
fields. In addition, the superpotential has a discrete symmetry of
$(\bar{S_i}^{(\prime)}\;, \bar{T_i}^{(\prime)})
\leftrightarrow(S_i^{(\prime)}\;, T_i^{(\prime)})$.
The way of introducing the singlet fields $X$, $Y$ and $Z$ more naturally
was discussed in Ref. \cite{11}, where such fields are taken to be
composite. Moreover, the Fayet-Iliopoulos D-terms for the U(1) charges
have been omitted. This is natural in the GMSB scenario.
The above discrete symmetry and the exchange symmetry of $\Phi_1$ and
$\Phi_2$ in the superpotential avoid such D-terms at the one-loop order.
The SUSY breaking is characterized by the term $\mu_1^2Y$ in Eq. (\ref{5}).
It is communicated to the observable sector through the gauge interactions
by the messengers. The SU(3)$_2\times$U(1)$_{Y_2}\times$SU(2)$_L$ are
weak enough to be described in perturbation theory. Their gauginos acquire
masses in the one-loop order \cite{4,12},
\begin{eqnarray}
\label{6}
M_{\lambda_{\rm SU(3)_2}}&=&\frac{\alpha_3'}{4\pi}{\cal M}_T \; ,
\nonumber\\[3mm]
M_{\lambda_{\rm U(1)_{Y_2}}}&=&\frac{\alpha_1'}{4\pi}
({\cal M}_S+\frac{2}{3}{\cal M}_T) \; , \nonumber\\[3mm]
M_{\tilde{W}}&=&\frac{\alpha_2}{4\pi}{\cal M}_S \; ,
\end{eqnarray}
where $\alpha_i^{(\prime)}=g_i^{(\prime)2}/4\pi$ with $g_3'$, $g_1'$ and $g_2$
being the gauge coupling constants of the
SU(3)$_2\times$U(1)$_{Y_2}\times$SU(2)$_L$. For simplicity, we take
$m_1^2\sim m_2^2\sim m_1^{\prime 2}\sim m_2^{\prime 2}\gg
\lambda_{1, 2}^{(\prime)}\mu_1^2\gg m_3^{(\prime)2}\sim m_4^{(\prime)2}$ and
$\lambda_1\sim\lambda_2\sim\lambda_1'\sim\lambda_2'\sim O(1)$. In this case,
the ${\cal M}_S$ and ${\cal M}_T$ are approximately $\lambda_1\mu_1^2/m_1$.
They are about $100$ TeV by choosing, say
$\mu_1\sim 10^6$ GeV and $m_1\sim 10\mu_1$. For the
SU(3)$_1\times$U(1)$_{Y_1}$, however, the gaugino masses cannot be
calculated by the perturbation method, because the interactions are too
strong. Nevertheless they should be at the order of
$\lambda_1\mu_1^2/m_1$ given the above parameter choice,
\begin{equation}
\label{7}
M_{\lambda_{\rm SU(3)_1}}\sim M_{\lambda_{\rm U(1)_{Y_1}}}\sim 100
~~{\rm TeV} \; .
\end{equation}
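As a rough numerical check (with illustrative numbers of our own, taking
for instance $\alpha_3'\sim 0.1$, a value not specified in the text),
Eq. (\ref{6}) indeed puts the gauginos of the weaker interactions at the
weak scale,
\begin{equation*}
M_{\lambda_{\rm SU(3)_2}}\sim\frac{\alpha_3'}{4\pi}\,{\cal M}_T
\sim\frac{0.1}{4\pi}\times 100~{\rm TeV}\sim 1~{\rm TeV}\,,
\end{equation*}
consistent with the spectrum described in the Introduction.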
Similarly, the first two generation scalar quarks and the electroweak
Higgs particles obtain their masses in the two-loop order,
\begin{eqnarray}
\label{8}
m_{\tilde{Q}_1}^2&=&m_{\tilde{Q}_2}^2\simeq
\frac{4}{3}\left(\frac{\alpha_3'}{4\pi}\right)^2\Lambda_T^2
+\frac{3}{4}\left(\frac{\alpha_2}{4\pi}\right)^2\Lambda_S^2
+\frac{1}{4}\left(\frac{\alpha_1'}{4\pi}\right)^2(\Lambda_S^2
+\frac{2}{3}\Lambda_T^2) \; , \nonumber \\
m_{\tilde{c}_R}^2&=&m_{\tilde{u}_R}^2\simeq
\frac{4}{3}\left(\frac{\alpha_3'}{4\pi}\right)^2\Lambda_T^2
+\frac{4}{9}\left(\frac{\alpha_1'}{4\pi}\right)^2(\Lambda_S^2
+\frac{2}{3}\Lambda_T^2) \; , \nonumber \\
m_{\tilde{s}_R}^2&=&m_{\tilde{d}_R}^2\simeq
\frac{4}{3}\left(\frac{\alpha_3'}{4\pi}\right)^2\Lambda_T^2
+\frac{1}{9}\left(\frac{\alpha_1'}{4\pi}\right)^2(\Lambda_S^2
+\frac{2}{3}\Lambda_T^2) \; , \nonumber \\
m_{h_u}^2&=&m_{h_d}^2\simeq
\frac{3}{4}\left(\frac{\alpha_2}{4\pi}\right)^2\Lambda_S^2
+\frac{1}{4}\left(\frac{\alpha_1'}{4\pi}\right)^2(\Lambda_S^2
+\frac{2}{3}\Lambda_T^2) \; ,
\end{eqnarray}
where $Q_1$ and $Q_2$ stand for the superfields of $(u\;, d)_L$ and
$(c\;, s)_L$ respectively, and $(h_u\;, h_d)$ are the scalar
components of $(H_u\;, H_d)$. $\Lambda_S^2$ and $\Lambda_T^2$ were
calculated to be \cite{4}
\begin{equation}
\label{9}
\Lambda_S^2=\frac{4\lambda_1'^2\mu_1^4}{m_1'^2} \; , ~~~
\Lambda_T^2=\frac{4\lambda_2'^2\mu_1^4}{m_2'^2} \; .
\end{equation}
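With the same illustrative inputs as above, the dominant (color)
contribution in Eq. (\ref{8}) gives first and second generation squark
masses at the weak scale,
\begin{equation*}
m_{\tilde{Q}_{1,2}}\simeq\sqrt{\frac{4}{3}}\,\frac{\alpha_3'}{4\pi}\,\Lambda_T
\sim 1~{\rm TeV}\qquad{\rm for}\qquad \Lambda_T\sim 100~{\rm TeV}\,.
\end{equation*}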
For the third generation squarks and the topcolor Higgs fields $\phi_1$ and
$\phi_2$, the squared masses are
around $\Lambda_S^2$ or $\Lambda_T^2$,
\begin{equation}
\label{10}
m_{\tilde{Q}_3}^2\sim m_{\tilde{t}_R}^2\sim m_{\tilde{b}_R}^2\sim
m_{\phi_1}^2=m_{\phi_2}^2\sim \Lambda_S^2 \; ,\Lambda_T^2\sim
(100 ~~{\rm TeV})^2 \; .
\end{equation}
We have seen that for the super strong topcolor interactions, the
relevant supersymmetric particles are super heavy $\sim 100$ TeV so that
they decouple at the topcolor scale. The topcolor physics does not
change even after the supersymmetric extension. However, the topcolor
Higgs fields seem to be too heavy.
Let us consider the breaking of the gauge symmetries. The
SU(3)$_1\times$SU(3)$_2\times$U(1)$_{Y_1}\times$U(1)$_{Y_2}$ break into
the diagonal subgroups SU(3)$_{\rm QCD}\times$U(1)$_Y$ when the Higgs
fields $\phi_1$ and $\phi_2$ get non-vanishing VEVs,
\begin{equation}
\label{11}
\langle\phi_1\rangle=v_1\left(\begin{array}{ccc}
1 &0 &0 \\
0 &1 &0\\
0 &0 &1
\end{array}
\right)
~~~{\rm and}~~~
\langle\phi_2\rangle=v_2\left(\begin{array}{ccc}
1 &0 &0 \\
0 &1 &0\\
0 &0 &1
\end{array}
\right) \; .
\end{equation}
$v_1$ and $v_2$ are determined by the minimum of the following potential,
\begin{equation}
\label{12}
V_{topc}=|\lambda_4(3v_1v_2-\mu_3^2)|^2+\frac{g_1^2+g_1^{\prime 2}}{2}
(v_1^2-v_2^2)^2+m_{\phi_1}^2v_1^2+m_{\phi_2}^2v_2^2 \; ,
\end{equation}
where $g_1$ is the coupling constant of the U(1)$_{Y_1}$. It is easy to
see that in the case of $\lambda_4\mu_3^2\geq m_{\phi_i}^2$,
\begin{equation}
\label{13}
v_1=v_2=\frac{1}{\sqrt{3}}\left(\mu_3^2-\frac{m_{\phi_i}^2}{\lambda_4}
\right)^{1/2} \; .
\end{equation}
To keep $v_1$ and $v_2$ around a few TeV, a certain fine-tuning of
the scale $\mu_3$ is required in this model to cancel the $100$ TeV
$m_{\phi_i}$, the coupling $\lambda_4$ being $O(1)$. The value of
$\mu_3$ is more natural if the topcolor
scale is higher, such as $10$ TeV.
However, it should be noted that raising topcolor scale makes the
effective topcolor theory more tuned.
At energies below the topcolor scale, the model is described by an
effective theory in which the gauge symmetry groups are that of the SM,
and there are two Higgs doublets and three generation quarks with
four-fermion topcolor interaction for the third generation. In addition,
there are weak scale gauginos of the SM, squarks of the first and second
generations and doublet Higgsinos which become massive after the EWSB.
There are also topcolor Higgsinos of $\Phi_1$ and $\Phi_2$ after the
topcolor symmetry breaking. They typically have $(1-10)$ TeV mass and
are not expected to be important to the low energy physics. Because of
the degeneracy between the first two generation squarks and the
decoupling of the third generation squarks, this model is free from the
SUSY FCNC problem.
The physics of the topcolor four-fermion interaction and the EWSB in
this model is essentially the same as that without SUSY, which will not be
discussed further.
\section{Summary and Discussion}
\label{sec:summary}
It has been known that the SUSY FCNC problem can be avoided if the squarks
take the mass pattern that the first two generations with the same
chirality are degenerate and the third generation is super heavy. We have
constructed a supersymmetric topcolor model within GMSB to realize this
mass pattern. The pattern is stable under corrections from the Yukawa
interactions because they are weak and the third generation quarks obtain
masses dynamically.
This model therefore has the phenomenology of both SUSY and topcolor.
It predicts weak scale SUSY particles, like the SM gauginos, Higgsinos.
It also predicts top pions. These predictions can be tested directly in
the experiments in the near future.
The indirect evidence for this model in low energy processes, such as in
B decays \cite{13}, and its $R_b$ problem \cite{14}, are more
complicated because of the involvement of
both SUSY and topcolor, and deserve a separate study.
It should be stressed that this model has an inherent tuning problem.
This required tuning follows from the very large masses ($\sim 100$ TeV)
of the third generation scalars and the topcolor Higgs. These fields are
closely related to the topcolor and the EWSB scales which are, however,
lower than 100 TeV. We have explicitly mentioned the tuning below
Eq. (\ref{13}). Another aspect of this tuning is that the naturalness of
the EWSB requires the third generation scalars to be lighter than 20 TeV
\cite{6}. Note that the large mass of 100 TeV is just a rough estimate,
due to the lack of a nonperturbative calculation method. On the other
hand, if we adjust the SUSY breaking scale and the messenger scale to be
somewhat lower than what we have chosen, this problem can be less severe.
We emphasize that it is the degeneracy of the first two generations, rather
than the heaviness of the third generation, that plays the essential role
in solving the SUSY FCNC problem. In this sense, the consideration of this
paper is less nontrivial than the idea of effective SUSY. However, if we
further consider the underlying theory, the models which realize effective
SUSY \cite{7,8} and the SUSY topcolor model of this paper are on an equal
footing.
A comment should be made on the necessity of the supersymmetric
topcolor. Although SUSY does not necessarily need the help from topcolor,
their combination has certain advantages. As is well-known, SUSY keeps
the weak scale, but cannot explain it. The weak scale may have a
dynamical origin \cite{15,16,11,17}. In this case, it is natural to
expect that the physics which explains the fermion masses is also at some
low energy. Topcolor provides such physics for the hierarchy between the
third generation and the first two generations. On the other hand, SUSY
maybe helpful to understand the hierarchy between the first and the second
generation further. For instance, it is possible that the second
generation quarks mainly get their masses from the electroweak Higgs VEVs,
and the first generation quarks purely from the sneutrino VEVs \cite{18}.
Finally, it should be noted that the very heavy third generation squarks
may pull up the masses of the light scalars. This pull-up occurs through
two- or more-loop diagrams with topcolor Higgs exchanges. The heavy topcolor
Higgs fields suppress this quantum correction. The suppression, however, may
not be enough to keep the results of Eq. (\ref{8}) from changing
significantly. The fine-tuning problem which was discussed above re-appears
here. In fact, the drawback of the SUSY and topcolor combination is that
the SUSY breaking scale and the topcolor scale are irrelevant. It might be
hopeful to think of certain dynamics to make relation between them. For
example, it is possible that the topcolor Higgs superfields are also the
SUSY breaking messengers. This possibility will simplify the model and
reduce the fine-tuning.
It is reasonable to say that supersymmetric topcolor is an interesting
scenario which is worth studying further.
\acknowledgments
The author would like to thank many Korean colleagues for various helpful
discussions.
\section{Preliminaries}
The Riemann curvature operator of a Riemannian manifold $(M^n,g)$ is defined,
as in~\cite{gahula}, by
$$
\mathrm{Riem}(X,Y)Z=\nabla_{Y}\nabla_{X}Z-\nabla_{X}\nabla_{Y}Z+\nabla_{[X,Y]}Z\,.
$$
In a local coordinate system the components of the $(3,1)$--Riemann
curvature tensor are given by
$\RRR^{l}_{ijk}\frac{\partial}{\partial
x^{l}}=\mathrm{Riem}\big(\frac{\partial}{\partial
x^{i}},\frac{\partial}{\partial
x^{j}}\big)\frac{\partial}{\partial x^{k}}$ and we denote by
$\RRR_{ijkl}=g_{lm}\RRR^{m}_{ijk}$ its $(4,0)$--version.
With the previous choice, for the sphere $\SS^n$ we have
${\mathrm{Riem}}(v,w,v,w)=\RRR_{ijkl}v^iw^jv^kw^l>0$.
\medskip
{\em Throughout the paper the Einstein convention of summing
over repeated indices will be adopted.}
\medskip
The Ricci tensor is obtained by the contraction
$\RRR_{ik}=g^{jl}\RRR_{ijkl}$ and $\RRR=g^{ik}\RRR_{ik}$ will
denote the scalar curvature.
We recall the interchange of derivative formula,
$$
\nabla^2_{ij}\omega_k-\nabla^2_{ji}\omega_k=\RRR_{ijkp}g^{pq}\omega_q\,,
$$
and Schur lemma, which follows by the second Bianchi identity,
$$
2g^{pq}\nabla_p\RRR_{qi}=\nabla_i\RRR\,.
$$
They both will be used extensively in the computations that follows.
\medskip
The so called Weyl tensor is then
defined by the following decomposition formula (see~\cite[Chapter~3,
Section~K]{gahula}) in dimension $n\geq 3$,
\begin{equation}\label{decriem}
\RRR_{ijkl}=\frac{1}{n-2}(\RRR_{ik}g_{jl}-\RRR_{il}g_{jk}
+\RRR_{jl}g_{ik}-\RRR_{jk}g_{il})-\frac{\RRR}{(n-1)(n-2)}(g_{ik}g_{jl}-g_{il}g_{jk})
+\WWW_{ijkl}\,.
\end{equation}
The Weyl tensor satisfies all the symmetries of the curvature tensor;
moreover, all its traces with the metric are zero,
as can be easily seen from the above formula.\\
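For instance, contracting formula~\eqref{decriem} with $g^{ik}$ and
recalling that $g^{ik}\RRR_{ijkl}=\RRR_{jl}$, one finds
\begin{equation*}
\RRR_{jl}=\frac{1}{n-2}\bigl(\RRR g_{jl}+(n-2)\RRR_{jl}\bigr)
-\frac{\RRR}{n-2}\,g_{jl}+g^{ik}\WWW_{ijkl}=\RRR_{jl}+g^{ik}\WWW_{ijkl}\,,
\end{equation*}
hence $g^{ik}\WWW_{ijkl}=0$; the remaining traces vanish analogously by the
symmetries of $\WWW$.\\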
In dimension three $\WWW$ is identically zero for every Riemannian
manifold. It becomes relevant instead when $n\geq 4$, since its
vanishing is equivalent to $(M^n,g)$ being {\em locally
conformally flat}, that is, around every point $p\in M^n$ there is a
conformal deformation $\widetilde{g}_{ij}=e^fg_{ij}$ of the original
metric $g$, such that the new metric is flat,
namely, the Riemann tensor associated to $\widetilde{g}$ is zero in
$U_p$ (here $f:U_p\to\RR$ is a smooth function defined in an open
neighborhood $U_p$ of $p$).\\
In dimension $n=3$, instead, local conformal flatness is
equivalent to the vanishing of the following {\em Cotton tensor}
\begin{equation}\label{Cottonn}
\CCC_{ijk} =\nabla_k\RRR_{ij} - \nabla_j\RRR_{ik} -
\frac{1}{2(n-1)} \big(\nabla_k\RRR g_{ij} - \nabla_j\RRR g_{ik} \big)\,,
\end{equation}
which expresses the fact that the {\em Schouten} tensor
$$
\SSS_{ij} =\RRR_{ij} - \frac{\RRR g_{ij}}{2(n-1)}
$$
is a {\em Codazzi} tensor (see~\cite[Chapter~16, Section~C]{besse}), that is, a symmetric bilinear form $\TTT_{ij}$ such that $\nabla_k\TTT_{ij}=\nabla_i\TTT_{kj}$.
By means of the second Bianchi identity, one can easily get
(see~\cite{besse}) that
\begin{equation}\label{CottonWeyl}
\,\nabla^l\WWW_{lijk}=-\frac{n-3}{n-2}\CCC_{ijk}\,.
\end{equation}
Hence, when $n\geq 4$, if we assume that the manifold is locally
conformally flat (that is, $\WWW=0$), the Cotton tensor is identically
zero in this case as well, but now its vanishing is only a necessary
condition for local conformal flatness.
By direct computation, we can see that the tensor $\CCC_{ijk}$
satisfies the following symmetries
\begin{equation}\label{CottonSym}
\CCC_{ijk}=-\CCC_{ikj},\,\quad\quad\CCC_{ijk}+\CCC_{jki}+\CCC_{kij}=0\,,
\end{equation}
moreover it is trace--free in any two indices,
\begin{equation}\label{CottonTraces}
g^{ij}\CCC_{ijk}=g^{ik}\CCC_{ijk}=g^{jk}\CCC_{ijk}=0\,,
\end{equation}
by its skew--symmetry and Schur lemma.
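Indeed, for the trace on $i,j$ one computes, by means of Schur lemma,
\begin{equation*}
g^{ij}\CCC_{ijk}=\nabla_k\RRR-g^{ij}\nabla_j\RRR_{ik}
-\frac{1}{2(n-1)}\bigl(n\nabla_k\RRR-\nabla_k\RRR\bigr)
=\nabla_k\RRR-\frac{1}{2}\nabla_k\RRR-\frac{1}{2}\nabla_k\RRR=0\,,
\end{equation*}
while the trace on $j,k$ vanishes by skew--symmetry and the trace on
$i,k$ follows analogously.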
We suppose now that $(M^n, g(t))$ is a Ricci flow in some time
interval, that is, the time--dependent metric $g(t)$ satisfies
\[
\ \frac{\partial}{\partial t}g_{ij}=-2\RRR_{ij}.
\]
We have then the following evolution equations for the Christoffel
symbols, the Ricci tensor and the scalar curvature (see for instance~\cite{hamilton1}),
\begin{eqnarray}\label{evolutioncurv}
\frac{\partial\,}{\partial
t}\Gamma_{ij}^k&=&-g^{ks}\nabla_i\RRR_{js}-g^{ks}\nabla_j\RRR_{is}+g^{ks}\nabla_s\RRR_{ij}\nonumber\\
\frac{\partial\,}{\partial
t}\RRR_{ij}&=&\Delta\RRR_{ij}-2\RRR^{kl}\RRR_{kijl}-2g^{pq}\RRR_{ip}\RRR_{jq}\\
\frac{\partial\,}{\partial t}\RRR&=&\Delta\RRR+2|\Ric|^2\,.\nonumber
\end{eqnarray}
\medskip
{\em All the computations which follow will be done in a fixed local
frame, not in a moving frame.}
\bigskip
\begin{ackn} The first and second authors are partially supported by the
Italian FIRB Ideas ``Analysis and Beyond''.
\end{ackn}
\begin{note}
We remark that Huai-Dong Cao also, independently by us, worked out the
computation of the evolution of the Cotton tensor in dimension three,
in an unpublished note.
\end{note}
\bigskip
\section{The Evolution Equation of the Cotton Tensor in 3D}
The goal of this section is to compute the evolution equation under
the Ricci flow of the Cotton tensor $\CCC_{ijk}$ in dimension
three (see~\cite{mancat1} for the evolution of the Weyl tensor); the
general computation in any dimension is postponed to
Section~\ref{nsec}.
In the special three--dimensional case we have,
\begin{align}
\RRR_{ijkl}=&\,\RRR_{ik}g_{jl}-\RRR_{il}g_{jk}
+\RRR_{jl}g_{ik}-\RRR_{jk}g_{il}-\frac{\RRR}{2}(g_{ik}g_{jl}-g_{il}g_{jk})\,,
\label{Weyl_zero}\\
\CCC_{ijk} =&\,\nabla_k\RRR_{ij} - \nabla_j\RRR_{ik} -
\frac{1}{4} \big(\nabla_k\RRR g_{ij} - \nabla_j\RRR g_{ik} \big)\,,
\label{Cotton_three}
\end{align}
hence, the evolution equations~\eqref{evolutioncurv} become
\begin{align*}
\frac{\partial\,}{\partial
t}\Gamma_{ij}^k=&\,-g^{ks}\nabla_i\RRR_{js}-g^{ks}\nabla_j\RRR_{is}+g^{ks}\nabla_s\RRR_{ij}\\
\frac{\partial\,}{\partial t}\RRR_{ij}=&\,\Delta\RRR_{ij}-6g^{pq}\RRR_{ip}\RRR_{jq}
+3\RRR\RRR_{ij}+2\vert\Ric\vert^2g_{ij}-\RRR^2 g_{ij}\\
\frac{\partial\,}{\partial t}\RRR=&\,\Delta\RRR+2|\Ric|^2\,.\\
\end{align*}
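The quadratic terms in the second equation follow from
formula~\eqref{Weyl_zero}: a direct contraction gives
\begin{equation*}
\RRR^{kl}\RRR_{kijl}=2\RRR_{ip}\RRR_{pj}-\vert\Ric\vert^2 g_{ij}
-\frac{3}{2}\RRR\RRR_{ij}+\frac{\RRR^2}{2}\,g_{ij}\,,
\end{equation*}
so that $-2\RRR^{kl}\RRR_{kijl}-2\RRR_{ip}\RRR_{jp}
=-6\RRR_{ip}\RRR_{jp}+3\RRR\RRR_{ij}+2\vert\Ric\vert^2g_{ij}-\RRR^2g_{ij}$,
as displayed above.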
From these formulas we can compute the evolution equations of the
derivatives of the curvatures assuming, from now on, that we are in normal coordinates,
\begin{eqnarray}
\frac{\partial\,}{\partial
t}\nabla_l\RRR&=&\nabla_l\Delta\RRR+2\nabla_{l}|\Ric|^2\,,\nonumber\\
&&\nonumber\\
\frac{\partial\,}{\partial t}\nabla_s\RRR_{ij}
&=&\nabla_{s}\Delta\RRR_{ij}-6\nabla_{s}\RRR_{ip}\RRR_{jp}-6\RRR_{ip}\nabla_{s}\RRR_{jp}
+3\nabla_{s}\RRR\RRR_{ij}+3\RRR\nabla_{s}\RRR_{ij}\nonumber\\
&&+2\nabla_{s}\vert\Ric\vert^2g_{ij}-\nabla_{s}\RRR^2 g_{ij}\nonumber\\
&&+(\nabla_i\RRR_{sp}+\nabla_s\RRR_{ip}-\nabla_p\RRR_{is})\RRR_{jp}\nonumber\\
&&+(\nabla_j\RRR_{sp}+\nabla_s\RRR_{jp}-\nabla_p\RRR_{js})\RRR_{ip}\nonumber\\
&=&\nabla_{s}\Delta\RRR_{ij}-5\nabla_{s}\RRR_{ip}\RRR_{jp}-5\RRR_{ip}\nabla_{s}\RRR_{jp}
+3\nabla_{s}\RRR\RRR_{ij}+3\RRR\nabla_{s}\RRR_{ij}\nonumber\\
&&+2\nabla_{s}\vert\Ric\vert^2g_{ij}-\nabla_{s}\RRR^2 g_{ij}\nonumber\\
&&+(\nabla_i\RRR_{sp}-\nabla_p\RRR_{is})\RRR_{jp}+(\nabla_j\RRR_{sp}-\nabla_p\RRR_{js})\RRR_{ip}\nonumber\\
&=&\nabla_{s}\Delta\RRR_{ij}-5\nabla_{s}\RRR_{ip}\RRR_{jp}-5\RRR_{ip}\nabla_{s}\RRR_{jp}
+3\nabla_{s}\RRR\RRR_{ij}+3\RRR\nabla_{s}\RRR_{ij}\nonumber\\
&&+2\nabla_{s}\vert\Ric\vert^2g_{ij}-\nabla_{s}\RRR^2
g_{ij}+\CCC_{spi}\RRR_{jp}+\CCC_{spj}\RRR_{ip}\nonumber\\
&&+\RRR_{jp}[\nabla_i\RRR g_{sp}-\nabla_p\RRR g_{is}]/4
+\RRR_{ip}[\nabla_j\RRR g_{sp}-\nabla_p\RRR g_{js}]/4\,,\nonumber
\end{eqnarray}
where in the last passage we substituted the expression of the Cotton tensor.
We then compute,
\begin{eqnarray*}
\frac{\partial\,}{\partial t}\CCC_{ijk}
&=&\frac{\partial\,}{\partial
t}\nabla_k\RRR_{ij} -\frac{\partial\,}{\partial t} \nabla_j\RRR_{ik}
-\frac{\partial\,}{\partial t}\big(\nabla_k\RRR g_{ij} - \nabla_j\RRR
g_{ik} \big)/4\\
&=&\nabla_{k}\Delta\RRR_{ij}-5\nabla_{k}\RRR_{ip}\RRR_{jp}-5\RRR_{ip}\nabla_{k}\RRR_{jp}
+3\nabla_{k}\RRR\RRR_{ij}+3\RRR\nabla_{k}\RRR_{ij}\\
&&+2\nabla_{k}\vert\Ric\vert^2g_{ij}-\nabla_{k}\RRR^2
g_{ij}+\CCC_{kpi}\RRR_{jp}+\CCC_{kpj}\RRR_{ip}\\
&&+\RRR_{jp}[\nabla_i\RRR g_{kp}-\nabla_p\RRR g_{ik}]/4
+\RRR_{ip}[\nabla_j\RRR g_{kp}-\nabla_p\RRR g_{jk}]/4\\
&&-\nabla_{j}\Delta\RRR_{ik}+5\nabla_{j}\RRR_{ip}\RRR_{kp}+5\RRR_{ip}\nabla_{j}\RRR_{kp}
-3\nabla_{j}\RRR\RRR_{ik}-3\RRR\nabla_{j}\RRR_{ik}\\
&&-2\nabla_{j}\vert\Ric\vert^2g_{ik}+\nabla_{j}\RRR^2
g_{ik}-\CCC_{jpi}\RRR_{kp}-\CCC_{jpk}\RRR_{ip}\\
&&-\RRR_{kp}[\nabla_i\RRR g_{jp}-\nabla_p\RRR g_{ij}]/4
-\RRR_{ip}[\nabla_k\RRR g_{jp}-\nabla_p\RRR g_{kj}]/4\\
&&+\big(\RRR_{ij}\nabla_k\RRR -\RRR_{ik}\nabla_j\RRR\big)/2\\
&&-\big(\nabla_k\Delta\RRR+2\nabla_{k}|\Ric|^2\big)g_{ij}/4
+\big(\nabla_j\Delta\RRR+2\nabla_{j}|\Ric|^2\big) g_{ik}/4\\
&=&\nabla_{k}\Delta\RRR_{ij}-5\nabla_{k}\RRR_{ip}\RRR_{jp}-5\RRR_{ip}\nabla_{k}\RRR_{jp}
+3\nabla_{k}\RRR\RRR_{ij}+3\RRR\nabla_{k}\RRR_{ij}\\
&&+3\nabla_{k}\vert\Ric\vert^2g_{ij}/2-\nabla_{k}\RRR^2
g_{ij}+\CCC_{kpi}\RRR_{jp}+\CCC_{kpj}\RRR_{ip}\\
&&+\RRR_{jk}\nabla_i\RRR/4-\RRR_{jp}\nabla_p\RRR g_{ik}/4
+\RRR_{ik}\nabla_j\RRR/4 -\RRR_{ip}\nabla_p\RRR g_{jk}/4\\
&&-\nabla_{j}\Delta\RRR_{ik}+5\nabla_{j}\RRR_{ip}\RRR_{kp}+5\RRR_{ip}\nabla_{j}\RRR_{kp}
-3\nabla_{j}\RRR\RRR_{ik}-3\RRR\nabla_{j}\RRR_{ik}\\
&&-3\nabla_{j}\vert\Ric\vert^2g_{ik}/2+\nabla_{j}\RRR^2
g_{ik}-\CCC_{jpi}\RRR_{kp}-\CCC_{jpk}\RRR_{ip}\\
&&-\RRR_{kj}\nabla_i\RRR/4 +\RRR_{kp}\nabla_p\RRR g_{ij}/4
-\RRR_{ij}\nabla_k\RRR/4+\RRR_{ip}\nabla_p\RRR g_{kj}/4\\
&&+\big(\RRR_{ij}\nabla_k\RRR -\RRR_{ik}\nabla_j\RRR\big)/2\\
&&-\nabla_k\Delta\RRR g_{ij}/4 + \nabla_j\Delta\RRR g_{ik}/4\\
&=&\nabla_{k}\Delta\RRR_{ij}-5\nabla_{k}\RRR_{ip}\RRR_{jp}-5\RRR_{ip}\nabla_{k}\RRR_{jp}
+13\nabla_{k}\RRR\RRR_{ij}/4+3\RRR\nabla_{k}\RRR_{ij}\\
&&+3\nabla_{k}\vert\Ric\vert^2g_{ij}/2-\nabla_{k}\RRR^2
g_{ij}+\CCC_{kpi}\RRR_{jp}+\CCC_{kpj}\RRR_{ip}\\
&&-\RRR_{jp}\nabla_p\RRR g_{ik}/4\\
&&-\nabla_{j}\Delta\RRR_{ik}+5\nabla_{j}\RRR_{ip}\RRR_{kp}+5\RRR_{ip}\nabla_{j}\RRR_{kp}
-13\nabla_{j}\RRR\RRR_{ik}/4-3\RRR\nabla_{j}\RRR_{ik}\\
&&-3\nabla_{j}\vert\Ric\vert^2g_{ik}/2+\nabla_{j}\RRR^2
g_{ik}-\CCC_{jpi}\RRR_{kp}-\CCC_{jpk}\RRR_{ip}\\
&&+\RRR_{kp}\nabla_p\RRR g_{ij}/4\\
&&-\nabla_k\Delta\RRR g_{ij}/4 + \nabla_j\Delta\RRR g_{ik}/4
\end{eqnarray*}
and
$$
\Delta\CCC_{ijk}=\Delta\nabla_k\RRR_{ij}-\Delta\nabla_j\RRR_{ik}-\Delta\nabla_k\RRR
g_{ij}/4+\Delta\nabla_j\RRR g_{ik}/4\,,
$$
hence,
\begin{eqnarray*}
\frac{\partial\,}{\partial t}\CCC_{ijk}-\Delta\CCC_{ijk}
&=&\nabla_{k}\Delta\RRR_{ij}-\nabla_{j}\Delta\RRR_{ik}-\Delta\nabla_k\RRR_{ij}+\Delta\nabla_j\RRR_{ik}\\
&&-\nabla_k\Delta\RRR g_{ij}/4 + \nabla_j\Delta\RRR
g_{ik}/4+\Delta\nabla_k\RRR g_{ij}/4-\Delta\nabla_j\RRR g_{ik}/4\\
&&-5\nabla_{k}\RRR_{ip}\RRR_{jp}-5\RRR_{ip}\nabla_{k}\RRR_{jp}
+13\nabla_{k}\RRR\RRR_{ij}/4+3\RRR\nabla_{k}\RRR_{ij}\\
&&+3\nabla_{k}\vert\Ric\vert^2g_{ij}/2-\nabla_{k}\RRR^2
g_{ij}+\CCC_{kpi}\RRR_{jp}+\CCC_{kpj}\RRR_{ip}\\
&&-\RRR_{jp}\nabla_p\RRR g_{ik}/4\\
&&+5\nabla_{j}\RRR_{ip}\RRR_{kp}+5\RRR_{ip}\nabla_{j}\RRR_{kp}
-13\nabla_{j}\RRR\RRR_{ik}/4-3\RRR\nabla_{j}\RRR_{ik}\\
&&-3\nabla_{j}\vert\Ric\vert^2g_{ik}/2+\nabla_{j}\RRR^2
g_{ik}-\CCC_{jpi}\RRR_{kp}-\CCC_{jpk}\RRR_{ip}\\
&&+\RRR_{kp}\nabla_p\RRR g_{ij}/4
\end{eqnarray*}
Now to proceed, we need the following commutation rules for the
derivatives of the Ricci tensor and of the scalar curvature, where we will employ the
special form of the Riemann tensor in dimension three given by
formula~\eqref{Weyl_zero},
\begin{eqnarray*}
\nabla_k\Delta\RRR_{ij}-\Delta\nabla_k\RRR_{ij}
&=&\nabla^3_{kll}\RRR_{ij}-\nabla^3_{lkl}\RRR_{ij}+
\nabla^3_{lkl}\RRR_{ij}-\nabla^3_{llk}\RRR_{ij}\\
&=&-\RRR_{kp}\nabla_p\RRR_{ij}
+\RRR_{klip}\nabla_l\RRR_{jp}
+\RRR_{kljp}\nabla_l\RRR_{ip}\\
&&+\nabla^3_{lkl}\RRR_{ij}-\nabla^3_{llk}\RRR_{ij}\\
&=&-\RRR_{kp}\nabla_p\RRR_{ij}
+\RRR_{ik}\nabla_j\RRR/2
+\RRR_{jk}\nabla_i\RRR/2\\
&&-\RRR_{kp}\nabla_i\RRR_{jp}
-\RRR_{kp}\nabla_j\RRR_{ip}
+\RRR_{lp}\nabla_l\RRR_{jp}g_{ik}
+\RRR_{lp}\nabla_l\RRR_{ip}g_{jk}\\
&&-\RRR_{li}\nabla_l\RRR_{jk}
-\RRR_{lj}\nabla_l\RRR_{ik}
-\RRR\nabla_j\RRR g_{ik}/4
-\RRR\nabla_i\RRR g_{jk}/4\\
&&+\RRR\nabla_i\RRR_{jk}/2
+\RRR\nabla_j\RRR_{ik}/2\\
&&+\nabla_l\big(\RRR_{klip}\RRR_{pj}+\RRR_{kljp}\RRR_{pi}\big)\\
&=&-\RRR_{kp}\nabla_p\RRR_{ij}
+\RRR_{ik}\nabla_j\RRR/2
+\RRR_{jk}\nabla_i\RRR/2\\
&&-\RRR_{kp}\nabla_i\RRR_{jp}
-\RRR_{kp}\nabla_j\RRR_{ip}
+\RRR_{lp}\nabla_l\RRR_{jp}g_{ik}
+\RRR_{lp}\nabla_l\RRR_{ip}g_{jk}\\
&&-\RRR_{li}\nabla_l\RRR_{jk}
-\RRR_{lj}\nabla_l\RRR_{ik}
-\RRR\nabla_j\RRR g_{ik}/4
-\RRR\nabla_i\RRR g_{jk}/4\\
&&+\RRR\nabla_i\RRR_{jk}/2
+\RRR\nabla_j\RRR_{ik}/2\\
&&+\nabla_l\big(
\RRR_{ik}\RRR_{lj}
-\RRR_{il}\RRR_{kj}
+\RRR_{pl}\RRR_{pj}g_{ik}
-\RRR_{pk}\RRR_{pj}g_{il}
-g_{ik}\RRR\RRR_{lj}/2
+g_{il}\RRR\RRR_{jk}/2\\
&&
+\RRR_{jk}\RRR_{li}
-\RRR_{jl}\RRR_{ki}
+\RRR_{pl}\RRR_{pi}g_{jk}
-\RRR_{pk}\RRR_{pi}g_{jl}
-g_{jk}\RRR\RRR_{li}/2
+g_{jl}\RRR\RRR_{ik}/2\big)\\
&=&-\RRR_{kp}\nabla_p\RRR_{ij}
+\RRR_{ik}\nabla_j\RRR/2
+\RRR_{jk}\nabla_i\RRR/2\\
&&-\RRR_{kp}\nabla_i\RRR_{jp}
-\RRR_{kp}\nabla_j\RRR_{ip}
+\RRR_{lp}\nabla_l\RRR_{jp}g_{ik}
+\RRR_{lp}\nabla_l\RRR_{ip}g_{jk}\\
&&-\RRR_{li}\nabla_l\RRR_{jk}
-\RRR_{lj}\nabla_l\RRR_{ik}
-\RRR\nabla_j\RRR g_{ik}/4
-\RRR\nabla_i\RRR g_{jk}/4\\
&&+\RRR\nabla_i\RRR_{jk}/2
+\RRR\nabla_j\RRR_{ik}/2\\
&&-\nabla_i\RRR_{pk}\RRR_{pj}
+\nabla_i\RRR\RRR_{jk}/2
+g_{ik}\RRR_{pl}\nabla_l\RRR_{pj}\\
&&-\RRR_{pk}\nabla_i\RRR_{pj}
-g_{ik}\RRR\nabla_j\RRR/4
+\RRR\nabla_i\RRR_{jk}/2\\
&&-\nabla_j\RRR_{pk}\RRR_{pi}
+\nabla_j\RRR\RRR_{ik}/2
+g_{jk}\RRR_{pl}\nabla_l\RRR_{pi}\\
&&-\RRR_{pk}\nabla_j\RRR_{pi}
-g_{jk}\RRR\nabla_i\RRR/4
+\RRR\nabla_j\RRR_{ik}/2\\
&=&-\RRR_{kp}\nabla_p\RRR_{ij}
+\RRR_{ik}\nabla_j\RRR
+\RRR_{jk}\nabla_i\RRR\\
&&-2\RRR_{kp}\nabla_i\RRR_{jp}
-2\RRR_{kp}\nabla_j\RRR_{ip}
+2\RRR_{lp}\nabla_l\RRR_{jp}g_{ik}
+2\RRR_{lp}\nabla_l\RRR_{ip}g_{jk}\\
&&-\RRR_{li}\nabla_l\RRR_{jk}
-\RRR_{lj}\nabla_l\RRR_{ik}
-\RRR_{pj}\nabla_i\RRR_{pk}
-\RRR_{pi}\nabla_j\RRR_{pk}\\
&&-\RRR\nabla_j\RRR g_{ik}/2
-\RRR\nabla_i\RRR g_{jk}/2
+\RRR\nabla_i\RRR_{jk}
+\RRR\nabla_j\RRR_{ik}
\end{eqnarray*}
and
$$
\nabla_k\Delta\RRR-\Delta\nabla_k\RRR=\RRR_{kllp}\nabla_p\RRR=-\RRR_{kp}\nabla_p\RRR\,.
$$
Then, getting back to the main computation, we obtain
\begin{eqnarray*}
\frac{\partial\,}{\partial t}\CCC_{ijk}-\Delta\CCC_{ijk}
&=&-\RRR_{kp}\nabla_p\RRR_{ij}
+\RRR_{ik}\nabla_j\RRR
+\RRR_{jk}\nabla_i\RRR\\
&&-2\RRR_{kp}\nabla_i\RRR_{jp}
-2\RRR_{kp}\nabla_j\RRR_{ip}
+2\RRR_{lp}\nabla_l\RRR_{jp}g_{ik}
+2\RRR_{lp}\nabla_l\RRR_{ip}g_{jk}\\
&&-\RRR_{li}\nabla_l\RRR_{jk}
-\RRR_{lj}\nabla_l\RRR_{ik}
-\RRR_{pj}\nabla_i\RRR_{pk}
-\RRR_{pi}\nabla_j\RRR_{pk}\\
&&-\RRR\nabla_j\RRR g_{ik}/2
-\RRR\nabla_i\RRR g_{jk}/2
+\RRR\nabla_i\RRR_{jk}
+\RRR\nabla_j\RRR_{ik}\\
&&+\RRR_{jp}\nabla_p\RRR_{ik}
-\RRR_{ij}\nabla_k\RRR
-\RRR_{kj}\nabla_i\RRR\\
&&+2\RRR_{jp}\nabla_i\RRR_{kp}
+2\RRR_{jp}\nabla_k\RRR_{ip}
-2\RRR_{lp}\nabla_l\RRR_{kp}g_{ij}
-2\RRR_{lp}\nabla_l\RRR_{ip}g_{kj}\\
&&+\RRR_{li}\nabla_l\RRR_{kj}
+\RRR_{lk}\nabla_l\RRR_{ij}
+\RRR_{pk}\nabla_i\RRR_{pj}
+\RRR_{pi}\nabla_k\RRR_{pj}\\
&&+\RRR\nabla_k\RRR g_{ij}/2
+\RRR\nabla_i\RRR g_{kj}/2
-\RRR\nabla_i\RRR_{kj}
-\RRR\nabla_k\RRR_{ij}\\
&&+\RRR_{kp}\nabla_p\RRR g_{ij}/4 -\RRR_{jp}\nabla_p\RRR g_{ik}/4\\
&&-5\nabla_{k}\RRR_{ip}\RRR_{jp}-5\RRR_{ip}\nabla_{k}\RRR_{jp}
+13\nabla_{k}\RRR\RRR_{ij}/4+3\RRR\nabla_{k}\RRR_{ij}\\
&&+3\nabla_{k}\vert\Ric\vert^2g_{ij}/2-\nabla_{k}\RRR^2
g_{ij}+\CCC_{kpi}\RRR_{jp}+\CCC_{kpj}\RRR_{ip}\\
&&-\RRR_{jp}\nabla_p\RRR g_{ik}/4\\
&&+5\nabla_{j}\RRR_{ip}\RRR_{kp}+5\RRR_{ip}\nabla_{j}\RRR_{kp}
-13\nabla_{j}\RRR\RRR_{ik}/4-3\RRR\nabla_{j}\RRR_{ik}\\
&&-3\nabla_{j}\vert\Ric\vert^2g_{ik}/2+\nabla_{j}\RRR^2
g_{ik}-\CCC_{jpi}\RRR_{kp}-\CCC_{jpk}\RRR_{ip}\\
&&+\RRR_{kp}\nabla_p\RRR g_{ij}/4\\
&=&\CCC_{kpi}\RRR_{jp}+\CCC_{kpj}\RRR_{ip}-\CCC_{jpi}\RRR_{kp}-\CCC_{jpk}\RRR_{ip}\\
&&+[2\RRR_{lp}\nabla_l\RRR_{jp}+3\RRR\nabla_j\RRR/2
-\RRR_{jp}\nabla_p\RRR/2-3\nabla_{j}\vert\Ric\vert^2/2]g_{ik}\\
&&+[-2\RRR_{lp}\nabla_l\RRR_{kp}-3\RRR\nabla_k\RRR/2
+\RRR_{kp}\nabla_p\RRR/2+3\nabla_{k}\vert\Ric\vert^2/2]g_{ij}\\
&&-\RRR_{kp}\nabla_i\RRR_{jp}+\RRR_{jp}\nabla_i\RRR_{kp}\\
&&-3\nabla_{k}\RRR_{ip}\RRR_{jp}-4\RRR_{ip}\nabla_k\RRR_{jp}
+9\nabla_{k}\RRR\RRR_{ij}/4+2\RRR\nabla_{k}\RRR_{ij}\\
&&+3\nabla_{j}\RRR_{ip}\RRR_{kp}+4\RRR_{ip}\nabla_{j}\RRR_{kp}
-9\nabla_{j}\RRR\RRR_{ik}/4-2\RRR\nabla_{j}\RRR_{ik}
\end{eqnarray*}
Now, by means of the very definition of the Cotton tensor in
dimension three~\eqref{Cotton_three} and the
identities~\eqref{CottonSym}, we substitute
\begin{align*}
\CCC_{kpj}-\CCC_{jpk}=&\,-\CCC_{kjp}-\CCC_{jpk}=\CCC_{pkj}\\
\nabla_l\RRR_{jp}=&\,\nabla_j\RRR_{lp}+\CCC_{pjl}+\frac{1}{4}
\big(\nabla_l\RRR g_{pj} - \nabla_j\RRR g_{pl} \big)\\
\nabla_l\RRR_{kp}=&\,\nabla_k\RRR_{lp}+\CCC_{pkl}+\frac{1}{4}
\big(\nabla_l\RRR g_{pk} - \nabla_k\RRR g_{pl} \big)\\
\nabla_i\RRR_{jp}=&\,\nabla_j\RRR_{ip}+\CCC_{pji}+\frac{1}{4}
\big(\nabla_i\RRR g_{jp} - \nabla_j\RRR g_{ip} \big)\\
\nabla_i\RRR_{kp}=&\,\nabla_k\RRR_{ip}+\CCC_{pki}+\frac{1}{4}
\big(\nabla_i\RRR g_{kp} - \nabla_k\RRR g_{ip} \big)
\end{align*}
in the last expression above, getting
\begin{eqnarray*}
\frac{\partial\,}{\partial t}\CCC_{ijk}-\Delta\CCC_{ijk}
&=&\RRR_{jp}\CCC_{kpi}-\RRR_{kp}\CCC_{jpi}+\RRR_{ip}\CCC_{pkj}\\
&&+\Big[2\RRR_{lp}\big(\nabla_j\RRR_{lp}+\CCC_{pjl}+\nabla_l\RRR
g_{pj}/4 - \nabla_j\RRR g_{pl}/4\big)\\
&&\phantom{+\Big[}+3\RRR\nabla_j\RRR/2
-\RRR_{jp}\nabla_p\RRR/2-3\nabla_{j}\vert\Ric\vert^2/2\Big]g_{ik}\\
&&+\Big[-2\RRR_{lp}\big(\nabla_k\RRR_{lp}+\CCC_{pkl}+\nabla_l\RRR
g_{pk}/4 - \nabla_k\RRR g_{pl}/4\big)\\
&&\phantom{+\Big[}-3\RRR\nabla_k\RRR/2
+\RRR_{kp}\nabla_p\RRR/2+3\nabla_{k}\vert\Ric\vert^2/2\Big]g_{ij}\\
&&-\RRR_{kp}\big(\nabla_j\RRR_{ip}+\CCC_{pji}+\nabla_i\RRR g_{jp}/4 - \nabla_j\RRR g_{ip}/4\big)\\
&&+\RRR_{jp}\big(\nabla_k\RRR_{ip}+\CCC_{pki}+\nabla_i\RRR g_{kp}/4 - \nabla_k\RRR g_{ip}/4\big)\\
&&-3\nabla_{k}\RRR_{ip}\RRR_{jp}-4\RRR_{ip}\nabla_k\RRR_{jp}
+9\nabla_{k}\RRR\RRR_{ij}/4+2\RRR\nabla_{k}\RRR_{ij}\\
&&+3\nabla_{j}\RRR_{ip}\RRR_{kp}+4\RRR_{ip}\nabla_{j}\RRR_{kp}
-9\nabla_{j}\RRR\RRR_{ik}/4-2\RRR\nabla_{j}\RRR_{ik}\\
&=&\RRR_{jp}\big(\CCC_{kpi}+\CCC_{pki}\big)
-\RRR_{kp}\big(\CCC_{jpi}+\CCC_{pji}\big)+\RRR_{ip}\CCC_{pkj}\\
&&+2\RRR_{lp}\CCC_{pjl} g_{ik}-2\RRR_{lp}\CCC_{pkl} g_{ij}\\
&&+\big[\RRR\nabla_j\RRR-\nabla_{j}\vert\Ric\vert^2/2\big]g_{ik}
-\big[\RRR\nabla_k\RRR-\nabla_{k}\vert\Ric\vert^2/2\big]g_{ij}\\
&&-2\nabla_{k}\RRR_{ip}\RRR_{jp}-4\RRR_{ip}\nabla_k\RRR_{jp}
+2\nabla_{k}\RRR\RRR_{ij}+2\RRR\nabla_{k}\RRR_{ij}\\
&&+2\nabla_{j}\RRR_{ip}\RRR_{kp}+4\RRR_{ip}\nabla_{j}\RRR_{kp}
-2\nabla_{j}\RRR\RRR_{ik}-2\RRR\nabla_{j}\RRR_{ik}\,.
\end{eqnarray*}
Then, we substitute again
\begin{align*}
\nabla_k\RRR_{jp}=&\,\nabla_p\RRR_{kj}+\CCC_{jpk}+\frac{1}{4}
\big(\nabla_k\RRR g_{jp} - \nabla_p\RRR g_{jk} \big)\\
\nabla_j\RRR_{kp}=&\,\nabla_p\RRR_{jk}+\CCC_{kpj}+\frac{1}{4}
\big(\nabla_j\RRR g_{kp} - \nabla_p\RRR g_{kj} \big)\\
\nabla_k\RRR_{ij}=&\,\nabla_i\RRR_{kj}+\CCC_{jik}+\frac{1}{4}
\big(\nabla_k\RRR g_{ij} - \nabla_i\RRR g_{jk} \big)\\
\nabla_j\RRR_{ik}=&\,\nabla_i\RRR_{jk}+\CCC_{kij}+\frac{1}{4}
\big(\nabla_j\RRR g_{ik} - \nabla_i\RRR g_{kj} \big)\,,
\end{align*}
finally obtaining
\begin{eqnarray*}
\frac{\partial\,}{\partial t}\CCC_{ijk}-\Delta\CCC_{ijk}
&=&\RRR_{jp}\big(\CCC_{kpi}+\CCC_{pki}\big)
-\RRR_{kp}\big(\CCC_{jpi}+\CCC_{pji}\big)+\RRR_{ip}\CCC_{pkj}\\
&&+2\RRR_{lp}\CCC_{pjl} g_{ik}-2\RRR_{lp}\CCC_{pkl} g_{ij}\\
&&+\big[\RRR\nabla_j\RRR-\nabla_{j}\vert\Ric\vert^2/2\big]g_{ik}
-\big[\RRR\nabla_k\RRR-\nabla_{k}\vert\Ric\vert^2/2\big]g_{ij}\\
&&-2\nabla_{k}\RRR_{ip}\RRR_{jp}-4\RRR_{ip}\big(
\nabla_p\RRR_{kj}+\CCC_{jpk}+\nabla_k\RRR g_{jp}/4 - \nabla_p\RRR g_{jk}/4 \big)\\
&&+2\nabla_{k}\RRR\RRR_{ij}
+2\RRR\big(
\nabla_i\RRR_{kj}+\CCC_{jik}+\nabla_k\RRR g_{ij}/4 - \nabla_i\RRR g_{jk}/4 \big)\\
&&+2\nabla_{j}\RRR_{ip}\RRR_{kp}+4\RRR_{ip}\big(
\nabla_p\RRR_{jk}+\CCC_{kpj}+\nabla_j\RRR g_{kp}/4 - \nabla_p\RRR g_{kj}/4 \big)\\
&&-2\nabla_{j}\RRR\RRR_{ik}
-2\RRR\big(
\nabla_i\RRR_{jk}+\CCC_{kij}+\nabla_j\RRR g_{ik}/4 - \nabla_i\RRR
g_{kj}/4 \big)\\
&=&\RRR_{jp}\big(\CCC_{kpi}+\CCC_{pki}\big)
-\RRR_{kp}\big(\CCC_{jpi}+\CCC_{pji}\big)+\RRR_{ip}\CCC_{pkj}\\
&&+4\RRR_{ip}\big(\CCC_{kpj}-\CCC_{jpk}\big)
+2\RRR\big(\CCC_{jik}-\CCC_{kij}\big)\\
&&+2\RRR_{lp}\CCC_{pjl} g_{ik}-2\RRR_{lp}\CCC_{pkl} g_{ij}\\
&&+\big[\RRR\nabla_j\RRR/2-\nabla_{j}\vert\Ric\vert^2/2\big]g_{ik}
-\big[\RRR\nabla_k\RRR/2-\nabla_{k}\vert\Ric\vert^2/2\big]g_{ij}\\
&&-2\nabla_{k}\RRR_{ip}\RRR_{jp}+2\nabla_{j}\RRR_{ip}\RRR_{kp}\\
&&+\nabla_{k}\RRR\RRR_{ij}-\nabla_{j}\RRR\RRR_{ik}\\
&=&\RRR_{jp}\big(\CCC_{kpi}+\CCC_{pki}\big)
-\RRR_{kp}\big(\CCC_{jpi}+\CCC_{pji}\big)+5\RRR_{ip}\CCC_{pkj}\\
&&+2\RRR\CCC_{ijk}+2\RRR_{lp}\CCC_{pjl} g_{ik}-2\RRR_{lp}\CCC_{pkl} g_{ij}\\
&&+\big[\RRR\nabla_j\RRR/2-\nabla_{j}\vert\Ric\vert^2/2\big]g_{ik}
-\big[\RRR\nabla_k\RRR/2-\nabla_{k}\vert\Ric\vert^2/2\big]g_{ij}\\
&&+2\nabla_{j}\RRR_{ip}\RRR_{kp}-2\nabla_{k}\RRR_{ip}\RRR_{jp}\\
&&+\nabla_{k}\RRR\RRR_{ij}-\nabla_{j}\RRR\RRR_{ik}\,,
\end{eqnarray*}
where in the last passage we used again the
identities~\eqref{CottonSym}.\\
Hence, we can summarize this long computation in the following
proposition, getting back to a generic coordinate basis.
\begin{prop}\label{cot}
During the Ricci flow of a 3--dimensional Riemannian manifold $(M^3,
g(t))$,
the Cotton tensor satisfies the following evolution equation
\begin{eqnarray}\label{main_eq1}
\bigl(\partial_t-\Delta\bigr)\CCC_{ijk}&=&g^{pq}\RRR_{pj}(\CCC_{kqi}+\CCC_{qki})+5g^{pq}\RRR_{ip}\CCC_{qkj}
+g^{pq}\RRR_{pk}(\CCC_{jiq}+\CCC_{qij})\\
&&+2\RRR\CCC_{ijk}+2\RRR^{ql}\CCC_{qjl}g_{ik}-2\RRR^{ql}\CCC_{qkl}g_{ij}\nonumber\\
&&+\frac{1}{2}\nabla_k|\Ric|^2g_{ij}-\frac{1}{2}\nabla_j|\Ric|^2g_{ik}+\frac{\RRR}{2}\nabla_j\RRR g_{ik}
-\frac{\RRR}{2}\nabla_k\RRR g_{ij}\nonumber\\
&&+2g^{pq}\RRR_{pk}\nabla_j\RRR_{qi}-2g^{pq}\RRR_{pj}\nabla_k\RRR_{qi}
+\RRR_{ij}\nabla_k\RRR-\RRR_{ik}\nabla_j\RRR\,.\nonumber
\end{eqnarray}
In particular if the Cotton tensor vanishes identically along the flow we obtain,
\begin{eqnarray}\label{main_eq2}
0&=&\nabla_k|\Ric|^2g_{ij}-\nabla_j|\Ric|^2g_{ik}+{\RRR}\nabla_j\RRR g_{ik}
-{\RRR}\nabla_k\RRR g_{ij}\\
&&+4g^{pq}\RRR_{pk}\nabla_j\RRR_{qi}-4g^{pq}\RRR_{pj}\nabla_k\RRR_{qi}
+2\RRR_{ij}\nabla_k\RRR-2\RRR_{ik}\nabla_j\RRR\,.\nonumber
\end{eqnarray}
\end{prop}
\begin{cor}
If the Cotton tensor vanishes identically along the Ricci flow of a 3--dimensional Riemannian manifold $(M^3, g(t))$, the following tensor
$$
|\Ric|^2g_{ij}-4\RRR_{pj}\RRR_{pi}+3\RRR\RRR_{ij}-\frac{7}{8}\RRR^2g_{ij}
$$
is a Codazzi tensor (see~\cite[Chapter~16, Section~C]{besse}).
\end{cor}
\begin{proof}
We compute in an orthonormal basis,
\begin{align*}
4\RRR_{pk}\nabla_j\RRR_{pi}-4&\,\RRR_{pj}\nabla_k\RRR_{pi}+2\RRR_{ij}\nabla_k\RRR-2\RRR_{ik}\nabla_j\RRR\\
=&\,4\nabla_j(\RRR_{pk}\RRR_{pi})-4\nabla_k(\RRR_{pj}\RRR_{pi})
-4\RRR_{pi}\nabla_j\RRR_{pk}+4\RRR_{pi}\nabla_k\RRR_{pj}\\
&\,+2\RRR_{ij}\nabla_k\RRR-2\RRR_{ik}\nabla_j\RRR\\
=&\,4\nabla_j(\RRR_{pk}\RRR_{pi})-4\nabla_k(\RRR_{pj}\RRR_{pi})
+\RRR_{pi}(4\CCC_{pjk}+\nabla_k\RRR g_{pj}-\nabla_j\RRR g_{pk})\\
&\,+2\RRR_{ij}\nabla_k\RRR-2\RRR_{ik}\nabla_j\RRR\\
=&\,4\nabla_j(\RRR_{pk}\RRR_{pi})-4\nabla_k(\RRR_{pj}\RRR_{pi})
+3\RRR_{ij}\nabla_k\RRR-3\RRR_{ik}\nabla_j\RRR\\
=&\,4\nabla_j(\RRR_{pk}\RRR_{pi})-4\nabla_k(\RRR_{pj}\RRR_{pi})
+3\nabla_k(\RRR\RRR_{ij})-3\nabla_j(\RRR\RRR_{ik})\\
&\,-3\RRR(\nabla_k\RRR_{ij}-\nabla_j\RRR_{ik})\\
=&\,4\nabla_j(\RRR_{pk}\RRR_{pi})-4\nabla_k(\RRR_{pj}\RRR_{pi})
+3\nabla_k(\RRR\RRR_{ij})-3\nabla_j(\RRR\RRR_{ik})\\
&\,-3\RRR(4\CCC_{ijk}+\nabla_k\RRR g_{ij}-\nabla_j\RRR g_{ik})/4\\
=&\,4\nabla_j(\RRR_{pk}\RRR_{pi})-4\nabla_k(\RRR_{pj}\RRR_{pi})
+3\nabla_k(\RRR\RRR_{ij})-3\nabla_j(\RRR\RRR_{ik})\\
&\,-\frac{3}{8}\nabla_k\RRR^2 g_{ij}+\frac{3}{8}\nabla_j\RRR^2 g_{ik}\,.
\end{align*}
Hence, we have, by the previous proposition,
\begin{align*}
0=&\,\nabla_k|\Ric|^2g_{ij}-\nabla_j|\Ric|^2g_{ik}
+4\nabla_j(\RRR_{pk}\RRR_{pi})-4\nabla_k(\RRR_{pj}\RRR_{pi})\\
&\,+3\nabla_k(\RRR\RRR_{ij})-3\nabla_j(\RRR\RRR_{ik})
-\frac{7}{8}\nabla_k\RRR^2 g_{ij}+\frac{7}{8}\nabla_j\RRR^2 g_{ik}\,,
\end{align*}
which is the thesis of the corollary.
\end{proof}
\begin{rem}
All the traces of the 3--tensor on the right--hand side of equation~\eqref{main_eq2} are zero.
\end{rem}
\begin{rem}
From the trace--free property~\eqref{CottonTraces} of the Cotton
tensor and the fact that along the Ricci flow there holds
$$
\bigl(\partial_t-\Delta\bigr)g^{ij}=2\RRR^{ij}\,,
$$
we conclude that the following relations have to hold
\begin{eqnarray*}
g^{ij}(\partial_t-\Delta) \CCC_{ijk}&=&-2\RRR^{ij}\CCC_{ijk}\,,\\
g^{ik}(\partial_t-\Delta) \CCC_{ijk}&=&-2\RRR^{ik}\CCC_{ijk}\,,\\
g^{jk}(\partial_t-\Delta) \CCC_{ijk}&=&-2\RRR^{jk}\CCC_{ijk}=0\,.\
\end{eqnarray*}
They are easily verified for formula~\eqref{main_eq1}.
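In fact, the first relation can also be obtained directly: since
$g^{ij}\CCC_{ijk}=0$ along the flow and the metric is parallel, one has
\begin{equation*}
0=\bigl(\partial_t-\Delta\bigr)\bigl(g^{ij}\CCC_{ijk}\bigr)
=\bigl(\partial_t-\Delta\bigr)g^{ij}\,\CCC_{ijk}
+g^{ij}\bigl(\partial_t-\Delta\bigr)\CCC_{ijk}
=2\RRR^{ij}\CCC_{ijk}+g^{ij}\bigl(\partial_t-\Delta\bigr)\CCC_{ijk}\,,
\end{equation*}
and analogously for the other two traces.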
\end{rem}
\begin{cor}\label{CorEv}
During the Ricci flow of a 3--dimensional Riemannian manifold $(M^3,
g(t))$, the squared norm of the Cotton tensor satisfies the following evolution equation, in an orthonormal basis,
\begin{eqnarray*}
\bigl(\partial_t-\Delta\bigr)\vert\CCC_{ijk}\vert^2
&=&-2\vert \nabla \CCC_{ijk}\vert^2-16\CCC_{ipk}\CCC_{iqk}\RRR_{pq}
+24\CCC_{ipk}\CCC_{kqi}\RRR_{pq}+4\RRR\vert\CCC_{ijk}\vert^2\\
&&+8\CCC_{ijk}\RRR_{pk}\nabla_j\RRR_{pi}
+4\CCC_{ijk}\RRR_{ij}\nabla_k\RRR\,.\nonumber
\end{eqnarray*}
\end{cor}
\begin{proof}
\begin{eqnarray*}
\bigl(\partial_t-\Delta\bigr)\vert\CCC_{ijk}\vert^2
&=&-2\vert \nabla \CCC_{ijk}\vert^2
+2\CCC^{ijk}\RRR_{ip}g^{pq}\CCC_{qjk}
+2\CCC^{ijk}\RRR_{jp}g^{pq}\CCC_{iqk}
+2\CCC^{ijk}\RRR_{kp}g^{pq}\CCC_{ijq}\\
&&+2\CCC^{ijk}\Bigl[g^{pq}\RRR_{pj}(\CCC_{kqi}+\CCC_{qki})+5g^{pq}\RRR_{ip}\CCC_{qkj}
+g^{pq}\RRR_{pk}(\CCC_{jiq}+\CCC_{qij})\\
&&+2\RRR\CCC_{ijk}+2\RRR^{ql}\CCC_{qjl}g_{ik}-2\RRR^{ql}\CCC_{qkl}g_{ij}\\
&&+\frac{1}{2}\nabla_k|\Ric|^2g_{ij}-\frac{1}{2}\nabla_j|\Ric|^2g_{ik}+\frac{\RRR}{2}\nabla_j\RRR g_{ik}
-\frac{\RRR}{2}\nabla_k\RRR g_{ij}\\
&&+2g^{pq}\RRR_{pk}\nabla_j\RRR_{qi}-2g^{pq}\RRR_{pj}\nabla_k\RRR_{qi}
+\RRR_{ij}\nabla_k\RRR-\RRR_{ik}\nabla_j\RRR\Bigr]\\
&=&-2\vert \nabla \CCC_{ijk}\vert^2
+2(\CCC^{kij}+\CCC^{jki})\RRR_{ip}g^{pq}(\CCC_{kqj}+\CCC_{jkq})\\
&&+2\CCC^{ijk}\RRR_{jp}g^{pq}\CCC_{iqk}
+2\CCC^{ikj}\RRR_{kp}g^{pq}\CCC_{iqj}\\
&&+2\CCC^{ijk}\Bigl[2g^{pq}\RRR_{pj}(\CCC_{kqi}+\CCC_{qki})+5g^{pq}\RRR_{ip}\CCC_{qkj}\Bigr]\\
&&+4\RRR\vert\CCC_{ijk}\vert^2
+8g^{pq}\CCC^{ijk}\RRR_{pk}\nabla_j\RRR_{qi}
+4\CCC^{ijk}\RRR_{ij}\nabla_k\RRR\\
&=&-2\vert \nabla \CCC_{ijk}\vert^2-16\CCC_{ipk}\CCC_{iqk}\RRR_{pq}
+24\CCC_{ipk}\CCC_{kqi}\RRR_{pq}+4\RRR\vert\CCC_{ijk}\vert^2\\
&&+8\CCC_{ijk}\RRR_{pk}\nabla_j\RRR_{pi}
+4\CCC_{ijk}\RRR_{ij}\nabla_k\RRR
\end{eqnarray*}
where in the last line we worked in an orthonormal basis.
\end{proof}
\section{Three--Dimensional Gradient Ricci Solitons}\label{cgrad}
The structural equation of a gradient Ricci soliton $(M^n,g, \nabla f)$ is the following
\begin{equation}\label{SolEq0}
\RRR_{ij}+\nabla_i\nabla_jf=\lambda g_{ij}\,,
\end{equation}
for some $\lambda\in\mathbb{R}$.\\
The soliton is said to be {\em steady}, {\em shrinking} or {\em expanding} according to the fact that the constant $\lambda$ is zero, positive or negative, respectively.
It follows that in dimension three, for $(M^3,g, \nabla f)$ there holds
\begin{eqnarray}
\Delta\RRR_{ij}&=&\nabla_l\RRR_{ij}\nabla_l
f+2\lambda\RRR_{ij}-2|\Ric|^2g_{ij}+\RRR^2
g_{ij}-3\RRR\RRR_{ij}+4\RRR_{is}\RRR_{sj}\,\label{SolEq1}\\
\Delta\RRR&=&\nabla_l\RRR\nabla_l f+2\lambda\RRR-2|\Ric|^2\,\label{SolEq2}\\
\nabla_i\RRR&=&2\RRR_{li}\nabla_l f\label{SolEq3}\\
\CCC_{ijk}&=&\frac{\RRR_{lk}g_{ij}}{2}\nabla_lf-\frac{\RRR_{lj}g_{ik}}{2}\nabla_l
f+\RRR_{ij}\nabla_k f-\RRR_{ik}\nabla_j f
+\frac{\RRR g_{ik}}{2}\nabla_j f -\frac{\RRR g_{ij}}{2}\nabla_k f\label{SolEq4}\\
&=&\frac{\nabla_k\RRR}{4}g_{ij}-\frac{\nabla_j\RRR}{4}g_{ik}
+\Bigl(\RRR_{ij}-\frac{\RRR}{2}g_{ij}\Bigr)\nabla_k f
-\Bigl(\RRR_{ik}-\frac{\RRR}{2}g_{ik}\Bigr)\nabla_j f\,.\nonumber
\end{eqnarray}
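For instance, identity~\eqref{SolEq3} follows from the structural
equation: tracing~\eqref{SolEq0} gives $\RRR+\Delta f=n\lambda$, and taking
the divergence of~\eqref{SolEq0}, by Schur lemma and the commutation rule
$\Delta\nabla_j f=\nabla_j\Delta f+\RRR_{jp}\nabla_p f$, one gets
\begin{equation*}
\frac{1}{2}\nabla_j\RRR=\nabla^i\RRR_{ij}=-\Delta\nabla_j f
=-\nabla_j\Delta f-\RRR_{jp}\nabla_p f=\nabla_j\RRR-\RRR_{jp}\nabla_p f\,,
\end{equation*}
that is, $\nabla_j\RRR=2\RRR_{jp}\nabla_p f$.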
In the special case of a {\em steady} soliton the first two equations above simplify as follows,
\begin{eqnarray*}
\Delta\RRR_{ij}&=&\nabla_l\RRR_{ij}\nabla_l
f-2|\Ric|^2g_{ij}+\RRR^2
g_{ij}-3\RRR\RRR_{ij}+4\RRR_{is}\RRR_{sj}\\
\Delta\RRR&=&\nabla_l\RRR\nabla_l f-2|\Ric|^2\,.
\end{eqnarray*}
\begin{rem}
We notice that, by relation~\eqref{SolEq4}, we have
\begin{eqnarray*}
\CCC_{ijk}\nabla_i f&=&\frac{\nabla_k\RRR\nabla_jf}{4}-\frac{\nabla_j\RRR\nabla_k f}{4}+\RRR_{ij}\nabla_if\nabla_kf-\frac{\RRR}{2}\nabla_jf\nabla_k f
-\RRR_{ik}\nabla_if\nabla_jf+\frac{\RRR}{2}\nabla_kf\nabla_j f\\
&=&\frac{\nabla_j\RRR\nabla_kf}{4}-\frac{\nabla_k\RRR\nabla_j f}{4}\,,
\end{eqnarray*}
where in the last passage we used relation~\eqref{SolEq3}.\\
It follows that
$$
\CCC_{ijk}\nabla_i f\nabla_j f=\frac{\langle\nabla f,\nabla\RRR\rangle}{4}\nabla_k f-\frac{\vert\nabla f\vert^2}{4}\nabla_k\RRR\,.
$$
Hence, if the Cotton tensor of a three--dimensional
gradient Ricci soliton is identically zero, we have that at every point where $\nabla\RRR$ is not zero, $\nabla f$ and $\nabla\RRR$ are proportional.
This relation is a key step in (yet another) proof of the fact that a
three--dimensional, locally conformally flat, steady or
shrinking gradient Ricci soliton is locally a warped product of a
constant curvature surface on an
interval of $\RR$, leading to a full classification, first obtained by
H.-D. Cao and Q. Chen~\cite{caochen} for the steady case and
H.-D. Cao, B.-L. Chen and X.-P. Zhu~\cite{caochenzhu} for the
shrinking case (actually this is the last paper of a series finally
classifying, in full generality, all the three-dimensional gradient
shrinking Ricci solitons, even without the LCF assumption).
\end{rem}
\begin{prop}
Let $(M^3, g, f)$ be a three--dimensional gradient Ricci soliton. Then,
\begin{eqnarray*}
\Delta|\CCC_{ijk}|^2&=&\nabla_l|\CCC_{ijk}|^2
\nabla_l f+2|\nabla\CCC_{ijk}|^2-2\RRR|\CCC_{ijk}|^2\\
&&-6\CCC_{ijk}\RRR_{ij}\nabla_k\RRR+8\CCC_{jsk}\CCC_{jik}\RRR_{si}-16\CCC_{jsk}\CCC_{kij}\RRR_{si}
-8\CCC_{ijk}\RRR_{lk}\nabla_j\RRR_{il}\,.
\end{eqnarray*}
\end{prop}
\begin{proof}
First observe that
\[
\ \Delta|\CCC_{ijk}|^2
=2\CCC_{ijk}\Delta\CCC_{ijk}+2|\nabla\CCC_{ijk}|^2.
\]
Using relations~\eqref{SolEq4},~\eqref{SolEq1} and, repeatedly, the trace--free property~\eqref{CottonTraces} of the Cotton tensor, we have that
\begin{eqnarray*}
\CCC_{ijk}\Delta\CCC_{ijk}&=&\Delta(\RRR_{ij}\nabla_k f-\RRR_{ik}\nabla_j f)\CCC_{ijk}\\
&=&(\Delta\RRR_{ij}\nabla_k f+\RRR_{ij}\Delta\nabla_k f+2\nabla_l\RRR_{ij}\nabla_l\nabla_k f)\CCC_{ijk}\\
&&-(\Delta\RRR_{ik}\nabla_j f+\RRR_{ik}\Delta\nabla_j f+2\nabla_l\RRR_{ik}\nabla_l\nabla_j f)\CCC_{ijk}\\
&=&(\nabla_s\RRR_{ij}\nabla_s f-3\RRR\RRR_{ij}+4\RRR_{is}\RRR_{sj})\nabla_k f\CCC_{ijk}\\
&&+\RRR_{ij}\Delta\nabla_k f\CCC_{ijk}+2\nabla_l\RRR_{ij}\nabla_l\nabla_k f \CCC_{ijk}\\
&&-(\nabla_s\RRR_{ik}\nabla_s f-3\RRR\RRR_{ik}+4\RRR_{is}\RRR_{sk})\nabla_j f\CCC_{ijk}\\
&&-\RRR_{ik}\Delta\nabla_j f\CCC_{ijk}-2\nabla_l\RRR_{ik}\nabla_l\nabla_j f \CCC_{ijk}\\
&=&(\nabla_s\RRR_{ij}\nabla_k f- \nabla_s\RRR_{ik}\nabla_j f)\nabla_s f\CCC_{ijk}\\
&&-3\RRR(\RRR_{ij}\nabla_k f-\RRR_{ik}\nabla_j f)\CCC_{ijk}\\
&&+4\RRR_{is}(\RRR_{sj}\nabla_k f-\RRR_{sk}\nabla_j f)\CCC_{ijk}\\
&&+(\RRR_{ij}\nabla_l\nabla_l\nabla_k f-\RRR_{ik}\nabla_l\nabla_l\nabla_j f)\CCC_{ijk}\\
&&+2(\nabla_l\RRR_{ij}\nabla_l\nabla_k f-\nabla_l\RRR_{ik}\nabla_l\nabla_j f)\CCC_{ijk}\\
&=&(\nabla_s\RRR_{ij}\nabla_k f- \nabla_s\RRR_{ik}\nabla_j f)\nabla_s f\CCC_{ijk}\\
&&+(-3\RRR)\vert\CCC_{ijk}\vert^2\\
&&+4\RRR_{is}(\RRR_{sj}\nabla_k f-\RRR_{sk}\nabla_j f)\CCC_{ijk}\\
&&+(\RRR_{ij}\nabla_l\nabla_l\nabla_k f-\RRR_{ik}\nabla_l\nabla_l\nabla_j f)\CCC_{ijk}\\
&&+2(\nabla_l\RRR_{ij}\nabla_l\nabla_k f-\nabla_l\RRR_{ik}\nabla_l\nabla_j f)\CCC_{ijk}\,,
\end{eqnarray*}
where we used the identity
\begin{equation}\label{eq_NormaCottonSol}
(\RRR_{ij}\nabla_k f-\RRR_{ik}\nabla_j f)\CCC_{ijk}=\vert\CCC_{ijk}\vert^2
\end{equation}
which follows easily from equation~\eqref{SolEq4} and the fact that every trace of the Cotton tensor is zero.\\
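Indeed, contracting the second expression in~\eqref{SolEq4} with
$\CCC_{ijk}$, the pure--trace terms drop by~\eqref{CottonTraces} and one is
left with
\begin{equation*}
\CCC_{ijk}\CCC_{ijk}
=\CCC_{ijk}\Bigl(\RRR_{ij}-\frac{\RRR}{2}g_{ij}\Bigr)\nabla_k f
-\CCC_{ijk}\Bigl(\RRR_{ik}-\frac{\RRR}{2}g_{ik}\Bigr)\nabla_j f
=\CCC_{ijk}\bigl(\RRR_{ij}\nabla_k f-\RRR_{ik}\nabla_j f\bigr)\,.
\end{equation*}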
Using now equations~\eqref{SolEq0},~\eqref{SolEq4},~\eqref{CottonTraces},~\eqref{CottonSym}, and~\eqref{SolEq3}, we compute
\begin{eqnarray*}
(\nabla_s\RRR_{ij}\nabla_k f-\nabla_s\RRR_{ik}\nabla_j f)\nabla_s f\CCC_{ijk}&=&(\nabla_s(\RRR_{ij}\nabla_k f)-\RRR_{ij}\nabla_s\nabla_k f)\nabla_s f\CCC_{ijk}\\
&&-(\nabla_s(\RRR_{ik}\nabla_j f)-\RRR_{ik}\nabla_s\nabla_j f)\nabla_s f\CCC_{ijk}\\
&=&(\nabla_s(\RRR_{ij}\nabla_k f-\RRR_{ik}\nabla_j f)+\RRR_{ij}(\RRR_{sk}))\nabla_s f\CCC_{ijk}\\
&&-(\RRR_{ik}(\RRR_{sj}))\nabla_s f\CCC_{ijk}\\
&=&\nabla_s\CCC_{ijk}\CCC_{ijk}\nabla_s f+\RRR_{ij}\RRR_{sk}\nabla_s f\CCC_{ijk}-\RRR_{ik}\RRR_{sj}\nabla_s f \CCC_{ijk}\\
&=&\frac{1}{2}\nabla_s|\CCC_{ijk}|^2\nabla_s f+\frac{1}{2}\RRR_{ij}\nabla_k \RRR\CCC_{ijk}-\frac{1}{2}\RRR_{ik}\nabla_j \RRR\CCC_{ijk}\\
&&\\
&&\\
4\RRR_{is}(\RRR_{sj}\nabla_k f-\RRR_{sk}\nabla_j f)\CCC_{ijk}
&=&4\RRR_{is}(\CCC_{sjk}-\frac{1}{4}\nabla_k\RRR g_{sj}+\frac{1}{4}\nabla_j\RRR g_{sk}+\frac{\RRR}{2}\nabla_k f g_{sj}-\frac{\RRR}{2}\nabla_j f g_{sk})\CCC_{ijk}\\
&=&4\RRR_{is}(-\CCC_{jks}-\CCC_{ksj})(-\CCC_{jki}-\CCC_{kij})-\RRR_{ij}\nabla_k\RRR\CCC_{ijk}\\
&&+\RRR_{ik}\nabla_j\RRR\CCC_{ijk}+2\RRR\RRR_{ij}\nabla_k f\CCC_{ijk}-2\RRR\RRR_{ik}\nabla_j f\CCC_{ijk}\\
&=&8\RRR_{is}\CCC_{jsk}\CCC_{jik}-8\RRR_{is}\CCC_{jsk}\CCC_{kij}\\
&&-\RRR_{ij}\nabla_k\RRR\CCC_{ijk}+\RRR_{ik}\nabla_j\RRR\CCC_{ijk}+2\RRR|\CCC_{ijk}|^2\\
&&\\
&&\\
(\RRR_{ij}\nabla_l\nabla_l\nabla_k f-\RRR_{ik}\nabla_l\nabla_l\nabla_j
f)\CCC_{ijk}&=&(\RRR_{ij}\nabla_l(-\RRR_{lk})
-\RRR_{ik}\nabla_l(-\RRR_{lj}))\CCC_{ijk}\\
&=&-\frac{1}{2}\RRR_{ij}\nabla_k\RRR\CCC_{ijk}+\frac{1}{2}\RRR_{ik}\nabla_j\RRR\CCC_{ijk}\\
&&\\
&&\\
2(\nabla_l\RRR_{ij}\nabla_l\nabla_k f-\nabla_l\RRR_{ik}\nabla_l\nabla_j f)\CCC_{ijk}&=&2((\CCC_{ijl}+\nabla_j\RRR_{il}+\frac{1}{4}\nabla_l\RRR g_{ij}-\frac{1}{4}\nabla_j\RRR g_{il})(-\RRR_{lk}))\CCC_{ijk}\\
&&-2((\CCC_{ikl}+\nabla_k\RRR_{il}+\frac{1}{4}\nabla_l\RRR g_{ik}-\frac{1}{4}\nabla_k\RRR g_{il})(-\RRR_{lj}))\CCC_{ijk}\\
&=&-2\CCC_{ijl}\CCC_{ijk}\RRR_{lk}-2\CCC_{ijk}\RRR_{lk}\nabla_j\RRR_{il}+\frac{1}{2}\CCC_{ijk}\RRR_{ik}\nabla_j\RRR\\
&&+2\CCC_{ikl}\CCC_{ijk}\RRR_{lj}+2\CCC_{ijk}\RRR_{lj}\nabla_k\RRR_{il}-\frac{1}{2}\CCC_{ijk}\RRR_{ij}\nabla_k\RRR\\
&=&-2\CCC_{ilj}\CCC_{ikj}\RRR_{lk}-2\CCC_{ijk}\RRR_{lk}\nabla_j\RRR_{il}+\frac{1}{2}\CCC_{ijk}\RRR_{ik}\nabla_j\RRR\\
&&-2\CCC_{ilk}\CCC_{ijk}\RRR_{lj}+2\CCC_{ijk}\RRR_{lj}\nabla_k\RRR_{il}-\frac{1}{2}\CCC_{ijk}\RRR_{ij}\nabla_k\RRR.
\end{eqnarray*}
Hence, getting back to the main computation and using again the symmetry relations~\eqref{CottonSym}, we finally get
\begin{eqnarray*}
\CCC_{ijk}\Delta\CCC_{ijk}&=&\frac{1}{2}\nabla_s|\CCC_{ijk}|^2\nabla_s f-\RRR|\CCC_{ijk}|^2\\
&&-\frac{3}{2}\CCC_{ijk}\RRR_{ij}\nabla_k\RRR+\frac{3}{2}\CCC_{ijk}\RRR_{ik}\nabla_j\RRR\\
&&+4\CCC_{jsk}\CCC_{jik}\RRR_{si}-8\CCC_{jsk}\CCC_{kij}\RRR_{si}\\
&&-2\CCC_{ijk}\RRR_{lk}\nabla_j\RRR_{il}+2\CCC_{ijk}\RRR_{lj}\nabla_k\RRR_{il}\\
&=&\frac{1}{2}\nabla_s|\CCC_{ijk}|^2\nabla_s f-\RRR|\CCC_{ijk}|^2\\
&&-3\CCC_{ijk}\RRR_{ij}\nabla_k\RRR+4\CCC_{jsk}\CCC_{jik}\RRR_{si}-8\CCC_{jsk}\CCC_{kij}\RRR_{si}-4\CCC_{ijk}\RRR_{lk}\nabla_j\RRR_{il}\,
\end{eqnarray*}
where in the last passage we applied the skew--symmetry of the Cotton tensor in its last two indices. The thesis follows.
\end{proof}
\section{The Evolution Equation of the Cotton Tensor in any Dimension}\label{nsec}
In this section we compute the evolution equation of the Cotton tensor
$\CCC_{ijk}$ for every
$n$--dimensional Riemannian manifold $(M^n,g(t))$ evolving by Ricci flow.
Among the evolution equations~\eqref{evolutioncurv} we expand the one for the Ricci tensor,
\begin{align*}
\frac{\partial\,}{\partial t}\RRR_{ij}=&\,\Delta\RRR_{ij}-\frac{2n}{n-2}g^{pq}\RRR_{ip}\RRR_{jq}
+\frac{2n}{(n-1)(n-2)}\RRR\RRR_{ij}+\frac{2}{n-2}\vert\Ric\vert^2g_{ij}\\
&\,-\frac{2}{(n-1)(n-2)}\RRR^2 g_{ij}-2\RRR^{pq}\WWW_{pijq}\,.
\end{align*}
Then, we compute the evolution equations of the
derivatives of the curvatures assuming, from now on, that we are in normal coordinates,
\begin{eqnarray}
\frac{\partial\,}{\partial
t}\nabla_l\RRR&=&\nabla_l\Delta\RRR+2\nabla_{l}|\Ric|^2\,,\nonumber\\
&&\nonumber\\
\frac{\partial\,}{\partial t}\nabla_s\RRR_{ij}
&=&\nabla_{s}\Delta\RRR_{ij}-\frac{2n}{n-2}\nabla_{s}\RRR_{ip}\RRR_{jp}-\frac{2n}{n-2}\RRR_{ip}\nabla_{s}\RRR_{jp}
+\frac{2n}{(n-1)(n-2)}\nabla_{s}\RRR\RRR_{ij}\nonumber\\
&&+\frac{2n}{(n-1)(n-2)}\RRR\nabla_{s}\RRR_{ij}+\frac{2}{n-2}\nabla_{s}\vert\Ric\vert^2g_{ij}-\frac{2}{(n-1)(n-2)}\nabla_{s}\RRR^2 g_{ij}\nonumber\\
&&-2\nabla_{s}\RRR_{kl}\WWW_{kijl}-2\RRR_{kl}\nabla_{s}\WWW_{kijl}+(\nabla_i\RRR_{sp}+\nabla_s\RRR_{ip}-\nabla_p\RRR_{is})\RRR_{jp}\nonumber\\
&&+(\nabla_j\RRR_{sp}+\nabla_s\RRR_{jp}-\nabla_p\RRR_{js})\RRR_{ip}\nonumber\\
&=&\nabla_{s}\Delta\RRR_{ij}-\frac{n+2}{n-2}\nabla_{s}\RRR_{ip}\RRR_{jp}-\frac{n+2}{n-2}\RRR_{ip}\nabla_{s}\RRR_{jp}
+\frac{2n}{(n-1)(n-2)}\nabla_{s}\RRR\RRR_{ij}\nonumber\\
&&+\frac{2n}{(n-1)(n-2)}\RRR\nabla_{s}\RRR_{ij}+\frac{2}{n-2}\nabla_{s}\vert\Ric\vert^2g_{ij}-\frac{2}{(n-1)(n-2)}\nabla_{s}\RRR^2 g_{ij}\nonumber\\
&&-2\nabla_{s}\RRR_{kl}\WWW_{kijl}-2\RRR_{kl}\nabla_{s}\WWW_{kijl}+(\nabla_i\RRR_{sp}-\nabla_p\RRR_{is})\RRR_{jp}+(\nabla_j\RRR_{sp}-\nabla_p\RRR_{js})\RRR_{ip}\nonumber\\
&=&\nabla_{s}\Delta\RRR_{ij}-\frac{n+2}{n-2}\nabla_{s}\RRR_{ip}\RRR_{jp}-\frac{n+2}{n-2}\RRR_{ip}\nabla_{s}\RRR_{jp}
+\frac{2n}{(n-1)(n-2)}\nabla_{s}\RRR\RRR_{ij}\nonumber\\
&&+\frac{2n}{(n-1)(n-2)}\RRR\nabla_{s}\RRR_{ij}+\frac{2}{n-2}\nabla_{s}\vert\Ric\vert^2g_{ij}-\frac{2}{(n-1)(n-2)}\nabla_{s}\RRR^2
g_{ij}\nonumber\\
&&-2\nabla_{s}\RRR_{kl}\WWW_{kijl}-2\RRR_{kl}\nabla_{s}\WWW_{kijl}+\CCC_{spi}\RRR_{jp}+\CCC_{spj}\RRR_{ip}\nonumber\\
&&+\frac{1}{2(n-1)}\RRR_{jp}[\nabla_i\RRR g_{sp}-\nabla_p\RRR g_{is}]+\frac{1}{2(n-1)}\RRR_{ip}[\nabla_j\RRR g_{sp}-\nabla_p\RRR g_{js}]\,,\nonumber
\end{eqnarray}
where in the last passage we substituted the expression of the Cotton tensor.
We then compute,
\begin{eqnarray*}
\frac{\partial\,}{\partial t}\CCC_{ijk}
&=&\frac{\partial\,}{\partial
t}\nabla_k\RRR_{ij} -\frac{\partial\,}{\partial t} \nabla_j\RRR_{ik}
-\frac{1}{2(n-1)}\frac{\partial\,}{\partial t}\big(\nabla_k\RRR g_{ij} - \nabla_j\RRR
g_{ik} \big)\\
&=&\nabla_{k}\Delta\RRR_{ij}-\frac{n+2}{n-2}\nabla_{k}\RRR_{ip}\RRR_{jp}-\frac{n+2}{n-2}\RRR_{ip}\nabla_{k}\RRR_{jp}
+\frac{2n}{(n-1)(n-2)}\nabla_{k}\RRR\RRR_{ij}\nonumber\\
&&+\frac{2n}{(n-1)(n-2)}\RRR\nabla_{k}\RRR_{ij}+\frac{2}{n-2}\nabla_{k}\vert\Ric\vert^2g_{ij}-\frac{2}{(n-1)(n-2)}\nabla_{k}\RRR^2
g_{ij}\nonumber\\
&&-2\nabla_{k}\RRR_{pl}\WWW_{pijl}-2\RRR_{pl}\nabla_{k}\WWW_{pijl}+\CCC_{kpi}\RRR_{jp}+\CCC_{kpj}\RRR_{ip}\\
&&+\frac{\RRR_{jp}}{2(n-1)}[\nabla_i\RRR g_{kp}-\nabla_p\RRR g_{ik}]
+\frac{\RRR_{ip}}{2(n-1)}[\nabla_j\RRR g_{kp}-\nabla_p\RRR g_{jk}]\\
&&-\nabla_{j}\Delta\RRR_{ik}+\frac{n+2}{n-2}\nabla_{j}\RRR_{ip}\RRR_{kp}+\frac{n+2}{n-2}\RRR_{ip}\nabla_{j}\RRR_{kp}
-\frac{2n}{(n-1)(n-2)}\nabla_{j}\RRR\RRR_{ik}\nonumber\\
&&-\frac{2n}{(n-1)(n-2)}\RRR\nabla_{j}\RRR_{ik}-\frac{2}{n-2}\nabla_{j}\vert\Ric\vert^2g_{ik}+\frac{2}{(n-1)(n-2)}\nabla_{j}\RRR^2
g_{ik}\nonumber\\
&&+2\nabla_{j}\RRR_{pl}\WWW_{pikl}+2\RRR_{pl}\nabla_{j}\WWW_{pikl}-\CCC_{jpi}\RRR_{kp}-\CCC_{jpk}\RRR_{ip}\\
&&-\frac{\RRR_{kp}}{2(n-1)}[\nabla_i\RRR g_{jp}-\nabla_p\RRR g_{ij}]
-\frac{\RRR_{ip}}{2(n-1)}[\nabla_k\RRR g_{jp}-\nabla_p\RRR g_{kj}]\\
&&+\frac{1}{n-1}(\RRR_{ij}\nabla_k\RRR -\RRR_{ik}\nabla_j\RRR\big)\\
&&-\big(\nabla_k\Delta\RRR+2\nabla_{k}|\Ric|^2\big)\frac{g_{ij}}{2(n-1)}
+\big(\nabla_j\Delta\RRR+2\nabla_{j}|\Ric|^2\big) \frac{g_{ik}}{2(n-1)}\\
&=&\nabla_{k}\Delta\RRR_{ij}-\frac{n+2}{n-2}\nabla_{k}\RRR_{ip}\RRR_{jp}-\frac{n+2}{n-2}\RRR_{ip}\nabla_{k}\RRR_{jp}
\\&&+\frac{5n-2}{2(n-1)(n-2)}\nabla_{k}\RRR\RRR_{ij}+\frac{2n}{(n-1)(n-2)}\RRR\nabla_{k}\RRR_{ij}\\
&&+\frac{n}{(n-1)(n-2)}\nabla_{k}\vert\Ric\vert^2g_{ij}-\frac{2}{(n-1)(n-2)}\nabla_{k}\RRR^2
g_{ij}\\
&&+\CCC_{kpi}\RRR_{jp}+\CCC_{kpj}\RRR_{ip}-2\nabla_k \RRR_{pl}\WWW_{pijl}-2\RRR_{pl}\nabla_k\WWW_{pijl}\\
&&-\frac{1}{2(n-1)}\RRR_{pj}\nabla_p\RRR g_{ik}\\
&&-\nabla_{j}\Delta\RRR_{ik}+\frac{n+2}{n-2}\nabla_{j}\RRR_{ip}\RRR_{kp}+\frac{n+2}{n-2}\RRR_{ip}\nabla_{j}\RRR_{kp}
\\&&-\frac{5n-2}{2(n-1)(n-2)}\nabla_{j}\RRR\RRR_{ik}-\frac{2n}{(n-1)(n-2)}\RRR\nabla_{j}\RRR_{ik}\\
&&-\frac{n}{(n-1)(n-2)}\nabla_{j}\vert\Ric\vert^2g_{ik}+\frac{2}{(n-1)(n-2)}\nabla_{j}\RRR^2
g_{ik}\\
&&-\CCC_{jpi}\RRR_{kp}-\CCC_{jpk}\RRR_{ip}+2\nabla_j\RRR_{pl}\WWW_{pikl}+2\RRR_{pl}\nabla_j\WWW_{pikl}\\
&&+\frac{1}{2(n-1)}\nabla_l\RRR\RRR_{lk}g_{ij}-\frac{1}{2(n-1)}\nabla_k\Delta\RRR g_{ij}+\frac{1}{2(n-1)}\nabla_j\Delta\RRR g_{ik}
\end{eqnarray*}
and
$$
\Delta\CCC_{ijk}=\Delta\nabla_k\RRR_{ij}-\Delta\nabla_j\RRR_{ik}-\frac{1}{2(n-1)}\Delta\nabla_k\RRR
g_{ij}+\frac{1}{2(n-1)}\Delta\nabla_j\RRR g_{ik}\,,
$$
hence,
\begin{eqnarray*}
\frac{\partial\,}{\partial t}\CCC_{ijk}-\Delta\CCC_{ijk}
&=&\nabla_{k}\Delta\RRR_{ij}-\nabla_{j}\Delta\RRR_{ik}-\Delta\nabla_k\RRR_{ij}+\Delta\nabla_j\RRR_{ik}\\
&&-\frac{1}{2(n-1)}(\nabla_k\Delta\RRR g_{ij} - \nabla_j\Delta\RRR
g_{ik}-\Delta\nabla_k\RRR g_{ij}+\Delta\nabla_j\RRR g_{ik})\\
&&-\frac{n+2}{n-2}(\nabla_{k}\RRR_{ip}\RRR_{jp}+\RRR_{ip}\nabla_{k}\RRR_{jp})
+\frac{5n-2}{2(n-1)(n-2)}\nabla_{k}\RRR\RRR_{ij}\\
&&+\frac{2n}{(n-1)(n-2)}\RRR\nabla_{k}\RRR_{ij}\\
&&+\frac{n}{(n-1)(n-2)}\nabla_{k}\vert\Ric\vert^2g_{ij}-\frac{2}{(n-1)(n-2)}\nabla_{k}\RRR^2
g_{ij}\\
&&+\CCC_{kpi}\RRR_{jp}+\CCC_{kpj}\RRR_{ip}-2\nabla_k\RRR_{pl}\WWW_{pijl}-2\RRR_{pl}\nabla_k\WWW_{pijl} \\
&&-\frac{1}{2(n-1)}\RRR_{jp}\nabla_p\RRR g_{ik}\\
&&+\frac{n+2}{n-2}(\nabla_{j}\RRR_{ip}\RRR_{kp}+\RRR_{ip}\nabla_{j}\RRR_{kp})
-\frac{5n-2}{2(n-1)(n-2)}\nabla_{j}\RRR\RRR_{ik}\\
&&-\frac{2n}{(n-1)(n-2)}\RRR\nabla_{j}\RRR_{ik}\\
&&-\frac{n}{(n-1)(n-2)}\nabla_{j}\vert\Ric\vert^2g_{ik}+\frac{2}{(n-1)(n-2)}\nabla_{j}\RRR^2
g_{ik}\\
&&-\CCC_{jpi}\RRR_{kp}-\CCC_{jpk}\RRR_{ip}+2\nabla_j\RRR_{pl}\WWW_{pikl}+2\RRR_{pl}\nabla_j\WWW_{pikl}\\
&&+\frac{1}{2(n-1)}\RRR_{kp}\nabla_p\RRR g_{ij}\,.
\end{eqnarray*}
Now to proceed, we need the following commutation rules for the
derivatives of the Ricci tensor and of the scalar curvature, where we
will employ the decomposition formula of the Riemann tensor~\eqref{decriem}.
\begin{eqnarray*}
\nabla_k\Delta\RRR_{ij}-\Delta\nabla_k\RRR_{ij}
&=&\nabla^3_{kll}\RRR_{ij}-\nabla^3_{lkl}\RRR_{ij}+
\nabla^3_{lkl}\RRR_{ij}-\nabla^3_{llk}\RRR_{ij}\\
&=&-\RRR_{kp}\nabla_p\RRR_{ij}
+\RRR_{klip}\nabla_l\RRR_{jp}
+\RRR_{kljp}\nabla_l\RRR_{ip}\\
&&+\nabla^3_{lkl}\RRR_{ij}-\nabla^3_{llk}\RRR_{ij}\\
&=&-\RRR_{kp}\nabla_p\RRR_{ij}
+\frac{1}{2(n-2)}(\RRR_{ik}\nabla_j\RRR
+\RRR_{jk}\nabla_i\RRR)\\
&&-\frac{1}{n-2}(\RRR_{kp}\nabla_i\RRR_{jp}
+\RRR_{kp}\nabla_j\RRR_{ip}
-\RRR_{lp}\nabla_l\RRR_{jp}g_{ik}
-\RRR_{lp}\nabla_l\RRR_{ip}g_{jk})\\
&&-\frac{1}{n-2}(\RRR_{li}\nabla_l\RRR_{jk}
+\RRR_{lj}\nabla_l\RRR_{ik})
-\frac{1}{2(n-1)(n-2)}(\RRR\nabla_j\RRR g_{ik}
+\RRR\nabla_i\RRR g_{jk})\\
&&+\frac{1}{(n-1)(n-2)}(\RRR\nabla_i\RRR_{jk}
+\RRR\nabla_j\RRR_{ik})\\
&&+\nabla_l\big(\RRR_{klip}\RRR_{pj}+\RRR_{kljp}\RRR_{pi}\big)\\
&&+\WWW_{kljp}\nabla_l\RRR_{ip}+\WWW_{klip}\nabla_l\RRR_{jp}\\
&=&-\RRR_{kp}\nabla_p\RRR_{ij}
+\frac{1}{2(n-2)}(\RRR_{ik}\nabla_j\RRR
+\RRR_{jk}\nabla_i\RRR)\\
&&-\frac{1}{n-2}(\RRR_{kp}\nabla_i\RRR_{jp}
+\RRR_{kp}\nabla_j\RRR_{ip}
-\RRR_{lp}\nabla_l\RRR_{jp}g_{ik}
-\RRR_{lp}\nabla_l\RRR_{ip}g_{jk})\\
&&-\frac{1}{n-2}(\RRR_{li}\nabla_l\RRR_{jk}
+\RRR_{lj}\nabla_l\RRR_{ik})
-\frac{1}{2(n-1)(n-2)}(\RRR\nabla_j\RRR g_{ik}
+\RRR\nabla_i\RRR g_{jk})\\
&&+\frac{1}{(n-1)(n-2)}(\RRR\nabla_i\RRR_{jk}
+\RRR\nabla_j\RRR_{ik})\\
&&+\nabla_l\Big(\frac{1}{n-2}(\RRR_{ki}g_{pl}\RRR_{pj}+\RRR_{pl}g_{ki}\RRR_{pj}-\RRR_{li}g_{kp}\RRR_{pj}-\RRR_{kp}g_{li}\RRR_{pj})\\
&&-\frac{1}{(n-1)(n-2)}(\RRR\RRR_{pj}g_{ki}g_{lp}-\RRR\RRR_{pj}g_{kp}g_{il})+\WWW_{klip}\RRR_{pj}\\
&&+\frac{1}{n-2}(\RRR_{kj}g_{lp}\RRR_{pi}+\RRR_{lp}g_{kj}\RRR_{pi}-\RRR_{lj}g_{kp}\RRR_{pi}-\RRR_{kp}g_{lj}\RRR_{pi})\\
&&-\frac{1}{(n-1)(n-2)}(\RRR\RRR_{pi}g_{kj}g_{pl}-\RRR\RRR_{pi}g_{kp}g_{lj})+\WWW_{kljp}\RRR_{pi}\Big)\\
&&+\WWW_{kljp}\nabla_l\RRR_{ip}+\WWW_{klip}\nabla_l\RRR_{jp}\\
&=&-\RRR_{kp}\nabla_p\RRR_{ij}
+\frac{1}{2(n-2)}(\RRR_{ik}\nabla_j\RRR
+\RRR_{jk}\nabla_i\RRR)\\
&&-\frac{1}{n-2}(\RRR_{kp}\nabla_i\RRR_{jp}
+\RRR_{kp}\nabla_j\RRR_{ip}
-\RRR_{lp}\nabla_l\RRR_{jp}g_{ik}
-\RRR_{lp}\nabla_l\RRR_{ip}g_{jk})\\
&&-\frac{1}{n-2}(\RRR_{li}\nabla_l\RRR_{jk}
+\RRR_{lj}\nabla_l\RRR_{ik})
-\frac{1}{2(n-1)(n-2)}(\RRR\nabla_j\RRR g_{ik}
+\RRR\nabla_i\RRR g_{jk})\\
&&+\frac{1}{(n-1)(n-2)}(\RRR\nabla_i\RRR_{jk}
+\RRR\nabla_j\RRR_{ik})\\
&&+\frac{1}{n-2}(\nabla_p\RRR_{ki}\RRR_{pj}+\RRR_{ki}\nabla_j\RRR/2+\nabla_p\RRR g_{ki}\RRR_{pj}/2\\
&&+\RRR_{lp}\nabla_l\RRR_{pj}g_{ik}-\nabla_i\RRR\RRR_{jk}/2-\RRR_{pi}\nabla_p\RRR_{kj}-\nabla_i\RRR_{kp}\RRR_{pj}\\
&&-\RRR_{kp}\nabla_i\RRR_{pj})\\
&&-\frac{1}{(n-1)(n-2)}(\nabla_p\RRR\RRR_{pj}g_{ik}+\RRR\nabla_j\RRR g_{ik}/2-\nabla_i\RRR\RRR_{kj}-\RRR\nabla_i\RRR_{jk})\\
&&+\frac{n-3}{n-2}\CCC_{kip}\RRR_{pj}+\WWW_{klip}\nabla_l\RRR_{pj}\end{eqnarray*}
\begin{eqnarray*}
\phantom{\nabla_k\Delta\RRR_{ij}-\Delta\nabla_k\RRR_{ij}}
&\phantom{=}&+\frac{1}{n-2}(\nabla_p\RRR_{kj}\RRR_{pi}+\RRR_{kj}\nabla_i\RRR/2+\nabla_p\RRR g_{kj}\RRR_{pi}/2\\
&&+\RRR_{lp}g_{kj}\nabla_l\RRR_{pi}-\nabla_j\RRR\RRR_{ki}/2-\RRR_{pj}\nabla_p\RRR_{ki}-\nabla_j\RRR_{kp}\RRR_{pi}-\RRR_{kp}\nabla_j\RRR_{pi})\\
&&-\frac{1}{(n-1)(n-2)}(\nabla_p\RRR\RRR_{pi}g_{kj}+\RRR\nabla_i\RRR g_{jk}/2-\nabla_j\RRR \RRR_{ki}-\RRR\nabla_j\RRR_{ki})\\
&&+\frac{n-3}{n-2}\CCC_{kjp}\RRR_{pi}+\WWW_{kljp}\nabla_l\RRR_{pi}\\
&&+\WWW_{kljp}\nabla_l\RRR_{ip}+\WWW_{klip}\nabla_l\RRR_{jp}\\
&=&-\RRR_{kp}\nabla_p\RRR_{ij}+\frac{n+1}{2(n-1)(n-2)}\RRR_{kj}\nabla_i\RRR-\frac{2}{n-2}\RRR_{kp}\nabla_j\RRR_{ip}\\
&&+\frac{2}{n-2}\RRR_{lp}\nabla_l\RRR_{pi}g_{jk}-\frac{1}{n-2}\RRR_{pj}\nabla_p\RRR_{ik}-\frac{1}{(n-1)(n-2)}\RRR\nabla_i\RRR g_{jk}\\
&&+\frac{2}{(n-1)(n-2)}\RRR\nabla_j\RRR_{ik}+\frac{n+1}{2(n-1)(n-2)}\nabla_j\RRR\RRR_{ki}-\frac{2}{n-2}\RRR_{kp}\nabla_i\RRR_{jp}\\
&&+\frac{2}{n-2}\RRR_{lp}\nabla_l\RRR_{pj}g_{ik}-\frac{1}{n-2}\RRR_{pi}\nabla_p\RRR_{jk}-\frac{1}{(n-1)(n-2)}\RRR\nabla_j\RRR g_{ik}\\
&&+\frac{2}{(n-1)(n-2)}\RRR\nabla_i\RRR_{jk}+2\WWW_{kljp}\nabla_l\RRR_{pi}+2\WWW_{klip}\nabla_l\RRR_{pj}\\
&&+\frac{n-3}{2(n-1)(n-2)}(\nabla_p\RRR g_{ik}\RRR_{pj}+\nabla_p\RRR g_{jk}\RRR_{pi})\\
&&+\frac{n-3}{n-2}(\CCC_{kip}\RRR_{pj}+\CCC_{kjp}\RRR_{pi})-\frac{1}{n-2}(\nabla_i\RRR_{kp}\RRR_{pj}+\nabla_j\RRR_{kp}\RRR_{pi})
\end{eqnarray*}
and
$$
\nabla_k\Delta\RRR-\Delta\nabla_k\RRR=\RRR_{kllp}\nabla_p\RRR=-\RRR_{kp}\nabla_p\RRR\,.
$$
Then, getting back to the main computation, we obtain
\begin{eqnarray*}
\frac{\partial\,}{\partial t}\CCC_{ijk}-\Delta\CCC_{ijk}
&=&-\RRR_{kp}\nabla_p\RRR_{ij}+\frac{n+1}{2(n-1)(n-2)}\RRR_{kj}\nabla_i\RRR\\
&&-\frac{2}{n-2}\RRR_{kp}\nabla_j\RRR_{ip}+\frac{2}{n-2}\RRR_{lp}\nabla_l\RRR_{pi}g_{jk}\\
&&-\frac{1}{n-2}\RRR_{jp}\nabla_p\RRR_{ik}-\frac{1}{(n-1)(n-2)}\RRR\nabla_i\RRR g_{jk}\\
&&+\frac{2}{(n-1)(n-2)}\RRR\nabla_j\RRR_{ik}+\frac{n+1}{2(n-1)(n-2)}\nabla_j\RRR\RRR_{ki}\\
&&-\frac{2}{n-2}\RRR_{kp}\nabla_i\RRR_{pj}+\frac{2}{n-2}\RRR_{lp}\nabla_l\RRR_{pj}g_{ik}\\
&&-\frac{1}{n-2}\RRR_{pi}\nabla_p\RRR_{kj}-\frac{1}{(n-1)(n-2)}\RRR\nabla_j\RRR g_{ik}\\
&&+\frac{2}{(n-1)(n-2)}\RRR\nabla_i\RRR_{jk}+2\WWW_{kljp}\nabla_l\RRR_{pi}+2\WWW_{klip}\nabla_l\RRR_{pj}\\
&&+\frac{n-3}{2(n-1)(n-2)}(\nabla_p\RRR g_{ik}\RRR_{pj}+\nabla_p\RRR g_{jk}\RRR_{pi})\\
&&+\frac{n-3}{n-2}(\CCC_{kip}\RRR_{pj}+\CCC_{kjp}\RRR_{pi})\\
&&-\frac{1}{n-2}(\nabla_i\RRR_{kp}\RRR_{jp}+\nabla_j\RRR_{kp}\RRR_{pi})\\
&&+\RRR_{jp}\nabla_p\RRR_{ik}-\frac{n+1}{2(n-1)(n-2)}\RRR_{kj}\nabla_i\RRR + \frac{2}{n-2}\RRR_{jp}\nabla_k\RRR_{ip}\\
&&-\frac{2}{n-2}\RRR_{lp}\nabla_l\RRR_{pi} g_{kj}+\frac{1}{n-2}\RRR_{pk}\nabla_p\RRR_{ij}\\
&&+\frac{1}{(n-1)(n-2)}\RRR\nabla_i\RRR g_{jk}-\frac{2}{(n-1)(n-2)}\RRR\nabla_k\RRR_{ij}\\
&&-\frac{n+1}{2(n-1)(n-2)}\nabla_k\RRR\RRR_{ij}+\frac{2}{n-2}\RRR_{jp}\nabla_i\RRR_{kp}\\
&&-\frac{2}{n-2}\RRR_{lp}\nabla_l\RRR_{pk}g_{ij}+\frac{1}{n-2}\RRR_{pi}\nabla_p\RRR_{kj}\\
&&+\frac{1}{(n-1)(n-2)}\RRR\nabla_k\RRR g_{ij}-\frac{2}{(n-1)(n-2)}\RRR\nabla_i\RRR_{kj}\\
&&-2\WWW_{jlkp}\nabla_l\RRR_{pi}-2\WWW_{jlip}\nabla_l\RRR_{pk}\\
&&-\frac{n-3}{2(n-1)(n-2)}(\nabla_p\RRR g_{ij}\RRR_{pk}+\nabla_p\RRR g_{jk}\RRR_{pi})\\
&&-\frac{n-3}{n-2}(\CCC_{jip}\RRR_{pk}+\CCC_{jkp}\RRR_{pi})+\frac{1}{n-2}(\nabla_i\RRR_{pj}\RRR_{pk}+\nabla_k\RRR_{jp}\RRR_{pi})\\
&&+\frac{1}{2(n-1)}(\RRR_{kp}\nabla_p\RRR g_{ij}-\RRR_{jp}\nabla_p\RRR g_{ki})-\frac{n+2}{n-2}(\nabla_k\RRR_{pi}\RRR_{pj}+\RRR_{pi}\nabla_k\RRR_{pj})\\
&&+\frac{n}{(n-1)(n-2)}\nabla_k|\Ric|^2 g_{ij}+\frac{5n-2}{2(n-1)(n-2)}\nabla_k\RRR \RRR_{ij}\\
&&+\frac{2n}{(n-1)(n-2)}\RRR\nabla_k\RRR_{ij}-\frac{2}{(n-1)(n-2)}\nabla_k\RRR^2 g_{ij}\\
&&-2\nabla_k\RRR_{pl}\WWW_{pijl}-2\RRR_{pl}\nabla_k\WWW_{pijl}\\
&&+\CCC_{kli}\RRR_{lj}-\frac{1}{2(n-1)}\nabla_l\RRR\RRR_{lj} g_{ik}+\CCC_{klj}\RRR_{li}\end{eqnarray*}
\begin{eqnarray*}\phantom{\frac{\partial\,}{\partial t}\CCC_{ijk}-\Delta\CCC_{ijk}}&\phantom{=}&+\frac{n+2}{n-2}(\nabla_j\RRR_{pi}\RRR_{pk}+\RRR_{pi}\nabla_j\RRR_{pk})\\
&&-\frac{n}{(n-1)(n-2)}\nabla_j|\Ric|^2g_{ki}-\frac{5n-2}{2(n-1)(n-2)}\nabla_j \RRR\RRR_{ik}\\
&&-\frac{2n}{(n-1)(n-2)}\RRR\nabla_j\RRR_{ik}+\frac{2}{(n-1)(n-2)}\nabla_j\RRR^2 g_{ik}\\
&&+2\nabla_j\RRR_{pl}\WWW_{pikl}+2\RRR_{pl}\nabla_j\WWW_{pikl}\\
&&-\CCC_{jli}\RRR_{lk}+\frac{1}{2(n-1)}\nabla_l\RRR\RRR_{lk}g_{ij}-\CCC_{jlk}\RRR_{li}
\\
&=&\frac{1}{n-2}(\RRR_{pi}\CCC_{jkp}+\RRR_{pk}\CCC_{jip}-\CCC_{kip}\RRR_{pj}-\CCC_{kjp}\RRR_{pi})\\
&&+\Big[\frac{2}{n-2}\RRR_{lp}\nabla_l\RRR_{pj}+\frac{3}{2(n-1)(n-2)}\nabla_j\RRR^2\\
&&-\frac{1}{2(n-2)}\nabla_p\RRR\RRR_{pj}-\frac{n}{(n-1)(n-2)}\nabla_j|\Ric|^2\Big]g_{ik}\\
&&-\Big[\frac{2}{n-2}\RRR_{lp}\nabla_l\RRR_{pk}+\frac{3}{2(n-1)(n-2)}\nabla_k\RRR^2\\
&&-\frac{1}{2(n-2)}\nabla_p\RRR\RRR_{pk}-\frac{n}{(n-1)(n-2)}\nabla_k|\Ric|^2\Big]g_{ij}\\
&&-\frac{n-3}{n-2}\RRR_{kp}\nabla_p\RRR_{ij}+\frac{n-3}{n-2}\RRR_{pj}\nabla_p\RRR_{ik}\\
&&+\frac{n}{n-2}\RRR_{kp}\nabla_j\RRR_{pi}+\frac{n+1}{n-2}\nabla_j\RRR_{pk}\RRR_{pi}-\frac{2}{n-2}\RRR\nabla_j\RRR_{ik}\\
&&-\frac{4n-3}{2(n-1)(n-2)}\nabla_j\RRR\RRR_{ik}-\frac{1}{n-2}\RRR_{kp}\nabla_i\RRR_{pj}+\frac{1}{n-2}\RRR_{pj}\nabla_i\RRR_{pk}\\
&&-\frac{n}{n-2}\RRR_{jp}\nabla_k\RRR_{ip}-\frac{n+1}{n-2}\nabla_k\RRR_{jp}\RRR_{ip}+\frac{2}{n-2}\RRR\nabla_k\RRR_{ij}\\
&&+\frac{4n-3}{2(n-1)(n-2)}\nabla_k\RRR\RRR_{ij}+2\WWW_{klip}\nabla_l\RRR_{pj}+2\WWW_{kljp}\nabla_l\RRR_{pi}-2\WWW_{jlkp}\nabla_l\RRR_{pi}\\
&&-2\WWW_{jlip}\nabla_l\RRR_{pk}-2\nabla_k\RRR_{pl}\WWW_{pijl}-2\RRR_{pl}\nabla_k\WWW_{pijl}+2\nabla_j\RRR_{pl}\WWW_{pikl}+2\RRR_{pl}\nabla_j\WWW_{pikl}\,.
\end{eqnarray*}
Now, by means of the very definition of the Cotton tensor~\eqref{Cottonn}, the
identities~\eqref{CottonSym}, and the symmetries of the Weyl tensor,
we substitute
\begin{align*}
\CCC_{kpj}-\CCC_{jpk}=&\,-\CCC_{kjp}-\CCC_{jpk}=\CCC_{pkj}\\
\nabla_l\RRR_{jp}=&\,\nabla_j\RRR_{lp}+\CCC_{pjl}+\frac{1}{2(n-1)}
\big(\nabla_l\RRR g_{pj} - \nabla_j\RRR g_{pl} \big)\\
\nabla_l\RRR_{kp}=&\,\nabla_k\RRR_{lp}+\CCC_{pkl}+\frac{1}{2(n-1)}
\big(\nabla_l\RRR g_{pk} - \nabla_k\RRR g_{pl} \big)\\
\nabla_i\RRR_{jp}=&\,\nabla_j\RRR_{ip}+\CCC_{pji}+\frac{1}{2(n-1)}
\big(\nabla_i\RRR g_{jp} - \nabla_j\RRR g_{ip} \big)\\
\nabla_i\RRR_{kp}=&\,\nabla_k\RRR_{ip}+\CCC_{pki}+\frac{1}{2(n-1)}
\big(\nabla_i\RRR g_{kp} - \nabla_k\RRR g_{ip} \big)\\
\nabla_p\RRR_{ij}=&\,\nabla_j\RRR_{pi}+\CCC_{ijp}+\frac{1}{2(n-1)}
\big(\nabla_p\RRR g_{ji} - \nabla_j\RRR g_{pi} \big)\\
\nabla_p\RRR_{ik}=&\,\nabla_k\RRR_{pi}+\CCC_{ikp}+\frac{1}{2(n-1)}
\big(\nabla_p\RRR g_{ki} - \nabla_k\RRR g_{pi} \big)
\end{align*}
in the last expression above, getting
\begin{eqnarray*}
\frac{\partial\,}{\partial t}\CCC_{ijk}-\Delta\CCC_{ijk}
&=&\frac{1}{n-2}(\RRR_{pi}\CCC_{pkj}+\RRR_{pk}\CCC_{jip}-\CCC_{kip}\RRR_{pj})\\
&&+\Big[\frac{2}{n-2}\RRR_{lp}\Big(\nabla_j\RRR_{lp}+\CCC_{pjl}+\frac{1}{2(n-1)}\nabla_l\RRR g_{pj}\\
&&-\frac{1}{2(n-1)}\nabla_j\RRR g_{pl})\Big)+\frac{3}{2(n-1)(n-2)}\nabla_j\RRR^2\\
&&-\frac{1}{2(n-2)}\nabla_p\RRR\RRR_{pj}-\frac{n}{(n-1)(n-2)}\nabla_j|\Ric|^2\Big]g_{ik}\\
&&-\Big[\frac{2}{n-2}\RRR_{lp}\Big(\nabla_k\RRR_{pl}+\CCC_{pkl}+\frac{1}{2(n-1)}\nabla_l\RRR g_{pk}\\
&&-\frac{1}{2(n-1)}\nabla_k\RRR g_{pl}\Big)+\frac{3}{2(n-1)(n-2)}\nabla_k\RRR^2\\
&&-\frac{1}{2(n-2)}\nabla_p\RRR\RRR_{pk}-\frac{n}{(n-1)(n-2)}\nabla_k|\Ric|^2\Big]g_{ij}\\
&&-\frac{n-3}{n-2}\RRR_{kp}\Big(\CCC_{ijp}+\nabla_j\RRR_{ip}+\frac{1}{2(n-1)}(\nabla_p\RRR g_{ij}-\nabla_j\RRR g_{ip})\Big)\\
&&+\frac{n-3}{n-2}\RRR_{pj}\Big(\CCC_{ikp}+\nabla_k\RRR_{ip}+\frac{1}{2(n-1)}(\nabla_p\RRR g_{ik}-\nabla_k\RRR g_{ip})\Big)\\
&&+\frac{n}{n-2}\RRR_{kp}\nabla_j\RRR_{pi}+\frac{n+1}{n-2}\nabla_j\RRR_{pk}\RRR_{pi}-\frac{2}{n-2}\RRR\nabla_j\RRR_{ik}\\
&&-\frac{4n-3}{2(n-1)(n-2)}\nabla_j\RRR\RRR_{ik}\\
&&-\frac{1}{n-2}\RRR_{kp}\Big(\nabla_j\RRR_{ip}+\CCC_{pji}+\frac{1}{2(n-1)}(\nabla_i\RRR g_{jp}-\nabla_j\RRR g_{ip})\Big)\\
&&+\frac{1}{n-2}\RRR_{pj}\Big(\nabla_k\RRR_{ip}+\CCC_{kpi}+\frac{1}{2(n-1)}(\nabla_i\RRR g_{kp}-\nabla_k\RRR g_{ip})\Big)\\
&&-\frac{n}{n-2}\RRR_{jp}\nabla_k\RRR_{ip}-\frac{n+1}{n-2}\nabla_k\RRR_{jp}\RRR_{ip}+\frac{2}{n-2}\RRR\nabla_k\RRR_{ij}\\
&&+\frac{4n-3}{2(n-1)(n-2)}\nabla_k\RRR\RRR_{ij}\\
&&+2\CCC_{plj}\WWW_{pikl}-2\CCC_{plk}\WWW_{pijl}-2\CCC_{pil}\WWW_{jklp}\\
&&-2\WWW_{jklp}\nabla_i\RRR_{pl}-2\RRR_{pl}\nabla_k\WWW_{pijl}+2\RRR_{pl}\nabla_j\WWW_{pikl}\\
&=&\frac{1}{n-2}\left(\RRR_{pi}\CCC_{pkj}+\RRR_{pk}(\CCC_{jip}-\CCC_{pji}-(n-3)\CCC_{ijp})+\RRR_{pj}(\CCC_{pki}-\CCC_{kip}+(n-3)\CCC_{ikp})\right)\\
&&+\frac{2}{n-2}\CCC_{pjl}\RRR_{pl}g_{ik}-\frac{2}{n-2}\CCC_{pkl}\RRR_{pl}g_{ij}-2\CCC_{pjl}\WWW_{pikl}+2\CCC_{pkl}\WWW_{pijl}-2\CCC_{pil}\WWW_{jklp}\\
&&+g_{ik}\Big[\frac{\nabla_j\RRR^2}{(n-1)(n-2)}-\frac{1}{(n-1)(n-2)}\nabla_j|\Ric|^2\Big]\\
&&-g_{ij}\Big[\frac{\nabla_k\RRR^2}{(n-1)(n-2)}-\frac{1}{(n-1)(n-2)}\nabla_k|\Ric|^2\Big]\\
&&-\frac{2}{n-2}\RRR_{jp}\nabla_k\RRR_{ip}-\frac{n+1}{n-2}\nabla_k\RRR_{jp}\RRR_{ip}+\frac{3n-1}{2(n-1)(n-2)}\nabla_k\RRR\RRR_{ij}+\frac{2}{n-2}\RRR\nabla_k\RRR_{ij}\\
&&+\frac{2}{n-2}\RRR_{kp}\nabla_j\RRR_{ip}+\frac{n+1}{n-2}\nabla_j\RRR_{kp}\RRR_{ip}-\frac{3n-1}{2(n-1)(n-2)}\nabla_j\RRR \RRR_{ik}-\frac{2}{n-2}\RRR\nabla_j\RRR_{ik}\\
&&-2\WWW_{jklp}\nabla_i\RRR_{lp}-2\RRR_{lp}\nabla_k\WWW_{pijl}+2\RRR_{pl}\nabla_j\WWW_{pikl}\,.
\end{eqnarray*}
Then, we substitute again
\begin{align*}
\nabla_k\RRR_{jp}=&\,\nabla_p\RRR_{kj}+\CCC_{jpk}+\frac{1}{2(n-1)}
\big(\nabla_k\RRR g_{jp} - \nabla_p\RRR g_{jk} \big)\\
\nabla_j\RRR_{kp}=&\,\nabla_p\RRR_{jk}+\CCC_{kpj}+\frac{1}{2(n-1)}
\big(\nabla_j\RRR g_{kp} - \nabla_p\RRR g_{kj} \big)\\
\nabla_k\RRR_{ij}=&\,\nabla_i\RRR_{kj}+\CCC_{jik}+\frac{1}{2(n-1)}
\big(\nabla_k\RRR g_{ij} - \nabla_i\RRR g_{jk} \big)\\
\nabla_j\RRR_{ik}=&\,\nabla_i\RRR_{jk}+\CCC_{kij}+\frac{1}{2(n-1)}
\big(\nabla_j\RRR g_{ik} - \nabla_i\RRR g_{kj} \big)\,,
\end{align*}
finally obtaining
\begin{eqnarray*}
\frac{\partial\,}{\partial t}\CCC_{ijk}-\Delta\CCC_{ijk}
&=&\frac{1}{n-2}\left(\RRR_{pi}\CCC_{pkj}+\RRR_{pk}(\CCC_{jip}-\CCC_{pji}-(n-3)\CCC_{ijp})+\RRR_{pj}(\CCC_{pki}-\CCC_{kip}+(n-3)\CCC_{ikp})\right)\\
&&+\frac{2}{n-2}\CCC_{pjl}\RRR_{pl}g_{ik}-\frac{2}{n-2}\CCC_{pkl}\RRR_{pl}g_{ij}-2\CCC_{pjl}\WWW_{pikl}+2\CCC_{pkl}\WWW_{pijl}-2\CCC_{pil}\WWW_{jklp}\\
&&+g_{ik}\Big[\frac{\nabla_j\RRR^2}{(n-1)(n-2)}-\frac{1}{(n-1)(n-2)}\nabla_j|\Ric|^2\Big]\\
&&-g_{ij}\Big[\frac{\nabla_k\RRR^2}{(n-1)(n-2)}-\frac{1}{(n-1)(n-2)}\nabla_k|\Ric|^2\Big]\\
&&-\frac{2}{n-2}\RRR_{jp}\nabla_k\RRR_{ip}-\frac{n+1}{n-2}\RRR_{ip}\nabla_p\RRR_{kj}-\frac{n+1}{n-2}\RRR_{ip}\CCC_{jpk}\\
&&-\frac{n+1}{2(n-1)(n-2)}\RRR_{ij}\nabla_k\RRR + \frac{n+1}{2(n-1)(n-2)}\RRR_{ip}\nabla_p\RRR g_{jk}\\
&&+\frac{3n-1}{2(n-1)(n-2)}\nabla_k\RRR\RRR_{ij}+\frac{2}{n-2}\RRR(\nabla_i\RRR_{jk}+\CCC_{jik}+\frac{1}{2(n-1)}(\nabla_k\RRR g_{ij}-\nabla_i\RRR g_{jk}))\\
&&+\frac{2}{n-2}\RRR_{kp}\nabla_j\RRR_{ip}+\frac{n+1}{n-2}\RRR_{ip}\nabla_p\RRR_{kj}+\frac{n+1}{n-2}\RRR_{ip}\CCC_{kpj}+\frac{n+1}{2(n-1)(n-2)}\nabla_j\RRR \RRR_{ik}\\
&&-\frac{n+1}{2(n-1)(n-2)}\RRR_{ip}\nabla_p\RRR g_{jk}-\frac{3n-1}{2(n-1)(n-2)}\nabla_j\RRR \RRR_{ik}\\
&&-\frac{2}{n-2}\RRR(\nabla_i\RRR_{jk}+\CCC_{kij}+\frac{1}{2(n-1)}(\nabla_j\RRR g_{ik}-\nabla_i\RRR g_{jk}))\\
&&-2\WWW_{jklp}\nabla_i\RRR_{lp}-2\RRR_{lp}\nabla_k\WWW_{pijl}+2\RRR_{pl}\nabla_j\WWW_{pikl}\\
&=&\frac{1}{n-2}(\RRR_{pk}(\CCC_{jip}-\CCC_{pji}-(n-3)\CCC_{ijp})-\RRR_{pj}(\CCC_{kip}-\CCC_{pki}-(n-3)\CCC_{ikp})\\
&&+(n+2)\RRR_{pi}\CCC_{pkj})+\frac{2}{n-2}(\CCC_{pjl}\RRR_{pl}g_{ik}-\CCC_{pkl}\RRR_{pl}g_{ij})+\frac{2}{n-2}\RRR\CCC_{ijk}\\
&&-2\WWW_{pikl}\CCC_{pjl}+2\WWW_{pijl}\CCC_{pkl}-2\CCC_{pil}\WWW_{jklp}\\
&&+g_{ik}\Big[\frac{\nabla_j\RRR^2}{2(n-1)(n-2)}-\frac{1}{(n-1)(n-2)}\nabla_j|\Ric|^2\Big]\\
&&-g_{ij}\Big[\frac{\nabla_k\RRR^2}{2(n-1)(n-2)}-\frac{1}{(n-1)(n-2)}\nabla_k|\Ric|^2\Big]\\
&&-\frac{2}{n-2}\RRR_{jp}\nabla_k\RRR_{ip}+\frac{1}{n-2}\nabla_k\RRR\RRR_{ij}\\
&&+\frac{2}{n-2}\RRR_{kp}\nabla_j\RRR_{ip}-\frac{1}{n-2}\nabla_j\RRR\RRR_{ik}\\
&&+2\RRR_{lp}\nabla_j\WWW_{pikl}-2\RRR_{lp}\nabla_k\WWW_{pijl}\, ,
\end{eqnarray*}
where in the last passage we used again the
identities~\eqref{CottonSym} and the fact that
$$
\WWW_{jklp}\nabla_i\RRR_{lp}=\WWW_{jkpl}\nabla_i\RRR_{pl}=\WWW_{jkpl}\nabla_i\RRR_{lp}=-\WWW_{jklp}\nabla_i\RRR_{lp}\,.
$$
Hence, we can summarize this long computation in the following
proposition, getting back to a generic coordinate basis.
\begin{prop}\label{cotn}
During the Ricci flow of a $n$--dimensional Riemannian manifold $(M^n,
g(t))$,
the Cotton tensor satisfies the following evolution equation
\begin{eqnarray*}
\bigl(\partial_t-\Delta\bigr)\CCC_{ijk}&=&\frac{1}{n-2}\left[g^{pq}\RRR_{pj}(\CCC_{kqi}+\CCC_{qki}+(n-3)\CCC_{ikq})\right.\nonumber\\
&&\left.+(n+2)g^{pq}\RRR_{ip}\CCC_{qkj}
-g^{pq}\RRR_{pk}(\CCC_{jqi}+\CCC_{qji}+(n-3)\CCC_{ijq})\right]\\
&&+\frac{2}{n-2}\RRR\CCC_{ijk}+\frac{2}{n-2}\RRR^{ql}\CCC_{qjl}g_{ik}-\frac{2}{n-2}\RRR^{ql}\CCC_{qkl}g_{ij}\nonumber\\
&&+\frac{1}{(n-1)(n-2)}\nabla_k|\Ric|^2g_{ij}-\frac{1}{(n-1)(n-2)}\nabla_j|\Ric|^2g_{ik}\nonumber\\
&&+\frac{\RRR}{(n-1)(n-2)}\nabla_j\RRR g_{ik}
-\frac{\RRR}{(n-1)(n-2)}\nabla_k\RRR g_{ij}\nonumber\\
&&+\frac{2}{n-2}g^{pq}\RRR_{pk}\nabla_j\RRR_{qi}-\frac{2}{n-2}g^{pq}\RRR_{pj}\nabla_k\RRR_{qi}
+\frac{1}{n-2}\RRR_{ij}\nabla_k\RRR-\frac{1}{n-2}\RRR_{ik}\nabla_j\RRR\,\nonumber\\
&&-2g^{pq}\WWW_{pikl}\CCC_{qjl}+2g^{pq}\WWW_{pijl}\CCC_{qkl}-2g^{pq}\WWW_{jklp}\CCC_{qil}+2g^{pq}\RRR_{pl}\nabla_j\WWW_{qikl}-2g^{pq}\RRR_{pl}\nabla_{k}\WWW_{qijl}\,.\nonumber
\end{eqnarray*}
In particular, if the Cotton tensor vanishes identically along the flow, we obtain
\begin{eqnarray*}
0&=&\frac{1}{(n-1)(n-2)}\nabla_k|\Ric|^2g_{ij}-\frac{1}{(n-1)(n-2)}\nabla_j|\Ric|^2g_{ik}\nonumber\\
&&+\frac{\RRR}{(n-1)(n-2)}\nabla_j\RRR g_{ik}
-\frac{\RRR}{(n-1)(n-2)}\nabla_k\RRR g_{ij}\nonumber\\
&&+\frac{2}{n-2}g^{pq}\RRR_{pk}\nabla_j\RRR_{qi}-\frac{2}{n-2}g^{pq}\RRR_{pj}\nabla_k\RRR_{qi}
+\frac{1}{n-2}\RRR_{ij}\nabla_k\RRR-\frac{1}{n-2}\RRR_{ik}\nabla_j\RRR\,\nonumber\\
&&+2g^{pq}\RRR_{pl}\nabla_j\WWW_{qikl}-2g^{pq}\RRR_{pl}\nabla_{k}\WWW_{qijl}\,,\nonumber
\end{eqnarray*}
while, by virtue of relation~\eqref{CottonWeyl}, if the Weyl tensor vanishes along the flow, we obtain (compare with~\cite[Proposition~1.1 and Corollary~1.2]{mancat1})
\begin{eqnarray*}
0&=&\frac{1}{(n-1)(n-2)}\nabla_k|\Ric|^2g_{ij}-\frac{1}{(n-1)(n-2)}\nabla_j|\Ric|^2g_{ik}\nonumber\\
&&+\frac{\RRR}{(n-1)(n-2)}\nabla_j\RRR g_{ik}
-\frac{\RRR}{(n-1)(n-2)}\nabla_k\RRR g_{ij}\nonumber\\
&&+\frac{2}{n-2}g^{pq}\RRR_{pk}\nabla_j\RRR_{qi}-\frac{2}{n-2}g^{pq}\RRR_{pj}\nabla_k\RRR_{qi}
+\frac{1}{n-2}\RRR_{ij}\nabla_k\RRR-\frac{1}{n-2}\RRR_{ik}\nabla_j\RRR\,.\nonumber
\end{eqnarray*}
\end{prop}
\begin{cor}\label{CorEvn}
During the Ricci flow of a $n$--dimensional Riemannian manifold $(M^n,
g(t))$, the squared norm of the Cotton tensor satisfies the following evolution equation, in an orthonormal basis,
\begin{eqnarray*}
\bigl(\partial_t-\Delta\bigr)\vert\CCC_{ijk}\vert^2
&=&-2\vert \nabla \CCC_{ijk}\vert^2-\frac{16}{n-2}\CCC_{ipk}\CCC_{iqk}\RRR_{pq}
+\frac{24}{n-2}\CCC_{ipk}\CCC_{kqi}\RRR_{pq}\\
&&+\frac{4}{n-2}\RRR\vert\CCC_{ijk}\vert^2+\frac{8}{n-2}\CCC_{ijk}\RRR_{pk}\nabla_j\RRR_{pi}
+\frac{4}{n-2}\CCC_{ijk}\RRR_{ij}\nabla_k\RRR\nonumber\\
&&+8\CCC_{ijk}\RRR_{lp}\nabla_j\WWW_{pikl}-8\CCC_{ijk}\CCC_{pjl}\WWW_{pikl}-4\CCC_{jpi}\CCC_{ljk}\WWW_{pikl}\,.\nonumber
\end{eqnarray*}
\end{cor}
\begin{proof}
In an orthonormal basis, using Proposition~\ref{cotn}, we compute
\begin{eqnarray*}
\bigl(\partial_t-\Delta\bigr)\vert\CCC_{ijk}\vert^2
&=&-2\vert \nabla \CCC_{ijk}\vert^2+2\CCC^{ijk}\RRR_{ip}g^{pq}\CCC_{qjk}+2\CCC^{ijk}\RRR_{jp}g^{pq}\CCC_{iqk}+2\CCC^{ijk}\RRR_{kp}g^{pq}\CCC_{ijq}\nonumber\\
&&+2\CCC_{ijk}\Bigl[\frac{1}{n-2}\bigl(\RRR_{pj}(\CCC_{kpi}+\CCC_{pki}+(n-3)\CCC_{ikp})\\
&&+(n+2)\RRR_{pi}\CCC_{pkj}-\RRR_{pk}(\CCC_{jpi}+\CCC_{pji}+(n-3)\CCC_{ijp})\bigr)\\
&&+\frac{2}{n-2}\RRR\CCC_{ijk}+\frac{2}{n-2}\RRR_{ql}\CCC_{qjl}g_{ik}-\frac{2}{n-2}\RRR_{ql}\CCC_{qkl}g_{ij}\nonumber\\
&&+\frac{1}{(n-1)(n-2)}\nabla_k|\Ric|^2g_{ij}-\frac{1}{(n-1)(n-2)}\nabla_j|\Ric|^2g_{ik}\nonumber\\
&&+\frac{\RRR}{(n-1)(n-2)}\nabla_j\RRR g_{ik}
-\frac{\RRR}{(n-1)(n-2)}\nabla_k\RRR g_{ij}\nonumber\\
&&+\frac{2}{n-2}\RRR_{qk}\nabla_j\RRR_{qi}-\frac{2}{n-2}\RRR_{qj}\nabla_k\RRR_{qi}
+\frac{1}{n-2}\RRR_{ij}\nabla_k\RRR-\frac{1}{n-2}\RRR_{ik}\nabla_j\RRR\,\nonumber\\
&&-2\WWW_{pikl}\CCC_{pjl}+2\WWW_{pijl}\CCC_{pkl}-2\WWW_{jklp}\CCC_{pil}+2\RRR_{pl}\nabla_j\WWW_{pikl}-2\RRR_{pl}\nabla_{k}\WWW_{pijl}\Bigr]\\
&=&-2\vert \nabla \CCC_{ijk}\vert^2-\frac{16}{n-2}\CCC_{ipk}\CCC_{iqk}\RRR_{pq}
+\frac{24}{n-2}\CCC_{ipk}\CCC_{kqi}\RRR_{pq}\nonumber\\
&&+\frac{4}{n-2}\RRR\vert\CCC_{ijk}\vert^2+\frac{8}{n-2}\CCC_{ijk}\RRR_{pk}\nabla_j\RRR_{pi}
+\frac{4}{n-2}\CCC_{ijk}\RRR_{ij}\nabla_k\RRR\nonumber\\
&&+8\CCC_{ijk}\RRR_{lp}\nabla_j\WWW_{pikl}-8\CCC_{ijk}\CCC_{pjl}\WWW_{pikl}-4\CCC_{jpi}\CCC_{ljk}\WWW_{pikl}\,.\nonumber
\end{eqnarray*}
\end{proof}
\begin{rem} Notice that if $n=3$ the two formulas in Proposition~\ref{cotn}
and Corollary~\ref{CorEvn} become the ones in Proposition~\ref{cot} and
Corollary~\ref{CorEv}.
\end{rem}
\section{The Bach Tensor}
The Bach tensor in dimension three is given by
$$
\BBB_{ik}=\nabla_j\CCC_{ijk}\,.
$$
Let $\SSS_{ij}=\RRR_{ij}-\frac{1}{2(n-1)}\Scal g_{ij}$ be the Schouten
tensor, then
\begin{equation}\label{eq_DefBach}
\BBB_{ik}=\nabla_j\CCC_{ijk}=\nabla_j(\nabla_k
\SSS_{ij}-\nabla_j\SSS_{ik})=\nabla_j\nabla_k\SSS_{ij}-\Delta\SSS_{ik}\,.
\end{equation}
We compute, in generic dimension $n$,
\begin{eqnarray*}\nabla_{j}\CCC_{ijk}&=&\nabla_j\nabla_k\RRR_{ij}-\frac{1}{2(n-1)}\nabla_j\nabla_k\Scal g_{ij}-\Delta \SSS_{ik}\\
&=&+\RRR_{jkil}\RRR_{jl}+\RRR_{jkjl}\RRR_{il}+\nabla_k\nabla_j\RRR_{ij}-\frac{1}{2(n-1)}\nabla_k\nabla_j\Scal g_{ij}-\Delta\SSS_{ik}\\
&=&+\frac{1}{n-2}\left(\RRR_{ij}g_{kl}-\RRR_{jl}g_{ki}+\RRR_{kl}g_{ij}-\RRR_{ki}g_{jl}-\frac{\Scal}{(n-1)}(g_{ij}g_{kl}-g_{jl}g_{ki})\right)\RRR_{jl}+\WWW_{jkil}\RRR_{jl}\\
&&+\RRR_{kl}\RRR_{il}+\frac{1}{2}\nabla_k\nabla_i\Scal-\frac{1}{2(n-1)}\nabla_k\nabla_i\Scal-\Delta \SSS_{ik}\\
&=&+\frac{1}{n-2}(\RRR_{ji}\RRR_{jk}-|\Ric|^2g_{ik}+\RRR_{kl}\RRR_{il}-\Scal\RRR_{ik})-\frac{\Scal}{(n-1)(n-2)}\RRR_{ik}+\frac{\Scal^2}{(n-1)(n-2)}g_{ik}\\
&&+\WWW_{jkil}\RRR_{jl}+\RRR_{kl}\RRR_{il}+\frac{n-2}{2(n-1)}\nabla_k\nabla_i\Scal-\Delta\SSS_{ik}\\
&=&\frac{n}{n-2}\RRR_{ij}\RRR_{kj}-\frac{n}{(n-1)(n-2)}\Scal\RRR_{ik}-\frac{1}{n-2}|\Ric|^2g_{ik}+\frac{\Scal^2}{(n-1)(n-2)}g_{ik}\\&&+\WWW_{jkil}\RRR_{jl}+\frac{n-2}{2(n-1)}\nabla_k\nabla_i\Scal-\Delta \SSS_{ik}\,.
\end{eqnarray*}
From this last expression, it is easy to see that the Bach tensor in dimension $3$ is
symmetric, i.e., $\BBB_{ik}=\BBB_{ki}$. Moreover, it is trace--free, that is, $g^{ik}\BBB_{ik}=0$, as $g^{ik}\nabla_j\CCC_{ijk}=0$.
\begin{rem}In higher dimension, the Bach tensor is given by
$$\BBB_{ik}=\frac{1}{n-2}(\nabla_j \CCC_{ijk}-\RRR_{jl}\WWW_{ijkl})\,.$$
We note that, since
$\RRR_{jl}\WWW_{ijkl}=\RRR_{jl}\WWW_{klij}=\RRR_{jl}\WWW_{kjil}$, from the above computation we get that the Bach tensor is symmetric in any dimension; finally, as the
Weyl tensor is trace-free in every pair of indexes, there holds
$g^{ik}\BBB_{ik}=0$.
\end{rem}
We recall that the Schur lemma yields the following equation for the divergence of the Schouten tensor
\begin{equation}\label{eq_SchoutenSchur}\nabla_j\SSS_{ij}=\frac{n-2}{2(n-1)}\nabla_i\Scal\,.\end{equation}
We write
$$
\nabla_k\nabla_j\CCC_{ijk}=\nabla_k\nabla_j\nabla_k\SSS_{ij}-\nabla_k\nabla_j\nabla_j\SSS_{ik}=[\nabla_j,\nabla_k]\nabla_j\SSS_{ik}\,,
$$
therefore,
\begin{eqnarray*}\nabla_k\nabla_j\CCC_{ijk}&=&\RRR_{jkjl}\nabla_l\SSS_{ik}+\RRR_{jkil}\nabla_j\SSS_{lk}+\RRR_{jkkl}\nabla_j\SSS_{li}\\
&=&\RRR_{kl}\nabla_l\SSS_{ik}+\RRR_{jkil}\nabla_j\SSS_{lk}-\RRR_{jl}\nabla_j\SSS_{li}\\
&=&\left[\frac{1}{n-2}\,(\RRR_{ij}g_{kl}-\RRR_{jl}g_{ik}+\RRR_{kl}g_{ij}-\RRR_{ik}g_{jl})-\frac{1}{(n-1)(n-2)}\,\RRR(g_{ij}g_{kl}-g_{ik}g_{jl}) + \WWW_{jkil}\right]\nabla_j\SSS_{lk}\\
&=& \frac{1}{n-2}(-\RRR_{jl}\nabla_j\SSS_{il}+\RRR_{kl}\nabla_i\SSS_{kl})+\WWW_{jkil}\nabla_j\SSS_{lk}\\
&=&\frac{1}{n-2}\RRR_{jl}(\nabla_i\SSS_{lj}-\nabla_j\SSS_{il})+\WWW_{jkil}\nabla_j\RRR_{kl}\\
&=&\frac{1}{n-2}\RRR_{jl}\CCC_{lji}+\WWW_{iljk}\nabla_j\RRR_{kl}\,,
\end{eqnarray*}
where we repeatedly used equation~\eqref{eq_SchoutenSchur}, the trace--free property of the Weyl tensor and the definition of the Cotton tensor.
Recalling that
$$
\nabla_k\WWW_{ijkl}=\nabla_k\WWW_{klij}=-\frac{n-3}{n-2}\CCC_{lij}=\frac{n-3}{n-2}\CCC_{lji}\,,
$$
the divergence of the Bach tensor is given by
\begin{eqnarray*}\nabla_k\BBB_{ik}&=&\frac{1}{n-2}\nabla_k(\nabla_j \CCC_{ijk}-\RRR_{jl}\WWW_{ijkl})=\frac{1}{(n-2)^2}\RRR_{jl}\CCC_{jli}-\frac{n-3}{(n-2)^2}\,\CCC_{jli}\RRR_{jl}\\&=&-\frac{n-4}{(n-2)^2}\CCC_{jli}\RRR_{jl}\,.\end{eqnarray*}
In particular, for $n=3$, we obtain $\nabla_k\BBB_{ik}=\nabla_k\BBB_{ki}=\RRR_{jl}\CCC_{jli}$ and, for $n=4$, we get the classical result $\nabla_k\BBB_{ik}=\nabla_k\BBB_{ki}=0$.
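The structural properties used throughout this section are easy to check symbolically on an explicit metric. The following SymPy sketch (a pure illustration, not part of the argument above; the diagonal metric is an arbitrary choice) builds the Cotton tensor $\CCC_{ijk}=\nabla_k\SSS_{ij}-\nabla_j\SSS_{ik}$ of a three--dimensional metric and verifies its skew--symmetry in the last two indices, the cyclic identity and its total trace--freeness.
\begin{verbatim}
import sympy as sp
from itertools import product

t, x, y = sp.symbols('t x y')
co, n, rng = [t, x, y], 3, range(3)
f, h = sp.Function('f')(t), sp.Function('h')(t)
g = sp.diag(1, sp.exp(2*f), sp.exp(2*h))   # arbitrary illustrative metric
gi = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(gi[a, s]*(sp.diff(g[s, b], co[c]) + sp.diff(g[s, c], co[b])
             - sp.diff(g[b, c], co[s]))/2 for s in rng)
         for c in rng] for b in rng] for a in rng]

def riem(a, b, c, d):                      # R^a_{bcd}
    return (sp.diff(Gam[a][d][b], co[c]) - sp.diff(Gam[a][c][b], co[d])
            + sum(Gam[a][c][e]*Gam[e][d][b] - Gam[a][d][e]*Gam[e][c][b]
                  for e in rng))

Ric = sp.Matrix(n, n, lambda b, d: sum(riem(a, b, a, d) for a in rng))
Scal = sum(gi[b, d]*Ric[b, d] for b in rng for d in rng)
S = Ric - Scal/(2*(n - 1))*g               # Schouten tensor (our convention)

def covd(T, k):                            # (nabla_k T)_{ij}, T a (0,2)-tensor
    return sp.Matrix(n, n, lambda i, j: sp.diff(T[i, j], co[k])
                     - sum(Gam[p][k][i]*T[p, j] + Gam[p][k][j]*T[i, p]
                           for p in rng))

dS = [covd(S, k) for k in rng]
C = [[[dS[k][i, j] - dS[j][i, k] for k in rng] for j in rng] for i in rng]

for i, j, k in product(rng, repeat=3):
    assert sp.simplify(C[i][j][k] + C[i][k][j]) == 0               # skew-symmetry
    assert sp.simplify(C[i][j][k] + C[j][k][i] + C[k][i][j]) == 0  # cyclic identity
for k in rng:
    assert sp.simplify(sum(gi[i, j]*C[i][j][k]
                           for i in rng for j in rng)) == 0        # trace-freeness
\end{verbatim}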
\subsection{The Evolution Equation of the Bach Tensor in 3D}\ \\
We turn now our attention to the evolution of the Bach tensor along
the Ricci flow in dimension three. In order to obtain its evolution
equation, instead of calculating directly the time derivative and the
Laplacian of the Bach tensor, we employ the following equation
\begin{equation}\label{eq_EvBach}
(\partial_t-\Delta)\BBB_{ik}=\nabla_j(\partial_t-\Delta)\CCC_{ijk}-[\Delta,\nabla_j]\CCC_{ijk}+2\RRR_{pj}\nabla_p\CCC_{ijk}+[\partial_t, \nabla_j]\CCC_{ijk}\,,
\end{equation}
which relates the quantity we want to compute with the evolution of the
Cotton tensor, the evolution of the Christoffel symbols and the
formulas for the exchange of covariant derivatives.
We will work on the various terms separately.
By the commutation formulas for covariant derivatives, we have
\begin{eqnarray*}
\nabla_l\nabla_l\nabla_q \CCC_{ijk}-\nabla_l\nabla_q\nabla_l\CCC_{ijk}=\nabla_l(\RRR_{lqip}\CCC_{pjk}+\RRR_{lqjp}\CCC_{ipk}+\RRR_{lqkp}\CCC_{ijp})\end{eqnarray*}
\begin{eqnarray*}\nabla_l\nabla_q\nabla_s\CCC_{ijk}-\nabla_q\nabla_l\nabla_s\CCC_{ijk}=\RRR_{lqsp}\nabla_p\CCC_{ijk}+\RRR_{lqip}\nabla_s\CCC_{pjk}+\RRR_{lqjp}\nabla_s\CCC_{ipk}+\RRR_{lqkp}\nabla_s\CCC_{ijp},
\end{eqnarray*}
and putting these together with $q=j$ and $l=s$, we get
\begin{eqnarray*}
[\Delta,\nabla_j]\CCC_{ijk}&=&\nabla_l(\RRR_{ljip}\CCC_{pjk}-\RRR_{lp}\CCC_{ipk}+\RRR_{ljkp}\CCC_{ijp})\\
&&+\RRR_{jp}\nabla_p\CCC_{ijk}+\RRR_{ljip}\nabla_l\CCC_{pjk}-\RRR_{lp}\nabla_l\CCC_{ipk}+\RRR_{ljkp}\nabla_l\CCC_{ijp}\\
&=&\nabla_l\left[\left(\RRR_{li}g_{jp}-\RRR_{lp}g_{ji}+\RRR_{jp}g_{li}-\RRR_{ji}g_{lp}-\frac{\RRR}{2}(g_{li}g_{jp}-g_{lp}g_{ji})\right)\CCC_{pjk}\right.\\
&&\left.-\RRR_{lp}\CCC_{ipk}+\left(\RRR_{lk}g_{jp}-\RRR_{lp}g_{jk}+\RRR_{jp}g_{lk}-\RRR_{jk}g_{lp}-\frac{\RRR}{2}(g_{lk}g_{jp}-g_{lp}g_{jk})\right)\CCC_{ijp}\right]\\
&&+\RRR_{jp}\nabla_{p}\CCC_{ijk}+\RRR_{ljip}\nabla_{l}\CCC_{pjk}-\RRR_{lp}\nabla_{l}\CCC_{ipk}+\RRR_{ljkp}\nabla_{l}\CCC_{ijp}\\
&=&-\frac{1}{2}\nabla_{p}\RRR\CCC_{pik}-\RRR_{lp}\nabla_{l}\CCC_{pik}+\nabla_{i}\RRR_{jp}\CCC_{pjk}+\RRR_{jp}\nabla_{i}\CCC_{pjk}-\nabla_{p}\RRR_{ji}\CCC_{pjk}-\RRR_{ji}\nabla_{p}\CCC_{pjk}\\
&&+\frac{1}{2}\nabla_{p}\RRR\CCC_{pik}+\frac{\RRR}{2}\nabla_{p}\CCC_{pik}-\frac{1}{2}\nabla_{p}\RRR\CCC_{ipk}-\RRR_{lp}\nabla_{l}\CCC_{ipk}-\frac{1}{2}\nabla_{p}\RRR\CCC_{ikp}-\RRR_{lp}\nabla_{l}\CCC_{ikp}\\
&&+\nabla_{k}\RRR_{jp}\CCC_{ijp}+\RRR_{jp}\nabla_{k}\CCC_{ijp}-\nabla_{p}\RRR_{jk}\CCC_{ijp}-\RRR_{jk}\nabla_{p}\CCC_{ijp}+\frac{1}{2}\nabla_{p}\RRR\CCC_{ikp}+\frac{\RRR}{2}\nabla_{p}\CCC_{ikp}\\
&&+\RRR_{jp}\nabla_{p}\CCC_{ijk}-\RRR_{lp}\nabla_{l}\CCC_{pik}+\RRR_{jp}\nabla_{i}\CCC_{pjk}-\RRR_{ji}\nabla_{p}\CCC_{pjk}+\frac{\RRR}{2}\nabla_{p}\CCC_{pik}\\
&&-\RRR_{lp}\nabla_{l}\CCC_{ipk}-\RRR_{lp}\nabla_{l}\CCC_{ikp}+\RRR_{jp}\nabla_{k}\CCC_{ijp}-\RRR_{jk}\nabla_{p}\CCC_{ijp}+\frac{\RRR}{2}\nabla_{p}\CCC_{ikp}\\
&=&\nabla_{i}\RRR_{jp}\CCC_{pjk}-\nabla_{p}\RRR_{ji}\CCC_{pjk}+\nabla_{k}\RRR_{jp}\CCC_{ijp}-\nabla_{p}\RRR_{jk}\CCC_{ijp}-2\RRR_{lp}\nabla_{l}\CCC_{pik}\\
&&+2\RRR_{lp}\nabla_{i}\CCC_{plk}-2\RRR_{ji}\nabla_{p}\CCC_{pjk}+\RRR\nabla_{p}\CCC_{pik}+\frac{1}{2}\nabla_{p}\RRR\CCC_{ikp}+2\RRR_{jp}\nabla_{k}\CCC_{ijp}\\
&&-2\RRR_{jk}\nabla_{p}\CCC_{ijp}+\RRR\nabla_{p}\CCC_{ikp}+\RRR_{jp}\nabla_{p}\CCC_{ijk}\\
&=&\nabla_{i}\RRR_{lp}\CCC_{plk}-\nabla_{p}\RRR_{li}\CCC_{plk}+\nabla_{k}\RRR_{lp}\CCC_{ilp}-\nabla_{p}\RRR_{lk}\CCC_{ilp}\\
&&-2\RRR_{lp}\nabla_{l}\CCC_{pik}+2\RRR_{lp}\nabla_{i}\CCC_{plk}+2\RRR_{li}\BBB_{kl}-2\RRR_{li}\BBB_{lk}+2\RRR_{lp}\nabla_{k}\CCC_{ilp}\\
&&+2\RRR_{lk}\BBB_{il}+\RRR_{lp}\nabla_{p}\CCC_{ilk}-\RRR\BBB_{ik}+\frac{1}{2}\nabla_{p}\RRR\CCC_{ikp}+\RRR\BBB_{ik}-\RRR\BBB_{ik}\\
&=&\nabla_{i}\RRR_{lp}\CCC_{plk}-\nabla_{p}\RRR_{li}\CCC_{plk}+\nabla_{k}\RRR_{lp}\CCC_{ilp}-\nabla_{p}\RRR_{lk}\CCC_{ilp}\\
&&+\RRR_{lp}\nabla_{p}\CCC_{ilk}+2\RRR_{lp}\nabla_{i}\CCC_{plk}+2\RRR_{lp}\nabla_{k}\CCC_{ilp}-2\RRR_{lp}\nabla_{l}\CCC_{ipk}\\
&&+\frac{1}{2}\nabla_{p}\RRR\CCC_{ikp}+2\RRR_{lk}\BBB_{il}-\RRR\BBB_{ik}\,.
\end{eqnarray*}
The covariant derivative of the evolution of the Cotton tensor is given by
\begin{eqnarray*}\nabla_j(\partial_t-\Delta)\CCC_{ijk}&=&\frac{5}{2}\nabla_{p}\RRR\CCC_{ipk}+\nabla_{p}\RRR\CCC_{pki}+\RRR_{lp}\nabla_{p}\CCC_{kli}+\RRR_{lp}\nabla_{p}\CCC_{lki}-\nabla_{p}\RRR_{kl}\CCC_{pli}\\
&&-\nabla_{p}\RRR_{kl}\CCC_{lpi}-\RRR_{kp}\BBB_{pi}+5\nabla_{p}\RRR_{il}\CCC_{lkp}-5\RRR_{ip}\BBB_{pk}+2\RRR\BBB_{ik}\\
&&+2\nabla_{s}\RRR_{pl}\CCC_{psl}g_{ik}+2\RRR_{pl}\BBB_{pl}g_{ik}-2\nabla_{i}\RRR_{pl}\CCC_{pkl}-2\RRR_{pl}\nabla_{i}\CCC_{pkl}\\
&&+\frac{1}{2}(|\nabla\RRR|^2+\RRR\Delta\RRR-\Delta|\Ric|^2)g_{ik}-\frac{1}{2}(\nabla_{i}\RRR\nabla_{k}\RRR+\RRR\nabla_{i}\nabla_{k}\RRR-\nabla_{i}\nabla_{k}|\Ric|^2)\\
&&+2\Delta\RRR_{ip}\RRR_{kp}+2\nabla_{l}\RRR_{ip}\nabla_{l}\RRR_{kp}-2\nabla_{l}\nabla_{k}\RRR_{ip}\RRR_{lp}-\nabla_{k}\RRR_{ip}\nabla_{p}\RRR\\
&&+\nabla_{l}\nabla_{k}\RRR\RRR_{il}+\frac{1}{2}\nabla_{k}\RRR\nabla_{i}\RRR-\Delta\RRR\RRR_{ik}-\nabla_{l}\RRR\nabla_{l}\RRR_{ik}.
\end{eqnarray*}
Finally, the commutator between the covariant derivative and the time derivative can be expressed in terms of the time derivatives of the Christoffel symbols, as follows
\begin{eqnarray*}
[\partial_t, \nabla_j]\CCC_{ijk}&=&-\partial_t\Gamma_{ij}^p\CCC_{pjk}-\partial_t\Gamma_{jk}^p\CCC_{ijp}\\
&=&\nabla_i\RRR_{jp}\CCC_{pjk}+\nabla_j\RRR_{ip}\CCC_{pjk}-\nabla_p\RRR_{ij}\CCC_{pjk}+\nabla_j\RRR_{kp}\CCC_{ijp}+\nabla_k\RRR_{jp}\CCC_{ijp}-\nabla_{p}\RRR_{jk}\CCC_{ijp}\\
&=&\nabla_{i}\RRR_{jp}\CCC_{pjk}+\nabla_{p}\RRR_{ij}\CCC_{jpk}+\nabla_{p}\RRR_{ij}\CCC_{pkj}+\nabla_{p}\RRR_{kj}\CCC_{ipj}+\nabla_{k}\RRR_{jp}\CCC_{ijp}+\nabla_{p}\RRR_{jk}\CCC_{ipj}\\
&=&\nabla_{i}\RRR_{jp}\CCC_{pjk}-\nabla_{p}\RRR_{ij}\CCC_{pkj}-\nabla_{p}\RRR_{ij}\CCC_{kjp}+\nabla_{p}\RRR_{ij}\CCC_{pkj}+2\nabla_{p}\RRR_{kj}\CCC_{ipj}\\
&=&\nabla_{i}\RRR_{jp}\CCC_{pjk}-\nabla_{p}\RRR_{ij}\CCC_{kjp}+2\nabla_{p}\RRR_{kj}\CCC_{ipj}\,.
\end{eqnarray*}
Substituting into~\eqref{eq_EvBach}, and making some computations, we obtain the evolution equation
\begin{prop}\label{EvBachRF3D}
During the Ricci flow of a $3$--dimensional Riemannian manifold $(M^3, g(t))$ the Bach tensor satisfies the following evolution equation
\begin{eqnarray*}
(\partial_t-\Delta)\BBB_{ik}=&&\left[3\nabla_{p}\RRR\CCC_{ipk}+\nabla_{p}\RRR\CCC_{pki}-\nabla_{p}\RRR\nabla_{k}\RRR_{ip}\right]\\
&+&\left[-2\RRR_{pl}\nabla_{p}\CCC_{ikl}-3\RRR_{pk}\BBB_{pi}-5\RRR_{pi}\BBB_{pk}+2\Delta\RRR_{ip}\RRR_{kp}\right.\\
&&\,\left.-2\nabla_{l}\nabla_{k}\RRR_{pi}\RRR_{pl}+\nabla_{l}\nabla_{k}\RRR\RRR_{li}-\Delta\RRR\RRR_{ik}\right]\\
&+&\left[-2\nabla_{p}\RRR_{kl}\CCC_{lpi}-2\nabla_{p}\RRR_{kl}\CCC_{ilp}-4\nabla_{p}\RRR_{il}\CCC_{lpk}-2\nabla_{i}\RRR_{pl}\CCC_{pkl}\right]\\
&+&\left[3\RRR\BBB_{ik}+2\nabla_{s}\RRR_{pl}\CCC_{psl}g_{ik}+2\RRR_{pl}\BBB_{pl}g_{ik}\right.\\
&&\,\left.+\frac{1}{2}(|\nabla\RRR|^2+\RRR\Delta\RRR-\Delta|\Ric|^2)g_{ik}-\frac{1}{2}(\RRR\nabla_{i}\nabla_{k}\RRR-\nabla_{i}\nabla_{k}|\Ric|^2)\right.\\
&&\,\left.+2\nabla_{l}\RRR_{ip}\nabla_{l}\RRR_{kp}-\nabla_{l}\RRR\nabla_{l}\RRR_{ik}\right].
\end{eqnarray*}
Hence, if the Bach tensor vanishes identically along the flow, we have
\begin{eqnarray*}
0&=&3\nabla_{p}\RRR\CCC_{ipk}+\nabla_{p}\RRR\CCC_{pki}-\nabla_{p}\RRR\nabla_{k}\RRR_{ip}-2\RRR_{pl}\nabla_{p}\CCC_{ikl}\\
&&+2\Delta\RRR_{ip}\RRR_{kp}-2\nabla_{l}\nabla_{k}\RRR_{pi}\RRR_{pl}+\nabla_{l}\nabla_{k}\RRR\RRR_{li}-\Delta\RRR\RRR_{ik}\\
&&-2\nabla_{p}\RRR_{kl}\CCC_{lpi}-2\nabla_{p}\RRR_{kl}\CCC_{ilp}-4\nabla_{p}\RRR_{il}\CCC_{lpk}-2\nabla_{i}\RRR_{pl}\CCC_{pkl}\\
&&+2\nabla_{s}\RRR_{pl}\CCC_{psl}g_{ik}+\frac{1}{2}(|\nabla\RRR|^2+\RRR\Delta\RRR-\Delta|\Ric|^2)g_{ik}\\
&&-\frac{1}{2}(\RRR\nabla_{i}\nabla_{k}\RRR-\nabla_{i}\nabla_{k}|\Ric|^2)+2\nabla_{l}\RRR_{ip}\nabla_{l}\RRR_{kp}-\nabla_{l}\RRR\nabla_{l}\RRR_{ik}.
\end{eqnarray*}
\end{prop}
\begin{rem}
Note that, from the symmetry property of the Bach tensor, we have that
the RHS in the evolution equation of the Bach tensor should be
symmetric in the two indices. It is not so difficult to check that
this property is verified for the formula
in Proposition \ref{EvBachRF3D}. Indeed, each of the terms in between
square brackets is symmetric in the two indices.
\end{rem}
As a consequence of Proposition \ref{EvBachRF3D}, we get that during the Ricci flow of a $3$--dimensional Riemannian manifold the squared norm of the Bach tensor satisfies
\begin{eqnarray*}(\partial_t-\Delta)|\BBB_{ik}|^2&=&-2|\nabla \BBB_{ik}|^2-12\BBB_{ik}\BBB_{iq}\RRR_{qk}+6\BBB_{ik}\nabla_{p}\RRR\CCC_{ipk}+2\BBB_{ik}\nabla_{p}\RRR\CCC_{pki}-4\BBB_{ik}\RRR_{pl}\nabla_{p}\CCC_{ikl}\\
&&+4\BBB_{ik}\nabla_{p}\RRR_{kl}\CCC_{pil}-8\BBB_{ik}\nabla_{p}\RRR_{kl}\CCC_{lpi}-4\BBB_{ik}\nabla_{i}\RRR_{pl}\CCC_{pkl}+6\RRR|\BBB_{ik}|^2\\
&&-2\BBB_{ik}\nabla_{p}\RRR\nabla_{k}\RRR_{ip}+4\BBB_{ik}\Delta\RRR_{ip}\RRR_{kp}-4\BBB_{ik}\nabla_{l}\nabla_{k}\RRR_{pi}\RRR_{pl}+2\BBB_{ik}\nabla_{l}\nabla_{k}\RRR\RRR_{li}\\
&&-2\BBB_{ik}\Delta\RRR\RRR_{ik}-\BBB_{ik}\RRR\nabla_{i}\nabla_{k}\RRR+\BBB_{ik}\nabla_{i}\nabla_{k}|\Ric|^2-2\BBB_{ik}\nabla_{l}\RRR\nabla_{l}\RRR_{ik}\\
&&+4\BBB_{ik}\nabla_{l}\RRR_{ip}\nabla_{l}\RRR_{kp}.
\end{eqnarray*}
\subsection{The Bach Tensor of Three--Dimensional Gradient Ricci Solitons}\ \\
In what follows, we will use formulas~\eqref{SolEq0}--\eqref{SolEq4} to
derive an expression of the Bach tensor and of its divergence in the
particular case of a gradient Ricci soliton in dimension three.
By straightforward computations, we obtain
\begin{eqnarray*}\BBB_{ik}&=&\nabla_j \CCC_{ijk}\\
&=&\frac{\nabla_i\nabla_k \Scal}{4}-\frac{\Delta\Scal}{4}g_{ik}-\nabla_j\RRR_{ik}\nabla_j f + \frac{g_{ik}}2 \nabla_j\Scal\nabla_j f\\
&&+\left(\RRR_{ij}-\frac{\Scal}{2}g_{ij}\right)\nabla_j\nabla_k f - \left(\RRR_{ik}-\frac{\Scal}{2}g_{ik}\right) \Delta f\\
&=&\frac{1}{4}\nabla_i\nabla_k \Scal - \frac{1}{4}\Delta \Scal g_{ik}-\nabla_j\RRR_{ik}\nabla_j f + \frac{1}{2}\nabla_j\Scal\nabla_jf g_{ik}-\RRR_{ij}\RRR_{jk}+\lambda \RRR_{ik}\\
&&+\frac{1}{2}\Scal\RRR_{ik}-\frac{\lambda}{2}\Scal g_{ik}-3\lambda\RRR_{ik}+\Scal\RRR_{ik}+\frac32\lambda\Scal g_{ik}-\frac{1}2\Scal^2g_{ik}\\
&=&\frac{1}{2}\nabla_i\RRR_{lk}\nabla_l f-\frac{1}{2}\RRR_{lk}\RRR_{li}+\frac{\lambda}{2}\RRR_{ik}-\frac{1}{4}\nabla_l\Scal\nabla_l fg_{ik}-\frac{\lambda}{2}\Scal g_{ik}\\
&&+\frac{1}{2}|\Ric|^2g_{ik}-\nabla_j\RRR_{ik}\nabla_j f + \frac{1}{2}\nabla_j\Scal\nabla_j f g_{ik}-\RRR_{ij}\RRR_{jk}+\lambda\RRR_{ik}+\frac{1}{2}\Scal\RRR_{ik}\\
&&-\frac{\lambda}{2}\Scal g_{ik}-3\lambda\RRR_{ik}+\Scal\RRR_{ik}+\frac{3}{2}\lambda\Scal g_{ik}-\frac{1}{2}\Scal^2 g_{ik}\\
&=&\frac{1}{2}\nabla_i\RRR_{lk}\nabla_l f + \frac{1}{4}\nabla_j\Scal\nabla_j f g_{ik}-\nabla_j\RRR_{ik}\nabla_j f -\frac{3}{2}\RRR_{ij}\RRR_{jk}-\frac{3}{2}\lambda\RRR_{ik}\\
&&+\frac{3}{2}\Scal\RRR_{ik}+\frac{\lambda}{2}\Scal g_{ik}+\frac{1}{2}|\Ric|^2 g_{ik}-\frac{1}{2}\Scal^2 g_{ik}\,.
\end{eqnarray*}
A more compact formulation, employing equations~\eqref{SolEq1} and~\eqref{SolEq2}, is given by
\begin{equation*}
\BBB_{ik}=\frac{1}{2}\nabla_i\RRR_{lk}\nabla_l f+\frac{1}{4}\Delta
\Scal g_{ik}-\frac{1}{2}\Delta
\RRR_{ik}-\frac{1}{2}\nabla_j\RRR_{ik}\nabla_j
f-\frac{\lambda}{2}\RRR_{ik}+\frac{1}{2}\RRR_{ij}\RRR_{jk}\,.
\end{equation*}
Moreover, as we know that $\nabla_k\BBB_{ik}=\CCC_{lji}\RRR_{lj}$, we have
\begin{eqnarray*}
\nabla_{k}\BBB_{ik}&=&\frac{1}{4}\Scal \nabla_i\Scal-\frac{1}{4}\RRR_{ij}\nabla_j\Scal + |\Ric|^2\nabla_i f -\frac{1}{2}\Scal^2\nabla_i f-\RRR_{il}\nabla_j f\RRR_{lj}+\frac{1}{2}\Scal\RRR_{ij}\nabla_j f\\
&=&\frac{1}{2}\Scal\nabla_i\Scal-\frac{3}{4}\RRR_{il}\nabla_l\Scal +
|\Ric|^2\nabla_i f- \frac{1}{2}\Scal^2\nabla_i f\,.
\end{eqnarray*}
Therefore, if the divergence of the Bach tensor vanishes, we conclude
\begin{equation*}
\frac{1}{2}\Scal\nabla_i\Scal-\frac{3}{4}\RRR_{ik}\nabla_k\Scal +
|\Ric|^2\nabla_i f- \frac{1}{2}\Scal^2\nabla_i f=0\,.
\end{equation*}
Taking the scalar product with $\nabla f$ in both sides of this equation, we obtain
$$
0=\frac{1}{2}\Scal\langle\nabla\Scal, \nabla f\rangle -
\frac{3}{8}|\nabla \Scal|^2 + |\Ric|^2|\nabla f|^2 -
\frac{1}{2}\Scal^2|\nabla f|^2
$$
and, from formulas~\eqref{SolEq4} and~\eqref{eq_NormaCottonSol}, we compute
\begin{eqnarray*}|\CCC_{ijk}|^2&=&(\RRR_{ij}\nabla_k f-\RRR_{ik}\nabla_j f)\left(\frac{\nabla_k \Scal}{4}g_{ij}-\frac{\nabla_j \Scal}{4}g_{ik}+\left(\RRR_{ij}-\frac{\Scal}{2}g_{ij}\right)\nabla_k f - \left(\RRR_{ik}-\frac{\Scal}{2}g_{ik}\right)\nabla_j f\right)\\
&=&\frac{\Scal}{4}\nabla_k\Scal\nabla_k f-\frac{1}{4}\RRR_{kj}\nabla_j \Scal\nabla_k f+|\Ric|^2|\nabla f|^2-\frac{\Scal^2}{2}|\nabla f|^2-\RRR_{ij}\nabla_j f\RRR_{ik}\nabla_k f+\frac{\Scal}{2}\RRR_{kj}\nabla_k f\nabla_j f\\ & &- \frac{1}{4}\RRR_{jk}\nabla_j f\nabla_k\Scal+\frac{\Scal}{4}\nabla_j\Scal\nabla_j f-\RRR_{ik}\nabla_k f\RRR_{ij}\nabla_j f+\frac{\Scal}{2}\RRR_{jk}\nabla_j f\nabla_k f+|\Ric|^2|\nabla f|^2 - \frac{\Scal^2}{2}|\nabla f|^2\\
&=&2|\Ric|^2|\nabla f|^2 - \Scal^2|\nabla f|^2+\Scal\nabla_k\Scal\nabla_k f-\frac{3}{4}|\nabla \Scal|^2\,,
\end{eqnarray*}
where we repeatedly used equation~\eqref{SolEq3}. \\
Therefore, we obtain
$$
\nabla_k \BBB_{ik}\nabla_i f=\frac{1}{2}|\CCC_{ijk}|^2\,,
$$
so, if the divergence of the Bach tensor vanishes then the
Cotton tensor vanishes as well (this was already obtained in~\cite{caochen3}).
As a consequence, getting back to Section~\ref{cgrad}, the soliton
is locally a warped product of a constant curvature surface on an
interval of $\RR$.
\bibliographystyle{amsplain}
\section{Introduction}
\label{intro}
With the advent of highly sensitive spectropolarimeters like the
Zurich Imaging Polarimeter (ZIMPOL) we now have access to the linearly
polarized spectrum of the Sun that is due to coherent scattering processes
in the Sun's atmosphere (and which has nothing to do with the well-known
transverse Zeeman effect). This new linearly polarized spectrum of the
Sun is commonly referred to as the ``Second Solar Spectrum"
\citep{1996Natur.382..588S,1997A&A...321..927S}. It is richly
structured with signatures of different kinds of scattering processes
taking place in atomic systems of varying complexity. Of particular
interest are the many often enigmatic signatures of quantum interference
effects between fine structure states, hyperfine structure states, and
magnetic substates (Hanle effect).
Atoms with non-zero electron spin $S$ undergo fine structure splitting
and exhibit $J$-state interference, whereas atoms with non-zero nuclear spin $I_s$
undergo hyperfine structure splitting (HFS) and show $F$-state interference.
The Sc~{\sc ii} 4247\,\AA\ line is governed by $F$-state interference.
Here we extend our previous work on the Ba~{\sc ii} D$_2$ line
\citep{2013ApJ...768..163S}
to study the Sc~{\sc ii} line at 4247~\AA. This line arises due to the transition $J=2 \to J=2$.
Due to coupling with the nuclear spin ($I_s=7/2$),
both the upper and the lower $J$ levels are split into five $F$-states each with
13 radiative transitions between them. The level diagram of this system
is shown in Figure~{\ref{level-diag}}.
We use the theory of $F$-state interference
presented in \citet[][see also \citealt{2012ApJ...758..112S}]{2013ApJ...768..163S},
which takes account of the partial frequency redistribution (PRD) effects
in the absence of magnetic fields.
The results in the present paper
do not include the contributions from magnetic fields.
The theory of $F$-state interference in the
presence of magnetic fields including the effects of PRD
has been recently developed in \cite{2014ApJ...786..150S}.
The 4247\,\AA\ line of Sc~{\sc ii} is a chromospheric line with an approximate
height of formation between 900 and 1100~km above the photosphere. $^{45}$Sc is the only
stable isotope of scandium. It shows prominent triple peak structure in its
$Q/I$ spectra \citep[see][]{2002sss..book.....G,2003ASPC..307..385S}.
Modeling of this triple peak structure using the last scattering approximation
was attempted by \cite{2009A&A...508..933B}.
The effects of PRD and radiative transfer were neglected in that work.
In the present paper, by taking account of both PRD and radiative transfer effects,
we study the sensitivity of the $(I,Q/I)$ profiles to different atomic and atmospheric
parameters. Despite these efforts, we find it difficult to reproduce the triple peak
structure in $Q/I$ as well as the rest intensity.
The rest intensity is dependent on the model atmospheric properties.
A plausible reason for the failure to reproduce the rest intensity is the use of
1D model atmospheres.
In several earlier works such as \cite{2007A&A...467..695H,2011A&A...529A.139S,
2013ApJ...768..163S,2014arXiv1407.5461S}, difficulties with the 1D models were encountered.
The limitations of the 1D model atmospheres are also described in
\cite{1997fsp..proc..217K, 2002JAD.....8....8R, 2011ApJ...736...69U}.
We believe that 3D model atmospheres may improve the fit to the
observed rest intensity in the case of the Sc~{\sc ii}~4247~\AA\ line.
In $Q/I$, the central peak is
suppressed due to depolarization from HFS. However, the PRD peaks and the near wing
continuum in the theoretical profiles closely match the observed profiles.
Our tests suggest that the observed $Q/I$ profiles cannot be
reproduced by modifications of the existing standard 1D model atmospheres.
Hence we suspect that other physical effects, not accounted for in the
present treatment, play a role in shaping the observed profiles.
The lower-level Hanle effect is one such candidate,
since it can increase the polarization of the central peak,
but its contribution is significant only for fields $\le1$~G
\citep{2009A&A...508..933B}.
The details of the various tests conducted by us are discussed in the sections below.
\begin{figure}[ht]
\centering
\includegraphics[width=5.0cm]{fig1.eps}
\caption{Level diagram showing the hyperfine structure splitting
of the $3d4s$ and $3d4p$ atomic levels of the Sc~{\sc ii} atom.}
\label{level-diag}
\end{figure}
\section{Computing the theoretical profiles}
The details of computing the Stokes profiles with $F$-state interference including the effects of
PRD and radiative transfer using realistic 1D model atmospheres are
presented in \cite{2013ApJ...768..163S}.
We use the same here too and hence do not repeat them.
However, certain physical quantities need to be redefined to
represent the Sc~{\sc ii} 4247\,\AA\ line system, and they are presented below.\\
\noindent{\it The Voigt profile function:} For the case of
Sc~{\sc ii} 4247\,\AA\ line, the Voigt profile function defined in Equation~(3) of \cite{2013ApJ...768..163S}
is to be replaced by
\begin{eqnarray}
&&\phi(\lambda,z)= \bigg[\frac{1}{25}\phi(\lambda_{3\,3},z)+
\frac{3}{50}\phi(\lambda_{3\,5},z) +\frac{3}{50}\phi(\lambda_{5\,3},z) \nonumber \\ &&+
\frac{1}{1400}\phi(\lambda_{5\,5},z) + \frac{5}{56}\phi(\lambda_{5\,7},z)+
\frac{5}{56}\phi(\lambda_{7\,5},z)\nonumber \\ && + \frac{2}{105}\phi(\lambda_{7\,7},z)+
\frac{11}{120}\phi(\lambda_{7\,9},z) + \frac{11}{120}\phi(\lambda_{9\,7},z) \nonumber \\ && +
\frac{25}{264}\phi(\lambda_{9\,9},z) + \frac{7}{110}\phi(\lambda_{9\,11},z)+
\frac{7}{110}\phi(\lambda_{11\,9},z)\nonumber \\ && +
\frac{13}{55}\phi(\lambda_{11\,11},z)\bigg],
\label{comb-prof}
\end{eqnarray}
where $\phi(\lambda_{F_a\,F_b},z)$ is the Voigt profile function for the
$F_a \to F_b$ transition with $F_a$ and $F_b$ being the
initial and excited $F$-states respectively. For notational brevity, the subscripts $F_a$ and $F_b$
in the $\phi$ terms are multiplied by 2 in the above equation. \\
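As a side remark, the combined profile~(\ref{comb-prof}) is straightforward to evaluate numerically. The sketch below (not from our modeling code; the damping parameter and the component offsets are placeholder inputs, which in practice follow from the hyperfine splittings) computes the individual Voigt profiles through the Faddeeva function and checks that the strengths add up to unity.
\begin{verbatim}
import numpy as np
from scipy.special import wofz

def H(a, v):
    """Voigt function H(a, v); its area over v equals sqrt(pi)."""
    return wofz(v + 1j*a).real

# strengths of the 13 components, keyed by (2F_a, 2F_b),
# as in the combined-profile equation above
W = {(3, 3): 1/25,   (3, 5): 3/50,   (5, 3): 3/50,   (5, 5): 1/1400,
     (5, 7): 5/56,   (7, 5): 5/56,   (7, 7): 2/105,  (7, 9): 11/120,
     (9, 7): 11/120, (9, 9): 25/264, (9, 11): 7/110, (11, 9): 7/110,
     (11, 11): 13/55}
assert abs(sum(W.values()) - 1.0) < 1e-12   # the strengths sum to unity

def combined_profile(v, dv, a=1.0e-2):
    """phi(v) = sum_i W_i H(a, v - dv_i), v in Doppler-width units.

    dv maps (2F_a, 2F_b) to the component offset (placeholder values;
    in practice they are computed from the hyperfine splittings)."""
    return sum(w*H(a, v - dv[key]) for key, w in W.items())
\end{verbatim}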
\noindent{\it The depolarizing elastic collision rate $D^{(2)}$:} The branching ratios
which describe the contribution from type-II and collisional redistribution (type-III) are defined in
Equation~(7) of \cite{2013ApJ...768..163S}. The depolarizing elastic collision rate $D^{(2)}$
which enter through the branching ratio $B^{(2)}$ can be computed
using Equation~(7.102) of \cite{2004ASSL..307.....L}
\begin{equation}
D^{(K)}(J) = C^{(0)}_E(J) - C^{(K)}_E(J),
\end{equation}
where $C^{(K)}_E(J)$ is given by
\begin{eqnarray}
C^{(K)}_E(J) = (-1)^K
\frac{\left\lbrace
\begin{array}{ccc}
J & J & K\\
J & J & \tilde{K}\\
\end{array}
\right\rbrace}
{\left\lbrace
\begin{array}{ccc}
J & J & 0\\
J & J & \tilde{K}\\
\end{array}
\right\rbrace}
C^{(0)}_E(J),
\end{eqnarray}
with $C^{(0)}_E(J)=\Gamma_E/(2J+1)$.
If the interaction between the atom and the colliding particle is
assumed to be of dipolar type, then $\tilde{K}=1$. In this case
$D^{(2)}(J)=0.1\, \Gamma_E$. If the interaction is assumed to be
dipole-dipole in nature, then $\tilde{K}=2$. In this case
$D^{(2)}(J)=0.243\, \Gamma_E$.
We have tested that both these values of $D^{(2)}$ give nearly identical
emergent $Q/I$ profiles.
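The two quoted values of $D^{(2)}$ can be reproduced directly from the $6$-$j$ symbols. A minimal check with SymPy (a sketch, using only the relations written above):
\begin{verbatim}
from sympy.physics.wigner import wigner_6j

def D_over_GammaE(J, Ktilde, K=2):
    # C_E^(K)/C_E^(0) = (-1)^K {J J K; J J Kt} / {J J 0; J J Kt}
    ratio = ((-1)**K*wigner_6j(J, J, K, J, J, Ktilde)
             / wigner_6j(J, J, 0, J, J, Ktilde))
    # D^(K)(J) = C_E^(0) - C_E^(K), with C_E^(0) = Gamma_E/(2J+1)
    return (1 - ratio)/(2*J + 1)

print(D_over_GammaE(2, 1))   # -> 1/10,  i.e. D^(2) = 0.1   Gamma_E
print(D_over_GammaE(2, 2))   # -> 17/70, i.e. D^(2) = 0.243 Gamma_E
\end{verbatim}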
\begin{figure}[ht]
\centering
\includegraphics[width=6.8cm]{fig3.eps}
\caption{Contribution function $C_I$ computed from the FAL model atmospheres
at $\mu=0.1$, for the line center wavelength.}
\label{contrib}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=7.5cm]{fig2.eps}
\caption{Temperature structures of the FAL and FCHHT model atmospheres.}
\label{temp-struc}
\end{figure}
This is because the Sc~{\sc ii}~4247\,\AA\ line is formed at a height of 900--1100~km
above the photosphere. This can be seen from Figure~\ref{contrib}, where the
intensity contribution functions $C_I$ are plotted as a function of height using
different FAL model atmospheres at the line center wavelength.
The temperature structures of the FAL models as a function of height are shown
in Figure~\ref{temp-struc}a (see Section~\ref{fal-models} for a discussion of these models).
The function $C_I$ is defined as
\citep[see][]{1994ASSL..189.....S,1999A&A...341..902F}
\begin{equation}
C_{I}(\tau,\mu) = \frac{1}{\mu}S_{I}(\tau,\mu)\,e^{-\tau/\mu},
\label{c-i}
\end{equation}
with $d\tau=-k_{l}dz$, where $k_l$ is the line absorption coefficient, and
\begin{equation}
I_{\lambda}(\mu,z=\infty) = \int_{-\infty}^{\infty} C_{I}(z^\prime,\mu)\, dz^\prime.
\end{equation}
From Equation~(\ref{c-i}), the contribution function is proportional to the
source function $S_I$. Since $S_I$ differs for each model atmosphere,
the contribution functions also differ. However, the maximum contributions from the FAL
model atmospheres fall within the height range 900--1100~km.
At these heights, the branching ratio
$B^{(2)}$ is nearly zero as seen from Figure~{\ref{branch}}.
We choose $D^{(2)}(J)=0.243\,\Gamma_E$ for further computations.
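For completeness, Equation~(\ref{c-i}) is simple to evaluate on a discrete height grid. A schematic implementation (only a sketch with illustrative variable names, not the RH-code):
\begin{verbatim}
import numpy as np

def contribution_function(z, k_l, S_I, mu=0.1):
    """C_I = S_I exp(-tau/mu)/mu on a height grid z (increasing
    upwards); k_l and S_I are sampled on the same grid."""
    # optical depth measured from the top of the atmosphere: dtau = -k_l dz
    seg = 0.5*(k_l[1:] + k_l[:-1])*np.diff(z)     # trapezoidal segments
    tau = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])
    return S_I*np.exp(-tau/mu)/mu

# the emergent intensity is then the height integral of C_I:
# I = np.trapz(contribution_function(z, k_l, S_I, mu), z)
\end{verbatim}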
\begin{figure}[ht]
\centering
\includegraphics[width=6.8cm]{fig4.eps}
\caption{ The branching ratios $A$ and $B^{(K)}$, with $K=0$ and $2$,
as a function of height computed using the FALX and FALP model atmospheres.}
\label{branch}
\end{figure}
\section{Observations}
\label{obs}
The observations of the Sc~{\sc ii} 4247\,\AA\ line analyzed in this
paper were recorded on September 15, 2012 at IRSOL, Switzerland using the
ZIMPOL-3 imaging polarimeter \citep{2010SPIE.7735E..66R}.
The photoelastic modulator (PEM) followed by a linear polarizer (beam splitter)
was used as the polarization analyzer.
Though the telescope is almost free from instrumental polarization and cross talk effects around
the equinox, to minimize residual instrumental signatures, a glass compensation
plate was inserted in the optical path between the calibration optics and the analyzer.
This also reduces the residual linear polarization offset.
The optics was adjusted such that the positive $Q$ represents the
linear polarization parallel to the spectrograph slit.
An image derotator (Dove prism) placed between the analyzer and the slit-jaw
allowed us to rotate the solar image and to compensate for the solar rotation. The
analyzer and the calibration optics were also rotated correspondingly.
The observations were performed at a quiet region with the spectrograph slit
placed parallel to the solar East limb. The spectrograph grating angle
and a prefilter were selected to work with the 13$^{\rm th}$ spectral order.
On the CCD we got a resolution of 1.44\arcsec\ per pixel along the
spatial direction and 5.25 m\AA\ per pixel along the spectral direction.
Three measurements were obtained by placing the slit at 5\arcsec, 15\arcsec, and 25\arcsec\ from the solar limb.
The observations at each $\mu$-position consisted of a sum of 1000 frames obtained with an exposure of 1 sec,
making the total exposure time about 16 minutes.
\begin{figure}[htbp]
\centering
\includegraphics[width=7.0cm]{fig5.eps}
\caption{CCD image showing $(I,Q/I)$ for the Sc~{\sc ii} 4247\,\AA\ line
recorded on September 15, 2012 using the ZIMPOL spectropolarimeter at IRSOL, Switzerland.}
\label{ccd-image}
\end{figure}
The image motion perpendicular to the limb was compensated with a glass tilt-plate.
The tilt of the plate was determined automatically with limb-recognition software using
the information in the slit-jaw image.
The Stokes $(I,Q/I)$ images shown in Figure \ref{ccd-image} were obtained after the data reduction.
We also did a flat-field recording by moving the telescope around
the disk center while recording 20 frames. The flat-field
observations were used to correct the intensity images.
The observed $(I,Q/I)$ profiles used in this paper were obtained after performing a
spatial averaging from 60\arcsec\ to 140\arcsec\ along the slit.
\subsection{Determining the absolute zero level of polarization}
\label{zero-level}
The absolute zero level of polarization is determined
using the blend lines as described in
\citet[][see also \citealt{1998A&A...329..319S}]{2005A&A...429..713S}.
According to this method, the relative line depths of the depolarizing blend lines in
Stokes $I$ and $Q/I$ are related with the following one-parameter model as
\begin{equation}
\left(\frac{p_c-p}{p_c}\right) = \left(\frac{I_c-I}{I_c}\right)^\alpha,
\label{pc-ic}
\end{equation}
where $I_c,\,p_c$ are the intensity and polarization of the continuum,
and $I,\,p$ are the respective quantities for the blend lines. $\alpha$
is a free model parameter that determines the shape of the depolarizing lines.
We choose $\alpha=0.6$ for further analysis.
Figure~\ref{sc-stray} shows the comparison between the observed profile (solid line)
and the profile computed using Equation~(\ref{pc-ic}) (dotted line in the second panel).
This dotted line represents $p_c[1-(1-\frac{I}{I_c})^{0.6}] - p_0$.
Here $p_0$ is a free model parameter that represents the apparent level of the true zero point
of the polarization scale.
The blend line depth is sensitive to the
value of $p_c$. To get the observed line depths we need $p_c=0.15\%$ $(p_{c,obs})$.
Also, to match the solid and the dotted profiles, a shift of $p_0=0.07\%$
has to be applied.
As seen from this figure, we obtain a good match between the
solid and the dotted profiles for this set of parameters.
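A sketch of how the two free parameters may be determined in practice (assuming arrays \verb|I_norm| and \verb|p_obs| sampled over the depolarizing blend lines; the variable names are illustrative):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

ALPHA = 0.6    # fixed shape parameter, as adopted above

def blend_model(I_norm, pc, p0):
    # p = pc*[1 - (1 - I/Ic)^alpha] - p0, cf. the one-parameter model above
    return pc*(1.0 - (1.0 - I_norm)**ALPHA) - p0

# I_norm = I/Ic and p_obs = Q/I (in %) over the blend lines:
# (pc_fit, p0_fit), _ = curve_fit(blend_model, I_norm, p_obs, p0=(0.15, 0.07))
\end{verbatim}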
\begin{figure}[ht]
\centering
\includegraphics[width=6.8cm]{fig6.eps}
\caption{Fit to the blend lines around the Sc~{\sc ii}~4247~\AA\ line profile
using Equation~(\ref{pc-ic}). The solid lines represent the observations, and the dotted line
in the second panel represents $p_c[1-(1-\frac{I}{I_c})^{0.6}] - p_0$.
To obtain a fit we choose $p_c=0.15\%$ and $p_0=0.07\%$. The dashed lines show the
observed profiles corrected for 2\% stray light. The dashed and solid lines nearly overlap.}
\label{sc-stray}
\end{figure}
\subsection{Stray light correction}
\label{stray-correction}
Next, we applied a stray light correction of 2\% of the
continuum to both $I$ and $Q/I$. For correcting the $Q/I$ profile
we have used the value of $p_c$ determined above.
The details of the steps followed are
given in \cite{2014arXiv1407.5461S}.
The comparison between the observed profiles with stray light correction (dashed line)
and without (solid line) is shown in Figure~{\ref{sc-stray}}.
The stray light corrected observed profiles
nearly overlap with the profiles without this correction.
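For illustration, a minimal form of such a correction, under the simplifying assumption of spectrally flat and unpolarized stray light, is given below; the exact prescription for $Q/I$, which involves the value of $p_c$, can be found in the reference cited above.
\begin{verbatim}
def remove_stray_light(I_obs, QI_obs, I_c, s=0.02):
    """Remove a spectrally flat stray-light contribution s*I_c.

    Illustrative assumption: the stray light is unpolarized, so it
    adds s*I_c to Stokes I while leaving Stokes Q unchanged, and
    Q/I is rescaled accordingly."""
    I_true = I_obs - s*I_c
    QI_true = QI_obs*I_obs/I_true
    return I_true, QI_true
\end{verbatim}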
\section{Comparing the theoretical and the observed Stokes profiles}
We compute the theoretical Stokes profiles using a procedure similar to the one described in \citet[][see also
\citealt{2011ApJ...737...95A,2012A&A...541A..24S,2013ApJ...768..163S}]{2005A&A...434..713H}.
We use the PRD-capable MALI (Multi-level Approximate Lambda
Iteration) code developed by \citet[][referred to as the RH-code]{2001ApJ...557..389U}
to compute the opacities, collision rates and intensity. These quantities are then given as inputs
to the polarized radiative transfer equation defined in Equation~(1) of \cite{2013ApJ...768..163S}.
The expressions used to compute quantities such as the line and continuum source vectors, and their sum
are the same as Equations~(5), (9), and (4) of \cite{2013ApJ...768..163S}.
The Stokes profiles are computed by solving the transfer equation perturbatively
\citep[see][]{1987A&A...178..269F,2002A&A...395..305N}.
The Sc~{\sc ii} atom model which is used as an input to the RH-code is constructed with
eight $J$-levels coupled by six line transitions
and ten continuum transitions. The main line is treated in PRD and the
others are treated under the assumption of complete frequency redistribution (CRD).
The angle-averaged redistribution functions of \cite{1962MNRAS.125...21H} are used for intensity computations
and the angle-averaged redistribution matrix presented in \cite{2013ApJ...768..163S} is used
for computing the polarization. The continuum is computed
by assuming frequency coherent scattering and its expression is given in Equation~(9) of \cite{2013ApJ...768..163S}.
In the RH-code, all the blend lines are assumed to be depolarizing and are treated under LTE.
We compare the observed and the theoretical profiles computed using
several model atmospheres and this is discussed below, in detail.
\begin{table} [ht]
\caption{HFS constants in MHz}
\label{table-2}
\vskip1.0pt
\centering
\begin{tabular}{ccccc}
\hline
Level & \multicolumn{2}{c}{Experiment} & \multicolumn{2}{c}{Theory} \\
 & A & B & A & B \\
\hline
Lower & 128.2(8) & $-39(11)$ & 146.8 & $-25.5$ \\
Upper & 215.7(8) & 18(7) & 202.5 & $-10.8$ \\
\hline
\end{tabular}
\end{table}
\subsection{FAL models}
\label{fal-models}
The FAL realistic 1D model atmospheres are taken from \citet[][FALA, FALC, FALF, and FALP models]
{1993ApJ...406..319F} and \citet[][FALX model]{1995itsa.conf..303A}.
Figure~\ref{temp-struc}a shows their temperature structures as a function of height.
The theoretical $(I,Q/I)$ profiles computed using the FAL models are shown in Figure~{\ref{fig2}}.
We have used the experimentally determined HFS constants given in Table~\ref{table-2} to
calculate the energies of the $F$-states.
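For reference, the $F$-state energies follow from the standard magnetic-dipole plus electric-quadrupole hyperfine formula (we assume this standard form; $I_s=7/2$ is the nuclear spin of Sc),
\[
E_F = E_J + \frac{A}{2}K
+ B\,\frac{\frac{3}{2}K(K+1) - 2I_s(I_s+1)J(J+1)}{2I_s(2I_s-1)\,2J(2J-1)},
\qquad K = F(F+1) - I_s(I_s+1) - J(J+1).
\]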
The profiles in this figure are smeared using a Gaussian with FWHM=80\,m\AA.
This smearing contains contributions from both the instrumental profile and macroturbulent velocity fields.
The instrumental broadening is about 40\,m\AA; the rest corresponds to a macroturbulent velocity
of 2.9 km/s.
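Assuming the two Gaussian widths add in quadrature, the macroturbulent contribution to the smearing is
\[
\sqrt{80^2-40^2}\ {\rm m\AA} \approx 69\ {\rm m\AA},
\qquad
V_{\rm macro} = \frac{c}{2\sqrt{\ln 2}}\,\frac{69\ {\rm m\AA}}{4247\ {\rm \AA}} \approx 2.9\ {\rm km\,s^{-1}}.
\]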
Upon comparing the theoretical and observed profiles, it is evident that
our present treatment cannot provide an exact match to (a) the rest intensity;
(b) the triple peak structure in $Q/I$;
and (c) the continuum polarization.
The theoretical intensity profiles are deeper than the observed profile and the
values of rest intensity do not show much sensitivity to the variation in the model atmospheres
within the FAL set.
The theoretical $Q/I$ profiles do not show the triple peak structure
for any of the FAL model atmospheres, whereas the observed $Q/I$ profile shows
a prominent triple peak structure.
\begin{figure}
\centering
\includegraphics[width=7.0cm]{fig7.eps}
\caption{Theoretical $(I,Q/I)$ profiles from the five standard FAL model
atmospheres.}
\label{fig2}
\end{figure}
The values of the continuum polarization ($p_{c,th}$) from the FAL models
are greater than the value determined using the
blend lines (Section~{\ref{zero-level}}).
Such a discrepancy between $p_{c,th}$ and $p_{c,obs}$ has been studied in detail
by \cite{2005A&A...429..713S}. In that paper, the author points out that
for $\lambda>4000$\,\AA, $p_{c,th} > p_{c,obs}$ \citep[see Figure~6 of][]{2005A&A...429..713S}.
We also note that a similar problem with $p_{c,th}$ and $p_{c,obs}$ was
encountered while modeling the Cr~{\sc i} triplet around 5206\,\AA\ in \cite{2012A&A...541A..24S}, and
the model atmosphere FALF had to be modified in the deeper layers
of the atmosphere, where the continuum is formed, to fit the $p_{c,obs}$.
Here too we face a similar problem.
Among the FAL models, the hotter the model atmosphere, the smaller the value of $p_{c,th}$. This is because,
as seen from Figure~\ref{temp-struc}a, the hotter models have smaller temperature gradients between 0 and 200~km,
which leads to smaller anisotropy. The height of continuum formation can be obtained
using the contribution functions shown in Figure~\ref{cont-aniso}. In the bottom panel of this figure,
we have plotted the anisotropy factor $J^{2}_{0}/J^{0}_{0}$ \citep[defined for instance in][]{2005A&A...434..713H}
at a continuum wavelength as a function of height. Among the FAL models, the FALP model gives the smallest
anisotropy for the continuum. This results in smaller values of $p_{c,th}$.
The values of $p_{c,th}$ computed from the FAL models are given in Table~\ref{cont-pol}.
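For reference, the radiation-field tensor components entering this anisotropy factor take the standard form (written here for an unpolarized, axially symmetric radiation field; the full definition in the cited work also includes the Stokes $Q$ contribution)
\[
J^0_0 = \oint \frac{d\Omega}{4\pi}\, I_\nu , \qquad
J^2_0 = \oint \frac{d\Omega}{4\pi}\, \frac{1}{2\sqrt{2}}\,\left(3\mu^2-1\right) I_\nu ,
\]
where $\mu$ is the cosine of the polar angle of the ray. A steeper source function gradient in the continuum-forming layers produces a more vertically peaked radiation field and hence a larger $J^2_0/J^0_0$.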
\begin{figure}[htbp]
\centering
\includegraphics[width=7.0cm]{fig8.eps}
\caption{Contribution functions and anisotropy factors
at a continuum wavelength plotted as a function of height for several FAL model
atmospheres.}
\label{cont-aniso}
\end{figure}
The difficulties in reproducing the observed Stokes profiles
can originate from various sources such as an incorrect choice of model atmosphere,
multidimensional radiative transfer effects, an incomplete treatment of atomic physics, etc.
To shed light on these issues we now try to model the observed profiles using
another set of models with temperature structures different from the FAL models.
\subsection{FCHHT models}
The FCHHT models are the updated models published by \cite{2009ApJ...707..482F}.
The temperature structures of these models as a function of height are shown in
Figure~\ref{temp-struc}b.
For comparison, the temperature structures of the FALC and FALP models are also shown.
As seen from this figure, the temperature gradients in the FCHHT models are similar
to those of the FAL models up to 500 km and deviate from them above that height. Among the FCHHT models,
the P model, which is also the hottest, has the smallest temperature gradient,
while the B model has the largest gradient and is also the coolest.
In the height range 0--500 km, it is the FALP model which has the smallest gradient among the FAL models,
smaller even than that of the FCHHT-P model. A comparison between the FAL and FCHHT models
is also presented in \cite{2012A&A...540A..86R}.
The $(I,Q/I)$ profiles computed
from the FCHHT models are shown in Figure~{\ref{fchht-profiles}}. We find that the updated models
too fail to reproduce the observed Stokes profiles. We continue to face difficulties
in reproducing the rest intensity, triple peak structure in $Q/I$, and continuum polarization.
We discuss each of these issues in the sections below.
\begin{figure}[htbp]
\centering
\includegraphics[width=7.0cm]{fig9.eps}
\caption{Stokes $(I,Q/I)$ profiles of the Sc~{\sc ii} 4247\,\AA\ line computed using the
FCHHT models.}
\label{fchht-profiles}
\end{figure}
\subsubsection{Intensity profile}
In addition to a mismatch in line depth, the intensity profiles from the
FCHHT models are wider than the observed profile.
The FCHHT-B, D, F, and H models produce intensity profiles wider and shallower than the
observed profile, whereas the FCHHT-P model produces a profile deeper than the observed one.
Hence in principle it should be possible to construct a two-component model, like the one
described in \cite{2006A&A...449L..41H}, by
mixing two appropriate models to get the required rest intensity. However, such a fit would
give an intensity profile that is much too wide.
There also could be a role of multidimensional radiative transfer effects
in shaping the intensity profiles. However, the scope of the present paper is restricted to
1D models only. Therefore it is hard to quantify to what extent the multidimensional
effects play a role.
\subsubsection{Central peak in $Q/I$ profile}
The FCHHT models have different temperature gradients in the upper
chromospheric layers, where the line is formed, compared to the FAL models. However, this is of
little help in improving the fit to the triple peak structure in $Q/I$.
Hence, the failure of both FAL and FCHHT models, with very different temperature structures,
and the fact that the central peak is mostly governed by the $F$-state interference effects,
indicates that the mismatch at the central peak in $Q/I$ is originating from an incomplete
treatment of atomic physics. An effect not accounted for in our treatment is the lower-level Hanle effect.
\cite{2009A&A...508..933B} showed that this effect can increase the
amplitude of the central peak in $Q/I$ when the magnetic field strength $B\approx1$\,G.
\subsubsection{Continuum polarization}
The $p_{c,th}$ from the FCHHT models are similar in magnitude to the ones predicted by the FAL models.
Among the FCHHT models, the warmest model, FCHHT-P, predicts the smallest $p_{c,th}$.
For this reason, the FCHHT-P model provides a better fit to the PRD peaks when compared to other FCHHT models.
However, this fit is not better than the one obtained from FALP model.
The values of $p_{c,th}$ from FCHHT models are given in Table~\ref{cont-pol}. Once again, none
of these values match with $p_{c,obs}$.
The FALP and FCHHT-P models, though they represent faculae conditions,
provide better fits to the PRD wing peaks and continuum because of the reasons stated above.
Hence, in the sections below we conduct a few more tests using the FALP model atmosphere
and vary some of the atomic parameters.
Note that in all the figures, the horizontal thin solid line in $Q/I$
represents the value of $p_{c,obs}$.
\subsubsection{Effect of macro and micro-turbulent velocities}
Figure~{\ref{fchht-profiles}} shows that the intensity profiles computed
from the FCHHT models are wide compared to the observed profile. Both macro and micro-turbulent velocities
influence the width of the intensity profiles. Reducing the macro-turbulence leads to a decrease in the smearwidth.
In the top two panels of Figure~\ref{fchht-smear}, we show a comparison between the Stokes profiles
computed using a smearwidth of 80 m\AA\ ($V_{\rm turb-ma}$=2.9 km/s) and 40 m\AA\ ($V_{\rm turb-ma}$=0 km/s).
In the case of the 4247\,\AA\ line we do not see a significant variation in ($I,Q/I$) with the variation in smearwidth.
However, upon reducing the micro-turbulent
velocity ($V_{\rm turb-mi}$) by a constant factor (1.5 and 4.0), we see that the width of the intensity
profile decreases significantly. This is shown in
the bottom two panels of Figure~\ref{fchht-smear}. In addition,
the central dip in $Q/I$ gets deeper. Thus it is not possible to improve the fit to
the observed profile by varying the turbulent velocity.
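The smearing itself is a plain convolution; as a minimal sketch (our illustration, not the actual code; \texttt{wstep\_mA} is the assumed wavelength sampling of the synthetic profile in m\AA):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smear(profile, wstep_mA, fwhm_mA):
    # Convolve a synthetic Stokes profile with a Gaussian of the
    # given FWHM (in mA), converting the width to pixels.
    sigma_mA = fwhm_mA / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter1d(profile, sigma_mA / wstep_mA)
\end{verbatim}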
\begin{figure}[htbp]
\centering
\includegraphics[width=7.0cm]{fig10.eps}
\caption{Stokes $(I,Q/I)$ profiles of the Sc~{\sc ii} 4247\,\AA\ line computed by varying the
macro-turbulent ($V_{\rm turb-ma}$) velocity in the top two panels and micro-turbulent velocity ($V_{\rm turb-mi}$)
in the bottom two panels. Model atmosphere used is FCHHT-B.}
\label{fchht-smear}
\end{figure}
\begin{table}[ht]
\caption{Continuum polarization from FAL and FCHHT models}
\label{cont-pol}
\centering
\begin{tabular}{|cccc|}
\hline
Atmosphere & $p_{c,th}$(\%) & Atmosphere & $p_{c,th}$ (\%)\\
\hline
FALP & 0.16 & FCHHT-P & 0.19 \\
FALF & 0.19 & FCHHT-H & 0.22 \\
FALC & 0.21& FCHHT-F & 0.23\\
FALA & 0.21 & FCHHT-D & 0.24 \\
FALX & 0.22 & FCHHT-B & 0.25 \\
\hline
\end{tabular}
\end{table}
\subsection{Studying the sensitivity of the $(I,Q/I)$ profiles}
We now study the response of the Stokes profiles by varying a few atomic parameters.
We note that in some of these tests, we do recover the triple peak structure in $Q/I$.
However, the modifications made to the atomic parameters are artificial and
are done only for the purpose of demonstrating the sensitivity of the Stokes profiles
on these parameters (see Figures~\ref{no-hfs}--\ref{hfs-mod}).
These tests also demonstrate that the principle of spectroscopic
stability is satisfied, which supports the correctness of our treatment.
Since the intensity profiles do not show much sensitivity to these tests, we only show the
$Q/I$ profiles.
\subsubsection{Effects of $F$-state interference}
It is well known that the decoherence caused by the hyperfine structure splitting of the $J$ states
leads to a depolarization in the core of the $Q/I$ line profile.
In the case of the Sc~{\sc ii} 4247\,\AA\ line, the splitting between
the $F$-states is quite large, and hence so is the decoherence.
This leads to an enhanced depolarization in the line core resulting in a
fully suppressed central peak.
When the nuclear spin is set to $I_s=0$, we recover the triple peak structure in $Q/I$
as demanded by the principle of spectroscopic stability \citep{1994ASSL..189.....S}.
Figure~{\ref{no-hfs}} shows the comparison between the profiles with and without HFS.
\begin{figure}[htbp]
\centering
\includegraphics[width=6.8cm]{fig11.eps}
\caption{The $Q/I$ profiles computed with and without hyperfine structure splitting.}
\label{no-hfs}
\end{figure}
To better understand the large depolarization, we compare the $F$-state splitting
with the radiative widths of the upper levels. We recall that, in our treatment, the lower levels
are assumed to be infinitely sharp and hence do not interfere.
The interfering upper $F$-states, the splitting between them, and the ratio ($\Omega$) between the
splitting and the radiative width are given in Table~\ref{omega}, where the
Einstein $A$ coefficient is taken as $1.29\times10^{8}\,{\rm s}^{-1}$.
\begin{table*}[ht]
\caption{Comparison between the $F$-state splitting and their radiative widths}
\label{omega}
\centering
\begin{tabular}{|ccccc|}
\hline
$F_{u1}$ & $F_{u2}$ & $\Delta E$ (Hz) & $s=(\Delta \lambda)_F$ (m\AA) & $\Omega$ \\
\hline
3/2 & 5/2 & $5.3121\times10^{8}$ & 2.605 & 25.874\\
3/2 & 7/2 & $1.27941\times10^{9}$ & 6.274 & 62.296\\
3/2 & 9/2 & $2.24910\times10^{9}$ & 11.029 & 109.542\\
3/2 & 11/2 & $3.44605\times10^{9}$ & 16.89 & 167.844\\
5/2 & 7/2 & $7.48200\times10^8$ & 3.669 & 36.442\\
5/2 & 9/2 & $1.71788\times10^{9}$ & 8.424 & 83.672\\
5/2 & 11/2 & $2.91484\times10^{9}$ & 14.293& 141.972\\
7/2 & 9/2 & $9.69685\times10^8$ & 4.755 &47.230\\
7/2 & 11/2 & $2.16664\times10^{9}$ & 10.624 & 105.518\\
9/2 & 11/2 & $1.19695\times10^{9}$ & 5.869 & 58.297\\
\hline
\end{tabular}
\end{table*}
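Here $\Omega$ is simply the splitting expressed in units of the natural width; in our notation (this explicit form is ours, but it is consistent with the tabulated values)
\[
\Omega = \frac{\Delta E}{A_{ul}/2\pi} = \frac{2\pi\,\Delta E}{A_{ul}},
\]
so that, for example, the first row of Table~\ref{omega} gives $2\pi\times5.3121\times10^{8}/(1.29\times10^{8}) \approx 25.9$.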
We know that when $\Omega$ is close to unity, the splitting sensitivity is maximum.
But in the case of the Sc~{\sc ii} 4247\,\AA\ line system, we see from Table~\ref{omega} that $\Omega$
is much greater than one. This partly explains the large depolarization in $Q/I$ at the line center.
When the HFS constants are reduced by a factor of 50 or 100, such that $\Omega$
approaches unity, the central peak rises. This again
is a proof of the principle of spectroscopic stability being satisfied. These profiles are shown in
Figure~\ref{sc-rescale}. Rescaling the HFS constants reduces the splitting between $F$-states and hence
the decoherence.
\begin{figure}[ht]
\centering
\includegraphics[width=6.8cm]{fig12.eps}
\caption{The $Q/I$ profiles computed by reducing the HFS constants of the upper level by factors of 50 and 100.}
\label{sc-rescale}
\end{figure}
Also, as seen from Equation~(\ref{comb-prof}) the
$F_b=11/2 \to F_a=11/2$ is the strongest transition and it
has maximum coupling with the $F_b=9/2 \to F_a=11/2$ transition. In other words, the
shape of the emergent $Q/I$ profile is controlled mainly by these transitions and
the interference between their upper levels.
When the HFS wavelengths of these two transitions are set equal to each other,
we recover the central peak in $Q/I$ as shown in Figure~\ref{hfs-mod}.
Such a modification, once again, reduces the decoherence and hence the depolarization.
\begin{figure}[ht]
\centering
\includegraphics[width=6.8cm]{fig13.eps}
\caption{The $Q/I$ profiles computed by modifying one of the HFS wavelengths and
by modifying the abundance of Sc.}
\label{hfs-mod}
\end{figure}
One can notice from Figure~\ref{hfs-mod} that the width of the theoretical $Q/I$
profile and the amplitude of the PRD peaks are larger than in the observed profile.
Both of these are sensitive to the solar abundance of Sc.
\cite{2008A&A...481..489Z} discuss the uncertainty in the abundance value of
Sc in the Sun. Their study is based on modeling the observed intensity
profiles of different Sc lines.
They find that different abundances are needed to fit different lines
and conclude that the abundance value is $3.07\pm 0.04$.
The long dashed line in Figure~\ref{hfs-mod} is the profile computed
with an abundance of 2.90. With this reduced abundance, the fit to the
PRD peaks and the near wing continuum in the $Q/I$ profile improves.
\subsubsection{Collisions}
In addition to the HFS, collisions can significantly modify the line core
polarization of the observed profiles.
The contribution from collisional redistribution depends on the branching ratio $B$.
In the case of the Sc~{\sc ii} 4247\,\AA\ line, this contribution is insignificant.
Figure~\ref{fig2a} shows the individual contributions from
type-II frequency redistribution and CRD,
with their corresponding branching ratios being multiplied.
We note that in our computations the type-III redistribution has been
replaced with CRD like in \cite{2012A&A...541A..24S,2013ApJ...768..163S}
to reduce the computing time. This replacement does not affect the Stokes profiles.
\begin{figure}[ht]
\centering
\includegraphics[width=6.8cm]{fig14.eps}
\caption{Contributions to the $Q/I$ profile from type-II frequency
redistribution and CRD. The profiles before and after multiplying the branching ratios are shown.}
\label{fig2a}
\end{figure}
The variation of the branching ratios $A$ and $B^{(K)}$ as a function
of height in the atmosphere for the
FALP model is shown in Figure~{\ref{branch}}. $B^{(K)}$
takes a value close to zero in the higher layers of the atmosphere. Since the
Sc~{\sc ii}~4247\,\AA\ line is formed in the upper chromosphere, the
contribution to the line center is primarily from type-II redistribution.
The $Q/I$ profile $B^{(K)} \times$ CRD goes nearly to zero at the line center
(dotted line in Figure~{\ref{fig2a}}). Thus we can exclude the
possibility that an approximate treatment of collisions
might be contributing to the difficulties in reproducing the $Q/I$ central peak.
Figure~\ref{fig2a} also shows the $Q/I$ profiles computed with
type-II redistribution and CRD alone (without $A$ and $B^{(K)}$ multiplied).
The two side peaks on either side of the central peak are formed due to PRD
and can be reproduced only by type-II redistribution. CRD alone cannot reproduce
them. Thus a proper account of PRD is essential to model this line.
\subsubsection{Variation in $\mu$}
The observed profiles studied so far were recorded at a limb distance $\mu=0.1$.
When the line profiles were observed at nearby $\mu$ positions, they
showed a large variation in the polarization of the central peak.
These profiles are shown in Figure~\ref{allarcsec}. At $\mu=0.145, 0.175$,
the central peak is depolarized and only the two PRD side peaks stand out.
The larger CLV of the central peak as compared with the side peaks
is to be expected from spatially varying magnetic fields, since the Hanle effect
can only operate in the Doppler line core but not in the wings. This behavior
is supported by the observed spatial fluctuations along the spectrograph slit:
we find the line core amplitude of $Q/I$ to vary much more than the side peaks.
In contrast, the theoretical profiles computed for different $\mu$ values in
the absence of magnetic fields (cf. Figure~\ref{sc-all-mu}) do not show a variation
of this kind.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=7.0cm]{fig15.eps}
\caption{The $Q/I$ profiles of Sc~{\sc ii} 4247\,\AA\ line observed at
different limb distances.}
\label{allarcsec}
\end{center}
\end{figure}
There are therefore strong reasons to believe that the line core is greatly influenced
by magnetic fields via the Hanle effect. This influence is normally in the form of
depolarization, i.e., a reduction of the polarization in the core. However, the Hanle effect
may also go in the opposite direction when the atomic polarization in the lower level
is considered, as found by \cite{2009A&A...508..933B} for fields of order 1\,G. It
therefore remains a possibility that the observed $Q/I$ central peak that we are
unable to reproduce with our non-magnetic modeling could be due to the Hanle effect
in the lower atomic level.
\begin{figure}[ht]
\centering
\includegraphics[width=6.8cm]{fig16.eps}
\caption{The theoretical $Q/I$ profiles computed at various limb distances.}
\label{sc-all-mu}
\end{figure}
\section{Conclusions}
In this paper, we have tried to study the Sc~{\sc ii} 4247\,\AA\ line, the polarization
profiles of which are governed by the $F$-state interference effects.
The observations used by us were taken at IRSOL with the ZIMPOL-3 polarimeter in September 2012
in a quiet region near the solar limb.
Due to the large nuclear spin, the upper and lower $J$-levels
split into five $F$-states each, giving rise to thirteen radiative transitions between them.
The decoherence between the $F$-states is quite large and the emergent
polarization profiles are sensitive to the energy difference between the $F$-states.
We have investigated
the sensitivity of the theoretical Stokes profiles, in the absence of magnetic fields, to different
atmospheric and atomic parameters.
All the 1-D model atmospheres tried by us (FAL and FCHHT models) fail to reproduce the triple
peak structure in $Q/I$ and also the rest intensity. Also, the continuum polarization
predicted by all models, except FALP, is larger than the observed value. The PRD peaks and the near wing continuum
in the theoretical profiles match closely with the observed ones only for the FALP model,
but this model represents faculae conditions. We also show that
a proper treatment of PRD is essential to model this line, and CRD alone cannot reproduce the PRD peaks.
The intensity profiles computed from the FAL models match well with the observed profile
except at the line center. In case of FCHHT models, both the line width and line depth of the
theoretical profiles match poorly with the observations. It may be possible to improve the fit to the
intensity profiles if multidimensional radiative transfer effects are taken into account.
The $Q/I$ core peak is more sensitive to variations in the atomic parameters than to the atmospheric parameters.
Observations indicate that the central peak in $Q/I$ is quite sensitive to the Hanle effect.
There might be positive contributions from the magnetic field
to the central peak polarization through the lower level Hanle effect for field strengths of order 1\,G.
Thus, in spite of a detailed account of PRD, radiative transfer and HFS effects we are unable to
reproduce the central peak. All these results lead us to believe that there might be other
physical effects, unaccounted for in our treatment, playing a role in shaping the $Q/I$ profiles.
One such effect is the mentioned lower-state Hanle effect, a possibility that needs to be explored in the future.
\acknowledgments
We acknowledge the use of HYDRA
cluster facility at the Indian Institute of Astrophysics for
computing the results presented in the paper.
We are grateful to Dr. Han Uitenbroek for providing us with his
realistic atmospheric modeling code. HNS would like to thank
Ms. H. D. Supriya for interesting discussions.
Research at IRSOL is financially supported by State Secretariat for
Education, Research and Innovation, SERI, Canton Ticino, the city of
Locarno, the local municipalities,
the foundation Aldo e Cele Dacc\`o,
and the Swiss National Science
Foundation grant 200021-138016. RR acknowledges financial support by the
Carlo e Albina Cavargna foundation. We thank the Referee for
useful comments and suggestions which helped improve the results presented in this paper.
\section{Introduction}
The origin of the glass transition and the stability of supercooled liquids against crystallization are
still not well understood and remain open questions \cite{inoue_international,tanaka-epje}. It is usually found that during fast cooling, owing to a large change in viscosity,
crystallization can be avoided and the system is vitrified. The vitrified materials are tougher, stronger and have large strain limits.
Compared to their crystalline counterparts, these glassy materials can easily be used to prepare homogeneous, isotropic solids of large dimensions.
Although vitrification is desirable, not all supercooled systems form glasses; many undergo crystallization.
Thus it is important to understand the origin of the stability of supercooled liquids against crystallization.
In the metallic glass community some empirical rules, based on the analysis of the glass forming ability (GFA) of metallic alloys, are used \cite{Acta_mater}.
The rules state that i) the system should have more than three components, ii) the size ratio between the components should be about 12 \%, and iii) the enthalpy
of mixing should be negative. Although having more than three components is a desirable criterion for GFA, some binary metallic alloys are also known to form
glasses \cite{inoue_international}. One such glass former, Ni$_{80}$P$_{20}$, has been the motivation behind the development of a well-known model
system, the Kob-Andersen (KA) model \cite{kob,stillinger}. This model system has been extensively used in computer simulation studies of supercooled liquids \cite{dyre,valdes-jcp}.
The KA model has never been
found to crystallize except for one case \cite{dyre}. However, the origin of its stability against crystallization is not fully understood
\cite{ dyre, valdes-jcp,harowell, harrowell-jpcb, harrowell-jcp, doye, vlot}.
There are some frustration based approaches to explain the stability of supercooled liquids
\cite{ tanaka-epje,harowell,frank, tarjus, tarjus-2, tarjus-charbo, tanaka-jnoncryst1,tanaka-jnoncryst2, tanaka-nature06,tanaka-vshaped-prl,tanaka22,tanaka23,tanaka29}.
The role of frustration in supercooling
was first invoked by Frank \cite{frank}, who pointed out that the local icosahedral ordering of a liquid, although it cannot be spanned in space, is locally more stable
than crystalline ordering.
Crystalline ordering wins only because it becomes economical when spanned over long range. Thus, when a liquid is cooled, it requires a substantial, costly rearrangement of
molecules to crystallize; this slows down the crystallization process and promotes supercooling below the melting point. Kivelson {\it et al.} have
proposed a frustration
theory connecting the slow dynamics in the system to the locally preferred structure (LPS) \cite{tarjus}. According to their theory, the liquid will prefer to freeze into
the locally preferred liquid structure (icosahedral for Lennard-Jones (LJ) liquids), which is different from the crystal structure. Since the local structure cannot tile
ordinary three-dimensional space, in trying to do so the liquid becomes geometrically frustrated and breaks up into domains.
The rearrangement in these domains gives rise to the slow dynamics and the glass transition.
A different picture of frustration has been proposed by Tanaka and co-workers
\cite{ tanaka-epje,tanaka-jnoncryst1,tanaka-jnoncryst2,tanaka-nature06,tanaka-vshaped-prl,tanaka22,tanaka23,tanaka29}. According to their theory,
the liquid-glass
transition is connected to crystallization \cite{tanaka-jnoncryst1,tanaka-jnoncryst2}. They have proposed that there is frustration between short-range
bond ordering, which forms the LPS,
and long-range density ordering, which gives rise to the crystal structure. This frustration determines the GFA of a system.
Thus the origin and the role of frustration differ among these different studies.
As mentioned before, the KA model has been extensively used to study the dynamics of supercooled liquids because of its stability against crystallization.
There have been a large number of studies by different groups devoted to the understanding of the kinetics of
crystallization \cite{ dyre, valdes-jcp,harowell, harrowell-jpcb} and also the stability of crystal phases \cite{harowell, harrowell-jcp, doye, vlot}.
These studies have been performed not only for the KA model, but for binary LJ mixtures in general.
Fernandez and Harrowell have performed a crystal phase analysis of binary LJ mixtures for different inter-species interaction lengths and also for different
compositions \cite{harrowell-jcp}.
Their study has revealed that for the KA model at T=0 the most stable equilibrium structure is a coexistence between the AB (CsCl) crystal and the pure A (fcc) crystal with a coherent
(001) interface. They have also suggested that the crystal growth of the KA model might be frustrated because of the competition
between the growth of the AB (CsCl form) and A (fcc form) structures \cite{harowell}. According to them this frustration might be the origin of the stability of the KA model.
Valdes {\it et al.} have studied the binary LJ mixture at different compositions \cite{valdes-jcp}. They claim that for the compositions where the system undergoes amorphization
they find either CsCl type or fcc-hcp type crystal seeds in the liquid. Thus they predict that, since the two structures do not coexist, there is no competition
between these two types of crystal growth.
Toxvaerd {\it et al.} have pointed out that a negative mixing enthalpy or energy leads to stable supercooled mixtures \cite{dyre}.
Doye {\it et al.} have performed an isolated stable cluster analysis of the preferred coordination of A atoms around the smaller B atoms, both for $x_A=0.80$ (the KA model),
which is known not to crystallize, and for $x_A=0.50$, which quite easily forms a CsCl type interpenetrating bcc crystal structure \cite{doye}.
They found that for $x_A=0.80$ the structures are related to the square anti-prism. Similar structures were also found to be present in the local structure
analysis of the supercooled KA liquid \cite{harrowell-jpcb, coslovich-pastore}.
Doye {\it et al.} have also found that for $x_A=0.50$ the stable structure is the CsCl type crystal.
Equimolar binary LJ systems for different inter-species interaction lengths ($\sigma_{12}$) are also known to crystallize in different forms depending on
the $\sigma_{12}$ values \cite{vlot}.
In this study we explore the interplay between crystallization and supercooling for a number of binary LJ mixtures at various compositions and inter-species interaction lengths
described by $s$ where $s= \sigma_{12} / \sigma_{11}$. We have used the local Bond Orientational Order (BOO) parameters and the local coordination number to identify
the locally preferred structures and crystal structures.
Recently the local BOO parameters have been extensively used by Tanaka and coworkers to study properties of not only crystals but also
supercooled liquids \cite{tanaka-nature12}.
The systems studied here all have
negative enthalpy of mixing, and the size ratio between the two components is kept fixed at 12$\%$. In this range of systems we have found that some easily
crystallize and some remain in the supercooled liquid state. The focus of this study is to understand: i) the origin of this variation in the crystallization behaviour and
ii) the origin of stability against crystallization in terms of the frustration between two crystal forms and also
the frustration between the LPS and the global structure.
The simulation details are given in the next section. Section 3 presents the results and discussion, and Section 4 ends with a brief summary.
\section{Simulation Details}
We have performed molecular dynamics simulations with variation of the composition and of the inter-species interaction length.
The simulated atomistic models are two-component mixtures of N=500 classical
particles, where particles of type
{\it i} interact with those of type {\it j} through the pair
potential $U_{ij}(r)$, where r is the distance between the pair.
$U_{ij}(r)$ is described by a shifted and truncated Lennard-Jones (LJ) potential, as given by:
\begin{equation}
U_{ij}(r)=
\begin{cases}
U_{ij}^{(LJ)}(r;\sigma_{ij},\epsilon_{ij})- U_{ij}^{(LJ)}(r^{(c)}_{ij};\sigma_{ij},\epsilon_{ij}), & r\leq r^{(c)}_{ij}\\
0, & r> r^{(c)}_{ij}
\end{cases}
\end{equation}
\noindent where $U_{ij}^{(LJ)}(r;\sigma_{ij},\epsilon_{ij})=4\epsilon_{ij}[({\sigma_{ij}}/{r})^{12}-({\sigma_{ij}}/{r})^{6}]$ and
$r^{(c)}_{ij}=2.5\sigma_{ij}$. Subsequently, we denote A and B type particles by indices 1 and 2, respectively.
The different models are distinguished by different choices of length and composition parameters. Length, temperature and
time are given in units of $\sigma_{11}$, ${\epsilon_{11}}/{k_{B}}$ and $\sqrt{{m\sigma_{11}^2}/{\epsilon_{11}}}$,
respectively.
Here we have simulated various binary mixtures
with the interaction parameters $\sigma_{11}$ = 1.0, $\sigma_{22}$ =0.88 , $\epsilon_{11}$ =1, $\epsilon_{12}$ =1.5,
$\epsilon_{22}$ =0.5, $m_{1}$ =1, $m_2$=0.5 and the inter-species interaction length $\sigma_{12}$
has been varied such that the size ratio $s = \sigma_{12}/\sigma_{11}$ varies from 0.70 to 0.94 with an interval of 0.02.
In this article $\sigma_{12}$ and $s$ have been used interchangeably because for $\sigma_{11}=1$, $s=\sigma_{12}$.
We have also
simulated systems with different compositions, varying $x_A$ from 0.50 to 0.90, where $x_A$ is the mole fraction of the bigger A type particles.
The systems with $s=0.94$ follow the Lorentz-Berthelot (LB) rule of mixing for the interaction lengths \cite{lorentz}. Note that the mixture with $s=0.80$ and $x_A=0.80$ is the
well-known Kob-Andersen (KA) model which is extensively used as a model supercooled liquid \cite{dyre,valdes-jcp}.
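As an illustration (a minimal sketch, not our production input; the actual simulations use LAMMPS as described below), the pair potential of Equation~(1) with the $s=0.80$ parameter set can be coded as:
\begin{verbatim}
import numpy as np

# Interaction parameters in reduced units for the s = 0.80 case.
SIGMA = {('A','A'): 1.00, ('A','B'): 0.80, ('B','B'): 0.88}
EPS   = {('A','A'): 1.00, ('A','B'): 1.50, ('B','B'): 0.50}

def u_lj(r, sig, eps):
    return 4.0 * eps * ((sig / r)**12 - (sig / r)**6)

def u_pair(r, pair):
    # Truncated at r_c = 2.5 sigma_ij and shifted so U(r_c) = 0.
    sig, eps = SIGMA[pair], EPS[pair]
    rc = 2.5 * sig
    return np.where(r <= rc, u_lj(r, sig, eps) - u_lj(rc, sig, eps), 0.0)
\end{verbatim}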
The molecular dynamics (MD) simulations have been carried out using the LAMMPS
package \cite{lammps}.
We have performed MD simulations in the isothermal-isobaric ensemble (NPT) using a Nos\'{e}-Hoover thermostat and Nos\'{e}-Hoover barostat with integration timestep 0.005$\tau$. The time
constants for Nos\'{e}-Hoover thermostat and barostat are taken to be 100 and 1000 timesteps, respectively.
The sample is kept in a cubic box with periodic boundary conditions. To study crystallization we have performed stepwise cooling with $\Delta T^*=0.1$. At each temperature the system has usually been equilibrated for 10\,ns; however, in certain
cases where it was difficult to crystallize, we have run the simulation for up to 10\,$\mu s$.
The bond orientational order (BOO) parameters were
first prescribed by
Steinhardt {\it et al.} to characterize specific crystalline structures \cite{steinhardt}. Leocmach {\it et al.} have shown that these BOO parameters can be used not only
for crystals but also for supercooled liquids where, although there is no clear crystalline order, a tendency towards crystalline ordering can be identified by a transient
local BOO analysis
\cite{tanaka-nature12}.
To characterize specific crystal structures, and also to identify the tendency towards crystallization in a liquid, we have calculated the local BOO
parameters ($q_{lm}$) of \textit{l}-fold symmetry as a $(2l+1)$-component vector,
\begin{eqnarray}
q_l(i) =\sqrt{\frac{4\pi}{2l+1}\sum_{m=-l}^{l}\arrowvert q_{lm}(i)\arrowvert^2} \nonumber \\
q_{lm}(i)=\frac{1}{N_i} \sum_{j=1}^{N_{i}} Y_{lm}(\theta(r_{ij}), \phi(r_{ij}))
\end{eqnarray}
where $Y_{lm}$ are the spherical harmonics, $\theta(r_{ij})$ and $\phi(r_{ij})$ are
spherical coordinates of a bond $r_{ij}$ in a fixed reference frame,
and $N_i$ is the number of neighbours of the {\it i}-th particle. Two particles are considered neighbours if $r_{ij}< r_{max}$, where $r_{max}$ is the first minimum of the radial distribution
function (RDF).
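A minimal sketch of this calculation (our illustration; \texttt{positions} and \texttt{neighbours} are assumed to come from a simulation snapshot and the RDF-based neighbour criterion above, and periodic images are ignored for brevity):
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

def q_l(i, positions, neighbours, l=6):
    # q_lm(i): average of Y_lm over the bonds of particle i.
    qlm = np.zeros(2 * l + 1, dtype=complex)
    for j in neighbours[i]:
        dx, dy, dz = positions[j] - positions[i]
        r = np.sqrt(dx*dx + dy*dy + dz*dz)
        theta = np.arccos(dz / r)   # polar angle of the bond
        phi = np.arctan2(dy, dx)    # azimuthal angle
        for m in range(-l, l + 1):
            # scipy convention: sph_harm(m, l, azimuth, polar)
            qlm[m + l] += sph_harm(m, l, phi, theta)
    qlm /= len(neighbours[i])
    # Rotationally invariant q_l of Eq. (2).
    return np.sqrt(4.0 * np.pi / (2*l + 1) * np.sum(np.abs(qlm)**2))
\end{verbatim}
Applied to an ideal 12-neighbour fcc environment, this routine returns $q_6\approx0.575$, which matches Table-1 and provides a useful consistency check.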
\section{Results}
The systems studied here have negative mixing enthalpy, and the size ratio between the two components is always kept at 12\%.
Crystallization has been identified by a sudden drop in the potential energy while gradually cooling the system. We have further quantified the crystallization
process by calculating the RDF and the local BOO parameters before and after the energy drop.
We have used the local BOO parameters to identify not only the crystal forms but also the transient ordering present in the liquids. For the range of systems studied here,
it is found that primarily face centered cubic (fcc),
body centered cubic (bcc), simple cubic (sc) and hexagonal close packed (hcp) structures are formed. The $q_4$ and $q_6$ parameters for these
different perfect crystal structures are listed in Table-1, which we have used to identify
our crystal structures. Instead of calculating the average $q_4$ and $q_6$ parameters, we have calculated the probability distribution of these values over individual
particles and over the length of the trajectory. Such a distribution provides more microscopic information regarding the tendencies of local structure formation even when
a perfect crystal structure is not achieved.
\begin{table}
\caption{Reduced invariants $q_4$ and $q_6$ for face-centered cubic (fcc), body-centered cubic (bcc), simple cubic (sc) and hexagonal close-packed (hcp) structures. }
\centering
\begin{tabular}{ l r r }
\hline
\hline
& $q_4$& $q_6$ \\
\hline
fcc & 0.191&0.575 \\
bcc & 0.036&0.511 \\
sc & 0.764&0.355 \\
hcp & 0.097& 0.485\\
\hline
\hline
\end{tabular}
\end{table}
The formation of various crystalline forms, or the lack of it, is summarized in Fig-\ref{composition_phase_plot} for various compositions and $s$ values.
Our results agree well with the study of Vlot {\it et al.} for the equimolar mixture ($x_A=0.50$) for all $s$ values \cite{vlot}. For $0.7\leq s \leq0.74$ the systems form a NaCl
type of crystal (interpenetrating fcc) where
the A particles show a sharp fcc peak in the $q_4$-$q_6$ distribution. However, when we compare the local BOO for all of these systems
we find that as we increase $s$ the distribution becomes broader, suggesting crystalline frustration. At $s =0.76$ the system shows a sharp
transition to the bcc (all particles) / sc (A-A pairs) crystal form. As we increase the $s$ value up to $s=0.90$ this bcc/sc (all particles/A-A pairs) signature continues.
We refer to the region $0.76\leq s\leq0.90$ as the \textbf{bcc zone}.
For $s=0.92$
we find that the system makes a sharp transition to an all-atom disordered fcc + hcp form. This signature is also present for $s=0.94$. The small size disparity between the A and B types of
particles leads to the formation of a mixed fcc and hcp type of crystal. Note that since the activation energy between the fcc and hcp types of
crystals is very small and their packing fractions are identical (0.74), even in a single-component system there is a chance of obtaining a mixed fcc-hcp crystal \cite{frenkel}.
Note that there is a small shift in the crystal range from that observed by Vlot {\it et al.} \cite{vlot}. We believe this is due to the fact that, unlike their system where
$\sigma_{11} =\sigma_{22}$, in our system $\sigma_{22}$ is less than $\sigma_{11}$. Our crystal structures are consistent with the lattice energy calculation of Fernandez and
Harrowell \cite{harrowell-jcp}. Although all the equimolar mixtures studied here form crystals, there can always be some $s$ values in the transition
regions for which the system will not undergo crystallization, as has been observed earlier for $s=0.75$ \cite{harrowell-jcp}.
\begin{figure}[]
\centering
\includegraphics[width=0.5\textwidth]{fig1.jpg}
\caption{ Phase diagram of the different types of crystal structures and amorphous structures plotted against the interaction length ($s$) for
$x_A=0.50$ and $x_A=0.80$. On the left side the brown zone is where the NaCl type of crystal is found, and on the right side the brown zone is where the distorted
hcp + fcc crystal structure is obtained for $x_A=0.50$.
At this composition the CsCl type bcc structure is found at intermediate values of $s$, shown by the dashed cyan zone (\textbf{bcc zone}).
The upper panel of the plot describes the different
structures obtained for $x_A=0.80$. The region forming the NaCl + fcc crystal is shown in the left cyan zone and the one forming the fcc + hcp crystal is shown in the right
cyan zone. We do not find any crystal in the full range of the brown zone. At $s=0.74$ and $s=0.90$ we do find a drop in energy but the local BOO does not show any
crystalline ordering. Interestingly, the \textbf{bcc zone} for $x_A=0.50$ almost overlaps with the no-crystal zone for $x_A=0.80$. }
\label{composition_phase_plot}
\end{figure}
For the range of $s$ where the NaCl crystal was formed for $x_A=0.50$, we now find that for $x_A=0.80$
the A particles are arranged in an fcc lattice. The B particles
do not show any fcc ordering, as would have been expected for a NaCl type crystal, but the local BOO of the A particles around the B particles shows the sc characteristic similar to that found
for the NaCl type crystal. Also, the lattice spacing between the A particles is larger than that found for the pure fcc crystal and similar to that found for the NaCl type structure.
Thus we believe the absence of fcc ordering between the B particles is only due to the fact that they are fewer
in number and scattered over the full system. We would assume
that this system has NaCl + fcc type crystal ordering with some defects. As reported by Fernandez and Harrowell for the equimolar mixture, we too find that it is difficult to crystallize the
system for $s>0.9$ \cite{harrowell-jcp}. However, for $x_A=0.80$ and for $s\geq0.92$ the systems easily crystallized to the fcc+hcp structure.
The distribution of the local BOO parameters for $x_A=0.50$ is found to be broader
than that for $x_A=0.80$.
This led us to conclude that for $s>0.9$ the equimolar mixture ($x_A=0.50$) is more frustrated. In our study
we also found that the drop in the enthalpy at crystallization is directly related to the width of the distribution of the local BOO and thus to frustration.
The larger the drop, the narrower the local BOO distribution.
However, the most interesting result here is that for $x_A=0.80$
we do not find any crystallization for $0.74\leq s\leq0.90$ (Fig-\ref{composition_phase_plot}). Although for $s=0.74$ and $s=0.90$ we do find a
drop in energy, the local BOO does not show any crystalline ordering.
Note that this range of $s$, except for $s=0.74$, exactly coincides with the appearance of the CsCl crystal for $x_A=0.50$.
As reported by Fernandez and Harrowell, in this range the lowest energy state is a combination of the CsCl+fcc crystal structure \cite{ harowell,harrowell-jcp}.
This is quite expected because, as we increase the number of A particles,
the excess A particles would like to form an fcc type crystal and the AB mixture would like to form a CsCl (bcc) type crystal, similar to that found for the range $0.7\leq s\leq0.74$
where
the NaCl+fcc lattice is formed. However, the inability of these systems to crystallize leads us to believe that the stability of the supercooled
liquid in this range of $s$ is related to the difficulty of nucleation of bcc type crystals. This difficulty in nucleation can be due to frustration between fcc and bcc
crystal formation, as has been suggested by Fernandez and Harrowell \cite{harowell}, or it can be due to frustration between the LPS and both the fcc and bcc crystal structures.
In order to understand this in
greater detail we have further studied one of the systems, $s=0.8$, in the range $0.5 \leq x_A\leq 0.9$. Note that $x_A =0.8$ and $s=0.8$
represent the well-known KA model.
According to the lattice energy study as we change the composition and increase the number of A type particles the lowest energy state of the system is expected to have
a mixture of pure A fcc type crystal
and mixed CsCl type crystal \cite{harrowell-jcp}. Since the local BOOs (both $q_{4}$ and $q_{6}$) for fcc and bcc type crystals have very similar values (see Table-1)
it becomes difficult to identify
the presence of both the structures unless the ordering is sharply peaked at the respective local BOO values. However, if we consider only the A particles, then they are
expected to have both fcc and sc
ordering if the system is a mixture of fcc and CsCl type of crystal. The local BOO of the fcc and sc are well separated, particularly in the $q_{4}$ value (see Table-1).
Thus monitoring the A-A local BOO parameters enables us to observe the signature of both the crystalline
forms in one system and also the transition from one form to another across the systems.
Similar to that observed for $x_A=0.5$, the $x_A=0.55$ system (for $s=0.8$) also undergoes a CsCl type of crystallization.
For the composition $x_A =0.6$, crystallization has not been observed. We have simulated a five times bigger system to rule out any system size dependence
and have also used the parallel tempering method \cite{parallel-tempering}. But the system did not undergo crystallization in any of these cases, even for a trajectory length of 10$\mu s$.
Leocmach and Tanaka have shown that the distribution of the local BOO in a liquid at low temperature can also provide information about the tendency
of the system to undergo a certain form of crystallization \cite{tanaka-nature12}.
As mentioned earlier, to identify the pure A fcc and mixed AB bcc signatures we have studied the local BOO of the A-A pairs.
The population vs $q_4-q_6$ contour plot (Fig-\ref{60_40_contour}) shows
a clear tendency towards both the sc (bcc in total) and fcc positions for the A-type particles. Thus,
interestingly, although the system with $x_A =0.6$ does not undergo crystallization, its local BOO parameters show a strong tendency towards two different forms of crystal structures.
\begin{figure}[h]
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{fig2a.jpg}
\caption{}
\label{50_50_contour}
\end{subfigure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{fig2b.jpg}
\caption{}
\label{60_40_contour}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{fig2c.jpg}
\caption{ }
\label{70_30_contour}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{fig2d.jpg}
\caption{ }
\label{80_20_contour}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{fig2e.jpg}
\caption{}
\label{90_10_contour}
\end{subfigure}
\caption{ Population vs $q_4-q_6$ plots for different compositions at $s=0.8$. If the total system is a mixture of the CsCl + fcc crystalline forms then the A particles are expected
to form sc + fcc ordering. The sc ordering of the A particles is related to the CsCl formation of the AB mixture.
The $q_4$ values for the fcc and sc structures are well separated (see Table-1). Thus monitoring the A-A local BOO parameters enables us to
observe the signature of both crystalline forms in one system and also the transition from one form to the other across the systems.
(a) For the equimolar mixture ($x_A =0.5$), the distribution of the population in $q_4-q_6$ is at the sc position. (b) For $x_A =0.6$ it shows a tendency towards two different forms of
crystal structures. Bold dotted arrows stress the ordering tendency. (c) For the composition $x_A =0.7$, there is no tendency towards sc type crystal formation and
there is a weak tendency towards the fcc type crystal form. (d) At the $x_A=0.8$ composition the system follows the same trend as that for
$x_A =0.7$. (e) At $x_A =0.9$ the distribution of the population in $q_4-q_6$ is at the fcc and hcp positions. }
\label{contour}
\end{figure}
For the composition $x_A =0.7$, there is no tendency towards sc type crystal formation for the A-type particles and a slight tendency towards the fcc position
(Fig-\ref{70_30_contour}). This is expected because the mixture now has more A particles. For $x_A=0.8$ the same trend follows (Fig-\ref{80_20_contour}).
The system with the composition $x_A =0.9$ does show crystallization of the A particles in the fcc + hcp form (Fig-\ref{90_10_contour}).
These results are similar to those observed by Valdes {\it et al.} \cite {valdes-jcp} and by
Fernandez and Harrowell \cite{harrowell-jcp}; however, the dual tendency for $x_A =0.6$ has not been observed earlier.
In order to understand the origin and the effect of the dual tendency of the liquid we further analyse these systems. Both the coordination number (CN)
and the local BOO parameter can give us information about the locally preferred structure \cite{harrowell-jpcb}. In Fig-\ref{coordination_number_0.80}
we plot the fraction of B particles having `n' (n=1-12) A type neighbours
($F_{B-An}$) and
the fraction of A particles having `n' A type neighbours ($F_{A-An}$) at different compositions.
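These fractions are straightforward to accumulate from the neighbour lists; a minimal sketch (our illustration; \texttt{species} and \texttt{neighbours} are assumed to come from the same RDF-based neighbour criterion) is:
\begin{verbatim}
import numpy as np

def f_fraction(species, neighbours, centre='B', shell='A', n_max=12):
    # F_{centre-shell,n}: fraction of `centre' particles having
    # exactly n neighbours of type `shell'.
    counts = np.zeros(n_max + 1)
    idx = [i for i, sp in enumerate(species) if sp == centre]
    for i in idx:
        n = sum(1 for j in neighbours[i] if species[j] == shell)
        counts[min(n, n_max)] += 1
    return counts / len(idx)
\end{verbatim}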
\begin{figure}[h]
\centering
\begin{subfigure}{.76\textwidth}
\includegraphics[width=\textwidth]{fig3a.jpg}
\label{ab_0.80_CN}
\end{subfigure}
\begin{subfigure}{.8\textwidth}
\centering
\includegraphics[width=\textwidth]{fig3b.jpg}
\label{aa_0.80_CN}
\end{subfigure}
\caption{ $F_{B-An}$ denotes the fraction of B particles having `n' A neighbours and $F_{A-An}$ the fraction of A particles having `n' A neighbours,
where n is the coordination number. (a) We have plotted $F_{B-An}$ vs n for different compositions for $s=0.80$.
For the pure bcc crystalline form $F_{B-A8}$ should be 1. The crystal structure obtained for $x_A=0.50$ shows the peak at $F_{B-A8}$, but the liquid state of this composition shows
the peak around $F_{B-A7}$, as obtained in Ref-9. For $x_A=0.60$ the peak value of $F_{B-An}$ is at n=8.
(b) We have plotted $F_{A-An}$ vs n for different compositions. For the pure fcc crystal
structure $F_{A-A12}$ should be 1, and for the pure bcc crystal structure $F_{A-A6}$ should be 1. In the case of $x_A =0.6$ the peak value of $F_{A-An}$ is at 8; it does not satisfy either
of these conditions. Thus the LPS does not allow the formation of either the CsCl type of crystal between the AB particles or the fcc type of crystal between the AA particles.
For the composition $x_A=0.80$, the peak value of $F_{A-An}$ is even further away from the n=6 value. }
\end{figure}
For a perfect mixed bcc crystal of the CsCl type, the ideal values of the parameters should be $F_{B-A8}=1$ and $F_{A-A6}=1$.
We find that for $x_A=0.50$, although there is a distribution of the parameters in the crystalline state, the peaks lie at $F_{B-A8}$ and $F_{A-A6}$. The peak values of the
parameters in the liquid state for $x_A=0.50$ do not match their crystalline values.
We find that $F_{B-An}$ has a peak at smaller n
and $F_{A-An}$ has a peak at larger n when compared to its crystalline counterpart (Fig-3a, Fig-3b). As observed by Fernandez and Harrowell, for a liquid that
forms a CsCl type crystal the $F_{B-An}$ peak
is always found to be lower than the ideal value \cite{harrowell-jpcb}. This lower value might help the rearrangement of the neighbours between the neighbouring A and B particles to form a bcc seed,
where the A particle can give away
one of its extra neighbours to the neighbouring B particle. In the case of $x_A=0.60$ we find that
the $F_{B-An}$ peak has shifted to n=8, but at the same time the $F_{A-An}$ peak has also shifted to n=8, which is away from both the
n=6 value required for a CsCl structure and the n=12 value required for the fcc structure. Thus even if the B particles have the required neighbours, the surrounding A particles have more
A neighbours
than required for the formation of the CsCl type crystal. It would require a large rearrangement of neighbours between the A particles to form a bimodal distribution of
its neighbours with peaks at n=6 and n=12. Thus this locally preferred structure does not allow the formation of either the CsCl type crystal between the AB particles or
the fcc crystal between the AA particles. For the composition $x_A=0.80$ the $F_{A-An}$ distribution moves further away from the n=6 value. Hence we can say that there is frustration
between the LPS and the global structure, which is a combination of the bcc and fcc crystal structures.
\begin{figure}[h]
\centering
\begin{subfigure}{0.72\textwidth}
\includegraphics[width=\textwidth]{fig4a.jpg}
\label{ab_0.70_CN}
\end{subfigure}
\begin{subfigure}{0.8\textwidth}
\centering
\includegraphics[width=\textwidth]{fig4b.jpg}
\label{aa_0.70_CN}
\end{subfigure}
\caption{ $F_{B-An}$ and $F_{A-An}$ are the same as defined in Fig-\ref{coordination_number_0.80}.
(a) We have plotted $F_{B-An}$ vs n for different compositions for $s=0.70$.
For both the NaCl and NaCl + fcc type crystal structures $F_{B-A6}$ should be 1. Here the plots for the crystal structure of $x_A=0.8$ and the liquid and crystal structures of $x_A=0.5$
overlap.
(b) We have plotted $F_{A-An}$ vs n for different compositions for the same $s$ value.
For both the NaCl and NaCl + fcc type crystals $F_{A-A12}$ should be 1. Here we find that the peak positions are at their expected crystalline values.
So there is less frustration between the LPS and the NaCl + fcc form for $x_A=0.80$. }
\label{coordination_number_0.70}
\end{figure}
In order to further substantiate our claim, and also to show that the change in the coordination number between A-A particles is not only a density (of A particles)
effect, we perform the same coordination number analysis for the system with $s=0.7$ for two compositions.
This system is known to crystallize
in the NaCl form for the composition $x_A=0.50$ and in the NaCl+fcc
form for the composition $x_A=0.80$. We plot the $F_{B-An}$ and $F_{A-An}$ values for both systems in their liquid and crystalline states (Fig-4a, Fig-4b).
We notice that in all the cases
$F_{B-An}$ peaks at n=6 and $F_{A-An}$ peaks at n=12. This should be true for all the other intermediate compositions also. Since for both the NaCl and fcc crystals $F_{A-An}$
needs to peak at n=12,
the coordination number around an A particle does not require much rearrangement for the system to crystallize. This shows that there is less frustration
between the LPS and the NaCl+fcc crystal.
This is precisely where the CsCl+fcc and NaCl+fcc crystals differ from each other.
Thus we can infer that, due to the large rearrangement of neighbours required between the A-A particles, bcc crystallization has a large nucleation barrier.
Our analysis further reveals that this should be true not only for the KA model but for any system which is in the bcc zone and has a large value of $x_A$.
The coordination number analysis shows that this barrier for crystallization to the bcc type structure should become
higher as we increase the composition of the A particles.
Our picture of frustration is similar to that given by Tanaka and co-workers who
claim that when there is a mismatch between the LPS and the global structure, the LPS acts as a source of frustration against crystallization
\cite{ tanaka-nature06,tanaka-vshaped-prl, tanaka22, tanaka23, tanaka29}.
However, although the barrier for bcc crystallization increases as the composition is increased, the tendency for fcc formation also increases at the same time.
For the composition $x_A=0.80$ both the local BOO and
the $F_{A-An}$ distribution show fcc-like characteristics. Toxvaerd {\it et al.} have modified the KA model (MKA) by reducing the A-B attraction parameter \cite{dyre}.
They have claimed that this modified model
can help to predict the crystallization process of the KA model. The parameters for the MKA model in its liquid state have been plotted in Fig-3. They indeed have a resemblance
to those of the KA model, and the MKA model is reported to show crystallization of the A particles.
We also observe that around the \textbf{bcc zone} there are two different types of fcc crystal: the disordered fcc crystal at higher $s$ values and the NaCl+fcc crystal at lower $s$ values.
The KA model shows a tendency towards crystallizing in an fcc type structure. This tendency should be present for the whole \textbf{bcc zone} in the range $0.70\leq x_A < 0.9$.
Thus it is imperative to understand the tendency of the \textbf{bcc zone} to crystallize into either of these two types of fcc crystal.
In order to do that we study the melting of the disordered fcc crystal (formed for $s=0.94$)
and NaCl+fcc
crystal (formed for $s=0.7$) by varying the $s$ value. We start with the crystalline structures obtained for the system with $s=0.7$ and $s=0.94$ for $x_A=0.80$.
The systems are first cooled to $T^*=0.1$ and then these structures are taken as a reference structure for different $s$ values \cite{melting}. It is interesting to note that the local BOO parameter
obtained for the NaCl+fcc structure with $s=0.8$ is similar to that obtained for the MKA model of Toxvaerd {\it et al.} \cite{dyre}. This shows that the NaCl+fcc structure is where the MKA model
crystallizes and where the KA model is expected to crystallize. The $T-s$ phase diagram clearly shows that the stability of any form of fcc type crystal is lower in the \textbf{bcc zone}.
In our study we could not locate a triple point, since in the range $0.80<s<0.86$ none of the crystal forms was found to be stable even at $T^*=0.1$ (Fig-\ref{phase_plot_temp}).
The energy per particle at $T=0$ and $P=0$ for
the KA model in this NaCl+fcc structure was found to be $-7.291$, which is higher than the energy per particle for the amorphous state reported earlier ($-7.72$)
\cite{harowell,srikanth}. Thus it is possible that the KA model will never crystallize, even in the NaCl+fcc form.
The phase diagram found here is similar to that obtained by Molinero {\it et al.} for a Si-like potential obtained by modifying the tetrahedral character of
the Stillinger-Weber potential \cite{valeria}, and by Tanaka {\it et al.} for the water-LiCl mixture \cite{tanaka-vshaped-prl}. According to Tanaka, this kind of V-shaped diagram is
related to the glass forming ability of the system: systems sitting at the bottom of the V have higher GFA and are stable against crystallization
\cite{tanaka-epje,tanaka-vshaped-prl}.
\begin{figure}[h]
\centering
\includegraphics[width=.8\textwidth]{fig5.jpg}
\caption{ V-shaped phase diagram of the two different variants of the fcc crystal structure. Melting points for the NaCl + fcc type of crystal (black solid circles) and
the mixed hcp + fcc crystal form (black solid squares) are plotted for different $s$ values \cite{melting}.
We do not find any triple point. Here blue lines (dotted and solid) denote the range where the various types of crystal forms are found for $x_A=0.50$. Red dotted lines
denote the same for $x_A=0.80$. We do not see any crystallization in the range shown by the red solid line. Black solid lines are guides to the eye.}
\label{phase_plot_temp}
\end{figure}
\section{Conclusion}
In this article we have tried to understand the interplay between the crystallization and the glass transition in binary Lennard-Jones mixtures.
The study has explored the effect of the inter-species interaction length ($s$) and also the composition. The systems studied here have
negative enthalpy of mixing, and the size ratio between the components is always kept at 12\%. For the range of $s$ studied here, the equimolar mixture crystallizes
into three different forms of crystals, similar to those found by Vlot {\it et al.} \cite{vlot}. For the large and the small $s$ values, a distorted fcc+hcp structure
and an interpenetrating fcc structure (NaCl type) are found, respectively.
The systems with intermediate $s$ values are found to form a bcc structure (CsCl type). For $x_A=0.80$, although the systems with small and
large $s$ values crystallize to NaCl+fcc and hcp+fcc crystals, respectively, the bcc zone does not crystallize. This shows that the frustration against
crystallization has a connection with the formation of the bcc crystal structure.
The study with $s=0.80$ at different compositions gives further insight into this frustration. The LPS of the composition $x_A=0.60$, analysed
from the local BOO and the coordination number, shows a strong frustration between the LPS and both the bcc and fcc crystal forms. The LPS does not favour either of the crystal forms. However, the LPS
for $x_A=0.80$ shows a tendency towards fcc crystal formation.
Thus we can claim that as the composition of A particles increases, the nucleation barrier to form a bcc crystal also increases. This conclusion is coherent with the finding of
Fernandez and Harrowell.
They have reported that even after putting a bcc seed in the KA mixture they did not find the growth of a bcc crystal \cite{harrowell-jcp}.
This must also be the reason why Toxvaerd {\it et al.} could form a mixed fcc+bcc phase in the KA mixture only
after putting in the complete bcc structure and allowing the growth of the fcc lattice around it \cite{dyre}. However, Valdes {\it et al.} have reported that for $x_A=0.30$ the low-temperature state of
the system seems to be composed of a bcc+fcc crystalline structure \cite{valdes-jcp}. This shows that if, instead of increasing the composition of
the bigger particles, we increase that of the smaller particles, then the nucleation barrier for the bcc crystalline structure is reduced.
It will be interesting to perform a free energy calculation of the nucleation barrier for bcc crystal formation at different compositions similar to that performed for other
systems \cite{monson, swetlana}.
This is beyond the scope of the present work and would be taken as a future project.
Since the LPS for $x_A=0.80$ shows an fcc characteristic, we have also studied the phase diagram of the melting temperature of the two different fcc types
of crystal forms which are present for the higher and lower $s$ values. The study shows that
the phase diagram has a V-shape, where the bcc zone which does not crystallize sits at the bottom of the V. This V-shaped phase diagram has also been
observed earlier for the Si-like system and also for the water-LiCl system. It has always been found that, due
to frustration between the LPS and the global structure, the systems sitting at the bottom of the V are good glass formers.
Although we have not studied the phase diagram by varying the composition, the local BOO and CN analyses predict a similar V-shaped phase diagram where at $x_A=0.50$
the system forms a bcc-type crystal and the pure monoatomic
system ($x_A=1.0$) forms an fcc-type crystal. For the intermediate values of $x_A$, where the crystal structure analysis shows that the mixture of fcc+CsCl is the global structure,
the analysis of the LPS shows that there is a frustration between
the LPS and the global structure. Thus the picture suggests that the intermediate $x_A$ values will be sitting at the bottom of the V and that $x_A=0.5$ and $x_A=1.0$ will
be forming the two ends. Hence the bcc zone for the composition $x_A=0.80$ is a good
glass former not only due to the frustration between the two different fcc lattice structures but also due to the frustration between the LPS and fcc+bcc lattice formation.
Our study suggests that whenever we increase the composition of one of the species of a binary system which in its equimolar composition forms a bcc crystal (CsCl type), we
will find a frustration between the LPS and the global structure.
In more general terms if a global structure of a mixed system has two crystalline forms such that any of the species which is present in both the
crystal structures has a large difference in its order parameter (coordination number or local BOO or any
other order parameter) in the two crystal forms, there will be frustration between the LPS and the global structure.
The LPS will not be closer to either of the crystalline states
and this frustration will lead to the stability of the system against crystallization.
\section{Acknowledgements}
This work has been supported by the Department of Science and Technology (DST), India and CSIR-Multi-Scale Simulation and Modeling project.
AB thanks DST for a fellowship. The authors thank Dr. Srikanth Sastry,
Dr. Rahul Banerjee, Dr. G. Kumaraswamy, Dr. Mantu Santra, Dr. Vishwas Vasisht, Mr. Rajib Biswas
for discussions.
\clearpage
|
2,869,038,156,894 | arxiv |
\section{Introduction} \label{sec:introduction}
The first decades of exoplanet science have enabled detection and some
characterization of exoplanets with a much wider range of properties
than anticipated. In turn, this has prompted a reinvention of the
formation history of the solar system. However, so far we barely have the capability to detect planetary systems like our own solar system around nearby stars. New high-precision
facilities such as {\it TESS} \citep{ricker14}, {\it Gaia} \citep{brown18},
ESPRESSO \citep{hernandez17}, and the upcoming {\it James Webb Space Telescope } \citep{beichman14} bring us to a golden age of exoplanet science
where a comprehensive survey of nearby planets becomes feasible
and the discoveries of nearby Earth-like planets around Sun-like stars
(or ``Earth twins'') become possible. This in turn leads to the ability
to detect biosignatures and begin habitability studies and to test planet formation theories. The high-precision and overlapping
constraints afforded by these new instruments might also suffice for
tests of relativity theory \citep{jordan08,mignard10} analogous to that achieved for pulsar timing \citep{hulse75,weisberg05,wex14}. For example, short-period binaries on eccentric orbits would show strong variation of gravitational Doppler shift.
Five primary methods are used to detect exoplanets: radial
velocity, transit, astrometry, microlensing, and direct imaging. We
classify these methods into four categories according to the
dimension of the data used in them. Modern instruments produce timing,
photometric, spectroscopic, and astrometric data. The radial velocity
method uses the timing and spectroscopic data; the transit method uses the timing and photometry data;
the astrometry method uses the timing and astrometry data; the
microlensing method uses the timing and photometry or
spectroscopic data; direct imaging typically uses all four types of
data. Thus, a general model for exoplanet detection would require a
precise modeling of timing, photometry, astrometry, and
spectroscopy. Considering that radial velocity and transits are the main working
methods for exoplanet detection and the astrometry method will probably
be used to find thousands of exoplanets by {\it Gaia} \citep{perryman14}, the
immediate need for general and combined analysis of precision
exoplanet data is a combined model of timing, radial velocity, and astrometry. In
the pulsar timing community, TEMPO2 (\citealt{edwards06}, here ``E06''; \citealt{hobbs06}) is currently the only known package used to test general
relativity (GR) and indirectly detect gravitational waves due to its
unprecedented timing precision at a level of a nanosecond
(ns). However, a similar high-precision package is not available for
independent pulsar timing analysis and for
the search for exoplanets despite various efforts being made to improve
the precision in some special cases \citep{eastman10,wright14}.
In this work, we introduce a new package called ``PEXO'' \footnote{PEXO is an acronym for {\bf P}recise {\bf EXO}planetology.} to model the
timing, radial velocity, and astrometry simultaneously and precisely in order to
detect and characterize small planets such as Earth twins and test GR. PEXO is able to model timing to a precision of about 1\,ns, radial velocity to a precision of 1\,$\mu$m/s, and space-based astrometry
to a precision of 1\,$\mu$as. PEXO models the motion of
the target star around the target system barycenter (TSB) due to its
companion (so-called ``reflex motion''), heliocentric motion of the
TSB, and the Earth's motion simultaneously to avoid any bias caused by
decoupling and separating these motions. PEXO can be used for combined
analysis of timing, radial velocity, and astrometry data and to determine the orbital parameters of potential companions, as well as refining the astrometric
parameters and the motion of the observing instrument with respect to the barycenter
of the solar system. PEXO is also able to model the relativistic
effects in the binary motion and thus is able to test GR in systems with multiple stars and companions.
PEXO is developed in particular to address the following issues
in previous exoplanet packages and studies:
\begin{itemize}
\item The decoupling of remote and local effects or the so-called
``barycentric correction'', though efficient for single stars
hosting planets, is not appropriate for detection of low-mass planets around stars with massive companions. We will discuss this issue in section \ref{sec:decoupling}.
\item The exoplanet community might be more focused on exoplanet
detection and characterization than tests of GR
although the classical astrometric effects induced by small planets could be comparable to relativistic effects. In section \ref{sec:relativity_test}, we
address this issue by proposing the companion-induced
gravitational redshift as a unique way to test gravity
theories.
\item The relativistic effects in extrasolar systems are not well
modeled, leading to potential bias in transit timing
variation and radial velocity detection of exoplanets. This
issue will be addressed in section \ref{sec:ttv}.
\item The current packages are not able to analyze multiple types
of exoplanet data in a consistent way, due to a lack of
simultaneous modeling of timing, radial velocity, photometry, and
astrometry. We will briefly discuss this problem in section \ref{sec:comparison_table}.
\end{itemize}
This paper is structured as follows. In section \ref{sec:geometric},
we introduce the geometric and kinematic model of astrometry and
radial velocity. We then introduce the relativistic effects in timing, astrometry,
and radial velocity in section \ref{sec:relativistic}. We compare PEXO with TEMPO2
and other packages to examine the precision of PEXO in section
\ref{sec:comparison}. This is followed by assessments of the
significance of various relativistic effects on a few key nearby
objects using two example transit systems and $\alpha$ Centauri A and B in section \ref{sec:effects}. Finally we conclude in section \ref{sec:conclusion}.
\section{Geometric and kinematic model of astrometry and radial velocity}\label{sec:geometric}
We follow the {\it Hipparcos} and {\it Gaia} team \citep{esa97,lindegren11} and use
vectors to model astrometry. In this section, we assume that the speed
of light is infinite and ignore the relativistic effects on the light
rays. In other words, we consider the kinematics and geometry of stars
and observers. We show the propagation of the light in Fig. \ref{fig:light_ray}. As we develop the model, its elements are described as they are introduced; because there are many of them, we also provide a tabulation in the appendix.
\begin{figure}
\centering
\includegraphics[scale=0.5]{light_ray.pdf}
\caption{Illustration of the propagation of a photon vastly
exaggerated in order to represent the underlying geometry of the
model. A photon emitted from the target star (T) is delayed and
deflected by other bodies in the target system with barycenter at
B and deflected and delayed by solar system bodies with barycenter at
S before arriving at the observation site (O). The companion in
the target system is denoted by C. The vectors in the diagram are
related according to ${\bf r}_{\rm OT}={\bf r}_{\rm OB}+{\bf
r}_{\rm BT}$ and ${\bf r}_{\rm OB}={\bf r}_{\rm OS}+{\bf r}_{\rm
SB}$. The time derivatives of these vectors are corresponding
velocities, which are related in the same way. }
\label{fig:light_ray}
\end{figure}
\subsection{Astrometry Model}\label{sec:astrometry}
The observed position of the target star is
\begin{equation}
{\bm r}_{\rm OT}={\bm r}_{\rm OS}+{\bm r}_{\rm SB}+{\bm r}_{\rm BT}~,
\label{eqn:rOT}
\end{equation}
where ${\bm r}_{\rm OS}$ is the position of the solar system
barycenter (SSB) with respect to the observer, ${\bm r}_{\rm SB}$ is
the position of the TSB with respect to the SSB, and ${\bm r}_{\rm
BT}$ is the target star with respect to the TSB. We also define the
opposite of these vectors as ${\bm r}_{\rm TO}= -{\bm r}_{\rm OT}$,
${\bm r}_{\rm SO}= -{\bm r}_{\rm OS}$, ${\bm r}_{\rm BS}= -{\bm
r}_{\rm SB}$, and ${\bm r}_{\rm TB}= -{\bm r}_{\rm BT}$. In the
following sections, a variable in bold denotes a vector, while a variable in normal font denotes a scalar or the modulus of the corresponding vector.
Denoting the velocity of the TSB relative to the SSB as ${\bm
v}_{\rm SB}$ and assuming ${\bm v}_{\rm SB}$ to be constant, that
is, ${\bm v}_{\rm SB}(t)={\bm v}_{\rm SB}(t_0)$, we find that equation \ref{eqn:rOT} becomes
\begin{equation}
{\bm r}_{\rm OT}(t)={\bm v}_{\rm SB}(t_0)(t-t_0) +{\bm r}_{\rm SB}(t_0)+{\bm r}_{\rm BT}(t) -{\bm r}_{\rm SO}(t)~,
\label{eqn:rOT2}
\end{equation}
where $t_0$ is a reference time. Here, ${\bm r}_{\rm BT}$ is determined by
the motion of the target star around the TSB (or reflex motion),
and ${\bm r}_{\rm SO}$ is determined by the ephemeris of the observer.
Considering that ${\bm v}_{\rm SB}(t_0)$ and ${\bm r}_{\rm SB}(t_0)$ are provided by astrometric observations, we replace them with astrometry, and equation \ref{eqn:rOT2} becomes
\begin{equation}
{\bm r}_{\rm OT}(t)= {\bm r}_{\rm SB}(t_0)+\frac{A}{\widetilde\omega^b}({\bm p}_b\mu_\alpha^b+{\bm q}_b\mu_\delta^b+{\bm u}_b\mu_r^b)(t-t_0)+{\bm r}_{\rm BT}(t) -{\bm r}_{\rm SO}(t)~,
\label{eqn:rOT3}
\end{equation}
where the relevant notations are defined as follows: ${\bm u}_b$ is the Barycentric Celestial Reference System (BCRS; \citealt{rickman01}) coordinate direction to the TSB at the reference TCB epoch $t_0$;
$\alpha^b$ is the BCRS R.A. of the TSB at the reference epoch;
$\delta^b$ is the BCRS decl. of the TSB at the reference epoch;
$\widetilde\omega^b$ is the annual parallax of the TSB at the reference epoch;
$\mu_\alpha^b$ is the proper motion in R.A. of the TSB at the reference epoch;
$\mu_\delta^b$ is the proper motion in decl. of the TSB at the reference epoch;
${\bm \mu}={\bm p}_b\mu_\alpha^b+{\bm q}_b\mu_\delta^b$ is the total proper motion of the TSB at the reference epoch;
$\mu_r^b=v_r^b \widetilde\omega^b/A$ is the so-called ``radial proper
motion'' of the TSB at the reference epoch, where $v_r^b$ is the radial velocity of the TSB and $A$ is the astronomical unit;
the unit vectors of ${\bm p}_b$, ${\bm q}_b$, and ${\bm u}_b$ form a triad, which is
\begin{equation}
\begin{bmatrix}
{\bm p}_b& {\bm q}_b& {\bm u}_b
\end{bmatrix}
=
\begin{bmatrix}
-\sin\alpha^b & -\sin\delta^b\cos\alpha^b & \cos\delta^b\cos\alpha^b \\
\cos\alpha^b &-\sin\delta^b\sin\alpha^b & \cos\delta^b\sin\alpha^b \\
0 & \cos\delta^b & \sin\delta^b
\end{bmatrix}
~.
\label{eqn:triad}
\end{equation}
Here, ${\bm p}_b$ and ${\bm q}_b$ are respectively the unit vectors in the
directions of increasing $\alpha$ and $\delta$ at the reference
epoch. The coordinate system determined by the triad $[{\bm p}_b~
{\bm q}_b~ {\bm u}_b]$ is illustrated in
Fig. \ref{fig:binary_orbit}.
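As a concrete illustration of equation \ref{eqn:triad}, the normal triad can be constructed directly from the BCRS R.A. and decl. of the TSB. The following minimal Python sketch is for illustration only and is not the PEXO implementation:
\begin{verbatim}
# Minimal sketch: construct the normal triad [p_b, q_b, u_b] of
# equation (triad) from the BCRS R.A. (alpha) and decl. (delta) of
# the TSB at the reference epoch, both in radians.
import numpy as np

def normal_triad(alpha, delta):
    p = np.array([-np.sin(alpha), np.cos(alpha), 0.0])  # local east
    q = np.array([-np.sin(delta) * np.cos(alpha),
                  -np.sin(delta) * np.sin(alpha),
                   np.cos(delta)])                      # local north
    u = np.array([np.cos(delta) * np.cos(alpha),
                  np.cos(delta) * np.sin(alpha),
                  np.sin(delta)])                       # line of sight
    return p, q, u
\end{verbatim}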
\begin{figure}
\centering
\includegraphics[scale=0.5]{binary_orbit.pdf}
\caption{Illustration of the binary motion in the sky plane
reference frame defined by the triad $[{\bm p}_b~
{\bm q}_b~ {\bm u}_b]$. This coordinate system is
fixed at the reference epoch and does not rotate as the TSB moves
with respect to the Sun. In the orbital-plane coordinate system, the
x axis ${\bm e}_x$ points to the periastron, the y axis ${\bm e}_y$
is 90$^\circ$ from ${\bm e}_x$ in the direction of orbital motion
in the orbital plane, and the z axis ${\bm e}_z$ is
perpendicular to the orbital plane (parallel to the angular
momentum). The three axes form a right-handed Cartesian coordinate
system. Here, $\bm q_b$ points to the north while $\bm p_b$ points to the east from the barycentric observer's perspective. The reflex
motion of the target star $T$ is determined by the five orbital
elements: semi-major axis $a_{\rm T}$, eccentricity $e$, inclination $I$
(the angle between -$\bm u_b$ and the angular momentum),
longitude of ascending node $\Omega$ (counterclockwise angle
from the north to the ascending node viewed from the observer),
and argument of periastron $\omega_{\rm T}$. The true anomaly $f$ is
needed to determine the location of the target star. For the orbit
of a companion around the barycenter, the semi-major axis is
$a_{\rm C}=\frac{m_{\rm T}}{m_{\rm C}}a_{\rm T}$ where $m_{\rm T}$ and $m_{\rm C}$ are respectively the masses of the target and
the companion. The argument of periastron is $\omega_C=\omega_T+\pi$
while the other orbital elements are the same as the reflex orbit of
the target star. We call the convention illustrated by this
figure ``astrometric convention.'' This and other conventions of
binary motion and the transformation between the orbital plane
frame and the sky plane
frame are explained in detail in Appendix
\ref{sec:convention}. According to equation \ref{eqn:triad}, the
sky plane system can be further transformed into the equatorial coordinate system defined by the Vernal Equinox and the north
Celestial Pole (NCP). }
\label{fig:binary_orbit}
\end{figure}
Because astrometric observations measure the direction of a star on the sky, we estimate the observed direction of a star $\hat{\bm u}_o$ from equation \ref{eqn:rOT3}, following \cite{esa97} and \cite{lindegren11},
\begin{equation}
{\bm u}_{\rm OT}(t)=\langle{\bm u}_b+({\bm p}_b\mu_\alpha^b+{\bm q}_b\mu_\delta^b+{\bm u}_b\mu_r^b)(t-t_0)+\left[{\bm r}_{\rm BT}(t) -{\bm r}_{\rm SO}(t)\right]\widetilde\omega^b/A\rangle~,
\label{eqn:uOT}
\end{equation}
where the angular brackets denote vector normalization. Similarly, the BCRS direction of the TSB is
\begin{equation}
{\bm u}_{\rm SB}(t)=\langle{\bm u}_b+({\bm p}_b\mu_\alpha^b+{\bm q}_b\mu_\delta^b+{\bm u}_b\mu_r^b)(t-t_0)\rangle~,
\label{eqn:uSB}
\end{equation}
and the BCRS direction of the target star is
\begin{equation}
{\bm u}_{\rm ST}(t)=\langle{\bm u}_b+({\bm p}_b\mu_\alpha^b+{\bm q}_b\mu_\delta^b+{\bm u}_b\mu_r^b)(t-t_0)+{\bm r}_{\rm BT}(t) \widetilde\omega^b/A\rangle~.
\label{eqn:uST}
\end{equation}
The only difference between equation \ref{eqn:uOT} in this paper and the one in equation 4 of \cite{lindegren11} is the inclusion of stellar reflex motion.
Although a robust model of astrometry is typically expressed with
vectors, one is typically interested in the variation of the sky
position of a star rather than its absolute position. To model the
astrometry relative to a reference epoch, we follow \cite{esa97} to
project the position of a component relative to the TSB onto the
offset coordinates ($\xi,\eta$), which are defined as rectangular
coordinates in the tangent plane at the reference point ${\bm r}_{\rm
SB}(t_0)$, with $\xi$ and $\eta$ increasing in the directions of
${\bm p}_b$ and ${\bm q}_b$. The offset of the target with respect to
the TSB in the topocentric reference frame is
\begin{align}
\sin{\xi(t)}&=\frac{\mu_\alpha^b(t-t_0)+{\bm p}_b\cdot \left[{\bm r}_{\rm BT}(t) -{\bm r}_{\rm SO}(t)\right]\widetilde\omega^b/A}{1+\mu_r^b(t-t_0)+{\bm u}_b\cdot \left[{\bm r}_{\rm BT}(t) -{\bm r}_{\rm SO}(t)\right]\widetilde\omega^b/A}~,\nonumber\\
\sin\eta(t)&=\frac{\mu_\delta^b(t-t_0)+{\bm q}_b\cdot \left[{\bm r}_{\rm BT}(t) -{\bm r}_{\rm SO}(t)\right]\widetilde\omega^b/A}{1+\mu_r^b(t-t_0)+{\bm u}_b\cdot \left[{\bm r}_{\rm BT}(t) -{\bm r}_{\rm SO}(t)\right]\widetilde\omega^b/A}~.
\label{eqn:offset}
\end{align}
In the above equation, $\sin\xi(t)$ and $\sin\eta(t)$ differ from
$\xi(t)$ and $\eta(t)$ by about 0.2\,mas over 100 yr for the
case of $\alpha$ Centauri. The above formula differs from Equation
1.2.26 of \cite{esa97} in terms of the consideration of the stellar
reflex motion and the use of the sinusoidal function for offset
coordinates. Although the companion-induced offset is small, the
integration of this offset over time would strongly bias the predicted
position of a star. While equation \ref{eqn:offset} can
provide high-precision geometric modeling in offset coordinates, we
need to model absolute astrometry to account for relativistic
effects. Thus we model the geometric astrometry in the equatorial
coordinate system using equation \ref{eqn:uOT}. We then consider
relativistic effects (section \ref{sec:relativistic}) to form the full
astrometry model. This model is formulated in the offset coordinates
according to equation \ref{eqn:offset} if the astrometric data are
given in the offset coordinate system.
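For illustration, once the triad and the combined vector ${\bm R}=\left[{\bm r}_{\rm BT}(t)-{\bm r}_{\rm SO}(t)\right]\widetilde\omega^b/A$ are available, equation \ref{eqn:offset} can be evaluated in a few lines; the following Python sketch uses illustrative names and is not the PEXO implementation:
\begin{verbatim}
# Minimal sketch of equation (offset): offset coordinates of the
# target relative to the TSB. Inputs: the triad (p, q, u), proper
# motions mu_a, mu_d [rad/yr], radial proper motion mu_r [1/yr],
# epoch difference dt [yr], and the dimensionless 3-vector R.
import numpy as np

def offsets(p, q, u, mu_a, mu_d, mu_r, dt, R):
    denom = 1.0 + mu_r * dt + np.dot(u, R)
    sin_xi = (mu_a * dt + np.dot(p, R)) / denom
    sin_eta = (mu_d * dt + np.dot(q, R)) / denom
    return np.arcsin(sin_xi), np.arcsin(sin_eta)
\end{verbatim}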
To compare equation \ref{eqn:offset} with the astrometry model in
previous studies, we expand the offsets to second-order in a Taylor
series. Because all terms in the equations are small quantities compared with 1, the second-order Taylor expansion is
\begin{align}
\xi(t)&=\mu_\alpha^b(t-t_0)+{\bm p}_b\cdot {\bm R}-\mu_\alpha^b\mu_r^b(t-t_0)^2-[\mu_\alpha^b{\bm u}_b \cdot {\bm R}+\mu_r^b{\bm p}_b\cdot {\bm R}](t-t_0)-({\bm p}_b \cdot {\bm R})({\bm u}_b \cdot {\bm R})+\mathcal{O}(\xi^3)~,\nonumber\\
\eta(t)&=\mu_\delta^b(t-t_0)+{\bm q}_b\cdot {\bm R}-\mu_\delta^b\mu_r^b(t-t_0)^2-[\mu_\delta^b{\bm u}_b \cdot {\bm R}+\mu_r^b{\bm q}_b\cdot {\bm R}](t-t_0)-({\bm q}_b \cdot {\bm R})({\bm u}_b \cdot {\bm R})+\mathcal{O}(\eta^3)~,
\label{eqn:2nd}
\end{align}
where ${\bm R}\equiv\left[{\bm r}_{\rm BT}(t) -{\bm r}_{\rm SO}(t)\right]\widetilde\omega^b/A$. We explain the terms in the above equations as follows:\\
\begin{itemize}
\item $\mu_\alpha^b(t-t_0)$ and $\mu_\delta^b(t-t_0)$ are linear
displacements from the TSB due to proper motions and so are first-order terms.
\item ${\bm p}_b\cdot {\bm R}$ and ${\bm q}_b\cdot {\bm R}$ are the
parallax of a star if there is no companion around the star. If a star hosts companions, these terms reflect the combined effect of the motion of the observer and the reflex stellar motion on the displacement of the star with respect to the TSB. This is a first-order term.
\item $\mu_\alpha^b\mu_r^b(t-t_0)^2$ and
$\mu_\delta^b\mu_r^b(t-t_0)^2$ are second-order terms related to
the so-called ``perspective acceleration.'' Because this effect is
proportional to the square of time, it becomes significant for
long-baseline astrometry. For example, $\mu_\alpha$ will change by
about 6\,mas/yr due to the perspective acceleration over 10 years
of observations of $\alpha$ Centauri.
\item $[\mu_\alpha^b{\bm u}_b \cdot {\bm R}+\mu_r^b{\bm p}_b\cdot
{\bm R}](t-t_0)$ and $[\mu_\delta^b{\bm u}_b \cdot {\bm
R}+\mu_r^b{\bm q}_b\cdot {\bm R}](t-t_0)$ are second-order
terms and are linearly proportional to time. These terms are
related to the coupling of the proper motion or radial velocity
with the motion of the Earth and stellar reflex motion. Because they are linear functions of time, they only become important for
the interpretation of observations taken over decades. For example,
this term will contribute to 0.1\,mas offset over a decade of
observation of $\alpha$ Centauri.
\item $({\bm p}_b \cdot {\bm R})({\bm u}_b \cdot {\bm R})$ and
$({\bm q}_b \cdot {\bm R})({\bm u}_b \cdot {\bm R})$ are terms
related to the coupling of Earth's motion and stellar reflex motion in different directions. This term does not significantly increase
with time because the orbits of the observer and the stellar
reflex motion are periodic and the corresponding semi-major axis
does not change much over time. Thus this term only contributes
to a $\mu$as displacement even for nearby stars such as $\alpha$ Centauri.
\end{itemize}
Although equation \ref{eqn:2nd} only expands the offset to the second-order, there are two third-order terms that become important for decades-long astrometry observations:
\begin{itemize}
\item $\xi(t)^3/6\approx\xi(t)-\sin\xi(t)$ and
  $\eta(t)^3/6\approx\eta(t)-\sin\eta(t)$ are the third-order terms for a
Taylor expansion of a sinusoidal function in the vicinity of zero. This
term can introduce a sub-mas offset for high proper motion stars. For
example, this term is about 0.3\,mas for $\alpha$ Centauri for a
100 yr observational baseline. It would be about 8.8\,mas for a
century of observations of Barnard's star.
\item $\mu_\alpha\mu_r^2(t-t_0)^3$ and $\mu_\delta\mu_r^2(t-t_0)^3$
are related to the coupling of proper motion and radial motion of
the TSB. The sums of these two terms are respectively about 1\,mas
and 40\,mas for 100 yr observations of $\alpha$ Centauri and Barnard's star.
\end{itemize}
Models that only account for the first-order terms are not
reliable for the detection or characterization of planets which induce
sub-mas reflex motion. For example, the maximum reflex offset of a
Sun-like star at 10\,pc is 0.50\,mas for a Jupiter-like planet,
0.27\,mas for a Saturn-like planet, 0.08\,mas for Uranus, and 0.16\,mas
for Neptune. Without including these higher-order terms, it would be
impossible to robustly detect them even if data from all the
individual {\it Gaia} epochs were to be available.
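To make the magnitudes quoted above concrete, the following back-of-envelope Python sketch evaluates the perspective-acceleration term $\mu_\alpha^b\mu_r^b(t-t_0)^2$; the $\alpha$ Centauri-like parallax, proper motion, and radial velocity used here are approximate and serve only as an illustration:
\begin{verbatim}
# Back-of-envelope sketch: size of the perspective-acceleration
# term mu_alpha * mu_r * (t - t0)^2 for alpha-Centauri-like values.
import numpy as np

AU_KM = 1.495978707e8             # astronomical unit [km]
YR_S = 3.15576e7                  # Julian year [s]
MAS2RAD = np.pi / 180 / 3.6e6     # milliarcsec -> rad

plx = 747.0 * MAS2RAD             # parallax [rad], approximate
mu = 3700.0 * MAS2RAD             # total proper motion [rad/yr]
v_r = -22.0 * YR_S                # radial velocity [km/yr]
mu_r = v_r * plx / AU_KM          # radial proper motion [1/yr]

dt = 10.0                         # observational baseline [yr]
persp = mu * mu_r * dt**2         # second-order term [rad]
print(abs(persp) / MAS2RAD, "mas")   # ~6 mas offset after a decade
\end{verbatim}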
Reference stars are typically difficult to obtain due to the large
relative brightness of stars such as $\alpha$ Centauri A and B. Thus, relative astrometry is more reliable than absolute astrometry in terms of constraining the orbit of $\alpha$ Centauri \citep{pourbaix16, kervella16}.
By removing the third-order terms from equation \ref{eqn:2nd}, we derive the relative offset of the secondary with respect to the primary,
\begin{align}
\Delta\xi(t)&={\bm p}_b\cdot \Delta{\bm R}(t)-[\mu_\alpha^b{\bm u}_b \cdot \Delta{\bm R}(t)+\mu_r^b{\bm p}_b\cdot \Delta{\bm R}(t)](t-t_0)~,\nonumber\\
\Delta\eta(t)&={\bm q}_b\cdot \Delta{\bm R}(t)-[\mu_\delta^b{\bm u}_b \cdot \Delta{\bm R}(t)+\mu_r^b{\bm q}_b\cdot \Delta{\bm R}(t)](t-t_0)~,
\label{eqn:relative}
\end{align}
where $\Delta{\bm R}(t)\equiv {\bm R_2}(t)-{\bm R_1}(t)=\Delta{\bm
r}(t)\widetilde\omega^b/A$, and $\Delta{\bm r}\equiv{\bm r}_{\rm
BT2}(t) -{\bm r}_{\rm BT1}(t)$ denotes the Keplerian motion of the
secondary with respect to the primary. It is notable that the
relative astrometry depends not only on the reflex motion but also
on the astrometry and radial velocity of the TSB when considering
secondary effects. This linear-time secondary effect could
contribute to sub-mas offsets that are comparable with the signal
caused by Jupiter-like planets around nearby stars.
\subsection{Radial Velocity Model}\label{sec:rv_model}
The time derivative of ${\bm r}_{\rm OT}$ (equation \ref{eqn:rOT}) determines the observed velocity of a star:
\begin{equation}
{\bm v}_{\rm OT}={\bm v}_{\rm OS}+{\bm v}_{\rm SB}+{\bm v}_{\rm BT}~.
\label{eqn:vOT}
\end{equation}
The observed radial velocity is the projection of $ {\bm v}_{\rm OT}$ onto the observed direction of the star:
\begin{equation}
v_r(t)={\bm v}_{\rm OT}(t)\cdot {\bm u}_{\rm OT}(t)~.
\label{eqn:vr_kepler}
\end{equation}
The terms ${\bm v}_{\rm OT}(t)$ and ${\bm u}_{\rm OT}(t)$ can be respectively
calculated from equations \ref{eqn:vOT} and \ref{eqn:uOT} given the
astrometry and radial velocity of a star and its reflex motion as
well as the Jet Propulsion Laboratory (JPL) ephemeris such as DE430 \citep{folker14} and the rotation of the Earth. Thus, the above vectorized formula is the most robust nonrelativistic modeling of radial velocity. However, to compare with other radial velocity models, we need to approximate this model through Taylor expansions. We expand ${\bm u}_{\rm OT}(t)$ to the first-order Taylor series
\begin{align}
{\bm u}_{\rm OT}(t)&={\bm u}_b+\Delta {\bm u_t}(t)+\mathcal{O}(|\Delta {\bm u}_t(t)|^2)~,
\label{eqn:uobs2}
\end{align}
where
\begin{equation}
\Delta {\bm u_t}(t)= ({\bm p}_b\mu_\alpha^b+{\bm q}_b\mu_\delta^b)(t-t_0)+{\bm R_t}(t)
\end{equation}
is the tangential component of the change of ${\bm u}_{\rm OT}$, and ${\bm R_t}(t)$ is the tangential component of ${\bm R}(t)$. Then the radial velocity becomes
\begin{equation}
v_r(t)= {\bm v}_{\rm OT}(t)\cdot{\bm u}_b+ {\bm v}_{\rm OT}(t)\cdot\Delta {\bm u_t}(t).
\end{equation}
In the above equation, the first term is the classical radial velocity model,
which does not account for the influence of the perspective
variation on the radial velocity. This perspective change is approximated by the second term, which is related to the tangential reflex motion and the tangential motion of the observer perpendicular to ${\bm u}_b$. Because the radial velocities are typically measured with respect to a reference epoch $t_0$, we derive the variation of radial velocity, which is
\begin{equation}
\Delta v_r(t)={\bm v}_{\rm OT}(t)\cdot {\bm u}_{\rm OT}(t)-{\bm
  v}_{\rm OT}(t_0)\cdot {\bm u}_{\rm OT}(t_0)~.
\label{eqn:dRV}
\end{equation}
The above geometric model of radial velocity does not account for relativistic
effects, which will be discussed in section \ref{sec:relativistic_rv}.
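A minimal Python sketch of equations \ref{eqn:vr_kepler} and \ref{eqn:dRV} is given below; the vectors are assumed to be expressed in a common BCRS-aligned frame, and the function names are ours rather than PEXO's:
\begin{verbatim}
# Minimal sketch: the observed radial velocity is the projection of
# the total velocity onto the observed direction of the star.
import numpy as np

def radial_velocity(v_OS, v_SB, v_BT, u_OT):
    v_OT = v_OS + v_SB + v_BT               # equation (vOT)
    return np.dot(v_OT, u_OT / np.linalg.norm(u_OT))

def delta_vr(v_OT_t, u_OT_t, v_OT_0, u_OT_0):
    # variation relative to the reference epoch t0; the u vectors
    # are assumed to be unit vectors
    return np.dot(v_OT_t, u_OT_t) - np.dot(v_OT_0, u_OT_0)
\end{verbatim}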
\subsection{Stellar Reflex Motion}\label{sec:reflex}
We calculate the stellar reflex motion in the coordinate system formed
by the triad $[{\bm p}_b~ {\bm q}_b~ {\bm u}_b]$. Because this
coordinate system is determined at the reference epoch, the orbital
parameters are also calculated with respect to the reference
epoch. This is different from the rotation reference frame, where the
orbital elements may vary with time due to the adoption of a
noninertial frame \citep{kopeikin96}.
In the orbital-plane coordinate system shown in
Fig. \ref{fig:binary_orbit}, the position of the target star is
denoted by ${\bm r}_{\rm orb}=(x,y,0)^{\rm T}$. The orbital-plane
coordinate system is transformed into the sky plane coordinate system through a sequence of rotations such that
the position of the target star with respect to the TSB is
\begin{equation}
\begin{bmatrix}
x_{\rm BT} \\
y_{\rm BT}\\
z_{\rm BT}
\end{bmatrix}
=
\begin{bmatrix}
B'& G'&\cos\Omega\sin I\\
A'& F'&-\sin\Omega\sin I\\
C'& H' &-\cos I
\end{bmatrix}
\begin{bmatrix}
x \\
y\\
0
\end{bmatrix} ~,
\end{equation}
where
\begin{align*}
A'&=\cos\Omega \cos\omega_{\rm T} - \sin\Omega \sin\omega_{\rm T} \cos{I}\\
B'&=\sin\Omega \cos\omega_{\rm T} + \cos\Omega \sin\omega_{\rm T} \cos{I}\\
F'&=-\cos\Omega \sin\omega_{\rm T}-\sin\Omega \cos\omega_{\rm T} \cos{I}\\
G'&=-\sin\Omega \sin\omega_{\rm T} + \cos\Omega \cos\omega_{\rm T} \cos{I}\\
C'&=\sin\omega_{\rm T} \sin{I}\\
H'&=\cos\omega_{\rm T} \sin{I}~,
\end{align*}
are the elements of the rotation matrix and a scaled
version of the so-called ``Thiele-Innes constants''. In the above
equations, $\Omega$ is the longitude of ascending node, $\omega_T$ is
the argument of periastron for the orbit of the target star around the
TSB, and $I$ is the inclination. The argument of periastron for the
barycentric motion of the companion is $\omega_C=\omega_T+\pi$. Since
this convention of binary motion is consistent with the triad used for
the astrometry model, we call it the ``astrometric convention'' which is
described in detail in Appendix \ref{sec:conv3}.
The Keplerian motion of the target star with respect to the TSB is
\begin{align}
{\bm r}_{\rm BT}(t)&=[B'x(t)+G'y(t)]{\bm p}_b + [A'x(t)+F'y(t)]{\bm q}_b + [C'x(t)+H'y(t)]{\bm u}_b~,\nonumber\\
{\bm v}_{\rm BT}(t)&=[B'\dot{x}(t)+G'\dot{y}(t)]{\bm p}_b + [A'\dot{x}(t)+F'\dot{y}(t)]{\bm q}_b + [C'\dot{x}(t)+H'\dot{y}(t)]{\bm u}_b~,
\label{eqn:rvBT}
\end{align}
and the Keplerian motion in the orbital-plane (see Fig. \ref{fig:binary_orbit}) is
\begin{eqnarray}
x(t)&=&a_{\rm T}\left[\cos E(t) -e\right] \nonumber\\
y(t)&=&a_{\rm T}\left[\sqrt{1-e^2}\sin E(t)\right] \nonumber\\
\dot{x}(t)&=&-a_{\rm T}\frac{n\sin E(t)}{1-e\cos E(t)}\\
\dot{y}(t)&=&a_{\rm T}\sqrt{1-e^2}\frac{n\cos E(t)}{1-e\cos E(t)},\nonumber
\label{eqn:orbital_plane}
\end{eqnarray}
where $a_{\rm T}=\frac{m_{\rm C}}{m_{\rm C}+m_{\rm T}}a$ is the semi-major axis of the
target star with respect to the TSB, $a$ is the semi-major
axis of the target star with respect to its companion, $E(t)$ is the
eccentric anomaly, which is determined by solving Kepler's equation
$M(t)=E(t)-e\sin E(t)$ for a given time $t$, $e$ is the
eccentricity; $n\equiv 2\pi/P$ is the mean orbital motion; and $P$ is
the orbital period. By transforming the Keplerian motion from the
orbital-plane reference frame into the sky plane reference frame using equations \ref{eqn:rvBT}, we derive ${\bm r}_{\rm BT}$ and ${\bm v}_{\rm BT}$ to model astrometry and radial velocity fully using equations \ref{eqn:uOT} and \ref{eqn:vr_kepler}.
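For illustration, the following minimal Python sketch implements the classical reflex position of equations \ref{eqn:rvBT} and \ref{eqn:orbital_plane}, solving Kepler's equation with a simple Newton-Raphson iteration; it is a sketch of the formulae above, not the PEXO implementation:
\begin{verbatim}
# Minimal sketch: reflex motion of the target star in the
# [p_b, q_b, u_b] frame via Kepler's equation and the scaled
# Thiele-Innes constants.
import numpy as np

def solve_kepler(M, e, tol=1e-12):
    E = M if e < 0.8 else np.pi  # Newton-Raphson for M = E - e sin E
    for _ in range(100):
        dE = (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def reflex_position(t, t_p, P, a_T, e, I, Om, om_T):
    M = np.mod(2 * np.pi * (t - t_p) / P, 2 * np.pi)
    E = solve_kepler(M, e)
    x = a_T * (np.cos(E) - e)
    y = a_T * np.sqrt(1 - e**2) * np.sin(E)
    Ap = np.cos(Om)*np.cos(om_T) - np.sin(Om)*np.sin(om_T)*np.cos(I)
    Bp = np.sin(Om)*np.cos(om_T) + np.cos(Om)*np.sin(om_T)*np.cos(I)
    Fp = -np.cos(Om)*np.sin(om_T) - np.sin(Om)*np.cos(om_T)*np.cos(I)
    Gp = -np.sin(Om)*np.sin(om_T) + np.cos(Om)*np.cos(om_T)*np.cos(I)
    Cp = np.sin(om_T) * np.sin(I)
    Hp = np.cos(om_T) * np.sin(I)
    # components of r_BT along p_b, q_b and u_b, respectively
    return Bp*x + Gp*y, Ap*x + Fp*y, Cp*x + Hp*y
\end{verbatim}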
\section{Relativistic and high-order geometric effects}\label{sec:relativistic}
Relativistic effects can be important for nearby or binary systems
such as $\alpha$ Centauri. For example, assuming an orbital period of
80\,yr, a semi-major axis of 17.6\,au, and an inclination of 79$^\circ$
for $\alpha$ Centauri A and B, the arrival time of light from A or B is
changed by 6.3\,hr due to binary motion over 40\,yr. Although this is small compared with the
binary orbital period, it would introduce timing noise and thus bias the detection of
potential exoplanets in this system.
For the convenience of calculation of relativistic terms, we
define various times following E06. The proper emission time $\tau_e$ of a photon is derived from the proper observed arrival time $\tau_o$ by including various delays as follows:
\begin{equation}
\tau_e=\tau_o-\Delta_{\rm S}-\Delta_{\rm is}-\Delta_{\rm T}
\label{eqn:te}
\end{equation}
where $\Delta_{\rm S}$ is the time delay due to effects in the solar
system, $\Delta_{\rm is}$ is related to the travel time of a photon in
the interstellar medium, and $\Delta_{\rm T}$ is the delay related to the target system. We introduce the coordinate time of light arrival at the SSB,
$t_{a}^{\rm SSB}=\tau_o-\Delta_{\rm S}$, and the coordinate time of
light arrival at the TSB, $t_{a}^{\rm TSB}=t_a^{\rm SSB}-\Delta_{\rm
is}$. The proper emission time and the arrival time at TSB are related
by $\tau_e=t_a^{\rm TSB}-\Delta_{\rm T}$. We also define the
coordinate reference time $t_{\rm pos}$ as the epoch when the position
or astrometry of the target star is measured. At the reference epoch,
$t_{a}^{\rm SSB}=t_{a}^{\rm TSB}=t_{\rm pos}$. We will introduce post-Newtonian and GR models of binary motion in section \ref{sec:PN}, and
describe various relativistic terms in the models of timing (section
\ref{sec:timing}), astrometry (section \ref{sec:relativistic_astrometry}) and radial velocity (section \ref{sec:relativistic_rv}).
\subsection{Post-Newtonian Stellar Reflex Motion}\label{sec:PN}
In section \ref{sec:reflex}, we derive the formula for the classical
Keplerian motion in the reference frame formed by the triad $[{\bm
p}_b~ {\bm q}_b~ {\bm u}_b]$. Here we model the post-Newtonian
Keplerian (PPK) motion in terms of proper emission time $\tau_e$ according to previous works (\citealt{damour86}, \citealt{taylor89}, and E06):
\begin{equation}
{\bm r}_{\rm BT}=[{\bm p}_b~ {\bm q}_b~ {\bm u}_b]
\begin{bmatrix}
\sin\Omega & -\cos\Omega & 0 \\
\cos\Omega & \sin\Omega & 0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \\
0 & -\cos I& -\sin I \\
0 & \sin I & -\cos I
\end{bmatrix}
\begin{bmatrix}
r_{\rm reflex} \cos \theta\\
r_{\rm reflex} \sin \theta\\
0
\end{bmatrix}~,
\label{eqn:rBT}
\end{equation}
where
\begin{align}
r_{\rm reflex}&=a_r(1-e_r\cos U)~,\nonumber\\
M&=n(\tau_e-\tau_p)=U-e_\theta\sin U~,\nonumber\\
\theta&=\omega+A_{e_\theta}(U)~,\nonumber\\
A_{e_\theta}(U)&=2\tan^{-1}\left[\left(\frac{1+e_\theta}{1-e_\theta}\right)^{1/2}\tan{\frac{U}{2}}\right]~,\nonumber\\
e_r&=e(1+\delta_r)~,\nonumber\\
e_\theta&=e(1+\delta_\theta)~,\\
n&=\frac{2\pi}{P_0}+\frac{\pi \dot{P}(\tau_e-\tau_p)}{P_0^2}~,\nonumber\\
\omega &=\omega_0+kA_e(U)~,\nonumber\\
k&=\frac{\dot{\omega}}{n}~,\nonumber\\
e&=e_0+\dot{e}(\tau_e-\tau_p)~,\nonumber\\
x_a&=x_{a0}+\dot{x_a}(\tau_e-\tau_p)\nonumber~,
\label{eqn:x}
\end{align}
where $\omega_0$, $e_0$, $P_0$ are the Keplerian parameters at the reference
epoch, $\tau_p$ is the proper time of periastron, $U$ is the relativistic
eccentric anomaly, $a_r$ is the semi-major axis of the primary with
respect to the barycenter of the target system, $\delta_r$ and
$\delta_\theta$ are PPK terms used to define
eccentricities, and $x_a\equiv a\sin{I}/c$ is the light travel time across the
projected semi-major axis. Because this model was first proposed by
\cite{damour86}, we call it ``DD'', following \cite{taylor89} and
E06. Because the x axis of the orbital-plane is in the direction of
the ascending node rather than periastron as in the astrometric
convention, we call this coordinate system framework
the ``precession-compatible convention,'' which is described in detail
in Appendix \ref{sec:conv3}.
Considering GR, we define $m_{\rm tot}=m_{\rm C}+m_{\rm T}$ and give the following relativistic terms after E06's eqs. 71 and 80-88,
\begin{align}
\dot{\omega}^{\rm GR}&=3T_\odot^{2/3}n^{5/3}\frac{m^{2/3}_{\rm tot}}{1-e^2}~,\nonumber\\
g^{\rm GR}&=T_\odot^{2/3}n^{-1/3}e\frac{m_{\rm C}(m_{\rm T}+2m_{\rm C})}{m_{\rm tot}^{4/3}}~,\nonumber\\
r_s^{\rm GR}&=T_\odot m_{\rm C}~,\nonumber\\
s_s^{\rm GR}&=\sin I=T_\odot^{-1/3}n^{2/3}x_a\frac{m_{\rm tot}^{2/3}}{m_{\rm C}}~,\\
\delta_r^{\rm GR}&=T_\odot^{2/3}n^{2/3}\frac{3m_{\rm T}^2+6m_{\rm T}m_{\rm C}+2m_{\rm C}^2}{m_{\rm tot}^{4/3}}~,\nonumber\\
\delta_\theta^{\rm GR}&=T_\odot^{2/3}n^{2/3}\frac{\frac{7}{2}m_{\rm T}^2+6m_{\rm T}m_{\rm C}+2m_{\rm C}^2}{m_{\rm tot}^{4/3}}~,\nonumber\\
\dot{P}^{\rm GR} &=-\frac{192\pi}{5} T_\odot^{5/3} n^{5/3}\frac{m_{\rm T}m_{\rm C}}{m_{\rm tot}^{1/3}}\frac{1+73e^2/24 + 37e^4/96}{(1-e^2)^{7/2}}~,\nonumber
\label{eqn:grPar}
\end{align}
where the superscript ``GR'' denotes terms computed assuming GR, $c$ is the speed of light,
$T_\odot=Gm_\odot/c^3$ is half the light travel time across the solar
Schwarzschild radius, $m_\odot$ is the Solar mass, $G$ is the
gravitational constant, $g$ is the timing model parameter, $r_s^{\rm GR}$
and $s_s^{\rm GR}$ are parameters for the Shapiro delay in the target
system assuming GR. We call this model ``DDGR'' following the syntax of TEMPO2.
In a combined model of radial velocity and astrometry, the free classical orbital
parameters are $\{ P_0 ,e_0, \omega_0, I, \Omega, \tau_p\}$ as well as
masses $\{m_{\rm C}, m_{\rm T}\}$. For general post-Newtonian theories, the
additional fittable parameters are $\{\dot{\omega}, g, \dot{P}, s_s,
r_s, \dot{x}_a, \dot{e}\}$, where $s_s$ and $r_s$ are respectively the
shape and range parameters of Shapiro delay. For a classical Keplerian
orbit, $\dot{\omega},\dot{P},\dot{x}_a, \dot{e}$ are all zero.
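As an illustration of equation \ref{eqn:grPar}, the DDGR terms can be evaluated directly once the masses and the orbit are specified. The following Python sketch assumes masses in solar units and the period in seconds, and is only a transcription of the formulae above:
\begin{verbatim}
# Minimal sketch of the DDGR terms, with T_sun = G*m_sun/c^3.
import numpy as np

T_SUN = 4.925490947e-6            # [s]

def ddgr_terms(P, e, mT, mC):
    n = 2 * np.pi / P             # mean motion [rad/s]
    mtot = mT + mC
    w_dot = 3 * T_SUN**(2/3) * n**(5/3) * mtot**(2/3) / (1 - e**2)
    g = T_SUN**(2/3) * n**(-1/3) * e * mC * (mT + 2*mC) / mtot**(4/3)
    r_s = T_SUN * mC              # Shapiro range parameter [s]
    P_dot = (-192*np.pi/5 * T_SUN**(5/3) * n**(5/3)
             * mT * mC / mtot**(1/3)
             * (1 + 73*e**2/24 + 37*e**4/96) / (1 - e**2)**3.5)
    return w_dot, g, r_s, P_dot
\end{verbatim}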
\subsection{Timing model}\label{sec:timing}
We transform the proper light arrival time at the observatory
$\tau_o$ to the
barycentric coordinate time ($t_a^{\rm SSB}$) by calculating the
``tropospheric delay'' ($\Delta_{\rm tropo}$), ``Roemer delay'' ($\Delta_{\rm rS}$), ``Shapiro delay'' ($\Delta_{\rm sS}$), and ``Einstein delay'' ($\Delta_{\rm eS}$). Then we transform
$t_a^{\rm SSB}$ to the light arrival coordinate time at the TSB
($t_a^{\rm TSB}$) by calculating the vacuum propagation time of the
light traveling from SSB to TSB ($\Delta_{\rm vp}$) as well as the
Einstein delay ($\Delta_{\rm ei}$) due to the relative motion between TSB
and SSB. Finally, we derive the proper emission time $\tau_e$ from
$t_a^{\rm TSB}$ by calculating the Roemer delay ($\Delta_{\rm rT}$),
Shapiro delay ($\Delta_{\rm sT}$), and Einstein delay ($\Delta_{\rm eT}$) in the target system.
The purpose of modeling the light emission time is to calculate the
mean anomaly of the stellar reflex orbit precisely given an
observation proper time $\tau_o$. The formulae in the following sections are similar to the formulae given in E06 for pulsar timing but are adapted and implemented to be more suitable for exoplanets.
\subsubsection{Tropospheric Delay}\label{sec:tropo}
The time delay in the solar system is
\begin{equation}
\Delta_{\rm S}\simeq \Delta_{\rm tropo}+\Delta_{\rm eS}+\Delta_{\rm
rS}+\Delta_{\rm pS}+\Delta_{\rm sS}~,
\label{eqn:delay_solar}
\end{equation}
where $\Delta_{\rm pS}$ is a second-order Roemer delay, called
``parallax delay'' (E06).
An incident light ray is refracted by the Earth's atmosphere and is delayed by \citep{nilsson13}
\begin{equation}
\Delta_{\rm tropo}=c\int_{\mathcal L}[n(l)-1]dl+(t_{\mathcal L}- t_{\mathcal G})~,
\label{eqn:Dtropo1}
\end{equation}
where $\mathcal L$ is the light ray path in the atmosphere, $n$
is the refractive index, and $t_{\mathcal L}$ and $t_{\mathcal G}$ are respectively
the vacuum light propagation times for deflected and straight light
rays. Because the atmosphere model typically consists of the
hydrostatic and wet components, the tropospheric delay is typically
split into two parts:
\begin{align}
\Delta_{\rm tropo}&=\frac{10^{-6}}{c}\int_{\mathcal
L}N_{\rm hydro}(l)dl+\frac{10^{-6}}{c}\int_{\mathcal L}N_{\rm wet}(l)dl+(t_{\mathcal L}- t_{\mathcal G})\\
&=\Delta_{\rm hydro}+\Delta_{\rm wet}+(t_{\mathcal L}- t_{\mathcal
G})~,
\label{eqn:Dtropo2}
\end{align}
where $N_{\rm hydro}$ and $N_{\rm wet}$ are respectively the
hydrostatic and wet refractivity. The refractivity is related to
refractive index by $N=10^{6}(n-1)$. Each of these two components is a
product of a zenith delay and a mapping function. The geometric delay term $t_{\mathcal L}-
t_{\mathcal G}$ is typically included in the mapping function of the
hydrostatic component \citep{nilsson13}. Hence, the tropospheric delay becomes
\begin{equation}
\Delta_{\rm tropo}=\Delta_{\rm hydro}m_h(\Theta)+\Delta_{\rm
wz}m_w(\Theta)~,
\label{eqn:Dtropo3}
\end{equation}
where $\Theta$ is the observed elevation angle of the source, and
$\Delta_{\rm hydro}$ is\footnote{Note that the routine {\small tropo.C}
in the TEMPO2 package contains an error. The cosine function in the
denominator should be $\cos(2\phi_{\rm O})$ rather than
$\cos{\phi_{\rm O}}$. However, this error may not significantly influence the TEMPO2 precision because $\cos(2\phi_{\rm O})$ is multiplied by 0.00266.}
\begin{equation}
\Delta_{\rm hydro}=\frac{0.02268 (\frac{p_{\rm O}}{\rm kPa})}{(\frac{c}{{\rm
m\,s}^{-1}})[1-0.00266\cos(2\phi_{\rm O})-2.8\times
10^{-7}(\frac{h_{\rm O}}{\rm m})]}~,
\end{equation}
where $\phi_{\rm O}$ is the latitude of the observatory, $p_{\rm O}$ is the air
pressure, and $h_{\rm O}$ is the telescope altitude. The zenith hydrostatic
delay is typically a few nanoseconds. On the other hand, the wet zenith delay is not
well modeled and is highly variable. However, it is about one order of
magnitude smaller than the hydrostatic delay and thus is only
important for high-precision applications. Following E06, we adopt the
Niell mapping function (NMF; \citealt{niell96}) to calculate $m_h$ and
$m_w$ in Equation \ref{eqn:Dtropo3}. Like E06, we only consider the wet
component if the zenith wet delay is given by the observatory. Similar
to the wet component, the refraction caused by the ionosphere is
highly variable and cannot be separated from the dispersion in
interstellar and interplanetary medium. Thus, we consider them as
time-correlated noise, which can be modeled using a red noise model
such as the moving-average model \citep{feng17a}.
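A minimal Python sketch of the zenith hydrostatic delay above is given below (site pressure in kPa, latitude in radians, altitude in metres); the full tropospheric delay additionally requires the NMF mapping functions, which are not reproduced here:
\begin{verbatim}
# Minimal sketch of the zenith hydrostatic delay [s].
import math

C_M_S = 299792458.0               # speed of light [m/s]

def zenith_hydrostatic_delay(p0_kpa, phi_rad, h_m):
    return 0.02268 * p0_kpa / (C_M_S * (1
                                        - 0.00266*math.cos(2*phi_rad)
                                        - 2.8e-7*h_m))

# e.g. ~7.7 ns at sea level for p0 = 101.3 kPa at the equator
print(zenith_hydrostatic_delay(101.3, 0.0, 0.0))
\end{verbatim}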
\subsubsection{Time Delay in the solar system}\label{sec:delay_solar}
The Einstein delay is caused by the gravitational effect on the time measurement in different reference systems. According to E06 and \cite{irwin99}, the Einstein delay is
\begin{equation}
\Delta_{\rm
eS}=\frac{1}{c^2}\int_{t_0}^t\bigg(U_\oplus+\frac{v_\oplus^2}{2}+\Delta
L_C^{\rm (PN)}+\Delta L_C^{\rm (A)}\bigg){\rm d} t+\frac{{\bm
s}\cdot{\bm \dot{\bm r}_\oplus}+W_0\tau_o}{c^2}~,
\label{eqn:einstein}
\end{equation}
where $U_\oplus$ is the gravitational potential of all solar system
objects apart from the Earth, $v_\oplus$ is the barycentric velocity
of the geocenter, $W_0$ is approximately the gravitational and spin
potential of the Earth at the geoid, and $\Delta L_C^{\rm (PN)}$ and $\Delta L_C^{\rm (A)}$
respectively characterize the post-Newtonian effects and asteroidal
effects. The integral is to transform the Geocentric Coordinate Time
(TCG) into the barycentric time (TCB) at the geocenter. In the above
equation, the last term corresponds to the time difference between the
observer and the geocenter and transforms the terrestrial time (TT)
to TCG. The rate of TT with respect to TCG at the geocenter is
$L_G=W_0/c^2=6.969290134\times 10^{-10}$. The term $\Delta_{\rm rot}={\bm s}\cdot{\bm \dot{r}_\oplus}/c^2$ induces a
periodic delay with an amplitude of about 2\,$\mu$s. We model the
Earth's rotation using equation 26 in E06.
Instead of calculating the integral in equation \ref{eqn:einstein}
directly, we use the time ephemeris of the Earth in JPL DE430 to
transform TT at the geocenter to Barycentric Dynamical Time (TDB) and
use $L_B=1.550519768\times10^{-8}$ to transform TDB into TCB according to ${\rm TCB}={\rm TDB}/(1-L_B)$. Because the rotation-induced delay ${\bm s}\cdot{\bm
\dot{r}_\oplus}/c^2$ is not accounted for in the transformation from
TT to TDB by the JPL ephemeris, we add it in the transformation and
determine TDB in an iterative way as follows:
\begin{enumerate}
\item Transform Coordinated Universal Time (UTC) to International
  Atomic Time (TAI; {\it Temps Atomique International} in French) using the
  SOFA\footnote{\url{http://www.iausofa.org/}} routine {\small iauUtctai}.
\item Transform TAI to TT(BIPMXY). ``BIPM'' denotes the International
Bureau of Weights and Measures, and XY represents the year when the
BIPM realization of TT is published. TT(BIPMXY)=TAI+32.184\,s+$\delta
t$, where $\delta T$ is a the difference between the BIPMXY and TAI
realizations of TT \citep{petit04} and can be downloaded from
\url{https://www.bipm.org/jsp/en/TimeFtp.jsp?TypePub=ttbipm}. In
this work, we use the TT(BIPM17) realization by default. The BIPM file is
automatically updated to the latest version by PEXO.
\item Determine TT-TDB as a function of TT at the geocenter using the latest JPL time ephemeris (e.g., DE430t).
\item For a ground-based observer, determine the observer's geocentric
position and velocity using the Earth rotation model recommended by
IAU2006 resolutions \citep{capitaine06,wallace06}. For space
telescopes, their ephemerides are determined using the JPL HORIZONS
system. We have implemented an automated downloading of JPL ephemerides in PEXO.
\item Calculate $\Delta_{\rm rot}$ in the TDB coordinate system based
on step 4 and add it onto TT-TDB. Note that $\Delta_{\rm rot}$ is
calculated using TT and thus needs to be scaled with $d$TT/$d$TDB,
although this scaling is a negligible secondary effect and only
contributes at most 1\,ps (1\,ps=picosecond=$1\times 10^{-12}$\,s).
\item Transform TDB to TCB using ${\rm TCB}={\rm TDB}/(1-L_B)$.
\end{enumerate}
In summary, the transformation chain of various time standards is UTC$\rightarrow$TAI$\rightarrow$TT$\rightarrow$TCG$\rightarrow$TDB$\rightarrow$TCB.
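For users who only need the standard chain without the BIPM realization of TT or the topocentric $\Delta_{\rm rot}$ term, the transformations are available in common libraries; a minimal Python sketch using \texttt{astropy} is given below. PEXO implements the chain itself so that the BIPM clock corrections and the observatory-dependent terms can be included.
\begin{verbatim}
# Minimal sketch of UTC -> TAI -> TT -> TDB -> TCB with astropy;
# astropy's TT <-> TDB conversion uses an analytic approximation at
# the geocenter unless a location/ephemeris is supplied, so this is
# only an approximation of the PEXO chain described above.
from astropy.time import Time

t = Time("2019-01-01T00:00:00", scale="utc")
print(t.tai.isot, t.tt.isot, t.tdb.isot, t.tcb.isot)
\end{verbatim}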
To derive the barycentric time, we need to account for the difference in the light travel time to the observer and to the SSB. This is the so-called Roemer delay, which is
\begin{equation}
\Delta_{\rm rS}=-\frac{{\bm r}_{\rm SO}\cdot{\bm u}_{\rm SB}}{c}~,
\label{eqn:dRS}
\end{equation}
where ${\bm r}_{\rm SO}={\bm r}_\oplus+{\bm s}$ is the sum of the BCRS
position of the geocenter ${\bm r}_\oplus$ and the position of the
observatory with respect to the geocenter ${\bm s}$. For space
telescopes, ${\bm r}_{\rm SO}$ can be obtained from the ephemeris of
the telescope from JPL HORIZONS. However, the Roemer delay assumes that the fiducial observer at the SSB receives plane waves from the light source. To account for the curvature of the wave, we include the second-order ``Roemer delay'':
\begin{equation}
\Delta_{pS}=\frac{|{\bm r}_{\rm SO}\times{\bm u}_{\rm SB}|^2}{2cr_{\rm SB}}~.
\label{eqn:dpS}
\end{equation}
This so-called ``parallax delay'' is included in the Roemer delay in
some other studies (e.g., \citealt{lindegren03} and
\citealt{eastman10}). For example, the parallax delay for $\alpha$
Centauri is about 0.7\,ms. Note that this parallax delay is equal
to the one in equation (8) of (\citealt{eastman10}; hereafter E10), who use ${\bm u}_{\rm OT}$ rather than ${\bm u}_{\rm SB}$ as the reference unit vector, leading to an
opposite sign of parallax delay. However, equations \ref{eqn:dRS} and
\ref{eqn:dpS} do not account for higher-order astrometric effects, as mentioned
in section \ref{sec:astrometry}. To improve the precision of PEXO
for solar system objects, we calculate the total Roemer delay (including parallax delay) using
\begin{equation}
\Delta_{\rm rS}=\frac{r_{\rm OT}-r_{\rm BT}}{c}~.
\label{eqn:dRS1}
\end{equation}
Because the third-order astrometric terms contribute sub-mas position
offsets over decades, we expect tens of nanoseconds bias introduced by using
equations \ref{eqn:dRS} and \ref{eqn:dpS}. Such a bias is inversely
proportional to the heliocentric distance and increases with time, as
we will see in section \ref{sec:compare_timing}. Because this bias is
cumulative, the estimation of $<1$\,ns for third-order delays in E10 is not representative for long-term timing observations.
A photon is deflected by the gravitational field of the solar system, leading to the so-called ``Shapiro delay'' \citep{shapiro64}, which is
\begin{equation}
\Delta_{\rm sS}=(1+\gamma)\sum_i\frac{Gm_i}{c^3}\left\{\ln\left(\frac{2r_{\rm ST}}{A}\right)-\ln\left[\frac{r_{\rm SO}(1-\cos\psi_i)}{A}\right]\right\}~,
\label{eqn:DSO}
\end{equation}
where $A=1$\,au, and $\psi_i$ is the coordinate angle distance between the
center of the body $i$ and the target star from the perspective of the
observer. The angle between the Sun and the target star dominates the
Shapiro delay and is determined by $\cos\psi_i=\frac{{\bm r}_{\rm
    OT}\cdot{\bm r}_{\rm OS}}{r_{\rm OT}r_{\rm OS}}$. The Shapiro delay
formulated in equation \ref{eqn:DSO} differs from equation (5) of \cite{eastman10}, who ignore the terms related to $r_{\rm ST}$ and $r_{\rm SO}$. However, $r_{\rm SO}$ is not constant for an observer on an eccentric orbit. Although this change may not be important for current exoplanet research, it is crucial for high-precision pulsar timing and thus is included in the model of TEMPO2 by E06.
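A minimal Python sketch of the solar-system Roemer delay (equation \ref{eqn:dRS}), parallax delay (equation \ref{eqn:dpS}), and the Sun-only Shapiro delay (equation \ref{eqn:DSO} with $\gamma=1$) is given below; all vectors are BCRS vectors in metres, and the names are illustrative rather than PEXO's:
\begin{verbatim}
# Minimal sketch of solar-system delays [s].
import numpy as np

C = 299792458.0                   # [m/s]
GM_SUN = 1.32712440018e20         # [m^3/s^2]
AU = 1.495978707e11               # [m]

def roemer_parallax(r_SO, u_SB, r_SB_norm):
    roemer = -np.dot(r_SO, u_SB) / C
    parallax = (np.linalg.norm(np.cross(r_SO, u_SB))**2
                / (2 * C * r_SB_norm))
    return roemer, parallax

def shapiro_sun(r_OT, r_OS, r_ST_norm, r_SO_norm, gamma=1.0):
    cos_psi = (np.dot(r_OT, r_OS)
               / (np.linalg.norm(r_OT) * np.linalg.norm(r_OS)))
    return (1 + gamma) * GM_SUN / C**3 * (
        np.log(2 * r_ST_norm / AU)
        - np.log(r_SO_norm * (1 - cos_psi) / AU))
\end{verbatim}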
In summary, the barycentric Julian date (BJD) in the TCB standard
(${\rm BJD_{\rm TCB}}$) is determined by the corresponding Julian date
(${\rm JD_{\rm TCB}}$) through ${\rm BJD_{TCB}}={\rm
JD_{TCB}}-\Delta_{\rm rS}-\Delta_{\rm pS}-\Delta_{\rm sS}$,
where ${\rm BJD}_{\rm TCB}$ and ${\rm JD}_{\rm TCB}$ are BJD and JD in the TCB time standards, respectively. BJD can only be determined
precisely if the precise location of the observed target is known at a
given epoch. However, this is impossible even with {\it Gaia} astrometry
because the astrometric solution is based on the assumption of a single
star. Thus, BJD is known {\it a posteriori} rather than {\it a priori}
by simultaneously modeling the motions of the Earth, the barycenter of the
target system and the stellar reflex motion. We will discuss the
influence of decoupling the Solar and target systems on timing in
section \ref{sec:decoupling}. Although PEXO does not separate the
solar system dynamics and the target system dynamics in its timing
model, we provide $\rm BJD_{\rm TDB}$ and BJD$_{\rm TCB}$ for users
who do not require a timing precision of $<$0.02\,s. The upper
limit of this precision corresponds to the timing bias amplitude for
$\alpha$ Centauri A due to the decoupling of $\alpha$ Centauri and the solar system over one decade (see section \ref{sec:comparison_table}). For users who need a high-precision timing model, PEXO provides a combined modeling of all motions and various times as optional outputs.
PEXO generates quantities compatible with both TDB and TCB time
standards. TCB is used as the time standard for Gaia \citep{brown18}
while TDB is used by TESS \citep{ricker14}. Since TDB is a time
standard compatible with JPL ephemeris and has a time increasing rate
very similar to that of TT and TAI, it is frequently used in the
exoplanet community. TCB, by definition, is a coordinate time standard that is not sensitive to relativistic effects in the solar
system, although its realization may depend on the relativistic simulation of the solar system. Because both the TDB and TCB systems have particular advantages, we provide the ability to introduce data from both time standards, for example for the combined analysis of the data from Gaia and TESS. The critical matter is the consistent transformation when using data sets with different time standards, and PEXO is designed to provide for this. We refer the readers to \cite{klioner10} and \cite{iers10} for a detailed discussion of different time standards.
\subsubsection{Interstellar Time Delay}\label{sec:delay_interstellar}
Ignoring the interaction between a photon with the interstellar medium, the arrival time at the SSB is delayed with respect to the TSB by
\begin{equation}
\Delta_{\rm is}\simeq \Delta_{\rm vp} + \Delta_{\rm ei}
\end{equation}
where $\Delta_{\rm vp} =|{\bm v}_{\rm SB}(t_a^{\rm SSB}-t_{\rm
pos})+{\bm r}_{\rm SB}(t_{\rm pos})|/c$. Because the vacuum
propagation of light at the reference time ($|{\bm r}_{\rm
SB}(t_{\rm pos})|/c$) is a constant, we only model the relative vacuum
propagation delay, $\Delta_{\rm vp} =|{\bm v}_{\rm SB}(t_a^{\rm SSB}-t_{\rm
pos})+{\bm r}_{\rm SB}(t_{\rm pos})|/c-|{\bm r}_{\rm SB}(t_{\rm
pos})|/c$. The Einstein delay due to the relative motion
between TSB and SSB is
\begin{equation}
\Delta_{\rm ei}=\frac{v_{\rm SB}^2}{2c^2}(t_a^{\rm SSB}-t_{\rm pos}-\Delta_{\rm vp})~.
\end{equation}
\subsubsection{Time Delay in the Target System}\label{sec:delay_target}
Similar to the time delay in the solar system, the delay in the target system is
\begin{equation}
\Delta_{\rm T}\simeq \Delta_{\rm rT}+\Delta_{\rm pT}+\Delta_{\rm
eT}+\Delta_{\rm sT}~.
\label{eqn:dT}
\end{equation}
According to E06, the Roemer delay is
\begin{equation}
\Delta_{\rm rT}=\frac{{\bm r_{\rm BT}}\cdot {\bm u}_b}{c}+\frac{1}{cr_{\rm SB}}\left({\bm \mu}\cdot {\bm r}_{\rm BT,\perp}-{\bm r}_{\rm SO, \perp}\cdot {\bm r}_{\rm BT, \perp}+\frac{|{\bm r}_{\rm BT, \perp}|^2}{2}\right)~,
\label{eqn:drT}
\end{equation}
where ${\bm r}_{\rm SO, \perp}={\bm u}_b\times({\bm r}_{\rm
SO}\times{\bm u}_b)$ and ${\bm r}_{\rm BT, \perp}={\bm
u}_b\times({\bm r}_{\rm BT}\times {\bm u}_b)$. In the above
equation, the first term is related to the Roemer delay that is
due to the motion of TSB, while the other terms are named
``Kopeikin terms'' related to the orbital variation of the target
system that is due to the changing perspective caused by the proper motion of TSB \citep{kopeikin96}. In a rotation reference frame perpendicular to the line of sight, these terms can ``change'' the orbital elements of the target system. However, such an apparent change disappears if the orbit is defined at the reference epoch in a fixed reference frame, as in equation \ref{eqn:rOT3}.
Instead of using the reference unit vector ${\bm u}_b$, we use the
time-varying vector ${\bm u}_{\rm SB}$ to calculate the combined
Roemer and parallax delay as
\begin{equation}
\Delta_{\rm rT}+\Delta_{\rm pT}=\frac{{\bm r_{\rm BT}}\cdot {\bm u}_{\rm SB}}{c}-\frac{|{\bm r}_{\rm BT}\times {\bm u}_{\rm SB}|^2}{2cr_{\rm SB}}~.
\label{eqn:drT-pT}
\end{equation}
This delay is similar to its counterpart in the solar system, as expressed in equations \ref{eqn:dRS} and \ref{eqn:dpS}.
According to \cite{blandford76} and \cite{damour86}, the Einstein delay in the target system is
\begin{equation}
\Delta_{\rm eT}=g \sin U~,
\label{eqn:deT}
\end{equation}
where $g$ is the timing model parameter.
According to \cite{damour86}, the Shapiro delay for the target system is
\begin{equation}
\Delta_{\rm sT}=-2r_s\log\left\{1-e\cos{U}-s_s\left[\sin{\omega}(\cos{U}-e) +(1-e^2)^{1/2}\cos{\omega}\sin{U}\right]\right\}~,
\label{eqn:DST}
\end{equation}
where all of the variables are given in section \ref{sec:PN}. Although
higher-order Shapiro delay terms are available \citep{kopeikin99},
they are insignificant because even the first-order term is only of the order of $(v_{\rm BT}/c)^3$.
\subsection{Astrometry Model}\label{sec:relativistic_astrometry}
The direction of a light ray observed by an observer is changed by
the gravitational field between the source and the observer and by the frame
transformation between the observer and the target. Thus we aim to
find the observed direction of the target star by tracing the
direction of a photon forward from the emission time to the arrival
time at the observatory. To avoid confusion with
the geometric modeling of the observed direction of the source derived
in section \ref{sec:astrometry}, we use $\bm l$ to denote the
direction of a light ray at a given time.
\subsubsection{Stellar aberration}\label{sec:abberration}
According to special relativity, the Lorentz transformation from a static reference frame to a moving reference frame introduces a change in the apparent direction of the target star. This effect is called ``stellar aberration''. Following equation (7) of \cite{klioner03},
the direction of the observed light ray is
\begin{align}
\hat{\bm u}_o&=\langle-{\bm l}_o+c^{-1}{\bm l}_o\times({\bm v}_{\rm SO}\times {\bm l}_o)+c^{-2}[({\bm l}_o\cdot {\bm v}_{\rm SO}) {\bm l}_o\times({\bm v}_{\rm SO}\times {\bm l}_o)+\frac{1}{2}{\bm v}_{\rm SO}\times({\bm l}_o\times {\bm v}_{\rm SO})]\nonumber\\
&+c^{-3}\left\{\left[({\bm l}_o\cdot {\bm v}_{\rm SO})^2+(1+\gamma)w(r_{\rm SO})\right] {\bm l}_o\times({\bm v}_{\rm SO}\times {\bm l}_o)
+\frac{1}{2}({\bm l}_o\cdot {\bm v}_{\rm SO}) {\bm v}_{\rm SO}\times({\bm l}_o\times {\bm v}_{\rm SO})\right\}+\mathcal{O}(c^{-4})\rangle~,
\label{eqn:uo}
\end{align}
where ${\bm l}_o$ is the light ray direction when it is observed, and the
absolute value of the potential $w(r_{\rm SO})$ is approximated for a spherically symmetric Sun by
\begin{equation}
w(r_{\rm SO})\approx Gm_\odot/r_{\rm SO}
\end{equation}
and $\gamma$ is a dimensionless parameter in the Parameterized
post-Newtonian formalism (PPN; \citealt{nordtvedt72}). It is equal
to 1 if GR is true. It could be fitted to astrometry
data in the case of weak-field relativity tests although a fully
post-Newtonian formalization of the timing, astrometry, and radial
velocity models is required to test GR
consistently. For strong-field relativity tests, only the PPK
parameters (see Equation \ref{eqn:x}) are fitted. For the difference
between PPN and PPK parameters, we recommend \cite{taylor92} for
more details. Due to gravitational lensing, ${\bm l}_o\neq -{\bm u}_{\rm OT}$.
\subsubsection{Atmospheric refraction}\label{sec:refraction}
As mentioned in section \ref{sec:tropo}, a light ray is refracted when
it propagates in the Earth's atmosphere. This effect is one of the
main factors that limits the precision of ground-based astrometry
\citep{gubler98,mangum15}. We use the routine {\it slaRefro} in
{\small
SLALIB}\footnote{\url{http://star-www.rl.ac.uk/star/docs/sun67.htx/sun67.html}}
to calculate the refraction,
\begin{equation}
\mathcal{R}=\int_{1}^{n_{\rm O}}\frac{\tan{Z}}{n}dn~,
\label{eqn:R1}
\end{equation}
where $n_{\rm O}$ is the refractive index at the telescope and $Z$ is the refracted
zenith angle. The observed zenith angle $Z_o$ is the sum of the
incident zenith angle above the atmosphere $Z_i$ and the refraction:
\begin{equation}
Z_o=Z_i+\mathcal{R}~.
\end{equation}
As $\tan{Z}$ diverges when $Z$ approaches 90$^\circ$ (see equation
\ref{eqn:R1}), \cite{auer00} reformulated the integrand as a function of zenith
angle, and the refraction becomes
\begin{equation}
\mathcal{R}=-\int_{0}^{Z_o}\frac{rdn/dr}{n+rdn/dr}dZ~,
\end{equation}
where $r$ is the distance from the geocenter. Because refraction is
wavelength dependent, the effective temperature or wavelength of a
star should be known in order to calculate the refraction. By adopting the
atmospheric model developed by \cite{rueger02} and using the {\it
slaRefro} routine adapted from the {\it AREF} routine given by
\cite{hohenkerk85}, we can calculate the refraction $\mathcal{R}$ to a precision of
about 1\,arcsec \citep{mangum15} and the differential refraction $\Delta
\mathcal{R}$ to a precision of 10\,$\mu$as \citep{gubler98}. However, in order to achieve such relative astrometric precision for a typical binary, those authors find that the effective
temperature of stars should be measured to a precision of 100\,K,
absolute zenith angle to a precision of 36\,arcsec, relative zenith
angle to a precision of 30\,mas, air temperature at the observatory to
a precision of 0.6\,K, air pressure to a precision of 160\,Pa,
and relative humidity to a precision of 10\%. Because the refraction is
calculated from the observed zenith angle in {\it slaRefro}, we initially set
$Z_o=Z_i$ and repeat the calculation of $\mathcal{R}$ until it converges. Because the refraction occurs in the plane formed by the zenith and the
incident light ray and is perpendicular to the incident light ray, the refraction vector is
\begin{equation}
\bm{\mathcal{R}}=\frac{{\bm u}_Z-({\bm u}_Z\cdot {\bm u}_{\rm
OT}){\bm u}_{\rm OT}}{\sin{Z}}\mathcal{R}~,
\label{eqn:refro}
\end{equation}
where ${\bm u}_Z$ is the unit vector in the zenith direction. Then the light ray direction when it is observed
is
\begin{equation}
{\bm l}_o={\bm l}_i-\bm{\mathcal{R}}~,
\end{equation}
where ${\bm l}_i$ is
approximately $-{\bm u}_{\rm OT}$. Such an approximation would at most
induce third-order effects.
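To make the fixed-point evaluation of the refraction described above concrete, the following Python sketch mimics the iteration with a toy refraction law $\mathcal{R}\approx k\tan Z_o$; the function names and the constant $k$ are our own illustrative assumptions, and in PEXO the SLALIB routine {\it slaRefro} plays the role of the toy model.
\begin{verbatim}
import numpy as np

def refraction(z_obs_rad, k=60.2 / 206265.0):
    """Toy refraction law R ~ k tan(Z_o), with k ~ 60 arcsec
    expressed in radians (stand-in for slaRefro)."""
    return k * np.tan(z_obs_rad)

def observed_zenith(z_incident_rad, tol=1e-12, max_iter=50):
    """Fixed-point iteration: start with Z_o = Z_i, then update
    Z_o = Z_i + R(Z_o) until convergence."""
    z_o = z_incident_rad
    for _ in range(max_iter):
        z_new = z_incident_rad + refraction(z_o)
        if abs(z_new - z_o) < tol:
            return z_new
        z_o = z_new
    return z_o

z_i = np.deg2rad(45.0)
z_o = observed_zenith(z_i)
print(np.rad2deg(z_o - z_i) * 3600, "arcsec")   # ~60 arcsec at Z=45 deg
\end{verbatim}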
\subsubsection{Gravitational light deflection}\label{sec:deflection}
For a target system outside of the solar system (with heliocentric
distance $>10^5$\,au), the light emitted from the target star is
deflected by the gravitational fields of its companions. This effect is
also called gravitational lensing and will also contribute to the
Shapiro delay, as discussed in section \ref{sec:timing}. After equation
(70) of \cite{klioner03}, we convert the light ray direction at the emission time ${\bm l}_e$ into the direction after leaving the target system as
\begin{equation}
{\bm l}_l = {\bm l}_e-\sum_A\frac{(1+\gamma) Gm_A}{c^2}\frac{{\bm r}_{\rm
OT}\times({\bm r}_{\rm TA}\times{\bm r}_{\rm OA})}{ |{\bm r}_{\rm OT}||{\bm r}_{\rm OA}|(|{\bm r}_{\rm TA}||{\bm r}_{\rm OA}|+{\bm r}_{\rm OA}\cdot{\bm r}_{\rm TA})}~,
\label{eqn:ll}
\end{equation}
where A denotes a body in the target system and
$\gamma=1$ if GR is assumed. We ignore the gravitational
deflection of light that is due to the nonspherical gravitational potential
of lenses because it only contributes 1\,$\mu$as even when the light source is very close to the lens (see Table 1 of \citealt{klioner03} for details).
Assuming vacuum propagation of the light ray between the target and
the solar system, the direction of the incident light beyond the atmosphere is
\begin{equation}
{\bm l}_i = {\bm l}_l-\sum_{\rm L}\frac{(1+\gamma) Gm_{\rm L} {\bm
d}_{\rm L}}{c^2d_{\rm L}^2}(1+\cos{\psi_{\rm L}})~,
\label{eqn:lo}
\end{equation}
where $\cos{\psi_{\rm L}} ={\bm u}_{\rm OT}\cdot{\bm r}_{\rm OL}/r_{\rm
OL}$ is the angular distance between the light ray and lens L, and
${\bm d}_{\rm L}={\bm l}_e\times({\bm r}_{\rm OL} \times {\bm
l}_e)$. For an observer at the geocenter, the light ray does not
bend if one assumes the gravitational field of the Earth is spherically
symmetric. According to \cite{klioner03}, the main light deflection
is caused by the Sun and the Earth, while the Moon and other planets
are only important if the light ray passes them closely.
In summary, the emitted light ray direction is derived from the
geometric observed direction using ${\bm l}_e=-{\bm u}_{\rm OT}$ with
${\bm u}_{\rm OT}$ derived from equation \ref{eqn:uOT}. Here, ${\bm l}_i$ is
calculated using equations \ref{eqn:ll} and \ref{eqn:lo}. The
incident light is further refracted by the atmosphere by
$\bm{\mathcal{R}}$. The direction of the light ray at the telescope
is $\bm{l}_o=\bm{l}_i-\bm{\mathcal{R}}$. Then $\hat{\bm u}_o$ is calculated using equation \ref{eqn:uo} to model the observed direction of star ${\bm u}_o$.
\subsection{Radial Velocity Model}\label{sec:relativistic_rv}
In this section, we model the observed radial velocity related to the kinematics, geometry, and relativistic effects of the target star and the observer.
\subsubsection{Einstein Doppler Shift}\label{sec:VG_shift}
In an inertial reference frame, the Schwarzschild solution to the Einstein field equations leads to the following exact ratio between the rate of proper time and the rate of coordinate time for a clock:
\begin{equation}
\frac{d\tau}{dt}=\sqrt{1-\left(\frac{v^2}{c^2}+\frac{v_e^2}{c^2}+\frac{(v_{||}/c)^2(v_e/c)^2}{1-(v_e/c)^2}\right)}~,
\end{equation}
where $v_{||}$ is the radial velocity of the clock with respect to the
inertial frame, and
\begin{equation}
v_e=\sqrt{\sum_i{\frac{2Gm_i}{r_i}}}
\end{equation}
is the escape velocity determined by the sum of the gravitational potential of
nearby bodies. Applying the above formula to the solar system and the
target system and ignoring $c^{-4}$ terms, we derive the increment
ratio of the proper observation time $\tau_o$ and the proper emission
time $\tau_e$ as
\begin{align}
1+z&\equiv \frac{\lambda_o}{\lambda_e}=\frac{\nu_e}{\nu_o}=\frac{d\tau_o}{d\tau_e}=\frac{d\tau_o}{dt_o}\frac{dt_o}{dt_i}\frac{dt_i}{dt_e}\frac{dt_e}{d\tau_e}\nonumber\\
& =(1-\frac{\Phi_{\rm S}}{c^2}-\frac{v_{\rm SO}^2}{2c^2})
(1-\frac{\Phi_{\rm T}}{c^2}-\frac{v_{\rm ST}^2}{2c^2})^{-1} \frac{dt_o}{dt_i}\frac{dt_i}{dt_e}~,
\label{eqn:VG_shift}
\end{align}
where $\lambda_o$ and $\lambda_e$ are respectively the observed and
emitted wavelengths, $\nu_o$ and $\nu_e$ are respectively
the observed and emitted light frequencies, $\Phi_{\rm
S}=\sum\limits_i{\frac{Gm_i}{r_i}}$ is the absolute value of gravitational potential of the solar system at the observer's location
while $\Phi_{\rm T}=\sum\limits_j{\frac{Gm_j}{r_j}}$ is the absolute
value of gravitational potential of the target system when it emits
the light, $dt_o/dt_i$ is determined by the atmospheric refraction, and $dt_i/dt_e$ is determined by Shapiro delay and vacuum propagation. We define
\begin{equation}
z_{\rm
grS}\equiv\Phi_{\rm S}/c^2
\label{eqn:zgrS}
\end{equation}
and
\begin{equation}
z_{\rm grT}\equiv\Phi_{\rm T}/c^2
\label{eqn:zgrT}
\end{equation}
as gravitational Doppler shifts in the solar and target systems,
respectively. We also define
\begin{equation}
z_{\rm srS}\equiv\frac{v_{\rm SO}^2}{2c^2}
\label{eqn:zsrS}
\end{equation}
and
\begin{equation}
z_{\rm srT}\equiv\frac{v_{\rm ST}^2}{2c^2}
\label{eqn:zsrT}
\end{equation}
as the Doppler shifts due to special
relativity effects in the solar and target systems, respectively. Because
\begin{equation}
\frac{d\tau_o}{dt_o}=\frac{\rm dTT}{\rm dTCB}=1-z_{\rm grS}-z_{\rm srS}~,
\label{eqn:zrS}
\end{equation}
the relativistic effects on the Doppler shifts in the solar system can be derived
from $\Delta_{\rm eS}$ described in section \ref{sec:timing}.
Photons emitted from different places on the surface of a star
experience different gravitational Doppler shifts, especially if
there is a massive companion close to the target star. For example,
the velocity variation corresponding to the gravitational Doppler
shift caused by a Sun-like star located about 1\,au from the target
star is 3\,m/s. Assuming that the radius of the target star is
comparable with the solar radius, which is about $1/215$\,au, the
differential Doppler shift would lead to about 1\,cm/s of radial velocity variation. Such a differential Doppler shift should be accounted for together with the rotation-induced differential Doppler shift in the case of exoplanet detection in close binary systems.
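The following back-of-the-envelope sketch reproduces these magnitudes; the constants and the companion configuration are assumptions made for illustration.
\begin{verbatim}
G = 6.674e-11            # gravitational constant [SI]
C = 299792458.0
M_SUN = 1.989e30         # solar mass [kg]
AU = 1.495978707e11
R_SUN = AU / 215.0       # solar radius, ~1/215 au [m]

# Gravitational Doppler shift from a Sun-like companion at 1 au
z_gr = G * M_SUN / (AU * C**2)
print(z_gr * C, "m/s")                          # ~3 m/s

# Differential shift between stellar center and limb
dz = G * M_SUN / C**2 * (1 / (AU - R_SUN) - 1 / AU)
print(dz * C * 100, "cm/s")                     # ~1 cm/s
\end{verbatim}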
\subsubsection{Kinematic, lensing, and tropospheric Doppler shift}\label{sec:special_shift}
As discussed in section \ref{sec:timing}, the emission
coordinate time $t_e$ is delayed from the coordinate arrival time
of nonrefracted light ray $t_i$ by
\begin{equation}
t_i-t_e=\Delta_{\rm geo}+\Delta_{\rm sS}+\Delta_{\rm sT}~,
\end{equation}
where
\begin{equation}
\Delta_{\rm geo}=\frac{r_{\rm OT}}{c}
\label{eqn:Dgeo}
\end{equation}
is the vacuum propagation time from the target to the observer.
The differential of the above delay gives
\begin{equation}
\frac{{\rm d}t_e}{{\rm d}t_i}=\frac{1+z_{\rm kS}-z_{\rm lS}}{1+z_{\rm kT}-z_{\rm lT}}~,\\
\end{equation}
where $z_{\rm lS}$ and $z_{\rm lT}$ are respectively the lensing Doppler shifts corresponding to the Shapiro delay in the solar system and in the target system, while
\begin{equation}
z_{\rm kS}=\frac{{\bm u}_{\rm OT}\cdot{\bm v}_{\rm SO}}{c},
\end{equation}
and
\begin{equation}
z_{\rm kT}=\frac{{\bm u}_{\rm OT}\cdot{\bm v}_{\rm ST}}{c},
\end{equation}
are respectively the kinematic Doppler shifts in the solar and target
systems. We calculate the relativistic effects by adopting the
direction from the observer to the target as in \cite{kopeikin99} and
\cite{lindegren03}.
In \cite{lindegren03}, the gravitational deflection and the Shapiro
delay of the light are not thoroughly treated because of their negligible
effects. For example, \cite{lindegren03} dropped the Shapiro delay
term because it contributes at most 0.3\,m/s to the radial
velocity. Although this upper limit is determined from the extreme
situation when the light ray grazes the solar limb, the lensing effect
for stars with a large angular distance from the Sun can still be important for achieving 1\,cm/s radial velocity precision and we
consider this further below. Based on a more rigorous treatment of the
Shapiro effect in
equations (169), (173), and (238) of \cite{kopeikin99}, the lensing Doppler shift in the solar system is
\begin{equation}
z_{\rm lS}= \left(\frac{\delta \nu}{\nu_{\rm o}}
\right)_S=\sum_{\rm L}\frac{1}{c}({\bm v}_{\rm SL}-\frac{r_{\rm
LT}}{r_{\rm OT}}{{\bm v}_{\rm SO}}-\frac{r_{\rm OL}}{r_{\rm
OT}}{{\bm v}_{\rm ST}}) \cdot{{\bm \alpha}({{\bm
\lambda}_{\rm L}})}~,
\label{eqn:zlS1}
\end{equation}
where ${\bm \lambda}_{\rm L}$ is the impact parameter of the
unperturbed path of photons with respect to lens L, and
\begin{equation}
{\bm \alpha}({{\bm \lambda}_{\rm L}})=2(1+\gamma)\frac{G
m_{\rm L}}{c^2\lambda_{\rm L}^2}{\bm \lambda}_{\rm L}~,
\label{eqn:alphaL}
\end{equation}
where $m_{\rm L} $ is the mass of lens L.
Ignoring the lensing effects of planets, assuming a static Sun with
respect to the SSB, and considering $r_{\rm OL}\ll r_{\rm LT}$ and
$r_{\rm OT}\simeq r_{\rm OL}+ r_{\rm LT}$, we find that Equation \ref{eqn:zlS1} becomes
\begin{equation}
z_{\rm lS}= \left( \frac{\delta \nu}{\nu_{\rm o}} \right)_S=-\frac{{\bm v_{\rm SO}} \cdot{\bm \alpha}({\bm \lambda_S})}{c}~,
\label{eqn:zlS}
\end{equation}
where ${\bm \lambda}_S={\bm u}_{\rm OT}\times({\bm r}_{\rm OS} \times{\bm
u}_{\rm OT})$. Because the lensing effect is proportional to $c^{-3}$,
the above assumptions would at most introduce fourth-order effects. Equation
\ref{eqn:zlS} is the lensing formula used in most of the literature. Similarly, the lensing Doppler shift in the target system is approximately
\begin{equation}
z_{\rm lT}= \left( \frac{\delta \nu}{\nu_{\rm o}} \right)_{\rm T}=-\frac{{\bm v_{\rm CT}} \cdot{\bm \alpha}({\bm \lambda_{\rm C}})}{c}~,
\label{eqn:zlT}
\end{equation}
where ${\bm v}_{\rm CT}={\bm v}_{\rm ST}-{\bm v}_{\rm SC}$ is the velocity of the target
star relative to the companion, with ${\bm v}_{\rm SC}$ and ${\bm v}_{\rm ST}$ respectively the
velocities of the companion and the target star with respect to the SSB, and
${\bm \lambda}_{\rm C}={\bm u}_{\rm OT}\times({\bm r}_{\rm OC}
\times{\bm u}_{\rm OT})$. The Sun is the main gravitational lens in the solar system, which
induces a lensing Doppler shift of about $\frac{1\,{\rm
au}}{\lambda_S}$\,mm/s assuming an observer tangential velocity
of 30\,km/s. For an impact parameter comparable with the solar radius,
the shift would be about 0.3\,m/s. The angle between the target source
and the light ray from the perspective of the observer, $\psi$, should
be less than 7$^\circ$ based on equations \ref{eqn:zlS} and \ref{eqn:zlT} in order to induce a
$>1$\,cm/s line shift. If the target system is like the solar system,
this effect leads to a $>1$\,cm/s Doppler shift in edge-on systems. Although the
lensing effect is typically ignored in current exoplanet packages such
as {\small EXOFAST} \citep{eastman13}, it could become significant in
the search for small planetary signals whose amplitude is comparable with the
lensing effect.
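A rough numerical check of these magnitudes is sketched below; the helper function and its argument names are illustrative assumptions, and the velocity is taken to be aligned with the deflection vector so the result is an upper bound.
\begin{verbatim}
G = 6.674e-11
C = 299792458.0
M_SUN = 1.989e30
R_SUN = 6.957e8          # solar radius [m]
GAMMA = 1.0              # PPN gamma (GR value)

def z_lens(v_tan, impact, m_lens=M_SUN):
    """|z_lS| for a ray with the given impact parameter
    (cf. equations zlS and alphaL)."""
    alpha = 2 * (1 + GAMMA) * G * m_lens / (C**2 * impact)
    return v_tan * alpha / C

print(z_lens(30e3, R_SUN) * C, "m/s")       # Sun-grazing ray: ~0.3 m/s
print(z_lens(30e3, 1.496e11) * C, "m/s")    # 1 au impact: ~1 mm/s
\end{verbatim}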
Atmospheric refraction not only causes timing delay and deflects light
rays but also leads to Doppler shift. The Doppler shift induced by
tropospheric refraction is
\begin{equation}
z_{\rm tropo}=\frac{dt_o}{dt_i}-1=\frac{d\Delta_{\rm
tropo}}{dt_i}\approx (\Delta_{\rm hydro}m_h'(\Theta)+\Delta_{\rm
wz}m_w'(\Theta))\frac{d\Theta}{dt_i}~,
\label{eqn:ztropo}
\end{equation}
where $m_h'=\frac{{\rm d}m_h}{{\rm d}\Theta}$ and $m_w'=\frac{{\rm
d}m_w}{{\rm d}\Theta}$.
The differential tropospheric delay is derived numerically using {\it
slaRefro}. The rotation of the Earth leads to a continuous change of
the elevation and thus changes the mapping functions $m_h$ and
$m_w$. This effect would induce a diurnal radial velocity variation of a
few mm/s for elevation angles lower than 30$^\circ$ if only the
hydrostatic delay were considered. For elevation angles less than
10$^\circ$, the refraction could induce up to a few m/s radial velocity
variation due to the exponential variation of refraction near the horizon
(see P4 of Fig. \ref{fig:acrel}).
By combining all Doppler effects, the Doppler shift is
\begin{equation}
\frac{v_r^{\rm obs}}{c}\equiv z=\frac{1-z_{\rm grS}-z_{\rm
srS}}{1-z_{\rm grT}-z_{\rm srT}}\frac{1+z_{\rm kT}-z_{\rm
lT}}{1+z_{\rm kS}-z_{\rm lS}-z_{\rm tropo}}-1~.
\label{eqn:vr_obs}
\end{equation}
Unlike \cite{wright14} and \cite{butkevich14}, we do not explicitly add
a term related to the light travel effect. Rather, we calculate the
quantities at the corresponding retarded time for a given light ray. We
calculate the emitted frequency at the proper emission time according
to the time transformation described in section \ref{sec:timing}. In
equation \ref{eqn:vr_obs}, the special and general relativistic Doppler shifts
($z_{\rm srS}$, $z_{\rm grS}$, $z_{\rm srT}$, and $z_{\rm grT}$) are
proportional to $c^{-2}$, and lensing effects ($z_{\rm lS}$ and $z_{\rm
lT}$) lead to $\mathcal{O}(c^{-3})$ Doppler shift. The kinematic
Doppler shifts ($z_{\rm kS}$ and $z_{\rm kT}$) are proportional to
$c^{-1}$ and thus dominate the radial velocity
variation. In the case of the detection of small planets like the
Earth, $z_{\rm kT}$ corresponds to a $<$1\,m/s radial velocity
variation, comparable to the contribution of some
relativistic effects. Thus, a comprehensive modeling of these effects
is essential for reliable detection of Earth-like planets.
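For reference, the combination in equation \ref{eqn:vr_obs} translates into code as a one-line expression; the sketch below (with toy, assumed shift values) shows how the individual terms enter.
\begin{verbatim}
def v_r_observed(z_grS, z_srS, z_grT, z_srT,
                 z_kS, z_kT, z_lS, z_lT, z_tropo,
                 c=299792458.0):
    """Observed radial velocity from the dimensionless Doppler
    terms; a direct transcription of equation vr_obs."""
    z = ((1 - z_grS - z_srS) / (1 - z_grT - z_srT)
         * (1 + z_kT - z_lT) / (1 + z_kS - z_lS - z_tropo) - 1)
    return z * c

# Toy values (assumed): kinematic terms dominate at ~1e-4
print(v_r_observed(z_grS=1e-8, z_srS=5e-9, z_grT=2e-8, z_srT=1e-8,
                   z_kS=1e-4, z_kT=5e-5, z_lS=0.0, z_lT=0.0,
                   z_tropo=0.0), "m/s")
\end{verbatim}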
\subsection{Caveats in the decoupling of the Solar and target systems}\label{sec:decoupling}
The so-called ``barycentric correction'' is typically used to
transform the measured radial velocity into the BCRS radial
velocity. However, it is only possible if we can separate the local effects and the remote effects caused by the target system. Specifically, the total Doppler shift is split into local and remote Doppler shifts:
\begin{align}
z&=(1+z_{\rm T})(1+z_{\rm S})-1~,\\
1+z_{\rm S}&=\frac{1-z_{\rm grS}-z_{\rm srS}}{1+z_{\rm kS}-z_{\rm lS}-z_{\rm tropo}}~,\\
1+z_{\rm T}&=\frac{1+z_{\rm kT}-z_{\rm lT}}{1-z_{\rm grT}-z_{\rm srT}}~.
\label{eqn:zb}
\end{align}
Although most terms in $z_{\rm S}$ can be precisely determined, ${\bm
u}_{\rm OT}$ is typically not known {\it a priori}. For single
stars, the error in proper motion may bias the barycentric correction
for decades-long radial velocity data. For example, a 10\,mas/yr uncertainty in
proper motion would lead to 1.5\,cm/s uncertainty in barycentric
correction over one year.
For stars with massive planet companions, the catalog astrometry of
the barycenter is biased by the typical assumption of a single
star in the data reduction. Because the stellar reflex motion is coupled
with the Earth's motion (see equation \ref{eqn:uOT}), a barycentric
correction for binaries would not only lead to a spurious trend but
also introduce false periodic signals in the corrected radial velocity
data. For $\alpha$ Centauri, these false signals lead to sub-m/s radial velocity
variation, hindering the detection of Earth-like planets in this
system. Thus, precise barycentric correction is only possible if the
stellar reflex orbit is accurately determined {\it a priori}. However,
this is rarely the case even in the {\it Gaia} era because the five-parameter astrometry solution assumes no companions around a target star.
Even if companions are considered in astrometry modeling, potential
uncertainty and bias are expected because of a lack of precise modeling of
instrumental bias, stellar activity, and other noise terms.
Two types of biases are caused by barycentric correction:
\begin{itemize}
\item A trend bias is caused by using the barycentric
velocity of the target system as the velocity of the target star
without considering the stellar reflex motion. This assumption would
bias the astrometric solution and thus induce a long-term trend in the
radial velocity data. Thus this bias is related to the velocity of the stellar
reflex motion and is important for long-term observations.
\item A periodic bias is caused by ignoring the position offset of the target
star with respect to the barycenter of the target system. Because the
stellar reflex motion is periodic, this assumption would cause
periodic variation of the visual direction of the target star, leading
to periodic variation of radial velocity. It is important for observations with
baselines longer than the period of stellar reflex motion.
\end{itemize}
Because the Earth's motion is coupled with the barycentric and binary motions, the
annual and diurnal Earth motions are manifested in both biases. To estimate the trend bias, we calculate the average reflex motion
of the target star as
\begin{equation}
\bar{v}_{\rm reflex}=\frac{m_{\rm C}}{m_{\rm C}+m_{\rm T}}\sqrt{\frac{G(m_{\rm C}+m_{\rm T})}{a}}~.
\end{equation}
The corresponding proper motion bias caused by ignoring this reflex
motion is
\begin{equation}
\delta\mu=\bar{v}_{\rm reflex}\frac{\widetilde\omega^b}{A}~.
\end{equation}
This proper motion offset leads to a positional bias of
\begin{equation}
\delta u=\delta\mu \delta t
\end{equation}
over a time span of $\delta t$.
Assuming that the characteristic radial velocity caused by the motions of the
target star and the Earth is $v_{\rm tot}=50$\,km/s, we estimate the radial
velocity bias related to $\delta u$ as
\begin{equation}
\delta v_r^{\rm trend}=v_{\rm tot} \delta
u=1.52\left(\frac{m_{\rm C}}{M_\odot}\right)\left(\frac{m_{\rm C}+m_{\rm T}}{M_\odot}\right)^{-1/2}\left(\frac{a}{{\rm
au}}\right)^{-1/2}\left(\frac{\widetilde\omega^b}{{\rm
mas}}\right)\left(\frac{\delta t}{{\rm yr}}\right)~ {\rm mm}~{\rm s}^{-1}~.
\label{eqn:vr_trend}
\end{equation}
The corresponding acceleration of the trend bias is
\begin{equation}
\delta \dot{v}_r^{\rm trend}=\frac{\delta {v}_r^{\rm trend}}{\delta t}=1.52\left(\frac{m_{\rm C}}{M_\odot}\right)\left(\frac{m_{\rm C}+m_{\rm T}}{M_\odot}\right)^{-1/2}\left(\frac{a}{{\rm
au}}\right)^{-1/2}\left(\frac{\widetilde\omega^b}{{\rm
mas}}\right)~ {\rm mm}~{\rm s}^{-1}~{\rm yr}^{-1}~.
\end{equation}
The periodic bias is determined by the semi-major axis of the
stellar reflex motion and is
\begin{equation}
\delta v_r^{\rm period}=v_{\rm tot}a\frac{m_{\rm C}}{m_{\rm C}+m_{\rm T}}\frac{\widetilde\omega^b}{A}=0.24\left(\frac{a}{{\rm au}}\right)\left(\frac{\widetilde\omega^b}{{\rm mas}}\right)~ {\rm mm}~{\rm s}^{-1}~.
\label{eqn:vr_period}
\end{equation}
Considering that the barycentric correction is also frequently used in
astrometry and timing, we calculate the time delay biases
corresponding to the trend and periodic radial velocity biases, which
are
\begin{equation}
\delta \Delta^{\rm trend}=\frac{\delta v_r^{\rm trend}}{v_{\rm tot}}\frac{A}{c}=15.17\left(\frac{m_{\rm C}}{M_\odot}\right)\left(\frac{m_{\rm C}+m_{\rm T}}{M_\odot}\right)^{-1/2}\left(\frac{a}{{\rm
au}}\right)^{-1/2}\left(\frac{\widetilde\omega^b}{{\rm
mas}}\right)\left(\frac{\delta t}{{\rm yr}}\right)~\mu {\rm s}
\label{eqn:timing_trend}
\end{equation}
and
\begin{equation}
\delta \Delta^{\rm periodic}=\frac{\delta v_r^{\rm period}}{v_{\rm tot}}\frac{A}{c}=2.40\left(\frac{a}{{\rm au}}\right)\left(\frac{\widetilde\omega^b}{{\rm mas}}\right)~ \mu{\rm s}~.
\label{eqn:timing_period}
\end{equation}
In the above equations, we consider the light travel time from the Earth
to the Sun (about 1\,au) as the characteristic Roemer delay.
Similarly, the astrometric biases corresponding to the trend and
periodic radial velocity biases are
\begin{equation}
\delta u^{\rm trend}=\frac{\delta v_r^{\rm trend}}{v_{\rm tot}}=6.27\left(\frac{m_{\rm C}}{M_\odot}\right)\left(\frac{m_{\rm C}+m_{\rm T}}{M_\odot}\right)^{-1/2}\left(\frac{a}{{\rm
au}}\right)^{-1/2}\left(\frac{\widetilde\omega^b}{{\rm
mas}}\right)\left(\frac{\delta t}{{\rm yr}}\right)~{\rm mas}~.
\label{eqn:u_trend}
\end{equation}
and
\begin{equation}
\delta u^{\rm periodic}=\frac{\delta v_r^{\rm period}}{v_{\rm tot}}=0.99\left(\frac{a}{{\rm au}}\right)\left(\frac{\widetilde\omega^b}{{\rm mas}}\right)~
{\rm mas}~.
\label{eqn:u_period}
\end{equation}
Therefore, a radial velocity bias of 1\,mm/s corresponds to a timing
bias of 10\,$\mu$s and an astrometric bias of about 4\,mas.
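The scalings in equations \ref{eqn:vr_trend}--\ref{eqn:u_period} are convenient to evaluate programmatically; a minimal sketch, with function and key names of our own and the coefficients taken directly from the equations above, is:
\begin{verbatim}
def decoupling_biases(m_C, m_T, a_au, plx_mas, dt_yr):
    """Decoupling biases from equations vr_trend to u_period;
    masses in solar masses, parallax in mas, time span in years."""
    trend = m_C * (m_C + m_T)**-0.5 * a_au**-0.5 * plx_mas * dt_yr
    periodic = a_au * plx_mas
    return {"trend RV [mm/s]":           1.52 * trend,
            "periodic RV [mm/s]":        0.24 * periodic,
            "trend timing [us]":         15.17 * trend,
            "periodic timing [us]":      2.40 * periodic,
            "trend astrometry [mas]":    6.27 * trend,
            "periodic astrometry [mas]": 0.99 * periodic}

# Example (assumed): equal-mass binary, a = 10 au, 100 mas parallax
for key, val in decoupling_biases(1.0, 1.0, 10.0, 100.0, 10.0).items():
    print(key, round(val, 3))
\end{verbatim}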
To investigate the influence of the barycentric correction or more
generally the decoupling of local and remote effects on the detection of exoplanets, we calculate the boundary of companion mass and orbital
period corresponding to a trend bias with an acceleration of 1\,mm/s/yr and a
periodic bias of 1\,cm/s for stars with a heliocentric distance of 1, 10, and
100\,pc. We show these boundaries together with the currently known
planets in Fig. \ref{fig:barycorr_bias}. In this figure, we add
circles around plotted points to denote planets that are subject to
trend bias with an acceleration larger than 1\,mm/s/yr or periodic
bias larger than 1\,cm/s. An acceleration of
1\,mm/s/yr corresponds to a 1\,cm/s trend bias for observations over one
decade. There are only six planets strongly influenced by trend
bias. Five of them are transit planets, while one of them is detected
through astrometry. They are all massive planets with relatively short
orbital periods. The boundaries for trend biases also suggest that
short-period massive planets such as hot Jupiters would induce large
stellar reflex motions and thus bias the initial proper motions,
leading to a spurious radial velocity trend. On the other hand, the
periodic bias is manifested in stars with long-period and massive companions, most of which are detected through direct imaging.
Nevertheless, our estimation of the bias for a given system
is only a lower limit because the bias is determined by the largest
companion in a system, which might not be detected. Therefore, the
boundaries shown in Fig. \ref{fig:barycorr_bias} are a better guide
for the estimation of the decoupling bias because they are calculated for the most
massive companion in a system.
\begin{figure}
\centering
\includegraphics[scale=0.5]{decouple_bias.pdf}
\caption{The plot shows exoplanets downloaded from the NASA
Exoplanet Archive and color coded for different detection methods
as a function of orbital period and planetary mass. The lines
indicate potential radial velocity biases caused
by barycentric corrections or more generally by decoupling local
and remote effects with the assumption that plotted exoplanets are
hosted by single stars. The black lines show the boundaries for
the 1\,mm/s/yr trend bias while the gray lines show the boundaries for
the 1\,cm/s periodic bias. The top left part of the phase space is
particularly susceptible to large trend bias while the top right
part of the phase space is particularly susceptible to large
periodic bias. The solid, dashed, and dotted lines show the biases
for stars with distances $d=$1, 10, and 100\,pc, respectively. The
planets denoted by open circles are influenced by at least
1\,mm/s/yr trend bias or 1\,cm/s periodic bias because of decoupling. The orbital periods for directly imaged planets are derived from their semi-major axes by assuming a face-on circular orbit.}
\label{fig:barycorr_bias}
\end{figure}
Because the transit timing is not sensitive to $<0.02$\,s bias
(corresponding to decoupling bias for $\alpha$ Centauri A), a decoupling
in timing modeling is efficient and reliable for most transit
systems. However, the transit system is sensitive to remote effects
such as the transit timing variation (TTV) caused by binary
motions. Relative astrometry is sensitive to the astrometry and radial velocity of
the TSB (see Equation \ref{eqn:relative}), which could be biased
through decoupling. Absolute astrometry is
sensitive to decoupling, and this is why the high-precision astrometry
software GREM \citep{klioner03} is used to model all motions
simultaneously for {\it Gaia} astrometry although its timing model is biased by decoupling effects.
For the radial velocity method, decoupling is unlikely to achieve
1\,cm/s radial velocity precision because even distant close binaries
($d>1$\,kpc) show 1\,cm/s trend bias for decade-long observations. On
the other hand, wide binaries (with orbital periods longer than one
decade) show strong periodic bias, which can be approximated as a trend
for observations with a baseline far shorter than the orbital
period. Because nearly half of the solar-type stars are binaries (e.g.,
\citealt{sana11,moe18}), a combined modeling of the target and local systems is essential to achieve 1\,cm/s precision
over decade-long observations. As illustrated in
Fig. \ref{fig:barycorr_bias}, for nearby stellar systems, planets
with hot and cold Jupiter companions are sensitive to trend and
periodic biases, respectively. Because most TESS targets are close to
the Sun, the radial velocity follow-up of hot Jupiters detected by
TESS may need to consider the trend bias. Specifically, decoupling
could introduce $\sim$0.1\,m/s bias in ten years of radial velocity
measurements of a nearby star ($<$10 pc) with hot or cold Jupiters. It
could introduce $\sim$1\,m/s bias over one year for a nearby star hosting stellar-mass companions.
Considering the above difficulties, a separation of local and remote
radial velocity effects through decoupling is unlikely to achieve 1\,cm/s precision for decades-long radial velocity data especially for
nearby stars with massive companions (e.g., with a mass $>1~M_{\rm Jup}$). Because astrometry data is essential for a reliable
decoupling, a combined modeling of radial velocity and astrometry is
the proper way to avoid bias induced by decoupling. Another more efficient approach is to use astrometry offsets or jitter terms to model potential bias and fit
these offsets together with the radial velocity model parameters to
the radial velocity data ``corrected'' for barycentric effects.
\subsection{Significance of Relativistic Effects in Extrasolar
Systems}\label{sec:relativity_test}
In this section, we investigate the sensitivity of currently confirmed
exoplanets to relativistic effects. The main relativistic effect in
extrasolar systems is the precession of the longitude of
periastron. According to \cite{misner73} and \cite{jordan08}, it is
\begin{equation}
\dot{\omega}_{\rm
GR}=\frac{3Gm_{\rm T}}{ac^2(1-e^2)}n=\frac{7.78}{1-e^2}\left(\frac{m_{\rm T}}{1~M_\odot}\right)\left(\frac{a}{0.05~{\rm
au}}\right)^{-1}\left(\frac{P}{1~{\rm
day}}\right)^{-1}~^\circ/{\rm century}~,
\label{eqn:omega.dot}
\end{equation}
where $n\equiv [G(m_{\rm T}+m_{\rm C})/a^3]^{1/2}$ is the Keplerian mean
motion and $P$ is the orbital period. Assuming $m_{\rm C}\ll m_{\rm T}$, we derive the period-mass
boundaries for $e=0$, 0.5, and 0.9 for $\dot{\omega}_{\rm GR}=10^\circ$ per
century (equivalently 1$^\circ$ per decade) and show them in the period-mass
distribution of currently known planets in Fig. \ref{fig:precession}. There are about 144 transit planets with strong relativistic precession, although the
planetary perturbation and tidal deformations may also contribute at a
level comparable to the relativistic precession \citep{jordan08}. However, these nonrelativistic
effects only become important when the planet is very close to the
star. According to \cite{jordan08}, planetary orbits with semi-major axis
larger than 0.05\,au are suitable for relativity tests. The precession is
detectable in the variation of primary transit duration
\citep{miralda02} and in the changes of longitude of periastron in
radial velocity data \citep{jordan08}. Although such effects will
probably be detected in the near future, the current radial velocity
and transit timing data are not likely to be precise enough to put strong
constraints on various post-Newtonian theories and to test GR in particular.
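As a quick illustration of equation \ref{eqn:omega.dot}, the snippet below evaluates the precession rate for a WASP-12b-like hot Jupiter; the system parameters are approximate values assumed for this example.
\begin{verbatim}
def omega_dot_GR(m_T_msun, a_au, P_day, e):
    """GR periastron precession in degrees per century
    (equation omega.dot)."""
    return 7.78 / (1 - e**2) * m_T_msun / (a_au / 0.05) / P_day

# WASP-12b-like values (assumed): a ~ 0.023 au, P ~ 1.09 d, e ~ 0
print(omega_dot_GR(1.0, 0.023, 1.09, 0.0), "deg/century")   # ~16
\end{verbatim}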
\begin{figure}
\centering
\includegraphics[scale=0.5]{relativity_test.pdf}
\caption{In order to illustrate the detectability of
relativistic effects in currently known planetary systems, we
select exoplanets with stellar mass larger than 0.1\,$M_\odot$ and
orbital period less than 100 days from the known exoplanets
selected for Fig. \ref{fig:barycorr_bias}. All of the exoplanets to
the left of the solid, dashed, and dotted lines (for $e=0$, 0.5,
and 0.9) have a precession larger than 10$^\circ$ per century and are highlighted with open circles.
\label{fig:precession}
\end{figure}
Unlike star-planet systems, binaries have relatively stronger
gravitational fields and thus are more suitable for relativity
testing. Considering that timing data is limited to one-dimensional
information, we investigate the feasibility of using astrometry and
radial velocity data to test relativity. The former has been
studied previously (e.g., \citealt{kopeikin99b,klioner03,kopeikin07}). We will focus on the latter by
assessing the significance of the Doppler shift induced by special
($z_{\rm srT}$) and general ($z_{\rm grT}$) relativity. Because these
two Doppler shifts are proportional to $c^{-2}$, they dominate the
relativistic Doppler shifts in the target system, compared with the
lensing Doppler shifts, which are proportional to
$c^{-3}$. Because the constant relativistic Doppler shift is not
detectable in radial velocity data, we only estimate the variation of
relativistic Doppler shifts. According to Equation \ref{eqn:zsrT} the
amplitude of the variation of $z_{\rm srT}$ is
\begin{equation}
\delta z_{\rm srT}= \frac{\delta v_{\rm ST}^2}{2c^2}=\frac{v_{\rm ST, max}^2-v_{\rm ST, min}^2}{2c^2}~,
\end{equation}
where $v_{\rm ST,max}$ and $v_{\rm ST,min}$ are respectively the
maximum and minimum values of $|{\bm v}_{\rm ST}|$. The variation of $({\bm
v}_{\rm SB}+{\bm v}_{\rm BT})^2=v_{\rm SB}^2+v_{\rm BT}^2+2{\bm
v}_{\rm SB}\cdot{\bm v}_{\rm BT}$ depends on the angle between ${\bm v}_{\rm SB}$ and ${\bm v}_{\rm BT}$ and thus depends on the inclination and angular parameters of the binary
orbit. To simplify the problem, we explore the range of $\delta z_{\rm
srT}$ for a given eccentricity $e$ and barycentric velocity $v_{\rm
SB}$. The minimum $\delta z_{\rm srT}$ is
\begin{equation}
\delta z_{\rm srT,min}=\frac{\delta v^2_{\rm BT}}{2c^2}=\frac{v_{\rm BT, max}^2-v_{\rm BT,min}^2}{2c^2}=\frac{2e}{c^2(1-e^2)}\left[\frac{2\pi G}{P}\right]^{2/3}(m_{\rm T}+m_{\rm C})^{-4/3}m_{\rm C}^2~.
\end{equation}
Then the minimum amplitude of radial velocity variation induced by special relativity is
\begin{equation}
\delta v_{\rm srT,min} = c\,\delta z_{\rm
srT,min}=5.92\frac{e}{1-e^2}\left(\frac{P}{{\rm
year}}\right)^{-2/3}\left(\frac{m_{\rm T}+m_{\rm C}}{M_\odot}\right)^{-4/3}\left(\frac{m_{\rm C}}{M_\odot}\right)^2~{\rm m/s}~.
\label{eqn:dvsrTmin}
\end{equation}
The maximum $\delta z_{\rm srT}$ is
\begin{eqnarray}
\delta z_{\rm srT,max}&=&\frac{(v_{\rm SB}+v_{\rm BT,max})^2-(v_{\rm
SB}+v_{\rm BT,min})^2}{2c^2}=\frac{v_{\rm SB}(v_{\rm
BT,max}-v_{\rm BT,min})}{c^2}+\delta z_{\rm srT,min}\\
&=&v_{\rm SB}\frac{2e}{c^2\sqrt{1-e^2}}\left[\frac{2\pi
G}{P}\right]^{1/3}(m_{\rm T}+m_{\rm C})^{-2/3}m_{\rm C}+\delta z_{\rm srT,min}~.
\end{eqnarray}
The maximum amplitude of radial velocity variation induced by special relativity is
\begin{equation}
\delta v_{\rm srT,max} = c\delta z_{\rm
srT,max}=\delta v_{\rm srT,min}+\delta v_{\rm srT,couple}~,
\label{eqn:dvsrTmax}
\end{equation}
where
\begin{equation}
\delta v_{\rm srT,couple}=9.94\frac{e}{\sqrt{1-e^2}}\left(\frac{P}{{\rm
year}}\right)^{-1/3}\left(\frac{m_{\rm T}+m_{\rm C}}{M_\odot}\right)^{-2/3}\left(\frac{m_{\rm C}}{M_\odot}\right)\left(\frac{v_{\rm SB}}{50~{\rm km/s}}\right)~{\rm m/s}
\label{eqn:dvsrTcouple}
\end{equation}
is the relativistic radial velocity related to the coupling of the
heliocentric motion of the TSB and the binary motion. For $\alpha$
Centauri A, $\delta v_{\rm srT,min}\approx 0.08$\,m/s and $\delta
v_{\rm srT,max}\approx 0.61$\,m/s over half of the binary orbital period.
According to Equations \ref{eqn:zgrT} and \ref{eqn:orbital_plane}, the amplitude of the variation of
gravitational Doppler shift for a binary is
\begin{equation}
\delta z_{\rm grT}=\delta\Phi_{\rm T}/c^2=\frac{Gm_{\rm C}}{c^2}\left(\frac{1}{r_{\rm CT,min}}-\frac{1}{r_{\rm CT,max}}\right)=\frac{2m_{\rm C}}{c^2}\frac{e}{1-e^2}\left[\frac{4\pi^2G^2}{(m_{\rm C}+m_{\rm T})P^2}\right]^{1/3}~.
\end{equation}
Hence the corresponding amplitude of radial velocity variation is
\begin{equation}
\delta v_{\rm grT} = c\,\delta z_{\rm grT}=5.92\frac{e}{1-e^2}\left(\frac{P}{{\rm
year}}\right)^{-2/3}\left(\frac{m_{\rm T}+m_{\rm C}}{M_\odot}\right)^{-1/3}\left(\frac{m_{\rm C}}{M_\odot}\right)~{\rm m/s}.
\label{eqn:dvgrT}
\end{equation}
For $\alpha$ Centauri A, $\delta v_{\rm grT}\approx 0.16$\,m/s over
half of the binary orbital period.
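The amplitudes quoted above for $\alpha$ Centauri A follow directly from equations \ref{eqn:dvsrTmin}, \ref{eqn:dvsrTcouple}, and \ref{eqn:dvgrT}; the sketch below reproduces them with approximate, assumed orbital parameters for the AB pair.
\begin{verbatim}
def dv_srT_min(e, P_yr, m_T, m_C):
    """Equation dvsrTmin [m/s]; masses in solar masses."""
    return 5.92 * e / (1 - e**2) * P_yr**(-2/3) \
        * (m_T + m_C)**(-4/3) * m_C**2

def dv_srT_couple(e, P_yr, m_T, m_C, v_SB_kms):
    """Equation dvsrTcouple [m/s]."""
    return 9.94 * e / (1 - e**2)**0.5 * P_yr**(-1/3) \
        * (m_T + m_C)**(-2/3) * m_C * v_SB_kms / 50.0

def dv_grT(e, P_yr, m_T, m_C):
    """Equation dvgrT [m/s]."""
    return 5.92 * e / (1 - e**2) * P_yr**(-2/3) \
        * (m_T + m_C)**(-1/3) * m_C

# alpha Cen AB, approximate values (assumed): P ~ 79.9 yr, e ~ 0.52,
# m_A ~ 1.1, m_B ~ 0.9 M_sun, v_SB ~ 32 km/s
print(dv_srT_min(0.52, 79.9, 1.1, 0.9))           # ~0.07 m/s
print(dv_srT_couple(0.52, 79.9, 1.1, 0.9, 32.0))  # ~0.5 m/s
print(dv_grT(0.52, 79.9, 1.1, 0.9))               # ~0.16 m/s
\end{verbatim}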
To investigate the sensitivity of binary orbits to relativistic
effects, we show the relativistic radial velocity variation as a
function of binary mass and orbital period. We show a sample of 652
binaries with dynamical masses derived by \cite{malkov12} in
Fig. \ref{fig:binary}. According to equations
\ref{eqn:dvsrTmin}, \ref{eqn:dvsrTmax}, \ref{eqn:dvsrTcouple}, and
\ref{eqn:dvgrT}, the relativistic radial velocity variation is not
sensitive to eccentricity if the binary orbit is not circular
(e.g., $e>0.1$). Because only 8\% of the binaries in the sample have
$e<0.1$, we adopt $e=0.1$ to calculate the relativistic radial velocity variation. Fig. \ref{fig:binary}
illustrates that the relativistic radial velocity is relatively sensitive to the mass of the secondary $m_{\rm C}$ for a low-mass primary compared with a high-mass
primary. Thus the optimal targets for detecting relativistic effects
are the low-mass companions of massive primaries. While many binaries
show a relativistic radial velocity variation of a few cm/s over one
orbital period, the detection of such a variation is at most marginal
and thus is not suitable for relativity tests. To select the optimal targets
for relativity tests, we note that there are 52 binaries with $\delta v_{\rm grT}>1$\,m/s
and orbital period $P<10$\,yr. Based on a PPN formulation of the
gravitational redshift (e.g., \citealt{misner73,kopeikin99b,gravity18}), these
binaries can be used to constrain the strong equivalence principle
to a relative precision of 1\% if a few cm/s radial velocity
precision can be achieved by high-precision spectrographs. Because the
gravitational redshift caused by a binary companion has a period
that differs from the orbital periods of potential planets around the
target star, the gravitational redshift variation can be detected
without considering planetary perturbations, although a combined modeling may reduce the residual and improve the significance
of detection.
\begin{figure}
\centering
\includegraphics[scale=0.6]{binary_test.pdf}
\caption{We show the observed binary masses and orbital periods
from \cite{malkov12} along with period sensitivity for the
relativistic radial velocity variation. The three groups of lines
are for different values of the mass of target star $m_{\rm T}$
(0.1, 1, and 10\,$M_\odot$). The upper solid lines show the
minimum amplitude of the radial velocity variation induced by special
relativity (equation \ref{eqn:dvsrTmin}). The dashed lines are for
the relativistic radial velocity related to the coupling of the
heliocentric motion of the TSB and the binary motion (equation
\ref{eqn:dvsrTcouple}). The dotted lines are the amplitude of radial
velocity variation due to the gravitational Doppler shift of a
binary (equation \ref{eqn:dvgrT}). Assuming that primaries and secondaries have the same
mass, the 52 binaries with gravitational radial velocity $\delta
v_{\rm grT} > 1$\,m/s and orbital period $P<10$\,yr are denoted
by blue circles. We note that in all our predictions we assume GR to be true and use $e=0.1$ although a wide range of eccentricity values are observed, as denoted by the colored eccentricity legend. }
\label{fig:binary}
\end{figure}
In summary, the gravitational redshift variation in binary systems
can provide a new method to test GR. To demonstrate the
uniqueness of this method, we show the mass and dimensionless
gravitational potential for various relativity tests in
Fig. \ref{fig:relativity_test}. Although current efforts are
focused on strong-field tests of GR, few tests have
been done in the weak-field regime. It is in the extreme weak-field
regime where dark matter needs to be invoked to explain phenomena such
as galactic rotation and gravitational lensing. However, in the extremely
weak gravitational field, relativistic effects become weak as well,
and thus it is not clear whether the weak-field anomaly is due to the
breakdown of the classical or relativistic predictions of GR if the null detection of dark matter over the past two decades
\citep{cosine100} indicates alternative gravity theories. To this end,
the binary test of relativity provides a unique way to probe the weak-field and stellar-mass regime in order to test GR and
alternative theories such as the modified Newtonian dynamics (MOND; \citealt{milgrom83}).
\begin{figure}
\centering
\includegraphics[scale=0.7]{potential_mass.pdf}
\caption{Relativity tests as a function of logarithmic mass and dimensionless gravitational potential, inspired
by \cite{psaltis04,gravity18}. The blue region represents the gravitational lensing
effect at the Einstein radius for a reduced distance $d=\frac{d_{\rm L}d_{\rm S}}{d_{\rm LS}}$ from 1\,pc to 1\,Gpc, where
$d_{\rm L}$ and $d_{\rm S}$ are respectively the distances to the
lens and to the source, and $d_{\rm LS}$ is the distance from the
lens to the source. The red region represents the
binary test with mass from 0.1 to 150\,$M_\odot$ and gravitational
radial velocity variation from 1\,cm/s to 1\,km/s. The gray region
represents the galactic rotation where the width of the gray region is determined by an
acceleration ranging from $10^{-12}$ to $10^{-10}$\,m/s$^2$
\citep{lelli17}. The black dots show well-established tests (from top
to bottom): the imaging of the M87 black hole horizon by \cite{eht1}, the
relativistic broadening of Fe K$\alpha$ lines
\citep{tanaka95,fabian00}, the LIGO/Virgo detection of gravitational
waves \citep{gw150914,gw170817}, the S2 orbit around the Galactic-centre massive black hole \citep{gravity18}, the self-gravitational
redshift of Sirius B \citep{greenstein71,barstow05}, the Hulse-Taylor
pulsar \citep{taylor82}, the light deflection and Shapiro delay in the
solar system (e.g., \citealt{shapiro64}), the precession of Mercury
\citep{einstein16}, and the \cite{pound59} experiment. The red dot
denotes the gravitational redshift in the $\alpha$ Centauri AB binary system. The gray dots
represent tests of dark matter and MOND theories and thus also
provide tests of GR in a nonrelativistic
regime. They are galactic rotation curves represented by the Milky
Way and the wide binary acceleration represented by $\alpha$ and Proxima Centauri
\citep{banik18}.
}
\label{fig:relativity_test}
\end{figure}
\section{Comparison between PEXO and TEMPO2}\label{sec:comparison}
To estimate the precision of PEXO, we compare PEXO with TEMPO2, which
is able to model timing to a precision of $\sim$1\,ns. Because radial velocity is
simply the time derivative of various delay terms, we also use TEMPO2
to estimate the radial velocity model precision of PEXO. However, TEMPO2 does not
model astrometry precisely. Because our astrometry model is similar to the model used by GREM \citep{klioner03} which is able to achieve
$\mu$as precision, we expect a similar precision for
PEXO. Considering that our radial velocity and astrometry models are consistent with each
other, we also expect 1\,$\mu$as precision for the astrometry model if
the radial velocity modeling precision is 1\,cm/s for most stars over decades\footnote{The velocity precision is $\delta v=1$\,cm/s\,$=2.109\times
10^{-6}$\,au/yr. The corresponding astrometry precision $\delta u=v\delta
t/d=2.109\times 10^{-6}$\,au/yr\,$ \times 10$\,yr$/10$\,pc\,$=$\,2.109\,$\mu$as, where $d=10$\,pc is the distance of the target star and $\delta t=10$\,yr is the time span.}.
\subsection{Timing}{\label{sec:compare_timing}}
We use $\tau$ Ceti as an example to compare the timing model of PEXO with the one in TEMPO2 and the one
introduced by \cite{eastman10}. The position of $\tau$ Ceti is
characterized by $\alpha=26^\circ.02136459$
(ICRF) and $\delta=-15^\circ.93955572$ (ICRF),
$\widetilde\omega=273.96$\,mas, $\mu_\alpha=-1721.05$\,mas/yr,
$\mu_\delta=854.16$\,mas/yr, and radial velocity $v_r=-16.68$\,km/s
\citep{brown18}. We use the online applet developed by
E10 to calculate the ${\rm BJD_{TDB}}$ from JD$_{\rm UTC}$. Because
this online applet does not propagate the orbit of the target star, we
set $\mu_\alpha$, $\mu_\delta$, and $v_r$ to be zero in order to
compare with PEXO as well as TEMPO2. We use the GPS position of CTIO
determined by \cite{mamajek12} as an example observatory geocentric
coordinate. We calculate $\rm BJD_{TDB}$ for $\rm JD_{UTC}$ over
a 10,000\,day time span in steps of 10 days and use the ephemeris
of JPL DE405 \citep{standish98} to determine the motions of the Earth and
observatory\footnote{Because E10 uses DE405 by default, we use DE405
in PEXO for comparison, though DE430 is used in other cases.}. We use the 2001 version (hereafter FB01) of the analytical method developed by
\cite{fairhead90} and recommended by \cite{mccarthy04} to calculate TDB-TT for PEXO. The 1990 version (hereafter FB90) is used for TEMPO2 because it is the only available version in TEMPO2. We also use the 2000B model of the Earth rotation \citep{capitaine03,mccarthy03} for TEMPO2 and PEXO.
We show the difference in $\rm BJD_{TDB}$ between the online applet
and IDL versions of E10 and TEMPO2 in the left panel of
Fig. \ref{fig:bjd}. The IDL version gives a few $\mu$s timing
precision, while the applet gives sub-ms precision due to
its use of double precision to store the unreduced JD. However, the original IDL version
of E10 has an error in the calculation of ``parallax delay''
(equation \ref{eqn:dpS}). In the E10 paper, they correctly add a positive
sign in the parallax delay shown in equation 8 by using ${\bm u}_{\rm
OT}$ as the reference direction. In the E10 IDL code {\small utc2bjd.pro}, the input R.A. and
decl. are barycentric, and thus the reference unit vector is ${\bm
u}_{\rm SB}$. However, E10 calculates the total Roemer delay by
adding the parallax delay onto rather than subtracting it from the
first-order Roemer delay. The latter is used to calculate the correct
Roemer delay shown in the left and middle panels of Fig. \ref{fig:bjd}.
To compare PEXO, E10, and TEMPO2 on the same footing, we include all
astrometric parameters for $\tau$ Ceti and use $u_{\rm OT}$ calculated
by PEXO in the IDL version of E10. We compare the three packages in
the middle and right panels of Figure \ref{fig:bjd}. We see that the E10 timing
precision is about 4\,$\mu$s, while the PEXO difference from TEMPO2 is less
than 50\,ns. In the right panel, we compare PEXO with its degraded
version (``PEXOt''), which does not include Roemer delay terms higher
than second order (see section \ref{sec:astrometry}). The degraded
version differs from TEMPO2 by less than 8\,ns. There is an offset
of $\sim$4\,ns due to the different computation methods of
TDB-TT. Instead of using the FB90 method to derive TDB-TT like TEMPO2,
we use the FB01 method, which is updated and more accurate \citep{petit10}. We also
use DE430t to derive TDB-TT and find a similar offset, suggesting a bias
of a few nanoseconds in the FB90 method. The annual variation in $\Delta {\rm
BJD}_{\rm TDB}$ suggests that the bias caused by FB90 depends on the
barycentric distance of the geocenter. The minimum difference between PEXO
and TEMPO2 occurs at the reference {\it Hipparcos} epoch. Therefore, third-order geometric terms shown in section \ref{sec:astrometry} are
necessary for a timing model with a precision of $\sim$1\,ns. The failure
to consider this in TEMPO2 might bias its modeling of decade-long pulsar timing data.
\begin{figure}
\centering
\includegraphics[scale=0.38]{timing_TC_tempoFB90_DE405_ttt2tdbFB01_e10_par2.pdf}
\includegraphics[scale=0.38]{timing_TC_tempoFB90_DE405_ttt2tdbFB01_e10_par4.pdf}
\includegraphics[scale=0.38]{pexot_TC_tempoFB90_DE405_ttt2tdbFB01_e10_par4.pdf}
\caption{Comparison of $\rm BJD_{TDB}$ calculated by E10, PEXO,
and TEMPO2. Left panel: $\rm BJD_{TDB}$ modeled by the online
applet (black) and IDL version (blue) of E10 after subtraction by
the TEMPO2 values. Considering that the applet does not propagate
the coordinates of targets, zero proper motion of $\tau$ Ceti is
assumed. Middle panel: $\rm BJD_{TDB}$ modeled by E10 and PEXO
with respect to the TEMPO2 values. In this comparison, proper
motion effects are considered. Right panel: $\rm BJD_{TDB}$
modeled by PEXO with and without third-and-higher-order Roemer
delay terms. Because the latter is the approach adopted by TEMPO2,
we call it ``PEXOt''. Proper motion effects are considered in
this comparison.}
\label{fig:bjd}
\end{figure}
In the Fig. \ref{fig:bjd} comparison of the degraded PEXO (i.e. PEXOt) and TEMPO2, we only account for the Shapiro delay due to the Sun because we find a related bug in TEMPO2. The term $1-\cos(\psi)$ in equation \ref{eqn:DSO} is implemented as
$1+\cos(\psi)$ in the TEMPO2 routine {\small shapiro\_delay.C}. This
will lead to considerable bias in Shapiro delay caused by the solar
system planets. We show the Shapiro delays induced by the Sun, Jupiter,
Saturn, and Uranus in Fig. \ref{fig:shapiro}. The Sun is the dominant
source of Shapiro delay. Jupiter contributes about 30\,ns to the
total Shapiro delay and thus is the second important source. Saturn
and Uranus contribute about 10 and 1.5\,ns, respectively. The other
solar system planets only induce less than 1\,ns Shapiro
delay. Therefore, the Shapiro delays due to the Sun, Jupiter, Saturn,
and Uranus are essential components in the model for $\sim$1\,ns
timing. The Shapiro delay and lensing effects due to Jupiter and
Saturn have been detected using very-long-baseline interferometry
\citep{fomalont03,fomalont09}. In the timing, astrometry, and
radial velocity models of PEXO, Shapiro or lensing effects of the
Sun, Mercury, Venus, Earth, Moon, Mars, Jupiter, Saturn, Uranus,
and Neptune are considered as standard.
\begin{figure}
\centering
\includegraphics[scale=0.5]{Sun_TC_tempoFB90_DE405_ttt2tdbFB01_par4}
\includegraphics[scale=0.5]{Jupiter_TC_tempoFB90_DE405_ttt2tdbFB01_par4}
\includegraphics[scale=0.5]{Saturn_TC_tempoFB90_DE405_ttt2tdbFB01_par4}
\includegraphics[scale=0.5]{Uranus_TC_tempoFB90_DE405_ttt2tdbFB01_par4}
\caption{Shapiro delay induced by the Sun, Jupiter, Saturn, and
Uranus.}
\label{fig:shapiro}
\end{figure}
In the comparison shown in Fig. \ref{fig:bjd}, we modify PEXO to use a
relatively outdated ephemeris, DE405. To assess the significance of
ephemeris difference, in Fig. \ref{fig:ephemeris} we compare the JPL
ephemerides DE405 \citep{standish98}, DE414 \citep{standish06}, DE421
\citep{folkner08}, DE435, DE436 \citep{folkner16}, and DE438 with
DE430 \citep{folker14}. From left to right, the plots of
Fig. \ref{fig:ephemeris} indicate that DE405 and DE414 typically
differ from DE430 and other recent ephemerides by more than 1000\,ns
in BJD$_{\rm TDB}$ ($\Delta {\rm BJD}_{\rm TDB}$), more than 1\,km in
barycentric position of the geocenter ($r_{\rm SG}$), and more than
0.05\,mm/s in barycentric velocity of the geocenter ($v_{\rm
SG}$). DE436 and DE438 differ from DE430 by $\Delta {\rm BJD}_{\rm
TDB}\sim 400$\,ns, $r_{\rm SG}\sim 200$\,m, and $v_{\rm SG}\sim
0.02$\,mm/s. In contrast, DE435 and DE436 differ from each other by $\Delta {\rm BJD}_{\rm TDB}\sim 35$\,ns,
$r_{\rm SG}\sim 20$\,m, and $v_{\rm SG}\sim 0.0005$\,mm/s. These stated
differences are only a guideline as they are based on average differences and constitute an annual variation superposed on the trend due to perspective change.
The significant difference between DE405 and other ephemerides has been studied
frequently (e.g., \citealt{viswanathan17,wang17}). The precision of
an ephemeris is determined by the quality of the solar system model,
as well as the amount of data available when the ephemeris was computed and fit. Thus we
encourage the use of the most recent ephemeris if high-precision
timing data are analyzed. For the ephemeris of DE430 and more recent ones,
we expect a timing precision of about 100\,ns, a positional
precision of about 100\,m for the geocenter, and a velocity precision of about
0.01\,mm/s. Thus the timing precision of both PEXO and TEMPO2 is mainly limited by
the solar system ephemeris. Potential signals in precise timing data
should be analyzed with various ephemerides for confirmation, as done by
the North
American Nanohertz Observatory for Gravitational Waves team (NANOGrav; \citealt{arzoumanian18}) to constrain the gravitational-wave background.
\begin{figure}
\centering
\includegraphics[scale=1]{legend.pdf}
\includegraphics[scale=0.38]{ephemeris_comparison_BJDtdb_tttdbJPL.pdf}
\includegraphics[scale=0.38]{ephemeris_comparison_pos_tttdbJPL.pdf}
\includegraphics[scale=0.38]{ephemeris_comparison_vel_tttdbJPL.pdf}
\caption{Difference in BJD$_{\rm TDB}$ (left), barycentric position
(middle), and velocity (right) of the geocenter. }
\label{fig:ephemeris}
\end{figure}
To explore precision limits of E10 and TEMPO2 relative to PEXO, we
apply them to the nearby star $\tau$ Ceti in
Fig. \ref{fig:pexo}. Considering the lack of third-order Roemer delays
and coding errors in TEMPO2, we consider PEXO as the package with the
highest precision and compare E10 and TEMPO2 to it in order to explore
the precision limit of these well-known packages. We use the Earth
rotation model recommended by IAU2006 resolutions \citep{capitaine06,wallace06} and the DE430 ephemeris of JPL as well
as DE430t to derive TT-TDB. First, we compare the original {\small
utc2bjd.pro} routine without correcting the error of ``parallax delay'' with PEXO. We show the results in the left panel of
Fig. \ref{fig:pexo}. As seen from the left panel, the coding error in
E10 leads to about a 0.1\,ms bias. Ignoring proper motion leads
to about a 0.06\,s timing bias for $\tau$ Ceti over the 30\,yr time
span (see middle panel of Fig. \ref{fig:pexo}). This timing bias will
lead to about 2\,mm/s radial velocity bias. This bias could be
significant for the analysis of data with high timing resolution, such
as fast radio bursts with millisecond resolution (e.g.,
\citealt{chime19}). Moreover, proper-motion-induced timing bias is comparable with relativistic precession for
some systems and thus needs to be modeled in order to detect
relativistic effects in timing data. As seen in the right panel of
Fig. \ref{fig:pexo}, TEMPO2 and PEXO with the same Earth rotation
model and with the DE430 ephemeris are similar at the level
of tens of nanoseconds. The original TEMPO2 (with the coding error for planetary
Shapiro delays) deviates from PEXO due to the combined effects of third-order Roemer delays (see the right panel of Fig. \ref{fig:bjd}) and
planetary Shapiro delays (see Fig. \ref{fig:shapiro}). For distant
pulsars, the third-order Roemer delays are not significant, although the
coding error in the calculation of planet Shapiro delays still biases
TEMPO2 timing by tens of nanoseconds. Therefore the original TEMPO2 has a
timing precision of a few tens of nanoseconds for decade-long
observations, and the original E10 has sub-second timing precision.
\begin{figure}
\centering
\includegraphics[scale=0.38]{timing_E10original_TC_tempoFB90_DE430_ttt2tdbJPL_tempo_par4}
\includegraphics[scale=0.38]{timing_TC_tempoFB90_DE430_ttt2tdbJPL_e10_par4_originalTRUE}
\includegraphics[scale=0.38]{pexot_TC_tempoFB90_DE430_ttt2tdbJPL_e10_par4_originalTRUE.pdf}
\caption{Comparison of the timing precision of E10 and TEMPO2 with
PEXO for $\tau$ Ceti. }
\label{fig:pexo}
\end{figure}
\subsection{Radial Velocity}{\label{sec:rv}}
We compare the precision of radial velocity modeling by PEXO and
TEMPO2 by calculating the so-called barycentric correction term
$z_{\rm S}$ (see equation \ref{eqn:zb}). Unlike \cite{wright14}, we do
not compare the Doppler shift of pulse frequency with $z_{\rm T}$
because pulse frequency is influenced by aberration delay (E06). We
instead calculate Doppler shift numerically using
\begin{equation}
1+z_{\rm S}=\frac{\delta \tau_a^{\rm SSB}}{\delta\tau_o}~,
\end{equation}
where $\tau_a^{\rm SSB}$ is $\rm BJD_{TDB}$ and $\tau_o$ is $\rm
JD_{\rm UTC}$. Because the analytical value of the local Doppler shift
$z_{\rm S}$ is not given in TEMPO2, we calculate it numerically by using
${\rm z_{bary}=(BJD_{\rm TDB2}-BJD_{\rm TDB1})/(JD_{\rm UTC2}-JD_{\rm UTC1})-1}$, where UTC2,
UTC1 are separated by 0.02\,day, and TDB2 and TDB1 are the corresponding TDB times. This UTC time step is chosen such that the rounding error
for both PEXO and TEMPO2 is as small as possible. However, such
a numerical treatment is only used for comparison. We use the analytical radial velocity model in equation \ref{eqn:vr_obs} for the application of PEXO.
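To make this finite-difference estimate concrete, the following minimal sketch (in Python, for illustration only; PEXO itself is implemented in R, and the timing values below are hypothetical placeholders rather than real PEXO or TEMPO2 output) evaluates ${\rm z_{bary}}$ from two epochs separated by 0.02\,day:
\begin{verbatim}
# Minimal sketch of the finite-difference Doppler shift
# z_bary = (BJD_TDB2 - BJD_TDB1)/(JD_UTC2 - JD_UTC1) - 1.
# All JD values are illustrative placeholders.
C = 299792458.0                   # speed of light, m/s

jd_utc1 = 2458850.00
jd_utc2 = jd_utc1 + 0.02          # 0.02 day step, as adopted in the text
bjd_tdb1 = 2458850.0057123456     # hypothetical BJD_TDB at jd_utc1
bjd_tdb2 = 2458850.0257123482     # hypothetical BJD_TDB at jd_utc2

z_bary = (bjd_tdb2 - bjd_tdb1) / (jd_utc2 - jd_utc1) - 1.0
print(z_bary, z_bary * C)         # Doppler shift and equivalent m/s

# Note: at double precision, differencing JDs near 2.45e6 limits z_bary
# to a few times 1e-8 (several m/s); high-precision arithmetic or a
# two-part JD representation is needed for micro-m/s comparisons.
\end{verbatim}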
We take $\tau$ Ceti as a test case and calculate the local Doppler
shift $z_{\rm S}$ numerically for PEXO and TEMPO2. We show the
difference in the corresponding radial velocities over 5\,yr in
Fig. \ref{fig:rv}.
\begin{figure}
\centering
\includegraphics[scale=0.4]{paper_t2RV_TC_tempoFB90_DE430_ttt2tdbFBgeo_dTstep001d.pdf}
\includegraphics[scale=0.4]{paper_RV_TC_tempoFB90_DE430_ttt2tdbFBgeo.pdf}
\caption{Difference of barycentric correction radial velocity term
calculated by PEXO and TEMPO2. The left panel shows the
numerical comparison while the right one shows the analytical
comparison. }
\label{fig:rv}
\end{figure}
We see that the PEXO radial velocities deviate from the TEMPO2 values with a
peak-to-peak difference of 2\,$\mu$m/s, indicating a radial velocity precision comparable to TEMPO2.
We also compare the analytical PEXO and TEMPO2 models of barycentric
radial velocity defined in equation 28 of \cite{wright14} and show the
results in the right panel of Fig. \ref{fig:rv}. The main error seems to arise from
the annual motion of the Earth, which is projected onto the source
direction to derive kinematic Doppler shift. Because we use DE430
both for TEMPO2 and for PEXO, the annual variation might be caused by
uncertainty in the Earth's rotation model and in the numerical calculation of
TDB-TT. Readers are referred to \cite{kopeikin99b} for a rigorous
treatment of higher-order relativistic Doppler effects.
Therefore, PEXO's radial velocity precision comfortably exceeds the specification of the
current best radial velocity instruments such as ESPRESSO (about a few
cm/s). However, a precision at the mm/s level is only achievable with an
ideal treatment of the atmospheric chromatic aberration
\citep{wright14}. The ESPRESSO instrument's exposure meter provides
three spectral channels, so the identification of possible
chromatic effects on exposure midpoints for radial velocity
measurements may become feasible over time. A precision of 1\,mm/s is also
challenged by the acceleration of the barycenters of the target system
and the solar system. The acceleration of the SSB is about 1\,mm/s/yr,
leading to 1\,cm/s bias in the radial velocity model prediction over
one decade. This could provide an opportunity to use radial velocity
data to estimate the acceleration of stars and thus provide
measurements relevant to the Galactic potential analogous to ongoing
efforts to quantify cosmological variations in the fine-structure constant (e.g., \citealt{whitmore14}).
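As a back-of-the-envelope check of this figure, a constant barycentric acceleration accumulates a radial velocity bias of
\[
\Delta v_r \simeq a_{\rm SSB}\,\Delta t \simeq 1\,{\rm mm\,s^{-1}\,yr^{-1}} \times 10\,{\rm yr} = 1\,{\rm cm\,s^{-1}},
\]
which is at the level of the precision goal of next-generation spectrographs.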
\section{PEXO Simulation of Relativistic Effects in Extrasolar Systems}\label{sec:effects}
In this section, we assess various relativistic effects in transit timing,
astrometry, and radial velocity models through comparison of PEXO simulations and real data for example systems. These tests are aimed at roughly assessing the precision of PEXO more than detecting relativistic effects in real data.
\subsection{Transit Timing}\label{sec:ttv}
TTV \citep{miralda02,holman05} is an
efficient method of constraining the mass and orbital parameters of
transiting planets such as the TRAPPIST-1 system
\citep{gillon16,grimm18}. However, relativistic effects are typically
ignored to simplify the TTV modeling because of their small effects,
such as in {\small EXOFAST} \citep{eastman13}, although these effects
could be detectable with decade-long observations of some systems
\citep{miralda02,jordan08}. To assess the importance of relativistic
effects on transit timing, we use XO-3 b
\citep{johns-krull08}, a transiting hot Jupiter, as an example
because it is on an eccentric orbit and is also
recommended by \cite{jordan08} for searches for relativistic precession.
XO-3 b has a mass of 11.79\,$M_{\rm Jup}$ and an
orbital period of 3.1915239$\pm$0.00023 days
\citep{johns-krull08,winn08}. We simulate the system over 100
orbital periods using PEXO, calculate the transit epoch for each orbit, and compare the simulated relativistic TTV with the observed transit timing data
\citep{winn08} in Fig. \ref{fig:xo3}. The period of primary transits is changed by about
0.4\,s over 100 orbits (or about 319 days) due to relativistic
precession. This corresponds to a time derivative of transit period
of $7.98\times 10^{-5}$, consistent with the prediction using
equation 21 of \cite{miralda02}. Such a signal might be
detectable in transit timing measurements with a precision of about
one minute for each over one decade. Although such a detection of
relativistic TTV is not impossible with
current instruments, it is not as efficient as other methods such as
Transit Duration Variation (TDV) and the variation of time between primary and
secondary transit (PSV). This is because TTV is proportional to
$\dot{\omega}_{\rm GR}^2$ while TDV and PSV are proportional to
$\dot{\omega}_{\rm GR}$ \citep{miralda02}.
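To make the precession estimate above concrete, a short sketch (in Python; the stellar mass and eccentricity are assumed round-number values near the published parameters and should be treated as illustrative) evaluates the standard general relativistic apsidal advance per orbit, $\Delta\omega = 6\pi G M_\star/[a c^2(1-e^2)]$:
\begin{verbatim}
import math

G, c = 6.674e-11, 2.998e8        # SI units
M_sun = 1.989e30
M_star = 1.2 * M_sun             # assumed stellar mass of XO-3
P = 3.1915239 * 86400.0          # orbital period, seconds
e = 0.26                         # assumed orbital eccentricity

# semi-major axis from Kepler's third law (planet mass neglected)
a = (G * M_star * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

dw = 6.0 * math.pi * G * M_star / (a * c**2 * (1.0 - e**2))  # rad/orbit
orbits_per_century = 36525.0 * 86400.0 / P
print(f"{math.degrees(dw * orbits_per_century):.1f} deg per century")
\end{verbatim}
With these assumed values the advance is a few degrees per century, consistent with the $>1^\circ$/century regime discussed below for XO-3.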
\begin{figure}
\centering
\includegraphics[scale=0.8]{XO3.pdf}
\caption{Comparison of the relativistic TTV ($\Delta T_c^{\rm
GR}$; black solid line) and observed TTV (red circles with
  error bars). To visualize the small relativistic TTV signal, we
  also show the relativistic TTV prediction amplified by a factor of 100 ($100\,\Delta T_c^{\rm
    GR}$). }
\label{fig:xo3}
\end{figure}
Because the duration for each transit of XO-3 b is not provided in
previous studies, we select Kepler-210 c \citep{ioannidis14} to assess the
TDV effect. This planet has an orbital period of 7.9725 days and an
eccentricity of 0.5 and thus has significant relativistic
precession. We use the transit duration epoch data provided by
\cite{holczer16} to compare to PEXO predictions. Treating
the planet as the target and the star as the companion, we simulate the
system over 200 orbits and calculate
the velocity $v_{\rm TC}$. We derive the transit duration ${\rm
TD}=2\sqrt{(R_{\rm star}+R_{\rm pl})^2-(bR_{\rm star})^2}/v_{\rm
TC}$, where $b$ is the impact parameter, and $R_{\rm star}$ and $R_{\rm pl}$ are
respectively the radii of Kepler-210 and Kepler-210 c. Because
\cite{holczer16} only provide the epoch data for Transit Duration Fraction (${\rm TDF}=({\rm TD}-\overline{\rm TD})/\overline{\rm TD}$), we derive TDF from
$v_{\rm TC}$ using ${\rm TDF}=(\overline{v}_{\rm TC}-v_{\rm
TC})/v_{\rm TC}$ assuming the impact parameter does not change over
time. We show the TDF data and the PEXO prediction in
Fig. \ref{fig:tdf}. To visualize the relativistic TDF properly, we
also show an amplified TDF in the figure. The relativistic TDF
changes by $1.8\times 10^{-4}$ over 1500 days, equivalent to 1.93\,s
variation in transit duration. This is consistent with the prediction using
equation 23 of \cite{miralda02} or equation 15 of
\cite{jordan08}. Considering that the mean uncertainty of transit
durations is about 4\,min and the relativistic precession is about
$\dot{\omega}_{\rm GR}=0^\circ.61$/century, the relativistic TDF is not
detectable with the current {\it Kepler} data for this system. However, such
an effect becomes detectable if high-precession systems are observed
with high cadence ($<10$\,s) by space- or ground-based telescopes (e.g., \citealt{ivanov11}).
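A minimal sketch of the transit duration and TDF calculation used here (in Python; the radii, impact parameter, and transit-center velocities are placeholder values, not the fitted Kepler-210 c solution, and the TDF expression mirrors the constant-$b$ approximation adopted above):
\begin{verbatim}
import math

R_star = 0.7 * 6.957e8   # stellar radius, m (assumed)
R_pl = 3.0 * 6.371e6     # planetary radius, m (assumed)
b = 0.3                  # impact parameter, assumed constant in time

def transit_duration(v_tc):
    # TD = 2*sqrt((R_star + R_pl)^2 - (b*R_star)^2) / v_TC
    chord = 2.0 * math.sqrt((R_star + R_pl)**2 - (b * R_star)**2)
    return chord / v_tc

# hypothetical transit-center velocities over successive orbits, m/s
v_tc = [1.20e5, 1.21e5, 1.19e5, 1.22e5]
v_mean = sum(v_tc) / len(v_tc)

# TDF = (TD - mean(TD))/mean(TD) = (v_mean - v)/v for fixed b
tdf = [(v_mean - v) / v for v in v_tc]
print([f"{x:.3e}" for x in tdf])
\end{verbatim}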
\begin{figure}
\centering
\includegraphics[scale=0.8]{kepler210_TDV.pdf}
\caption{TDF for Kepler-210 over 200 orbits or
1500 days. The blue line denotes the relativistic TDF, while the red
line represents the relativistic TDF multiplied by 100. The gray error
bars show the raw TDF data, while the black ones are the binned data
with a bin width of 100 days.}
\label{fig:tdf}
\end{figure}
In summary, the relativistic precession is unlikely to be detectable
in current transit timing data due to the large timing uncertainty caused
by low-cadence observations. For transit systems such as XO-3, the
relativistic precession is larger than 1 degree per century and is thus
detectable with high-cadence observations with exposure times as short
as a few seconds.
\subsection{Astrometry}\label{sec:astrometry_binary}
PEXO is similar to the GREM package \citep{klioner03,lindegren18} used
by {\it Gaia} to model single stars. For binaries or stars hosting massive
companions, PEXO also accounts for the gravitational lensing caused by
companions in the target system. Additionally, PEXO models the
atmospheric refraction in order to account for the differential
refraction effects in ground-based direct imaging. For decade-long astrometry
data, the parameter uncertainty in an astrometry catalog would lead to
significant deviation of model prediction from the real position. For
example, a proper motion error of 1\,mas/yr would result in a 100\,mas position
error over one century. Hence, the astrometry parameters should be
determined {\it a posteriori} in combination with the orbital
parameters of companions through Markov chain Monte Carlo (MCMC) posterior sampling. This is also
an approach adopted by pulsar timing and is more suitable for
precision exoplanet research than the traditional approach, which separates
the barycentric correction from the motions in the target system.
To compare the various effects on the measured astrometry, we consider the
nearest binary system $\alpha$ Centauri as an example. We use
the CTIO observatory as an example observatory site. The
orbital and astrometry parameters of $\alpha$ Centauri
A and B are determined by \cite{kervella16} through a combined radial
velocity and astrometry analysis. Based on a simulation of the
position of $\alpha$ Centauri A over one orbital period with a
time step of ten days, we compare
various astrometric effects in Fig. \ref{fig:ac}. One of the main position changes is caused by the heliocentric motion of the barycenter of $\alpha$
Centauri. Because it is linear and easy to comprehend, we do not show
this effect in the figure. For ground-based observations, the
atmospheric refraction (P1 in Fig. \ref{fig:ac}) induces a position offset of a few arcminutes. Because this effect can only be modeled to a precision of about 1\,arcsec
\citep{mangum15}, ground-based astrometry cannot achieve
absolute precision better than about 1\,arcsec. The segments of curves in P1 are
due to the annual variation of the elevation angle. We use the {\it
slaRefro} routine to calculate the refraction with an effective
wavelength of about 500\,nm, temperature of 278\,K, and relative humidity of 0.1.
The stellar aberration due to the Earth's motion (P2) is
another main factor altering the observed direction of $\alpha$
Centauri A. This effect is linearly proportional to the barycentric
velocity of the observatory for ground-based astrometry. Hence,
precise knowledge of the Earth's ephemeris and rotation is required
to properly model this effect. The third most significant effect is
caused by the binary motion (P3). Instead of showing the
barycentric motion of A in P3 of Fig. \ref{fig:ac}, we show the orbit of
$\alpha$ Centauri B with respect to A and scale the axes such that
the binary orbit is comparable with the one shown in figure 1 of
\cite{pourbaix99}. The good match between P3 and the one in
\cite{pourbaix99} demonstrates the consistency of our convention
(see Appendix \ref{sec:conv3} for details) with the ones used
in previous studies of visual binaries. Although the binary
motion of visual binaries such as $\alpha$ Centauri A and B is significant, it was typically ignored in previous radial velocity modeling due to a decoupling of the solar and target systems. A more rigorous treatment of the stellar motion around the Galactic center is also needed to account for the secular aberration \citep{kopeikin06}.
The other less significant effects are the gravitational lensing in the solar and target systems. The gravitational lensing effects
caused by the Sun (P4) and Earth (P5) are detectable in
astrometric data with mas and sub-mas precision. As seen in the bottom
left panel of Fig. \ref{fig:ac}, the annual motion of the Earth is
superposed on the binary motion of $\alpha$ Centauri in the solar
lensing effect. On the other hand, the lensing effect in the target system only changes the apparent position of the target star by less
than 1\,$\mu$as. According to \cite{kopeikin99}, the lensing effect
due to a companion in the target system is only significant for nearly
edge-on systems that host massive companions.
In panels P7 to P9, we show the position of $\alpha$ Centauri A
with various combinations of effects. In the geometric position of
$\alpha$ Centauri A (P7), we only combine the proper motion, parallax, and
binary motion. We see that the orbit is dominated by a linear trend
caused by proper motion and a periodic component due to the binary
motion. The annual parallax is superposed on this long-term
trend. If we add stellar aberration and lensing effects (P8), we
find that the aberration adds another dimension to the trend
and forms a ``tube.'' The diameter of the tube is determined by the
magnitude of stellar aberration. If we combine all effects (P9), the
position offset is dominated by the refraction effect. If the elevation angle is large enough, the refraction can be smaller than 1\,arcmin, which is still much more significant than other effects.
\begin{figure}
\centering
\includegraphics[scale=0.6]{absolute_alphaCenA_astrometry_DDGR_dt10day_Ntime2923_refro.pdf}
\caption{Various effects on the observed position of $\alpha$
Centauri A over an orbital period. The linear proper motion
effect is easy to comprehend and thus is not shown here. The
panels are denoted by ``Pn'' where n is the panel number. The
names of effects are denoted by the panel titles. For P1 to P6,
    the panels are ordered by decreasing
    significance. The plots P7 to P9 in the bottom row show the absolute
    position of $\alpha$ Centauri A without relativistic and
    atmospheric refraction effects, without the refraction effect, and
with all effects, respectively. The binary orbit in P3
shows the motion of $\alpha$ Centauri B with respect to A. It is scaled to be comparable with figure 1 in \cite{pourbaix99}. The north (N) and east directions are shown by arrows, and epochs are denoted by red crosses. The black cross represents $\alpha$ Centauri A.}
\label{fig:ac}
\end{figure}
To see the influence of various effects on relative astrometry, we
show the position of B with respect to A in Fig. \ref{fig:acrel}. The
offset coordinates are defined as
\begin{eqnarray}
\Delta\alpha^*&=&(\alpha_2-\alpha_1)\cos{\frac{\delta_1+\delta_2}{2}}~,\\
\Delta\delta&=&\delta_2-\delta_1~,
\end{eqnarray}
where $(\alpha_1,\delta_1)$ and
$(\alpha_2,\delta_2)$ are the equatorial coordinates of two points on
the celestial sphere which are close to each other.
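These offset coordinates translate directly into code; a minimal sketch follows (in Python; the coordinates in the example call are arbitrary illustrative values near $\alpha$ Centauri, not catalog positions):
\begin{verbatim}
import math

def offsets(ra1, dec1, ra2, dec2):
    # Delta_alpha* = (alpha2 - alpha1) * cos((dec1 + dec2)/2)
    # Delta_delta  = dec2 - dec1
    # inputs in degrees, outputs in arcsec; valid for nearby directions
    dra = (ra2 - ra1) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    ddec = dec2 - dec1
    return dra * 3600.0, ddec * 3600.0

# illustrative call with two nearby positions
print(offsets(219.90206, -60.83399, 219.90500, -60.83350))
\end{verbatim}
We explain the various effects shown in Fig. \ref{fig:acrel} as follows.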
\begin{itemize}
\item {\bf P1: Differential refraction. }Because we only consider a
single wavelength in the calculation of refraction, this
refraction effect is achromatic. Chromatic refraction can be
calculated simply by applying the {\it slaRefro} routine to different
wavelengths. Despite much scatter and complexity in the pattern
observed in the offsets, the differential refraction is less than
0.05\,arcsec for most time steps when the elevation angle is
higher than 30$^\circ$. Without properly modeling this
differential effect to a sub-mas precision, relative astrometry based on direct imaging would be significantly biased in characterizing exoplanets.
\item {\bf P2: Differential refraction. }Because we adopt a uniform
time step of 10 days, the modulation of elevation and refraction
over time is due to the Earth's motion with respect to the SSB.
\item {\bf P3: Differential refraction. }This panel shows the
differential refraction as a function of elevation angle. The
upper limit of the differential refraction is determined by
the elevation angle while the binary motion modulates the
differential refraction at a given elevation angle.
\item {\bf P4: Atmospheric refraction. }This panel shows the
refraction as a function of elevation. For the atmospheric
  parameters adopted in this simulation, the absolute refraction
  is less than 1\,arcminute, and the relative refraction is less
  than 0.05\,arcsec if the elevation angle is larger than
  30$^\circ$ (see section \ref{sec:special_shift}); a simple elevation-scaling sketch is given after this list. In principle,
  the current atmospheric model allows a differential refraction
  model precision of 10\,$\mu$as \citep{gubler98}. Parameters such as the temperature of the star, the local pressure and temperature, and the relative humidity may not be well known. As already routinely practiced by some ground-based astrometric programs, it is necessary to observe target systems close to the zenith if sub-mas relative astrometry is required.
\item {\bf P5 and P6: Differential aberration. }The aberration
is determined by the component of $\bm{r}_{\rm SO}$ which is
perpendicular to the target direction. Hence, the aberration is
modulated by the Earth's rotation and barycentric
motion. Although this effect contributes a few mas positional
offset (comparable with the astrometric signal induced by a Jupiter analog) and shows strong variation in time, it is rarely considered in analyses of relative astrometry data.
\item {\bf P7: Differential solar lensing. }This effect is
at most a few $\mu$as and thus is only important for future
space-based astrometry missions such as the SIM PlanetQuest
\citep{catanzarite06,unwin08}.
\item {\bf P8: Geometric orbit.} This is the binary motion
projected onto the plane of the sky and is frequently used by the community to model relative astrometry.
\item {\bf P9: Observed orbit.} This is a combination of
geometric and other effects. The atmospheric refraction
biases the binary orbit by at most 2\,arcsec,
producing about 10\% of the total offsets. If the system
is observed with an elevation angle larger than 30$^\circ$,
we expect $<$0.05\,arcsec refraction bias,
equivalent to a 0.2\% uncertainty in the binary orbital
solution. Without properly removing this bias through
      modeling, astrometric signals of exoplanets are unlikely
      to be detected reliably.
\end{itemize}
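As referenced in P4, the qualitative elevation dependence of the refraction can be reproduced by the classic two-term approximation $R \approx A\tan z - B\tan^3 z$, where $z$ is the zenith distance and $A\approx 58.3''$, $B\approx 0.067''$ are typical sea-level values. This is only a rough sketch (in Python); the simulation itself relies on the {\it slaRefro} routine:
\begin{verbatim}
import math

def refraction_arcsec(elevation_deg, A=58.3, B=0.067):
    # two-term refraction approximation R = A*tan(z) - B*tan(z)^3;
    # coefficients are typical sea-level values, illustrative only
    z = math.radians(90.0 - elevation_deg)
    t = math.tan(z)
    return A * t - B * t**3

for el in (10, 30, 60, 90):
    print(el, f"{refraction_arcsec(el):.1f} arcsec")
# drops from ~5 arcmin at 10 deg elevation to zero at the zenith
\end{verbatim}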
Based on the above analyses of the $\alpha$ Centauri orbit, the
atmospheric and aberration effects are only marginally important for
constraints on its binary orbit based on relative astrometry. However,
these effects are far more significant than potential planetary
signals. For example, an Earth-like planet around $\alpha$ Centauri B
would induce $\sim 1~\mu$as stellar reflex motion, and a Jupiter-like
planet would induce $\sim 1$\,mas reflex motion. Hence, the atmospheric
and aberration effects should be modeled to a high-precision level if
solar system analogs are to be detected through the astrometry
method.
\begin{figure}
\centering
\includegraphics[scale=0.6]{relative_alphaCenA_astrometry_DDGR_dt10day_Ntime2923_refro.pdf}
\caption{Various effects on the relative position of $\alpha$
Centauri B with respect to A. The differential
refraction is shown in the offset coordinates (P1), and as a function
of time (P2), as a function of elevation angle (P3). The
refraction as a function of elevation angle is shown in P4. The
horizontal dashed line indicates a refraction of 1\,arcmin. The
differential aberrations in the offset coordinates and as a
function of time are shown in P5 and P6, respectively. P7 shows the
  differential solar lensing in offset coordinates. The geometric
  orbit of B around A without lensing, aberration, and refraction
  effects is shown in P8. The observed orbit with all effects is
  shown in P9. }
\label{fig:acrel}
\end{figure}
To roughly test the precision of PEXO prediction, we compare the
geometric binary orbit (P8 in Fig. \ref{fig:acrel}) with the
astrometry data from \cite{kervella16}. The model and data for the
angular separation and the position angle are shown in
Fig. \ref{fig:acmodel}. The small residual suggests that PEXO is
able to recover previous results. The observational details for each astrometry data point are beyond the scope of this work, so we have not considered the aberration and atmospheric effects that could introduce differential positional offsets.
\begin{figure}
\centering
\includegraphics[scale=0.8]{comparison_ACA_astrometry_DDGR_dt10day_Ntime400.pdf}
\caption{Predicted and observed angular separation (left) and
position angle (right) of $\alpha$ Centauri B with respect to
A. The red lines denote the best-fit orbital solution given by
\cite{kervella16}. The black error bars denote the astrometry data
that the solution is based on and can be visualized in the lower observed-calculated plot.}
\label{fig:acmodel}
\end{figure}
\subsection{Radial Velocity}\label{sec:rv_binary}
Adopting the same observatory coordinates and orbital parameters as in
section \ref{sec:astrometry_binary}, we now consider the relativistic
and classical effects on the radial velocities of $\alpha$ Centauri
below and show the results in Fig. \ref{fig:rv_binary}:
\begin{figure}
\centering
\includegraphics[scale=0.45]{paper_alphaCenA_RV_DDGR_dt10day_Ntime2923.pdf}
\caption{Relativistic and classical effects on the measured radial velocity of
$\alpha$ Centauri A over an orbital period. }
\label{fig:rv_binary}
\end{figure}
\begin{itemize}
\item {\bf P1: General relativity in the target system. }This GR effect is caused by the gravitational field of $\alpha$
  Centauri B, which induces a gravitational Doppler shift of the light from $\alpha$ Centauri
  A. This leads to a radial velocity variation of 0.17\,m/s, which is larger than the maximum radial velocity variation of 0.1\,m/s that would be caused by the presence of any Earth-like planets around $\alpha$ Centauri A. To the best of our knowledge, this effect is not considered in current radial velocity analysis packages.
\item {\bf P2: Special relativity in the target system. }This is a special relativity
effect due to the motion of $\alpha$ Centauri A around the
barycenter of the binary. This effect contributes to a radial
velocity variation of 0.53\,m/s over one orbital period and again does not appear to be included in existing radial velocity packages.
\item {\bf P3: Motion of the target with respect to the TSB.} This radial velocity variation is due to the motion of $\alpha$ Centauri A around the binary barycenter. This motion is typically ignored for the barycentric correction, although it can be determined {\it a priori} if
the Keplerian parameters of the binary motion are known to a high precision.
\item {\bf P4: Motion of the TSB with respect to the SSB. }This is a kinematic effect
due to the relative motion of the TSB with respect to
the SSB. This motion would change the viewing perspective and lead
to the so-called ``perspective acceleration'' in radial velocity. This
perspective acceleration is coupled with the binary motion and the
observer's motion in the solar system. This is evident in the corresponding
radial velocity acceleration shown in Fig. \ref{fig:rv_accelerate}. The mean
acceleration is the perspective acceleration (a back-of-the-envelope
estimate of its secular part is sketched after this list), the short
periodic variation is due to the Earth's annual motion around the
Sun, and the long periodic variation is caused by the binary motion
of $\alpha$ Centauri A and B. This time-varying acceleration casts doubt on the reliability of subtracting a linear trend with a constant perspective acceleration from the radial velocity data (e.g., \citealt{zechmeister13}).
\begin{figure}
\centering
\includegraphics[scale=0.45]{remoteSB.pdf}
\caption{Radial velocity acceleration induced by the motion of the barycenter of
$\alpha$ Centauri relative to the SSB. }
\label{fig:rv_accelerate}
\end{figure}
\item {\bf P5: Lensing in the target system.} This effect corresponds to the
gravitational lensing of the companion, which is $\alpha$ Centauri B
in this case. This effect contributes at most a 0.1\,mm/s radial velocity
variation. Considering that this effect is proportional to the inclination
and the semi-major axis, it might be detectable
with radial velocity instruments such as ESPRESSO in a nearly edge-on
binary system, in short-period binaries, and in transiting systems
hosting massive, short-period planets.
\item {\bf P6: Lensing in the solar system.} This effect is due to the gravitational lensing of the Sun and the Earth, contributing to a 0.1\,mm/s radial velocity variation. This effect is proportional to $\cot{\psi/2}$ \citep{klioner03}, where $\psi$ is the angular distance between the Sun and the target star. Considering that the minimum $\psi$ is 40$^\circ$ for the $\alpha$ Centauri example, the lensing effect would contribute to 1\,cm/s if the target star is less than
1$^\circ$ from the Sun.
\item {\bf P7: Relativistic effects in the solar system.} These effects are due to the gravitational field in the solar system and the barycentric motion of the observer. These effects
also cause the Einstein delay, which transforms TT to TCB. The ratio of the increments of TT and TCB is simply the relativistic Doppler shift. Because the motion of the observatory
in the solar system is derived from the JPL ephemeris, we can subtract this type of radial velocity variation directly from the measured radial velocity to remove these local effects. However, in cases
  where the observatory site or the ephemeris is not well determined,
the Earth's motion as well as the motions of the target star should
be determined {\it a posteriori}.
\item {\bf P8: Motion of geocenter with respect to the SSB.} This effect is due to the
barycentric motion of the geocenter and contributes about a 20\,km/s radial velocity variation. Such significant kinematic effects are not completely local
because the corresponding radial velocity variation depends on the direction of
the target, which changes over time due to the motion of the target
star.
\item {\bf P9: Earth rotation.} This effect is due to the
Earth's rotation and contributes to a 200\,m/s variation. Hence, the radial velocity
data precision highly depends on the Earth rotation
model. Specifically, a radial velocity precision of 1\,cm/s requires 1\,cm/s
modeling precision of the Earth rotation. Alternatively, the Earth's
  rotation can be determined {\it a posteriori} through a fit of the
combined model to the data.
\item {\bf P10: Motion of observer with respect to the SSB.} This is a combination
  of the P8 and P9 effects.
\item {\bf P11: Troposphere refraction.} In this panel, we show
refraction-induced radial velocity variation for elevation angles
larger than 10$^\circ$ to be representative of most ground-based
observations. This indicates at most a few mm/s variation in
radial velocity, and thus refraction-induced effects are
negligible for the current radial velocity observations.
\item {\bf P12: All effects. }This is the observed radial velocity, which is a
combination of all effects.
\end{itemize}
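As referenced in P4, the secular part of the perspective acceleration follows from elementary kinematics: for a transverse velocity $v_t = \mu d$, the radial velocity changes at a rate $\dot{v}_r \simeq v_t^2/d = \mu^2 d$. A rough sketch (in Python; the proper motion and distance are approximate catalog-level values for the $\alpha$ Centauri barycenter):
\begin{verbatim}
import math

PC = 3.0857e16                 # meters per parsec
YR = 3.1557e7                  # seconds per year
ARCSEC = math.pi / (180.0 * 3600.0)

mu = 3.7 * ARCSEC / YR         # total proper motion, rad/s (approximate)
d = 1.34 * PC                  # distance to alpha Cen (approximate)

v_t = mu * d                   # transverse velocity, m/s
a_r = v_t**2 / d               # secular perspective acceleration, m/s^2
print(f"v_t ~ {v_t / 1e3:.1f} km/s")
print(f"dv_r/dt ~ {a_r * YR:.2f} m/s per yr")   # a few 0.1 m/s per yr
\end{verbatim}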
We further assess the performance of PEXO by comparing the PEXO model
prediction of radial velocity with the radial velocity data in the
literature in Fig. \ref{fig:ACcombined}. For data sets with relative
radial velocities, we add an offset so that the mean predicted and
observed radial velocities are equal. The PEXO
prediction fits the combined data well, leading to 4.7 and 4.2\,m/s standard
deviations of residual radial velocities for $\alpha$ Centauri A and B,
respectively. Nevertheless, we still see significant variations in
the CHIRON and HARPS data sets, indicating potential bias in the
\cite{kervella16} solution and in the barycentric correction. To
mitigate such biases, a comprehensive modeling of the $\alpha$
Centauri system is needed, although it is beyond the scope of this work.
\begin{figure}
\centering
\includegraphics[scale=0.5]{AC_combined_comparison.pdf}
\caption{Comparison of the radial velocity model prediction and the
radial velocity data sets from various sources. The red line shows
the model prediction based on the solution given by
\cite{kervella16}. Left panel: model prediction and observed
radial velocities of $\alpha$ Centauri A and B; middle panel:
radial velocity residual for $\alpha$ Centauri A; right panel:
radial velocity residual for $\alpha$ Centauri B. In each panel,
the top axis shows the modified Julian date (MJD$=$JD$-$2400000.5). The HARPS data is
from \cite{lisogorskyi19}, the CHIRON
and ES data sets are from \cite{zhao18}, the UVES data is obtained
by \cite{kjeldsen05}, the AAT and CORALIE data are from
\cite{pourbaix02}, and the LC and VLC data are from \cite{endl01}. }
\label{fig:ACcombined}
\end{figure}
\subsection{A Comparison of Relativistic Effects on Timing,
Astrometry, and Radial Velocities between Different Packages}\label{sec:comparison_table}
We apply PEXO to simulate the timing, astrometry, and radial velocity
of $\tau$ Ceti, $\alpha$ Centauri A, and XO-3 over one decade to
assess the significance of various effects. In the XO-3 system,
we treat XO-3 b as the primary and XO-3 as the secondary so that the timing model can be used to predict transit
timing, the astrometry model is applicable to the relative astrometry
of directly imaged planets, and the radial velocity model can be used in the
studies of systems such as the S2-Sgr A* system
\citep{gravity18}. Additionally, we compare the functions in PEXO with
those in TEMPO2, EXOFAST+\footnote{EXOFAST+ represents a
combination of the timing routines such as {\small utc2bjd.pro}
developed by E10, EXOFAST developed by \cite{eastman13}, and {\small zbarycorr.pro} developed by \cite{wright14}. The Roemer delay in TS
applies to EXOFAST, the radial velocity functionalities apply to
{\small zbarycorr.pro}, and the other timing functionalities apply to {\small utc2bjd.pro}.}, and GREM\footnote{Because we do not have
access to the software, information about the functions implemented
comes from \cite{klioner92} and \cite{klioner03}.} and show the results in Table \ref{tab:functions}.
Although TEMPO2 is able to model radial velocity by
differentiating the timing function over time, such a numerical
treatment is likely to result in significant rounding errors due to
the huge scale difference between various quantities. The pulse
frequency in TEMPO2 is not the same as the light frequency; for example, the
light from a star can be gravitationally redshifted while the
rotation frequency of a pulsar cannot. Thus, TEMPO2 is not optimal for radial velocity modeling especially in the case of exoplanet detections.
As seen in Table \ref{tab:functions}, TEMPO2 provides a precise timing
model by including many high-order effects, EXOFAST+ includes some
relativistic effects in timing and radial velocity, and GREM is able to model
astrometry to a high precision. In comparison with these packages,
PEXO aims to include most high-order effects whilst also providing a
combined modeling of timing, astrometry, and radial velocity. As shown in the table, the significant time delay not included in
EXOFAST+ and GREM is the Einstein delay in the target system,
which is about 0.01\,s for $\alpha$ Centauri A and XO-3
b. The Einstein delay in $\alpha$ Centauri over one binary
orbit is as large as 0.1\,s. This delay variation actually measures
the difference between periastron and apastron for a given Keplerian
orbit. Thus it is significant for an eccentric orbit and can be measured
by the timing difference between the primary and secondary transits for
a long-period transit planet.
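This amplitude can be cross-checked against the standard pulsar-timing expression for the orbital Einstein-delay amplitude, $\gamma = e\,(P_b/2\pi)^{1/3}\,T_\odot^{2/3}\,m_c(m_p+2m_c)/(m_p+m_c)^{4/3}$, with masses in solar units and $T_\odot\equiv GM_\odot/c^3\approx 4.925\times 10^{-6}$\,s. A sketch (in Python; the $\alpha$ Centauri masses, period, and eccentricity are approximate literature values, with A treated as the ``pulsar''):
\begin{verbatim}
import math

T_SUN = 4.925490947e-6          # GM_sun/c^3, seconds
P_b = 79.9 * 365.25 * 86400.0   # orbital period ~79.9 yr (approximate)
e = 0.52                        # eccentricity (approximate)
m_p, m_c = 1.1, 0.93            # alpha Cen A and B masses, solar units

gamma = (e * (P_b / (2.0 * math.pi))**(1.0 / 3.0) * T_SUN**(2.0 / 3.0)
         * m_c * (m_p + 2.0 * m_c) / (m_p + m_c)**(4.0 / 3.0))
print(f"gamma ~ {gamma:.2f} s")  # ~0.1 s, matching the amplitude above
\end{verbatim}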
Without considering proper motion effects, EXOFAST+ introduces
subsecond timing bias for $\tau$ Ceti and $\alpha$ Centauri
A. This bias increases with the time difference between the reference
epoch of the astrometry catalog and the mean epoch of the data. On the other hand, the decoupling approach adopted by EXOFAST+ and GREM would introduce 0.02\,s bias
in their timing model for $\alpha$ Centauri. Because TEMPO2 does not
consider the third-order Roemer delay, there would be timing biases
of 200 and 900\,ns for $\tau$ Ceti and $\alpha$ Centauri for one
decade, respectively. Because the third-order Roemer delay is
determined by the time difference between simulated epochs and the
reference epoch, the corresponding 200\,ns timing bias for the
simulated epochs starting from the {\it Gaia} reference epoch for $\tau$
Ceti (shown in Table \ref{tab:functions}) is much higher than that
for earlier epochs shown in the right panel of Fig. \ref{fig:bjd}.
The atmospheric effects contribute a position offset of a few arcminutes
and are not modeled by GREM, which is aimed at space-based
astrometry. The second-most significant astrometric effect is the
stellar aberration. Because the first-order aberration is considered
in most packages, we only show the second- and third-order aberration
values in Table \ref{tab:functions}. The second-order aberration
contributes at most 1\,mas position offset and is not sensitive to
stellar distance. Thus it is important for mas-precision astrometry
for all stars. Another significant effect is
gravitational lensing by the Sun, which could contribute more than
10\,mas position offset. The Earth lensing dominates the planetary
lensing effects and contributes a sub-mas position offset. The third-order geometric effect contributes a 30\,$\mu$as astrometric
bias for $\alpha$ Centauri A and is not included in GREM. Such an
effect could be significant for some nearby binary systems and needs
to be properly modeled if $\mu$as precision is required. The decoupling effect contributes an offset of a few arcseconds.
For the comparison of radial velocity models, the total decoupling bias is about 4\,m/s for $\alpha$ Centauri over one decade and thus significantly higher than potential signals in the radial velocity data \citep{zhao18}. The time-varying special relativity effect in the $\alpha$ Centauri system is as much
as 0.3\,m/s and is not included in current radial velocity
packages. The relativistic effects are even more significant in nearly
edge-on systems such as XO-3. In such cases, even the lensing
Doppler effect in the target system contributes up to 2\,m/s and is measurable with current radial velocity instruments.
It is evident in Table \ref{tab:functions} that no current packages
are able to precisely model different types of data consistently. In contrast, PEXO stands out as a package for modeling
multiple types of data in a precise and consistent fashion. This makes
it a suitable package for synthesizing data from high precision
ground-based or space-based facilities.
\begin{longrotatetable}
\begin{table}
\caption{Comparison of various packages and the amplitudes of high-order classical and relativistic effects in timing, astrometry, and
radial velocity. The amplitudes are determined based on
simulations over ten years starting from the Gaia DR2 epoch. The solar system is denoted by SS while the target system is denoted by TS. We do not show constant effects such as the frame transformation between TSB and SSB. Because TEMPO2 is not optimal and is not aimed at radial velocity modeling, but in principle might offer the feature, we assign a dash mark to each radial velocity effect for TEMPO2. Because the wet component of
tropospheric delay is not well modeled, we follow TEMPO2 to only
model the hydrostatic component. The delay values are for
elevation angles larger than 10$^\circ$ and are thus representative
of most astronomical observations. Like TEMPO2, PEXO only models
the well-understood components, such as the hydrostatic delay in the
troposphere, and treats the other components as
fittable parameters. The trend decoupling bias is calculated for a
ten-year time span while the period decoupling bias is for an orbital
period of a given target system. The decoupling effects are recorded
as a function of a package if all motions are modeled
simultaneously to avoid decoupling bias and the decoupling effects
can be separated as a data product (or ``barycentric
correction''). If a function is included in a package, a tick is
assigned. Otherwise, a cross is assigned. If there is a coding
error for a function, we use two crosses to mark it. }
\label{tab:functions}
\footnotesize{
\hspace{-1in}
\begin{tabular}{lp{5cm}cc ccc cccc}
\hline
\hline
Model&Function&Equations&Unit&$\tau$ Ceti&$\alpha$ Centauri A& XO-3
b&PEXO&TEMPO2\footnote{TEMPO2
is not designed
primarily for radial velocity
modeling, although
the pulse
frequency
variation can be
converted into
Doppler shift.
}&EXOFAST+
& GREM\\
\hline
Timing&Second-order Roemer delay in SS&\ref{eqn:dpS}&s&$3\times
10^{-4}$&$5\times
10^{-4}$&$4\times
10^{-6}$&\cmark&\cmark&\xmark\xmark&\cmark\\
Timing&Third-order Roemer delay in SS&\ref{eqn:offset}&s&$6\times10^{-5}$&$9\times 10^{-7}$&$3\times
10^{-12}$&\cmark&\xmark&\xmark&\xmark\\
Timing&Einstein delay in SS&\ref{eqn:einstein}&s&20&20&20&\cmark&\cmark&\cmark&\cmark\\
Timing&Shapiro delay due to the Sun&\ref{eqn:DSO}&s&$3\times
10^{-5}$&$2\times
10^{-5}$&$2\times
10^{-5}$&\cmark&\cmark&\cmark&\cmark\\
Timing&Shapiro delay due to SS planets&\ref{eqn:DSO}&s&$4\times
10^{-8}$&$2\times
10^{-8}$&$3\times
10^{-8}$&\cmark&\xmark\xmark&\xmark&\xmark\\
Timing&Proper motion of TS&\ref{eqn:dpS}&s&0.2&0.6&$1\times 10^{-4}$&\cmark&\cmark&\xmark&\cmark\\
Timing&Roemer delay in TS&\ref{eqn:drT-pT}&s&0&$3\times 10^3$&40&\cmark&\cmark&\cmark&\cmark\\
Timing&Einstein delay in TS&\ref{eqn:deT}&s&0&0.01&0.01&\cmark&\cmark&\xmark&\xmark\\
Timing&Shapiro delay in TS&\ref{eqn:DST}&s&0&$4\times 10^{-6}$&$2\times 10^{-7}$&\cmark&\cmark&\xmark&\xmark\\
Timing&Atmospheric effects&\ref{eqn:Dtropo2}&s&$8\times 10^{-8}$&$3\times 10^{-8}$&$3\times 10^{-8}$ &\cmark&\cmark&\xmark&\xmark\\
Timing&Trend decoupling effects&\ref{eqn:timing_trend}&s&0&0.02&$4\times 10^{-3}$&\cmark&\cmark&\xmark&\xmark\\
Timing&Period decoupling effects&\ref{eqn:timing_period}&s&0&0.02&$5\times 10^{-7}$&\cmark&\cmark&\xmark&\xmark\\
\hline
Astrometry&Second-order stellar aberration&\ref{eqn:uo}&as&$6\times10^{-4}$&$7\times 10^{-4}$&$7\times 10^{-6}$&\cmark&\xmark&\xmark&\cmark\\
Astrometry&Third-order stellar aberration&\ref{eqn:uo}&as&$2\times 10^{-7}$&$1\times 10^{-7}$&$1\times 10^{-7}$&\cmark&\xmark&\xmark&\cmark\\
Astrometry&Lensing by the Sun&\ref{eqn:lo}&as&0.02&0.009&0.01&\cmark&\xmark&\xmark&\cmark\\
Astrometry&Lensing by SS planets&\ref{eqn:lo}&as&$6\times 10^{-4}$&$2\times
10^{-4}$&$9\times10^{-4}$&\cmark&\xmark&\xmark&\cmark\\
Astrometry&Gravitational lensing in TS&\ref{eqn:ll}&as&0&$3\times 10^{-9}$&$1\times
10^{-9}$&\cmark&\xmark&\xmark&\xmark\\
Astrometry&Second-order geometric
effects&\ref{eqn:offset}&as&0.2&4&$2\times 10^{-3}$&\cmark&\xmark&\xmark&\cmark\\
Astrometry&Third-order geometric effects&\ref{eqn:offset}&as&$2\times 10^{-6}$&$3\times
10^{-5}$&$2\times10^{-14}$&\cmark&\xmark&\xmark&\xmark\\
Astrometry&Atmospheric effects&\ref{eqn:refro}&as&$4\times 10^3$&$1\times 10^3$&$3\times 10^3$&\cmark&\xmark&\xmark&\xmark\\
Astrometry&Trend decoupling effects&\ref{eqn:u_trend}&as&0&9&2&\cmark&\xmark&\xmark&\cmark\\
Astrometry&Period decoupling effects&\ref{eqn:u_period}&as&0&0.7&$5\times
10^{-3}$&\cmark&\xmark&\xmark&\cmark\\
\hline
Radial velocity&Relativistic effects in the SS&\ref{eqn:zrS}&m/s&0.2&0.2&0.2&\cmark&-&\cmark&\xmark\\
Radial velocity&Lensing by the Sun&\ref{eqn:zlS1}&m/s&$2\times10^{-3}$&$1\times10^{-3}$&$1\times
10^{-3}$&\cmark&-&\cmark&\xmark\\
Radial velocity&Lensing by SS planets&\ref{eqn:zlS1}&m/s&$5\times10^{-6}$&$2\times
10^{-6}$&$2\times
10^{-6}$&\cmark&-&\cmark&\xmark\\
Radial velocity&Special relativity in TS&\ref{eqn:zsrT}&m/s&0&0.2&40&\cmark&-&\xmark&\xmark\\
Radial velocity&General relativity in TS&\ref{eqn:zgrT}&m/s&0&0.04&50&\cmark&-&\xmark&\xmark\\
Radial velocity&Lensing Doppler shift in TS&\ref{eqn:zlS}&m/s&0&$2\times10^{-5}$&2&\cmark&-&\xmark&\xmark\\
Radial velocity&Second-order geometric
effects&\ref{eqn:offset}&m/s&0.07&0.9&$8\times 10^{-3}$&\cmark&-&\cmark&\xmark\\
Radial velocity&Third-order geometric effects&\ref{eqn:offset}&m/s&$6\times 10^{-7}$&$1\times
10^{-5}$&$2\times
10^{-14}$&\cmark&-&\xmark&\xmark\\
Radial velocity&Atmospheric effects&\ref{eqn:ztropo}&m/s&$8\times
10^{-3}$&$3\times
10^{-3}$&$4\times
10^{-3}$&\cmark&-&\xmark&\xmark\\
Radial velocity&Trend decoupling effects&\ref{eqn:vr_trend}&m/s&0&2&0.4&\cmark&-&\xmark&\xmark\\
Radial velocity&Period decoupling effects&\ref{eqn:vr_period}&m/s&0&2&$5\times 10^{-5}$&\cmark&-&\xmark&\xmark\\\hline
\end{tabular}
}
\end{table}
\end{longrotatetable}
\section{Conclusion}\label{sec:conclusion}
In this work, we introduce relativistic models of timing, astrometry,
and radial velocity in the PEXO package, which is mainly aimed at data
analysis for exoplanets as well as tests of GR. PEXO
includes the general and special relativistic effects both in the
solar system and in the target system. These relativistic effects lead
to Einstein delays in timing, stellar aberration in astrometry as well
as gravitational Doppler shift in radial velocity. PEXO also models
the gravitational lensing and high-order geometric effects in both
systems. The lensing effects lead to the Shapiro delay in timing,
light deflection in astrometry, and Doppler shift in radial
velocity. Based on our comparison of PEXO with TEMPO2, PEXO is able to
achieve a timing precision of $\sim$1\,ns, an astrometry precision of
$\sim$1\,$\mu$as, and a radial velocity precision of $\sim$1\,$\mu$m/s.
These figures are comfortably better than what is expected to be achieved by
current facilities. To test the precision of PEXO, we compare it with TEMPO2 and the package developed by E10. The timing precision of PEXO is at least
comparable with TEMPO2 at the level of a few nanoseconds. It is
better than TEMPO2 for decade-long timing data for nearby targets
due to its consideration of the third-order terms of the Roemer delay. We find an error in the routine {\small shapiro\_delay.C} of TEMPO2 that could induce a timing
bias of tens of nanoseconds in the calculation of the Shapiro delay. Considering the popularity
of TEMPO2 and the potential for coding errors in complex packages, we
strongly recommend the application of independent packages for
important discoveries in pulsar timing and exoplanetology as well as
in other astrophysical applications. We also compare the IDL routine {\small
utc2bjd.pro} developed by E10 and its corresponding applet with
TEMPO2. We find that the applet is able to provide a timing precision of a few
milliseconds if proper motion effects are ignored. However, we notice an
error in the calculation of parallax delay in {\small
utc2bjd.pro}, leading to a timing error of about 0.3\,ms for
$\tau$ Ceti. Although such a bug is not significant for current
exoplanet science, it could become significant for high-precision
applications. The corrected IDL version of E10 is able to model timing to a precision of a few microseconds if the propagation of TSB is provided externally. The errors in high-precision packages such as TEMPO2 and {\small utc2bjd.pro} demonstrate the utility of new packages in minimizing coding errors and their potential spread in other applications.
The numerical implementations of barycentric correction of radial
velocity for PEXO and TEMPO2 differ by a few $\mu$m/s. Considering the
timing error in radial velocity data, PEXO is able to provide a
practical radial velocity precision of 1\,cm/s. The main
limitation of radial velocity precision comes from the bias in the
determination of the appropriate mid-point of an exposure caused by effects such as the
atmospheric chromatic aberration. We do not compare PEXO with the
known high-precision astrometry package GREM because it is not
publicly available. Considering the consistency between astrometry and radial velocity
modeling, we expect the astrometry precision to be at the $\mu$as level.
We test various effects in transit timing by applying PEXO to XO-3 b and Kepler-210 c. The relativistic effects are not significant enough to be detectable in these two cases. High cadence
and long term observations are needed to reliably detect relativistic
precession in short-period, eccentric transiting systems where
planet-induced precession is minimized. Follow-up work on how transit
timing is sensitive to various relativistic effects is needed for the potential
application of transit timing in the test of GR, as
done in pulsar timing. The Einstein delay in the target system
contributes a 0.2\,s timing variation for $\alpha$ Centauri
over one decade but is not included in many previous packages
such as EXOFAST and GREM. The Einstein delay in some high-eccentricity
transit systems might be detectable in the timing difference between
the primary and secondary transits. We further investigate the
feasibility of using PEXO for relativity tests in binaries. The
gravitational redshift variation caused by the companion of the target
star is as large as a few m/s for 52 binaries with orbital periods
less than 10\,yr. Such tests are able to examine GR
and MOND theories in a new regime of stellar mass and weak gravitational field.
Using $\alpha$ Centauri as an example, we assess relativistic effects
in the modeling of astrometry. We find that the lensing effect in the
solar system contributes a 9\,mas position variation, while the
lensing effect in $\alpha$ Centauri only contributes a 0.003\,$\mu$as
position variation over one decade. The barycentric motion of the
binary and the stellar aberration due to the Earth's motion, as well as the atmospheric refraction are the
major effects changing the observed direction of photons. For
ground-based astrometric observations, the atmospheric modeling
uncertainty limits the absolute positional precision to about 1\,arcsec
and the relative positional precision to tens of $\mu$as. For space-based
observations, an astrometry precision of $\mu$as requires a telescope
ephemeris with 1\,cm/s velocity precision. An alternative approach is
to determine the Earth's motion {\it a posteriori} through a combined fit to the
data. The third-order geometric effect contributes a position bias of a few $\mu$as and is not
included in the GREM package, which is used by {\it Gaia} for relativistic
astrometric solutions \citep{lindegren18}.
We assess various effects in the radial velocity model using the
example of $\alpha$ Centauri A. The general and special relativistic
effects in the $\alpha$ Centauri system affect the radial velocity of
$\alpha$ Centauri A by 0.04 and 0.2\,m/s over one decade,
respectively. Although these effects are essential for sub-m/s radial
velocity modeling of binary systems, they are not accounted for in
current radial velocity packages. The special relativity effect due to
$\alpha$ Centauri B changes the radial velocity by nearly 1\,m/s over
one orbital period. The binary motion of the target star would change
the viewing perspective, leading to a change in the projection of various motions
onto the radial velocity direction. Furthermore, errors in astrometry data would
lead to considerable radial velocity variation for nearby
stars. We find that decoupling could introduce $\sim$0.1\,m/s bias in one
decade of radial velocity observations of a nearby star ($<$10\,pc) with
hot or cold Jupiters and could introduce $\sim$1\,m/s bias over one year for
nearby stars with stellar-mass companions. Therefore, a barycentric
correction of the measured radial velocity is not adequate to
achieve 1\,cm/s precision. We suggest a combined modeling of stellar reflex motion, stellar proper motion, and the Earth's motion for high-precision radial velocity modeling.
The $\sim$1\,ns timing precision of PEXO and TEMPO2 can be achieved if the uncertainty in the ephemeris
of solar system bodies is less than 1\,m and the effect of interstellar
scattering is well understood (E06). On the other hand, the radial velocity precision of
1\,$\mu$m/s is the software precision that is achievable only if we can
model the observational effects to a high precision. For example, to achieve a precision of 1\,mm/s in radial
velocity modeling, we need to determine the midpoint of exposures in
spectroscopic observations to a precision of a few milliseconds by
properly modeling the atmospheric chromatic aberration of incoming
photons. For future space-based spectrographs such as EarthFinder
\citep{plavchan18}, the exposure time might be better determined and
telluric effects would disappear, enabling $<$1\,cm/s radial velocity
precision if combined with PEXO's precise modeling of astrophysical
effects. PEXO's precision can be further improved by the
appropriate modeling of the Galactic acceleration of stars and the
dispersion of photons in the interplanetary and interstellar
medium. The extensions envisaged for PEXO are (1) the Galactic
acceleration of stars and the corresponding secular aberration as
well as cosmological effects (e.g., \citealt{klioner03} and
\citealt{lindegren03}) and (2) the inclusion of gravitational wave effects to provide an independent package for the detection of gravitational waves using pulsar timing arrays such as NANOGrav (e.g., \citealt{arzoumanian18}).
Based on our investigation of various relativistic effects and
comparison of various packages, we summarize the main results of
this paper and give relevant recommendations as follows:
\begin{itemize}
\item By accounting for relativistic and high-order geometric
effects, PEXO is able to model timing to a precision of 1\,ns,
astrometry to 1\,$\mu$as, and radial velocity to 1\,$\mu$m/s. PEXO
is able to model multiple types of data precisely and consistently.
\item Decoupling of the target system and the solar system introduces
considerable bias in the modeling of timing, astrometry and
radial velocity. For stars with stellar mass companions and for
nearby stars with Jupiter-mass companions, we recommend a
combined modeling of binary motion, binary barycentric motion,
and telescope ephemeris. An alternative, efficient approach is to model the
decoupling trend bias using astrometric offsets and fit all
motions to the data corrected for barycentric effects.
\item A detectable relativistic effect in extrasolar systems is the
gravitational redshift of the light from a target star caused by
its companion star. This test is feasible for a few binaries
with long-term and high-precision radial velocity data
($\sim$1\,m/s uncertainty), such as $\alpha$ Centauri A and B.
\item Atmospheric effects limit absolute astrometry to a precision
of 1\,arcsec and relative astrometry to a precision of tens of
$\mu$as. Such precisions are only achievable if various
atmospheric parameters are well measured at the observation site
and the refraction effects are correctly modeled through
reliable packages. To avoid high model uncertainty, we recommend an
elevation angle of at least 30$^\circ$ for direct imaging of
planets (see section \ref{sec:astrometry_binary}).
\item Second-order stellar aberration is needed to model space-based astrometric observations correctly. The third-order
geometric effects become significant for decade-long observations
of nearby stars. Because GREM is the core package used for {\it Gaia}'s
astrometry solution \citep{lindegren18}, we recommend a more
comprehensive analysis of astrometric epoch data for nearby stars to reveal potential planetary signals.
\item The consideration of relativistic effects in the target
system is the missing piece in previous exoplanet
packages. The Einstein delay and gravitational Doppler shift
are significant in some binary systems and are detectable with
current technology.
\item The coding error in the calculation of planetary Shapiro
delay would bias TEMPO2 timing modeling by at least tens of nanoseconds
over one decade. The bug in the calculation of parallax delay in
{\small utc2bjd.pro} would bias its timing precision by
$\sim$1\,ms for nearby stars.
\item PEXO is tested by comparison with TEMPO2 and by recovering
previous fitting results for the astrometric and radial velocity
data for $\alpha$ Centauri.
\end{itemize}
In summary, PEXO provides a high-precision combined model for timing,
radial velocity, and astrometry data. PEXO is versatile enough to take
binary motions into account and precise enough to consider high-order
classical and relativistic effects. By applying PEXO to the
analysis of high-precision data provided by the state-of-the-art
facilities such as TESS, ESPRESSO, and {\it Gaia}, we expect to have the
ability to make a reliable detection of an Earth twin as well as a
test of GR in extrasolar systems in the near
future.
\section*{Acknowledgements}
We are indebted to the anonymous referee of this paper for their
inspiring and insightful comments that led to very substantial improvements in the content and clarity of this manuscript and PEXO. M.L. is supported by a
University of Hertfordshire PhD studentship. F.F. and H.J. acknowledge
support from the UK Science and Technology Facilities Council
[ST/M001008/1]. This work has made use of data from the European
Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data
Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for
the DPAC has been provided by national institutions, in particular the
institutions participating in the {\it Gaia} Multilateral Agreement. We adapt various SOFA routines (\url{http://www.iausofa.org}) to R functions in PEXO.
\section{Introduction}
\label{Intro}
Relational Quantum Mechanics (RQM) is an interpretation of quantum theory proposed by Carlo Rovelli in several publications (cf.\ \cite{Rovelli:1996, Rovelli:2016, Rovelli:2018}, \cite{Laudisa:2019b}). According to this interpretative framework, Quantum Mechanics (QM) concerns the observable properties of physical systems \emph{relative} to specific observers. In this context, indeed, quantum systems can be described differently by distinct observers, and these diverse representations are not contradictory. Thus, the relational character of RQM entails that the notion of an absolute, unique reality is abolished.
Such a theory is motivated by Rovelli's pioneering work in quantum gravity\footnote{For details about Rovelli's research on quantum gravity the reader may refer to \cite{Rovelli:2004} and \cite{Rovelli:2014}.}, as explicitly stated in \cite{Rovelli:2014} and more recently in \cite{Rovelli:2018}. In loop quantum gravity, in fact, one does not rely on a background spacetime as a container, or an arena in which to ``locate things''. Rather, the fundamental items of this framework somehow create it---or better, they reproduce its salient features---providing a \emph{relational} perspective about this notion. Moreover, Rovelli argues that Einstein's theory of general relativity also endorses a relational view of spacetime, since the localization of dynamical items---i.e.\ spacetime regions, the gravitational field, \emph{etc.}---is not determined with respect to a fixed background. Similarly, RQM underlines the \emph{relational} character of quantum theory:
\begin{quote}
[t]he theory yields probability amplitudes for processes, where a process is what happens between interactions. Thus quantum theory describes the universe in terms of the way systems affect one another. States are descriptions of ways a system can affect another system. Quantum mechanics is therefore based on \emph{relations} between systems, where the relation is instantiated by a physical interaction (\cite{Rovelli:2014}, p.\ 54).
\end{quote}
\noindent Referring to this, Rovelli more explicitly states that
\begin{quote}
a quantum mechanical description of a certain system (state and/or values of physical quantities) cannot be taken as an ``absolute'' (observer-independent) description of reality, but rather as a formalization, or codification, of properties of a system \emph{relative} to a given observer.\ Quantum mechanics can therefore be viewed as a theory about the states of systems and values of physical quantities relative to other systems. [...] Therefore, I maintain that in quantum mechanics, ``state'' as well as ``values of a variable''---or ``outcome of a measurement''--- are relational notions in the same sense in which velocity is relational in classical mechanics'' (\cite{Rovelli:1996}, pp.\ 1648-1649).
\end{quote}
Remarkably, the relational character of general relativity can be made compatible with that of RQM in virtue of the notion of \emph{locality}---a central concept in Einstein's theories. Indeed, \cite{Rovelli:2018} claims that if we identify the quantum mechanical notion of ``physical system'' with the general relativistic concept of ``spacetime region'', then the notion of ``interaction'' between systems in QM becomes the mirror image of that of ``adjacency'' between spacetime regions: locality ensures that interaction requires physical adjacency. As a consequence,
\begin{quote}
quantum states are associated to three dimensional surfaces bounding spacetime regions and quantum mechanical transition amplitudes are associated to ``processes'' identified with the spacetime regions themselves. In other words, variables actualise at three dimensional boundaries, with respect to (arbitrary) spacetime partitions. The theory can then be used locally, without necessarily assuming anything about the global aspects of the universe (\cite{Rovelli:2018}, pp.\ 10-11).
\end{quote}
This is a perfect exemplification of what has been stated above concerning the rejection of an absolute, global characterization of reality in RQM.\footnote{Analyzing Rovelli's essays on RQM, it is immediately clear that Einstein's relativity theories and their axioms played a crucial role in order to develop the relational interpretation of QM, as evident in \cite{Rovelli:1996}. In the present essay, however, I will not discuss the notion of locality in the context of relational quantum mechanics; the interested reader may refer to \cite{Rovelli:2019} and \cite{Pienaar:2019}.}
Another motivation for a relational interpretation of QM lies in the fact that relational quantities are ubiquitous in physics, from classical mechanics and electromagnetism to quantum field theory and cosmology. For instance, the classical notion of velocity makes sense only relative to a given reference frame, and the potential at a single point in electromagnetism has no meaning \emph{per se}, acquiring physical significance only once another point is taken as a reference. Similarly, the Unruh effect in quantum field theory states that, from the point of view of an accelerating observer, the vacuum of an inertial observer contains a gas of particles at a temperature proportional to the acceleration, so that the two observers count a different number of particles. Another example is the passage of time, which differs for an observer on Earth and an observer near a black hole. We can also experience relational phenomena in our everyday life simply by looking at the stars: when observing the light emitted by a star situated very far from us, we are looking at events that happened in the distant past. Reality, then, is very different from the perspective of the observed star and from ours.
The main novelty of Rovelli's approach is to take the relational character of physics seriously, applying it to the formalism of standard quantum mechanics taken at face value. This strategy entails remarkable consequences not only for the \emph{Weltanschauung} that RQM provides, but also for the interpretation of the central notion of QM, the wave function. On the one hand, RQM aims at providing a realistic picture of what happens in spacetime starting from the principle that physical systems have properties whose values are observer-dependent---i.e.\ such values may differ from the perspective of two distinct observers. On the other hand, the quantum mechanical wave function $\psi$ loses its central role as an observer-independent representation of quantum states. In this respect, relational quantum mechanics is conceptually close to Heisenberg's matrix mechanics, where (i) the ontologically relevant items are observables whose values evolve in time, and (ii) quantum jumps refer to the updates of these values upon interactions. Nevertheless, although Heisenberg's views about quantum theory oscillated between positivism and a form of realism over the years\footnote{The reader may refer to \cite{Jaeger:2009}, Chapter 3, for details about Heisenberg's views on quantum theory.}, Rovelli always emphasized the realist commitment of his interpretation.
Quite ironically, however, relational quantum mechanics is seldom considered a realist interpretation of quantum mechanics. Looking at the literature concerning the philosophical foundations of non-relativistic QM, indeed, the proposals usually considered realistic approaches to quantum physics are Bohmian mechanics (cf.\ \cite{Bohm:1952aa}), the spontaneous collapse theories (cf.\ \cite{Bassi:2003}), the many-worlds interpretation (cf.\ \cite{Wallace:2012aa}), the family of modal interpretations (cf.\ \cite{Lombardi:2017}), \emph{etc.}, but not RQM. Contrary to this attitude, the main aim of the present essay is twofold: (i) to provide Rovelli's theory with a new ontological interpretation, and (ii) to argue that under this novel reading it can be considered a full-fledged realist interpretation of QM.
The ontology of relational quantum mechanics is usually defined in terms of ``events'' or ``facts'', recalling the well-known first proposition of Wittgenstein's \emph{Tractatus}, which states that the world is the collection of facts, not of things (cf.\ \cite{Wittgenstein:1921}).\footnote{There is evidence that Wittgenstein's work influenced some reflections on RQM, since the \emph{Tractatus} is explicitly quoted in \cite{Smerlak:2007}.} The notions of events and facts refer in RQM to interactions among physical systems (cf.\ \cite{Smerlak:2007}, \cite{Laudisa:2019b}).\ Nonetheless, one may further clarify the ontology of this theory by asking what physical systems are, given that they interact in spacetime. In the present paper, my contribution is to provide an answer to this question using the tools of analytic metaphysics, defining systems as mereological bundles of properties which not only can vary in time, but are also observer-dependent. In this manner, the event ontology of RQM can be presented as an ontology of interactions among systems, where the latter are now unambiguously defined. Consequently, I will also point out that RQM need not be framed in the Aristotelian tradition, where the notion of substance is primary, but can be understood in terms of the property realism initiated by David Hume in his \emph{Treatise of Human Nature} (cf.\ \cite{Hume:2007})---where a devastating criticism of the concept of substance is given---and currently evolved into different versions of bundle theories. Finally, since a metaphysically satisfying definition of physical systems can be given in RQM, I argue that this theory is compatible with moderate structural realism, a position where relations and relata are both fundamental.
\vspace{2mm}
The paper is organized as follows: in Section \ref{RQM} an overview of relational quantum mechanics and its assumptions is given (readers familiar with this framework can skip this section). Mereological Bundle Theory (MBT) is introduced in Section \ref{bundle} and applied to relational quantum mechanics in Section \ref{RQMBTA} in order to provide a new metaphysical interpretation of physical systems in this framework.\ Moreover, in that section it is argued that Rovelli's theory is compatible with moderate structural realism. Finally, Section \ref{conc} concludes the essay.
\section{A Brief Overview of Relational Quantum Mechanics}
\label{RQM}
It is well-known that in RQM the notion of an absolute, observer-independent state of a system withers away, together with the observer-independent attribution of values to physical magnitudes, in favor of a relational view of quantum states.\footnote{This presentation of RQM follows closely \cite{Rovelli:1996}.} Indeed, according to relational quantum mechanics the state of a system is meaningfully defined only with respect to another system, which plays the role of an external observer. More importantly, RQM rejects the idea of an absolute, observer-independent reality. Although this is a radical departure from the common sense intuition of the world we live in, Rovelli's relational conception of reality does not lead to subjectivism.\footnote{Interestingly, \cite{Candiotto:2017} convincingly argued that RQM does not entail any form of ontological relativism.} In this regard, the word `observer' refers in RQM to \emph{every} physical object having a particular definite state of motion; in this framework an electron, an air molecule, as well as a Geiger counter or a human experimenter can be observers (cf.\ footnote 9 below).\footnote{In this regard, it is also worth noting that RQM describes \emph{every system} quantum mechanically, regardless of whether it is microscopic or macroscopic, avoiding by construction the vexed problem of ``where to put the cut'' between the quantum and the classical regime. More on this below.} Hence, this notion does not necessarily refer to conscious beings, which do not have any power to create reality---which exists \emph{per se}---contrary to other approaches to quantum theory.
According to Rovelli, such a relativization of states and values of physical quantities has a strong empirical basis, which ``derives from the observation that the experimental evidence at the basis of quantum mechanics forces us to accept that distinct observers give different descriptions of the same events'' (\cite{Rovelli:1996}, p.\ 1638). Interestingly, another motivation to rebut the notion of absolute reality comes from a serious consideration of the methodological evolution of Einstein's theory of special relativity. Indeed, Rovelli notes that the difficulty physicists had in interpreting Lorentz transformations correctly was due to the concept of \emph{absolute simultaneity}. Removing this notion, Einstein was able to endow such transformations with a physical \emph{interpretation}. Similarly, in relational quantum mechanics it is claimed that the conceptual and technical conundrums of standard quantum theory are due to the presence of the physically incorrect notion of the \emph{absolute state} of a system. Taking the quantum formalism at face value, and following Einstein's steps in eliminating the notion of ``absolute state'', the main aim of RQM is to provide it with a new interpretation in order to understand what this theory tells us about the world. Furthermore, the project of RQM is to re-derive the formalism of quantum theory starting from simple, physically meaningful principles which are experimentally motivated---retracing again the example of Einstein's derivation of Lorentz transformations from the simple postulates of special relativity.\footnote{For space reasons the details of such a derivation will not be given here; the interested reader may refer to \cite{Rovelli:1996}, Section 3.}
That being said, it is useful to recall what principles of standard quantum mechanics carry over intact in RQM, in order to delineate its formal structure:
\begin{itemize}
\item \textbf{Eigenvalue-Eigenstate Link:} The properties of physical systems in RQM are represented by self-adjoint operators, i.e.\ the usual quantum observables. More precisely, given a quantum observable $A$, a quantum system $s$ possesses a sharp value $a$ for the quantity $A$ if and only if $s$ is in an eigenstate of $A$ that corresponds to $a$;
\item \textbf{The Schr\"odinger Equation (SE):}
\begin{equation}
\label{SE}
i\hbar\frac{\partial \psi}{\partial t}=H\psi
\end{equation}
\noindent where $i$ is the imaginary unit, $\hbar$ is the reduced Planck constant, and $H$ is the Hamiltonian operator, which gives the total energy of the system under consideration, defined as the sum of its kinetic and potential energies. This equation is linear and generates unitary time evolution; it provides the fundamental law of motion of quantum systems. Nonetheless, it is useful to stress that the physics of RQM is best expressed in terms of the Heisenberg Equation (HE), since according to Rovelli's account only physical quantities evolve in time, not quantum states:
\begin{equation}
\label{HE}
\frac{d}{dt}A(t)=\frac{i}{\hbar}[H, A(t)]+\bigg(\frac{\partial A}{\partial t}\bigg)_H
\end{equation}
\noindent where $A$ is an observable and $[H, A(t)]$ is the commutator of the two operators $H$ and $A$;
\item \textbf{The projection postulate (Collapse rule):} Performing a measurement of a certain observable quantity $A$ on a given system $s$, relative to an observer $O$, the measurement interaction randomly projects the system into one of the possible eigenstates of the corresponding operator $A$. The probabilities for this stochastic transition are given by the Born rule;
\item \textbf{The Born Rule:} If an observable quantity represented by the self-adjoint operator $A$ is measured:
\begin{itemize}
\item the result will be one of the eigenvalues $a_i$ of $A$;
\item the probability of obtaining the eigenvalue $a_i$ is given by $\langle\psi|P_i|\psi\rangle$, where $P_i$ is the projector onto the corresponding eigenspace (a minimal worked example follows this list).
\end{itemize}
\end{itemize}
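To fix ideas, here is a worked instance of the last two postulates. The two-valued observable $A$ with eigenstates $|1\rangle$ and $|2\rangle$ is the same toy system used in the third person problem below; the amplitudes $a, b$ are arbitrary and chosen only for illustration:
\begin{align*}
|\psi\rangle &= a|1\rangle + b|2\rangle, \qquad |a|^2 + |b|^2 = 1,\\
\langle\psi|P_1|\psi\rangle &= \langle\psi|1\rangle\langle 1|\psi\rangle = |a|^2, \qquad \langle\psi|P_2|\psi\rangle = |b|^2.
\end{align*}
\noindent If the outcome ``1'' is obtained, the projection postulate updates the state to $P_1|\psi\rangle / \| P_1|\psi\rangle \| = |1\rangle$. In RQM, of course, both the outcome and the updated state are relative to the observer performing the measurement.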
The substantial difference introduced by RQM with respect to QM is that distinct observers provide in general different descriptions of sequences of physical events, and such descriptions are equally correct. This is what Rovelli calls the ``Main Observation'' of relational quantum mechanics, and it is illustrated by the ``third person problem'', which we now present.
Let us consider the following idealized experimental situation: an observer $O$ is going to perform a measurement of a certain quantity $A$ on a system $s$. Suppose for the sake of the discussion that this magnitude can take just two (eigen)values $1, 2$, where $|1\rangle$ and $|2\rangle$ are the corresponding eigenstates. Before the measurement, say at time $t_1$, the system $s$ is in a superposition of the possible eigenstates of $A$, $a|1\rangle + b|2\rangle$ (where $|a|^2, |b|^2$ give the probabilities of finding $s$ in $|1\rangle$ and $|2\rangle$ respectively). Assume also that, performing the measurement at a later time $t_2$, $O$ finds the result ``1'', meaning that $s$ is projected into the state $|1\rangle$.
Then, the sequence $\textbf{E}$ of physical events taking place in the lab can be schematically summarized as follows:
\begin{equation}
\left.\begin{aligned}
\label{E}
t_1 \longrightarrow t_2\\
a|1\rangle+b|2\rangle \longrightarrow |1\rangle
\end{aligned}\right\} = \textbf{E}
\end{equation}
\noindent This is just the usual quantum mechanical description of the measurement process, where the interaction between the system and the measuring device suppresses the Schr\"odinger evolution and projects the quantum state onto one of the possible eigenstates of the observed quantity. Now, let us consider the perspective of another observer $P$ that (i) knows the initial states of $s$ and $O$, and (ii) describes the interaction between $s$ and $O$ quantum mechanically. In this scenario the observer $P$ does not itself perform any measurement on the complex system $s+O$ in the time interval $t_1-t_2$. Prior to the measurement of $A$ on $s$ by $O$, $P$ knows that $O$ is in a neutral state, not pointing at any particular result (i.e.\ neither in the ``1'' nor in the ``2'' direction): it is then in its ``ready'' state. Such a ``ready'' state is correlated with the system $s$, which before the measurement is in a superposition of the $|1\rangle$ and $|2\rangle$ states, as stated a few lines above. Given the linear dynamics of QM provided by \eqref{SE}, $P$ obtains the following correlation:
\begin{equation}
\left.\begin{aligned}
\label{Estar}
t_1 \longrightarrow t_2\\
(a|1\rangle+b|2\rangle)\otimes |O_{\mathrm{ready}}\rangle \longrightarrow a|1\rangle\otimes|O_1\rangle + b|2\rangle\otimes |O_2\rangle
\end{aligned}\right\} = \textbf{E'}
\end{equation}
\noindent Thus, in the context of RQM the same sequence of physical events is described differently by two distinct observers; however, in virtue of the principle of relativity of states assumed by Rovelli, \emph{both} descriptions \textbf{E} and \textbf{E'} are equally correct and legitimate. According to $O$ the system $s$ has a sharp value for $A$ after the interaction, whereas for $P$ the system $s$ is not in the state $|1\rangle$ and the measuring device does not indicate any sharp value, meaning that the complex system $s+O$ is in the superposition of states represented by \eqref{Estar}. This is the characteristic trait of RQM.
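It is worth checking explicitly that the two accounts can never clash at the statistical level. The following is a standard textbook computation, not specific to RQM, obtained by applying the Born rule to the entangled state in \eqref{Estar}: if $P$ eventually measures the pointer variable of $O$, it assigns
\begin{equation*}
\big\| \big( I \otimes |O_1\rangle\langle O_1| \big) \big( a|1\rangle\otimes|O_1\rangle + b|2\rangle\otimes|O_2\rangle \big) \big\|^2 = |a|^2
\end{equation*}
\noindent to the pointer reading ``1'' (here $I$ is the identity on the Hilbert space of $s$), and likewise $|b|^2$ to the reading ``2'', while the joint probability of finding $s$ in $|2\rangle$ with the pointer at ``1'' vanishes, since $\langle 2|1\rangle = \langle O_1|O_2\rangle = 0$. Hence the statistics that $P$ predicts for $O$'s pointer exactly reproduce the probabilities that $O$ assigned to the outcomes of $A$: the two relative descriptions cannot generate conflicting experimental records.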
This conclusion entails several consequences.\ Firstly, the separation between observed system and observer cannot be univocally determined, meaning that every system can play both the role of observed system and that of observer. In the third person scenario, not only does $O$ observe the system $s$, but it is also part of the composite observed system $s+O$ according to $P$'s perspective. Secondly, as in the special theory of relativity there is no privileged observer, meaning that all systems are equivalent: ``[n]othing a priori distinguishes macroscopic systems from quantum systems. If the observer $O$ can give a quantum description of the system $s$, then it is also legitimate for an observer $P$ to give a quantum description of the system formed by the observer $O$'' (\cite{Rovelli:1996}, p.\ 1644).\footnote{Referring to this, in \cite{Smerlak:2007} it is stated that ``[a]n observer, in the sense used here, does not need to be, say ``complex'', or even less so ``conscious''. An atom interacting with another atom can be considered an observer. Obviously this does not mean that one atom must be capable of storing the information about the other atom, and consciously computing the outcome of its future interaction with it; the point is simply that the history of its past interaction is in principle sufficient information for this computation'', p.\ 430, footnote 9.} Thirdly, in RQM different observers may provide distinct descriptions of the same sequence of events (cf.\ the Main Observation above). What is crucial to underline is that two different observers do not assign different \emph{probabilities} for possible measurement results, but rather they provide descriptions of different \emph{states of affairs}, as in the example discussed above. Consequently, RQM entails that the notion of the state of a quantum system is relative to some observer. Thus, the notions of ``system'', ``measurement outcome'' and ``value of a variable'' are relational concepts.
Furthermore, according to Rovelli's theory, QM provides a \emph{complete} description of the world: there is no deeper theory describing how absolute reality behaves. An interesting question to ask, then, is how different perspectives may coexist without generating contradictions. This query can be answered considering the third person problem discussed above. Taking into account the sequence \eqref{E} we know that $O$ performed a measurement of $A$ on the system $s$ obtaining a definite result. On the other hand, from \eqref{Estar} we infer that $P$ knows that $O$ performed a measurement of $A$ on $s$, obtaining information about the value of $A$. However, it should be emphasized that $P$ does not have any information about the measurement result obtained by $O$, i.e.\ $P$ does \emph{not} know which one among the possible alternatives has been actualized.\ $P$ does not have this information since it did not interact directly with $O$.\ What $P$ may predict, given its amount of knowledge and the quantum formalism, is just that $O$ made a measurement on $s$; however, it cannot predict the exact value that has been obtained.\ Thus, there is no contradiction between these two different perspectives. Clearly, $P$ can know $O$'s result through physical interaction:
\begin{quote}
if $P$ knows that $O$ has measured $A$ (notation adapted), and then she measures $A$, and then she measures what $O$ has obtained in measuring $A$, consistency requires that the results obtained by $P$ about the variable $A$ and the pointer are correlated (\cite{Rovelli:1996}, p.\ 1652).
\end{quote}
\noindent We have arrived at another tenet of RQM: information can be obtained only via physical interaction; what is important to underline for the purposes of the present essay is that in RQM information concerns the attribution of values to physical quantities.\footnote{For a technical discussion about the notion of information in relational quantum mechanics the reader may refer to \cite{Rovelli:1996}, Sections 2.5 and 3.} In sum, one can summarize the physical content of RQM as follows:
\begin{quote}
\emph{quantum mechanics provides a description of the world in terms of properties of physical systems relative to other systems functioning as observers, and such a description is complete}.
\end{quote}
Finally, to complete our qualitative survey of relational quantum mechanics, let us discuss the ontology of this theory. RQM employs a discrete ontology of events (or physical facts) in spacetime, recalling the Wittgensteinian idea according to which the world is the totality of facts, not of things. More precisely, the ontology of the theory is given in terms of physical variables (as in classical mechanics) whose values change in interactions. The main difference between RQM and classical physics is that in the former theory observables take definite values only in interactions---taking into account also the contextual nature of quantum observables (cf.\ \cite{Kochen:1967}) and the limits imposed by the Heisenberg uncertainty relations---and such values are relative to a particular observer. Against this background, it is worth noting that events are just observer-dependent changes of physical values taking place in interactions among systems. According to RQM, thus, the world is just ``an evolving network of sparse relative events, described by punctual relative values of physical variables'' (\cite{Laudisa:2019b}). Moreover, variables in RQM can take values at certain times and have no sharp values at other times, a fact referred to as the discreteness of quantum observables in Laudisa and Rovelli's introduction to RQM.
Contrary to standard QM, relational quantum mechanics should not be considered as a theory about the behaviour of the wave function in spacetime.\ Remarkably, $\psi$ is just a convenient formal tool, useful for the computation of future quantum probabilities given a certain amount of knowledge at disposal of a particular observer. Thus, in RQM $\psi$ is not interpreted realistically, meaning that the wave function does neither represent a property of an individual system, nor a concrete object in physical space.
Contrary to the Schr\"odinger picture, where the wave function completely describes the state of a system and evolves unitarily in space and time according to \eqref{SE}, in the Heisenberg picture $\psi$ just encodes information about past interactions, and it changes only as a result of another interaction. Moreover, RQM provides a description of reality in terms of discrete events which are modeled as discrete changes in the relative state, ``when the information is updated and nothing else. What evolves with time are the operators, whose expectation values code the time-dependent probabilities that can be computed on the basis of the past quantum events'' (\cite{Smerlak:2007}, p.\ 431). This is why it has been claimed that the physical content of RQM is best described using the Heisenberg equation \eqref{HE}.\footnote{Remarkably, the collapse postulate assumes different meanings in RQM and QM; since the wave function is not considered a physical object in the former, literally speaking nothing physical collapses in measurement interactions. Moreover, given that the proper dynamical description of RQM should be given in terms of the Heisenberg equation \eqref{HE}, it follows that the collapse of the wave function should be interpreted as an information update relative to a certain observer and concerning the value of some magnitude measured on a particular system.}
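A textbook illustration may help to see in what sense only the operators carry the dynamics. For a free particle of mass $m$ (a standard example, not specific to RQM), the Heisenberg equation \eqref{HE} gives
\begin{equation*}
H = \frac{p^2}{2m}: \qquad \frac{d}{dt}\,x(t) = \frac{i}{\hbar}\,[H, x(t)] = \frac{p}{m} \quad\Longrightarrow\quad x(t) = x(0) + \frac{p}{m}\,t,
\end{equation*}
\noindent so the position \emph{observable} evolves in time, while the state, i.e.\ the coding of past information, remains unchanged until a new interaction occurs.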
Hence, one can affirm that in RQM $\psi$ encodes the information referring to the values of physical magnitudes of a certain system relative to another system functioning as observer\footnote{As a consequence, in RQM also the notion of ``wave function of the universe'' present in several interpretations of quantum theory---as for instance in Everett's relative state formulation of QM, the Many Worlds interpretation, Bohmian mechanics, \emph{etc.}---is rejected.}:
\begin{quote}
[t]he state $\psi$ that we associate with a system $S$ is therefore, first of all, just a coding of the outcome of these previous interactions with $S$. Since these are actual only with respect to $A$, the state $\psi$ is only relative to $A$: $\psi$ \emph{is the coding of the information that $A$ has about $S$}. Because of this irreducible epistemic character, $\psi$ is but a relative state, which cannot be taken to be an objective property of the single system $S$, independent from $A$. Every state of quantum theory is a relative state (\cite{Smerlak:2007}, p.\ 431).
\end{quote}
In sum, we can conclude this section by saying that ``the ontology of RQM is a sparse (``flash'') ontology of relational quantum events, taken as primitive, and not derived from any ``underlying'' representation'' (\cite{Laudisa:2019b}).
\section{Mereological Bundle Theory}
\label{bundle}
\subsection{Why Bundle Theory?}
At the end of the previous section it has been stated that relational quantum mechanics implements an event ontology.\ Nonetheless, in the literature concerning RQM it is not precisely stated what interacting physical systems are supposed to be.\ Referring to this, in many places it is even claimed that RQM rejects an object-oriented ontology, favoring an ontology of processes and relations (cf.\ for instance \cite{Candiotto:2017} and references therein). Contrary to this thesis, in the opinion of the present author Rovelli's theory can be made compatible with an ontology of properties from which objects can be easily defined. Indeed, here I propose a new metaphysical definition of physical systems in the context of relational quantum mechanics using the conceptual tools of mereological bundle theory, building on previous work contained in \cite{Paul:2017}. By shedding light on what kind of physical objects populate spacetime in RQM, I aim at clarifying the ontological picture of reality provided by this theoretical framework.
In a nutshell, it is possible to summarize the central idea of the present essay by saying that physical systems in RQM should be defined as mereological bundles of observer-dependent properties varying in virtue of interactions. Given the central role played by quantum observables in RQM, it is more than plausible to claim that this framework is compatible with a form of realism towards \emph{properties} rather than substances.\ This proposal finds evidence and justification in Rovelli's work, since he stressed in several places the ontological priority of properties over states, and that RQM should be framed in the Heisenberg picture rather than in the Schr\"odinger picture, as recalled in the previous section. Moreover, in order to provide RQM with a property-oriented ontology, it is crucial to note that Rovelli's theory discards the Aristotelian conception of object---a tradition centered on the notion of substance carrying attributes which dominated Western philosophy---in favor of an ontology of properties first proposed in David Hume's \emph{Treatise of Human Nature} (\cite{Hume:2007}). In this work Hume advances a devastating criticism of the notion of substance, more specifically of \emph{bare particulars}, i.e.\ propertyless substances that are things in themselves and property bearers. Referring to this, the Scottish philosopher writes that:
\begin{quote}
I wou'd fain ask those philosophers, who found so much of their reasonings on the distinction of substance and accident, and imagine we have clear ideas of each, whether the idea of \emph{substance} be deriv'd from the impressions of sensation or of reflection? If it be convey'd to us by our senses, I ask, which of them; and after what manner? If it be perceiv'd by the eyes, it must be a colour; if by the ears, a sound; if by the palate, a taste; and so of the other senses. But I believe none will assert, that substance is either a colour, or sound, or a taste. The idea of substance must therefore be deriv'd from an impression of reflection, if it really exist. But the impressions of reflection resolve themselves into our passions and emotions; none of which can possibly represent a substance. We have therefore no idea of substance, distinct from that of a collection of particular qualities, nor have we any other meaning when we either talk or reason concerning it. The idea of a substance as well as that of a mode, is nothing but a collection of simple ideas, that are united by the imagination, and have a particular name assign'd them, by which we are able to recal, either to ourselves or others, that collection (\cite{Hume:2007}, p.\ 16).
\end{quote}
In this passage Hume seeks to show that philosophers do not have any definition of what a substance really is: neither can it be defined from senses and perceptions, nor from abstract reasoning. In particular, he stresses that we cannot think of an object without attributing properties to it. Thus, the notion of a bare particular would not actually be conceivable. More precisely, Hume claims that our beliefs about substances derive from an illusion of thought: we perceive that objects change in time and modify (often radically) their attributes; then, we are led to ascribe a temporal identity to those objects even though we assign them a set of different and contradicting properties in time. Thus, ``[i]n order to reconcile which contradictions'', Hume says, ``the imagination is apt to feign something unknown and invisible, which it supposes to continue the same under all these variations; and this unintelligible something it calls a \emph{substance}, or \emph{original and first matter}'' (\cite{Hume:2007}, p.\ 146). As \cite{Robinson:2018} notes, Hume's critique of the notion of substance is very similar to his criticism of the concept of causation: it is a projection and a tendency of our mind, which seeks to associate the things that we perceive with the passage of time. If bare particulars do not exist, what, then, are the objects populating space and time? Hume answered this question by resorting to what is now called bundle theory, according to which objects are defined as bundles---i.e.\ collections---of properties. Individual objects, therefore, are simply collections of properties co-located in spacetime, without any bare substratum carrying them.\footnote{Various bundle theories propose different criteria to individuate objects and distinguish them from mere collections of properties.\ For space reasons I will not enter into these details here. Nonetheless, I will explicitly address how objects are individuated according to mereological bundle theory, which will be employed to define what physical systems are in RQM.} For instance, in this account a billiard ball is simply the collection of its attributes, e.g.\ its roundness, its weight, its color, \emph{etc.}; however, there is no bare substance of a propertyless billiard ball bearing such properties. In the XX century Hume's proposal evolved into various bundle theories which have been endorsed and improved by several prominent philosophers.\footnote{The interested reader may refer for instance to \cite{Russell:1940}, \cite{Simons:1994}, \cite{Williams:1953}.}\ Moreover, bundle theories have recently been introduced also in the philosophy of physics in order to provide quantum mechanics and quantum field theory with a clear ontology of properties (cf.\ \cite{Lombardi:2013}, \cite{Lombardi:2016} and \cite{Kuhlmann:2010aa} respectively).\footnote{For a similar proposal see \cite{Falkenburg:2007}, where she argues in favor of an ontology of properties.}
In this regard, another motivation to discard the notion of object as substance is given by the quantum mechanical formalism itself, which does not allow one to conceive of quantum objects as independent, localized entities along the lines of classical mechanics. Indeed, QM entails essential features such as indistinguishability, contextuality and non-separability, which make quantum particles inherently different from their classical counterparts. Thus, the notion of an individual system should strongly differ between the quantum and the classical regime. For all these reasons, taking into account a different philosophical tradition, in which properties (without substances) are the fundamental elements of the ontology, can be helpful in order to provide a novel definition of what objects are in the context of quantum theory.\footnote{In this regard cf.\ \cite{Lombardi:2016}, p.\ 127.}
Following these latter examples, I am going to introduce mereological bundle theory, a metaphysical framework which will give us the necessary tools to propose a meaningful notion of object suitable for relational quantum mechanics.
\subsection{Abolishing Substances: Mereological Bundle Theory}
MBT is a metaphysical theory proposed by L.A. Paul (cf.\ \cite{Paul:2017}) providing a one-category ontology of the world in terms of properties\footnote{Here I will refer to what Paul calls the ``mosaic model'' of MBT.}; such an ontology, moreover, employs a unique world-making relation, i.e.\ mereological composition. This proposal is particularly apt for the aim of the present essay, since in Paul's view spacetime, matter and complex physical objects are ``constructed from mereological fusions of qualities, so the world is simply a vast mixture of qualities, including polyadic properties (i.e., relations)'' (\emph{ibid.}, p.\ 33). Hence, mereological bundle theory abolishes the traditional distinction between an object and its properties, given that the former notion is ontologically reduced to the latter. On the other hand, this view preserves the possibility for objects---properly understood as mereological fusions of properties---to be located in spacetime, since they are ``qualitative fusions that are fused with spatiotemporal relations or relational properties'' (p.\ 34).\footnote{In traditional bundle theories such as those defended by Russell, Simons and Williams we find a distinction between universal properties and particular objects; therefore, these should not be considered one-category ontologies, contrary to MBT. For details see \cite{Paul:2017}, p.\ 36. In this regard, the distinction between universals and particulars seems to be present also in \cite{Lombardi:2013} and \cite{Lombardi:2016}. Indeed, in these works self-adjoint operators represent physical observables, which in turn correspond to ``type-properties''. On the other hand, eigenvalues of self-adjoint operators correspond to the values of observables, and to ``case-properties'' from an ontological perspective. Here type-properties should be conceived as universal attributes which can have countless particular instances of case-properties. However, \cite{Lombardi:2016} claim that their approach remains neutral with respect to the existence of universal properties. It is worth noting that in mereological bundle theory non-instantiated universals do not exist, so that its ontology is not inflated with uninstantiated properties.} Thus, spacetime, too, is ontologically reduced to the spatial relations in which objects stand with respect to each other; MBT, therefore, endorses a relational view of spacetime.\ It is straightforward to note that this metaphysical view shares fundamental features with RQM, since both theories (i) eliminate the notion of object as propertyless substance in favor of an ontology of properties, and (ii) do not consider spacetime to be a substance \emph{per se} over and above relational (i.e.\ spatial) properties. Hence, MBT seems to be a perfect candidate to provide a correct definition of what physical objects are in the context of Rovelli's quantum theory.
In a nutshell, conforming to MBT, the elementary, fundamental building blocks of our world are properties (including relations), and everything else is mereologically composed from them. In this metaphysical theory properties are literally taken as objects (and parts of objects) bundled together via the composition relation.\footnote{The basic axioms of MBT are given in \cite{Paul:2017}, p.\ 38.\ In what follows I am not going to introduce them since they are not essential for the purpose of the present essay. Nonetheless, one can say that the primitive mereological notion at play in MBT is that of ``proper part'', which is irreflexive, asymmetric and transitive. Such a concept is complemented by the other notions of parthood and composition, which are essential in the context of classical extensional mereology.}\ Furthermore, properties---and hence objects---are individuated since they are fused with spatiotemporal relations; therefore, MBT avoids well-known issues related to the notions of co-location and compresence typical of traditional bundle theories. Indeed, in virtue of MBT's view about spacetime, spatiotemporal locations are defined in Paul's account as $n$-adic properties which individuate objects and assign them a relational identity.
In this regard, it is worth noting that mere collections of properties do not form an object in Paul's theory: properties must be bundled together using exclusively mereological composition, which includes also spatiotemporal relations, so that strange sums of properties like ``being squared'' and ``being triangular'' cannot count as a proper object in this framework. Remarkably, according to Paul's bundle theory the derivative ontological structure of the macroscopic world is reduced to the fundamental entities appearing in the vocabulary of our most advanced physical theories, such as quantum mechanics and quantum field theory.\ These fundamental entities, in turn, are constituted by inherent and relational properties that form their mereological structure. What is metaphysically interesting is that, conforming to MBT, sums of properties do not generate new ontological categories: the emergence of complex objects from the Planck to the macroscopic and cosmological scales does not entail novel ontological categories with respect to properties.
At this point, however, an objection can be raised: properties seem to be essentially different from the discrete, concrete material objects populating and interacting in spacetime, so that we should think of them as a different and separate category of being. To this worry, Paul replies that
\begin{quote}
[i]t is just a mistake to think properties cannot be chunky, concrete, complete, or independent. They are chunky, and concrete, and complete, and independent---because some of them are chunky, concrete, complete, and independent. In particular, some of the properties that are ordinary objects are chunky, concrete, complete, and independent. (As are some of the fundamental physical properties, such as field intensities. [...]) Do not be tempted by the fallacious idea that fusing is what somehow ``makes'' the ordinary object (which is a fusion of properties) chunky or substantial. That's not how fusing works: it makes many into one, it doesn't make non-substances into substances or abstract things into concrete ones (\cite{Paul:2017}, p.\ 42).
\end{quote}
\noindent Referring to this, the mosaic model of mereological bundle theory claims that extended objects are composed of two sorts of properties, qualitative and spatiotemporal, where the latter should be thought of as relational properties, as repeatedly underlined above: ``Rocks, persons, stars, and abstract objects are all fusions built from quality fusions then fused together by spatiotemporal composition. Such fusions, in addition to being complex constructions of quality and spatiotemporal fusions, are also plain vanilla property fusions, where the properties fused are the whole (distributed) properties of the object'' (\emph{ibid.}, p. 45).
Another interesting feature of mereological bundle theory concerns its treatment of identical objects, a topic which is essential for the present discussion because our aim is to apply MBT to quantum objects. What is important for our purpose is to allow that objects with the same properties exist and are also numerically distinct. For instance, we want to be able to say that all electrons are characterized by the same properties of mass, charge and spin, and that they are numerically distinct. Since MBT cannot rely on haecceities or substances in order to individuate objects and to distinguish between identical instances of the same object, it has to resort to another strategy, i.e.\ to admit as a primitive fact the existence of distinct objects which are fusions of the same properties. In this way, one has qualitatively indistinguishable, but numerically distinct objects. The problem of individuating indistinguishable but distinct objects stems from Leibniz's Principle of the Identity of Indiscernibles (PII), which affirms that if two objects $a,b$ share the same properties, then they are identical. This issue is particularly delicate and deserves some attention. It is well-known that PII has been given many interpretations, but Paul considers the following two:
\begin{enumerate}
\item $a$ and $b$ share all their properties, including those such as ``being identical to $a$'';
\item $a$ and $b$ share all their pure intrinsic and extrinsic properties.
\end{enumerate}
\noindent For our discussion it is crucial to underline that pure properties exclude primitive identity-based properties such as ``being this laptop'', ``being Olga'', ``being that car'', \emph{etc.} Now, if PII is interpreted according to (1), then every philosopher agrees that it is true; however, PII is not necessarily true if indiscernibility is interpreted in the sense of (2). The usual problem for bundle theorists, Paul argues, is that bundle theory is generally considered as entailing the truth of PII according to interpretation (2). This entailment, she adds, is due to another implicit premise, the ``Supervenience of Identity Thesis'', which states that the property ``being identical to $a$'' reductively supervenes on $a$'s pure intrinsic and extrinsic properties. Nonetheless, we stated a few lines above that pure properties do not concern identity-based properties; thus, the mereological bundle theorist should reject the Supervenience of Identity Thesis, exactly as the substance theorists do. However, if the latter appeal to the intrinsic essence of objects, the former needs to say that identity facts just ``supervene on the objects themselves: i.e., the identity of $a$ supervenes on $a$, and that's all'' (\cite{Paul:2017}, p.\ 50). This rejection leads to a new primitive assumption for MBT, ``ungrounded difference''. This primitive fact allows us to claim that, according to the mosaic model of mereological bundle theory, fusions of identical properties generate barely different objects; thus, the mereological bundle theorist can accommodate the existence of identical, but numerically distinct, objects.\footnote{In this regard, Paul discusses in depth how MBT deals with identical objects located at the very same spatial location, considering the case of $n$ bosons localized at the very same spacetime point. For space reasons we will not discuss this issue in the present essay.}
Finally, to conclude our presentation of MBT and to make it compatible with quantum theory, we have to take into account typical features of quantum particles.\footnote{Here I am slightly modifying MBT by applying the quantum formalism to it.} It is well-known that in quantum mechanics the word ``particle'' does not refer to point-like material objects with well-defined properties as in classical mechanics. This must also hold in the context of MBT: here different species of quantum particles refer to fusions of specific properties, as for instance having a certain mass, a given charge, a particular spin, \emph{etc.} Moreover, in order to be empirically adequate, this ontology of properties must take into account the contextual nature of quantum theory, following the work of \cite{Lombardi:2013} and \cite{Lombardi:2016}. These authors recall that, in virtue of the quantum formalism, not every property constituting a quantum system can have a well-defined value. More precisely, the Kochen-Specker theorem entails that it is not possible to ascribe simultaneously well-defined values to all the observables attributable to a quantum system without generating contradictions.
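A minimal formal illustration (standard quantum mechanics, recalled here only to fix ideas) is the canonical pair of position and momentum: their commutator
\begin{equation*}
[\hat{x}, \hat{p}] = i\hbar
\end{equation*}
\noindent implies the uncertainty bound $\Delta x \, \Delta p \geq \hbar/2$, so the determinables ``having position'' and ``having momentum'' can both belong to the bundle, while the corresponding determinate values cannot be simultaneously sharp.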
Thus, also in MBT not every property defining a given system will have an actual, definite value. For instance, in virtue of the quantum formalism, a certain particle will not have simultaneously well-defined position and momentum, although the properties of ``having position'' and ``having momentum'' can be ascribed to quantum objects. Consequently, the fundamental building blocks of our reality are defined as bundles of fused properties, where some of them remain metaphysically indeterminate in virtue of the contextual nature of the quantum formalism (cf.\ \cite{Calosi:2020}). For instance, an electron is defined by the fusion of its intrinsic properties like mass, charge, being spin-$1/2$, and extrinsic properties such as momentum, velocity, energy, position, \emph{etc.} Remarkably, whereas the former class of properties has the same values for every instance of an electron (i.e.\ every electron will have the same mass and charge and will be a spin-$1/2$ particle), the latter (among which there are also relational properties) depend on the specific interactions of the particular particle under consideration. In any case, the fusion of these attributes is what defines each particular electron according to mereological bundle theory. This example can be extended to every other family of particles. Composite systems, like nucleons, atoms and molecules, are mereologically---and hence ontologically---dependent on the composition of different species of particles. Let us now make the last step and apply MBT to relational quantum mechanics.
\section{A New Definition of Physical Objects in RQM}
\label{RQMBTA}
\subsection{The Mereological Bundle Theory Approach to RQM}
Having introduced the essential ideas of mereological bundle theory, in this section I will propose a new metaphysical interpretation of physical systems in relational quantum mechanics in terms of Paul's theory.
According to RQM a physical system can be characterized ``by a family of yes/no questions that can be meaningfully asked to it'' (\cite{Rovelli:1996}, p.\ 1655), where such questions are measurements\footnote{Since von Neumann's treatise on quantum mechanics (\cite{vonNeumann:1955}), it has been popular to define measurements as questions asked of quantum systems. This is still standard practice in the fields of quantum logic and quantum information.} that can be performed on physical observables, i.e.\ properties, attributable to the system under consideration. An observer $O$ may ask a potentially \emph{infinite} set of questions $Q_1, Q_2, \dots, Q_n$ of the system $s$, obtaining the string
\begin{align}
\label{string}
(e_1, e_2, \dots, e_n)
\end{align}
\noindent where each $e_i$ represents a specific answer.\ Nonetheless, it is worth stressing that RQM postulates that ``there is a maximum amount of \emph{relevant information} that can be extracted from a system'' (First postulate of RQM, \emph{ibid.}, p.\ 1657). From this principle it follows that a complete description of a physical system $s$ is given in terms of the string
\begin{align}
\label{string2}
[e_1, e_2, \dots, e_k]
\end{align}
\noindent with $k<n$, which is a subset of \eqref{string}. Given that in RQM information about systems is obtainable only through interaction, and since questions are essentially measurements on the system $s$ performed by some observer $O$, it follows that \eqref{string2} contains the information that $O$ has extracted about the system $s$. It is worth noting that \eqref{string2} provides the \emph{relevant} information about $s$, where ``[t]he relevant information is the subset of \eqref{string} [equation number adapted] obtained by discarding the $e_i$ that do not affect the outcomes of future questions'' (\emph{ibid.}, p.\ 1656).\footnote{It is important to underline that repeating a measurement of an observable on a given system and obtaining the same result will not increase the information about the system at hand.} In sum, the string \eqref{string2} is the knowledge that an observer $O$ has about the system $s$, and the subscript $k$ refers to the number of questions asked of $s$ that characterize it.\ Clearly, this string represents the description of $s$ relative to $O$; indeed, another observer $P$ may ascribe to $s$ a different list of properties, as we have seen in the third person scenario described in Section \ref{RQM}. Referring to this, it is important to underline that, contrary to the case of classical mechanics, the amount of information that an observer can extract from a quantum system is finite in virtue of the algebraic structure of quantum observables. Although the information about a particular system is finite, RQM postulates that it is always possible to acquire new information about it (Second postulate of RQM). These two claims are not in tension, since the second postulate merely refers to the fact that, even if one knows the ``state'' of a certain system, it is possible to gain new information by performing a measurement of a given observable $A$ such that the system $s$ at hand is not in an eigenstate of $A$. Given that the amount of information has an upper bound, as imposed by the first postulate,
\begin{quote}
when new information is acquired, part of the old relevant information must become irrelevant. In particular, if a new question $Q$ (not determined by the previous information gathered) is asked, then $O$ should lose (at least) one bit of the previous information. Thus, after asking the question $Q$, new information is available, but the total amount of information about the system does not exceed $N$ bits (\emph{ibid.}, p.\ 1658).
\end{quote}
\noindent Thus, in agreement with the Heisenberg picture and the algebraic approach to quantum mechanics, also in Rovelli's theory a system is defined via the family of observables attributed to it. For instance, meaningful questions concerning a quantum particle may refer to the possibility of finding it in a certain spatial region, to its momentum, or to its spin in some direction, \emph{etc.}
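A standard illustration of how the two postulates interact is the spin-$1/2$ case, where the maximal amount of relevant information is a single bit; the choice of observables below is ours, made only for concreteness. Suppose $O$ measures $\sigma_z$ on $s$ and obtains $+1$, so that the relevant string is $[\sigma_z = +1]$. If $O$ then asks the new question ``is $\sigma_x = +1$?'', the Born rule gives
\begin{equation*}
P(\sigma_x = \pm 1 \,|\, \sigma_z = +1) = |\langle \pm_x | +_z \rangle|^2 = \frac{1}{2},
\end{equation*}
\noindent so genuinely new information is acquired, in agreement with the second postulate; but once the answer is obtained, $\sigma_z$ no longer has a determinate value relative to $O$, so the old bit has become irrelevant and the total relevant information still does not exceed $N=1$ bit, in agreement with the first postulate.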
As a consequence, since in relational quantum mechanics physical systems are defined via a specific set of observables, and the latter are usually associated with their instantiated properties, it is plausible to say that such systems may be characterized as mereological bundles of properties, as anticipated in Section 3.1. In this manner RQM can be provided with a property-oriented ontology where objects are straightforwardly defined, because in MBT properties do correspond to objects, as already stated. However, given that the values of the properties of quantum systems in relational quantum theory are observer-dependent, we define the objects of this theoretical framework as mereological bundles of properties whose values depend on the perspective from which they are observed. Referring to this, one should say more precisely that there is a set of inherent properties characterizing a certain species of particles---such as mass, charge, spin, \emph{etc.}---which are not observer-dependent, so that their values remain constant.\footnote{If the intrinsic properties of quantum particles could change, then different observers could observe particles with diverse identities. This, however, would have empirically disastrous consequences.} On the contrary, the values of extrinsic properties (as for instance energy, position, momentum, \emph{etc.}) are observer-dependent and change relative to specific observers. Furthermore, since these bundles of properties are subject to the formalism of QM, it follows that not all observables associated with a certain system---that is, not all properties constituting the system---can have definite values; this fact must be respected to avoid contradictions with the Kochen-Specker theorem and the contextuality of quantum theory.
In sum, this proposal suggests that physical systems in RQM should be defined as mereological bundles of properties, where (i) the intrinsic properties characterizing a certain species of particles have constant values, (ii) the extrinsic qualities take definite values relative to particular observers, and (iii) not every observable defining a system can have a definite value, in virtue of the contextual nature of the quantum formalism. To give an example, in RQM an electron is composed of its inherent properties characterizing this species of particles, as for instance its mass, its charge, having spin-$1/2$, \emph{etc.}, and of its extrinsic properties such as momentum, energy, angular momentum, or position. These latter have relational and contextual values which depend on the specific interactions of the particle under consideration with different observers; not all these extrinsic properties have definite values, in agreement with the Kochen-Specker theorem. Clearly, all electrons are composed of the same intrinsic and extrinsic properties fused together, where the former have constant values for every instance of a single electron, and the latter depend on the specific interactions of individual electrons. Since MBT admits the existence of numerically distinct objects which are bundles of the same properties, we can meaningfully speak of the class, or family, of electrons. This example can be easily extended to every family of quantum particles.
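Purely for illustration, this definition may be rendered schematically as follows, where the fusion symbol $\sqcup$ and the observer superscripts are notational conventions of ours, not part of RQM's formalism:
\begin{equation*}
e^- \;=\; \underbrace{m_e \sqcup q_e \sqcup s_{1/2}}_{\text{intrinsic, observer-independent}} \;\sqcup\; \underbrace{x^{(O)} \sqcup p^{(O)} \sqcup E^{(O)} \sqcup \dots}_{\text{extrinsic, relative to an observer } O}
\end{equation*}
\noindent where the extrinsic entries take definite values only upon interaction with $O$, and only to the extent permitted by contextuality.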
A remarkable consequence of this proposal is that, under this reading, RQM is committed to a realism towards properties; thus, from the perspective of mereological bundle theory, one can properly speak about material objects in motion in spacetime, whose attributes vary in relation to different observers. Hence, not only one can define what physical systems are according to relational quantum mechanics, but also it is possible to clarify its event ontology. Indeed, with this new approach to RQM we can properly claim that there are interactions among physical objects in spacetime. The notable feature of this proposal is that RQM can speak about interactions without resorting to the notion of bare substance, which is abolished in this theoretical framework.
\subsection{Relational QM and Structural Realism}
\label{MSR}
In a recent and interesting paper Laura Candiotto argues that relational quantum mechanics is an instantiation of the ontology proposed by Ontic Structural Realism (OSR), according to which the fundamental building blocks of nature are not objects, but relations (cf.\ \cite{Candiotto:2017}). More precisely, Candiotto claims that, according to RQM, relations constitute the fundamental elements of our reality, i.e.\ relations are the only primitives in this interpretation of QM. Furthermore, given the event ontology of Rovelli's theory, Candiotto says that in interactions physical systems change their properties, and the net of such physical interactions constitute our world. Hence, she concludes that in RQM only relations are fundamental, relata are not. Quoting Rovelli himself, Candiotto affirms that in RQM there are no objects entering into a given relation: it is the relation which generates objects. From this viewpoint, given that in RQM (i) there is an ontological priority of relations over relata, and (ii) that relations form real structures, Candiotto claims that the natural framework to cast relational quantum mechanics and to understand its metaphysical content is OSR, a well-known philosophical perspective which argues in favor of an ontology of pure structures.\footnote{Cf.\ \cite{Ladyman:2007b} and \cite{Ladyman:2014} for details on this perspective.}
Although I agree with Candiotto's primary aim, i.e.\ to argue that RQM should be considered a realist quantum theory, I disagree with the claim that relational quantum mechanics denies the notion of object \emph{altogether}. In the opinion of the present author Candiotto is absolutely right in pointing out that ``RQM implies a critique of the notion of object.\ The objects denied by the RQM are the objects' things which characterize naive realism; denying their existence does not imply that there is nothing or nothing is real. The challenge is to think reality in relational terms, engendering thus a new way of understanding ``objects' '' (\cite{Candiotto:2017}, p.\ 6).
Indeed, as has been argued in the previous sections, Rovelli's theory demands a notion of object that is completely detached from the classical concept of a bare substance carrying attributes. Furthermore, denying the existence of such substances in relational quantum mechanics does not entail that Rovelli's theory discards the notion of object \emph{tout court}, as has been shown by the ontology of properties obtained by applying MBT to RQM. Thus, a property-oriented ontology can be implemented in this framework, so that the notion of object can be retained. As a consequence, then, there is argumentative room to claim that relations are fundamental in RQM, \emph{as well as} objects (if this notion is properly understood). This conclusion, in turn, entails that RQM would be compatible with a Moderate form of Structural Realism (MSR) as presented in \cite{Esfeld:2011aa}, where not only relations are fundamental and primitive notions of the theory, but also the \emph{relata}, i.e.\ the concrete physical objects---in this case mereological bundles of properties---standing in such relations, are so.
More precisely, according to MSR there is no ontological priority between relations and relata: both are considered equally fundamental. Interestingly, supporters of this form of structural realism claim that the distinction between objects and relations---and properties more generally---is not ontological but only conceptual, in the sense that although such a distinction is present in our language, it does not reflect a real dichotomy of the world. Consequently, as \cite{Esfeld:2011aa} argue:
\begin{quote}
there is no point in enquiring into the relationship between objects and properties, including relations or structures, and, in particular, to talk in terms of a mutual ontological dependence between objects and properties, including relations or structures, or an ontological priority of the one over the others. There are not two types of entities, objects and properties including relations or structures, that entertain a certain relationship of ontological dependence. The dependence is only conceptual.
\end{quote}
In sum, both relata and relations exist, and there is no question concerning the ontological priority of one category over the other. This feature of MSR reflects what has been proposed in the previous section, where it has been argued that systems in RQM are mereological fusions of properties and relations.\ Given that objects in this bundle theory \emph{are} properties and relations, it follows that in this new interpretation of RQM both relata and relations are equally fundamental. Hence, in this new reading of Rovelli's theory, a structure consists of a network of physical relations among objects interacting in spacetime. Thus, we conclude that since it is possible to provide relational quantum mechanics with a property-oriented ontology where objects are easily defined, one should admit the fundamentality of these objects together with relations.
\vspace{2mm}
In conclusion, from what has been argued so far, it is fair to claim that relational quantum mechanics should be included among the full-fledged realistic interpretations of quantum mechanics. Referring to this, it is worth noting that Rovelli's theory postulates the existence of mind-independent, concrete, physical objects interacting in spacetime. In this framework physical objects exist per se; there is no need for minds with some sort of creative power in the ontology of RQM. Moreover, this theory clearly shows the three-dimensional character of scientific realism, as \cite{Chakravartty:2017}, Section 1.2, puts it. Indeed, (i) RQM endorses a form of metaphysical realism, i.e.\ the idea that the world (and the objects that compose it) exists independently of human minds perceiving or observing it; (ii) from a semantic perspective, a supporter of RQM will believe in the statements, explanations and predictions of this theory, in the sense that relational quantum mechanics makes (approximately) true statements about the features of reality; finally, (iii) the explanations of physical phenomena in RQM provide actual knowledge of the world to the supporter of the theory. Hence, the metaphysical, semantic and epistemological characters of scientific realism are maintained in the context of Rovelli's theory.
Hence, since RQM can be provided with a property-oriented ontology which is compatible with moderate structural realism, and since the three fundamental aspects of scientific realism are respected, one can safely conclude that relational quantum mechanics must be included in the list of realist approaches to quantum theory.
\section{Conclusion}
\label{conc}
Elaborating on Paul's mereological bundle theory and following the proposals for a quantum ontology of properties advanced by \cite{Lombardi:2013} and \cite{Lombardi:2016}, in the present essay I proposed a new metaphysical interpretation of physical systems in the context of relational quantum mechanics, characterizing them as mereological bundles of properties. Essentially, this reading of RQM defines quantum systems as bundles of properties which can take contextual values with respect to particular observers. Such an ontology of properties is motivated by the principles of relational quantum mechanics, since in this theory observable quantities assume a primary ontological status, following the Heisenberg picture and the algebraic approach to QM. Furthermore, this new interpretation elucidates the event ontology of Rovelli's theory, making it clear what kind of items populate the world in RQM.
As a consequence of this proposal, RQM is made compatible with scientific realism, if the latter is intended to be about properties. More precisely, in this essay it has been claimed that RQM is compatible with a moderate form of structural realism, since objects, i.e.\ the \emph{relata}, can also be meaningfully defined in this theoretical framework. Thus, in the opinion of the present author, relational quantum mechanics should be incorporated among the full-fledged realistic interpretations of quantum theory.
Let me conclude by saying that although in this paper I tried to provide RQM with an object-oriented ontology, a lot of philosophical work remains to be done in order to clarify the rich metaphysical implications of this theory, as for instance what interpretation of probabilities is best suited for this framework, the problem of locality, the extension of this approach to the standard model of particle physics, \emph{etc.} This is a positive fact, since both physicists and philosophers will have the opportunity to discuss the foundations of quantum theory from a new, hopefully fruitful perspective.
\vspace{5mm}
\noindent \textbf{Acknowledgements:} I warmly thank Carlo Rovelli, Olimpia Lombardi, Claudio Calosi and his group at the University of Geneva, Cristian Lopez and Olga Sarno for helpful and positive comments on previous drafts of this paper. This research is financially supported by the Swiss National Science Foundation (Grant No. 105212-175971).
\clearpage
\bibliographystyle{apalike}
\section{Introduction}
Manin \cite{Manin3fold} proposed to study (uni)rationality
of Fano threefolds over nonclosed fields, in situations where geometric
(uni)rationality is known.
In cases where the Picard group is generated
by the canonical class, i.e., those of rank and index one, he assigned an
`Exercise' \cite[p.~47]{Manin3fold} to explore the rationality of those of
degree $12$, $16$, $18$, and $22$. See \cite[p.~215]{IskPro} for a list of
geometrically rational Fano threefolds of rank one.
We have effective criteria for deciding the rationality of surfaces
over nonclosed fields -- the key invariant is the Galois action on the
geometric Picard group. This invariant is trivial for Fano threefolds
considered above.
Kuznetsov and Prokhorov are pursuing the rationality
question from the perspective of
derived categories, relating the derived category of the threefold to
categories of twisted sheaves on auxiliary curves. Here we take a
more geometric approach based on the analysis of torsors over the
intermediate Jacobians presented in \cite{HTcycle,BW}. Throughout, we work over
a field $k$ of characteristic zero. Our main result is:
\begin{theo} \label{theo:main}
Let $X$ be a smooth Fano threefold of degree $18$ defined over $k$ and
admitting a $k$-rational point. Then $X$ is rational over $k$ if and only if
$X$ admits a
conic over $k$.
\end{theo}
Here a conic means a geometrically connected curve of degree two -- possibly
nonreduced or reducible.
Much recent work on rationality has focused on applications of specialization
techniques to show the {\em failure} of (stable) rationality
\cite{VoisinInv,CTP,HTK,Totaro,HPTActa,Schreieder,NS,KTspecialize}. Here we use it to
{\em prove} rationality, avoiding complicated case-by-case arguments for special
geometric configurations; see Theorem~\ref{theo:speciality}. This technique
was also used to analyze rationality for cubic fourfolds \cite{RS}.
\
{\bf Acknowledgments:} The first author was partially supported by NSF grant
1701659 and the Simons Foundation.
\section{Projection constructions}
Let $X$ be a smooth Fano threefold of degree $18$ over $k$.
\subsection{Projection from lines}
The variety of lines $R_1(X)$ is nonempty and connected of pure
dimension one \cite[Prop.~4.2.2]{IskPro}
and sweeps out a divisor in $X$ with class $-3K_X$ \cite[Th.~4.2.7]{IskPro}.
For generic $X$, $R_1(X)$ is a smooth curve of genus
ten \cite[Th.~4.2.7]{IskPro}.
If $R_1(X)$ is smooth then $X$ admits no nonreduced conics
\cite[Rem.~2.1.7]{KuzPro}.
Suppose that $\ell \subset X$ is a line and write $\widetilde{X}$
for the blow-up of $X$ along $\ell$. Then double projection along
$\ell$ induces a birational map
$$\widetilde{X} \dashrightarrow {\widetilde{X}}^+ \rightarrow Y,$$
where $Y\subset \mathbb P^4$ is a smooth quadric hypersurface \cite[Th.~4.3.3]{IskPro}.
This flops the three lines incident to $\ell$ and contracts a divisor
$$D\in |-2K_{{\widetilde{X}}^+} - 3 E^+|$$
to a smooth curve $C\subset Y$ of degree seven and genus two.
Since $Y$ admits a $k$-rational point it is rational over $k$; the
same holds true for $X$.
\begin{prop}
If $X$ is a Fano threefold of degree $18$ admitting a line over $k$
then $X$ is rational.
\end{prop}
\subsection{Projection from conics}
\label{subsect:conic}
We discuss the structure of the variety $R_2(X)$ of conics on $X$:
\begin{itemize}
\item{$R_2(X)$ is nonempty of pure dimension two \cite[Th.~4.5.10]{IskPro}.}
\item{$R_2(X)$ is geometrically isomorphic to the Jacobian of a
genus two curve $C$ \cite[Prop.~3]{IlMa} \cite[Th.~1.1.1]{KuzPro}.}
\item{
Through each point of $X$ there pass finitely many conics
\cite[Lem.~4.2.6]{IskPro}; indeed, through a generic such point
we have nine conics \cite[2.8.1]{Takeuchi}.}
\item{Given a conic $D \subset X$, let $\widetilde{X}$ denote the
blowup of $X$ along $D$. Then double projection along $D$ induces
a fibration \cite[Cor.~4.4.3,Th.~4.4.11]{IskPro}
$$X \dashrightarrow \widetilde{X}^+ \stackrel{\phi}{\rightarrow} \mathbb P^2$$
in conics with quartic degeneracy curve.}
\end{itemize}
\subsection{Projection from points}
\label{subsect:projpoint}
We recall the results of Takeuchi \cite{Takeuchi} presented in
\cite[Th.~4.5.8]{IskPro}.
In the following, $\widetilde{X}$ denotes the blowup of $X$ at a point $x\in X(k)$,
with exceptional divisor $E$.
\begin{prop} \label{prop:Takeuchi}
Suppose we have a point $x\in X(k)$ and let $\widetilde{X}$ denote
the blowup of $X$ at $x$. We assume that
\begin{itemize}
\item{$x$ does not lie on a line in $X$;}
\item{there are no effective divisors $D$ on $\widetilde{X}$ such that
$(K_{\widetilde{X}})^2 \cdot D =0$.}
\end{itemize}
Then triple-projection from $x$ gives a fibration
$$X \stackrel{\sim}{\dashrightarrow} {\widetilde{X}}^+ \stackrel{\phi}{\rightarrow} \mathbb P^1$$
in sextic del Pezzo surfaces.
\end{prop}
We offer a more detailed analysis of double projection from a point
$x\in X(k)$ not on a line.
By \cite[\S~4.5]{IskPro}
the projection morphism
$$\tilde{\phi}: \widetilde{X} \rightarrow \mathbb P^7$$
is generically finite onto its image $\overline{X}$
and the Stein factorization
$$ \widetilde{X} \stackrel{\phi'}{\rightarrow} X'
\stackrel{\overline{\phi}}{\rightarrow} \overline{X}$$
yields a Fano threefold of genus six with canonical Gorenstein singularities.
The condition precluding effective divisors $D$
with $(K_{\widetilde{X}})^2 D=0$ means that $\overline{\phi}$ admits no
exceptional divisors.
The nontrivial fibers of $\phi'$ are all isomorphic to $\mathbb P^1$'s, with
the following possible images in $X$:
\begin{enumerate}
\item{a conic in $X$ through $x$;}
\item{a quartic curve of arithmetic genus one
in $X$, spanning a $\mathbb P^3$, with a singularity of multiplicity two at $x$;}
\item{a sextic curve of arithmetic genus two
in $X$, spanning a $\mathbb P^4$, with a singularity of multiplicity three at $x$.}
\end{enumerate}
Moreover, if $\phi'$ does not contract any surfaces then the exceptional
divisor $E$ over $x$ is embedded
in $\mathbb P^7$ as a Veronese surface.
The quartic curves on $X$ with node at a fixed point
$x$ have expected dimension $0$. The sextic curves on $X$ with
transverse triple point at a fixed point $x$ have expected dimension
$-1$. Indeed, we have:
\begin{prop} \label{prop:howgeneric}
\cite[Prop.~4.5.1]{IskPro}
Retain the notation above.
For a generic $x\in X$
\begin{itemize}
\item{the quartic and sextic curves described above do not occur;}
\item{$\phi'$ is a small contraction;}
\item{the rational map $X \stackrel{\sim}{\dashrightarrow} {\widetilde{X}}^+$
factors as follows
\begin{enumerate}
\item{blow up the point $x$;}
\item{flop the nine conics through $x$;}
\end{enumerate}
}
\item{$\phi$ restricts to the proper transform $E^+$ of $E$
as an elliptic fibration associated with cubics
based at nine points.}
\end{itemize}
\end{prop}
\section{Unirationality constructions}
In this section, we consider the following question inspired by
\cite[p.~46]{Manin3fold}:
\begin{ques}
Let $X$ be a Fano threefold of degree $18$ over $k$.
Suppose $X(k) \neq \emptyset$. Is $X$ unirational over $k$?
\end{ques}
From our perspective, unirationality is more delicate than rationality
as we lack a specialization theorem for smooth families in this context.
We cannot apply the theorem of \cite{KTspecialize} -- as we do
in the proof of Theorem~\ref{theo:speciality} -- to reduce to
configurations in general position.
The geometric constructions below highlight some of the issues that arise.
\subsection{Using a point}
\begin{prop} \label{prop:unirat3}
Let $X$ be a Fano threefold of degree $18$ over $k$ admitting
a point $x\in X(k)$ satisfying the condition in Proposition~\ref{prop:Takeuchi}.
Then $X$ is unirational over $k$ and rational points are Zariski dense.
\end{prop}
\begin{proof}
We retain the notation from Proposition~\ref{prop:Takeuchi}.
Note that the proper transforms of lines $L \subset E^+$ give trisections
of our del Pezzo fibration
$$\phi: {\widetilde{X}}^+ \rightarrow \mathbb P^1.$$
Basechanging to $L$ yields
$$\phi_L: {\widetilde{X}}^+ \times_{\mathbb P^1}L \rightarrow L,$$
a fibration of sextic del Pezzo surfaces with a section.
Thus the generic fiber of $\phi_L$ is rational over $k(L)$ by
\cite[p.~77]{Manin66}.
Since $L\simeq \mathbb P^1$, the total space of the fibration is rational
over $k$. As it dominates ${\widetilde{X}}^+$, we conclude that
$X$ is unirational.
\end{proof}
If the rational points are Zariski dense then we can find one
where Proposition~\ref{prop:Takeuchi} applies.
However, if we are given only a single rational point on $X$
we must make
a complete analysis of degenerate cases as partly described in
Section~\ref{subsect:projpoint}. In addition, we must consider cases
where there exist lines over $\bar{k}$
passing through our given rational point.
For instance, consider the case where a single line
$x\in \ell \subset X$. To resolve the double projection
at $x$, we must take the following steps:
\begin{itemize}
\item{blow up $x$ to obtain an exceptional divisor $E_1\simeq \mathbb P^2$;}
\item{blow up the proper transform $\ell'$ of the line $\ell$
with
$$N_{\ell'}= \mathcal O_{\mathbb P^1}(-1) \oplus \mathcal O_{\mathbb P^1}(-2)$$
to obtain an exceptional divisor $E_2 \simeq \mathbb F_1$.}
\end{itemize}
Let $E_1'$ denote the proper transform of $E_1$ after the
second blowup.
The linear series resolving the double
projection is
$$h - 2E'_1 - E_2$$
which takes $E'_1 \simeq \mathbb F_1$ to a cubic scroll,
$E_2$ to a copy of $\mathbb P^2$, and the $(-1)$-curve on $E_2$ to an
ordinary singularity on the image.
The induced contraction
$$\phi': \widetilde{X} \rightarrow X' \subset \mathbb P^7$$
has degree
$$(h - 2E'_1 - E_2)^3 = 10.$$
Thus $X'$ admits a `degenerate Veronese surface' consisting of a
cubic scroll and a plane meeting along a line
coinciding with the $(-1)$-curve of the scroll; $X'$ has an ordinary
singularity along that line.
Of course, the most relevant degenerate cases for arithmetic purposes
involve multiple lines through $x$ conjugated over the ground field.
It would be interesting to characterize the possibilities.
\subsection{Using a point and a conic}
Here is another approach:
Let $X$ admit
a point $x\in X(k)$ and a conic $D\subset X$ defined over $k$.
The results recalled in Section~\ref{subsect:conic} imply that
$X$ is birational over $k$ to
$$\phi: \widetilde{X}^+ \rightarrow \mathbb P^2,$$
a conic bundle degenerating over a plane quartic curve $B$.
Suppose there exists a rational point on $\widetilde{X}^+$
whose image $p\in \mathbb P^2$ is not contained in the degeneracy curve.
Consider the pencil of lines through $p$. The corresponding
pencil of surfaces on $\widetilde{X}^+$ consists of conic bundles
over $\mathbb P^1$ with four degenerate fibers, and the resulting
fibration admits a section. Such a surface is either
isomorphic to a quartic del Pezzo surface or birational to such
a surface \cite[p.~48]{KST}. It is a classical fact that a quartic
del Pezzo surface with a rational point is unirational. This yields the unirationality of $X$ over $k$.
The argument works even when $p$ is a smooth point or node
of $B$. Here
we necessarily have higher-order ramification over the nodes -- this is because
the associated generalized Prym variety is compact -- which
we can use to produce a section of the resulting pencil
of degenerate quartic del Pezzo surfaces. However, there is
trouble when $p$ is a cusp of $B$.
\section{Rationality results}
Our first statement describes the rationality construction under favorable
genericity assumptions:
\begin{prop} \label{prop:genericityrat}
Let $X$ be a Fano threefold of degree $18$ over $k$. Assume that:
\begin{itemize}
\item{there exists an $x\in X(k)$ satisfying the conditions of
Proposition~\ref{prop:Takeuchi} so that $X$ is birational to a fibration
$\phi:{\widetilde{X}}^+ \rightarrow \mathbb P^1$
in sextic del Pezzo surfaces;}
\item{there exists an irreducible curve $M\subset X$, disjoint from the indeterminacy of
$X\stackrel{\sim}{\dashrightarrow} {\widetilde{X}}^+$, with degree prime to three.}
\end{itemize}
Then $X$ is rational over $k$.
\end{prop}
\begin{proof}
We saw in the proof of
Proposition~\ref{prop:unirat3} that the generic fiber $S$ of $\phi$ is a
sextic del Pezzo surface admitting a rational point of a degree-three
extension. Our assumptions imply that
$S\cdot M = \deg(M)$ which is prime to three, so applying \cite[p.~77]{Manin66}
we conclude that $S$ is rational over $k(\mathbb P^1)$ and $X$ is rational
over $k$.
\end{proof}
We now show these genericity assumptions are not necessary:
\begin{theo} \label{theo:speciality}
Let $X$ be a Fano threefold of degree $18$ over $k$. Assume that $X$ admits a
rational point $x$
and a conic $D$, both defined over $k$. Then $X$ is rational.
\end{theo}
\begin{proof}
Let $B$ denote the Hilbert scheme of all triples
$(X,x,D)$ of objects described in the statement. This is smooth and connected
over the moduli stack of degree $18$ Fano threefolds; indeed, we saw in
Section~\ref{subsect:conic} that the
parameter space of conics on $X$ is an abelian surface. The moduli
stack itself is a smooth Deligne-Mumford stack since
Kodaira vanishing gives $H^i(T_X)=0$ for $i=2,3$ and $H^0(T_X)=0$ by
\cite{ProkhorovAut}. The classification of Fano threefolds shows that
the moduli stack is connected. Thus $B$ is smooth and connected.
Consider the
universal family
$$(\mathcal X\stackrel{\pi}{\rightarrow} B, \mathbf{x}:B\rightarrow \mathcal X, \mathcal D\subset \mathcal X), $$
where $\pi$ is smooth and projective. The generic fiber of
$\pi$ is rational over $k(B)$
as the genericity conditions of Proposition~\ref{prop:genericityrat} are
tautologically satisfied -- see Proposition~\ref{prop:howgeneric} for details.
The specialization theorem \cite[Th.~1]{KTspecialize} implies that every
$k$-fiber of $\pi$ is rational over $k$. This theorem assumes that the base
is a curve. However, our parameter space $B$ is smooth so Bertini's Theorem
implies that each $b\in B(k)$ may be connected to the generic point by a curve
smooth at $b$.
\end{proof}
\section{Analysis of principal homogeneous spaces}
\subsection{Proof of Theorem~\ref{theo:main}}
One direction is Theorem~\ref{theo:speciality}; we focus on the
converse.
Suppose that $X$ is rational over the ground field.
Let $C$ be the genus two curve whose Jacobian $J(C)$ is isomorphic
to the intermediate Jacobian $IJ(X)$ over $k$.
The mechanism of \cite[\S~5]{HTcycle} gives a principal homogeneous space
$P$ over $J(C)$ with the property that the Hilbert scheme $\mathcal{H}_d$
parametrizing
irreducible curves of degree $d$ admits a morphism
$$\mathcal{H}_d \rightarrow P_d$$
descending the Abel-Jacobi map to $k$,
where $[P_d]=d[P]$ in the Weil-Ch\^atelet group of $J(C)$.
By Theorem~22 of \cite{HTcycle}, if $X$ is rational then
$P \simeq \operatorname{Pic}^i(C)$ for $i=0$ or $1$.
In particular, we have
$$R_1(X) \hookrightarrow P$$
and by the known results of Section~\ref{subsect:conic}
$$R_2(X) \simeq P_2 \simeq J(C).$$
Indeed, since $C$ has genus two, translation by the degree-two canonical class $K_C$ gives identifications
$$J(C) = \operatorname{Pic}^0(C) \simeq \operatorname{Pic}^2(C),$$
which gives the desired interpretation of $P_2$ whether
$P=\operatorname{Pic}^0(C)$ or $\operatorname{Pic}^1(C)$. As a consequence, $R_2(X)$
admits a $k$-rational point.
\subsection{A corollary to Theorem~\ref{theo:main}}
Retain the notation of the previous section.
Without assumptions on the existence of points or conics
on $X$ defined over $k$, we know that
$$18[P]=0 \text{ and } 9[R_2(X)]=0$$
in the Weil-Ch\^atelet group.
This allows us to deduce an extension of our main result:
\begin{coro}
Let $X$ be a Fano threefold of degree $18$ over $k$
with $X(k)\neq \emptyset$. Suppose that $X$ admits a curve of
degree prime to three, defined over $k$. Then $X$ is rational.
\end{coro}
Our assumption means that $2[P]=0$, whence $[R_2(X)]=0$ and
$X$ admits a conic defined over $k$. Hence Theorem~\ref{theo:main}
applies.
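Spelled out, the divisibility argument is an elementary computation in the Weil-Ch\^atelet group: a curve of degree $d$ defined over $k$ yields a $k$-point of $P_d$, i.e.\ $d[P]=0$, and therefore
$$
d[P]=0, \quad 18[P]=0 \ \Longrightarrow \ \gcd(d,18)\,[P]=0, \qquad 3 \nmid d \ \Longrightarrow \ \gcd(d,18)\mid 2 ,
$$
so that indeed $2[P]=0$.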
\subsection{Generic behavior}
There are examples over function fields where the
principal homogeneous space is not annihilated
by two:
\begin{prop}
Over $k=\mathbb C(\mathbb P^2)$, there exist examples of $X$ such that
the order of $[P]$ is divisible by three.
\end{prop}
\begin{proof}
Let $S$ be a complex K3 surface with $\operatorname{Pic}(S)=\mathbb Z h$, with
$h^2=18$. Mukai \cite{Mukai} has shown that $S$
is a codimension-three linear section of a homogeneous
space $W \subset \mathbb P^{13}$ arising as the closed orbit
for the adjoint representation of $G_2$
$$S = \mathbb P^{10} \cap W.$$
Consider the associated net of Fano threefolds
$$
\varpi:\mathcal X \rightarrow \mathbb P^2,
$$
obtained by intersecting $W$ with codimension-two linear
subspaces
$$\mathbb P^{10} \subset \mathbb P^{11} \subset \mathbb P^{13}.$$
Write $X$ for the generic fiber over $\mathbb C(\mathbb P^2)$.
Let $R_2(\mathcal X/\mathbb P^2)$ denote the relative variety of conics.
This was analyzed in \cite[\S~3.1]{IlMa}:
The conics in fibers of $\varpi$ cut out pairs of points on $S$,
yielding a birational identification and natural abelian fibration
$$S^{[2]} \stackrel{\sim}{\dashrightarrow} R_2(\mathcal X/\mathbb P^2)
\stackrel{\psi}{\rightarrow} \mathbb P^2.$$
The corresponding principal homogeneous space has order
dividing nine; its order is divisible by three if it is
nontrivial.
These fibrations are analyzed in more depth in \cite[\S~3.3]{MSTVA}
and \cite{KR}.
Let $T$ denote the moduli space of rank-three stable
vector bundles $V$ on $S$ with $c_1(V)=h$ and $\chi(V)=6$.
Then we have
\begin{itemize}
\item{$T$ is a K3 surface of degree two;}
\item{the primitive cohomology of $S$ arises as an index-three
sublattice of the primitive cohomology of $T$
$$H^2(S,\mathbb Z)_{prim} \subset H^2(T,\mathbb Z)_{prim},$$
compatibly with Hodge structures;}
\item{the Hilbert scheme $T^{[2]}$ is birational to
the relative Jacobian
fibration of the degree-two linear series on $T$
$$\mathcal J \rightarrow \mathbb P^2;$$
}
\item{the relative Jacobian fibration of $\psi$
is birational to $\mathcal J$ over $\mathbb P^2$.}
\end{itemize}
The last statement follows from \cite[p.~486]{Sawon} or
\cite[\S~4]{Mark}: The abelian fibration $\psi$
is realized as a twist of the
fibration $\mathcal J \rightarrow \mathbb P^2$; the twisting data is
encoded by an element $\alpha \in \operatorname{Br}(T)[3]$ annihilating
$H^2(S,\mathbb Z)_{prim}$ modulo three.
Now suppose that $\psi$ had a section. Then $\mathcal J$ and
$S^{[2]}$ would be birational holomorphic symplectic varieties.
The Torelli Theorem implies that
their transcendental degree-two cohomology
-- $H^2(T,\mathbb Z)_{prim}$ and $H^2(S,\mathbb Z)_{prim}$ respectively --
are isomorphic. This contradicts our computation above.
\end{proof}
\subsection{Connections with complete intersections?}
Assume $k$ is algebraically closed and $X$ a Fano threefold of degree
$18$ over $k$. Kuznetsov, Prokhorov, and Shramov \cite{KuzPro} have
pointed out the existence of a smooth complete intersection of
two quadrics $Y\subset \mathbb P^5$ with
\begin{equation} \label{eqn:XYiso}
R_1(Y) \simeq R_2(X).
\end{equation}
Both have intermediate Jacobian isomorphic to the Jacobian
of a genus two curve $C$.
Now suppose that $X$ and $Y$ are defined over a nonclosed field $k$
with $IJ(X)\simeq IJ(Y)$. In general, we would not expect
$R_2(X)$ and $R_1(Y)$ to be related as principal homogeneous
spaces; for example, we generally have
$9[R_2(X)]=0$ and $4[R_1(Y)]=0$ (see \cite{HT2quad}).
Verra \cite{VerraSlide} has found a direct connection between
complete intersections of quadrics and {\em singular} Fano threefolds
of degree $18$. Suppose we have a twisted cubic curve
$$R \subset Y \subset \mathbb P^5,$$
which forces $Y$ to be rational.
Consider the linear series of quadrics vanishing along $R$;
the resulting morphism
$$\operatorname{Bl}_R(Y) \rightarrow \mathbb P^{11}$$
collapses the line residual to $R$ in $\operatorname{span}(R)\cap Y$.
Its image $X_0$ is a nodal Fano threefold of degree $18$.
\bibliographystyle{alpha}
\section{Introduction}
Modeling and simulation of traffic flow on road networks has been investigated intensively using hyperbolic partial differential equations.
Different models have been used, ranging from scalar conservation laws like the Lighthill-Whitham-Richards model through
models using systems of conservation laws, \cite{P79, AR,Ber,Hel,G02,Dag1,Dag2}, to kinetic descriptions of the flow \cite{Hel,HPRV20,KW97,PSTV17,FT13}.
Derivations of these models from the underlying models in such hierarchies have been discussed as well, see, \cite{AKMR,Hel,KW00,B11}, for a non-exhaustive list of references.
To obtain a model for the dynamics on the full network, all these models have to be supplemented with coupling conditions at the nodes of the network.
Coupling conditions for scalar conservation laws and systems of conservation laws on networks have been discussed in many papers, see, for example, \cite{Cor17,BNR14,HR95,GPBook,LS02,CGP05,BHK06a,CG08,CM08,G10,KCGG}.
Kinetic and relaxation equations on networks have been considered, for example, in \cite{HM09,BKKP16}.
A procedure to derive coupling conditions for macroscopic equations from the underlying ones of the kinetic or relaxation equation has been discussed for linear systems in \cite{BK18c} using an asymptotic analysis of the situation near the nodes. A simple nonlinear case has been treated in \cite{BK18b}.
To explain the general procedure in more detail, we consider a relaxation equation in 1D involving a scaling parameter $\epsilon$, which converges for $\epsilon \rightarrow 0$ to an associated scalar conservation law for traffic flow.
If such equations are considered on a network, it is sufficient to study a single coupling point or node, where coupling conditions are required.
Suitable coupling conditions have to be imposed for the relaxation problem at each node.
If $\epsilon$ is sent to zero, layers near the junctions can arise.
To consider the limit $\epsilon \rightarrow 0$, one has to proceed similarly
as in the case of boundary value problems, where a complete picture of the convergence is only obtained once boundary and initial layers are investigated.
We refer to \cite{BSS84,BLP79,G08,UTY03} for such a procedure for boundary value problems in the case of kinetic equations and to
\cite{WY99,WX99,LX96,X04} for the case of hyperbolic relaxation systems.
In the present work, we consider the case of a relaxation system on a network with a small parameter $\epsilon$
leading in the limit $\epsilon \rightarrow 0 $ to a LWR-type scalar conservation law.
Besides the definition of suitable coupling conditions for the relaxation system at the junction, the present work aims at presenting a matched asymptotic expansion procedure leading from the relaxation model on the network to the scalar conservation law.
Analytical, as well as numerical investigations are presented.
Based on the discussion of the Riemann problems at the nodes, we propose coupling conditions for the relaxation model for merging and diverging junctions. These conditions are developed in a similar way as those for other well-known
higher order traffic models like the ARZ-equations, see \cite{HR}.
However, due to the simpler structure of the relaxation model compared to the ARZ-equations, the conditions are much easier to handle and to investigate, and they allow a relatively straightforward asymptotic derivation of classical coupling conditions for the LWR-type traffic equations in the limit
$\epsilon \rightarrow 0$. Moreover, in the case of diverging junctions, the coupling conditions defined here guarantee that the coupled solution of the two-equation model remains in the physically reasonable state-space domain defined by bounds on density and velocity, in contrast to coupling conditions for the ARZ-model discussed in the literature \cite{HR,KCGG} and references therein.
The asymptotic procedure gives a detailed account of the situation near the node in the case of small values of
the relaxation parameter $\epsilon$. Besides the structure of the layers near the nodes and the coupling conditions for the limit problem, it reveals, that the effective densities for the relaxation system at the node for small $\epsilon$ are not necessarily the same as those found by the coupling conditions for the relaxation system.
The paper is organized in the following way.
In section \ref{equations} we present the relaxation model, compare \cite{BK18}, and the associated scalar conservation law.
In section \ref{sec:kineticcouplingconditions} different coupling conditions for the relaxation model for merging and diverging junctions are discussed and the associated Riemann problems at the nodes are investigated.
Moreover, the associated classical coupling conditions for the limiting scalar conservation law are stated.
In section \ref{asyproc} the asymptotic procedure and the matched asymptotic expansion on the network is explained together with a discussion of the layer solutions of the relaxation system.
Then, the asymptotic procedure is investigated in detail analytically in section
\ref{macroscopiccc}. There, it is shown that a special merge conditions for the relaxation system leads in the relaxation limit to
a classical merge condition for the nonlinear scalar conservation law.
Finally, the solutions of the relaxation system on the network are compared numerically to the solution of the scalar conservation law on the network in section \ref{Numerical results} for the case of merging and diverging junctions and a broader range of coupling conditions.
\section{Relaxation model and scalar traffic equations}
\label{equations}
Consider the LWR-traffic flow equations
\begin{align}\label{LWR0}
\begin{aligned}
\partial_t \rho + \partial_x F(\rho) &=0,\\
\end{aligned}
\end{align}
where $F= F(\rho)$ is a given traffic density-flow function or fundamental diagram, i.e. a smooth function $F:[0,1]\rightarrow[0,1]$ with $F(0)=0=F(1)$ and $F^\prime(\rho) \le 1$.
In the following we restrict ourselves to strictly concave fundamental diagrams $F$;
the point at which the maximum of $F$ is attained is denoted by $\rho^{\star}$, and the maximal value is
$ F(\rho^{\star})=\sigma$.
We are interested in the investigation of relaxation systems for the LWR equations on networks. The minimal requirements of such a relaxation system for the density $\rho$ and the flux $q$ are the convergence towards the LWR-equations and the invariance of the 'traffic domain'
given by $0 \le \rho \le 1$ and $0 \le q \le \rho$.
This is the physically reasonable state-space for a 2x2 traffic equation, see e.g. \cite{Ber}.
The above two conditions correspond to an upper bound on the density, $\rho_{max} =1$, and an upper bound on the
velocity $v=q/\rho$, given by $v_{max}=1$.
A simple example is given by the following relaxation system \cite{BK18} for the LWR equations for the variables density $\rho$ and flux $q$, which we will use as a prototype for the discussion of the issues mentioned in the introduction.
The equations are
\begin{align}\label{macro0}
\begin{aligned}
\partial_t \rho + \partial_x q &=0\\
\partial_t q + \frac{ q}{1-\rho} \partial_x \rho + (1-\frac{ q}{1-\rho})\partial_x q &=-\frac{1}{\epsilon} \left(q-F(\rho) \right) \ .
\end{aligned}
\end{align}
This is a hyperbolic system with
the eigenvalues $\lambda_1 = - \frac{q}{1-\rho}\leq 0 <\lambda_2 = 1$.
The respective eigenvectors are
$r_1 = \left(1,\ \lambda_1 \right)^T$, $r_2 = \left(1,\ 1\right)^T$.
A straightforward computation shows that the $r_1$- and the $r_2$-field are both linearly degenerate.
The system is totally linear degenerate (TLD).
The integral curves (and shock curves) of the hyperbolic system are given by $q= q_L \frac{1-\rho}{1-\rho_L}$ for the 1-field and by $q = \rho -\rho_R+q_R$ for the 2-field.
The region $0 \le \rho \le 1, 0 \le q \le \rho$ is an invariant region for the relaxation system as it is easily seen by considering the
integral curves, see Figure \ref{state0}.
We refer to \cite{BK18} for details and for a kinetic interpretation of the equations.
Note that the fact that the system is TLD strongly simplifies the calculations and leads in many situations to explicitly computable quantities and conditions.
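As a quick sanity check, the linear degeneracy of the first characteristic field can be verified symbolically. The following minimal sketch (assuming a standard Python installation with sympy) computes $\nabla \lambda_1 \cdot r_1$:
\begin{verbatim}
import sympy as sp

rho, q = sp.symbols('rho q', positive=True)
lam1 = -q / (1 - rho)                      # first eigenvalue
r1 = sp.Matrix([1, lam1])                  # corresponding eigenvector
grad = sp.Matrix([sp.diff(lam1, rho), sp.diff(lam1, q)])
print(sp.simplify(grad.dot(r1)))           # prints 0: the 1-field is
                                           # linearly degenerate
# lambda_2 = 1 is constant, so the 2-field is trivially degenerate
\end{verbatim}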
The equations can be rewritten in conservative form choosing the variable $z = \frac{q}{1-\rho}$.
Rewriting \eqref{macro0} we obtain
\begin{align}\label{eq:lindeg+relax}
\begin{aligned}
\partial_t \rho + \partial_x q&=0\\
\partial_t z + \partial_x z &= -\frac{1}{\epsilon} \left(z - Z(\rho) \right)\
\end{aligned}
\end{align}
with $q = z (1-\rho)$ and $Z(\rho) = \frac{F(\rho)}{1-\rho}$.
A Riemann invariant of the first characteristic family is obviously
$$z= \frac{q}{1-\rho} \in [0,\infty)\ .$$
A Riemann invariant of the second characteristic family is
$$w =\rho - q \in [0,1]\ .$$
Concerning the convergence of its solutions towards the solutions of the scalar conservation law $\partial_t \rho + \partial_x F(\rho) =0$ as
$\epsilon $ tends to $0$, the subcharacteristic condition has to be satisfied \cite{LX96}.
Setting $q = F(\rho)$ in the formula for the eigenvalues, the subcharacteristic condition states
$$
- \frac{ F(\rho)}{1-\rho} \le F^\prime(\rho) \le 1 \ \mbox{ for } \ 0 \le \rho \le 1\ .
$$
\begin{remark}
The condition is fulfilled for strictly concave fundamental diagrams $F$.
For example, in the classical LWR case with $F(\rho) =\rho (1-\rho) $ and $F^\prime (\rho) = 1- 2 \rho$ the above condition is
$$
- \rho \le 1- 2 \rho \le 1\ \mbox{ for } \ 0 \le \rho \le 1\ ,
$$
which is obviously satisfied.
\end{remark}
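The condition can also be checked numerically on a grid; a minimal sketch for the classical choice $F(\rho)=\rho(1-\rho)$ reads:
\begin{verbatim}
import numpy as np

rho   = np.linspace(0.0, 0.999, 1000)
F     = rho * (1.0 - rho)
dF    = 1.0 - 2.0 * rho
lower = -F / (1.0 - rho)                 # equals -rho for this F
print(np.all(lower <= dF) and np.all(dF <= 1.0))   # True
\end{verbatim}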
Finally we note that boundary conditions for the relaxation system \eqref{macro0} on the interval $[x_L,x_R]$ have to be prescribed in the following way. Since the first eigenvalue is always non-positive and the second is a positive constant, the number of boundary conditions is fixed.
At the left boundary at $x=x_L$ we have to prescribe a value for the 1-Riemann invariant $z(x_L) = \frac{q(x_L)}{1-\rho(x_L)} $. For the right boundary $x= x_R$ the 2-Riemann invariant $w(x_R)=\rho(x_R)- q(x_R)$
has to be prescribed.
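In a numerical implementation these characteristic boundary conditions can be realized via ghost states; the following sketch (the function names and the extrapolation of the outgoing invariant from the interior are our choices) solves the two invariant relations for $(\rho,q)$:
\begin{verbatim}
def left_state(z_in, rho_int, q_int):
    # prescribe the 1-invariant z_in; carry the outgoing
    # invariant w = rho - q from the interior
    w = rho_int - q_int
    rho = (w + z_in) / (1.0 + z_in)   # solves z_in*(1-rho) = rho - w
    return rho, z_in * (1.0 - rho)

def right_state(w_in, rho_int, q_int):
    # prescribe the 2-invariant w_in; carry z from the interior
    z = q_int / (1.0 - rho_int)
    rho = (w_in + z) / (1.0 + z)
    return rho, rho - w_in
\end{verbatim}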
\begin{remark}
Compare the present model with the ARZ model \cite{AR}
\begin{align}
\label{macro0rascle}
\partial_t \rho +\partial_x q &=0\\
\partial_t q +\frac{q}{\rho}\left(\rho p'(\rho)- \frac{q}{\rho}\right)\partial_x \rho +\left( 2\frac{q}{\rho}-\rho p'(\rho) \right)\partial_x q&= -\frac{1}{\epsilon} \left(q-F(\rho) \right). \nonumber
\end{align}
or in conservative form
\begin{align}
\label{macro0konsrascle}
\partial_t \rho +\partial_x q &=0\\
\partial_t (\rho z_R) +\partial_x (qz_R)&= -\frac{1}{\epsilon} \left(q-F(\rho) \right). \nonumber
\end{align}
with $q = \rho z_R - \rho p(\rho)$.
We note that the above domain $0 \le \rho \le 1$ and $0 \le q \le \rho$ is also an invariant domain for the ARZ conditions, if $p$ is appropriately chosen
with a singularity at $\rho=1$, see \cite{Ber}.
\end{remark}
\begin{figure}[h]
\center
\externaltikz{Statespacelayer_H1}{
\begin{tikzpicture}[scale = 3]
\def0.2{0.2}
\node[below] at (0,0) {$0$};
\node[left] at (0,0) {$0$};
\node[below] at (1,0) {$1$};
\node[left] at (0,1) {$1$};
\draw(1,0)--(1,1);
\draw(0,0)--(1,1);
\draw(-0.02,1)--(0.02,1);
\node[below] at (0.72,0.22) {$F(\rho)$};
\draw[red](0.2,0.0)--(1,0.8);
\draw(0.78,0.58)--(0.69,0.77);
\node[below] at (0.27,0.6) {$1$-curve};
\draw[red](1.0,0.0)--(0.6,0.6);
\draw(0.47,0.5)--(0.67,0.5);
\node[below] at (0.58,0.9) {$2$-curve};
\draw[dashed](0.5,0.0)--(0.5,0.25) node[below] at (0.5,0) {$\rho^\star$};
\draw[->](0,0)--(1.2,0) node[below]{$\rho$};
\draw[->](0,0)--(0,1.2) node[left]{$q$};
\draw[domain=0.0:1,smooth,variable=\x,green] plot ({\x},{\x*(1-\x)});
\end{tikzpicture}
\begin{tikzpicture}[scale = 3]
\def0.2{0.2}
\node[below] at (0,0) {$0$};
\node[left] at (0,0) {$0$};
\node[below] at (1,0) {$1$};
\node[left] at (0,1) {$1$};
\draw(1,0)--(1,1);
\draw(0,0)--(1,1);
\draw(-0.02,1)--(0.02,1);
\node[below] at (0.88,0.15) {$F(\rho)$};
\draw(0.65,0.43)--(0.73,0.5);
\node[below] at (0.82,0.64) {$1$-curve};
\draw[red](0.0,0.0)--(1,0.5);
\draw(0.86,0.43)--(0.89,0.35);
\node[below] at (0.88,0.36) {$2$-curve};
\draw(0.15,0.85)--(0.4,0.44);
\node[below] at (0.25,1.0) {sonic line};
\draw[dashed](0.5,0.0)--(0.5,0.25) node[below] at (0.5,0) {$\rho^\star$};
\draw[->](0,0)--(1.2,0) node[below]{$\rho$};
\draw[->](0,0)--(0,1.2) node[left]{$q$};
\draw[domain=0.0:0.6,smooth,variable=\x,red,dashed] plot ({\x},{\x*(2.5-\x/(1-\x))});
\draw[domain=0.6:0.715,smooth,variable=\x,red] plot ({\x},{\x*(2.5-\x/(1-\x))});
\draw[domain=0.0:1,smooth,variable=\x,green] plot ({\x},{\x*(1-\x)});
\draw[domain=0.0:0.45,smooth,variable=\x] plot ({\x},{\x^2/(1-\x)^2});
\end{tikzpicture}
}
\caption{Lax-curves in $(\rho,q)$ variables for the totally degenerate relaxation system (left figure) and the ARZ equations
with $p(\rho ) = \frac{\rho}{1-\rho}$ (right figure).}
\label{state0}
\end{figure}
\section{Coupling conditions for the relaxation system on networks and associated conditions for the LWR equations} \label{sec:kineticcouplingconditions}
In this section we propose general coupling conditions for the relaxation model \eqref{macro0} for different physical situations. Where appropriate, we will proceed in a similar way as in \cite{GP06,HR} for the definition of the coupling conditions for the ARZ-model. At the same time we state classical coupling conditions for these situations for the LWR equations.
In section \ref{macroscopiccc} the coupling conditions for the relaxation system will be related via a matched asymptotic expansion procedure to the coupling conditions
for the LWR model on networks. There, we consider the case of a merging junction with a special merge condition analytically. The other cases will be investigated numerically in section \ref{Numerical results}.
We restrict ourselves here to the case of junctions with either two ingoing and one outgoing lane (merging junction) or a junction with two outgoing and one ingoing lane (diverging junction), see Figure \ref{fig:junction}.
\begin{figure}[h!]
\begin{center}
\externaltikz{sketch_21node}{
\begin{tikzpicture}[thick]
\def2{2}
\node[fill,circle] (N) at (0,0){};
\draw[->] (-2,0.6)--(N) node[above,pos = 0.5]{$1$};
\draw[->] (-2,-0.6)--(N) node[below,pos = 0.5]{$2$};
\draw[->] (0,0)--(2,0) node[above,pos = 0.5]{$3$};
\end{tikzpicture}
}
\;\;
\externaltikz{sketch_12node}{
\begin{tikzpicture}[thick]
\def2{2}
\node[fill,circle] (N) at (0,0){};
\draw[->] (-2,0)--(N) node[above,pos = 0.5]{$1$};
\draw[->] (0,0)--(2,0.6) node[above,pos = 0.5]{$2$};
\draw[->] (0,0)--(2,-0.6) node[below,pos = 0.5]{$3$};
\end{tikzpicture}
}
\end{center}
\caption{On the left: A junction with two ingoing and one outgoing road (2-1 node). On the right: A junction with one incoming and two outgoing roads (1-2 node).}
\label{fig:junction}
\end{figure}
Since on each road there is exactly one outgoing characteristic family for the relaxation system, we have to provide three conditions at a junction connecting three roads.
We denote the quantities at the junctions by $\rho^i,q^i, i=1,2,3$ and the corresponding Riemann invariants by $z^i,w^i$.
In any case the conservation of mass will be imposed, i.e. all cars entering a junction via one of the incoming roads
will exit on the outgoing road.
The other two conditions depend on the physical situation under consideration. We consider first the case of a 2-1 node.
\subsection{Merging lanes}
In this case the Riemann invariants $z^1,z^2,w^3$ are prescribed, and the admissible states at the junction in the $(\rho,q)$-plane fulfill
\begin{align}
\label{eq:merge_char}
q^1 = z^1 (1-\rho^1)\\
q^2=z^2 (1-\rho^2) \nonumber\\
q^3 = \rho^3 -w^3 \nonumber.
\end{align}
The coupling conditions are the balance of fluxes
\begin{align}
\label{eq:merge_1}
q^1+q^2=q^3
\end{align}
and a relation for the Riemann invariant of the first characteristic family $z$ (which is also the momentum flux of the conservative variable). For merging junctions, one might simply choose the balance of $z$, i.e. $z^3=z^1+z^2$, or more generally
\begin{align}
\label{eq:merge_2}
z^3 = g(z^1, z^2 , w^3)
\end{align}
with some function $g$. We use as a simple example $g= z^1+z^2$ for the investigations in section \ref{macroscopiccc} and \ref{Numerical results}
and also for the illustrations in Figures \ref{fairRiemann} and \ref{state1}.
\begin{remark}
Compare \cite{HR} for the corresponding balance condition of the momentum flux for the ARZ-equations
$$
q^3z_R^3 = q^1z_R^1 + q^2z_R^2.
$$
Note that this condition is required to obtain weak solutions on the network in the sense of \cite{HR95}.
The condition $z^3=z^1+z^2$ is the analogue to this condition for the equations considered here.
\end{remark}
To continue, one more relation is
needed to solve the Riemann problem at the junction uniquely. We consider two approaches.
The first approach assumes a relation of the incoming fluxes given by the prescribed quantities $z^1,z^2,w^3$.
That means, in addition to (\ref{eq:merge_1}) and (\ref{eq:merge_2}), we consider the general condition
\begin{align}
\label{eq:merge_3}
\frac{q^1}{q^2} = f
\end{align}
with
$f = f(z^1,z^2,w^3)$
or equivalently
\begin{align}
\label{fluxes}
q^{1} = \frac{f}{1+f} q^3\\
q^2= \frac{1}{1+f} q^3. \nonumber
\end{align}
As an example, we use simply
\begin{align}
\label{eqz}
f= \frac{z^1}{z^2},
\end{align}
that means that the ratio of $q^1$ and $q^2$ is given by the ratio of the corresponding Riemann invariants.
The second, more general approach describes a lane merging via a condition on the flux $q^1$.
We prescribe a relation
\begin{align}
\label{fluxesgeneral}
q^{1} = F(q^3;z^1,z^2,w^3) \le \rho^1.
\end{align}
For example, for a merging with a priority lane, here lane 1, we consider
in addition to the conditions (\ref{eq:merge_1}) and (\ref{eq:merge_2}),
the following condition on the flux $q^1$
\begin{align}
\label{eq:merge_4}
q^1 = \min\{q^3,\rho^1\} .
\end{align}
This can be rewritten as
\begin{align}
\label{priore}
q^1 = \bar q^1
\end{align}
with $\bar q^1 = \min(q^3,q^{max}(z^1))$, where the maximal possible flux for the Riemann invariant $z$ is defined as $q^{max}(z)= \frac{z}{1+z}$. Analogously, $\bar q^2 = \min(q^3,q^{max}(z^2))$ is used below.
A more general condition giving a partial priority to one of the roads depending on a value $P \in [0,1]$ is given by
a convex combination of the priority conditions for lane 1 and 2, i.e.
\begin{align}
\label{P}
q^1= (1-P) \bar q^1+ P (q^3-\bar q^2)
\end{align}
See e.g. \cite{KCGG}
for a partial priority merging condition for the ARZ equations.
This leads to a distribution depending on the maximal possible fluxes on the two ingoing roads as long as the maximal flux
given by $q^3$ is not exceeded, and to a distribution according to the priority value $P$, if both maximal possible fluxes are larger than $q^3$.
For the symmetric case $P=\frac{1}{2}$ we obtain
\begin{align}
\label{mixed2}
q^1=\frac{1}{2} \left(q^3+ \bar q^1-\bar q^2\right)
\end{align}
which is easily seen to be equivalent to
\begin{align}
\label{mixed3}
q^1 = \min \left(\bar q^1, q^3 -\min(\bar q^1, \bar q^2,\frac{q^3}{2})\right).
\end{align}
See Figure \ref{alpha} for the proportion $\frac{q^1}{q^3}$ of the flux in lane 1 for $z^2$ ranging from $0$ to $0.5$.
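For reference, the flux split of the partial priority rule (\ref{P}) amounts to only a few lines of code; a sketch (notation ours) reads:
\begin{verbatim}
def q1_partial_priority(z1, z2, q3, P=0.5):
    # q1 = (1-P)*qbar1 + P*(q3 - qbar2), cf. the convex combination
    qbar1 = min(q3, z1 / (1.0 + z1))   # maximal flux of road 1
    qbar2 = min(q3, z2 / (1.0 + z2))   # maximal flux of road 2
    return (1.0 - P) * qbar1 + P * (q3 - qbar2)
\end{verbatim}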
All these conditions lead to a well-posed Riemann problem at the junction. This can be easily seen by solving the problems explicitly.
\subsubsection{Solution of the Riemann problems at the junction}
Condition (\ref{eq:merge_2}) directly gives $(\rho^3,q^3)$, since
\begin{align*}
\rho^3 -w^3 = z^3(1-\rho^3)
\end{align*}
and
\begin{align}
\rho^3 = \frac{w^3+z^3}{1+z^3}
\end{align}
and
\begin{align}
q^3 = z^3 (1-\rho^3 )= \frac{z^3}{1+z^3} (1-w^3).
\end{align}
See Figure \ref{fairRiemann} for an illustration using the relation (\ref{eqz}).
We obtain from (\ref{fluxes}) the values of $q^1$ and $q^2$, and then $\rho^1$ and $\rho^2$ from
the characteristic equations.
Note that using the relation (\ref{eqz}) simply gives
that $\rho^1 = \rho^2$; from (\ref{eq:merge_2}) one then obtains the equality of densities
$\rho^3 = \rho^1 = \rho^2=\bar \rho$.
For the second approach we obtain
$\rho^3,q^3$ as before. Then $q^1$ is obtained from (\ref{fluxesgeneral}), and the remaining quantities follow directly.
\begin{example}
For the priority condition (\ref{eq:merge_4}) we have to distinguish two cases. If $\frac{z^1}{1+z^1} \ge q^3$,
then $q^1=\min\{q^3,\rho^1\}=q^3$. This leads to $q^2=0$ and $\rho^2=1$.
If $\frac{z^1}{1+z^1} \le q^3$,
then $q^1=\min\{q^3,\rho^1\}=\rho^1$, and we obtain $\rho^1 = \frac{z^1}{1+z^1}$ and $q^2= q^3-q^1$. See Figure \ref{state1} for an illustration of condition (\ref{eq:merge_4}).
\end{example}
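The explicit solution just described is easily turned into a small routine. The following sketch (assuming the example choices $g = z^1+z^2$ and $f = z^1/z^2$) returns the junction states on all three roads:
\begin{verbatim}
def merge_riemann(z1, z2, w3):
    z3   = z1 + z2                    # condition z^3 = z^1 + z^2
    rho3 = (w3 + z3) / (1.0 + z3)
    q3   = z3 * (1.0 - rho3)
    f    = z1 / z2                    # flux split q^1/q^2 = z^1/z^2
    q1   = f / (1.0 + f) * q3
    q2   = q3 - q1
    rho1 = 1.0 - q1 / z1              # from q^1 = z^1*(1 - rho^1)
    rho2 = 1.0 - q2 / z2
    return (rho1, q1), (rho2, q2), (rho3, q3)
# for this choice of f one obtains rho1 == rho2 == rho3
\end{verbatim}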
\begin{figure}[h]
\center
\externaltikz{Statespacelayer1}{
\begin{tikzpicture}[scale = 4]
\def0.2{0.2}
\node[below] at (0,0) {$0$};
\node[left] at (0,0) {$0$};
\node[below] at (1,0) {$1$};
\node[left] at (0,1) {$1$};
\draw(1,0)--(1,1);
\draw(0,0)--(1,1);
\draw(-0.02,1)--(0.02,1);
\draw(0.68,0)--(0.68,0.48);
\node[below] at (0.68,0) {$\bar \rho$};
\draw[red](0.2,0.0)--(1,0.8);
\draw(0.48,0.63)--(0.63,0.55);
\node[below] at (0.42,0.76) {$z^3$};
\draw[red](1.0,0.0)--(0.485,0.485);
\draw(0.2,0.32)--(0.45,0.31);
\node[below] at (0.16,0.45) {$z^2$};
\draw[red](1.0,0.0)--(0.36,0.36);
\draw(0.3,0.47)--(0.51,0.46);
\node[below] at (0.27,0.6) {$z^1$};
\draw[red](1.0,0.0)--(0.6,0.6);
\draw(0.63,0.78)--(0.75,0.55);
\node[below] at (0.58,0.9) {$w^3$};
\draw[](0.68,0.48)--(1.1,0.48);
\node[right] at (1.1,0.48) {$q^3$};
\draw[](0.68,0.3)--(1.1,0.3);
\node[right] at (1.1,0.3) {$q^1$};
\draw[](0.68,0.18)--(1.1,0.18);
\node[right] at (1.1,0.18) {$q^2$};
\draw[->](0,0)--(1.2,0) node[below]{$\rho$};
\draw[->](0,0)--(0,1.2) node[left]{$q$};
\end{tikzpicture}
}
\caption{Solution of Riemann problems for merging with $f=z^1/z^2$.}
\label{fairRiemann}
\end{figure}
\begin{figure}[h]
\center
\externaltikz{Statespacelayer2}{
\begin{tikzpicture}[scale = 4]
\def0.2{0.2}
\node[below] at (0,0) {$0$};
\node[left] at (0,0) {$0$};
\node[below] at (1,0) {$1$};
\node[left] at (0,1) {$1$};
\draw(1,0)--(1,1);
\draw(0,0)--(1,1);
\draw(-0.02,1)--(0.02,1);
\draw(0.68,0)--(0.68,0.48);
\node[below] at (0.68,0) {$ \rho_3$};
\draw(0.4,0)--(0.4,0.4);
\node[below] at (0.4,0) {$ \rho_1$};
\draw(0.9,0)--(0.9,0.08);
\node[below] at (0.9,0) {$ \rho_2$};
\draw[red](0.2,0.0)--(1,0.8);
\draw(0.78,0.58)--(0.69,0.77);
\node[below] at (0.37,0.67) {$z^3$};
\draw[red](1.0,0.0)--(0.6,0.6);
\draw(0.21,0.26)--(0.47,0.34);
\node[below] at (0.18,0.36) {$z^1$};
\draw[red](1.0,0.0)--(0.45,0.45);
\draw(0.3,0.42)--(0.47,0.43);
\node[below] at (0.27,0.52) {$z^2$};
\draw[red](1.0,0.0)--(0.4,0.4);
\draw(0.4,0.57)--(0.63,0.55);
\node[below] at (0.65,0.91) {$w^3$};
\draw[](0.48,0.48)--(1.,0.48);
\draw(0.78,0.58)--(0.69,0.77);
\node[right] at (1.,0.48) {$q^3$};
\draw[](0.4,0.4)--(1.0,0.4);
\node[right] at (1.0,0.4) {$q^1$};
\draw[](0.9,0.08)--(1.,0.08);
\node[right] at (1.,0.08) {$q^2$};
\draw[->](0,0)--(1.2,0) node[below]{$\rho$};
\draw[->](0,0)--(0,1.2) node[left]{$q$};
\end{tikzpicture}
\begin{tikzpicture}[scale = 4]
\def0.2{0.2}
\node[below] at (0,0) {$0$};
\node[left] at (0,0) {$0$};
\node[below] at (1,0) {$1$};
\node[left] at (0,1) {$1$};
\draw(1,0)--(1,1);
\draw(0,0)--(1,1);
\draw(-0.02,1)--(0.02,1);
\draw(0.68,0)--(0.68,0.48);
\node[below] at (0.68,0) {$ \rho_3$};
\draw(0.56,0)--(0.56,0.48);
\node[below] at (0.56,0) {$ \rho_1$};
\draw[red](0.2,0.0)--(1,0.8);
\draw(0.78,0.58)--(0.69,0.77);
\node[below] at (0.37,0.67) {$z^3$};
\draw[red](1.0,0.0)--(0.6,0.6);
\draw(0.27,0.42)--(0.6,0.43);
\node[below] at (0.22,0.52) {$z^1$};
\draw[red](1.0,0.0)--(0.52,0.52);
\draw(0.19,0.24)--(0.39,0.24);
\node[below] at (0.16,0.33) {$z^2$};
\draw[red](1.0,0.0)--(0.285,0.285);
\draw(0.4,0.57)--(0.63,0.55);
\node[below] at (0.64,0.9) {$w^3$};
\draw[](0.48,0.48)--(1.,0.48);
\draw(0.78,0.58)--(0.69,0.77);
\node[right] at (1.,0.48) {$q^3=q^1$};
\draw[->](0,0)--(1.2,0) node[below]{$\rho$};
\draw[->](0,0)--(0,1.2) node[left]{$q$};
\end{tikzpicture}
}
\caption{Solution of Riemann problems for junction with priority lane. On the left $\frac{z^1}{1+z^1} \le q^3$. On the right $\frac{z^1}{1+z^1} \ge q^3$.}
\label{state1}
\end{figure}
\begin{figure}[h]
\center
\externaltikz{proportion}{
\begin{tikzpicture}[scale=0.65]
\def0.2{0.2}
\def0.3{0.3}
\def0.6{0.6}
\pgfmathsetmacro{\za}{0.2}
\pgfmathsetmacro{\zb}{0.3}
\pgfmathsetmacro{\zc}{\za+\zb}
\pgfmathsetmacro{\wc}{0.6*0.6}
\pgfmathsetmacro{\barrho}{(\wc+\zc)/(1+\zc)}
\pgfmathsetmacro{\frhoc}{0.6*(1-0.6)}
\pgfmathsetmacro{\barrhozero}{(1+sqrt(1-2*\frhoc))/2}
\begin{axis}[
legend style = {at={(1,1)}, xshift=-0.1cm, yshift=0.1cm, anchor=south east},
legend columns= 2,
xlabel = ${z^2}$,
ylabel = ${q^1 / q^3}$,
]
\addplot[color = blue!0!red,thick] file{Data/case1.txt};
\addlegendentry{$f = z^1/z^2$}
\addplot[color = blue!33!red,thick] file{Data/case2.txt};
\addlegendentry{$P=1/2$}
\end{axis}
\end{tikzpicture}
}
\caption{Proportion $\frac{q^1}{q^3}$ of the flux in lane 1 for $z^2 \in [0,0.5]$ and $z^1=0.4$. Coupling conditions
(\ref{eqz}) and (\ref{mixed2}) are shown.}
\label{alpha}
\centering
\end{figure}
\subsubsection{LWR-conditions}
\label{LWRcond}
The classical conditions for LWR networks describing
'fair merging' with equal priority and situations with a priority lane are given by the following.
We use the supply-demand representation \cite{L} and denote the sets of valid resulting LWR fluxes $C^i$ by $\Omega^i$, compare \cite{CGP05,L,Dag1,Dag2,HR} and Figure \ref{fig:supply}.
For the incoming roads $i=1,2$ this is
\begin{align*}
\rho_B^i \le \rho^\star \Rightarrow \Omega^i = [0, F(\rho_B^i)] &&\text{ and }&&
\rho_B^i \ge \rho^\star \Rightarrow \Omega^i = [0, \sigma]\ .
\end{align*}
For the outgoing road $i=3$
\begin{align*}
\rho_B^i \le\rho^\star \Rightarrow \Omega^i = [0, \sigma] &&\text{ and }&&
\rho_B^i\ge \rho^\star \Rightarrow \Omega^i = [0, F(\rho_B^i)]\ .
\end{align*}
We define the maximal admissible flux $c^i$ such that $\Omega^i = [0, c^i]$.
\begin{figure}[h]
\center
\externaltikz{supplydemand}{
\begin{tikzpicture}[scale = 3]
\def \rhobar {0.3}
\def 0.5 {0.5}
\draw[->] (0,0)--(1.2,0) node[below]{$\rho$};
\draw[->] (0,0)--(0,1.2) node[left]{$F(\rho)$};
\draw[dashed] (1,1.2)--(1,0.0) node[below]{$1$};
\draw[black,line width=1pt,domain=0.5:1,smooth,variable=\x,] plot ({\x},{1}) ;
\draw[black,line width=1pt,domain=0.0:0.5,smooth,variable=\x,] plot ({\x},{4*\x*(1-\x)}) ;
\draw[dashed,black,line width=1pt,domain=0.5:1,smooth,variable=\x,] plot ({\x},{4*\x*(1-\x)}) ;
\draw[dashed] (0.5,{4*0.5*(1-0.5)})--(0.5,0) node[below]{$\rho^*$}node at (0.5,1.1) {$\sigma$};
\end{tikzpicture}
\hspace{0.5cm}
\begin{tikzpicture}[scale = 3]
\def \rhobar {0.3}
\def 0.5 {0.5}
\draw[->] (0,0)--(1.2,0) node[below]{$\rho$};
\draw[->] (0,0)--(0,1.2) node[left]{$F(\rho)$};
\draw[dashed] (1,1.2)--(1,0.0) node[below]{$1$};
\draw[black,line width=1pt,domain=0.0:0.5,smooth,variable=\x,] plot ({\x},{1}) ;
\draw[black,line width=1pt,domain=0.5:1,smooth,variable=\x,] plot ({\x},{4*\x*(1-\x)}) ;
\draw[dashed,black,line width=1pt,domain=0.0:0.5,smooth,variable=\x,] plot ({\x},{4*\x*(1-\x)}) ;
\draw[dashed] (0.5,{4*0.5*(1-0.5)})--(0.5,0) node[below]{$\rho^*$}node at (0.5,1.1) {$\sigma$};;
\end{tikzpicture}
}
\caption{Supply- and demand functions $c^i$ for ingoing (left) and outgoing (right) roads.}
\label{fig:supply}
\end{figure}
The coupling conditions are in both cases given by the balance of fluxes $C^3 = C^1+C^2$. Moreover, for
$c^1 +c^2 \le c^3$ the conditions
\begin{align*}
C^1= c^1, C^2 = c^2\
\end{align*}
are used. For $c^1+c^2\geq c^3 $ we have the 'fair merging' conditions
\begin{align}
\label{fair}
\begin{aligned}
C^i =
\min\left(c^i,c^3-\min\left(c^1,c^2,\frac{c^3}{2}\right)\right),
\qquad i = 1,2\
\end{aligned}
\end{align}
and in the case, where lane 1 is a
priority lane,
\begin{align}\label{prio}
C^1 &= \min\Big( c^1, c^3 \Big)\\
C^2 &= c^3-C^1= \max \Big(c^3-c^1,0 \Big) \nonumber .
\end{align}
\begin{remark}
That means, in the 'fair merging' case the merging is symmetric if both incoming roads have a maximal possible flux larger than their share of the outgoing road.
We refer, for example, to \cite{GPBook} for such coupling conditions for LWR networks.
\end{remark}
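For completeness, the supply-demand construction and the merging rules (\ref{fair}) and (\ref{prio}) can be summarized in the following sketch (the function names are ours):
\begin{verbatim}
def demand(rho, F, rho_star, sigma):      # ingoing roads
    return F(rho) if rho <= rho_star else sigma

def supply(rho, F, rho_star, sigma):      # outgoing road
    return sigma if rho <= rho_star else F(rho)

def merge_fluxes(c1, c2, c3, priority=False):
    if c1 + c2 <= c3:                     # no congestion at the node
        return c1, c2
    if priority:                          # lane 1 has priority
        C1 = min(c1, c3)
        return C1, c3 - C1
    C1 = min(c1, c3 - min(c1, c2, 0.5 * c3))   # fair merging
    return C1, c3 - C1
\end{verbatim}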
In sections \ref{macroscopiccc} and \ref{Numerical results} we consider the limit of the above coupling conditions for the relaxation system as
$\epsilon \rightarrow 0$ analytically and numerically.
In section \ref{macroscopiccc} an analytical investigation of the asymptotic procedure is given for condition (\ref{eqz}), and it is shown that the LWR condition (\ref{fair}) is obtained in the limit
as $\epsilon $ goes to 0.
The numerical experiments in Section \ref{Numerical results} show that the macroscopic fair merging conditions (\ref{fair}) are obtained not only from conditions (\ref{eqz}), but
also, for example, from condition (\ref{mixed2}), which is less surprising in view of the similarity of (\ref{mixed3}) and (\ref{fair}). There is obviously a larger range of conditions on the level of the relaxation system leading to the macroscopic
conditions (\ref{fair}).
We note that the coupling condition for the relaxation system modelling a priority lane (\ref{eq:merge_4}) leads to the corresponding macroscopic condition (\ref{prio}).
\subsection{Diverging lanes}
\label{div}
In this case the quantities
$z^1,w^2,w^3$ are given at the junction
and the admissible states fulfill
\begin{align}
\label{eq:div_char}
q^1 = z^1 (1-\rho^1)\\
q^2=\rho^2-w^2 \nonumber\\
q^3 = \rho^3 -w^3. \nonumber
\end{align}
Again, the coupling conditions contain in all cases the balance of fluxes
\begin{align}
\label{eq:div_1}
q^1=q^2+q^3.
\end{align}
To ensure that the resulting values at the junction remain in the physical domain $0 \le \rho \le 1, 0 \le q \le \rho$, we choose $q^1$ as
$$
q^1 = \min \{ \bar q, \rho^1 \},
$$
where $\bar q$ is determined such that a prescribed relation
\begin{align}
\label{eq:div_0}
z^1 = g(z^2,z^3)
\end{align}
is fulfilled.
\begin{example}
As an example we use, in Figures \ref{div1} and \ref{div2} and in the numerical experiments, the function $g = z^2+z^3$.
Note that in this case the balance of the momentum flux is only fulfilled as long as this condition leads to values inside the domain
$ 0\le \rho \le 1, 0 \le q \le \rho $.
For a more detailed discussion, see Section \ref{relarz}.
\end{example}
Additionally, either the relation of the outgoing fluxes is prescribed, i.e.
\begin{align}
\label{eq:div_2}
\frac{q^2}{q^3} = f
\end{align}
with $f = f(w^2,w^3,z^1) \in \R_+$, which is equivalent to
\begin{align}
\label{eq:div_2b}
q^2 = \frac{f}{1+f} q^1, \; q^3= \frac{1}{1+f} q^1.
\end{align}
Or we use an additive relation
\begin{align}
\label{eq:div_3}
q^2-q^3 = f
\end{align}
with $f = f(w^2,w^3,z^1) \in [-1,1]$.
(\ref{eq:div_3}) can be rewritten as
\begin{align}
\label{eq:div_3b}
q^{2} = \frac{q^1}{2} +\frac{f}{2} \; \mbox{and} \; q^{3} =\frac{q^1}{2} -\frac{f}{2} .
\end{align}
\begin{remark}
\label{typ}
In the first case, typically, one prescribes a fixed relation
\begin{align}
\label{pref}
f= \frac{\alpha}{1-\alpha},
\end{align}
where $ \alpha \in [0,1]$ is given by the drivers' preferences
to go to one of the roads.
This can be rewritten as
\begin{align}
\label{eq:div_2ba}
q^{2} = \alpha q^1 \; \mbox{and} \; q^{3} = (1-\alpha) q^1 .
\end{align}
As an example for the function $f$ in the second case, we use
\begin{align}
\label{ex2}
f= w^3-w^2.
\end{align}
Such a relation describes a distribution of the outgoing fluxes adapted to the situation in the outgoing roads
without drivers' preferences.
\end{remark}
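For illustration, the two distribution rules of this remark can be written as small Python functions (a sketch in our own notation; not part of the numerical code used below).
\begin{verbatim}
# Sketch: splitting the incoming flux q1 among the two outgoing roads.
def split_multiplicative(q1, alpha):
    # drivers' preferences, f = alpha/(1-alpha)
    return alpha * q1, (1.0 - alpha) * q1

def split_additive(q1, w2, w3):
    # no preferences, f = w3 - w2; note: q3 >= 0 requires |f| <= q1
    f = w3 - w2
    return 0.5 * (q1 + f), 0.5 * (q1 - f)

print(split_multiplicative(0.3, 0.25))  # (0.075, 0.225)
print(split_additive(0.3, 0.4, 0.5))    # (0.2, 0.1)
\end{verbatim}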
For a comparison of these conditions with the ARZ equations, as given for example in \cite{HR}, see section
\ref{relarz}.
Again we discuss the explicit solutions of the Riemann problems at the junction.
\subsubsection{Solution of the Riemann problems}
In the first case, we determine $q^1$ as
$$
q^1 = \min \{\frac{z^1}{1+z^1}, \bar q \}
$$
where $\bar q$ is determined by solving (in general numerically) the equation
\begin{align}
\label{eq1}
z^1 = g\left(\frac{f \bar q}{(1-w^2)(1+f) - f \bar q },\frac{ \bar q}{(1-w^3)(1+f) - \bar q }\right),
\end{align}
which is assumed to have a unique solution $\bar q \ge 0 $ in a range such that $\rho^2 = w^2+\frac{f}{1+f} \bar q$ and $\rho^3=w^3+ \frac{1}{1+f} \bar q $ are in $[0,1)$.
Then $q^2$ and $q^3$ are obtained straightforwardly. This yields finally $\rho^1, \rho^2$ and $\rho^3$
due to the characteristic equations.
\begin{example}
For example, for $g = z^2+z^3$ and $f = 1$, i.e. $\alpha =\frac{1}{2}$ and $w^2= \bar w =w^3$ we have explicitly for
$\bar w \ge \frac{z^1}{2(1+z^1)} $
$$
q^1= \bar q = \frac{2 z^1}{2+z^1} (1- \bar w)
$$
and $\rho^1=\rho^2=\rho^3$.
For
$\bar w \le \frac{z^1}{2(1+z^1)} $
we have $
q^1= \rho^1= \frac{ z^1}{1+z^1}
$ and $\rho^2=\rho^3$. See Figure \ref{div1} for an illustration in phase-space.
Note that for $g$ as before, but general $\alpha\in [0,1]$ and $w^2,w^3 \in [0,1]$, equation (\ref{eq1}) is equivalent to
the quadratic equation $$
z^1 (a- \bar q )(b-\bar q )= \bar q \left( (a +b) - 2\bar q \right)
$$
with
$a= \frac{1-w^{2}}{\alpha}, b= \frac{1-w^{3}}{1-\alpha}$. This equation is easily seen to have a unique solution in the range
$0 \le \bar q \le \min (a,b)$, which is equivalent to $\rho^2 = w^2+\alpha \bar q$ and $\rho^3=w^3+(1-\alpha) \bar q$ in $[0,1]$.
\end{example}
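The quadratic equation of the example can also be solved directly. The following Python sketch (our own helper; the smaller root is the relevant one, as discussed above) computes the resulting flux $q^1$ for given $z^1$, $\alpha$, $w^2$, $w^3$.
\begin{verbatim}
import numpy as np

# Sketch: solve z1*(a-q)*(b-q) = q*((a+b)-2q) for the root in [0, min(a,b)]
# and apply the flux bound q1 = min(z1/(1+z1), qbar).
def q1_diverge(z1, alpha, w2, w3):
    a = (1.0 - w2) / alpha
    b = (1.0 - w3) / (1.0 - alpha)
    # rearranged: (z1+2)*q^2 - (z1+1)*(a+b)*q + z1*a*b = 0
    A, B, c = z1 + 2.0, -(z1 + 1.0) * (a + b), z1 * a * b
    qbar = (-B - np.sqrt(B * B - 4.0 * A * c)) / (2.0 * A)  # smaller root
    return min(z1 / (1.0 + z1), qbar)

# symmetric test case: expected qbar = 2*z1*(1-w)/(2+z1) = 0.24
print(q1_diverge(0.5, 0.5, 0.4, 0.4))
\end{verbatim}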
For the second relation in Section \ref{div}, i.e. for (\ref{eq:div_3}) we have
$$
q^1 = \min \{\frac{z^1}{1+z^1}, \bar q \},
$$
where $\bar q$ is determined from the equation
$$
z^1 = g\left(\frac{ \bar q+f}{2(1-\rho^2)}, \frac{\bar q-f}
{2(1-\rho^3)}\right)
$$
with $\rho^2 = q^2 + w^2= \frac{\bar q + f }{2}+w^2$ and $\rho^3 = q^3 + w^3= \frac{\bar q -f }{2}+w^3$.
Again the equation is assumed to have a unique solution $\bar q \ge 0$ such that
$\rho^2$ and $\rho^3$ above are in $[0,1)$.
The remaining quantities are obtained in a straightforward way.
\begin{example}
Using $g = z^2+z^3$ and $f=w^3-w^2 $ we always have $\rho^2=\rho^3 = \bar \rho$.
Moreover,
$\bar q$ is now determined from
$$
z^1 = \frac{ \bar q+w^3-w^2}{2(1-\rho^2)}+\frac{\bar q-w^3+w^2}{2(1-\rho^3)}
= \frac{\bar q}{1-\frac{\bar q}{2}-\frac{w^2+w^3}{2}} .
$$
This is explicitly solved and we obtain for $w^2+w^3 \ge \frac{z^1}{1+z^1} $
$$
q^1 = \bar q =(2-(w^2+w^3))\frac{ z^1}{2+z^1}
$$
and
$$
\rho^1 = \rho^2=\rho^3 = \frac{w^2+w^3+z^1}{2+z^1}.
$$
For $w^2+w^3 \le \frac{z^1}{1+z^1} $ we obtain
$
q^1 = \frac{z^1}{1+z^1}$ and
$\rho^2 = \rho^3= \frac{z^1+(1+z^1)(w^2+w^3)}{2(1+z^1)}$.
See Figure \ref{div2} for the illustration of these conditions.
\end{example}
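The explicit solution of this example can be written compactly as follows (a Python sketch in our own notation; the two branches correspond to the two cases above).
\begin{verbatim}
# Sketch: explicit junction values for g = z2 + z3 and f = w3 - w2.
def diverge_no_pref(z1, w2, w3):
    s = w2 + w3
    if s >= z1 / (1.0 + z1):            # interior solution q1 = qbar
        q1 = (2.0 - s) * z1 / (2.0 + z1)
        rho1 = (s + z1) / (2.0 + z1)
    else:                               # flux bound of road 1 is active
        q1 = z1 / (1.0 + z1)
        rho1 = q1
    rho23 = 0.5 * (q1 + s)              # rho2 = rho3 in both cases
    return q1, rho1, rho23

print(diverge_no_pref(0.5, 0.3, 0.4))   # (0.26, 0.48, 0.48)
\end{verbatim}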
\begin{figure}[h]
\center
\externaltikz{Statespacelayer6}{
\begin{tikzpicture}[scale = 4]
\node[below] at (0,0) {$0$};
\node[left] at (0,0) {$0$};
\node[below] at (1,0) {$1$};
\node[left] at (0,1) {$1$};
\draw(1,0)--(1,1);
\draw(0,0)--(1,1);
\draw(-0.02,1)--(0.02,1);
\draw(0.6,0)--(0.6,0.6);
\node[below] at (0.6,0) {$ \rho^1$};
\draw(0.5,0)--(0.5,0.3);
\node[below] at (0.49,0) {$ \rho^{2/3}$};
\draw[red](0.2,0.0)--(1,0.8);
\node[below] at (0.32,0.65) {$z^1$};
\draw[red](1.0,0.0)--(0.6,0.6);
\draw(0.4,0.57)--(0.63,0.55);
\node[below] at (0.65,0.91) {$\bar w$};
\draw[](0.5,0.3)--(1.,0.3);
\draw(0.78,0.58)--(0.69,0.77);
\node[right] at (1.,0.3) {$\frac{q^1}{2}$};
\draw[](0.6,0.6)--(1.0,0.6);
\node[right] at (1.0,0.6) {$q^1$};
\draw[->](0,0)--(1.2,0) node[below]{$\rho$};
\draw[->](0,0)--(0,1.2) node[left]{$q$};
\end{tikzpicture}
\begin{tikzpicture}[scale = 4]
\node[below] at (0,0) {$0$};
\node[left] at (0,0) {$0$};
\node[below] at (1,0) {$1$};
\node[left] at (0,1) {$1$};
\draw(1,0)--(1,1);
\draw(0,0)--(1,1);
\draw(-0.02,1)--(0.02,1);
\draw(0.71,0)--(0.71,0.44);
\node[below] at (0.71,0) {$ \rho^1=\rho^{2/3}$};
\draw[red](0.5,0.0)--(1,0.5);
\draw(0.4,0.57)--(0.63,0.55);
\node[below] at (0.35,0.65) {$z^1$};
\draw[red](1.0,0.0)--(0.6,0.6);
\node[below] at (0.63,0.91) {$\bar w$};
\draw[](0.71,0.43)--(1.,0.43);
\draw(0.68,0.78)--(0.85,0.35);
\node[right] at (1.,0.43) {$q^1$};
\draw[](0.71,0.215)--(1.,0.215);
\node[right] at (1.,0.215) {$\frac{q^1}{2}$};
\draw[->](0,0)--(1.2,0) node[below]{$\rho$};
\draw[->](0,0)--(0,1.2) node[left]{$q$};
\end{tikzpicture}
}
\caption{Solution of Riemann problems for diverging junction with drivers' preferences, $\bar w= w^2=w^3, \alpha=\frac{1}{2}$. On the left $\frac{z^1}{2(1+z^1)} \ge \bar w$. On the right $\frac{z^1}{2(1+z^1)} \le \bar w$.}
\label{div1}
\end{figure}
\begin{figure}[h]
\center
\externaltikz{Statespacelayer10}{
\begin{tikzpicture}[scale = 4]
\node[below] at (0,0) {$0$};
\node[left] at (0,0) {$0$};
\node[below] at (1,0) {$1$};
\node[left] at (0,1) {$1$};
\draw(1,0)--(1,1);
\draw(0,0)--(1,1);
\draw(-0.02,1)--(0.02,1);
\draw(0.55,0)--(0.55,0.35);
\node[below] at (0.48,0) {$ \rho^{2/3}$};
\draw(0.6,0)--(0.6,0.6);
\node[below] at (0.65,0) {$ \rho^1$};
\draw(0.4,0.57)--(0.63,0.55);
\node[below] at (0.33,0.65) {$z^1$};
\draw[red](1.0,0.0)--(0.6,0.6);
\draw[red](0.2,0.0)--(1,0.8);
\draw(0.65,0.75)--(0.822,0.62);
\node[below] at (0.63,0.85) {$w^2$};
\draw[red](0.3,0.0)--(1,0.7);
\draw(0.74,0.81)--(0.92,0.62);
\node[below] at (0.73,0.95) {$w^3$};
\draw(0.55,0.35)--(1.0,0.35);
\node[right] at (1.,0.35) {$q^2$};
\draw(0.55,0.25)--(1.0,0.25);
\node[right] at (1.,0.26) {$q^3$};
\draw[](0.6,0.6)--(1.0,0.6);
\node[right] at (1.0,0.6) {$q^1$};
\draw[->](0,0)--(1.2,0) node[below]{$\rho$};
\draw[->](0,0)--(0,1.2) node[left]{$q$};
\end{tikzpicture}
\begin{tikzpicture}[scale = 4]
\node[below] at (0,0) {$0$};
\node[left] at (0,0) {$0$};
\node[below] at (1,0) {$1$};
\node[left] at (0,1) {$1$};
\draw(1,0)--(1,1);
\draw(0,0)--(1,1);
\draw(-0.02,1)--(0.02,1);
\draw(0.74,0)--(0.74,0.39);
\node[below] at (0.74,0) {$ \rho^1=\rho^{2/3}$};
\draw(0.4,0.57)--(0.63,0.55);
\node[below] at (0.35,0.65) {$z^1$};
\draw[red](1.0,0.0)--(0.6,0.6);
\draw[red](0.5,0.0)--(1,0.5);
\node[below] at (0.65,0.91) {$w^{2}$};
\draw[](0.8,0.88)--(0.94,0.34);
\draw[red](0.6,0.0)--(1,0.4);
\node[below] at (0.8,1) {$w^{3}$};
\draw[](0.74,0.39)--(1.,0.39);
\draw(0.68,0.78)--(0.85,0.35);
\node[right] at (1.1,0.38) {$q^1$};
\draw[](0.74,0.24)--(1.,0.24);
\node[right] at (1.1,0.24) {$q^2$};
\draw[](0.74,0.14)--(1.,0.14);
\node[right] at (1.1,0.14) {$q^3$};
\draw[->](0,0)--(1.2,0) node[below]{$\rho$};
\draw[->](0,0)--(0,1.2) node[left]{$q$};
\end{tikzpicture}
}
\caption{Solution of Riemann problems for diverging junction without drivers' preferences. On the left $\frac{z^1}{1+z^1} \ge w^2+w^3$. On the right $\frac{z^1}{1+z^1} \le w^2+w^3$.}
\label{div2}
\end{figure}
\subsubsection{LWR-conditions}
Classical coupling conditions for the LWR network for the two diverging situations are well known, see \cite{GPBook}.
Using the notation from Section \ref{LWRcond} we always have $C^1 = C^2+C^3$.
Additionally, we have for the situation with drivers' preferences
\begin{align}
\label{divmacro1}
C^1 = \min\Big( c^1, \frac{1}{\alpha} c^2 , \frac{1}{1-\alpha} c^3 \Big)
\end{align}
and
\begin{align}
\frac{C^2}{C^3} = \frac{\alpha}{1-\alpha}.
\end{align}
Without drivers' preferences
the additional conditions are for $c^2+c^3 \le c^1$
\begin{align*}
C^2 =c^2\;,\; C^3=c^3\
\end{align*}
and for
$
c^2+c^3 \ge c^1$
\begin{align}
\label{divmacro2}
C^1= c^1\;,\; C^2= \min\left(c^2,c^1-\min\left(c^2,c^3,\frac{c^1}{2}\right)\right)\ .
\end{align}
In the latter case the flow is distributed equally if both capacities exceed half of the incoming flux. Otherwise, the smaller capacity
is fully used and the lane with the larger capacity carries the maximal flow under these constraints.
The numerical investigations in Section \ref{Numerical results} show that condition (\ref{divmacro1}) is obtained in the limit
$\epsilon \rightarrow 0$ from condition (\ref{eq:div_2}) with (\ref{pref}), whereas condition (\ref{divmacro2}) is obtained as the limit of condition (\ref{eq:div_3}) with (\ref{ex2}).
\subsection{Relation to coupling conditions for the ARZ equations on networks}
\label{relarz}
The coupling conditions for the ARZ-equations, see
\cite{HR,KCGG} and many others,
rely on the balance of the momentum flux $q z_R $ in the conservative formulation and the related definition of weak network solutions \cite{HR95}.
See the work in \cite{GP06} for an exception not requiring this condition.
The counterpart for the present model is the balance of the momentum flux $z =\frac{q}{1-\rho}$ which has been used here for merging junctions.
However, for diverging junctions, the balance of the quantity $z$ can in general not be prescribed anymore, if one requires that the coupling conditions lead to solutions at the nodes which remain inside the traffic domain
$0 \le \rho \le 1$ and $0 \le q \le \rho$.
In general, the Riemann problem is not solvable inside this domain.
We note that this is also true for the ARZ equations. In particular, the coupling conditions for the ARZ-equations in \cite{HR,KCGG} do not guarantee that the network solution at the nodes
remains in $0 \le \rho \le 1$ and $0 \le q \le \rho$. This is easily seen by looking, for example, at the construction
in \cite{HR}, where values outside this region are obtained in general.
As a final remark, we note that the present model leads to much simpler, explicitly solvable conditions compared to the ARZ model. This allows us to investigate the coupling problem in more detail, see the asymptotic investigation as $\epsilon \rightarrow 0$ in the following sections.
\section{An asymptotic procedure for the relaxation system on networks in the zero relaxation limit}
\label{asyproc}
In this section we relate the coupling conditions for the scalar conservation law to coupling conditions of the nonlinear relaxation system in the limit $\epsilon \rightarrow 0$.
This is done via a matched asymptotic expansion using a boundary layer analysis around the node.
This leads to the consideration of a half-space problem for each lane at the node. We refer to \cite{BSS84,BLP79,CGS,N99} for boundary layers of kinetic equations and to \cite{AM04,LX96,NT01,WX99,WY99} for investigations of boundary layers for hyperbolic relaxation systems and kinetic equations.
The general procedure is as follows: a half space layer problem is determined by a rescaling of the spatial coordinate on each lane in a spatial layer
near the node.
The coupling conditions for the layer problems are given by the coupling conditions for the relaxation system.
Finally, the asymptotic values of the layer problems are matched to half Riemann problems for the macroscopic equations.
This then gives the macroscopic coupling conditions for the LWR equations.
\subsection{The matched asymptotic expansion on the network}
We consider a single node and ingoing and outgoing lanes $[x_L^i , x_R^i]$ numbered by $i$.
The network relaxation system is given on each lane $i$ by the relaxation equations (\ref{macro0}) for the quantities
\begin{align*}
\rho^i (x), q^i(x)
\end{align*}
and the coupling conditions from section \ref{sec:kineticcouplingconditions} for these values at the nodes, i.e. at
$x=x_L$ or $x=x_R$ depending on whether the lane is outgoing or ingoing.
Now, the solution of the relaxation system is approximated on each lane by an asymptotic expansion.
For outgoing lanes this is
\begin{align*}
\rho^i (x) \sim \rho^i_L (\frac{x-x_L}{\epsilon}) - \rho_L^i(\infty) + \rho^i_{LWR}(x)+ \mathcal{O}(\epsilon).
\end{align*}
Here $\rho^i_L(y), y \in [0,\infty)$ is the left layer solution on lane $i$ and $\rho^i_{LWR}$ is the LWR solution on this lane.
The LWR value at the node is given by
$$
\rho_{LWR}^i = \rho_{LWR}^i (x^i_L)= \rho^i_L (\infty)= \rho^i_K.
$$
For ingoing lanes
\begin{align*}
\rho^i (x) \sim \rho^i_R (\frac{x_R-x}{\epsilon}) - \rho_R^i (\infty) + \rho^i_{LWR}(x)+ \mathcal{O}(\epsilon)
\end{align*}
with the layer solution $\rho^i_R(y), y \in [0,\infty)$ and
$$
\rho_{LWR}^i = \rho_{LWR}^i (x_R^i)= \rho^i_R (\infty)= \rho^i_K.
$$
The coupling of the asymptotic expansions at the nodes means that
$
\rho_L^i (0), q^i_L(0)
$
for outgoing lanes and
$
\rho_R^i (0), q^i_R(0)
$
for ingoing lanes
fulfill the coupling conditions for the relaxation system, but not the characteristic equations (\ref{eq:merge_char}) or (\ref{eq:div_char}).
\begin{remark}
We note that the quantities $\rho_L^i (0), q^i_L(0)$
for outgoing lanes and
$\rho_R^i (0), q^i_R(0)$ for the ingoing lanes, which are denoted later on by $\rho_0^i, q^i_0$, are in general not equal to the values $\rho^i,q^i$ of the solution of the relaxation system at the nodes! Both fulfill the coupling conditions, but have different characteristic equations. The transition from $\rho^i,q^i$ to $\rho_R^i (0), q^i_R(0)$ is given through a
layer in time depending on $\epsilon$.
See Figure \ref{couplingtotal} for a graphical discussion of the situation at the node in state space and Figure \ref{junctionvalues} for the corresponding time development of the solutions at the node.
\end{remark}
Initial conditions $(\rho^i_{init} (x) , q^i_{init} (x))$ on the lanes are chosen in equilibrium, i.e.
$q^i_{init} (x) = F(\rho^i_{init} (x))$ and the corresponding values at the nodes are denoted by $\rho_B^i= \rho_{init}^i(x_L)$ or $\rho_B^i= \rho_{init}^i(x_R)$ for outgoing and ingoing roads respectively.
In the following we investigate first the layer equations and their asymptotic states and then the admissible half-Riemann problems for the macroscopic equations.
Showing the validity of the asymptotic procedure is then equivalent to matching the asymptotic states of the
layer problems with the admissible boundary conditions for the LWR problem (i.e. half-Riemann problems for the LWR equations) and proving that there is a unique
matching.
This will be done in section \ref{macroscopiccc} considering the case of a merging junction with fair-merging conditions.
The other cases will be treated numerically in Section \ref{Numerical results}. We proceed by discussing the layer problems.
\subsection{Layer solutions for the relaxation equation}
\label{kinlayer}
We investigate the layers of the relaxation system at the left (outgoing lanes) and the right (ingoing lanes) boundary.
\subsubsection{Left layer}
Consider the left boundary of the domain being located at $x=x_L$.
Starting from equation \eqref{macro0}, rescaling space as $y= \frac{x-x_L}{\epsilon}$, and neglecting higher order terms in $\epsilon$, one obtains
the layer equations for the left boundary for the layer solutions $(\rho_L,q_L)$ and $y \in [0, \infty)$ as
\begin{align}
\label{layerproblem}
\begin{aligned}
\partial_y q_L &=0\\
\frac{q_L}{1-\rho_L} \partial_y \rho_L + (1-\frac{q_L}{1-\rho_L})\partial_y q_L & =- \left(q_L-F(\rho_L) \right) \ .
\end{aligned}
\end{align}
This yields
\begin{align}\label{layer}
\begin{aligned}
q_L &=C\\
\partial_y \rho_L &= (1-\rho_L) \frac{F(\rho_L)-C}{C}\ .
\end{aligned}
\end{align}
For $0<C < F(\rho^{\star})=\sigma$, where $\rho^{\star}$ denotes the point where the maximum of $F$ is attained,
the above problem has two relevant fixed points
$$\rho_{-} (C)\le \rho^{\star}\quad \text{and}\quad
\rho_+ (C)= \tau (\rho_-) \ge \rho^{\star}\ .$$
Here, $\tau(\rho)\neq \rho$ is defined by $F(\tau(\rho))= F(\rho)$.
$\rho_-$ is unstable, $\rho_+ $ is stable. The domain of attraction of the stable fixed point $\rho_+$ is the interval $(\rho_-,1)$.
The third fixed point $\rho=1$ is not relevant for the further matching procedure, since it requires $C=0$ in the macroscopic limit. In the case $C=0$ we have the unstable fixed point $\rho_+ = 1$ and the stable fixed point $\rho_- =0$ with domain of attraction $[0,1)$.
Moreover, we note that for $C=F(\rho^{\star})$ we have $\rho_- = \rho_+ = \rho^{\star}$ and
all solutions with initial values above $\rho^{\star}$ converge towards $\rho^{\star}$, all other solutions diverge.
\begin{remark}
In case of the LWR model with $F(\rho) = \rho(1-\rho)$, see Figure \ref{figfund}, we have $$\rho_{\pm} (C) = \frac{1}{2} (1 \pm \sqrt{1-4 C})\ ,$$
with $C< \frac{1}{4}$.
For $C=\frac{1}{4}$ we have $\rho_- = \rho_+ = \frac{1}{2}$. Moreover, $\tau(\rho) = 1-\rho$.
\begin{figure}[h]
\center
\externaltikz{LWR2}{
\begin{tikzpicture}[scale = 3]
\def \rhobar {0.3}
\draw[->] (0,0)--(1.2,0) node[below]{$\rho$};
\draw[->] (0,0)--(0,1.2) node[left]{$F(\rho)$};
\draw[dashed] (0,{4*\rhobar*(1-\rhobar)})--(1.2,{4*\rhobar*(1-\rhobar)}) node at (-0.1,{4*\rhobar*(1-\rhobar)}) {$C$}; ;
\draw[black,line width=1pt,domain=0.0:1,smooth,variable=\x,] plot ({\x},{4*\x*(1-\x)}) ;
\draw[dashed] (\rhobar,{4*\rhobar*(1-\rhobar)})--(\rhobar,0) node[below]{$\rho_-$};
\draw[dashed] (1-\rhobar,{4*\rhobar*(1-\rhobar)})--(1-\rhobar,0) node[below]{$\rho_+ $};
\draw[dashed] (0.5,{4*0.5*(1-0.5)})--(0.5,0) node[below]{$\rho^*$}node at (0.5,1.1) {$\sigma$};
\end{tikzpicture}
\hspace{0.5cm}
\begin{tikzpicture}[scale = 3]
\draw[->] (0,0)--(1.2,0) node[below]{$C$};
\draw[->] (0,0)--(0,1.2) node[left]{$\rho$};
\draw[dashed] (0,0.5)--(1.2,0.5) ;
\draw[dashed] (1.0,0)--(1.0,1.2) ;
\node at (0.5,0.25) {$\rho_-$};
\node at (0.5,0.75) {$\rho_+$};
\node[left] at (0,0.5) {$\rho^\star$};
\node[below] at (1.0,0) {$\sigma$};
\draw[black,line width=1pt,domain=0.0:1.0,smooth,variable=\x,samples = 100] plot ({\x},{0.5*(1-sqrt(1-\x))});
\draw[black,line width=1pt,domain=0.0:1.0,smooth,variable=\x,samples = 100] plot ({\x},{0.5*(1+sqrt(1-\x))});
\end{tikzpicture}
}
\caption{Fundamental diagram $F(\rho)$ and fixed points $\rho_\mp$ of the layer problem.}
\label{figfund}
\end{figure}
\end{remark}
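The stability behaviour can be checked numerically. The following Python sketch (explicit Euler; step size and integration horizon are chosen ad hoc) integrates the left layer equation (\ref{layer}) for the LWR flux and confirms that initial data in $(\rho_-,1)$ are attracted by $\rho_+$, while data below $\rho_-$ leave the physical domain.
\begin{verbatim}
import numpy as np

F = lambda r: r * (1.0 - r)             # LWR flux, sigma = 1/4

def left_layer(rho, C, y_max=200.0, dy=1.0e-3):
    for _ in range(int(y_max / dy)):
        rho += dy * (1.0 - rho) * (F(rho) - C) / C
        if not 0.0 <= rho <= 1.0:       # solution leaves the domain
            break
    return rho

C = 0.16                                # C < sigma: rho_- = 0.2, rho_+ = 0.8
print(left_layer(0.25, C))              # converges to rho_+ = 0.8
print(left_layer(0.19, C))              # below rho_-: leaves [0, 1]
\end{verbatim}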
\subsubsection{Right layer}
For the right boundary at $x_R$ a scaling $y=\frac{x_R-x}{\epsilon}$ gives the layer equations for $(\rho_R,q_R)$ and $y\in[0, \infty)$ as
\begin{align}\label{layerright}
\begin{aligned}
q_R &=C\\
- \partial_y \rho_R &= (1-\rho_R) \frac{F(\rho_R)-C}{C}\ .
\end{aligned}
\end{align}
For $0 < C < F(\rho^{\star})$
the above problem has again two relevant fixed points
$$\rho_{-}(C) \le \rho^{\star}\ ,\
\rho_+ (C) = \tau (\rho_-) \ge \rho^{\star}\; .$$
In this case
$\rho_- $ is stable, $\rho_+ $ is unstable.
The domain of attraction of the stable fixed point $\rho_-$ is $[0,\rho_+)$.
For $C=F(\rho^{\star})=\sigma$ we have $\rho_-= \rho_+ = \rho^{\star}$ and
all solutions with initial values below $\rho^{\star}$ converge towards $\rho^{\star}$, all other solutions converge to non-admissible states.
For $C=0$ we have the unstable fixed point $\rho_+ = 1$ and the stable fixed point $\rho_- =0$ with domain of attraction $[0,1)$.
\subsubsection{Summary}
\label{summary}
In summary, we have the following cases, denoting by (U) the unstable fixed points and by (S) the stable ones.
Moreover, we use, as before, the notation $\rho_K$ for the values $\rho_L(\infty)$ and $\rho_R(\infty)$ at infinity
and the notation $\rho_0$ for the values at $y=0$, i.e. $\rho_L(0)$ and $\rho_R(0)$.
\\
\paragraph{Layer Problem at the left boundary}
\begin{align*}
&\left.\begin{array}{lll}
\rho_K = \rho_-(C) \quad &\Rightarrow\quad \rho_0 =\rho_-(C), &0 \le C< \sigma
\end{array}\right\}
&\quad \text{(U)}\\
&\left.\begin{array}{lll}
\rho_K = \rho_+(C) \quad &\Rightarrow\quad \rho_0 \in (\rho_-(C),1), &0< C < \sigma\\
\rho_K = \rho^\star \quad &\Rightarrow\quad \rho_0 \in [\rho^\star,1),& C=\sigma\\
\rho_K = 1 \quad &\Rightarrow\quad \rho_0 \in (0,1],& C=0
\end{array}\right\}
&\quad \text{(S)}
\end{align*}
\paragraph{The Layer Problem at the right boundary}
\begin{align*}
&\left.\begin{array}{lll}
\rho_K = \rho_+(C) \quad &\Rightarrow\quad \rho_0 =\rho_+(C), &0 \le C< \sigma
\end{array}\right\}
&\quad \text{(U)}\\
&\left.\begin{array}{lll}
\rho_K = \rho_-(C) \quad &\Rightarrow\quad \rho_0 \in [0,\rho_+(C)), &0 < C < \sigma\\
\rho_K = \rho^\star \quad &\Rightarrow\quad \rho_0 \in [0,\rho^\star],& C=\sigma\\
\rho_K = 0 \quad &\Rightarrow\quad \rho_0 \in [0,1),& C=0
\end{array}\right\}
&\quad \text{(S)}
\end{align*}
We use for the three cases of the stable fixed point (S) the notation
$$
\rho_K = \rho_+(C) \quad \Rightarrow\quad \rho_0 \in \lceil \rho_-(C),1 \rfloor, 0 \le C \le \sigma
$$
for the left boundary and
$$
\rho_K = \rho_-(C) \quad \Rightarrow\quad \rho_0 \in \lceil 0, \rho_+(C) \rfloor, 0 \le C \le \sigma
$$
for the right boundary.
\subsection{Half-Riemann problems for the limit conservation law}
\label{Riemann}
We consider the limit conservation law $ \partial_t \rho + \partial_x F(\rho)=0$. The initial trace at the boundary of the scalar equation is, as before, denoted by $\rho_B$.
For a matching of layer solutions and solutions of the scalar conservation law, we require those states $\rho_K$ which can be connected to $\rho_B$ by LWR-waves (shocks and rarefaction waves) with
non-negative velocity at the left and non-positive velocity at the right boundary. They are summarized in the following:\\
\paragraph{The half-Riemann problem at the left boundary}
\begin{align*}
\rho_B&\leq \rho^\star \ (\text{RP 1}) \quad &\Rightarrow\quad \rho_K &\in [0,\rho^\star ]\\
\rho_B&> \rho^\star \ (\text{RP 2}) \quad &\Rightarrow\quad \rho_K &\in [0,\tau(\rho_B)]\cup\{\rho_B\}
\end{align*}
\paragraph{The half-Riemann problem at the right boundary}
\begin{align*}
\rho_B&\geq \rho^\star \ (\text{RP 1}) \quad &\Rightarrow\quad \rho_K &\in [ \rho^\star ,1]\\
\rho_B&< \rho^\star \ (\text{RP 2}) \quad &\Rightarrow\quad \rho_K &\in \{\rho_B\}\cup [\tau(\rho_B),1]
\end{align*}
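For the LWR flux $F(\rho)=\rho(1-\rho)$ these admissible flux intervals can be encoded by the classical demand and supply functions, as in the following Python sketch (our formulation):
\begin{verbatim}
F = lambda r: r * (1.0 - r)
SIGMA = 0.25                            # F(rho*) with rho* = 1/2

def demand(rho_B):                      # ingoing road, right boundary
    return F(rho_B) if rho_B <= 0.5 else SIGMA

def supply(rho_B):                      # outgoing road, left boundary
    return SIGMA if rho_B <= 0.5 else F(rho_B)

print(demand(0.1), supply(0.2))         # 0.09 0.25
\end{verbatim}
The admissible fluxes of the half-Riemann problems above are exactly the intervals $[0,\text{demand}(\rho_B)]$ at the right and $[0,\text{supply}(\rho_B)]$ at the left boundary.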
\section{The asymptotic procedure for merging junctions}
\label{macroscopiccc}
Here we consider the merging case with coupling conditions (\ref{eqz}), i.e. the equality of densities, in detail.
First we investigate the layers of the relaxation system at the nodes coupled to each other via the coupling conditions and determine resulting conditions on their asymptotic states. Then, we match these results to Riemann solutions of the macroscopic problems on each of the roads. The main result of this section is to show that the asymptotic procedure starting from the relaxation network with the conditions (\ref{eqz}) leads in the limit $\epsilon \rightarrow 0$ to the LWR-network with the fair merging conditions (\ref{fair}).
Assuming the boundary traces $\rho_B^i,i=1,2,3$ on the three roads to be given, we have to determine the new states $\rho_K^i$ at the node. On the one hand $\rho_K^i$ are the asymptotic states of the respective layer problems, on the other hand they are the right (for road 1 and 2) or left (for road 3) states of the half-Riemann problems for the LWR equations, i.e. the boundary conditions for the LWR equations.
The states at the junction in the asymptotic limit (corresponding to $y=0$ for the layers) are denoted in the following by
$\rho^i_0$ and determined together with the other values.
We have to consider eight different configurations of Riemann problems.
For each of them all possible combinations with stable or unstable layer solutions have to be discussed.
Not admissible combinations are not listed.
We first give
a detailed discussion of the coupling of the layer solutions in Section \ref{layerproof} and then discuss the matching of the layer solutions to the half Riemann problems on the respective roads in Section
\ref{proof}.
\begin{remark}
A similar procedure could be used for the other conditions. We limit ourselves to a numerical investigation, see Section \ref{Numerical results}.
\end{remark}
\subsection{Coupling the layers}
\label{layerproof}
Here, we consider the coupling of the layers at the node.
The states at the junction (corresponding to the ingoing states at $y=0$ for the layers) are
$\rho_0^i$.
Each layer can have either a stable solution (S) or an unstable solution (U).
Thus, for three lanes we have eight possible combinations, which we denote by U/S-U/S-U/S.
\noindent{\bf Case 1, U-U-U.}
We have $\rho_0^1 = \rho_+ (C^1), \rho_0^2= \rho_+(C^2) ,\rho_0^3= \rho_-(C^3)$, compare subsection \ref{summary}.
The coupling conditions give
\begin{align*}
\rho_+(C^1) &= \rho_+(C^2)= \rho_-(C^3)\\
C^3&= C^1+C^2
\end{align*}
with $0 \le C^1,C^2,C^3< \sigma$.
The second equality gives $C^2= C^3=\sigma$. This is not consistent with the range of $C^2$ and $C^3$.
The case is not admissible.
\noindent{\bf Case 2, S-U-U}
According to subsection \ref{summary}, we have $\rho^1_0 \in \lceil 0,\rho_+ (C^1)\rfloor$ and $ \rho^2_0= \rho_+(C^2), \rho^3_0= \rho_-(C^3)$.
Inserting into the coupling conditions gives
\begin{eqnarray*}
\rho_0^1 &= &\rho_+(C^2)= \rho_-(C^3)\\
C^3 & =&C^1+C^2
\end{eqnarray*}
with $0 \le C^1 \le \sigma$ and $0 \le C^2,C^3< \sigma$.
Again the second equation gives $C^2=C^3= \sigma$ which is not in the range of $C^2,C^3$.
The case is not admissible.
\noindent{\bf Case 3, U-S-U}
We have $\rho_0^1 = \rho_+ (C^1), \rho_0^2 \in [0,\rho_+(C^2)), \rho_0^3= \rho_-(C^3)$.
The case is symmetric to the above and not admissible.
\noindent{\bf Case 4, U-U-S}
We have $\rho_0^1 = \rho_+ (C^1), \rho_0^2 = \rho_+(C^2)$, $ \rho_0^3\in \lceil\rho_-(C^3),1\rfloor$.
The coupling conditions give
\begin{align*}
\rho_+(C^1) &= \rho_+(C^2)= \rho_0^3\\
C^3&=C^1+C^2
\end{align*}
with $0 \le C^1,C^2 < \sigma$ and $0 \le C^3 \le \sigma$.
This gives $C^1=C^2= \frac{C^3}{2}$ and $$ \rho_0^1 = \rho_0^2 = \rho_0^3 =\rho_+(\frac{C^3}{2}).$$
\noindent{\bf Case 5, U-S-S}
We have $\rho_0^1 = \rho_+ (C^1), \rho_0^2 \in \lceil0,\rho_+(C^2)\rfloor$
and $ \rho_0^3\in \lceil\rho_-(C^3),1\rfloor$ with
$0 \le C^1 < \sigma$ and $0 \le C^2, C^3 \le \sigma$.
We have
\begin{align*}
\rho_+(C^1) &= \rho_0^2= \rho_0^3\\
C^3&=C^1+C^2.
\end{align*}
This gives $\rho_0^2 = \rho_0^3 = \rho_+(C^1) =\rho_+(C^3-C^2)$ with
the requirement $0 \le C^3-C^2 \le \sigma$ or
$ C^3 \ge C^2 $ and $\rho_0^2 = \rho_0^3 =\rho_+(C^3-C^2) \in [\rho_-(C^3),\rho_+(C^2)]$. This leads to $\rho_+(C^3-C^2) \le \rho_+(C^2)$ or $C^3-C^2 \ge C^2$ or $C^3 \ge 2 C^2$.
Altogether, we have for $ 2 C^2 \le C^3$ and $C^1 = C^3-C^2$
\begin{eqnarray*}
\rho_0^1&=\rho_0^2 = \rho_0^3 = \rho_+(C^3-C^2)\ .
\end{eqnarray*}
\noindent{\bf Case 6, S-U-S}
We have $\rho^1_0 \in \lceil0,\rho_+ (C^1)\rfloor$ and $\rho_0^2 =\rho_+(C^2), \rho_0^3\in \lceil\rho_-(C^3),1\rfloor$ with
$0 \le C^2 < \sigma$ and $0 \le C^1, C^3 \le \sigma$.
The case is symmetric to case 5.
For $ 2 C^1 \le C^3 $ and $C^2 = C^3-C^1$ we have
\begin{eqnarray*}
\rho_0^1=\rho_0^2 &= \rho_0^3 = \rho_+(C^3-C^1)\ .
\end{eqnarray*}
\noindent{\bf Case 7, S-S-U}
We have $\rho_0^1 \in \lceil 0,\rho_+ (C^1)\rfloor $ and $\rho_0^2 \in \lceil0,\rho_+(C^2)\rfloor$
and $ \rho_0^3 = \rho_-(C^3)$ with $0 \le C^1,C^2 \le \sigma$ and $0 \le C^3 < \sigma$.
The coupling conditions give
\begin{align*}
\rho_0^1 &= \rho_0^2= \rho_-(C^3)\\
C^3&=C^1+C^2\ .
\end{align*}
This gives $\rho_0^1 = \rho_0^2 = \rho_-(C^3) $
with the condition $0 \le C^1+C^2 < \sigma$. Thus, for $0 \le C^1+C^2 < \sigma$ we have
$$\rho_0^1 = \rho_0^2 = \rho_0^3 = \rho_-(C^1+C^2)\ .$$
\noindent{\bf Case 8, S-S-S}
We have $\rho_0^1 \in \lceil 0,\rho_+ (C^1)\rfloor, \rho_0^2 \in \lceil0,\rho_+(C^2)\rfloor, \rho_0^3 \in\lceil \rho_-(C^3),1\rfloor$
with $0 \le C^1,C^2,C^3 \le \sigma$.
The conditions are
\begin{align*}
\rho_0^1 &= \rho_0^2= \rho_0^3\\
C^3&=C^1+C^2.
\end{align*}
The values of $\rho^1_0 = \rho^2_0=\rho^3_0$ are not uniquely determined, but they are restricted to the interval $ [\rho_-(C^1+C^2),\min(\rho_+(C^1),\rho_+(C^2))]$.
These considerations yield all possible combinations of layer problems at the node. Now, they have to be matched
to the half-Riemann problems at the respective lanes.
\subsection{Matching of Riemann problem and layer equations}
\label{proof}
Assuming the initial states $\rho_B^i, i=1, 2,3$ to be given, we have to determine the fluxes $C^i$ and new states $\rho_K^i$ at the node. As mentioned, on the one hand $\rho_K^i$ are the asymptotic states of the respective layer problems fulfilling the conditions in the last section \ref{layerproof}.
On the other hand they are the left (road 1 and 2) or right (road 3) states of the half Riemann problems which have to be connected with $\rho_B^i$.
As before, $\rho_0^i$ are the states at the junction in the limit $\epsilon \rightarrow 0$.
We consider eight different configurations for the states $\rho_B^i$ corresponding to the possible combinations of different half Riemann problems.
For each of them all possible combinations with stable or unstable layer solutions have to be discussed.
Not admissible combinations are not listed.
We consider the cases ordered in terms of
the different possible combinations of Riemann problems on the 3 roads using the notation
RP1/2-1/2-1/2 for the respective combination of the half Riemann problems.\\
\noindent{\bf Case 1, RP1-1-1} $\rho_B^1 \ge \rho^\star , \rho_B^2 \ge \rho^\star , \rho_B^3 \le \rho^\star $.
From Section \ref{Riemann} we obtain
\begin{align*}
\rho_K^1 &\in [\rho^\star,1] :
& (U) &\text{ or } ((S) \text{ with } C^1=\sigma) \\
\rho_K^2 &\in [\rho^\star,1]:
& (U) &\text{ or } ((S) \text{ with } C^2=\sigma)\\
\rho_K^3 &\in [0,\rho^\star]:
& (U) &\text{ or } ((S) \text{ with } C^3=\sigma)
\end{align*}
Then, the discussion in Section \ref{layerproof} leads to 5 different cases:
\begin{enumerate}
\item[{\bf UUS}] with $C^3 = \sigma$ and $C^1=C^2=\frac{\sigma}{2}$ and $\rho_0^3 =\rho_+(\frac{\sigma}{2})$.
\item[{\bf USS}] with $C^2 = C^3 = \sigma$
which contradicts $C^3 \ge 2 C^2$.
\item[{\bf SUS}] with $C^1 = C^3 = \sigma$ which contradicts $C^3 \ge 2 C^1$.
\item[{\bf SSU}] with $C^1 = C^2 = \sigma$ and a contradiction to $C^1+C^2 \le \sigma$.
\item[{\bf SSS}] with $C^1 = C^2 = C^3 = \sigma$, which gives a contradiction to the balance of fluxes.
\end{enumerate}
This gives $C^1=C^2=\frac{\sigma}{2}, C^3=\sigma$ and $\rho_0^i = \rho_+(\frac{\sigma}{2})$.
The values for
$\rho_K^i $ follow directly.
\vspace{0.3cm}
\noindent{\bf Case 2, RP1-1-2} $\rho_B^1 \ge \rho^\star , \rho_B^2 \ge \rho^\star , \rho_B^3 \ge \rho^\star $.
\begin{align*}
\rho_K^1 &\in [\rho^\star,1] :
& (U) &\text{ or } ((S) \text{ with } C^1=\sigma) \\
\rho_K^2 &\in [\rho^\star,1]:
& (U) &\text{ or } ((S) \text{ with } C^2=\sigma)\\
\rho_K^3 &\in [0,\tau(\rho_B^3)] \cup \{\rho_B^3\}:
& ((U) \text{ with } C^3 \le F(\rho_B^3)) &\text{ or } ((S)
\text{ with } C^3=F(\rho_B^3))
\end{align*}
\begin{enumerate}
\item[{\bf UUS}] with $C^3 = F(\rho_B^3)$ and $C^1=C^2= \frac{1}{2} F(\rho_B^3)$ and $\rho_0^3 =\rho_+(\frac{1}{2}F(\rho_B^3)).$
\item[{\bf USS}] with $C^2 = \sigma$
which contradicts $C^3 \ge 2 C^2$.
\item[{\bf SUS}] with $C^1 = \sigma$ which contradicts $C^3 \ge 2 C^1$.
\item[{\bf SSU}] with $C^1 = C^2 = \sigma$ and a contradiction to $C^1+C^2 \le \sigma$.
\item[{\bf SSS}] with $C^1 = C^2 = C^3 = \sigma$, which gives a contradiction to the balance of fluxes.
\end{enumerate}
This gives $C^1=C^2=\frac{F(\rho_B^3)}{2}, C^3=F(\rho_B^3)$ and $\rho_0^i = \rho_+(\frac{1}{2}F(\rho_B^3))$.
\vspace{0.3cm}
\noindent{\bf Case 3, RP1-2-1} $\rho_B^1 \ge \rho^\star , \rho_B^2 \le \rho^\star , \rho_B^3 \le \rho^\star $.
\begin{align*}
\rho_K^1 &\in [\rho^\star,1] :
& (U) &\text{ or } ((S) \text{ with } C^1=\sigma) \\
\rho_K^2 &\in [\tau(\rho_B^2),1] \cup \{\rho_B^2\}:
& ((U) \text{ with } C^2 \le F(\rho_B^2)) &\text{ or } ((S)
\text{ with } C^2=F(\rho_B^2)) \\
\rho_K^3 &\in [\rho^\star,1]:
& (U) &\text{ or } ((S) \text{ with } C^3=\sigma)
\end{align*}
\begin{enumerate}
\item[{\bf UUS}] with $C^3 = \sigma$ which gives $C^1 = C^2= \frac{\sigma}{2}$. Moreover, $C^2\le F(\rho_B^2)$. This is possible, if $\frac{\sigma}{2} \le F(\rho_B^2)$.
Then, $\rho_0^3 = \rho_+(\frac{\sigma}{2})$
\item[{\bf USS}] with $C^3 = \sigma, C^2=F(\rho_B^2)$.
$C^3 \ge 2 C^2$ gives the requirement $\frac{\sigma}{2} \ge F(\rho_B^2)$. Moreover, we have $C^1 = \sigma-F(\rho_B^2) $ and
$\rho_0^2 = \rho_0^3= \rho_+(C^1)$.
\item[{\bf SUS}] with $C^1 = \sigma$ which contradicts $C^3 \ge 2 C^1$.
\item[{\bf SSU}] with $C^1 = \sigma$ and $C^2 = F(\rho_B^2)$. This is only possible for $\rho_B^2 = 0$
and $C^2 =0$. Then $C^3 =\sigma$ and $\rho_0^i = \rho_-(\sigma) = \rho^\star$.
\item[{\bf SSS}] with $C^1 = C^3 = \sigma$ and $C^2 = F(\rho_B^2)$. This gives again $\rho_B^2=0$ and
$C^2=0$. Then $\rho_0^i \in [\rho_-(C^1+C^2),\min(\rho_+(C^1),\rho_+(C^2))]$ gives
$\rho_0^i \in [\rho_-(\sigma),\rho_+(\sigma)]$. This leaves only $\rho_0^i =\rho^\star$.
\end{enumerate}
This gives for $\frac{\sigma}{2} \le F(\rho_B^2) $ that $C^1 = C^2= \frac{C^3}{2}=\frac{\sigma}{2}$ and $\rho_0^i = \rho_+(\frac{\sigma}{2})$.
For $\frac{\sigma}{2} \ge F(\rho_B^2) $ one has $C^1 = \sigma-F(\rho_B^2) $ , $C^3 = \sigma, C^2=F(\rho_B^2)$ and
$\rho_0^i = \rho_+( \sigma-F(\rho_B^2) )$.
\vspace{0.3cm}
\noindent{\bf Case 4, RP2-1-1} $\rho_B^1 \le \rho^\star , \rho_B^2 \ge \rho^\star , \rho_B^3 \le \rho^\star $.
This case is symmetric to Case 3.
We have for $\frac{\sigma}{2} \ge F(\rho_B^1)$ that $C^1 = F(\rho_B^1) , C^3 =\sigma$ , $C^2 = \sigma - F(\rho_B^1)$ and
$\rho_0^i = \rho_+(\sigma-F(\rho_B^1))$.
For $\frac{\sigma}{2} \le F(\rho_B^1) $ one has $C^1 = C^2= \frac{C^3}{2} = \frac{\sigma}{2}$ and
$\rho_0^i = \rho_+(\frac{\sigma}{2})$.
\vspace{0.3cm}
\noindent{\bf Case 5, RP1-2-2} $\rho_B^1 \ge \rho^\star , \rho_B^2 \le \rho^\star , \rho_B^3 \ge \rho^\star .$
\begin{align*}
\rho_K^1 &\in [\rho^\star,1] :
& (U) &\text{ or } ((S)
\text{ with } C^1=\sigma) \\
\rho_K^2 &\in [\tau(\rho_B^2),1] \cup \{\rho_B^2\}:
& ((U)\text{ with } C^2 \le F(\rho_B^2)) &\text{ or } ((S) \text{ with } C^2= F(\rho_B^2))\\
\rho_K^3 &\in [0,\tau(\rho_B^3)] \cup \{\rho_B^3\}:
&((U)\text{ with } C^3 \le F(\rho_B^3)) &\text{ or } ((S) \text{ with } C^3= F(\rho_B^3))
\end{align*}
\begin{enumerate}
\item[{\bf UUS}] with $C^2 \le F(\rho_B^2) $ and $C^3= F(\rho_B^3)$. If $ F(\rho_B^3)\le 2 F(\rho_B^2)$ then
$C^1= C^2 = \frac{F(\rho_B^3)}{2}$ and
$\rho_0^3 = \rho_+(\frac{C^3}{2})$.
\item[{\bf USS}] with $C^2 = F(\rho_B^2) , C^3 = F(\rho_B^3) $. With $C^3\ge 2 C^2$ or $F(\rho_B^3) \ge 2 F(\rho_B^2)$ we have $C^1 =C^3-C^2$.
\item[{\bf SUS}] with $C^1 = \sigma , C^2 \le F(\rho_B^2), C^3 = F(\rho_B^3)$, which gives a contradiction to $C^3 \ge 2 C^1$.
\item[{\bf SSU}] with $C^1 = \sigma$ and $C^2 = F(\rho_B^2),C^3 \le F(\rho_B^3) $. This is only possible for $\rho_B^2 = 0$. Then $C^3 =\sigma, \rho_B^3 = \rho^\star$ and $\rho_0^1=\rho_-(\sigma) = \rho^\star$.
\item[{\bf SSS}] with $C^1 = \sigma$ and $C^2 =F(\rho_B^2) , C^3 =F(\rho_B^3) $. This is only possible, if $C^2 =0$ and $\rho_B^2 =0$. This yields $C^3=\sigma$ and $\rho_B^3= \rho^\star$.
Then $\rho_0^i \in [\rho_-(C^1+C^2),\min(\rho_+(C^1),\rho_+(C^2))]$ gives
$\rho_0^i \in [\rho_-(\sigma),\rho_+(\sigma)]$, which leaves only $\rho_0^i =\rho^\star$.
\end{enumerate}
This gives for $F(\rho_B^3) \le 2 F(\rho_B^2)$ that $C^1=\frac{F(\rho_B^3)}{2}=C^2, C^3=F(\rho_B^3)$ and
$\rho_0^i = \rho_+(\frac{F(\rho_B^3)}{2})$.
For $F(\rho_B^3) \ge 2 F(\rho_B^2) $ one has $C^1=F(\rho_B^3)- F(\rho_B^2), C^2 = F(\rho_B^2), C^3=F(\rho_B^3)$ and
$\rho_0^i = \rho_+(F(\rho_B^3)- F(\rho_B^2))$.
\vspace{0.3cm}
\noindent{\bf Case 6, RP2-1-2} $\rho_B^1 \le \rho^\star , \rho_B^2 \ge \rho^\star , \rho_B^3 \ge \rho^\star $.
This case is symmetric to case 5.
We have for $F(\rho_B^3) \le 2 F(\rho_B^1)$ that $C^1=\frac{F(\rho_B^3)}{2}=C^2, C^3=F(\rho_B^3)$ and
$\rho_0^i = \rho_+(\frac{F(\rho_B^3)}{2})$.
For $F(\rho_B^3 ) \ge 2 F(\rho_B^1) $ one has $C^1=F(\rho_B^1), C^2 = F(\rho_B^3)- F(\rho_B^1), C^3=F(\rho_B^3)$
and $\rho_0^i = \rho_+(F(\rho_B^3)- F(\rho_B^1))$.
\vspace{0.3cm}
\noindent{\bf Case 7, RP2-2-1} $\rho_B^1 \le \rho^\star , \rho_B^2 \le \rho^\star , \rho_B^3 \le \rho^\star $.
\begin{align*}
\rho_K^1 &\in [\tau(\rho_B^1),1] \cup \{\rho_B^1\}:
& ((U)\text{ with } C^1 \le F(\rho_B^1)) &\text{ or } ((S) \text{ with } C^1= F(\rho_B^1))\\
\rho_K^2 &\in [\tau(\rho_B^2),1] \cup \{\rho_B^2\}:
&((U)\text{ with } C^2 \le F(\rho_B^2)) &\text{ or } ((S) \text{ with } C^2= F(\rho_B^2))\\
\rho_K^3 &\in [0,\rho^\star] :
& (U) &\text{ or } ((S)
\text{ with } C^3=\sigma)
\end{align*}
\begin{enumerate}
\item[{\bf UUS}] with $C^1 \le F(\rho_B^1) $ and $C^2 \le F(\rho_B^2) $. $C^3=\sigma$ yields
$C^1= C^2 = \frac{\sigma}{2}$, if $F(\rho_B^1) \ge \frac{\sigma}{2}$ and $F(\rho_B^2) \ge \frac{\sigma}{2}$ .
Then $\rho_0^3 = \rho_+(\frac{C^3}{2})$.
\item[{\bf USS}] with $C^1 \le F(\rho_B^1) , C^3 =\sigma, C^2 = F(\rho_B^2) $. $C^3\ge 2 C^2$ is equivalent to
$\frac{\sigma}{2} \ge F(\rho_B^2)$. Moreover, $C^1 = \sigma - F(\rho_B^2) $ requires $F(\rho_B^1) +F(\rho_B^2) \ge \sigma$.
\item[{\bf SUS}] with $C^1 = F(\rho_B^1), C^2 \le F(\rho_B^2), C^3 =\sigma$. $C^3 \ge 2 C^1$ gives $ \frac{\sigma}{2} \ge F(\rho_B^1)$, $F(\rho_B^2) \ge \frac{\sigma}{2}$ and $C^2 =\sigma - F(\rho_B^1) \ge \frac{\sigma}{2}$.
Moreover, $\rho_0^1= \rho_+(C^3-C^1)$.
\item[{\bf SSU}] with $C^1 = F(\rho_B^1)$ and $C^2 = F(\rho_B^2) $. This gives $F(\rho_B^1)+ F(\rho_B^2)\le \sigma$ and $\rho_0^i =
\rho_-(C^3) $.
\item[{\bf SSS}] with $C^1 = F(\rho_B^1) $ and $C^2 =F(\rho_B^2) , C^3 =\sigma$. This is only possible, if $ F(\rho_B^1)+ F(\rho_B^2)= \sigma$. In this case, since $ \rho_0^i\in [\rho_-(C^1+C^2),\min(\rho_+(C^1),\rho_+(C^2))]$ we obtain
$ \rho_0^i\in [\rho_-(\sigma),\min(\rho_+(F(\rho_B^1)),\rho_+(F(\rho_B^2)))]$. This gives the restriction
$ \rho_0^i\in [\rho^\star,\min(\tau(\rho_B^1),\tau(\rho_B^2))]$ according to the range of $\rho_B^1, \rho_B^2 $.
\end{enumerate}
We obtain for $F(\rho_B^1)+F(\rho_B^2) \le \sigma$ (SSU) that $C^1=F(\rho_B^1), C^2=F(\rho_B^2), C^3=F(\rho_B^1)+F(\rho_B^2)$ and $\rho_0^i = \rho_-(F(\rho_B^1)+ F(\rho_B^2))$.
For $F(\rho_B^1)+F(\rho_B^2) \ge \sigma, F(\rho_B^1) \ge \frac{\sigma}{2}, F(\rho_B^2) \ge \frac{\sigma}{2}$ (UUS) one has $C^1= \frac{\sigma}{2}=C^2, C^3 =\sigma $ and $\rho_0^i = \rho_+(\frac{\sigma}{2})$.
For $F(\rho_B^1)+F(\rho_B^2) \ge \sigma, F(\rho_B^1) \le \frac{\sigma}{2}, F(\rho_B^2) \ge \frac{\sigma}{2}$ (SUS) one has $C^1=F(\rho_B^1), C^2=\sigma-F(\rho_B^1)$, $C^3=\sigma$ and $\rho_0^i = \rho_+(\sigma-F(\rho_B^1))$.
For $F(\rho_B^1)+F(\rho_B^2) \ge \sigma, F(\rho_B^1) \ge \frac{\sigma}{2}, F(\rho_B^2) \le \frac{\sigma}{2}$ (USS) one has $C^1=\sigma-F(\rho_B^2), C^2=F(\rho_B^2)$, $ C^3=\sigma$ and $\rho_0^i = \rho_+(\sigma- F(\rho_B^2))$.
\begin{remark}
We note that at the interfaces between the different conditions we obtain values $\rho_0^i \in [\rho^\star,\min(\rho_+(F(\rho_B^1)),\rho_+(F(\rho_B^2)))]$. This is exactly the interval for the $\rho^i$-values in case (SSS).
\end{remark}
\vspace{0.3cm}
\noindent{\bf Case 8, RP2-2-2} $\rho_B^1 \le \rho^\star , \rho_B^2 \le \rho^\star , \rho_B^3 \ge \rho^\star $.
\begin{align*}
\rho_K^1 &\in [\tau(\rho_B^1),1] \cup \{\rho_B^1\}:
& ((U)\text{ with } C^1 \le F(\rho_B^1)) &\text{ or } ((S) \text{ with } C^1= F(\rho_B^1))\\
\rho_K^2 &\in [\tau(\rho_B^2),1] \cup \{\rho_B^2\}:
& ((U)\text{ with } C^2 \le F(\rho_B^2)) &\text{ or } ((S) \text{ with } C^2= F(\rho_B^2))\\
\rho_K^3 &\in [0,\tau(\rho_B^3)] \cup \{\rho_B^3\}:
&((U)\text{ with } C^3 \le F(\rho_B^3)) &\text{ or } ((S) \text{ with } C^3= F(\rho_B^3))
\end{align*}
\begin{enumerate}
\item[{\bf UUS}] with $C^1 \le F(\rho_B^1) $, $C^2 \le F(\rho_B^2) $ and $C^3 = F(\rho_B^3) $. If $ F(\rho_B^3)\le 2 F(\rho_B^1)$ and $ F(\rho_B^3)\le 2 F(\rho_B^2)$ then
$C^1= C^2 = \frac{F(\rho_B^3)}{2}$ and
$\rho_0^3 = \rho_+(\frac{C^3}{2})$.
\item[{\bf USS}] with $C^1 \le F(\rho_B^1) , C^2 =F(\rho_B^2), C^3 = F(\rho_B^3) $. With $C^3\ge 2 C^2$ we have
$ F(\rho_B^3)\ge 2 F(\rho_B^2)$ and $ F(\rho_B^3)- F(\rho_B^2) \le F(\rho_B^1)$
or $ F(\rho_B^1)+ F(\rho_B^2) \ge F(\rho_B^3)$.
\item[{\bf SUS}] with $C^1 = F(\rho_B^1), C^2\le F(\rho_B^2),C^3 = F(\rho_B^3)$. $C^3 \ge 2 C^1$ gives $ F(\rho_B^3)\ge 2 F(\rho_B^1)$ and $F(\rho_B^3)- F(\rho_B^1) \le F(\rho_B^2)$ or $F(\rho_B^1)+ F(\rho_B^2) \ge F(\rho_B^3)$. Moreover $\rho_0^1= \rho_+(C^3-C^1)$.
\item[{\bf SSU}] with $C^1 = F(\rho_B^1)$ and $C^2 = F(\rho_B^2) ,C^3 \le F(\rho_B^3) $.
This is only possible for $F(\rho_B^1)+F(\rho_B^2) \le F(\rho_B^3)$. Moreover, $\rho_0^i =\rho_-(C^3) $.
\item[{\bf SSS}] with $C^1 = F(\rho_B^1) $ and $C^2 =F(\rho_B^2), C^3 =F(\rho_B^3) $. This is only possible, if $F(\rho_B^1)+F(\rho_B^2) = F(\rho_B^3)$.
We obtain
$ \rho_0^i\in [\rho_-(F(\rho_B^3)),\min(\rho_+(F(\rho_B^1)),\rho_+(F(\rho_B^2)))]$. This gives,
according to the range of the $\rho_B^i$, that $ \rho_0^i\in [\tau(\rho_B^3),\min(\tau(\rho_B^1),\tau(\rho_B^2))]$.
\end{enumerate}
This gives for $ F(\rho_B^3)\le 2 F(\rho_B^1)$ and $ F(\rho_B^3)\le 2 F(\rho_B^2)$ (UUS)
that $C^1=\frac{F(\rho_B^3)}{2}=C^2, C^3=F(\rho_B^3)$ and $\rho_0^i = \rho_+(\frac{F(\rho_B^3)}{2})$.
For $ F(\rho_B^3)\ge 2 F(\rho_B^2)$ and $ F(\rho_B^1)+ F(\rho_B^2) \ge F(\rho_B^3)$ (USS) one has
$C^2=F(\rho_B^2)$, $C^3=F(\rho_B^3)$, $C^1=F(\rho_B^3)-F(\rho_B^2)$ and $\rho_0^i = \rho_+(F(\rho_B^3)- F(\rho_B^2))$.
For $F(\rho_B^3)\ge 2 F(\rho_B^1)$ and $F(\rho_B^1)+ F(\rho_B^2) \ge F(\rho_B^3)$ (SUS) one has
$C^1=F(\rho_B^1)$, $C^3=F(\rho_B^3)$, $C^2=F(\rho_B^3)-F(\rho_B^1)$ and $\rho_0^i = \rho_+(F(\rho_B^3)- F(\rho_B^1))$.
For $F(\rho_B^1)+F(\rho_B^2) \le F(\rho_B^3)$ (SSU) one has $C^1=F(\rho_B^1), C^2=F(\rho_B^2)$, $C^3=F(\rho_B^1)+F(\rho_B^2)$
and $\rho_0^i = \rho_-(F(\rho_B^1)+F(\rho_B^2))$.
\begin{remark}
Note that the sub-cases in Case 8 partition uniquely the range of admissible states since for $0 \le x,y,z \le 1$ either ($x+y\le z$) or ($x+y\ge z$ and $z\ge2y$) or ($x+y\ge z$ and $z\ge 2x$) or ($z\le 2x$ and $z\le 2y$).
Moreover, note that at the interfaces between the different conditions we obtain that $\rho_0^i \in [\rho_-(F(\rho_B^3)),
\min(\rho_+(F(\rho_B^1)),\rho_+(F(\rho_B^2)))]$. This is exactly the interval for the $\rho^i$-values in case (SSS).
\end{remark}
The above computations show that there is a unique matching of layer solutions and LWR solutions and that
the asymptotic expansion leads to well-defined conditions for the LWR network.
Considering only the fluxes and neglecting the information on the $\rho_0^i$ the above result can be rewritten in a more convenient way using the supply- and demand formulation, see section \ref{LWRcond} or \cite{L}.
We obtain
{\bf Case 1, RP1-1-1.}
This is a case with $c^1,c^2\ge \frac{c^3}{2}$: $C^1 =C^2 = \frac{c^3}{2}.$
{\bf Case 2, RP1-1-2.}
This is a case with $c^1,c^2\ge \frac{c^3}{2}: C^1 =C^2 = \frac{c^3}{2}.$
{\bf Case 3, RP1-2-1}
We have $c^1 \ge c^2$ and two cases:
\begin{align*}
c^2 \ge \frac{c^3}{2}&: C^1= C^2 = \frac{c^3}{2}\\
c^2 \le \frac{c^3}{2}&: C^1= c^3-c^2, C^2 =c^2.
\end{align*}
{\bf Case 4, RP2-1-1} Symmetric to Case 3.
We have $c^1 \le c^2$ and two cases:
\begin{align*}
c^1 \ge \frac{c^3}{2}&: C^1= C^2 = \frac{c^3}{2}\\
c^1 \le \frac{c^3}{2}&: C^1= c^1, C^2 =c^3 -c^1.
\end{align*}
{\bf Case 5, RP1-2-2}
In terms of the $c^i $ this case is the same as Case 3.
{\bf Case 6, RP2-1-2}
This case is the same as Case 4.
{\bf Case 7, RP2-2-1}
We have four cases:
\begin{align*}
c^1 +c^2 \le c^3&:C^1= c^1, C^2 = c^2\\
c^1 +c^2 \ge c^3,c^1 \ge \frac{c^3}{2}, c^2\ge \frac{c^3}{2} &:C^1= C^2 =\frac{c^3}{2}\\
c^1 +c^2 \ge c^3,c^1 \ge \frac{c^3}{2}, c^2\le \frac{c^3}{2}&:C^1= c^3-c^2,C^2 =c^2\\
c^1 +c^2 \ge c^3,c^1 \le \frac{c^3}{2}, c^2\ge \frac{c^3}{2} &:C^1= c^1, C^2 =c^3-c^1.
\end{align*}
{\bf Case 8, RP2-2-2}
We obtain the same as in Case 7.\\
One observes directly that this result can be rewritten in the
more compact form given in (\ref{fair}), which shows that the relaxation network with conditions (\ref{eqz})
converges for $\epsilon \rightarrow 0$ to the LWR network with the fair merging conditions (\ref{fair}).
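This identification can also be tested numerically. The following Python sketch (our own consistency check) samples random boundary states, evaluates the sub-cases derived above and compares them with the compact fair merging formula.
\begin{verbatim}
import numpy as np

F = lambda r: r * (1.0 - r)
demand = lambda r: F(r) if r <= 0.5 else 0.25
supply = lambda r: 0.25 if r <= 0.5 else F(r)

def fair(c1, c2, c3):                   # compact fair merging formula
    if c1 + c2 <= c3:
        return c1, c2
    m = min(c1, c2, 0.5 * c3)
    return min(c1, c3 - m), min(c2, c3 - m)

rng = np.random.default_rng(0)
for r1, r2, r3 in rng.uniform(0.0, 1.0, (1000, 3)):
    c1, c2, c3 = demand(r1), demand(r2), supply(r3)
    if c1 + c2 <= c3:                   # sub-cases of Cases 7 and 8 above
        E = (c1, c2)
    elif min(c1, c2) >= 0.5 * c3:
        E = (0.5 * c3, 0.5 * c3)
    elif c2 <= 0.5 * c3:
        E = (c3 - c2, c2)
    else:
        E = (c1, c3 - c1)
    assert np.allclose(fair(c1, c2, c3), E)
print("case-by-case fluxes match the fair merging formula")
\end{verbatim}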
\begin{remark}
The above derivation shows that the classical merge condition (\ref{fair}) for the LWR-network
can be obtained as the asymptotic limit of condition (\ref{eqz}) for the relaxation system, that means the equality of densities.
The equality of densities is not fulfilled on the macroscopic level of the conservation law,
only the balance of fluxes is common for both levels of coupling conditions.
Moreover, we note that this is not the only coupling condition for the relaxation system leading to (\ref{fair}). A similar investigation leading to the same macroscopic coupling conditions could be performed for condition
(\ref{mixed2}), i.e. the priority condition with priority $P=\frac{1}{2}$.
\end{remark}
A graphical sketch of the different quantities in state space is given in Figure \ref{couplingtotal} for the special example
$\rho_B^1= 0.2$, $ \rho_B^2= 0.3$ and $\rho_B^3= 0.6$.
One observes the difference at the junction between the values $\bar \rho= \rho^i$ found by solving the coupling conditions for the relaxation system and the values $\bar \rho_0= \rho^i_0 $ found by the asymptotic investigation.
We note that both values fulfill the coupling conditions, but different equations (characteristic equations for the relaxation system versus layer plus LWR-wave for the limit problem) connecting them to the $\rho_B^i$.
The time development of the value at the nodes for the relaxation system with different $\varepsilon$ is shown in Figure \ref{junctionvalues}.
One observes the evolution of the values at the junction from $\bar \rho$ to $\bar \rho_0$.
The size of the temporal layer in Figure \ref{junctionvalues} also depends on $\epsilon$.
\begin{figure}[h]
\center
\externaltikz{cc1}{
\begin{tikzpicture}[scale = 9.5]
\pgfmathsetmacro{\za}{0.2}
\pgfmathsetmacro{\zb}{0.3}
\pgfmathsetmacro{\zc}{\za+\zb}
\pgfmathsetmacro{\wc}{0.6*0.6}
\pgfmathsetmacro{\barrho}{(\wc+\zc)/(1+\zc)}
\pgfmathsetmacro{\zam}{\za/(1+\za)}
\pgfmathsetmacro{\zbm}{\zb/(1+\zb)}
\pgfmathsetmacro{\zcm}{\zc/(1+\zc)}
\pgfmathsetmacro{\qa}{\za*(1-\barrho)}
\pgfmathsetmacro{\qb}{\zb*(1-\barrho)}
\pgfmathsetmacro{\qc}{\zc*(1-\barrho)}
\pgfmathsetmacro{\frhoa}{0.2*(1-0.2)}
\pgfmathsetmacro{\frhob}{0.3*(1-0.3)}
\pgfmathsetmacro{\frhoc}{0.6*(1-0.6)}
\pgfmathsetmacro{\frhochalf}{\frhoc/2}
\pgfmathsetmacro{\rhoatau}{1-0.2}
\pgfmathsetmacro{\watau}{\rhoatau*\rhoatau}
\pgfmathsetmacro{\frhoatau}{\rhoatau*(1-\rhoatau)}
\pgfmathsetmacro{\barrhozero}{(1+sqrt(1-2*\frhoc))/2}
\pgfmathsetmacro{\wabar}{\barrho-\qa}
\pgfmathsetmacro{\wbbar}{\barrho-\qb}
\node[below] at (0,0) {$0$};
\node[left] at (0,0) {$0$};
\node[below] at (1,0) {$1$};
\draw(1,0)--(1,0.7);
\draw(0,0)--(0.7,0.7);
\node[left] at (\zcm,\zcm) {$z^3$};
\node[left] at (\zbm,\zbm) {$z^2$};
\node[left] at (\zam,\zam) {$z^1$};
\node[right] at (1,1-\wc) {$w^3$};
\draw[dashed](\barrho,\qc)--(1.0,\qc);
\node[right] at (1,\qc) {$q^3$};
\draw[dashed](\barrho,\qb)--(1.0,\qb);
\node[right] at (1,\qb+0.01) {$q^2$};
\draw[dashed](\barrho,\qa)--(1.0,\qa);
\node[right] at (1,\qa) {$q^1$};
\draw[dashed](\barrho,0)--(\barrho,0.3) node[below] at (\barrho-0.01,0) {$\bar \rho$};
\draw[dashed](\barrhozero,0)--(\barrhozero,0.3) node[below] at (\barrhozero,0) {$\bar \rho_0$};
\draw[->](0,0)--(1.2,0) node[below]{$\rho$};
\draw[->](0,0)--(0,0.7) node[left]{$q$};
\draw[domain=\wc:1.0,smooth,variable=\x,red] plot ({\x},{\x-\wc});
\draw[domain=\zam:1.0,smooth,variable=\x,red] plot ({\x},{(1-\x)*\za});
\draw[domain=\zbm:1.0,smooth,variable=\x,red] plot ({\x},{(1-\x)*\zb});
\draw[domain=\zcm:1.0,smooth,variable=\x,red] plot ({\x},{(1-\x)*\zc});
\draw[domain=0.0:1,smooth,variable=\x,green] plot ({\x},{\x*(1-\x)});
\draw[dashed](0.2,0)--(0.2,0.2) node[below] at (0.2,0.0) {$\rho_B^1$};
\draw[dashed](0.3,0)--(0.3,0.25) node[below] at (0.3,0.0) {$ \rho_B^2$};
\draw[dashed](0.6,0)--(0.6,0.25) node[below] at (0.6+0.01,0.0) {$\rho_B^3$};
\draw[dashed](0.5,0)--(0.5,0.25) node[below] at (0.5,0.0) {$ \rho^\star$};
\draw[] node at (0.2,\frhoa) {\textsf{x}};
\draw[] node at (0.3,\frhob) {\textsf{x}};
\draw[] node at (0.6,\frhoc) {\textsf{x}};
\draw[] node at (\barrho,\qa) {\textsf{x}};
\draw[] node at (\barrho,\qb) {\textsf{x}};
\draw[] node at (\barrho,\qc) {\textsf{x}};
\draw[] node at (\barrhozero,\frhochalf) {\textsf{x}};
\draw[] node at (\barrhozero,\frhochalf) {\textsf{x}};
\draw[] node at (\barrhozero,\frhoc) {\textsf{x}};
\draw[dashed](\barrhozero,\frhochalf)--(1.08, \frhochalf) node[right] at (1.07,\frhochalf) {$ \frac{F(\rho_B^3)}{2}$};
\draw[dashed](0.6,\frhoc)--(1.08, \frhoc) node[right] at (1.07,\frhoc) {$ F(\rho_B^3)$};
\end{tikzpicture}
}
\caption{Fair merging coupling conditions with $\rho_B^1= 0.2$, $ \rho_B^2= 0.3$ and $\rho_B^3= 0.6$. $\bar \rho$ is the value found from solving (\ref{eqz}) at the node and $\bar \rho_0$ is the ingoing value of the solution of the layer problems at the node found from the analysis in Section \ref{proof}.}
\label{couplingtotal}
\end{figure}
\begin{figure}[h]
\center
\externaltikz{solutionnode}{
\begin{tikzpicture}[scale=0.65]
\pgfmathsetmacro{\za}{0.2}
\pgfmathsetmacro{\zb}{0.3}
\pgfmathsetmacro{\zc}{\za+\zb}
\pgfmathsetmacro{\wc}{0.6*0.6}
\pgfmathsetmacro{\barrho}{(\wc+\zc)/(1+\zc)}
\pgfmathsetmacro{\frhoc}{0.6*(1-0.6)}
\pgfmathsetmacro{\barrhozero}{(1+sqrt(1-2*\frhoc))/2}
\begin{axis}[
legend style = {at={(1,0)}, xshift=-0.1cm, yshift=0.1cm, anchor=south east},
legend columns= 2,
xlabel = t,
ylabel = $\rho$,
ytick = {\barrho,\barrhozero},
yticklabels={$\bar \rho$, $\bar \rho_0$},
]
\addplot[color = blue!0!red,thick] file{Data/merge_Lindeg_rho_3ex111_eps01_trace.txt};
\addlegendentry{$\varepsilon=0.1$}
\addplot[color = blue!33!red,thick] file{Data/merge_Lindeg_rho_3ex111_eps001_trace.txt};
\addlegendentry{$\varepsilon=0.01$}
\addplot[color = blue!66!red,thick] file{Data/merge_Lindeg_rho_3ex111_eps0001_trace.txt};
\addlegendentry{$\varepsilon=0.001$}
\addplot[color = blue!100!red,thick] file{Data/merge_Lindeg_rho_3ex111_eps00001_trace.txt};
\addlegendentry{$\varepsilon=0.0001$}
\end{axis}
\end{tikzpicture}
}
\caption{Solution of the relaxation system with different values of $\varepsilon$ for the initial values in Figure \ref{couplingtotal}. Time development of the density at the junction from $\bar \rho$ to $\bar \rho_0$.}
\label{junctionvalues}
\end{figure}
\section{Numerical results}
\label{Numerical results}
In this section we compare relaxation and macroscopic network solutions with the different coupling conditions for several characteristic numerical examples.
The relaxation model is discretized in its conservative form \eqref{eq:lindeg+relax} using a Godunov scheme. A Godunov scheme is also used for the LWR model.
In all numerical examples the intervals $[0,1]$ on the edges are discretized with $1000$ cells.
The ingoing edges are connected to the junction at $x=1$, while the cars enter the outgoing edges at $x=0$.
At the outer boundaries zero-Neumann boundary conditions are imposed.
The scaling parameter $\varepsilon $ in the relaxation system is chosen as $\varepsilon = 0.001$.
As initial conditions the densities $\rho^i$ are chosen constant on each road.
The additional initial condition for $z^i$ in the relaxation model is chosen in equilibrium $z^i=\frac{F(\rho^i)}{1-\rho^i}$.
All solutions are computed up to $T=1$.
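For reference, a minimal Godunov step for the LWR equation on a single road can be sketched as follows (Python; grid size, CFL number and the demand/supply form of the numerical flux are our choices and do not reproduce the full network code).
\begin{verbatim}
import numpy as np

F = lambda r: r * (1.0 - r)
demand = lambda r: np.where(r <= 0.5, F(r), 0.25)
supply = lambda r: np.where(r <= 0.5, 0.25, F(r))

def godunov_lwr(rho, dx, t_end, cfl=0.9):
    t = 0.0
    while t < t_end:
        dt = min(cfl * dx, t_end - t)        # |F'| <= 1 on [0,1]
        flx = np.minimum(demand(rho[:-1]), supply(rho[1:]))
        rho[1:-1] -= dt / dx * (flx[1:] - flx[:-1])
        rho[0], rho[-1] = rho[1], rho[-2]    # zero-Neumann boundaries
        t += dt
    return rho

n = 1000
rho = np.where(np.linspace(0.0, 1.0, n) < 0.5, 0.7, 0.2)
rho = godunov_lwr(rho, 1.0 / n, t_end=0.2)   # rarefaction wave forms
\end{verbatim}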
\subsection{Fair merging}
First we compare the numerical solutions of the relaxation model with the coupling conditions (\ref{eqz}) to the results obtained for the LWR model with the coupling conditions \eqref{fair}.
In Figure \ref{fig:Merge_case1} the initial densities are chosen as $\rho^1 = 0.1$, $\rho^2 = 0.15$ and $\rho^3 = 0.2$.
The densities are small enough such that all cars can pass the junction, which corresponds to Case 7, first subcase. The $\rho_0^i$ are given by $\rho_-(F(\rho_B^1)+F(\rho_B^2))$ with the numerical value $\rho_0^i=0.3197$.
In Figure \ref{fig:Merge_case1} the numerical solutions are shown.
Outside of the layer regions, the solution of the relaxation model (blue) is almost identical to the solution of the LWR model (red).
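The analytical value $\rho_0^i=0.3197$ quoted above can be reproduced directly from the fixed points of the layer problem (a short check in Python):
\begin{verbatim}
import numpy as np
F = lambda r: r * (1.0 - r)
rho_minus = lambda C: 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * C))
print(rho_minus(F(0.1) + F(0.15)))   # 0.3197...
\end{verbatim}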
\begin{figure}[h]
\externaltikz{merge_case11}{
\begin{tikzpicture}[scale=0.65]
\begin{groupplot}[
group style={group size=3 by 2, vertical sep = 0.75cm, horizontal sep = 1.65cm},
width = 6.6cm,
height = 4cm,
xmin = 0.0, xmax = 1.0,
ymin = 0.0, ymax = 1.0,
legend style = {at={(0.5,1)},xshift=0.2cm,yshift=-0.1cm,anchor=north},
legend columns= 3,
]
\nextgroupplot[ title = $\rho^1$]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_1ex1_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_1ex1.txt};
\nextgroupplot[ title = $\rho^2$]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_2ex1_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_2ex1.txt};
\nextgroupplot[ title = $\rho^3$]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_3ex1_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_3ex1.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.95, xmax = 1.0,
ymin = 0.0, ymax = 0.2]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_1ex1_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_1ex1.txt};
\nextgroupplot[xlabel = $x$,
xmin = 0.95, xmax = 1.0,
ymin = 0.1, ymax = 0.25]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_2ex1_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_2ex1.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.0, xmax = 0.05,
ymin = 0.3, ymax = 0.4]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_3ex1_eps0001.txt};
\addlegendentry{relax}
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_3ex1.txt};
\addlegendentry{LWR}
\end{groupplot}
\end{tikzpicture}
}
\caption{Fair merging with $\rho^1 = 0.1$, $\rho^2 = 0.15$, $\rho^3 = 0.2$. Red: solutions of the LWR-equations, blue: solutions of the relaxation system. First row: solutions on the full domain, second row: zoom around the node.}
\label{fig:Merge_case1}
\end{figure}
In the second row a zoom into the layer regions at the junction is shown.
On edges $1$ and $2$ we can observe boundary layers, as these correspond to stable cases.
On edge $3$ there is no layer, since the half space solution is unstable. The solution at $x=0$ matches exactly the analytical value
of $\rho_0^i$.
In Figure \ref{fig:Merge_case2} the numerical solutions to the initial values $\rho^1 = 0.7$, $\rho^2 = 0.6$ and $\rho^3 = 0.2$ are shown.
\begin{figure}[h]
\externaltikz{fairmerge_case2a}{
\begin{tikzpicture}[scale=0.65]
\begin{groupplot}[
group style={group size=3 by 2, vertical sep = 0.75cm, horizontal sep = 1.75cm},
width = 6.5cm,
height = 4cm,
xmin = -0.0, xmax = 1.0,
ymin = 0.0, ymax = 1.0,
legend style = {at={(0.5,1)},xshift=0.2cm,yshift=-0.1cm,anchor=north},
legend columns= 3,
]
\nextgroupplot[ title = $\rho^1$]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_1ex2_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_1ex2.txt};
\nextgroupplot[ title = $\rho^2$]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_2ex2_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_2ex2.txt};
\nextgroupplot[ title = $\rho^3$]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_3ex2_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_3ex2.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.95, xmax = 1.0,
ymin = 0.7, ymax = 0.9]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_1ex2_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_1ex2.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.95, xmax = 1.0,
ymin = 0.7, ymax = 0.9]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_2ex2_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_2ex2.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.0, xmax = 0.05,
ymin = 0.4, ymax = 0.9]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_3ex2_eps0001.txt};
\addlegendentry{relax}
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_3ex2.txt};
\addlegendentry{LWR}
\end{groupplot}
\end{tikzpicture}
}
\caption{Fair merging with $\rho^1 = 0.7$, $\rho^2 = 0.6$, $\rho^3 = 0.2$. }
\label{fig:Merge_case2}
\end{figure}
In this situation more cars are approaching the junction than can enter road $3$. We are in the situation of Case 1 with the analytical value $\rho_0^i =\rho_+(\sigma/2)=0.85355$.
Thus the flow in the exiting road is set to its maximum, while there are jams propagating upstream in the ingoing roads.
Here we observe a layer only on edge $3$; it interacts with the tail of the rarefaction wave.
On the ingoing roads the unstable layer solutions enforce the new values at the junction.
On these roads the shock waves of the relaxation model are slightly behind those of the macroscopic one.
This stems from an initial layer, as the layer at the junction has to form at the beginning, see Figure \ref{junctionvalues}.
This happens in short time and is not visible at the rarefaction waves, but it remains noticeable at the shocks.
The speeds of the shocks are identical in both models, as the connected states coincide, i.e. the delay does not change over time.
In the next example, with the initial values $\rho^1 = 0.05$, $\rho^2 = 0.6$ and $\rho^3 = 0.2$, few cars enter from road $1$ but many from road $2$. We are in Case 4, first subcase. The analytical value at the junction is $\rho_0^i=\rho_+(\sigma-F(\rho_B^1))=0.7179$.
As shown in Figure \ref{fig:Merge_case3}, the flow in road $3$ is at maximum such that all cars from road $1$ and most of road $2$ can pass.
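The analytical junction values quoted above can be checked directly from the flux function. The following sketch assumes the normalized flux $F(\rho)=\rho(1-\rho)$ with maximal flux $\sigma=F(1/2)=1/4$ and reads $\rho_+(q)$ as the congested state carrying flux $q$; this is our reading of the notation, and the function names are ours.
\begin{verbatim}
import math

def F(rho):                       # assumed normalized flux
    return rho * (1.0 - rho)

sigma = F(0.5)                    # maximal flux, sigma = 1/4

def rho_plus(q):
    # congested state: larger root of rho*(1 - rho) = q
    return 0.5 * (1.0 + math.sqrt(1.0 - 4.0 * q))

print(rho_plus(sigma / 2))        # Case 1 (fair merging): 0.85355...
print(rho_plus(sigma - F(0.05)))  # Case 4 with rho_B^1 = 0.05: 0.71794...
\end{verbatim}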
\begin{figure}[h]
\externaltikz{fairmerge_case3}{
\begin{tikzpicture}[scale=0.65]
\begin{groupplot}[
group style={group size=3 by 2, vertical sep = 0.75cm, horizontal sep = 1.75cm},
width = 6.5cm,
height = 4cm,
xmin = -0.0, xmax = 1.0,
ymin = 0.0, ymax = 1.0,
legend style = {at={(0.5,1)},xshift=0.2cm,yshift=-0.1cm,anchor=north},
legend columns= 3,
]
\nextgroupplot[ title = $\rho^1$]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_1ex3_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_1ex3.txt};
\nextgroupplot[ title = $\rho^2$]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_2ex3_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_2ex3.txt};
\nextgroupplot[ title = $\rho^3$]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_3ex3_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_3ex3.txt};
\nextgroupplot[
xmin = 0.95, xmax = 1.0,
ymin = 0.0, ymax = 0.4]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_1ex3_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_1ex3.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.95, xmax = 1.0,
ymin = 0.6, ymax = 0.8]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_2ex3_eps0001.txt};
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_2ex3.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.0, xmax = 0.05,
ymin = 0.4, ymax = 0.9]
\addplot[color = blue,thick] file{Data/FairMerge_LindegNew_rho_3ex3_eps0001.txt};
\addlegendentry{relax}
\addplot[color = red,thick] file{Data/FairMerge_LWR_rho_3ex3.txt};
\addlegendentry{LWR}
\end{groupplot}
\end{tikzpicture}
}
\caption{Fair merging with $\rho^1 = 0.05$, $\rho^2 = 0.6$, $\rho^3 = 0.2$. }
\label{fig:Merge_case3}
\end{figure}
Those which do not fit in create a jam on road $2$.
Again we see a delay of the shock, as in the previous example.
Similarly, we observe a layer on edge $3$.
Here, however, a layer is also present on road $1$, as the solution of the half space is now stable.
\begin{remark}
We mention that the numerical investigation of conditions (\ref{mixed2}) gives slightly different values for the relaxation system at the nodes, but the same results in the interior of the domain. That means, also in this case, the relaxation system leads to the LWR equations on the network with the fair merging condition (\ref{fair}).
\end{remark}
\subsection{Merging with priority lane}
Here, the numerical solutions of the relaxation model with the coupling conditions (\ref{eq:merge_4}) are compared to those obtained for the LWR model with the coupling conditions (\ref{prio}).
In the first example with $\rho^1 = 0.6$, $\rho^2 = 0.7$ and $\rho^3 = 0.2$, shown in Figure \ref{fig:Merge_case10}, many cars arrive at the junction.
\begin{figure}[h]
\externaltikz{merge_case10}{
\begin{tikzpicture}[scale=0.65]
\begin{groupplot}[
group style={group size=3 by 2, vertical sep = 0.75cm, horizontal sep = 1.75cm},
width = 6.5cm,
height = 4cm,
xmin = -0.0, xmax = 1.0,
ymin = 0.0, ymax = 1.0,
legend style = {at={(0.5,1)},xshift=0.2cm,yshift=-0.1cm,anchor=north},
legend columns= 3,
]
\nextgroupplot[ title = $\rho^1$]
\addplot[color = blue,thick] file{Data/PriorityMerge_LindegNew_rho_1ex10_eps0001.txt};
\addplot[color = red,thick] file{Data/PriorityMerge_LWR_rho_1ex10.txt};
\nextgroupplot[ title = $\rho^2$]
\addplot[color = blue,thick] file{Data/PriorityMerge_LindegNew_rho_2ex10_eps0001.txt};
\addplot[color = red,thick] file{Data/PriorityMerge_LWR_rho_2ex10.txt};
\nextgroupplot[ title = $\rho^3$]
\addplot[color = blue,thick] file{Data/PriorityMerge_LindegNew_rho_3ex10_eps0001.txt};
\addplot[color = red,thick] file{Data/PriorityMerge_LWR_rho_3ex10.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.95, xmax = 1.0,
ymin = 0.2, ymax = 0.6]
\addplot[color = blue,thick] file{Data/PriorityMerge_LindegNew_rho_1ex10_eps0001.txt};
\addplot[color = red,thick] file{Data/PriorityMerge_LWR_rho_1ex10.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.95, xmax = 1.0,
ymin = 0.9, ymax = 1.1]
\addplot[color = blue,thick] file{Data/PriorityMerge_LindegNew_rho_2ex10_eps0001.txt};
\addplot[color = red,thick] file{Data/PriorityMerge_LWR_rho_2ex10.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.0, xmax = 0.05,
ymin = 0.45, ymax = 0.8]
\addplot[color = blue,thick] file{Data/PriorityMerge_LindegNew_rho_3ex10_eps0001.txt};
\addlegendentry{relax}
\addplot[color = red,thick] file{Data/PriorityMerge_LWR_rho_3ex10.txt};
\addlegendentry{LWR}
\end{groupplot}
\end{tikzpicture}
}
\caption{Priority merge with $\rho^1 = 0.6$, $\rho^2 = 0.7$, $\rho^3 = 0.2$. }
\label{fig:Merge_case10}
\end{figure}
As those on road $1$ have priority, the maximal flow is established, while all cars on road $2$ have to wait.
Layers can be observed on roads $1$ and $3$.
In the second example we consider a situation with the same amount of cars on the ingoing roads and only little space on the outgoing one:
$\rho^1 = 0.4$, $\rho^2 = 0.4$, $\rho^3 = 0.7$.
As expected, we can see in Figure \ref{fig:Merge_case12} that all the cars on road $2$ have to wait and thus a larger shock forms.
Not all cars on the first road can pass, but the flow is larger than on the second road.
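The behavior in both examples follows the demand--supply form of the priority rule. The sketch below is our paraphrase of condition (\ref{prio}), with road $1$ served first and the same assumed flux as above; it is an illustration, not necessarily the exact implementation used in the computations.
\begin{verbatim}
def F(rho):                            # assumed normalized flux
    return rho * (1.0 - rho)

def demand(rho):                       # demand/supply of the LWR model
    return F(min(rho, 0.5))

def supply(rho):
    return F(max(rho, 0.5))

def priority_merge(r1, r2, r3):
    s3 = supply(r3)
    q1 = min(demand(r1), s3)           # priority road served first
    q2 = min(demand(r2), s3 - q1)      # road 2 gets the leftover capacity
    return q1, q2

print(priority_merge(0.6, 0.7, 0.2))   # (0.25, 0.0): road 2 fully blocked
print(priority_merge(0.4, 0.4, 0.7))   # (0.21, 0.0): reduced flow on road 1
\end{verbatim}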
\begin{figure}[h]
\externaltikz{merge_case12}{
\begin{tikzpicture}[scale=0.65]
\begin{groupplot}[
group style={group size=3 by 2, vertical sep = 0.75cm, horizontal sep = 1.75cm},
width = 6.5cm,
height = 4cm,
xmin = -0.0, xmax = 1.0,
ymin = 0.0, ymax = 1.0,
legend style = {at={(0.5,1)},xshift=0.2cm,yshift=-0.1cm,anchor=north},
legend columns= 3,
]
\nextgroupplot[ title = $\rho^1$]
\addplot[color = blue,thick] file{Data/PriorityMerge_LindegNew_rho_1ex12_eps0001.txt};
\addplot[color = red,thick] file{Data/PriorityMerge_LWR_rho_1ex12.txt};
\nextgroupplot[ title = $\rho^2$]
\addplot[color = blue,thick] file{Data/PriorityMerge_LindegNew_rho_2ex12_eps0001.txt};
\addplot[color = red,thick] file{Data/PriorityMerge_LWR_rho_2ex12.txt};
\nextgroupplot[ title = $\rho^3$]
\addplot[color = blue,thick] file{Data/PriorityMerge_LindegNew_rho_3ex12_eps0001.txt};
\addplot[color = red,thick] file{Data/PriorityMerge_LWR_rho_3ex12.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.95, xmax = 1.0,
ymin = 0.6, ymax = 0.8]
\addplot[color = blue,thick] file{Data/PriorityMerge_LindegNew_rho_1ex12_eps0001.txt};
\addplot[color = red,thick] file{Data/PriorityMerge_LWR_rho_1ex12.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.95, xmax = 1.0,
ymin = 0.9, ymax = 1.1]
\addplot[color = blue,thick] file{Data/PriorityMerge_LindegNew_rho_2ex12_eps0001.txt};
\addplot[color = red,thick] file{Data/PriorityMerge_LWR_rho_2ex12.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.0, xmax = 0.05,
ymin = 0.6, ymax = 0.9]
\addplot[color = blue,thick] file{Data/PriorityMerge_LindegNew_rho_3ex12_eps0001.txt};
\addlegendentry{relax}
\addplot[color = red,thick] file{Data/PriorityMerge_LWR_rho_3ex12.txt};
\addlegendentry{LWR}
\end{groupplot}
\end{tikzpicture}
}
\caption{Priority merge with $\rho^1 = 0.4$, $\rho^2 = 0.4$, $\rho^3 = 0.7$. }
\label{fig:Merge_case12}
\end{figure}
In both cases we obtain a perfect numerical agreement of relaxation solutions and LWR-solutions on the network
outside of the layers.
\subsection{Diverging with driver preferences}
We consider a junction with equal driver preferences $\alpha=\frac{1}{2}$ and conditions (\ref{divmacro1}) for the macroscopic equations and (\ref{eq:div_2}), (\ref{pref}) for the relaxation problem. We consider two examples.
First, with the initial conditions $\rho^1 = 0.8$, $\rho^2 = 0.1$ and $\rho^3 = 0.3$, there is enough space in both outgoing roads such that the maximal flow can be established, as shown in Figure \ref{fig:Split_case7}.
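For illustration, a plausible demand--supply reading of condition (\ref{divmacro1}) with distribution parameter $\alpha$ is sketched below, reusing \texttt{F}, \texttt{demand} and \texttt{supply} from the merging sketch above; again this is our paraphrase, and the second call anticipates the blocking effect of the subsequent example.
\begin{verbatim}
def diverge_pref(r1, r2, r3, alpha=0.5):
    q1 = min(demand(r1), supply(r2) / alpha, supply(r3) / (1.0 - alpha))
    return q1, alpha * q1, (1.0 - alpha) * q1

print(diverge_pref(0.8, 0.1, 0.3))  # (0.25, 0.125, 0.125): maximal flow
print(diverge_pref(0.6, 0.9, 0.0))  # (0.18, 0.09, 0.09): road 2 throttles all
\end{verbatim}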
\begin{figure}[h!]
\externaltikz{split_case7}{
\begin{tikzpicture}[scale=0.65]
\begin{groupplot}[
group style={group size=3 by 2, vertical sep = 0.75cm, horizontal sep = 1.75cm},
width = 6.5cm,
height = 4cm,
xmin = -0.0, xmax = 1.0,
ymin = 0.0, ymax = 1.0,
legend style = {at={(0.5,1)},xshift=0.2cm,yshift=-0.1cm,anchor=north},
legend columns= 3,
]
\nextgroupplot[ title = $\rho^1$]
\addplot[color = blue,thick] file{Data/DivergeAlpha_LindegNew_rho_1ex20_eps0001.txt};
\addplot[color = red,thick] file{Data/DivergeAlpha_LWR_rho_1ex20.txt};
\nextgroupplot[ title = $\rho^2$]
\addplot[color = blue,thick] file{Data/DivergeAlpha_LindegNew_rho_2ex20_eps0001.txt};
\addplot[color = red,thick] file{Data/DivergeAlpha_LWR_rho_2ex20.txt};
\nextgroupplot[ title = $\rho^3$]
\addplot[color = blue,thick] file{Data/DivergeAlpha_LindegNew_rho_3ex20_eps0001.txt};
\addplot[color = red,thick] file{Data/DivergeAlpha_LWR_rho_3ex20.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.95, xmax = 1.0,
ymin = 0.25, ymax = 0.55]
\addplot[color = blue,thick] file{Data/DivergeAlpha_LindegNew_rho_1ex20_eps0001.txt};
\addplot[color = red,thick] file{Data/DivergeAlpha_LWR_rho_1ex20.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.0, xmax = 0.05,
ymin = 0.1, ymax = 0.2]
\addplot[color = blue,thick] file{Data/DivergeAlpha_LindegNew_rho_2ex20_eps0001.txt};
\addplot[color = red,thick] file{Data/DivergeAlpha_LWR_rho_2ex20.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.0, xmax = 0.05,
ymin = 0.1, ymax = 0.2]
\addplot[color = blue,thick] file{Data/DivergeAlpha_LindegNew_rho_3ex20_eps0001.txt};
\addlegendentry{relax}
\addplot[color = red,thick] file{Data/DivergeAlpha_LWR_rho_3ex20.txt};
\addlegendentry{LWR}
\end{groupplot}
\end{tikzpicture}
}
\caption{Diverging with driver preferences: $\rho^1 = 0.8$, $\rho^2 = 0.1$, $\rho^3 = 0.3$ }
\label{fig:Split_case7}
\end{figure}
In Figure \ref{fig:Split_case8} the solutions corresponding to the initial values $\rho^1 = 0.6$, $\rho^2 = 0.9$ and $\rho^3 = 0.0$ are shown.
\begin{figure}[h!]
\externaltikz{split_case8}{
\begin{tikzpicture}[scale=0.65]
\begin{groupplot}[
group style={group size=3 by 2, vertical sep = 0.75cm, horizontal sep = 1.75cm},
width = 6.5cm,
height = 4cm,
xmin = -0.0, xmax = 1.0,
ymin = 0.0, ymax = 1.0,
legend style = {at={(0.5,1)},xshift=0.2cm,yshift=-0.1cm,anchor=north},
legend columns= 3,
]
\nextgroupplot[ title = $\rho^1$]
\addplot[color = blue,thick] file{Data/DivergeAlpha_LindegNew_rho_1ex21_eps0001.txt};
\addplot[color = red,thick] file{Data/DivergeAlpha_LWR_rho_1ex21.txt};
\nextgroupplot[ title = $\rho^2$]
\addplot[color = blue,thick] file{Data/DivergeAlpha_LindegNew_rho_2ex21_eps0001.txt};
\addplot[color = red,thick] file{Data/DivergeAlpha_LWR_rho_2ex21.txt};
\nextgroupplot[ title = $\rho^3$]
\addplot[color = blue,thick] file{Data/DivergeAlpha_LindegNew_rho_3ex21_eps0001.txt};
\addplot[color = red,thick] file{Data/DivergeAlpha_LWR_rho_3ex21.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.95, xmax = 1.0,
ymin = 0.7, ymax = 0.8]
\addplot[color = blue,thick] file{Data/DivergeAlpha_LindegNew_rho_1ex21_eps0001.txt};
\addplot[color = red,thick] file{Data/DivergeAlpha_LWR_rho_1ex21.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.0, xmax = 0.05,
ymin = 0.75, ymax = 0.95]
\addplot[color = blue,thick] file{Data/DivergeAlpha_LindegNew_rho_2ex21_eps0001.txt};
\addplot[color = red,thick] file{Data/DivergeAlpha_LWR_rho_2ex21.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.0, xmax = 0.05,
ymin = 0.05, ymax = 0.15]
\addplot[color = blue,thick] file{Data/DivergeAlpha_LindegNew_rho_3ex21_eps0001.txt};
\addlegendentry{relax}
\addplot[color = red,thick] file{Data/DivergeAlpha_LWR_rho_3ex21.txt};
\addlegendentry{LWR}
\end{groupplot}
\end{tikzpicture}
}
\caption{Diverging with driver preferences: $\rho^1 = 0.6$, $\rho^2 = 0.9$, $\rho^3 = 0.0$ }
\label{fig:Split_case8}
\end{figure}
Although road $3$ is completely free, only few cars can enter, as their way is blocked by cars waiting to enter road $2$.
Thus the high density on road $2$ causes a left-going shock on the ingoing road.
A layer forms only on road $2$, since on the other two the macroscopic characteristics move away from the junction.
\subsection{Diverging without preferences}
We consider the situation with condition (\ref{divmacro2}) for the macroscopic model and conditions (\ref{eq:div_3}), (\ref{ex2})
for the relaxation model.
The first example investigates the case $\rho^1 = 0.7$, $\rho^2 = 0.2$, $\rho^3 = 0.1$, see Figure
\ref{fig:Split_case4}.
\begin{figure}[h!]
\externaltikz{split_case4}{
\begin{tikzpicture}[scale=0.65]
\begin{groupplot}[
group style={group size=3 by 2, vertical sep = 0.75cm, horizontal sep = 1.75cm},
width = 6.5cm,
height = 4cm,
xmin = -0.0, xmax = 1.0,
ymin = 0.0, ymax = 1.0,
legend style = {at={(0.5,1)},xshift=0.2cm,yshift=-0.1cm,anchor=north},
legend columns= 3,
]
\nextgroupplot[ title = $\rho^1$]
\addplot[color = blue,thick] file{Data/FreeDiverge_LindegNew_rho_1ex40_eps0001.txt};
\addplot[color = red,thick] file{Data/FreeDiverge_LWR_rho_1ex40.txt};
\nextgroupplot[ title = $\rho^2$]
\addplot[color = blue,thick] file{Data/FreeDiverge_LindegNew_rho_2ex40_eps0001.txt};
\addplot[color = red,thick] file{Data/FreeDiverge_LWR_rho_2ex40.txt};
\nextgroupplot[ title = $\rho^3$]
\addplot[color = blue,thick] file{Data/FreeDiverge_LindegNew_rho_3ex40_eps0001.txt};
\addplot[color = red,thick] file{Data/FreeDiverge_LWR_rho_3ex40.txt};
\nextgroupplot[
xmin = 0.95, xmax = 1.0,
ymin = 0.25, ymax = 0.6]
\addplot[color = blue,thick] file{Data/FreeDiverge_LindegNew_rho_1ex40_eps0001.txt};
\addplot[color = red,thick] file{Data/FreeDiverge_LWR_rho_1ex40.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.0, xmax = 0.05,
ymin = 0.1, ymax = 0.2]
\addplot[color = blue,thick] file{Data/FreeDiverge_LindegNew_rho_2ex40_eps0001.txt};
\addplot[color = red,thick] file{Data/FreeDiverge_LWR_rho_2ex40.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.0, xmax = 0.05,
ymin = 0.1, ymax = 0.2]
\addplot[color = blue,thick] file{Data/FreeDiverge_LindegNew_rho_3ex40_eps0001.txt};
\addlegendentry{relax}
\addplot[color = red,thick] file{Data/FreeDiverge_LWR_rho_3ex40.txt};
\addlegendentry{LWR}
\end{groupplot}
\end{tikzpicture}
}
\caption{Diverging without preferences: $\rho^1 = 0.7$, $\rho^2 = 0.2$, $\rho^3 = 0.1$. }
\label{fig:Split_case4}
\end{figure}
In this case, the flux is distributed equally onto the outgoing roads, such that a small shock and a small rarefaction wave arise.
On the right hand side we observe that a layer forms in the first road but not in the two exiting ones.
In the second example, Figure \ref{fig:Split_case6}, the results with the initial conditions $\rho^1 = 0.6$, $\rho^2 = 0.1$ and $\rho^3 = 0.95$ are shown.
\begin{figure}[h!]
\externaltikz{split_case6}{
\begin{tikzpicture}[scale=0.65]
\begin{groupplot}[
group style={group size=3 by 2, vertical sep = 0.75cm, horizontal sep = 1.75cm},
width = 6.5cm,
height = 4cm,
xmin = -0.0, xmax = 1.0,
ymin = 0.0, ymax = 1.0,
legend style = {at={(0.5,0)},xshift=0.2cm,yshift=0.1cm,anchor=south},
legend columns= 3,
]
\nextgroupplot[ title = $\rho^1$]
\addplot[color = blue,thick] file{Data/FreeDiverge_LindegNew_rho_1ex42_eps0001.txt};
\addplot[color = red,thick] file{Data/FreeDiverge_LWR_rho_1ex42.txt};
\nextgroupplot[ title = $\rho^2$]
\addplot[color = blue,thick] file{Data/FreeDiverge_LindegNew_rho_2ex42_eps0001.txt};
\addplot[color = red,thick] file{Data/FreeDiverge_LWR_rho_2ex42.txt};
\nextgroupplot[ title = $\rho^3$]
\addplot[color = blue,thick] file{Data/FreeDiverge_LindegNew_rho_3ex42_eps0001.txt};
\addplot[color = red,thick] file{Data/FreeDiverge_LWR_rho_3ex42.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.95, xmax = 1.0,
ymin = 0.25, ymax = 0.55]
\addplot[color = blue,thick] file{Data/FreeDiverge_LindegNew_rho_1ex42_eps0001.txt};
\addplot[color = red,thick] file{Data/FreeDiverge_LWR_rho_1ex42.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.0, xmax = 0.05,
ymin = 0.25, ymax = 0.35]
\addplot[color = blue,thick] file{Data/FreeDiverge_LindegNew_rho_2ex42_eps0001.txt};
\addplot[color = red,thick] file{Data/FreeDiverge_LWR_rho_2ex42.txt};
\nextgroupplot[ xlabel = $x$,
xmin = 0.0, xmax = 0.05,
ymin = 0.3, ymax = 1]
\addplot[color = blue,thick] file{Data/FreeDiverge_LindegNew_rho_3ex42_eps0001.txt};
\addlegendentry{relax}
\addplot[color = red,thick] file{Data/FreeDiverge_LWR_rho_3ex42.txt};
\addlegendentry{LWR}
\end{groupplot}
\end{tikzpicture}
}
\caption{Diverging without preferences: $\rho^1 = 0.6$, $\rho^2 = 0.1$, $\rho^3 = 0.95$ }
\label{fig:Split_case6}
\end{figure}
Since the traffic on road $3$ is dense, only very few cars enter there.
Most of the vehicles enter into road $2$.
In the solution of the relaxation model we observe two layers, one interacting with the rarefaction wave on road $1$ and one due to the ingoing characteristics on road $3$.
As for the merging case, we observe a very good numerical agreement between the solutions of the relaxation
model and the LWR solution away from the layers at the nodes.
\section{Conclusions}
We have introduced general coupling conditions for a TLD-relaxation model for LWR-networks.
These coupling conditions are related, via an asymptotic analysis at the nodes, to well-known coupling conditions for the LWR-network.
The asymptotic analysis shows that a classical merge condition for the nonlinear scalar conservation law
is related in the zero-relaxation limit to an equal density coupling condition for the relaxation system.
The numerical findings support and illustrate the analytical results. One also observes that there is a range of coupling conditions
for the relaxation model leading to the same merge condition for the scalar conservation law.
For the case of a diverging junction, coupling conditions have been defined respecting the physical invariant domain.
They are investigated numerically on the network in the limit as $\epsilon \rightarrow 0$, showing
again agreement of the
numerical solutions of the relaxation system and the LWR solution for small values of $\epsilon$.
Finally, we remark that the analytical procedure, presented here for the merging case and a special coupling condition, could be extended to the diverging case or to other coupling conditions in the merging case.
We consider a wireless relay network, illustrated in Fig.~\ref{fig:model}, where
a transmitter communicates with a receiver via $M$ relays and each terminal
has a single antenna. We let ${\bf h}$ and ${\bf g}$ denote the channel vector between the
transmitter and the relays and the channel vector between the relays and the
receiver, respectively.
For its simplicity, we focus on an amplify-and-forward (AF) protocol
\cite{laneman2004cdw,laneman2000eea} in a two-hop communication: in the first
$T$ channel uses the transmitter broadcasts a codeword, then in the second $T$
channel uses relays amplify and forward the observed codeword by applying some
linear precoder (to be specified later). We assume that a transmitter and $M$
relays have individual power constraints rather than a total power constraint,
since in a practical wireless
network terminals are physically distributed and hence are subject to
their own power supplies.
In a classical AF protocol, the amplifier coefficients have been determined so
as to satisfy required power constraints
\cite{laneman2004cdw}. In order to improve the performance of the AF protocol,
a large number of recent works have considered some additional linear
processing at relays \cite{WHKvtc2004,WHKglobe2004,
jing2007nbu,li2007dap,AFPrecoding, jing2007uoa, Barbarossa,DistLDC}. These
works can be roughly classified into two classes according to their assumption
of channel state information at transmitter (CSIT) and their objective. The
first class assumes perfect CSIT and aims to maximize either the achievable
rate or the instantaneous receive SNR \cite{WHKvtc2004,WHKglobe2004,
jing2007nbu,li2007dap}. The resulting transmit scheme yields beamforming with
appropriate power allocation. The second class assumes only statistical CSIT
and aims to minimize the error probability by designing some type of linear
precoder \cite{AFPrecoding, jing2007uoa, Barbarossa,DistLDC}. In particular,
significant attention has been paid to distributed space-time coding (DSTC) in
which each relay sends a different column of a STC matrix
\cite{laneman2003dst, jing2007uoa,Barbarossa,DistLDC}.
With a single antenna at each terminal, we hasten to say that the goal of DSTC is to
approach the full diversity gain $M$ offered by the relay-receiver channels
${\bf g}$ since the multiplexing gain of the MISO channel under the half-duplex
constraint is very limited, i.e. 1/2. Notice that an AF based DSTC that
achieves the optimal diversity-multiplexing tradeoff \cite{zheng2003dam} has been well studied
\cite{nabar2004frc,azarian2005adm,yang2007toa,yang2007ost}.
It clearly appears that the practical utility of these two approaches depends on the available CSIT, which in turn depends on the speed of fading and that of
feedback and training procedures. Notice that in a two-hop communication model
obtaining perfect CSIT is
rather challenging because the transmitter needs at least a two-step training
and feedback process, i.e. first learns ${\bf h}$ and then ${\bf g}$,
which typically induces additional delay and estimation error. Consequently, the perfect CSIT assumption holds only if
the underlying fading is quasi-static and a sufficiently fast feedback and
training is available. For this particular case, the first approach might be
useful. On the contrary, if a rate of feedback and training is much slower
than the coherence time of the channel, the second approach based on the
statistical channel knowledge is more appropriate.
The above observation motivates us to find a unified approach that can handle
different CSIT scenarios, rather than changing a transmit strategy as a
function of the quality of side information.
To this end, we fix our transmission strategy to DSTC that potentially provides
diversity gain with statistical CSIT and further power gain if additional side
information is available. Our goal is not to find the optimal strategy for
each CSIT case but to propose a unified DSTC scheme that simply adapts the
amplifier power allocation to available CSIT.
Among a large family of DSTC, we consider linear dispersion (LD) codes
\cite{hassibi2002hrc}
because they
offer desirable performance in terms of diversity gain and coding gain
\cite{DistLDC,Jing} and moreover keep the amplified noise white. The latter
considerably simplifies the power allocation strategy. We assume perfect
synchronization between relay terminals and perfect channel state information
available at the receiver, which is necessary for coherent detection. Under
this setting, we will address the following question: how does the quality of
CSIT impact the amplifier power allocation and the resulting performance of
the DSTC? To answer the question, we optimize the amplifier power allocation
in such a manner that the pairwise error probability (PEP) conditioned on
CSIT is minimized.
Note that the conditional PEP is a performance
criterion widely used in the literature of STC
\cite{tarokh1998stc,guey1999sdt,BF-STC} and DSTC \cite{yiu2006dst,DistLDC,Jing}. In particular,
\cite{BF-STC} has provided elegant precoder designs that minimize the PEP
conditioned on CSIT for orthogonal STC. Unfortunately the extension of this work to a two-hop relay network appears very difficult due to the non-convexity of the underlying problem. We
examine the following CSIT cases : 1) perfect knowledge of the absolute value
of the entries of ${\bf h}$ and ${\bf g}$ (\textit{perfect CSIT}), 2) perfect
knowledge of absolute value of the entries of ${\bf h}$ and statistical knowledge
of ${\bf g}$ (\textit{partial CSIT}), and 3) statistical knowledge of ${\bf h}$ and
${\bf g}$ (\textit{statistical CSIT}).
Under perfect CSIT, the PEP minimization reduces to the maximization of an
approximate receive SNR. The optimal power allocation strategy turns out to be an on-off strategy, whereby
some relays are switched off and others transmit at maximum available power.
We propose an on-off gradient algorithm that efficiently finds the optimal set
of relays to switch on. Under partial and statistical CSIT, the conditional
PEP minimization appears very difficult due to the self-interference caused by
amplified noise and calls for a good heuristic approximation. First, we apply a
Laplace-based saddle point approximation of an inherent integral in order to
make the problem tractable. Since
the approximated problem is still non-convex due to
{\it averaged} amplified noises, we transform it into a convex problem via a log transformation \cite{GP} assuming a high transmitter power (which is the regime
of interest). For a new objective function,
we propose a very simple waterfilling algorithm that yields a non-trivial
solution between maximum power allocation and a generalized STC that equalizes the averaged amplified noise, i.e. $p_i\gamma_{g,i}$
with $\gamma_{g,i}$ being the variance of the channel between relay $i$ and the receiver (in a classical STC we consider $\gamma_{g,i}=1,\forall i$). We derive closed-form solutions for
$M=2$ and in certain asymptotic regimes that enable an easy
interpretation of the proposed algorithms. It is found that that an appropriate
power allocation is mandatory for DSTC in order to
provide diversity and power gains in a general network topology.
In order to situate this work in the context of relevant literature, we note
that the LD based DSTC for a two-hop AF network has been addressed
for a single-antenna case \cite{DistLDC} and for a
multiple antenna case \cite{Jing}. In both works, Jing and Hassibi provided
the diversity analysis by optimizing the power partition between the
transmitter and the relays under the assumptions that the transmitter
and $M$ relays are subject to a total power constraint and that both channels have unit variance. Clearly, the optimal power partition
under this setting, letting the transmitter use half of the total power and
each relay share the other half, does not hold for a general network topology
with unequal variances. We make progress with
respect to this since our proposed waterfilling solution can be applied
for any set of variances under statistical CSIT and moreover handles the
partial CSIT case. With perfect CSIT,
Jing proposed a cooperative beamforming scheme for the same two-hop AF network
\cite{jing2007nbu}. Although this beamforming scheme provides a non-negligible power
gain compared to our on-off power allocation as shown in Section \ref{sect:Result},
we remark that our on-off algorithm is much simpler and can be implemented at the receiver without
requiring any knowledge at the transmitter. To this end, it suffices that the
receiver sends to each relay a feedback of one bit indicating whether to
activate or not. Hence, our on-off algorithm might be appealing due to its
robustness and simplicity despite its suboptimal performance.
The rest of the paper is organized as follows. After briefly introducing the
two-hop network model in Section \ref{sect:Model}, we derive the conditional PEP upper bounds for different CSIT cases in Section
\ref{sect:ConditionalPEP}. In Section \ref{sect:Algorithm} we propose
efficient algorithms that solve the conditional PEP minimization, namely
on-off gradient algorithm for perfect CSIT and waterfilling algorithm for
partial and statistical CSIT. We provide some asymptotic properties of these
algorithms in Section \ref{sect:AsymptoticBehaviors} and numerical examples in
Section \ref{sect:Result}. Finally we conclude the paper in Section
\ref{sect:Conclusions}.
\section{System Model}
\label{sect:Model} We consider frequency-flat fading channels and let
${\bf h}=[h_{1},\dots,h_{M}]^{T}$, ${\bf g}=[g_{1},\dots,g_{M}]^{T}$ denote the
channel vector between transmitter and relays and the channel vector between
relays and receiver, respectively. We assume the entries of ${\bf h}$ and ${\bf g}$
are i.i.d. zero-mean circularly symmetric complex Gaussian with variance
$\hbox{\boldmath$\gamma$}_{h}=[\gamma_{h1},\dots,\gamma_{hM}],\hbox{\boldmath$\gamma$}_{g}=[\gamma_{g1}%
,\dots,\gamma_{gM}]$ respectively. The variance of each channel is assumed to
capture path-loss and shadowing. We assume a block fading model, namely ${\bf h}$
and ${\bf g}$ remain constant over a block of $2T$ channel uses. In this paper we
do not consider a transmitter-receiver direct link for simplicity. It is well
known however that the direct link should be taken into account if one aims at
optimizing the diversity-multiplexing tradeoff \cite{nabar2004frc,azarian2005adm,yang2007ost,yang2007toa}.
The communication between the transmitter and the receiver is performed in two
steps. The transmitter first broadcasts a symbol vector ${\bf s}=[s_{1}%
,\dots,s_{T}]^{T}\in\mbox{\bb C}^{T\times1}$ with $\mbox{\bb E}[{\bf s}\sv^H]={\bf I}_{T}$ and relay $i$ receives
\[
{\bf y}_{i}=\sqrt{p_{s}}h_{i}{\bf s}+{\bf n}_{i}%
\]
where $p_{s}$ is the power of the transmitter and ${\bf n}_{i}\sim{\cal N}_{\mbox{\bb C}}%
({\bf 0},N_{0}{\bf I}_{T})$ is AWGN. In the second $T$ channel uses, $M$ relays
amplify and forward the observed codeword by applying a linear precoder.
Namely, the transmit vector ${\bf x}_{i}$ of relay $i$ is given by
\begin{equation}
{\bf x}_{i}=q_{i}{\bf A}_{i}{\bf y}_{i} \label{xi}%
\end{equation}
where $q_{i}$ denotes a complex amplifier coefficient of relay $i$, ${\bf A}_{i}%
\in\mbox{\bb C}^{T\times T}$ is a unitary matrix satisfying ${\bf A}_{i}^{H}{\bf A}_{i}%
={\bf I}_{T}$, and ${\bf x}_{i}$ should satisfy a power constraint, i.e.
\[
\mbox{\bb E}[||{\bf x}_i||^2]=|q_{i}|^{2}\mbox{\bb E}[||{\bf y}_i||^2]\leq Tp_{r}%
\]
where the expectation is with respect to ${\bf n}_{i}$ for a short-term constraint
and with respect to both ${\bf n}_{i}$ and ${\bf h}_{i}$ for a long-term constraint.
This in turns imposes a constraint on $\{q_{i}\}$ such that $|q_{i}|^{2}\leq
P_{i}$ where $P_{i}$ denotes the maximum amplifier power of relay $i$ given
by
\[
P_{i}=\left\{
\begin{array}
[c]{ll}%
\frac{p_{r}}{p_{s}|h_{i}|^{2}+N_{0}}, & \quad\mbox{short-term }\\
\frac{p_{r}}{p_{s}\gamma_{hi}+N_{0}}, & \quad\mbox{long-term}
\end{array}
\right.
\]
The received signal at
the final destination is given by
\begin{align*}
{\bf r} & =\sum_{i=1}^{M}g_{i}{\bf x}_{i}+{\bf w} =\sum_{i=1}^{M}g_{i}h_{i}q_{i}{\bf A}_{i}{\bf s}+\sum_{i=1}^{M}g_{i}q_{i}{\bf A}_{i}{\bf n}_{i}+{\bf w}
\end{align*}
where ${\bf w}\sim{\cal N}_{\mbox{\bb C}}({\bf 0},N_{0}{\bf I}_{T})$ is AWGN at the receiver
uncorrelated with $\{{\bf n}_{i}\}$. The received vector can be further simplified
to
\begin{equation}
{\bf r}=\sqrt{p_{s}}{\bf S}{\bf Q}{\bf f}+{\bf v} \label{r}%
\end{equation}
where we let ${\bf S}=[{\bf A}_{1}{\bf s},\dots,{\bf A}_{M}{\bf s}]\in\mbox{\bb C}^{T\times M}$ denote a LD
codeword, ${\bf Q}={\hbox{diag}}(q_{1},\dots,q_{M})$ is a diagonal matrix with $M$
amplifier coefficients, ${\bf f}=[h_{1}g_{1},\dots,h_{M}g_{M}]^{T}$ is a composite
channel vector, and we let
\begin{equation}
{\bf v}=\sum_{i=1}^{M}q_{i}g_{i}{\bf A}_{i}{\bf n}_{i}+{\bf w}
\end{equation}
denote the overall noise whose covariance is given by
\[
\mbox{\bb E}[{\bf v}\vv^H]=N_{0}\left( \sum_{i=1}^{M}|q_{i}|^{2}|g_{i}|^{2}+1\right)
{\bf I}_{T}=\sigma_{v}^{2}{\bf I}_{T}%
\]
It follows that the overall noise seen by the receiver is white and this
considerably simplifies the amplifier power allocation in the following sections.
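As a numerical sanity check of this whiteness property, the following sketch (ours; the unitary matrices ${\bf A}_{i}$ are random and the coefficients are chosen arbitrarily) estimates the covariance of ${\bf v}$ by Monte Carlo and compares it with $\sigma_{v}^{2}{\bf I}_{T}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T, M, N0 = 4, 3, 1.0
q = rng.standard_normal(M) + 1j * rng.standard_normal(M)
g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

# random unitary A_i via QR decomposition
A = [np.linalg.qr(rng.standard_normal((T, T))
                  + 1j * rng.standard_normal((T, T)))[0] for _ in range(M)]

# Monte Carlo estimate of E[v v^H], with v = sum_i q_i g_i A_i n_i + w
trials = 20000
cov = np.zeros((T, T), dtype=complex)
for _ in range(trials):
    v = np.sqrt(N0 / 2) * (rng.standard_normal(T) + 1j * rng.standard_normal(T))
    for i in range(M):
        n = np.sqrt(N0 / 2) * (rng.standard_normal(T)
                               + 1j * rng.standard_normal(T))
        v = v + q[i] * g[i] * (A[i] @ n)
    cov += np.outer(v, v.conj()) / trials

sigma_v2 = N0 * (np.sum(np.abs(q)**2 * np.abs(g)**2) + 1.0)
print(np.allclose(cov, sigma_v2 * np.eye(T), atol=0.1 * sigma_v2))
# True (up to Monte Carlo error)
\end{verbatim}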
\vspace{-1em}
\section{Conditional PEP}
\label{sect:ConditionalPEP} With perfect knowledge of both ${\bf h}$ and ${\bf g}$,
the receiver can perform Maximum Likelihood decoding \footnote{In practice,
efficient decoding techniques such as sphere decoding can be implemented to
achieve near ML results \cite{hassibi2002hrc}.} by estimating a codeword
according to
\begin{equation}
\hat{{\bf S}}=\arg\min_{{\bf S}\in{\cal S}}||{\bf r}-\sqrt{p_{s}}{\bf S}{\bf Q}{\bf f}||^{2}%
\end{equation}
When the transmitter has only partial knowledge of the channels, it is
reasonable to consider the pairwise error probability (PEP) conditioned on the available CSIT. In the following we derive the expressions of the conditional
PEP for three different CSIT cases: 1) perfect CSIT, where the transmitter
knows the absolute values of the entries of ${\bf h},{\bf g}$, 2) partial CSIT where
the transmitter knows the absolute values of the entries of ${\bf h}$ and
$\hbox{\boldmath$\gamma$}_{g}$, 3) statistical CSIT where the transmitter knows $\hbox{\boldmath$\gamma$}_{h}$
and $\hbox{\boldmath$\gamma$}_{g}$. Perfect CSIT corresponds to the case of quasi-static
fading, while statistical CSIT corresponds to the case of fast fading so that
the transmitter can track only the second order statistics of the channel.
Finally, partial CSIT is an intermediate case relevant
to a time-division duplexing system where the transmitter learns perfectly ${\bf h}$ by reciprocity
but only statistically ${\bf g}$ due to a low-rate feedback.
\vspace{-1em}
\subsection{Perfect CSIT}
The PEP conditioned on ${\bf h},{\bf g}$ for any $k\neq l$ is defined by
\begin{eqnarray*}
P({\bf S}_{k}\rightarrow{\bf S}_{l}|{\bf h},{\bf g})& \stackrel{\Delta}{=}& \Pr\left( ||{\bf r}-\sqrt{p_{s}}{\bf S}_{l}{\bf Q}{\bf H}{\bf g}||^{2}\leq
||{\bf r}-\sqrt{p_{s}}{\bf S}_{k}{\bf Q}{\bf H}{\bf g}||^{2}|{\bf S}_{k},{\bf h},{\bf g}\right) =\Pr\left( d^{2}({\bf S}_{k},{\bf S}_{l})\leq\kappa\right)
\end{eqnarray*}
where the composite channel ${\bf f}$ is decoupled into ${\bf H}{\bf g}$ with
${\bf H}={\hbox{diag}}({\bf h})$, where we define squared Euclidean distance between ${\bf S}_{l}$
and ${\bf S}_{k}$ as
\[
d^{2}({\bf S}_{k},{\bf S}_{l})=p_{s}{\bf g}^{H}{\bf H}^{H}{\bf Q}^{H}({\bf S}_{k}-{\bf S}_{l})^{H}%
({\bf S}_{k}-{\bf S}_{l}){\bf Q}{\bf H}{\bf g}
\]
and where $\kappa=2\sqrt{p_{s}}\Re\{{\bf v}^{H}({\bf S}_{k}-{\bf S}_{l}){\bf Q}{\bf f}\}$ is a
real Gaussian random variable distributed as ${\cal N}_{\mbox{\bb R}}(0,2\sigma_{v}^{2}%
d^{2})$. In order to obtain a upper bound of the PEP, we assume that the term
$({\bf S}_{k}-{\bf S}_{l})^{H}({\bf S}_{k}-{\bf S}_{l})$ has a full rank $M$, i.e. the LD code
achieves a full diversity (for a special case of orthogonal STC this always holds). By letting $\lambda_{min}$ denote the smallest
singular value of $({\bf S}_{k}-{\bf S}_{l})^{H}({\bf S}_{k}-{\bf S}_{l})$ over all possible
codewords, we obtain the inequality
\[
d^{2}({\bf S}_{k},{\bf S}_{l})\geq p_{s}\lambda_{min}{\bf g}^{H}{\bf H}^{H}{\bf Q}^{H}{\bf Q}{\bf H}{\bf g}
\]
which yields a Chernoff bound
\begin{equation}
P({\bf S}_{k}\rightarrow{\bf S}_{l}|{\bf h},{\bf g})\leq\exp\left( -\frac{p_{s}\lambda
_{min}{\bf g}^{H}{\bf H}^{H}{\bf Q}^{H}{\bf Q}{\bf H}{\bf g}}{4\sigma_{v}^{2}}\right) .
\label{2ConditionedPEP}%
\end{equation}
Minimizing the RHS of (\ref{2ConditionedPEP}) corresponds to maximizing
\textit{approximated} receive SNR, given by
\begin{equation}
\frac{p_{s}\lambda_{\min}{\bf g}^{H}{\bf H}^{H}{\bf Q}^{H}{\bf Q}{\bf H}{\bf g}}{4\sigma_{v}^{2}}%
=\eta\frac{\sum_{i=1}^{M}p_{i}|g_{i}|^{2}|h_{i}|^{2}}{\sum_{i=1}^{M}%
p_{i}|g_{i}|^{2}+1} \label{Objective1}%
\end{equation}
where we let $\eta=\lambda_{min}p_{s}/4N_{0}$ and let $p_{i}=|q_{i}|^{2}$
denote the amplifier power of relay $i$. Notice that the above function
depends on the absolute values of channels and of amplifier coefficients.
\vspace{-1em}
\subsection{Partial CSIT}
The PEP upper bound conditioned on ${\bf h},\hbox{\boldmath$\gamma$}_{g}$ is obtained by
averaging (\ref{2ConditionedPEP}) over the distribution of ${\bf g}$
\begin{eqnarray}
P({\bf S}_{k}\rightarrow{\bf S}_{l}|{\bf h},\hbox{\boldmath$\gamma$}_{g})
& \overset{\mathrm{(a)}}{\approx} & \int\frac{1}{\det(\pi{\hbox{diag}}(\hbox{\boldmath$\gamma$}_g))}\exp\left\{
-{\bf g}^{H}\left( \frac{\eta{\bf H}^{H}{\bf P}{\bf H}}{1+\sum_{i=1}^{M}\gamma_{gi}p_{i}
}+{\hbox{diag}}(\hbox{\boldmath$\gamma$}_g)^{-1}\right) {\bf g}\right\} d{\bf g}\nonumber\\
&=&\det\left( \frac{\eta}{1+\sum_{i=1}^{M}\gamma_{gi}p_{i}}{\hbox{diag}}(\hbox{\boldmath$\gamma$}_g)
{\bf H}^{H}{\bf P}{\bf H}+{\bf I}_{M}\right) ^{-1}
=\prod_{i=1}^{M}\left( 1+\eta\frac{|h_{i}|^{2}\gamma_{gi}p_{i}}
{1+\sum_{j=1}^{M}\gamma_{gj}p_{j}}\right) ^{-1} \label{Objective2}
\end{eqnarray}
where in (a) we let ${\bf P}={\hbox{diag}}(p_{1},\dots,p_{M})$ and apply a Laplace-based saddle point
approximation that becomes accurate as the number of relays increases without
bound (see further Appendix\ \ref{sect:AppendixSaddle}). This saddle point
approximation is inspired by the approximation method suggested in
\cite{lieberman2004lam} to evaluate the expectation of quotients of quadratic
forms in Gaussian random variables. We remark that, in order to maximize the
corresponding cost function in (\ref{Objective2}), only the absolute values of
the entries of ${\bf h}$ are needed.
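The accuracy of this approximation can be probed numerically: the sketch below (ours; all parameter values are arbitrary) compares a Monte Carlo average of the Chernoff bound (\ref{2ConditionedPEP}) over ${\bf g}$ with the closed form (\ref{Objective2}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M, eta = 16, 10.0
p = rng.uniform(0.5, 2.0, M)            # amplifier powers (arbitrary)
gam_g = rng.uniform(0.5, 2.0, M)
h2 = rng.exponential(1.0, M)            # |h_i|^2, known at the transmitter

# Monte Carlo average of the conditional Chernoff bound over g
trials = 200000
g2 = gam_g * rng.exponential(1.0, (trials, M))   # |g_i|^2 samples
num = (g2 * p * h2).sum(axis=1)
den = 1.0 + (g2 * p).sum(axis=1)
exact = np.exp(-eta * num / den).mean()

# Laplace/saddle-point closed form
rho = eta * h2 * gam_g * p / (1.0 + gam_g @ p)
approx = np.prod(1.0 / (1.0 + rho))
print(exact, approx)   # same order of magnitude; agreement improves with M
\end{verbatim}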
\vspace{-1.2em}
\subsection{Statistical CSIT}
The PEP upper bound conditioned on $\hbox{\boldmath$\gamma$}_h,\hbox{\boldmath$\gamma$}_{g}$ is obtained by
averaging (\ref{2ConditionedPEP}) over the distribution of ${\bf h}$ and ${\bf g}$
\begin{align}
P({\bf S}_{k}\rightarrow{\bf S}_{l}|\hbox{\boldmath$\gamma$}_{h},\hbox{\boldmath$\gamma$}_{g})
& \leq \mbox{\bb E}_{{\bf g}}\left[ \det^{-1}\left( {\bf I}_{M}+\frac{\eta}{{\hbox{tr}}({\bf G}^{H}%
{\bf P}{\bf G})}{\hbox{diag}}(\hbox{\boldmath$\gamma$}_h){\bf G}^{H}{\bf P}{\bf G}\right) \right] \nonumber\\
& \overset{\mathrm{(a)}}{\approx}\mbox{\bb E}_{{\bf g}}\left[ \prod_{j=1}^{M}\left(
1+\frac{\eta\gamma_{hj}|g_{j}|^{2}p_{j}}{1+\sum_{i=1}^{M}\gamma_{gi}p_{i}%
}\right) ^{-1}\right]
\overset{\mathrm{(b)}}{=}\prod_{j=1}^{M}\frac{1}{\rho_{j}}e^{1/\rho_{j}%
}E_{1}(1/\rho_{j})\nonumber\\
& \overset{\mathrm{(c)}}{=}\prod_{j=1}^{M}\frac{1}{\rho_{j}}\left[
-\gamma+\ln(\rho_{j})+O\left( \frac{1}{\rho_{j}}\right) \right]
\overset{\mathrm{(d)}}{\approx}\prod_{j=1}^{M}\frac{1+\sum_{i=1}^{M}%
\gamma_{gi}p_{i}}{\eta\gamma_{gj}\gamma_{hj}p_{j}}\ln\left( \frac{\eta
\gamma_{gj}\gamma_{hj}p_{j}}{1+\sum_{i=1}^{M}\gamma_{gi}p_{i}}\right)\label{Objective3}
\end{align}
where in (a) we apply the Laplace-based saddle-point approximation of the
integral mentioned above, (b) follows by noticing that $|g_{i}|^{2}%
/\gamma_{gi}$ is an exponential random variable with unit mean and using
\cite[3.352]{gradshteyn1988tis} where we write $\rho_{j}=\frac{\eta
\gamma_{hj}\gamma_{gj}p_{j}}{1+\sum_{i=1}^{M}\gamma_{gi}p_{i}}$ and let
$E_{1}(x)=\int_{x}^{\infty}\frac{e^{-t}}{t}dt$ denote the exponential
integral. In (c) we assume that $\rho_{j}$ is large ($\eta\rightarrow\infty$)
so that
\begin{align*}
e^{1/\rho_{j}} E_{1}\left( \frac{1}{\rho_{j}}\right) & =-\gamma+\ln(\rho_{j})+\sum
_{k=1}^{\infty}\frac{(-1)^{k+1}(\rho_{j})^{-k}}{k\,k!}\\
& =-\gamma+\ln(\rho_{j})+O\left( \frac{1}{\rho_{j}}\right)
\end{align*}
where $\gamma$ is the Euler constant, and finally in (d) we assume $\ln
(\rho_{j})\gg\gamma$.
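Step (c) is easily checked numerically; a short sketch using SciPy's exponential integral (the values of $\rho_{j}$ are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.special import exp1

for rho in [10.0, 100.0, 1000.0]:
    lhs = np.exp(1.0 / rho) * exp1(1.0 / rho)
    rhs = -np.euler_gamma + np.log(rho)
    print(rho, lhs, rhs)   # the gap closes as rho grows
\end{verbatim}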
\section{Power allocation algorithms}
\label{sect:Algorithm} This section proposes efficient power allocation
algorithms to optimize (\ref{Objective1}),
(\ref{Objective2}), and (\ref{Objective3}).
\subsection{Perfect CSIT}
Under perfect CSIT,
the optimal ${\bf p}^{\star}$ is obtained by maximizing
\begin{eqnarray}
f_{0}({\bf p})\stackrel{\Delta}{=}\frac{\sum_{i=1}^{M}\alpha_{i}p_{i}
}{1+\sum_{i=1}^{M}\beta_{i}p_{i}} \label{QuasiLP}%
\end{eqnarray}
where $p_i$ is subject to the maximum amplifier power of relay $i$, i.e. $p_i\leq P_i=\frac{p_{r}%
}{p_{s}|h_{i}|^{2}+N_{0}}$ and we let $\alpha_{i}=|h_{i}|^{2}|g_{i}|^{2}$ and
$\beta_{i}=|g_{i}|^{2}$. We remark that the linear constraints form a feasible
region ${\cal V}$ composed by $M$ half-spaces with $2^{M}-1$ vertices. For $M=2$
the feasible region ${\cal V}$ is a rectangular region with 3 vertices
$(P_{1},0),(0,P_{2}),(P_{1},P_{2})$ plus the origin. Since this problem is
quasi-linear, it is possible to transform it into a linear program. By
exploiting the structure of the problem, we propose a more efficient algorithm
to find the solution. First, we start with the following proposition.
\textbf{Proposition 1} The solution to (\ref{QuasiLP}) is always found at
one of the $2^{M}-1$ vertices of the feasible region ${\cal V}$. Moreover, at the
solution ${\bf p}^{\star}$, the entries of the gradient satisfy the following
inequality for $i=1,\dots,M$.
\begin{equation}
\frac{\partial f_{0}}{\partial p_{i}}({\bf p}^{\star})\left\{
\begin{array}
[c]{ll}%
>0, & \quad\mbox{if $p_i^{\star}=P_i$}\\
\leq0, & \quad\mbox{if $p_i^{\star}=0$}
\end{array}
\right. \label{SufficientCondition}%
\end{equation}
\textbf{Proof } see Appendix \ref{proof1}.
For $M=2$ we have a closed form solution of the optimal power allocation as an
obvious result of Proposition 1.
\textbf{Corollary 1 } For $M=2$, we find a closed-form solution given by
\begin{equation} \label{2relayOnOff}
(p_{1},p_{2}) = \left\{
\begin{array}
[c]{ll}%
(0,P_2) , &\quad\mbox{if $|h_{1}|^{2}<\frac{p_{r}|g_{2}|^{2}|h_{2}|^{2}}{p_{s}|h_{2}|^{2}+p_{r}%
|g_{2}|^{2}+N_{0}}$}\\
(P_1,0) , &
\quad\mbox{if $|h_{2}|^{2}<\frac{p_{r}|g_{1}|^{2}|h_{1}|^{2}}{p_{s}|h_{1}|^{2}+p_{r}%
|g_{1}|^{2}+N_{0}}$}\\
(P_{1},P_{2}) , & \quad\mbox{if $|h_{1}|^{2}>\frac{p_{r}|g_{2}|^{2}|h_{2}|^{2}}{p_{s}|h_{2}|^{2}+p_{r}%
|g_{2}|^{2}+N_{0}}$ and $|h_{2}|^{2}>\frac{p_{r}|g_{1}|^{2}|h_{1}|^{2}}{p_{s}|h_{1}|^{2}+p_{r}%
|g_{1}|^{2}+N_{0}}$}
\end{array}
\right.
\end{equation}
\textbf{Proof } see Appendix \ref{proof2}.
In order to visualize the conditions of
activating relay 1 and/or relay 2, we provide a graphical representation of the
on-off region in Fig. \ref{fig:Region}. Interestingly, it can be observed that
there is a minimum value of $|h_{i}|^{2}$ in order for relay $i$ to be
activated. Namely relay 1 is activated independently of relay 2 if
\[
|h_{1}|^{2} > \frac{p_{r}}{p_{s}} |g_{2}|^{2}%
\]
which is readily obtained when letting $|h_{2}|^{2}\rightarrow\infty$ in the first inequality of the third condition, i.e. $|h_{1}|^{2}>\frac{p_{r}|g_{2}|^{2}|h_{2}|^{2}}{p_{s}|h_{2}|^{2}+p_{r}%
|g_{2}|^{2}+N_{0}}$, and vice versa.
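Corollary 1 translates directly into code; the sketch below implements (\ref{2relayOnOff}), ignoring the boundary cases of equality (function and variable names are ours).
\begin{verbatim}
def onoff_two_relays(h2, g2, ps, pr, N0):
    # h2 = [|h_1|^2, |h_2|^2], g2 = [|g_1|^2, |g_2|^2]
    thr = [pr * g2[i] * h2[i] / (ps * h2[i] + pr * g2[i] + N0)
           for i in range(2)]
    P = [pr / (ps * h2[i] + N0) for i in range(2)]
    p1 = P[0] if h2[0] > thr[1] else 0.0  # relay 1 on iff |h_1|^2 > threshold
    p2 = P[1] if h2[1] > thr[0] else 0.0  # threshold built from the other relay
    return p1, p2
\end{verbatim}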
For $M>2$, the solution does not lend itself to a simple closed form
expression. Nevertheless, as a straightforward result of Proposition 1 we
propose the following algorithm to solve (\ref{QuasiLP}). \newline%
\textbf{On-off gradient algorithm}
\begin{enumerate}
\item Initialize ${\bf p}^{(0)}$ to an arbitrary vertex $\in{\cal V}$
\item At iteration $n$, compute the gradient $\frac{\partial f_{0}}{\partial
{\bf p}}({\bf p}^{(n)})$\newline and update
\begin{equation}
\label{Update}p_{i}^{(n+1)} = \left[ p_{i}^{(n)} + \nabla_{i}({\bf p}^{(n)}%
)\right] _{0}^{P_{i}},\;\;\; i=1,\dots,M
\end{equation}
where we let $\nabla_{i}({\bf p}^{(n)})=-P_{i}$ if $\frac{\partial f_{0}}{\partial
p_{i}}({\bf p}^{(n)})<0$ and $\nabla_{i}({\bf p}^{(n)})=P_{i}$ if $\frac{\partial
f_{0}}{\partial p_{i}}({\bf p}^{(n)})>0$, and
$\left[ x\right] _{a}^{b}$ denotes the value of $x$ truncated to the
interval $[a,b]$.
\item Stop if ${\bf p}^{(n)}$ satisfies (\ref{SufficientCondition}).
\end{enumerate}
\textbf{Proposition 2} The on-off gradient algorithm converges to the global maximum.
\textbf{Proof } See Appendix \ref{proof3}.
Fig. \ref{fig:ConvergenceM} plots the convergence behavior of the proposed
on-off algorithm when we let $p_s=p_r=10$ and consider equal variances $\gamma_{h,i}=\gamma_{g,i}=1$ for all $i$. The objective values are normalized with respect to the
optimal values and averaged over a large number of random channel
realizations.
Fig. \ref{fig:ConvergenceM} as well as other examples show that the proposed on-off
gradient algorithm converges after only a few iterations, irrespective of $M$
and of the initialization.
It is worth noticing that this on-off algorithm can be implemented at the
receiver without any knowledge at the transmitter. To this end, it suffices
that the receiver sends to each relay a feedback of one bit indicating whether
to activate or not.
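A minimal implementation of the algorithm may look as follows (our sketch; it starts from the all-on vertex and uses the fixed point of the update (\ref{Update}) as stopping test).
\begin{verbatim}
import numpy as np

def onoff_gradient(h2, g2, ps, pr, N0, max_iter=50):
    alpha, beta = h2 * g2, g2               # numerator/denominator weights
    P = pr / (ps * h2 + N0)                 # short-term power limits
    p = P.copy()                            # start at the all-on vertex
    for _ in range(max_iter):
        den = 1.0 + beta @ p
        grad = (alpha * den - beta * (alpha @ p)) / den**2
        p_new = np.where(grad > 0, P, 0.0)  # jump to the indicated vertex
        if np.array_equal(p_new, p):        # condition (SufficientCondition)
            break
        p = p_new
    return p
\end{verbatim}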
\subsection{Partial CSIT}
When the transmitter knows ${\bf h}$ and $\hbox{\boldmath$\gamma$}_{g}$, the problem reduces to
minimizing (\ref{Objective2}) or maximizing
\begin{equation}
f_{1}({\bf p})=\sum_{i=1}^{M}\ln\left( 1+\eta|h_{i}|^{2}\frac{\gamma_{gi}p_{i}%
}{1+\sum_{j=1}^{M}\gamma_{gj}p_{j}}\right) \label{f1}%
\end{equation}
The term $\frac{\eta|h_{i}|^{2}\gamma_{gi}p_{i}}{1+\sum_{j=1}^{M}\gamma
_{gj}p_{j}}$, similar to $\rho_{i}=\frac{\eta\gamma_{hi}\gamma_{gi}p_{i}%
}{1+\sum_{j=1}^{M}\gamma_{gj}p_{j}}$ defined in (b) of (\ref{Objective3}) for
the statistical CSIT case, can be interpreted as the contribution of relay $i$
to the receive SNR. With some abuse of notation, we let $\rho_{i}$ denote
$\frac{\eta|h_{i}|^{2}\gamma_{gi}p_{i}}{1+\sum_{j=1}^{M}\gamma_{gj}p_{j}}$
under partial CSIT. Unfortunately, the function $f_{1}$ is neither concave nor
convex in ${\bf p}$. Nevertheless, assuming $\eta\rightarrow\infty$ (which
is the regime of our interest), let us consider a new objective function,
given by
\begin{equation}
J({\bf p})=\sum_{i=1}^{M}\ln\left( \frac{a_{i}p_{i}}{1+\sum_{j}\gamma_{gj}p_{j}%
}\right) \label{NewObjective}%
\end{equation}
where we let $a_{i}=\eta\gamma_{gi}|h_{i}|^{2}$ for notation simplicity. It is
well known that the function $J({\bf p})$ can be transformed into a
\textit{concave} function through a log transformation \cite{GP}. In the
following, we use the notation $\widetilde{x}$ to express $\ln x$ for any
variable $x$ (equivalently $x=e^{\tilde{x}}$). The objective function can be
expressed in terms of $\widetilde{{\bf p}}$ as
\begin{equation}
J(\widetilde{{\bf p}})=\sum_{i=1}^{M}\ln(a_{i}e^{\widetilde{p_{i}}})-M\ln\left(
1+\sum_{j}\exp(\widetilde{p_{j}})\gamma_{gj}\right) \label{LogObjective}%
\end{equation}
\textbf{Proposition 3 } The optimal $\widetilde{{\bf p}}$ that maximizes
(\ref{LogObjective}) is given by
\begin{equation}\label{WFSolution}
\tilde{p}_{i}=\left[ \tilde{\mu}^{\star}-\tilde{\gamma}_{gi}\right] _{-\infty
}^{\tilde{P}_{i}},\;\;\;\;p_{i}=\left[ \frac{\mu^{\star}}{\gamma_{gi}}\right]
_{0}^{P_{i}}%
\end{equation}
where $\tilde{\mu}^{\star}$ is the \textit{water level} that is
determined as follows. Let $\pi$ denote a permutation such that
\begin{equation}\label{Permutation}
P_{\pi(1)}\gamma_{g,\pi(1)}\leq\ldots\leq P_{\pi(M)}\gamma_{g,\pi(M)}.
\end{equation}
and define $\tilde{\mu_{j}}=\ln\mu_{j}$, where $\mu_{j}$ is given by
\begin{equation}\label{MuCandidate}
\mu_{j}=\frac{1+\sum_{i=1}^{j}P_{\pi(i)}\gamma_{g,\pi(i)}}{j}\;\;\;j=1,\dots
,M.
\end{equation}
The optimal water level $\tilde{\mu}^{\star}$ is obtained as the value, out of these
$M$ possible ones, that maximizes the objective function, namely
\begin{equation}\label{OptimalLevel}
\tilde{\mu}^{\star}=\arg\max_{\tilde{\mu}_{1},\dots,\tilde{\mu}_{M}}J(\tilde{\mu}_{j})
\end{equation}
where $J(\tilde{\mu})$ is the objective function (\ref{LogObjective}) parameterized by the water level defined in (\ref{Jmu}).
{\bf Proof } see Appendix \ref{proof4}.
Fig. \ref{fig:KKT} illustrates an example of our waterfilling solution for the
case $M=3$. The power curve of relay $i$ increases linearly with slope
$1/\gamma_{gi}$ and then is bounded at its maximum amplifier power $P_{i}$. In
this example, relays 1 and 2 with $P_{i}\gamma_{gi}<\mu^{\star}$ are allocated their
maximum amplifier powers while relay $3$ is allocated $\frac{\mu^{\star}}{\gamma
_{g,3}}$. Depending on the water level, this waterfilling yields
a non-trivial solution between maximum power allocation ($p_{i}=P_{i},\forall
i$) and a generalized STC that equalizes the averaged amplified noise $p_{i}\gamma_{gi}=\mu^{\star}$ (notice a
classical STC considers $\gamma_{gi}=1$ for all $i$). Note that the proposed
waterfilling approach only requires a search over $M$ values in order to
determine the water level, and consequently it is extremely simple.
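The water level search of Proposition 3 reduces to a few lines of code. The sketch below (ours) evaluates the $M$ candidates (\ref{MuCandidate}) and picks the maximizer of (\ref{LogObjective}); the constant $\sum_{i}\ln a_{i}$ is dropped, so the routine needs only $\hbox{\boldmath$\gamma$}_{g}$ and the power limits.
\begin{verbatim}
import numpy as np

def waterfilling(gam_g, P):
    M = len(P)
    order = np.argsort(P * gam_g)          # permutation pi of (Permutation)
    c = (P * gam_g)[order]
    mu = (1.0 + np.cumsum(c)) / np.arange(1, M + 1)   # candidates mu_j

    def J(m):                              # (LogObjective) up to a constant
        p = np.minimum(m / gam_g, P)
        return np.sum(np.log(p)) - M * np.log(1.0 + gam_g @ p)

    mu_star = mu[np.argmax([J(m) for m in mu])]
    return np.minimum(mu_star / gam_g, P)  # p_i = mu*/gamma_gi capped at P_i
\end{verbatim}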
\textbf{Corollary 2 } For $M=2$, we find a closed-form solution given by
\begin{equation}
\label{3Subregion}(p_{1},p_{2}) = \left\{
\begin{array}
[c]{ll}%
\left( P_{1}, \frac{1+P_{1}\gamma_{g1}}{\gamma_{g2}}\right) , &
\quad\mbox{if $P_2 \gamma_{g2} \geq P_1 \gamma_{g1} +1 $}\\
\left( \frac{1+P_{2}\gamma_{g2}}{\gamma_{g1}}, P_{2}\right) , &
\quad\mbox{if $P_2 \gamma_{g2} \leq P_1 \gamma_{g1} -1$}\\
(P_{1},P_{2}) , & \quad\mbox{otherwise}
\end{array}
\right.
\end{equation}
\textbf{Proof } see Appendix \ref{proof5}.
By expressing the power constraint $P_{i}=\frac{p_{r}}{p_{s}|h_{i}|^{2}+N_{0}%
}$, the above allocation policy can be graphically represented as a function
of $|h_{1}|^{2}$ and $|h_{2}|^{2}$ in Fig. \ref{fig:HRegion}. Similarly to
Fig. \ref{fig:Region} for the case of perfect CSIT, there exists a minimum value of $|h_{i}|^{2}$ so that
relay $i$ is allocated its maximum amplifier power. Namely, relay $i$ is
allocated its maximum amplifier power independently of relay $j\neq i$ if
$|h_{i}|^2 >\frac{p_{r}\gamma_{g,i}-N_{0}}{p_{s}}$.
Interestingly, the threshold associated to relay $i$ depends only on the
$i$-th channel, as opposed to what happens in the perfect CSIT case. This
means that the power allocation is more selfish under partial/statistical CSIT
in order to increase the reliability of the wireless link.
\vspace{-2.1em}
\subsection{Statistical CSIT}
When the transmitter only knows the variances $\hbox{\boldmath$\gamma$}_{g},\hbox{\boldmath$\gamma$}_{h}$ of the
channels, we minimize the expression (\ref{Objective3}) which is equivalent to
maximizing
\begin{equation}
f_{2}({\bf p})=\sum_{i=1}^{M}\ln\left( \frac{\eta\gamma_{hi}\gamma_{gi}p_{i}%
}{1+\sum_{j=1}^{M}p_{j}\gamma_{gj}}\right) \label{f2}%
\end{equation}
where we ignored doubly logarithmic terms and the amplifier power $p_{i}$ of
relay $i$ is subject to a long-term individual power constraint $P_{i}%
=\frac{p_{r}}{p_{s}\gamma_{hi}+N_{0}}$ for all $i$. Again, by performing a
log-transformation we obtain precisely the same objective function
$J(\tilde{{\bf p}})$ in (\ref{LogObjective}) where $a_{i}=\eta\gamma_{gi}%
|h_{i}|^{2}$ defined in the previous partial CSIT case is replaced with
$\eta\gamma_{gi}\gamma_{hi}$. Hence, the waterfilling solution proposed for
the partial CSIT case can be directly applied to the statistical CSIT case and
needs to be implemented once for a given set of variances $\hbox{\boldmath$\gamma$}_{h}%
,\hbox{\boldmath$\gamma$}_{g}$. The power allocation region for $M=2$ is given in Fig.
\ref{fig:HRegion} where the axes are replaced by $\gamma_{h1}$ and
$\gamma_{h2}$.
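Since the $a_{i}$ enter (\ref{LogObjective}) only through an additive constant, the partial and statistical CSIT cases differ only through the short-term versus long-term limits $P_{i}$. As a usage example (reusing \texttt{waterfilling} from the sketch above, with hypothetical variances):
\begin{verbatim}
import numpy as np   # waterfilling() as defined in the previous sketch

gam_h = np.array([4.0, 1.0, 0.25])   # hypothetical near/far relay variances
gam_g = np.array([0.25, 1.0, 4.0])
ps, pr, N0 = 10.0, 10.0, 1.0
P_long = pr / (ps * gam_h + N0)      # long-term amplifier power limits
print(waterfilling(gam_g, P_long))
\end{verbatim}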
\section{Asymptotic behavior}\label{sect:AsymptoticBehaviors}
This section studies the asymptotic behavior
of the proposed power allocation algorithms and gives an informal discussion
on the resulting error rate performance.
\subsection{Relays close to transmitter $\hbox{\boldmath$\gamma$}_{h}\rightarrow\infty$}
\label{subsect:h} We consider the regime where $\gamma_{h,i}\rightarrow\infty$
or equivalently $|h_{i}|^{2}\rightarrow\infty$ for all $i$ at the same rate
while treating other parameters finite. First we examine a two relay case.
Under perfect CSIT, Fig. \ref{fig:Region} implies that both relays tend to be
switched on in this regime. Under partial and statistical CSIT cases, it
follows immediately from Fig. \ref{fig:HRegion} that two relays shall transmit
at their maximum powers.
For $M>2$ the same conclusion can be drawn as a straightforward result of
Proposition 1 for the perfect CSIT case. The condition for which all $M$
relays are switched on under perfect CSIT is given by
\[
|h_{i}|^{2}>\frac{\sum_{j=1}^{M}P_{j}|h_{j}g_{j}|^{2}}{1+\sum_{j=1}^{M}%
P_{j}|g_{j}|^{2}},\;\;i=1,\dots,M
\]
where the RHS corresponds to the objective value $f_{0}(P_{1},\dots,P_{M})$
when letting all relays transmit with maximum power. The RHS is upper bounded
by
\begin{align}
f_{0}(P_{1},\dots,P_{M}) & \overset{\mathrm{(a)}}{\leq}\frac{p_{r}}{p_{s}%
}\frac{\sum_{j=1}^{M}|g_{j}|^{2}}{1+\sum_{j=1}^{M}P_{j}|g_{j}|^{2}%
} \overset{\mathrm{(b)}}{\leq}\frac{p_{r}}{p_{s}}\sum_{j=1}^{M}|g_{j}|^{2}\label{UpBound}
\end{align}
where (a) follows from $P_{j}|h_{j}|^{2}=\frac{p_{r}|h_{j}|^{2}}{p_{s}%
|h_{j}|^{2}+N_{0}}\leq\frac{p_{r}}{p_{s}}$ and (b) follows from $\frac
{1}{1+\sum_{j=1}^{M}|g_{j}|^{2}P_{j}}\leq1$. In the limit of $|h_{j}%
|^{2}\rightarrow\infty,\forall j$, both (a) and (b) hold with equality and we
have
\[
|h_{i}|^{2}>\frac{p_{r}}{p_{s}}\sum_{j=1}^{M}|g_{j}|^{2},\;\;i=1,\dots,M
\]
so that all relays are allocated their maximum powers. From the upper bound
(\ref{UpBound}) of the objective function, it can be expected that the
performance of distributed LD code improves for a larger $M$. Under partial
and statistical CSIT, the proposed waterfilling tends to allocate the maximum
power to each relay. This can be seen immediately from the waterfilling
solution depicted in Fig.\ref{fig:KKT}. As $P_{j}\rightarrow0$, the values
$\{P_{i}\gamma_{g,i}\}$ above which the power curves are bounded become much
smaller than the lowest water level $\mu_{\mathrm{min}}=1/M$. This means that
all relays are allocated the maximum powers. The following remarks are in order:
\begin{enumerate}
\item Since the waterfilling algorithm under partial CSIT and the on-off gradient
algorithm coincide, both algorithms yield the same error performance. This
implies that the knowledge of ${\bf g}$ has a negligible effect on the performance
in this regime.
\item As a final remark, the same behavior can be
observed in the following cases.
\begin{itemize}
\item the transmitter power increases $p_{s}\rightarrow \infty$.
\item the variance $\hbox{\boldmath$\gamma$}_{g}$ of the relay-receiver channel decreases, i.e. $\hbox{\boldmath$\gamma$}_{g}\rightarrow{\bf 0}$ or equivalently $|g_i|^2\rightarrow 0$ for all $i$ at the same rate.
\end{itemize}
\end{enumerate}
\subsection{Relays get close to receiver $\hbox{\boldmath$\gamma$}_{g}\rightarrow\infty$}
\label{subsect:g} We consider the regime where $\gamma_{g,i}\rightarrow\infty$
or equivalently $|g_{i}|^{2}\rightarrow\infty$ for all $i$ at the same rate.
First we examine a two-relay case $M=2$. Under perfect CSIT, it can be
observed that the threshold values $\frac{p_{r}|g_{i}|^{2}}{p_{s}}$ above
which each relay becomes activated (represented by straight lines in Fig.
\ref{fig:Region}) get large and the on-off algorithm converges to relay selection.
Similarly, the waterfilling algorithm also tends to allocate only one relay
with maximum power under partial and statistical CSIT cases as expected from
Fig. \ref{fig:HRegion}. The only exception is a symmetric variance case
$\gamma_{h,1}=\gamma_{h,2}$.
For $M>2$ under perfect CSIT, we next show that the on-off strategy
converges to single relay selection as $|g_{i}|^{2}\rightarrow\infty,\forall i$. To see this, first observe that if the on-off algorithm
chooses only one relay to switch on, it selects the relay
\[
i^{\star}=\arg\max_{i}\frac{|h_{i}|^{2}}{1+1/|g_{i}|^{2}P_{i}}%
\]
The objective value is given by (with some abuse of notation in the argument
of the cost function)
\begin{align}
f_{0}(P_{i^{\star}},{\bf 0}_{M-1}) & =\frac{|h_{i^{\star}}|^{2}%
}{1+1/|g_{i^{\star}}|^{2}P_{i^{\star}}} \leq|h_{i^{\star}}|^{2} \label{fselect}
\end{align}
where the inequality holds with equality for $|g_{i^{\star}}|^{2}%
\rightarrow\infty$. If the on-off algorithm activates an arbitrary set of $m>1$
relays, the corresponding objective is upper bounded by
\begin{align}
f_{0}(P_{1},\dots,P_{m},{\bf 0}_{M-m}) & =\frac{\sum_{i=1}^{m}|h_{i}%
|^{2}|g_{i}|^{2}P_{i}}{1+\sum_{j=1}^{m}|g_{j}|^{2}P_{j}} <\frac{\sum_{i=1}^{m}|h_{i}|^{2}|g_{i}|^{2}P_{i}}{\sum_{j=1}^{m}|g_{j}%
|^{2}P_{j}} \overset{\mathrm{(a)}}{=}\sum_{i=1}^{m}\theta_{i}|h_{i}|^{2}\label{mrelay}
\end{align}
where in (a) we define $\theta_{i}=\frac{|g_{i}|^{2}P_{i}}{\sum_{j=1}%
^{m}|g_{j}|^{2}P_{j}}$ with $0<\theta_{i}<1$ and $\sum_{i}\theta_{i}=1$. We
see that the last expression (\ref{mrelay}) is strictly smaller than
(\ref{fselect}) for any $m$ and regardless of the chosen set of relays. This implies
that as the relays get close to the receiver, the on-off algorithm converges to
single relay selection.
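This convergence can also be observed numerically. The sketch below (our own illustration, with random channel draws and power caps of our choosing) brute-forces the best set of active relays for growing $|g_{i}|^{2}$ and shows that it shrinks to a single relay:
\begin{verbatim}
# illustrative check (ours): the optimal active set as |g_i|^2 grows
import itertools
import numpy as np

rng = np.random.default_rng(1)
M = 4
h2 = rng.exponential(1.0, M)              # |h_i|^2 (fixed)
P = rng.uniform(0.5, 1.0, M)              # per-relay power caps (assumed)
g2_base = rng.exponential(1.0, M)

def f0(active, g2):
    num = sum(h2[i] * g2[i] * P[i] for i in active)
    return num / (1.0 + sum(g2[i] * P[i] for i in active))

for scale in [1, 100, 10000]:
    g2 = scale * g2_base                  # relays move toward the receiver
    sets = [s for r in range(1, M + 1)
            for s in itertools.combinations(range(M), r)]
    best = max(sets, key=lambda s: f0(s, g2))
    print(scale, best)                    # best set shrinks to one relay
\end{verbatim}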
The waterfilling algorithm under partial and statistical CSIT lets only one
relay transmit with the maximum power as we see in the following. Let us first
consider the permutation $\pi$ given in (\ref{Permutation}) sorting relays according to
$P_{\pi(1)}\gamma_{g,\pi(1)}<P_{\pi(2)}\gamma_{g,\pi(2)}<\dots<P_{\pi(M)}%
\gamma_{g,\pi(M)}$ with strict inequalities. As $\gamma_{g,i}\rightarrow\infty$ for all $i$, the
possible water level in (\ref{MuCandidate}) is roughly given by $\mu
_{j}\approx\frac{\sum_{i=1}^{j}P_{\pi(i)}\gamma_{g,\pi(i)}}{j}$ and the levels
tend to be sorted as
\begin{equation}
\mu_{\min}\ll\mu_{1}<\mu_{2}<\dots<\mu_{M}=\mu_{\max} \label{Order}%
\end{equation}
Notice that the inequality $\mu_{j}<\mu_{j+1}$ holds if $\sum_{i=1}^{j}%
(P_{\pi(j+1)}\gamma_{g,\pi(j+1)}-P_{\pi(i)}\gamma_{g,\pi(i)})>1$. We show that the
function $J$ is monotonically decreasing for $\mu_{1}\leq\mu\leq\mu_{M}$ and
the optimal water level is always given by $\mu_{1}$. We recall that the
derivative of $J$ with respect to $\tilde{\mu}$ in (\ref{DerivativeMu}) can be
expressed as a function of $\mu$ as
\[
\nabla J=1-\frac{M\mu}{1+|{\cal C}(\mu)|\mu+\sum_{i\in\overline{{\cal C}}(\mu)}%
\gamma_{gi}P_{i}}%
\]
where we assumed $|{\cal C}(\mu)|>0$. Under the specific order of the water levels
given in (\ref{Order}), we can further express the derivative $\nabla J_{j}$
for each interval $\mu_{j}<\mu\leq\mu_{j+1}$ for $j=1,\dots,M-1$ such that
\begin{align}
\nabla J_{j}
& =\frac{j(\mu_{j}-\mu)}{1+(M-j)\mu+\sum_{i=1}^{j}\gamma_{g,\pi(i)}P_{\pi
(i)}}\nonumber
\end{align}
Since we have $\nabla J_{j}<0$ for any interval $j$, it clearly appears that
the function is monotonically decreasing thus the waterfilling algorithm
allocates maximum power only to the relay $\pi(1)$ by letting $\mu^{\star}=\mu_1$. In order to have an
insight on the error rate performance achieved by the waterfilling letting
$p_{i}=\frac{\mu_{1}}{\gamma_{gi}}$ for all $i$, we evaluate the approximated
receive SNR value $f_{0}$.
\begin{align}
f_{0}\left( \left\{ \frac{\mu_{1}}{\gamma_{gi}}\right\} \right) &
=\frac{\mu_{1}\sum_{i=1}^{M}|h_{i}|^{2}\frac{|g_{i}|^{2}}{\gamma_{gi}}}%
{1+\mu_{1}\sum_{j=1}^{M}\frac{|g_{j}|^{2}}{\gamma_{gj}}}
\overset{\mathrm{(a)}}{\leq}\frac{\sum_{i=1}^{M}|h_{i}|^{2}|
g_{i}|^{2}/\gamma_{gi}}{\sum_{j=1}^{M}|g_{j}|^{2}/\gamma_{gj}} \overset{\mathrm{(b)}}{=}\sum_{i=1}^{M}\theta_{i}|h_{i}|^{2}
\end{align}
where (a) holds with equality as $\mu_{1}\rightarrow\infty$
($P_{i}\gamma_{gi}\rightarrow\infty$ for all $i$), and in (b) we define
$\theta_{i}=\frac{|g_{i}|^2/\gamma_{gi}}{\sum_{j=1}^M|g_{j}|^{2}/\gamma_{gj}}$ with $0<\theta_{i}<1$ and $\sum_{i}\theta_{i}=1$.
We see that the final expression is dominated by (\ref{fselect})
achieved by single relay selection under perfect CSIT. The following remarks
are in order:
\begin{enumerate}
\item The optimal transmit scheme in this regime is single relay selection
that chooses roughly the relay with the largest $|h_{i}|^{2}$. Activating more
than one relay becomes highly suboptimal due to large amplified noise.
\item The power allocation under partial and statistical CSIT is the
same and equalizes $p_{i}\gamma_{gi}$.
\item As a final remark, the same behavior can be observed in the following
cases.
\begin{itemize}
\item the relay power increases $p_{r}\rightarrow\infty$.
\item the variance of the transmitter-relay channel decreases $\hbox{\boldmath$\gamma$}_{h}\rightarrow {\bf 0}$.
\end{itemize}
which yield $P_{i}|g_{i}|^{2}\rightarrow\infty$ under perfect CSIT and
$P_{i}\gamma_{g,i}\rightarrow\infty$ under partial and statistical CSIT.
\end{enumerate}
\section{Numerical results}
\label{sect:Result} In this section, we provide some numerical results to
illustrate the behavior of the proposed power allocation algorithms. Assuming
a homogeneous network, we let $p_{s}=p_{r}$. We consider BPSK modulation and
randomly generate an LD code with $M=T$ drawn from an isotropic distribution.
First, we compare the proposed on-off algorithm with other schemes in a system
with $M=2$ relays and equal variances $\gamma_{hi}=\gamma_{gi}=1$ for $i=1,2$.
Fig. \ref{fig:OnoffvsBeamformingM2} shows the block error probability versus
per-relay SNR $p_{r}/N_{0}$ with the on-off algorithm, network beamforming of
\cite{jing2007nbu}, and maximum power allocation that lets both relays
transmit with their peak powers. For a reference we also plot the performance
of our waterfilling algorithm under statistical CSIT. We observe that network
beamforming outperforms the on-off gradient algorithm by roughly 3 dB by
exploiting full channel knowledge and that both schemes achieve the same
diversity gain. On the contrary, maximum power allocation has a substantial
performance loss and fails to achieve full diversity gain. This clearly shows
that an appropriate power allocation is essential for distributed LD code to
provide diversity gain.
Next, we examine how the network topology impacts the proposed power
allocation algorithms and the resulting BER performance. To model a simple
network topology, we consider a unit transmitter-receiver distance and let the
transmitter-relay distance vary in the range $0< r <1$. The resulting
variances are $\gamma_{hi}=1/r^{2}$ and $\gamma_{gi}=1/(1-r)^{2}$ for all $i$.
For the sake of fair comparison between systems with different $M$, we assume
that the whole network power ${\cal P}$ is equally shared between the transmitter
and $M$ relays so that $p_{r}={\cal P}/(M+1)$. Fig. \ref{fig:BERvsDistance} shows
the BER performance of the proposed power allocation algorithms with $M=2,4,6$
and ${\cal P}/N_{0}=15$ dB along with the performance of the direct transmission
with a fixed power ${\cal P}$. Fig. \ref{fig:NumRelayvsR} shows the averaged
allocated power ratio, i.e. $\sum_{i=1}^{M}\mbox{\bb E}[p_i(t)/P_i(t)]$, or
equivalently the effective number of relays. The following remarks are in
order: 1) As the relays get closer to the transmitter $r\rightarrow0$, the
transmitter activates all relays with their maximum power. The waterfilling
solution under partial CSIT converges to the on-off gradient algorithm in the
limit of $r\rightarrow0$, which implies that the knowledge of ${\bf g}$ has a
negligible impact on the performance. The result agrees well with the analysis
provided in subsection \ref{subsect:h}.
2) As the relays get closer to the receiver $r\rightarrow1$, the optimal
strategy activates only one relay to limit the amplified noise. As seen in
Fig. \ref{fig:NumRelayvsR} the on-off gradient algorithm indeed reduces to
relay selection. On the contrary, the waterfilling solution equalizes
$p_{1}\gamma_{g1}=\dots=p_{M}\gamma_{gM}$ both under partial and statistical
CSIT, and moreover it converges to the same error performance independently of
the number of relays. Under the given setting where $P_{1}\gamma_{g1}%
=\dots=P_{M}\gamma_{gM}$, the waterfilling solution under statistical CSIT
lets all relays transmit with maximum power. The result is in a good agreement
with the analysis of subsection \ref{subsect:g}.
Fig. \ref{fig:BER2} shows the BER performance versus ${\cal P}/N_{0}$ for
$M=2,4,8$. Here, we randomly choose the relay-receiver distances and let
$\hbox{\boldmath$\gamma$}_{g}=[0.85, 3.17, 1.50, 1.89, 2.06, 2.36, 3.19, 3.99]$. The
transmitter-relay distance $r=0.5$ is fixed ($\gamma_{hi}=4$ for any $i$).
Compared to the direct transmission, DSTC with our proposed power allocation
algorithms yields significant diversity gain at moderate to high power regime.
Moreover, additional CSIT yields a considerable power gain.
\section{Conclusions}
\label{sect:Conclusions} We considered a two-hop wireless network where $M$
relays aid one transmitter-receiver pair to communicate via DSTC together with the AF protocol. In order to study the
impact of CSIT on the design and the performance of DSTC, we optimized
the amplifier power allocation under individual power constraints so that the
PEP conditioned to the available CSIT is minimized. Under
perfect CSIT we proposed the on-off gradient algorithm that efficiently finds a
subset of relays to switch on. It turns out that this algorithm can be
implemented at the receiver if the receiver can send a one-bit feedback to
each relay indicating whether to switch on or not. Under partial and
statistical CSIT we derived a simple waterfilling algorithm that yields a
non-trivial solution between maximum power allocation and a generalized STC
that equalizes the averaged amplified powers for all relays. Closed-form solutions were derived for $M=2$ and
in certain asymptotic regimes. Namely, when the relays are physically close to
the transmitter, the on-off algorithm and the waterfilling algorithm coincide and
both let all relays transmit with maximum amplifier powers. When the relays are
close to the receiver, the on-off algorithm converges to relay selection in order
to minimize the amplified noise seen by the receiver while the waterfilling
equalizes the averaged amplified noise and becomes highly suboptimal. The proposed
amplifier power allocation algorithms were derived for a particular type of
linear dispersion STC but can be extended to more general LD code as well as
other classes of STC as long as the amplified noise remains white.
\appendices
\vspace{-0.8em}
\section{Saddle point approximation}
\label{sect:AppendixSaddle}The objective of this appendix is to justify the
approximations (a) in (\ref{Objective2}) and
(\ref{Objective3}) and to show that the approximation is valid as the number
of relays increases without bound. Let us denote
\[
a=\mathbf{g}^{H}\mathbf{H}^{H}\mathbf{PHg\quad}b=1+\mathbf{g}^{H}\mathbf{Pg}%
\]
We need to evaluate
\[
\mathbb{E}\left[ \exp\left( -\eta\frac{a}{b}\right) \right] =\mathbb{E}%
\left[ \exp\left( -\eta\frac{\mathbf{g}^{H}\mathbf{H}^{H}\mathbf{PHg}%
}{1+\mathbf{g}^{H}\mathbf{Pg}}\right) \right]
\]
where the expectation is with respect to the statistics of
$\mathbf{g}$. First of all, observe that we can write
\[
\exp\left( -\eta\frac{\mathbf{g}^{H}\mathbf{H}^{H}\mathbf{PHg}}%
{1+\mathbf{g}^{H}\mathbf{Pg}}\right) =\lim_{n\rightarrow\infty}X_{n}%
\]
where
\[
X_{n}=\sum_{k=0}^{n}\frac{\left( -\eta\right) ^{k}}{k!}\left( \frac
{a}{b}\right) ^{k}.
\]
On the other hand,
\begin{multline*}
\left\vert X_{n}\right\vert <\sum_{k=0}^{n}\frac{\eta^{k}}{k!}\left(
\frac{a}{b}\right) ^{k}<\exp\left[ \eta\frac{\mathbf{g}^{H}\mathbf{H}%
^{H}\mathbf{PHg}}{1+\mathbf{g}^{H}\mathbf{Pg}}\right] <\exp\left[ \eta\frac{\mathbf{g}^{H}\mathbf{H}^{H}\mathbf{PHg}}%
{\mathbf{g}^{H}\mathbf{Pg}}\right] \leq\exp\left( \eta\max_{1\leq j\leq
M} \left\vert h_{j}\right\vert ^{2} \right) <\infty
\end{multline*}
Hence, the bounded convergence theorem ensures that we can write
\begin{equation}
\mathbb{E}\left[ \exp\left( -\eta\frac{a}{b}\right) \right] =\mathbb{E}%
\left[ \lim_{n\rightarrow\infty}X_{n}\right] =\lim_{n\rightarrow\infty
}\mathbb{E}\left[ X_{n}\right] =\sum_{k\geq0}\frac{\left( -\eta\right)
^{k}}{k!}\mathbb{E}\left[ \left( \frac{a}{b}\right) ^{k}\right]
\label{bounded_convergence}%
\end{equation}
and we can therefore concentrate on the study of the moments
\[
r_{k}=\mathbb{E}\left[ \left( \frac{a}{b}\right) ^{k}\right] .
\]
In particular, we can follow the procedure introduced in \cite{lieberman2004lam},
which is based on a Laplace approximation of the integral. More specifically, in \cite{lieberman2004lam} it was shown that a Laplace approximation of $r_{k}$ about the origin leads to the identity%
\begin{equation}
r_{k}=\mathbb{E}\left[ \left( \frac{a}{\mathbb{E}\left[ b\right] }\right)
^{k}\right] +R_{k}\label{rk_asym}%
\end{equation}
where both expectations are with respect to ${\bf g}$ and we have
$R_{k}\rightarrow0$ as $M\rightarrow\infty$. Although the procedure is
somewhat tedious, one can extend this to show that $\sup_{k}R_{k}\rightarrow
0$. As a direct consequence of this result, we have that as $M\rightarrow
\infty$, it holds that\footnote{In the last equality of the following
display, one should justify the fact that expectation and sum can be
interchanged. This is not difficult to see, but the proof is omitted due to
space constraints.}%
\begin{multline*}
\mathbb{E}\left[ \exp\left( -\eta\frac{a}{b}\right) \right] =\sum_{k\geq
0}\frac{\left( -\eta\right) ^{k}}{k!}\mathbb{E}\left[ \left( \frac{a}%
{b}\right) ^{k}\right] \\
=\sum_{k\geq0}\frac{\left( -\eta\right) ^{k}}{k!}\left( \mathbb{E}\left[
\left( \frac{a}{\mathbb{E}\left[ b\right] }\right) ^{k}\right]
+R_{k}\right) =\mathbb{E}\left[ \exp\left( -\eta\frac{a}{\mathbb{E}\left[
b\right] }\right) \right] +\sum_{k\geq0}\frac{\left( -\eta\right) ^{k}%
}{k!}R_{k}%
\end{multline*}
where now
\[
\left\vert \sum_{k\geq0}\frac{\left( -\eta\right) ^{k}}{k!}R_{k}\right\vert
\leq\sup_{k}R_{k}\exp\left( \eta\right) \rightarrow0
\]
and this justifies the approximation used in the paper.
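As a rough numerical confirmation (our own, with arbitrary parameter choices), the following Monte Carlo sketch compares $\mathbb{E}[\exp(-\eta\,a/b)]$ with $\mathbb{E}[\exp(-\eta\,a/\mathbb{E}[b])]$; the powers are scaled as $1/M$ so that the sums stay $O(1)$, and the residual shrinks as $M$ grows:
\begin{verbatim}
# Monte Carlo check (ours) of the Laplace/saddle point approximation
import numpy as np

rng = np.random.default_rng(2)
eta, n = 0.5, 100_000
for M in [2, 8, 32]:
    h2 = rng.exponential(1.0, M)        # |h_i|^2 (fixed per M)
    P = np.full(M, 1.0 / M)             # assumed power scaling
    g2 = rng.exponential(1.0, (n, M))   # |g_i|^2 samples
    a = (g2 * h2 * P).sum(axis=1)       # g^H H^H P H g
    b = 1.0 + (g2 * P).sum(axis=1)      # 1 + g^H P g
    exact = np.exp(-eta * a / b).mean()
    approx = np.exp(-eta * a / b.mean()).mean()
    print(M, exact, approx, abs(exact - approx))
\end{verbatim}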
\vspace{-0.8em}
\section{Proof of Proposition 1 }\label{proof1}
In order to prove Proposition 1, we consider the $i$-th entry
of the gradient of the objective function, given by
\[
\frac{\partial f_{0}}{\partial p_{i}}=\frac{\alpha_{i}+\alpha_{i}\sum_{j\neq
i}\beta_{j}p_{j}-\beta_{i}\sum_{j\neq i}\alpha_{j}p_{j}}{(1+\beta_{i}%
p_{i}+\sum_{j\neq i}\beta_{j}p_{j})^{2}}.
\]
When we treat the variables $\{p_{j}\}_{j\neq i}$ as fixed, the $i$-th gradient
can be expressed as a function of $p_{i}$ in the form $\frac{\xi_{i}}%
{(\beta_{i}p_{i}+\zeta_{i})^{2}}$, where $\zeta_{i}>0$ and $\xi_{i}$ are
constants. Depending on the sign of $\xi_{i}$, the gradient is always negative
or positive, i.e. the function is monotonically decreasing or increasing in
each $p_{i}$ \footnote{For $\xi_{i}=0$, the function is constant in $p_{i}$,
then we let $p_{i}=0$.}. Since the objective function cannot be maximized at
$0<p_{i}<P_{i}$, the solution of (\ref{QuasiLP}) is achieved only at one of
the vertices. The second part follows directly from the monotonicity of the
function in each component $p_{i}$. Namely, the solution is achieved by the
vertex at which the objective function cannot further increase beyond the thresholds.
\vspace{-0.8em}
\section{Proof of Corollary 1 : a closed-form solution of $M=2$ under perfect CSIT} \label{proof2}
From Proposition 1, we see immediately that the power allocation of two relays
depend on the sign of
\[
\xi_{1}(p_{2})=\alpha_{1}+\Delta p_{2},\;\;\xi_{2}(p_{1})=\alpha_{2}-\Delta
p_{1}%
\]
where we let $\Delta=\alpha_{1}\beta_{2}-\beta_{1}\alpha_{2}=|g_{1}g_{2}%
|^{2}(|h_{1}|^{2}-|h_{2}|^{2})$. Moreover, it is sufficient to check the sign
of $\xi_{1}$ and $\xi_{2}$ at each vertex to determine the optimum power
allocation. Table \ref{tab:gradient} summarizes the optimal solution and the
conditions; the optimal solution is given by
$(P_{1},0)$ if and only if $\xi_{2}(P_{1})=\alpha_{2}-\Delta P_{1}<0$
holds
while it is given by a vertex $(0,P_{2})$ if and only if we
have $\xi_{1}(P_{2})=\alpha_{1}+\Delta P_{2}<0$.
Finally, both relays are activated if $\frac{-\alpha_{1}}{P_{2}}<\Delta
<\frac{\alpha_{2}}{P_{1}}$.
These inequalities yield (\ref{2relayOnOff}).
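The rule is straightforward to cross-check against an exhaustive search over the three candidate vertices of Table \ref{tab:gradient}; the sketch below (with illustrative random channels of our own choosing) does exactly that:
\begin{verbatim}
# cross-check (ours) of the closed-form two-relay on-off rule
import numpy as np

rng = np.random.default_rng(3)

def f0(p, h2, g2):
    return (p * h2 * g2).sum() / (1.0 + (p * g2).sum())

for _ in range(5):
    h2, g2 = rng.exponential(1.0, 2), rng.exponential(1.0, 2)
    P = rng.uniform(0.5, 2.0, 2)
    alpha = h2 * g2                          # alpha_i = |h_i|^2 |g_i|^2
    delta = g2[0] * g2[1] * (h2[0] - h2[1])  # Delta
    if delta > alpha[1] / P[0]:
        rule = (P[0], 0.0)                   # only relay 1 on
    elif delta < -alpha[0] / P[1]:
        rule = (0.0, P[1])                   # only relay 2 on
    else:
        rule = (P[0], P[1])                  # both relays on
    verts = [(P[0], 0.0), (0.0, P[1]), (P[0], P[1])]
    brute = max(verts, key=lambda v: f0(np.array(v), h2, g2))
    print(rule == brute)                     # expect: True
\end{verbatim}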
\vspace{-0.8em}
\section{Proof of Proposition 2 : convergence of on-off gradient algorithm}\label{proof3}
We have to first prove that the objective is non-decreasing, i.e.
$f({\bf p}^{(n+1)})\geq f({\bf p}^{(n)})$ for any iteration $n$. Identifying that the
update in (\ref{Update}) is nothing but a discrete steepest ascent algorithm
with a fixed step size, the objective always increases. It remains to prove
that the converged point is the global maximum. In other words, the stopping
criterion above is sufficient to guarantee a global convergence. It is not
difficult to see that there is a \textit{unique} ${\bf p}^{\star}$ satisfying the
condition (\ref{SufficientCondition}) such that the signs of the gradients and
the powers match. Otherwise, we can always increase the objective by switching
on (off) the power with a positive (negative) gradient.
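For concreteness, a compact sketch of the on-off iteration is given below. It uses the objective $f_{0}({\bf p})=\sum_{i}\alpha_{i}p_{i}/(1+\sum_{i}\beta_{i}p_{i})$ and its gradient from Appendix \ref{proof1}; the starting vertex and the iteration cap are our own choices, and the update is a plain sign-matching rule in the spirit of (\ref{Update}), not a verbatim transcription of it:
\begin{verbatim}
# sketch (ours) of the on-off iteration for
# f0 = sum(a_i p_i) / (1 + sum(b_i p_i))
import numpy as np

def on_off(alpha, beta, P, iters=100):
    p = P.copy()                           # assumed start: maximum power
    for _ in range(iters):
        denom = 1.0 + (beta * p).sum()
        # i-th partial derivative of f0 at the current point
        grad = (alpha * denom - beta * (alpha * p).sum()) / denom**2
        new_p = np.where(grad > 0, P, 0.0)  # switch on/off by gradient sign
        if np.array_equal(new_p, p):        # signs match the powers: stop
            return p
        p = new_p
    return p

rng = np.random.default_rng(4)
M = 6
h2, g2 = rng.exponential(1.0, M), rng.exponential(1.0, M)
print(on_off(h2 * g2, g2, rng.uniform(0.5, 1.0, M)))
\end{verbatim}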
\vspace{-0.8em}
\section{Proof of Proposition 3 : waterfilling solution under partial CSIT}\label{proof4}
Since $J(\widetilde{{\bf p}})$ is a strictly concave function of
$\widetilde{{\bf p}}$, we solve the KKT conditions which are necessary and
sufficient for optimality.
By letting $\lambda_{i}\geq0$ be the Lagrange multiplier associated with the individual power constraint $p_i\leq P_i$,
we obtain the KKT conditions given by
\begin{align}
\label{KKT}1- \frac{M\gamma_{gi}\exp(\tilde{p}_{i})}{1+ \sum_{j}\gamma
_{gj}\exp(\tilde{p}_{j})} =\lambda_{i},\;\; i =1,\dots,M
\end{align}
Summing the above equation over all $i$ and defining $I=\sum
_{j}\gamma_{gj}\exp(\tilde{p}_{j})$ and $\mu=\frac{1}{\sum_{i} \lambda_{i}}$
we obtain
\begin{equation}
\label{C}I= M\mu-1
\end{equation}
It follows that $\mu$ is lower bounded by $\mu^{\min}=\frac{1}{M}$ and upper
bounded by $\mu^{\max}=\frac{1+\sum_{j}\gamma_{gj}P_{j}}{M}$. Plugging
(\ref{C}) into (\ref{KKT})
and using the inequality $\lambda_{i}\geq0$, we readily obtain the optimal power given in (\ref{WFSolution}).
It remains to determine the optimal \textit{water level} $\tilde{\mu}^{\star}$ that maximizes the objective function $J$ (note that the individual power constraint is always satisfied for any $\mu$).
To this end, we define ${\cal C}(\tilde{\mu})$ and $\overline{{\cal C}}(\tilde
{\mu})$ as
\begin{align*}
{\cal C}(\tilde{\mu}) & =\{i|\tilde{p}_{i}=\tilde{\mu}-\tilde{\gamma}_{gi}\},\;\;\;
\overline{{\cal C}}(\tilde{\mu}) =\{i|\widetilde{p_{i}}=\widetilde{P_{i}}\}
\end{align*}
Plugging (\ref{WFSolution}) into (\ref{LogObjective}), the function can be
expressed in terms of $\tilde{\mu}$
\begin{equation}\label{Jmu}
J(\tilde{\mu})=|{\cal C}(\tilde{\mu})|\tilde{\mu}+\sum_{i\in\overline{{\cal C}}%
(\tilde{\mu})}\tilde{P}_{i}-M\ln\left( 1+|{\cal C}(\tilde{\mu})|\exp(\tilde{\mu
})+\sum_{i\in\overline{{\cal C}}(\tilde{\mu})}\gamma_{gi}\exp(\tilde{P}%
_{i})\right) +\sum_{i=1}^{M}\ln a_{i}%
\end{equation}
where $|{\cal C}|,|\overline{{\cal C}}|$ denote the cardinalities of the sets
${\cal C},\overline{{\cal C}}$ respectively. Since the function $J(\tilde{\mu})$ is
strictly concave in $\tilde{\mu}$,
the optimal $\tilde{\mu}$ must satisfy $\frac{\partial J}{\partial\tilde{\mu}%
}=0$ where
\begin{equation}\label{DerivativeMu}
\frac{\partial J}{\partial\tilde{\mu}}=|{\cal C}(\tilde{\mu})|\left( 1-\frac
{M\exp(\tilde{\mu})}{1+|{\cal C}(\tilde{\mu})|\exp(\tilde{\mu})+\sum_{i\in
\overline{{\cal C}}(\tilde{\mu})}\gamma_{gi}\exp(\tilde{P}_{i})}\right)
\end{equation}
which yields
\begin{equation}\label{MuSolution}
\exp(\tilde{\mu})=\frac{1+\sum_{i\in\overline{{\cal C}}(\tilde{\mu})}\exp(\tilde
{P}_{i}+\tilde{\gamma}_{gi})}{|\overline{{\cal C}}(\tilde{\mu})|}%
\end{equation}
for $|\overline{{\cal C}}(\tilde{\mu})|>0$. Notice that for $|\overline{{\cal C}}%
(\tilde{\mu})|=0$, it can be shown that the objective function
is a monotonically increasing concave function and maximized at
$\tilde{\mu}^{\max}$. By
sorting $\{P_{i}\gamma_{gi}\}$ in an
increasing order according to the permutation (\ref{Permutation}), we remark that the RHS of (\ref{MuSolution}) has
at most $M$ possible values \footnote{Some of the $M$ values might be
unfeasible if they are not in the domain $\tilde{\mu}\in\lbrack\tilde{\mu
}_{\min},\tilde{\mu}_{\max}]$.} in (\ref{MuCandidate})
and we choose the optimal $\tilde{\mu}^{\star}$ according to (\ref{OptimalLevel}).
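The selection of the water level is summarized in the following sketch. Here the objective $J$ is reconstructed by integrating the stationarity condition (\ref{KKT}), written up to additive constants that do not affect the maximizing level; infeasible candidates outside $[\mu_{\min},\mu_{\max}]$ are simply discarded, while $\mu_{\max}$ (covering the case $|\overline{{\cal C}}|=0$) is always kept:
\begin{verbatim}
# sketch (ours) of the waterfilling level selection under partial CSIT
import numpy as np

def waterfill(P, gamma):
    M = len(P)
    order = np.argsort(P * gamma)           # the permutation of (Permutation)
    s = np.cumsum((P * gamma)[order])
    cand = (1.0 + s) / np.arange(1, M + 1)  # candidate water levels mu_j
    mu_min, mu_max = 1.0 / M, (1.0 + s[-1]) / M
    cand = [c for c in cand if mu_min <= c <= mu_max] + [mu_max]

    def J(mu):                              # objective, up to constants
        p = np.minimum(mu / gamma, P)
        return np.log(p).sum() - M * np.log(1.0 + (gamma * p).sum())

    mu = max(cand, key=J)
    return np.minimum(mu / gamma, P)        # p_i gamma_i equalized at mu

rng = np.random.default_rng(5)
P, gamma = rng.uniform(0.5, 1.5, 4), rng.exponential(1.0, 4)
print(waterfill(P, gamma))
\end{verbatim}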
\vspace{-0.8em}
\section{Proof of Corollary 2 : a closed-form solution of $M=2$ under partial CSIT }\label{proof5}
Without loss of generality, we assume $P_{1}\gamma_{g,1}%
<P_{2}\gamma_{g,2}$. We recall that the possible values of the
water level (\ref{MuCandidate}) for $M=2$ are
\[
\mu_{1}=1+P_{1}\gamma_{g,1}\;\;\mu_{2}=\mu^{\mathrm{max}}=\frac{1+P_{1}%
\gamma_{g,1}+P_{2}\gamma_{g,2}}{2}.
\]
First we consider the case $\mu_{1}<\mu_{2}$. This inequality reduces to
$P_{2}\gamma_{g,2}>P_{1}\gamma_{g,1}+1$, and further yields
\[
P_{1}\gamma_{g,1}<\mu_{1}<\mu_{2}<P_{2}\gamma_{g,2}.%
\]
Obviously we obtain $p_{1}=P_{1}$ no matter which of the two values is the
optimum water level. It is not difficult to see that the optimal water level is given by $\mu_{1}$
by comparing the two objective values $J(\tilde{\mu_{1}})$ and $J(\tilde{\mu_{2}})$.
Hence the amplifier power of relay 2 is
$p_{2}=\frac{\mu_{1}}{\gamma_{g2}}=\frac{1+P_{1}\gamma_{g1}}{\gamma_{g2}}$.
Next we consider the case $\mu_{2}<\mu_{1}$. This inequality is equivalent to
$P_{2}\gamma_{g,2}<P_{1}\gamma_{g,1}+1$, and yields $P_{2}\gamma_{g,2}<\mu
_{2}$. In this case, the water level (either $\mu_{1}$ or $\mu_{2}$) is larger
than $P_{1}\gamma_{g,1}$ and $P_{2}\gamma_{g,2}$. Hence, the algorithm lets
both relays transmit at the maximum power, $p_{1}=P_{1}$ and $p_{2}=P_{2}$. In
summary, we have the following power allocation possibilities:
\begin{itemize}
\item $p_{1} =P_{1}$ and $p_{2}=\frac{1+P_{1} \gamma_{g1}}{\gamma_{g2}}<P_{2}$
if $P_{2} \gamma_{g2}>P_{1} \gamma_{g,1}+1$
\item $p_{1} =P_{1}$ and $p_{2}=P_{2}$ if $P_{2} \gamma_{g2}<P_{1}
\gamma_{g,1}+1$
\end{itemize}
By symmetry, when $P_{1} \gamma_{g1}> P_{2} \gamma_{g2}$ we obtain
\begin{itemize}
\item $p_{1}=P_{1}$ and $p_{2}=P_{2}$ if $P_{2}\gamma_{g2}<P_{1}\gamma
_{g,1}<P_{2}\gamma_{g2}+1$
\item $p_{1} =\frac{1+P_{2}\gamma_{g2}}{\gamma_{g1}}<P_{1}$ and $p_{2}=P_{2}$
if $P_{1}\gamma_{g1}>P_{2} \gamma_{g2}+1$
\end{itemize}
These conditions and the corresponding power allocation are summarized in
(\ref{3Subregion}) and depicted in Fig. \ref{fig:PRegion}. Since the parameter
$P_{i}\gamma_{gi}$ does not provide an easy interpretation (note that $P_{i}$
is a function of ${\bf h}$), we express the power allocation region in terms of
$\hbox{\boldmath$\gamma$}_{g}$, ${\bf h}$ explicitly. The subregion where only relay 1, 2 is allocated
its maximum power is given respectively by
\[
|h_{2}|^{2}<\frac{p_{r}\gamma_{g2}\left( |h_{1}|^{2}+\frac{N_{0}}{p_{s}%
}\right) }{p_{s}|h_{1}|^{2}+p_{r}\gamma_{g1}+N_{0}}-\frac{N_{0}}{p_{s}},\;\; \;
|h_{1}|^{2}<\frac{p_{r}\gamma_{g1}\left( |h_{2}|^{2}+\frac{N_{0}}{p_{s}%
}\right) }{p_{s}|h_{2}|^{2}+p_{r}\gamma_{g2}+N_{0}}-\frac{N_{0}}{p_{s}}%
\]
These conditions yield the power allocation region in terms of ${\bf h}$ in Fig.
\ref{fig:HRegion}.
\begin{figure}[n]
\begin{center}
\epsfxsize=3in \epsffile{BlockDiagram2.eps}
\end{center}
\caption{A wireless relay network}%
\label{fig:model}%
\end{figure}
\begin{figure}[n]
\begin{center}
\epsfxsize=3.5in \epsffile{ConvergenceM.eps}
\end{center}
\caption{Convergence of on-off algorithm for different $M$}%
\label{fig:ConvergenceM}%
\end{figure}
\begin{figure}[n]
\begin{center}
\epsfxsize=2.4in \epsffile{KKT.eps}
\end{center}
\caption{Proposed waterfilling solution with $M=3$}%
\label{fig:KKT}%
\end{figure}
\begin{figure}[n]
\begin{center}
\epsfxsize=3in \epsffile{PowerRegion.eps}
\end{center}
\caption{Power allocation region as a function of $P_{i}\gamma_{gi}$}%
\label{fig:PRegion}%
\end{figure}
\begin{table}[n]
\centering
\begin{tabular}
[c]{c|cc|c}%
vertex & $\xi_{1}$ & $\xi_{2}$ & $\Delta$\\\hline
$(P_{1},0)$ & + & - & $(\alpha_{2}/P_{1},\infty]$\\\hline
$(P_{1},P_{2})$ & + & + & $(-\alpha_{1}/P_{2},\alpha_{2}/P_{1})$\\\hline
$(0,P_{2})$ & - & + & $[-\infty,-\alpha_{1}/P_{2})$\\\hline
\end{tabular}
\caption{Optimal solutions and corresponding conditions}%
\label{tab:gradient}%
\end{table}
\begin{figure}[n]
\begin{center}
\epsfxsize=4in \epsffile{TwoRelayRegion.eps}
\end{center}
\caption{Two-relay ON/OFF region under perfect CSIT}%
\label{fig:Region}%
\end{figure}
\begin{figure}[n]
\begin{center}
\epsfxsize=4in \epsffile{HPowerRegionPartial.eps}
\end{center}
\caption{Two-relay power region under partial CSIT}%
\label{fig:HRegion}%
\end{figure}
\begin{figure}[n]
\begin{center}
\epsfxsize=4in \epsffile{OnoffvsBeamformingM2.eps}
\end{center}
\caption{Block error rate vs SNR }%
\label{fig:OnoffvsBeamformingM2}%
\end{figure}
\begin{figure}[n]
\begin{center}
\epsfxsize=4in \epsffile{BERvsR.eps}
\end{center}
\caption{BER vs. transmitter-relay distance }%
\label{fig:BERvsDistance}%
\end{figure}
\begin{figure}[n]
\begin{center}
\epsfxsize=4in \epsffile{NumRelayvsR.eps}
\end{center}
\caption{Normalized allocated power vs. transmitter-relay distance }%
\label{fig:NumRelayvsR}%
\end{figure}
\begin{figure}[n]
\begin{center}
\epsfxsize=4in \epsffile{ImpactGvsSNRr0.5.eps}
\end{center}
\caption{BER performance vs ${\cal P}/N_{0}$ }%
\label{fig:BER2}%
\end{figure}
\section*{Acknowledgment}
This work was partially supported by the Generalitat de Catalunya under grant
SGR2005-00690 and by the European Commission under project IST-6FP-033533
(COOPCOM).
\bibliographystyle{IEEEtran}
\section{Introduction}
The SND detector \cite{SNDAchasov,SNDAbramov,SNDAulchenko} has been operating at the VEPP-2000 collider \cite{VEPPKhazin} since 2008. It produces hundreds of gigabytes of stored raw data per day~\cite{DAQupd14}. The data are complemented with dozens of megabytes of metadata, facility conditions (beam energy, luminosity, crate temperatures, etc.) and additional statistics (histograms etc.) that can be used in reconstruction, processing and system control.
An important part of the experiment software is the data quality monitoring (DQM) system. It is necessary for obtaining meaningful data for physics analysis and for controlling the detector state. Recently the DQM software was substantially redesigned. We present here the new system and the first experience of its usage.
Data quality metadata are generated at every stage of data collection and reprocessing. The new DQM system includes software tools that
\begin{itemize}
\item show data acquisition summary and histograms;
\item collect quality data from automated scripts and user input;
\item have a hierarchical quality model;
\item support several parameter sets for different stages of data processing;
\item provide UIs and APIs for retrieving quality information.
\end{itemize}
\section{Estimating Run Quality}
The minimal collection of events for analysis is referred to here and below as a ``\emph{run}''. This is also the minimal unit for data quality estimation. The parameters to monitor are defined by the detector subsystem experts, with optional scripts which assign data quality marks to runs according to the configuration. These marks can be ``bad'', ``user has to decide'', ``in doubt'', ``good'' or ``no data''. They can also be assigned manually by dedicated persons, who are usually the operators, the run coordinator and the detector subsystem experts.
\subsection{The Data Acquisition Stage}
At this stage an operator monitors the experiment data quality right after the data acquisition. A large set of histograms (e.g. drift chamber layer statistics, calorimeter energy distributions) becomes available minutes to hours later, after the data are processed by a high level trigger and recorded. Our DQM system then launches predefined scripts which assign quality marks where possible, and displays the histograms and automatic quality marks to an operator. The operator shall check them, assign the quality marks which were not set automatically, and, if necessary, correct the automatic marks. Not all important quality parameters are yet covered with automatic decision scripts. The DQM system requires that an operator fill the gaps. In order to help one do this without special knowledge, the interface displays reference histograms. Subsystem experts may also leave comments about their decision, for example ``the histogram has to have two peaks''. Having checked all the parameters, an operator can either proceed with other activities or report a problem to the run coordinator the same day it appeared.
During the data collection a dedicated person (the run coordinator) makes sure that the data collection goes smoothly. This person keeps an eye on the operators checking the quality data. The DQM system provides a day summary and a month view for this purpose, in addition to the individual run view.
The interactive DQM interface is implemented as a web application, so the run coordinator and the experts can remotely discuss the quality and make sure the detector works fine.
A list of good runs can also be exported for prompt calibration programs to use.
Checks performed at this stage:
\begin{itemize}
\item Check the run validity: enough time, enough events, good collider currents etc.
\item Check the detector subsystems: calorimeter, tracking system, aerogel counters, muon system, trigger electronics etc.
\end{itemize}
\subsection{The Reprocessing Stage}
The second stage of the data quality control is performed when the data have been reprocessed for analysis. At that time we have more information, including information that becomes available only after completion of the experiment. The reprocessing software applies the proper and final calibration (conditions) data and produces new meta-statistics (histograms, counters, averages) to check. These metadata are then analysed with the related scripts according to the configuration.
Data preparation requires a creative approach and immersion for several days or even weeks. The person who deals with this task has a more general view than that of individual runs. It may be necessary to make quality decisions for run ranges or whole run sets at once. Now \emph{the DQM system provides a single point of storing, discussing and retrieving quality information}. The data experts can easily access the first stage quality data or investigate data mysteries together with the detector subsystem experts. Having done that, we can export a list of good runs for processing.
Checks at this stage could include
\begin{itemize}
\item check the run validity
\begin{itemize}
\item enough run time, enough events,
\item good event number ratio for $e^{+}e^{-} \to e^{+}e^{-}$, $e^{+}e^{-} \to \gamma\gamma$;
\end{itemize}
\item check the subsystems, examine specific runs in detail.
\end{itemize}
\section{Interaction With Users}
An operator, a physicist or a subsystem expert can interact with the DQM system using three interfaces: the web interface, program getter access and DQM scripts, respectively. In principle, direct access to the DQM database is also possible, but this way is generally discouraged for any use other than system administration and development.
\subsection{Web Interface}
\begin{figure}[htbp]
\centering
\begin{minipage}{12pc}
\includegraphics[width=12pc]{dqm-expert-run.png}
\end{minipage}\quad\quad
\begin{minipage}{12pc}
\includegraphics[width=12pc,clip]{dqm-oper-run.png}
\end{minipage}
\caption{\label{PicDQMView}Reviewing a run quality -- the expert mode and the operator mode (translated in English for better reader experience).}
\end{figure}
This is the main way of manual interaction. It allows users to compare actual numbers and histograms with reference ones, to assign their quality marks and leave comments, to select runs by several quality criteria and to view or edit their quality data.
The interface provides different views for operators and experts.
The expert views are optimized for investigating quality data (figure~\ref{PicDQMView}, on the left). They contain forms for filtering runs by quality, a run list view and a particular run summary with parameters accessible by a mouse click.
The operator views are optimized for checking a limited set of parameters for each run (figure~\ref{PicDQMView}, on the right). They display actual and reference representations (histograms, averages etc.) of a pre-defined parameter set. An operator can monitor the run log and walk back and forth in the run quality view.
\subsection{Program Getter Access}
Data quality information is also available via program getter access from Python scripts. This language was chosen because an embedded Python interpreter is used for the configuration of the experiment data processing framework~\cite{SUMO}. The interaction (figure~\ref{FigPythonGetter}) is relatively simple. The retrieved data (overall or per-system quality marks) can be used to filter qualified runs in automated calibration software or in analysis.
\begin{figure}[htbp]
\begin{lstlisting}[language=python]
# getting quality data for a single run 41000
from RunQuality import DataQualityGetter
getter = DataQualityGetter()
quality = getter(41000)
# getting quality data for multiple runs
from RunQuality import DataQualityGetter
getter = DataQualityGetter()
quality, missed = getter.cache(range(41000, 41010))
\end{lstlisting}
\caption{\label{FigPythonGetter} Examples of using DQM python getter.}
\end{figure}
The program getter can either interact with the web interface (figure~\ref{FigPythonGetter}, single run example), or load cached integral quality information (figure~\ref{FigPythonGetter}, multiple runs example) from the database. The single mode can be used for triggering quality estimation when login credentials are provided. The multiple mode is faster; however, it may result in missing some information if some cache entries are expired or do not exist.
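A small usage sketch building on figure~\ref{FigPythonGetter} is given in figure~\ref{FigGetterFilter}: the cached quality data are used to select good runs, e.g. before feeding them to a calibration job. The dictionary layout of the returned data (the ``overall'' key and the mark spelling) is our assumption for illustration, not a documented API contract.
\begin{figure}[htbp]
\begin{lstlisting}[language=python]
# hypothetical example: filtering good runs with the cached quality data;
# the "overall" key and the mark spelling are assumed, not documented
from RunQuality import DataQualityGetter
getter = DataQualityGetter()
quality, missed = getter.cache(range(41000, 41100))
good_runs = [run for run, marks in quality.items()
             if marks.get("overall") == "good"]
\end{lstlisting}
\caption{\label{FigGetterFilter} A hypothetical example of selecting good runs from the getter cache.}
\end{figure}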
\subsection{DQM Scripts}
\begin{figure}[htbp]
\begin{lstlisting}[language=C++]
void script_example(
int run, // this run number
const char * hists // histograms ROOT file path or NULL
) {
if(hists == NULL) {
parameter("param1")
.quality(QBAD)
.comment("No histograms!")
.valueNull().refNull(); // unset values
} else {
// check the histograms somehow (e.g. by rolling dice)
parameter("param1")
.quality(gRandom->Integer(2) ? QGOOD : QBAD)
.valueHist("CL/h29")
.comment("Checked using lazy Monte-Carlo method.");
}
// apply the changes, set script execution status
flush_parameters(QGOOD);
}
\end{lstlisting}
\caption{\label{FigDQMScript} A simple DQM script.}
\end{figure}
Automated data quality estimation is performed by executing special scripts. A script (like the one in figure~\ref{FigDQMScript}) is a ROOT~\cite{ROOT} macro that accepts several parameters such as the run number, the histograms file path, etc. The script shall analyze the data and assign quality marks to the related subsystem for one run. It can also set parameter titles/comments, choose custom histograms/numbers to show or even hide them. These actions are performed using a simple C++ API.
The system provides the infrastructure. Usually detector subsystem experts create their scripts based on their understanding.
These scripts are executed by a server when an authorized user accesses a web page containing run quality data. Having executed the scripts, a user can review the results and correct some data if necessary at any time. The output produced by the scripts is cached, so they are executed only when the data are accessed for the first time. Cache invalidation is available for experts.
\section{Implementation Details}
\begin{figure}[htbp]
\centering
\includegraphics[width=30pc]{dqm-model.pdf}
\caption{\label{PicImplementation}The DQM data flow.}
\end{figure}
The described software is implemented as a component of the SND information system~\cite{MSIS2017}. The server part is integrated into the Node.js~\cite{NodeJS} application (in JavaScript). The system uses the SND databases running under the MySQL RDBMS and node-mysql for accessing them. Please refer to figure~\ref{PicImplementation} for more details.
The histograms mentioned before are served by another Node.js application that uses JSROOT~\cite{JSROOT} both at the server and client sides for reading histogram files and rendering histograms. The server can also use old CERNLIB HBOOK files, converting them with the h2root utility.
\section{Applying to Experiment Data}
Having put the new software into production, we implemented DQM scripts for several subsystems (calorimeter, muon system, trigger electronics). The new interface was used at the data acquisition stage in 2019.
At the same time, the data collected in the previous year (2018) were marked up during the reprocessing stage. More than five thousand runs (including cosmic ones) containing 3.1 billion stored events were checked. This check resulted in detecting 23\% of the runs as having bad quality (cosmic, short, test or erroneous runs), 62\% as good and 14\% as having tolerable quality. The results are shown in figure~\ref{PicRHO2018}. Please note that bad runs tend to be significantly shorter in terms of time, event count and integral luminosity; nevertheless, they occupy the same area in the figure.
\begin{figure}[htbp]
\centering
\includegraphics[width=32pc,clip]{DQM-RHO2018.png}
\caption{\label{PicRHO2018}Integral quality data for RHO2018 experiment. Each cell represents a run (green, yellow and red backgrounds for good, tolerable and bad quality respectively).}
\end{figure}
\section{Conclusion}
The new version of the SND experiment DQM system provides a framework for automatic and manual data quality estimation, with interactive (web) and program user interfaces. SND data quality is monitored at the data acquisition and processing stages, with different goals and assumptions at each stage.
The new DQM system was put into production in 2019. The data acquisition configuration was used during data taking of 2019 and 2020. The reprocessing configuration was used for data of 2018 and 2019.
\acknowledgments
This work is partly supported by the RFBR grant 18-02-00382.
\bibliographystyle{JHEP}
\section{Introduction}
\paragraph{}
The mappings $L_{a}$ and $R_{a}$ on a groupoid $(Q,\cdot)$ such that $L_{a}:Q\rightarrow Q$ and $R_{a}:Q\rightarrow Q$ ($a\in Q$), defined as $L_{a}(x)=a\cdot x$ and $R_{a}(x)=x\cdot a$ for all $x\in Q$, are called the left and right translations respectively. These translations find relevant applications in quasigroups. Briefly, a groupoid $(Q,\cdot)$ is called a quasigroup if the translations $L_{a}$ and $R_{a}$ are bijective. For a finite quasigroup $Q$ the translations $L_{a}$ and $R_{a}$ are permutations. On the other hand, the permutations $\lambda_{i}, \varphi_{i}$ ($i\in Q$) of $Q$ that are defined as
$$
\lambda_{i}(x)\cdot x=i
$$
and
$$
x\cdot \varphi_{i}(x)=i
$$
for all $x \in Q$ are called the left and right middle translations of an element $i$ in a quasigroup $Q(\cdot)$ respectively (see \cite{qua15, qua28}). Moreover, a translation that is both a left and a right middle translation is simply called a middle translation. These translations were first introduced and studied by Belousov (\cite{qua11,qua12}), and since then
many researchers have developed an interest in expanding the concept (see \cite{qua13,qua15,qua14,qua28,qua30}). Therefore, the focus of this paper is to investigate the middle translations and their representations in relation to finite involutory latin quandles. The term involutory latin quandles, as used in this paper, refers to latin quandles with involutory properties. Detailed studies of quandles and involutory latin quandles abound in the literature (see \cite{qua19,qua21,qua02,qua27a,qua27}).
\par
The permutation $\lambda_{i}$ is called a left track (l-track). That is, to find in the column $x$ the cell containing an element $i$, we must select the row $\lambda_{i}(x)$ \cite{qua14}. In other words, $\lambda_{i}(x)$ is a permutation
of those elements that multiply $x$ from the left to give the result $i$. Thus $\lambda_{i}(x)$ is a row selection. On the other hand, $\varphi_{i}(x)$ is a column selection, being a permutation of those elements that multiply $x$ from the right
to give $i$. Therefore $\varphi_{i}(x)$ is a right track (r-track). The group generated by all these
translations ($R_{i}, L_{i}, \lambda_{i}$ and $\varphi_{i}$) is called the multiplication group, denoted $M(Q,\cdot)$,
where $(Q,\cdot)$ is a quasigroup.
\begin{mydef} (\cite{qua01,qua09})
A quandle is a set $X$ with a binary operation $(a,b)\mapsto a b$ such that
\begin{description}
\item[(1)] For any $x\in X,~ xx=x$
\item[(2)] For any $a,b\in X$, there is a unique $x\in X$ such that $a=xb$
\item[(3)] For any $a,b,c\in X$, $(a b) c = (a c)(b c)$
\end{description}
\end{mydef}
The juxtaposition represents the binary operation in most of the definitions in this paper.
\begin{mydef} (\cite{qua16,qua23})
A quandle $X$ is commutative if it satisfies the identity
$$ x y=y x ~~ \forall ~ x,y\in X.$$
\end{mydef}
\begin{mydef} (\cite{qua18,qua01})
An abelian quandle is a quandle satisfying the identity:
$$(w x) (y z)=(wy) (x z)$$
\end{mydef}
\begin{mydef} (\cite{qua17})
Given two quandles $(X,\star)$ and $(Y,\bullet)$ , a map $f:(X,\star)\rightarrow (Y,\bullet)$
is a quandle homomorphism if
$$
f(a\star b)=f(a)\bullet f(b)~~\forall~a,b\in X
$$
If $f$ is a bijection then $f$ is called an isomorphism, and $(X,\star)$ and $(Y,\bullet)$ are said to be isomorphic quandles.
\end{mydef}
\begin{mydef}
The automorphism group of quandle $(X, *)$ , denoted as $Aut(X)$ is the group of all
isomorphisms $f: X \rightarrow X$.
\end{mydef}
\begin{mydef} (\cite{qua16,qua07})
The inner automorphism group of a quandle $(X,\star)$ denoted as Inn$(X)$ is the subgroup of Aut$(X)$ generated by all $S_x$, where $S_{x}(y)=y\star x$, for any $x,y\in X$. The map $S_{x}: X\rightarrow X$ that maps $u$ to $u\star x$ defines a right action of $X$ on $X$, so that we obtain a map $X\rightarrow Inn(X)$
\end{mydef}
\begin{mydef} \cite{qua11,qua19}\label{CoreDef}
Let $(G,\cdot)$ be a group, or, more generally, a Bol loop. The binary algebra $(G,\star)$ with
\begin{align}\label{quad16b}
x\star y=xy^{-1}x
\end{align}
is an involutory quandle, called the core of $(G,\cdot)$.
\end{mydef}
Bruck \cite{phd41} had earlier shown that the core of a Moufang loop, originally defined as
\begin{align}\label{quad16}
x+y=yx^{-1}y
\end{align}
is an involutory quandle.
\par It is to be noted that the quandles described above are not, in general, quasigroups or latin quandles.
\begin{mydef}\cite{qua19}
A groupoid $(Q,\star)$ is called a latin quandle if it obeys the following laws simultaneously:
\begin{description}
\item[(i)] $x\star x=x$ for all $x\in Q$ [idempotent law]
\item[(ii)] for any specified $a,b\in Q$, there exists a unique $x\in Q$ such that $a\star x=b$ [left division law]
\item[(iii)] for any specified $a,b\in Q$, there exists a unique $y\in Q$ such that $y\star a=b$ [right division law]
\item[(iv)] $ a\star (x\star y)=(a\star x)\star (a\star y)$ for all $a, x$ and $y$ in $Q$ [left distributive law]
\item[(v)] $(x\star y)\star a =(x\star a)\star (y\star a)$ for all $a, x$ and $y$ in $Q$ [right distributive law]
\end{description}
\end{mydef}
\begin{myexam}\label{exam1}
Let $(G,\cdot)$ be a cyclic group (more generally, a commutative Moufang loop) of odd order $n$ such that
$$
x+y=xy^{-1}\cdot x ~~\forall~x,y\in G.
$$
Then, the core $(G,+)$ is a latin quandle of odd order $n$.
\end{myexam}
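This construction is easy to verify computationally. The following short script (our own illustration, not part of the cited constructions) realizes the core of the cyclic group $Z_{7}$, written additively as $x + y=(2x-y)\bmod 7$, and checks the latin quandle axioms together with the left involutory property defined below:
\begin{verbatim}
# our own verification of the core of Z_7, written additively:
# x + y = x * y^{-1} * x becomes (2*x - y) mod 7
n = 7
op = lambda x, y: (2 * x - y) % n

idem = all(op(x, x) == x for x in range(n))
latin = all(len({op(x, y) for y in range(n)}) == n and
            len({op(y, x) for y in range(n)}) == n for x in range(n))
ldist = all(op(a, op(x, y)) == op(op(a, x), op(a, y))
            for a in range(n) for x in range(n) for y in range(n))
rdist = all(op(op(x, y), a) == op(op(x, a), op(y, a))
            for a in range(n) for x in range(n) for y in range(n))
lip = all(op(x, op(x, y)) == y for x in range(n) for y in range(n))
print(idem, latin, ldist, rdist, lip)   # expect five True values
\end{verbatim}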
\begin{mydef}(\cite{qua21})\label{quad19}
A latin quandle $(Q,\circ)$ that obeys properties:
\begin{description}
\item[(1)] $x\circ(x\circ y)=y$ Left Involutory Property (LIP) is called a LIPQ.
\item[(2)] $(y\circ x)\circ x =y$ Right Involutory Property (RIP) is called a RIPQ.
\item[(3)](1) and (2) Involutory Property (IP) is called an IPQ.
\item[(4)] $ x\circ (y\circ x) =y$ or $y=(x\circ y) \circ x$ Cross Involutory Property (CIP) is called a CIPQ.
\end{description}
For all $x,y\in Q$.
\end{mydef}
Example \ref{exam1} above is a LIPQ of odd order $n$.
In a similar consideration, (\ref{quad16}) also gives the core $(G,+)$ as a RIPQ of odd order $n$.
Therefore, involutory latin quandles can be constructed as cores of cyclic groups as presented above.
\par
A latin quandle $Q$ that obeys Definition \ref{quad19} (LIPQ, RIPQ or IPQ) is the focus of this paper.
These acronyms will henceforth be used to represent these involutory quandles; they are not new in the literature, where they have always been used in connection with inverse property quasigroups (loops), as shown in the following definition.
\begin{mydef}\label{quad18} (\cite{qua28})
\begin{enumerate}
\item
A quasigroup $(Q,\circ)$ has the Left Inverse Property (LIP) if there exists a permutation $\lambda$ of the set $Q$ such that $$ \lambda x \circ (x\circ y)=y$$ for all $x,y\in Q$ (By Belousov).
\item
A quasigroup $(Q,\circ)$ has the Right Inverse Property (RIP) if there exists a permutation $\rho$ of the set $Q$ such that $$ (x\circ y)\circ \rho y= x$$ for all $x,y\in Q$ (By Belousov).
\item
A quasigroup $(Q,\circ)$ has the Inverse Property (IP) if it is a LIP and RIP-quasigroup (By Belousov).
\item
A quasigroup $(Q,\circ)$ has the Cross Inverse Property (CIP) if there exists a permutation $J$ of the set $Q$ such that $$ (x\circ y)\circ Jx= y$$ for all $x,y\in Q$ (First by Artzy and later by Keedwell and Shcherbacov) (\cite{qua22,qua24,qua28}).
\end{enumerate}
\end{mydef}
More results on involutory latin quandles (LIPQ, RIPQ, CIPQ, IPQ) can be found in \cite{qua21}. A few are presented below.
\begin{myth}\cite{qua21} Let $Q(\cdot)$ be a cyclic group of odd order n such that
$$ x+y=(y^{-1}x)x, \forall x,y\in Q.$$ Then, $(Q,+)$ is a LIPQ of odd order n.
\end{myth}
\begin{myth}\cite{qua21} Let $Q(\cdot)$ be a cyclic group of odd order n such that
$$ x+y=y(yx^{-1}), \forall x,y\in Q.$$ Then, $(Q,+)$ is a RIPQ of odd order n.
\end{myth}
\begin{myth}\cite{qua21} Let $Q(\cdot)$ be a commutative group (not cyclic) of order $3^{n}, n\ge 1$ such that
$$ x+y=x(y^{-1}x), \forall x,y\in Q.$$ Then, $(Q,+)$ is an IPQ of order $3^{n}, n\ge 1$.
\end{myth}
The authors also stated that a latin Alexander quandle of order $4^{n}, n\ge 1$ is a CIPQ, and presented several concrete non-isomorphic examples of LIPQs, RIPQs, CIPQs and IPQs (see-\cite{qua21}).
\begin{mydef} (\cite{phd20})
Let $Q$ be a quasigroup (loop). The set $\Pi=\{ R(a):a\in Q \}$ is called the right regular representation
of $Q$ or briefly the representation of $Q$. The left regular representation is defined analogously.
\end{mydef}
\begin{mydef} (\cite{qua14})
By a spin of a quasigroup $Q(\cdot)$ we mean the permutation $$\varphi_{ij}=\varphi_{i}\varphi^{-1}_{j}=\varphi_{i}\lambda_{j}$$ where $\varphi_{i}$ and $\lambda_{j}$ are
tracks of $Q(\cdot)$. The spin $\varphi_{ii}$ is called trivial. And the set of all spins of a quasigroup $Q(\cdot)$
is denoted $\Phi_{Q}(\cdot)$.
\end{mydef}
\begin{mycor}\label{quad02a}\cite{qua14}
In any group $G(\cdot)$ we have
\begin{enumerate}
\item $\varphi_{i}(x)=x^{-1}\cdot i \quad (\lambda_{i}(x)=i\cdot x^{-1})$,
\item $\varphi_{1}(x)=\lambda_{1}(x)=x^{-1}$
\item $L_{i}(x)=\lambda_{i}(x)\cdot x^{2}$
\item $R_{i}(x)=x^{2}\cdot \varphi_{i}(x)$
\end{enumerate}
where $1$ is the identity element of the group $G$.
\end{mycor}
The next section presents the left and right middle translations and their induced representations, as well as the construction of involutory latin quandles using the middle translations (see Theorem \ref{quad10} and Theorem \ref{quad11}). Section 3 presents the applications of the middle translations to l-spins and r-spins. These concepts enable us to recover cyclic groups (see Theorem \ref{quad04} and Theorem \ref{quad12}) from involutory latin quandles.
\section{Middle translations and their representations}
\begin{mylem}\label{quad19a}
Let $Q (\cdot)$ be an involutory latin quandle (LIPQ, RIPQ or CIPQ), then the following hold:
\begin{description}
\item[(1)] $\lambda_{i}=\varphi^{-1}_{i}$
\item[(2)] $\varphi^{-1}_{i}(x)\cdot x =i$
\item[(3)] $L_{i}(x)=(\lambda_{i}(x)\cdot x)\cdot x$
\item[(4)] $ L_{i}(x)=( x\cdot \varphi_{i}(x))\cdot x$
\item[(5)] $R_{i}(x)= x\cdot (\lambda_{i}(x)\cdot x)$
\item[(6)] $R_{i}(x)=x\cdot (x\cdot \varphi_{i}(x))$
\end{description}
For all $x\in Q$ and $i\in Q$.
\end{mylem}
The proofs of the above results follow directly from the definitions of the four translations on an involutory latin quandle $Q$.
\begin{mylem}\label{quad01}
Let $Q(+)$ be a finite RIPQ. Then, for each $i\in Q, \lambda_{i}(x)+x=i$ implies that $i+x=\lambda_{i}(x)$ for all $ x\in Q$.
\end{mylem}
{\bf Proof:}\\
Consider a RIPQ; then $(i+x)+x=i$ for all $i,x\in Q$. Also $$\lambda_{i}(x)+x=i \Rightarrow (i+x)+x=\lambda_{i}(x)+x=i$$ Thus $$ i+x=\lambda_{i}(x)$$
\begin{mylem}\label{quad02}
Let $Q(\cdot)$ be a finite LIPQ. Then, for each $i\in Q, x\cdot \varphi_{i}(x)=i$ implies that $x\cdot i=\varphi_{i}(x)$ for all $ x\in Q$.
\end{mylem}
{\bf Proof:}\\
The LIP guarantees that
$$x\cdot (x\cdot i)=i$$ and by definition $$ x\cdot \varphi_{i}(x)=i$$ Thus $$ x\cdot i=\varphi_{i}(x)$$
\begin{mycor}\label{quad08}
Let $Q(\cdot)$ be a latin quandle. Then, the permutations $L_{i}$ and $\lambda_{i}$ coincide if and only if $Q(\cdot)$ is a RIPQ.
\end{mycor}
{\bf Proof:}\\
Given that $Q(\cdot)$ is a RIPQ and that $L_{i}:x\mapsto i\cdot x$ ($i\in Q$), then by Lemma \ref{quad01} the first part holds.\\
Conversely, suppose that $L_{i}=\lambda_{i}$, then $i\cdot x=\lambda_{i}(x)$. Multiplying both sides by $x$ from the right gives
$(i\cdot x)\cdot x=\lambda_{i}(x)\cdot x=i ~ ~ \forall x\in Q$. Thus, $Q(\cdot)$ is a RIPQ.
\begin{mycor}\label{quad09}
Let $Q(\cdot)$ be a latin quandle. Then, the permutations $R_{i}$ and $\varphi_{i}$ on $Q(\cdot)$ defined as $R_{i}(x)=x\cdot i$ and $x\cdot \varphi_{i}(x)=i$ respectively coincide if and only if $Q(\cdot)$ is a LIPQ.
\end{mycor}
{\bf Proof:}\\
Since $Q(\cdot)$ is a LIPQ and that $R_{i}:x\mapsto x\cdot i$ ($i\in Q$), then by Lemma \ref{quad02} the forward statement holds.\\
The converse is proved as above.
\begin{mypro}\label{quad05}
Let $Q$ be a finite set and assume there exists a map $\lambda_{i}:Q\rightarrow Q $ ($i\in Q$). Then the binary operation $\circ$ defined by $\lambda_{i}(x)\circ x=i$ gives a RIPQ if and only if:
\begin{enumerate}
\item
$\lambda_{x}(x)=x$ for all $x\in Q$
\item
$\lambda_{i}$ is bijective
\item
$\lambda_{(i\circ x)}(y)=\lambda_{i}(y)\circ \lambda_{x}(y)$ for all $x,y \in Q$.
\end{enumerate}
\end{mypro}
{\bf Proof:}\\
Given that $(Q,\circ)$ is a RIPQ, then
\begin{enumerate}
\item $x\circ x=x$ implies that $\lambda_{x}(x)=x $ by Lemma \ref{quad01}.
\item By Corollary \ref{quad08}, the permutation $\lambda_{i} $ is bijective since $Q$ is a latin quandle.
\item Lemma \ref{quad01} and distributivity guarantees $\lambda_{(i\circ x)}(y)=(i\circ x)\circ y =\lambda_{i}(y)\circ \lambda_{x}(y)$.
\end{enumerate}
The converse follows from a careful application of Lemma \ref{quad01} and Corollary \ref{quad08}
\begin{mypro}\label{quad05b}
Let $Q$ be a finite set and assume there exists a map $\varphi_{i}: Q\rightarrow Q$ ($i\in Q$). Then the binary operation $\circ$ defined by $ x\circ \varphi_{i}(x)=i $ gives a LIPQ if and only if:
\begin{enumerate}
\item
$\varphi_{x}(x)=x $ for all $x\in Q$
\item
$\varphi_{i}$ is bijective
\item
$\varphi_{(i\circ x)}(y)=\varphi_{i}(y)\circ \varphi_{x}(y)$ for all $x,y \in Q$.
\end{enumerate}
\end{mypro}
{\bf Proof:}\\
The first part of the proof is similar to the proof of Proposition \ref{quad05} and the converse also follows from Lemma \ref{quad02} and Corollary \ref{quad09}
\begin{mycor}
Let $Q$ be a RIPQ such that $\lambda_{i}:Q\rightarrow Q $ ($i\in Q$). Then $\lambda_{i}$ defined as $\lambda_{i}(x)\circ x=i$ is an automorphism.
\end{mycor}
{\bf Proof:}\\
The definition $\lambda_{i}(x)\circ x=i$ implies that $\lambda_{i}(x)=i\circ x$ by Lemma \ref{quad01}.\\
Consider: $\lambda_{i}(x\circ y)=i\circ (x\circ y)=\lambda_{i}(x)\circ \lambda_{i}(y)$, by left distributivity.
Thus, $\lambda_{i}$ is an automorphism, since it is bijective ($Q$ being a latin quandle).
\begin{myrem}
That $\varphi_{i}$ is an automorphism when $Q$ is a LIPQ can be proved in a similar manner.
\end{myrem}
\begin{mydef}\label{quad06}
Let $Q (\cdot)$ be an involutory latin quandle. The sets $\Pi_{\lambda}=\{ \lambda_{i}(x): \lambda_{i}(x)\cdot x=i, x\in Q \}$ and $\Pi_{\varphi}=\{ \varphi_{i}(x): x \cdot \varphi_{i}(x)=i, x\in Q \}$ are called the left and right middle representations respectively.
\end{mydef}
\begin{mypro}\label{quad07}
A set $\Pi$ of permutations on $Q$ is the representation of left middle translations on an involutory latin quandle $Q(\cdot)$ if and only if
\begin{description}
\item[(i)] $\pi_{x}(x)=x$ for all $x\in Q$ and $\pi_{x} \in \Pi$ (i.e. every permutation fixes an element of $Q$),
\item[(ii)] for all $x,y\in Q$, there exists a unique $\pi_{y} \in \Pi$ such that $\pi_{y}(x)\cdot x=y$,
\item[(iii)] $\alpha,\beta \in \Pi$ and $\alpha\beta$ fixes the same element of $Q$, then $\alpha=\beta$.
\end{description}
\end{mypro}
{\bf Proof:}\\
Suppose first that (i), (ii) and (iii) hold. Then we need to show that $\Pi$ is a set of permutations induced by the left
middle translations on a latin quandle $Q$.\\
That $\pi_{x}(x)=x$ and $\pi_{y}(x)\cdot x=y$ means that $\pi_{i}(x)\cdot x=i ~ \forall ~ x\in Q$. Moreover, if $\alpha$ and $\beta$ are in $\Pi$
and their product fixes the same element say $x$, then $\alpha \beta=\pi_{x}(x)=x \Rightarrow \pi_{x}(x)\cdot x=x$. Therefore, $\pi_{x} \in \Pi$ (by Definition \ref{quad06}). Thus, $\Pi$ is a set of permutations induced by the left middle translations.\\
Conversely,
\begin{description}
\item[(i)] given $\Pi$ as in Definition \ref{quad06}, $\pi_{i}(x)\cdot x=i (i\in Q)$. So, $i=x$ means that $\pi_{x}(x)\cdot x=x\cdot x$ (since $Q$ is a latin quandle). Then, $\pi_{x}(x)=x ~ \forall ~ x\in Q$.
\item[(ii)] If $i=y, \pi_{y}(x)\cdot x=y~ ~ \forall x, y\in Q $. Then to show uniqueness: suppose $\pi_{y}$ is not unique in $\Pi$, then there exists $\pi_{y}$ and $\pi_{y}'$ such that $\pi_{y}(x)\cdot x=y$ and $\pi_{y}'(x)\cdot x=y$. This implies that $\pi_{y}(x)\cdot x=\pi_{y}'(x)\cdot x \Rightarrow \pi_{y}=\pi_{y}'$. Thus $\pi_{y}$ is unique in $\Pi$.
\item[(iii)] Let $\alpha=\pi_{i}$ and $\beta=\pi_{x} ~(i\ne x)$. Then $ \alpha\beta(y)=\pi_{i}(y)\cdot \pi_{x}(y)=\pi_{(i\cdot x)}(y)$(Proposition \ref{quad05}).
Thus $\alpha\beta=\pi_{(i\cdot x)}$. From (i), $\pi_{(i\cdot x)}(i\cdot x)=(i\cdot x)$. So, $\alpha\beta$ fixes $(i\cdot x)~~ i\ne x$. But if $i=x$ then $\pi_{i}=\pi_{x}$ and $\alpha\beta$ fixes $x$, the same element as $\alpha$ and $\beta$. Thence, $\alpha=\beta$.
\end{description}
\begin{mypro}
A set $\Pi$ of permutations on $Q$ is the representation of right middle translations on an involutory latin quandle $Q(\cdot)$ if and only if
\begin{description}
\item[(i)] $\pi_{x}(x)=x$ for all $\pi_{x} \in \Pi$ (i.e. every permutation fixes an element),
\item[(ii)] for all $x,y\in Q$, there exists a unique $\pi_{y} \in \Pi$ such that $x\cdot \pi_{y}(x)=y$,
\item[(iii)] $\alpha,\beta \in \Pi$ and $\alpha\beta$ fixes the same element of $Q$, then $\alpha=\beta$.
\end{description}
\end{mypro}
The Proof is similar to the proof of Proposition \ref{quad07}.
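For small examples these representations can be computed directly. The sketch below (our own) builds the left and right middle translations on the core $x + y=(2x-y)\bmod 7$ of $Z_{7}$, which is a LIPQ, and confirms that every translation fixes its own index and that $\varphi_{i}(x)=x + i$, in accordance with Lemma \ref{quad02}:
\begin{verbatim}
# our own sketch: middle translations on the core x + y = (2x - y) mod 7
n = 7
op = lambda x, y: (2 * x - y) % n

def lam(i):   # left middle translation: the unique z with z + x = i
    return [next(z for z in range(n) if op(z, x) == i) for x in range(n)]

def phi(i):   # right middle translation: the unique z with x + z = i
    return [next(z for z in range(n) if op(x, z) == i) for x in range(n)]

# every translation fixes its own index
print(all(lam(i)[i] == i and phi(i)[i] == i for i in range(n)))
# this core is a LIPQ, so phi_i(x) = x + i
print(all(phi(i)[x] == op(x, i) for i in range(n) for x in range(n)))
\end{verbatim}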
\begin{myth}
Let $Q(+)$ be a latin quandle of odd order n. Then, the representation induced by the left middle translations on $Q(+)$ is a LIPQ of odd order n if and only if $Q(+)$ is a RIPQ of odd order n.
\end{myth}
{\bf Proof:}\\
Let $Q(+)$ be a latin quandle satisfying $(x+y)+y=x$ for all $x,y\in Q$. Then for each $x$ in $Q$ this implies $\lambda_{x}(y)+ y=x$. This gives the left middle translation.\\
Conversely, suppose $Q(+)$ is generated by the set of all left middle translations on $Q(+)$. That is $$ \lambda_{i}(x)+ x=i $$ By Lemma \ref{quad01} $$ (i+ x)+x =i $$ This means that $Q(+)$ is a RIPQ.
\begin{myth}\label{quad03}
Let $Q(+)$ be a RIPQ of order $n$. Then, the representation induced by the left middle translation on $Q(+)$ is a CIPQ of order $n$ if and only if the representation is commutative.
\end{myth}
{\bf Proof:}\\
Suppose that the representation induced by the left middle translations on $Q(+)$ is commutative. Then $$ \lambda_{i}(x)+ x=i \Rightarrow x+ \lambda_{i}(x)=i$$ But $$ \lambda_{i}(x)=(i+ x)\Rightarrow x+ (i+ x)=i $$ This gives a CIPQ.\\
Conversely, suppose that $Q(+)$ is a CIPQ, that is, $x+(y+ x)=y$. Representing $y + x$ by $\lambda_{y}(x)$ gives
$x+ \lambda_{y}(x)=y$. Commutativity implies that $\lambda_{y}(x)+ x=y$. This gives the left middle translation.
\begin{myth}\label{quad10}
Let $(Q, \star)$ be a cyclic group of odd order n such that $x + y=L_{1}(x)\star \lambda_{1}(y)\star x ~~\forall x,y\in Q$, where $L_{1}$ is a left translation, $\lambda_{1}$ a left middle translation and $1$ the identity element of $(Q,\star)$. Then, $(Q,+)$ is a LIPQ of odd order n.
\end{myth}
{\bf Proof:}\\
Consider:
$$
(x + y)= L_{1}(x)\star \lambda_{1}(y)\star x
$$
then simplifying, using $ \lambda_{1}(y)=y^{-1}$ where $1$ is the identity element of $(Q,\star)$, yields the following results: $$ \begin{gathered}
x + x= L_{1}(x)\star \lambda_{1}(x)\star x=x\\
(x + y)+ z=L_{1}[L_{1}(x)\star \lambda_{1}(y)\star x]\star \lambda_{1}(z)\star L_{1}(x)\star \lambda_{1}(y)\star x\\
=(x + z)+ (y + z)
\end{gathered}
$$
Similarly, $$ x+(y+z)=L_{1}(x)\star \lambda_{1}[L_{1}(y)\star \lambda_{1}(z)\star y]\star x=(x+y)+(x+z)$$
Left and right divisibility hold since $(Q,\star)$ is a group. Then, $(Q,+)$ is a latin quandle.\\ Next
$$
\begin{gathered}
x+(x + y)= x + [L_{1}(x)\star \lambda_{1}(y)\star x]\\
=L_{1}(x)\star \lambda_{1}[L_{1}(x)\star \lambda_{1}(y)\star x]\star x=y
\end{gathered}
$$
Thus, $x + (x + y)=y$. Therefore, $(Q,+)$ is a LIPQ.
\begin{myth}\label{quad11}
Let $(Q,\star)$ be a cyclic group of odd order $n$ such that $x + y=R_{1}(y)\star \varphi_{1}(x)\star y$ ($x,y\in Q$), where $R_{1}$ is a right translation, $\varphi_{1}$ a right middle translation and $1$ is the identity element of $(Q,\star)$. Then $(Q,+)$ is a RIPQ of odd order $n$.
\end{myth}
{\bf Proof:}\\
Similarly, for any group $(Q,\star)$, $\varphi_{1}(x)=x^{-1}$ where $1$ is the identity element of $(Q,\star)$, one can show that
$ x + x=x $ and $(x + y) + z = (x + z) + (y + z)$ as above. The left and right divisibility hold for the same reason. Therefore, $(Q,+)$ is a latin quandle.\\ Next
$$
\begin{gathered}
(x + y) + y\\
=[R_{1}(y)\star\varphi_{1}(x)\star y] + y\\
=R_{1}(y)\star\varphi_{1}[R_{1}(y)\star \varphi_{1}(x)\star y]\star y=x
\end{gathered}
$$
Thus, $(x + y) + y=x$. Therefore, $(Q,+)$ is a RIPQ.
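Both constructions can be checked by brute force on a small cyclic group. Writing $(Q,\star)=(\mathbb{Z}_n,+)$ additively (so that the identity is $0$, $L_{1}(x)=x$ and $\lambda_{1}(y)=-y$), Theorem \ref{quad10} produces the operation $x+y=2x-y \pmod n$, while Theorem \ref{quad11} produces $x+y=2y-x \pmod n$. The following Python sketch (illustrative only) verifies the latin quandle axioms and the respective inverse properties:
\begin{verbatim}
import itertools

n = 9  # any odd n
lipq = lambda x, y: (2 * x - y) % n   # Theorem: x+y = L_1(x)*lambda_1(y)*x
ripq = lambda x, y: (2 * y - x) % n   # Theorem: x+y = R_1(y)*phi_1(x)*y

for q in (lipq, ripq):
    for x, y, z in itertools.product(range(n), repeat=3):
        assert q(x, x) == x                           # idempotency
        assert q(q(x, y), z) == q(q(x, z), q(y, z))   # distributivity
    # latin: every row and column of the Cayley table is a permutation
    assert all(len({q(x, y) for y in range(n)}) == n for x in range(n))
    assert all(len({q(x, y) for x in range(n)}) == n for y in range(n))

# left inverse property for the first construction, right for the second
assert all(lipq(x, lipq(x, y)) == y for x in range(n) for y in range(n))
assert all(ripq(ripq(x, y), y) == x for x in range(n) for y in range(n))
print("both constructions are involutory latin quandles on Z_%d" % n)
\end{verbatim}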
\begin{myth}\label{quad10a}
Let $(Q, \circ)$ be a commutative group (not necessarily cyclic) of order $3^{n}, n\ge 1$ such that $$x + y=L_{1}(x)\circ \lambda_{1}(y)\circ x ~~\forall x,y\in Q, $$ where $L_{1}$ is a left translation, $\lambda_{1}$ a left middle translation and $1$ the identity element of $(Q,\circ)$. Then, $(Q,+)$ is an IPQ of order $3^{n}, n\ge 1$.
\end{myth}
{\bf Proof:}\\
Consider:
$$
(x + y)= x \circ \lambda_{1}(y)\circ L_{1}(x),
$$
where the factors have been rearranged using the commutativity of $(Q,\circ)$; simplifying with $ \lambda_{1}(y)=y^{-1}$ then gives:
$$ \begin{gathered}
x + x= x \circ \lambda_{1}(x)\circ L_{1}(x)=x\\
(x + y)+ z=[x \circ \lambda_{1}(y)\circ L_{1}(x)]+z\\ = (x \circ \lambda_{1}(y)\circ L_{1}(x))\circ \lambda_{1}(z)\circ L_{1}(x \circ \lambda_{1}(y)\circ L_{1}(x))\\= (x\circ y^{-1}\circ x)\circ z^{-1}\circ (x\circ y^{-1}\circ x)\\
=(x + z)+ (y + z)
\end{gathered}
$$
Similarly, $$ \begin{gathered}
x+(y+z)= x + [y \circ \lambda_{1}(z)\circ L_{1}(y)]\\
= x\circ \lambda_{1}[y \circ \lambda_{1}(z)\circ L_{1}(y)]\circ L_{1}(x)\\ =(x+y)+(x+z)
\end{gathered}
$$
Then left and right divisibility hold since $(Q,\circ)$ is a group. Therefore, $(Q,+)$ is a latin quandle.\\ Next
$$
\begin{gathered}
x+(x + y)= x + [x \circ \lambda_{1}(y)\circ L_{1}(x)]\\
= x\circ \lambda_{1}[x \circ \lambda_{1}(y)\circ L_{1}(x)]\circ L_{1}(x)\\ = \lambda_{1}[x \circ \lambda_{1}(y)\circ L_{1}(x)]\circ x\circ L_{1}(x)= (y+x)+x
\end{gathered}
$$
Thus, $(Q,+)$ is an IPQ.
\begin{myrem}
$(Q,+)$ can equivalently be defined as $x + y=x\circ \varphi_{1}(y)\circ R_{1}(x) ~~\forall x,y\in Q, $ where $R_{1}$ is a right translation, $\varphi_{1}$ a right middle translation and $1$ the identity element of $(Q,\circ). $
\end{myrem}
\begin{myth}\label{quad21}
Let $Q(\circ)$ be a commutative latin quandle of order $3^{n}$, $n\ge 1$. Then $Q$ is an IPQ of order $3^{n}$ if and only if
$\lambda_{i}= \varphi_{i}$.
\end{myth}
{\bf Proof}\\
Suppose $Q(\circ)$ is an IPQ; then $Q(\circ)$ is a LIPQ and a RIPQ simultaneously. Applying Lemma \ref{quad01} and Lemma \ref{quad02}, together with the commutativity of $Q$, gives
$$ (i\circ x)\circ x = x\circ (x\circ i)=i=\lambda_{i}(x)\circ x= \varphi_{i}(x)\circ x$$
implies that $\lambda_{i}=\varphi_{i}$.\\
Conversely, since $\lambda_{i}(x)=\varphi_{i}(x)$, we have $\lambda_{i}(x)\circ x =\varphi_{i}(x)\circ x$.
But $\lambda_{i}(x)\circ x = i=x\circ \varphi_{i}(x)$, so $(i\circ x)\circ x = i=x\circ (x\circ i)$ by Lemma \ref{quad01} and Lemma \ref{quad02}. Thus $Q$ is an IPQ.
\begin{myrem}
All translations $L_{i}, R_{i}, \lambda_{i}$ and $\varphi_{i}$ coincide in an IPQ.
\end{myrem}
\section{Spins of Involutory Latin Quandles}
\begin{mydef}
Let $Q(\cdot)$ be a latin quandle. Then, by a left spin (l-spin) of $Q(\cdot)$ we mean the permutation
$$ \lambda_{ij}=\lambda_{i}\lambda^{-1}_{j}=\lambda_{i}\varphi_{j} $$
where $\lambda_{i}$ and $\varphi_{j}$ are left and right middle translations on $Q$ respectively.
\end{mydef}
\begin{mydef}
Let $Q(\cdot)$ be a latin quandle. Then, by right spin (r-spin) of $Q(\cdot)$ we mean the permutation
$$ \varphi_{ij}=\varphi_{i}\varphi^{-1}_{j}=\varphi_{i}\lambda_{j} $$
where $\varphi_{i}$ and $\lambda_{j}$ are right and left middle translations on $Q$ respectively.
\end{mydef}
A permutation $\pi_{ij}$ on a latin quandle $Q$ is a spin if it is both an l-spin and an r-spin.
\begin{mylem}\label{quad20}
Let $Q(\cdot)$ be an involutory latin quandle (LIPQ, RIPQ, or both) of order $n$. Then, the following properties hold:
\begin{enumerate}
\item $\varphi_{ij}(x)\neq x$ ($\lambda_{ij}(x)\neq x$) for all $x\in Q$ and $i\neq j$,
\item $\varphi_{pi}(x) \neq \varphi_{pj}(x)$ ($\lambda_{pi}(x) \neq \lambda_{pj}(x)$) for all $x\in Q$ and $i\neq j$,
\item $\varphi_{ij}=\varphi^{-1}_{ji}$ ($\lambda_{ij}=\lambda^{-1}_{ji}$) for $i\neq j$,
\item $\varphi_{ij} = \varphi_{(i+1)(j+1)}$ for $i,j=1,2,\ldots,n-1$, and $\varphi_{ii}$ is trivial,
\item $\lambda_{ij} = \lambda_{(i+1)(j+1)}$ for $i,j=1,2,\ldots,n-1$, and $\lambda_{ii}$ is trivial,
\item $ \varphi_{n1}=\varphi_{(n-1)n} $,
\item $\lambda_{n1}=\lambda_{(n-1)n}$.
\end{enumerate}
\end{mylem}
\begin{myth}\label{quad04}
Let $Q(\cdot)$ be a LIPQ of odd order $n$. Then, the set of all r-spins of $Q$ is a cyclic group of odd order $n$ under composition of mappings, denoted as $(\Phi_{R},\circ)$.
\end{myth}
{\bf Proof:}\\
Let $$ \Phi_{R}=\{ \varphi_{i},\lambda_{j}\in M(Q,\cdot)\mid\varphi_{ij}=\varphi_{i}\varphi^{-1}_{j}=\varphi_{i}\lambda_{j},\ i,j\in Q \} $$ such that the order of $\Phi_{R}$ is odd. From the definition, $\Phi_{R}$ is a subset of the multiplication group $M(Q,\cdot)$. $\varphi_{ii}\in \Phi_{R}$ since $\varphi_{ii}=\varphi_{i}\varphi^{-1}_{i}=I_{ii}$ (the identity r-spin), and thus $\Phi_{R}$ is not empty. Also, $\varphi_{ij}\circ \varphi_{jk}=\varphi_{ik}$ ($i\ne k$), hence $\Phi_{R}$ is closed under composition of mappings. Moreover, $\varphi_{ii}=\varphi_{ij}\circ \varphi^{-1}_{ij}=\varphi_{ij}\circ \varphi_{ji}=I_{ii}$, so $\varphi_{ji}$ is the inverse of
$\varphi_{ij}$ in $\Phi_{R}$. Therefore, $\Phi_{R}$ is a subgroup of $M(Q,\cdot)$, and thus a group.
\par
Now, consider
$\varphi_{ij}(x)=x\cdot (i\cdot j)=(x\cdot i)\cdot j=j\cdot(i\cdot x)=(j\cdot i)\cdot x=\varphi^{-1}_{ji}(x) \Rightarrow\varphi_{ij}=\varphi^{-1}_{ji}$ (by Lemma \ref{quad20}(3)). Thus $\Phi_{R}$ is commutative. Therefore, $\Phi_{R}$ is a cyclic group of odd order $n$.
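The r-spins can be made explicit on a cyclic example. For the LIPQ $x\cdot y=2x-y \pmod n$, one finds $\lambda_{i}(x)=(x+i)/2$ and $\varphi_{i}(x)=2x-i$, so that $\varphi_{j}^{-1}=\lambda_{j}$ and $\varphi_{ij}(x)=x+(j-i)$: the r-spins are exactly the translations of $\mathbb{Z}_n$ and form a cyclic group of order $n$. A short Python sketch confirming this (illustrative only):
\begin{verbatim}
n = 7
inv2 = pow(2, -1, n)
lam = {i: {x: (x + i) * inv2 % n for x in range(n)} for i in range(n)}
phi = {i: {x: (2 * x - i) % n for x in range(n)} for i in range(n)}

# phi_j^{-1} = lambda_j, so the r-spin phi_{ij} equals phi_i o lambda_j
for j in range(n):
    assert all(phi[j][lam[j][x]] == x for x in range(n))

# every r-spin is the translation x -> x + (j - i); the set of all
# r-spins is therefore the cyclic group Z_n under composition
spins = {tuple(phi[i][lam[j][x]] for x in range(n))
         for i in range(n) for j in range(n)}
assert spins == {tuple((x + c) % n for x in range(n)) for c in range(n)}
print("r-spins on Z_%d form a cyclic group of order %d" % (n, len(spins)))
\end{verbatim}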
\begin{myth}\label{quad12}
Let $Q(\cdot)$ be a RIPQ of odd order $n$. Then, the set of all l-spins of $Q$ is a cyclic group of odd order $n$
under composition of mappings, denoted as $(\Phi_{L},\circ)$.
\end{myth}
{\bf Proof:}\\
Let $$ \Phi_{L}=\{ \lambda_{i}, \varphi_{j} \in M(Q,\cdot)\mid\lambda_{ij}=\lambda_{i}\lambda^{-1}_{j}=\lambda_{i}\varphi_{j},\ i,j\in Q \} .$$ The remaining part is similar to the proof of Theorem \ref{quad04}.
\begin{myth}
Let $Q(\cdot)$ be an IPQ of order $3^{n}, n\ge 1$. Then, the left and right spins coincide.
\end{myth}
{\bf Proof}\\
By Theorem \ref{quad21} $\lambda_{i}=\varphi_{i}$ and $\lambda_{j}=\varphi_{j}$.\\
Then, consider: $$ \lambda_{ij}=\lambda_{i}\varphi_{j}= \varphi_{i}\varphi_{j}=\varphi_{i}\lambda_{j}=\varphi_{ij} $$
\begin{myrem}
We speak simply of spins of an IPQ, since every spin of an IPQ is simultaneously an l-spin and an r-spin.
\end{myrem}
\section{Conclusion}
The concept of middle translation is a `track algebra': each element of an algebra is tracked either from the left (left middle translation) or from the right (right middle translation).
This paper therefore investigated the consequences of left (right) middle translations on involutory latin quandles, as well as their induced representations. These permutations ($\lambda_{i}$ and $\varphi_{i}$) were further applied to cyclic groups of odd order to produce involutory latin quandles. The r-spins and l-spins of these quandles were then used to reverse the construction and recover the underlying cyclic groups.
As the first case, we assume a $\lambda \phi^4$ potential for the dark glueball. This is a simplified but interesting case where the same quartic coupling controls both the glueball dark matter self-interaction strength and the properties of the DSS.
We assume $\lambda>0$, so the glueball self-interaction is repulsive, or equivalently, its contribution to the scalar potential energy is positive. A stable boson star configuration can be obtained when this repulsive interaction balances the gravitational attraction. The potential takes the form
\begin{equation}\label{V1}
V(\phi) = \frac{1}{2} m^2 \phi^2 + \frac{1}{4} \lambda \phi^4 \ .
\end{equation}
In this case, the above Eqs. (\ref{e1}, \ref{e2}, \ref{kg}) can be simplified to
\begin{eqnarray}
&&\mathcal{M}'(x) = x^2 \left[ \frac{1}{4} \left( \frac{\Omega^2}{B(x)} + 1 \right) \sigma(x)^2 + \frac{3}{32} \Lambda \sigma(x)^4 + \frac{\sigma'(x)^2}{4A(x)} \right] \ , \\
&&\frac{B'(x)}{x A(x) B(x)} - \frac{1}{x^2}\left( 1 - \frac{1}{A(x)} \right) = \frac{1}{2} \left( \frac{\Omega^2}{B(x)} - 1 \right) \sigma(x)^2 - \frac{3}{32} \Lambda \sigma(x)^4 + \frac{\sigma'(x)^2}{4A(x)} \ , \\
&&\sigma''(x) + \left( \frac{2}{x} + \frac{B'(x)}{2B(x)} - \frac{A'(x)}{2A(x)} \right) \sigma'(x) + A(x) \left[ \left(\frac{\Omega^2}{B(x)} - 1 \right) \sigma(x) - \frac{1}{2} \Lambda \sigma^3(x) \right] =0 \label{kg2} \ ,
\end{eqnarray}
where we have defined $x= m r$, $\sigma = \sqrt{4\pi G} \Phi$, $\Omega = \omega/m$, $\Lambda = \lambda/(4\pi G m^2)$, and
\begin{equation}
A = \left( 1- \frac{2 \mathcal{M}}{x} \right)^{-1} \ .
\end{equation}
From the definition of Schwarzschild metric, $M_{\rm pl}^2\mathcal{M}(x)/m$ is the mass of the star within a radius $x/m$. Note that $\mathcal{M}$ and $x$ are both dimensionless.
We notice that for glueball dark matter, the model parameters satisfy the condition $\Lambda \gg 1$. In this case, the above equation can be further simplified. Following~\cite{Colpi:1986ye}, we further define
$\sigma_* = \sqrt{\Lambda} \sigma$, $x_* = x/\sqrt{\Lambda}$, $\mathcal{M}_* = \mathcal{M}/\sqrt{\Lambda}$. First, the KG equation (\ref{kg2}) becomes
\begin{eqnarray} \label{kg3}
\Lambda^{-1}\left[\sigma_*''(x_*) + \left( \frac{2}{x_*} + \frac{B'(x_*)}{2B(x_*)} - \frac{A'(x_*)}{2A(x_*)} \right) \sigma'_*(x_*)\right] + A(x_*) \left[ \left(\frac{\Omega^2}{B(x_*)} - 1 \right) \sigma_*(x_*) - \frac{1}{2} \sigma_*^3(x_*) \right] =0 \ . \nonumber \\
\end{eqnarray}
In the large $\Lambda$ limit, the first term can be dropped, and we obtain,
\begin{eqnarray}\label{sigma*}
\sigma_*(x_*) \simeq \sqrt{ 2 \left(\frac{\Omega^2}{B(x_*)} - 1 \right) } \label{kg4} \ .
\end{eqnarray}
This approximate relation is valid until $\sigma_*$ approaches 0, where the second term in Eq.~(\ref{kg3}) vanishes and we can no longer neglect the terms with derivatives of $\sigma_*$. With Eq.~(\ref{sigma*}), the two Einstein equations take the form,
\begin{eqnarray}
&&\mathcal{M}'_*(x_*) \simeq x_*^2 \left[ \frac{1}{4} \left( \frac{\Omega^2}{B(x_*)} + 1 \right) \sigma_*(x_*)^2 + \frac{3}{32} \sigma_*(x_*)^4 \right] \ , \label{e3} \\
&&\frac{B'(x_*)}{x_* B(x_*)} \left( 1 - \frac{2 \mathcal{M}_*(x_*)}{x_*} \right) - \frac{2 \mathcal{M}_*(x_*)}{x_*^3} \simeq
\frac{1}{2} \left( \frac{\Omega^2}{B(x_*)} - 1 \right)\sigma_*(x_*)^2 - \frac{3}{16} \sigma_*(x_*)^4 \ . \label{e4}
\end{eqnarray}
We solve Eqs.~(\ref{kg4}, \ref{e3}, \ref{e4}) numerically, starting from the boundary condition $\mathcal{M}_*(x_*=0)=0$ at the origin and integrating up to the point $x_*=x_R$ where $\sigma_*(x_R) \to 0$. We vary $B(0)/\Omega^2 <1$ as a free parameter. The boson star mass and radius are determined by,
\begin{eqnarray}\label{MR1}
M = \sqrt{\frac{\lambda}{4\pi}} \frac{M_{pl}^3}{m^2} \mathcal{M}_*(x_R)\ , \ \ \ \ \ R = \sqrt{\frac{\lambda}{4\pi}} \frac{M_{pl}}{m^2} x_R \ .
\end{eqnarray}
Note that from our definition above $x_R$ and $\mathcal{M}_*$ are dimensionless quantities, plotted in Fig.~\ref{phi4}.
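To make the shooting procedure concrete, note that Eq.~(\ref{kg4}) allows one to eliminate $B$ in favor of $u:=\Omega^2/B$, for which $B'/B=-u'/u$ and $\sigma_*^2=2(u-1)$; Eqs.~(\ref{e3}) and (\ref{e4}) then close on $(\mathcal{M}_*,u)$ alone. A minimal numerical sketch is given below (the integration tolerances and the small-$x_*$ regularization are illustrative choices, not necessarily those used for Fig.~\ref{phi4}):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, state):
    """Interior equations in the large-Lambda limit, with u = Omega^2/B
    and sigma_*^2 = 2(u - 1) inserted from the algebraic KG relation."""
    M, u = state
    s2 = 2.0 * (u - 1.0)                                  # sigma_*^2
    dM = x**2 * (0.25 * (u + 1.0) * s2 + (3.0 / 32.0) * s2**2)
    du = -x * u * (2.0 * M / x**3 + s2**2 / 16.0) / (1.0 - 2.0 * M / x)
    return [dM, du]

def surface(x, state):        # sigma_* -> 0 (u -> 1) marks the surface
    return state[1] - 1.0
surface.terminal = True

def star(b0, x0=1e-6):
    """Shoot outward for a boundary value b0 = B(0)/Omega^2 < 1 and
    return (x_R, M_*(x_R)) in the dimensionless units defined above."""
    u0 = 1.0 / b0
    M0 = rhs(x0, [0.0, u0])[0] * x0 / 3.0   # M ~ x^3 near the center
    sol = solve_ivp(rhs, (x0, 100.0), [M0, u0], events=surface,
                    rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0], sol.y_events[0][0][0]

for b0 in (0.4, 0.53, 0.8):   # scan the free parameter B(0)/Omega^2
    xR, MR = star(b0)
    print(f"B(0)/Omega^2={b0:.2f}:  x_R={xR:.3f}  M_*(x_R)={MR:.3f}")
\end{verbatim}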
\begin{figure}[t]
\centerline{\includegraphics[width=8cm]{MassPhi_4.pdf}}
\centerline{\includegraphics[width=8cm]{RadiusPhi_4.pdf}}
\centerline{\includegraphics[width=8cm]{RatioPhi_4.pdf}}
\caption{Properties of the DSS by numerically solving the coupled classical Einstein-Klein-Gordon equations with a $\phi^4$ scalar glueball potential, as described in Section A. The upper plot shows the quantity $\mathcal{M}_*(x_R)$, the DSS mass in units of $\sqrt{\lambda/4\pi}\left(M_{\rm pl}^3/m^2 \right)$, as a function of the boundary condition $B(0)/\Omega^2$. The middle plot shows the quantity $x_R$, the DSS radius in units of $\sqrt{\lambda/4\pi}\left(M_{\rm pl}/m^2 \right)$, as a function of $B(0)/\Omega^2$.
The lower plot shows the ratio $\mathcal{M}_*(x_R)/x_R^3$ as a function of $B(0)/\Omega^2$.
The blue shaded region is not accessible through the accretion process.}\label{phi4}
\end{figure}
The results are shown as a function of $B(0)/\Omega^2$ in Fig.~\ref{phi4}. The shaded regions cannot be reached, which can be understood from the picture in which the DSS accretes its mass by capturing more and more dark matter particles around it. The mass growth begins from the rightmost point of the curve and proceeds toward the left, until it reaches the maximum at $B(0)/\Omega^2=0.53$, with $\mathcal{M}_*(x_R)=0.22$ and $x_R=1.35$.
Thus the mass and radius of the DSS are,
\begin{eqnarray}
M= \sqrt{\lambda} \left( \frac{0.3\,\rm GeV}{m} \right)^2 M_{\odot} \ , \hspace{0.5cm} R = \sqrt{\lambda} \left( \frac{0.3\,\rm GeV}{m} \right)^2 \times 10 \,{\rm km} \ .
\end{eqnarray}
where $M_{\odot} =2\times10^{30}\,$kg is the solar mass. This corresponds to the highest ratio, ${\rm Max}[{\mathcal{M}_*(x_R)}/{x_R^3}] \simeq 0.09$.
From Eqs.~(\ref{binary}) and (\ref{MR1}), we derive the highest gravitational wave frequency,
\begin{eqnarray}\label{fmax1}
f_{\rm max} = \frac{m^2}{2\pi M_{\rm pl}} \sqrt{\frac{4\pi}{\lambda} {\rm Max}\left[\frac{\mathcal{M}_*(x_R)}{x_R^3}\right]} \simeq
50\, {\rm Hz} \times \sqrt{\frac{1}{\lambda}} \times \left( \frac{m}{0.05\,\rm GeV} \right)^2\ .
\end{eqnarray}
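The prefactor of Eq.~(\ref{fmax1}) can be cross-checked with a back-of-the-envelope evaluation (we assume $M_{\rm pl}=1.22\times10^{19}\,$GeV, consistent with $G=M_{\rm pl}^{-2}$, and the conversion $1\,{\rm GeV}\simeq1.52\times10^{24}\,{\rm s}^{-1}$):
\begin{verbatim}
import math

M_PL = 1.22e19          # Planck mass in GeV (G = M_pl^{-2} convention)
GEV_TO_HZ = 1.519e24    # hbar = 1: one GeV in s^{-1}

def f_max_phi4(m_gev, lam, ratio=0.09):
    """Highest GW frequency for the phi^4 case; ratio is
    Max[M_*(x_R)/x_R^3] from the numerical solution above."""
    return (m_gev**2 / (2.0 * math.pi * M_PL)
            * math.sqrt(4.0 * math.pi * ratio / lam) * GEV_TO_HZ)

print(f"{f_max_phi4(0.05, 1.0):.0f} Hz")   # reproduces the ~50 Hz scale
\end{verbatim}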
\subsection{\large B. \ Glueball Potential From Large $N$ Limit}
In general, the scalar glueball potential not only contains the quartic term but also the cubic and higher dimensional interaction terms. In the large $N$ limit, they follow the power counting $\lambda_3 \sim 1/N$, $\lambda_4 \sim 1/N^2$, $\lambda_5 \sim 1/N^3$ and so on.
In this section, we consider a more realistic dark glueball potential based on the large $N$ power counting~\cite{Soni:2016gzf, Forestell:2016qhc},
\begin{equation}\label{VlargeN}
V(\phi) = \frac{a_2}{2!} m^2 \phi^2 + \frac{a_3}{3!} \left( \frac{4\pi}{N} \right) m \phi^3 + \frac{a_4}{4!} \left( \frac{4\pi}{N} \right)^2 \phi^4 + \frac{a_5}{5!} \left( \frac{4\pi}{N} \right)^3 \frac{\phi^5}{m} + \cdots \ .
\end{equation}
The coefficients $a_i$ are order 1 parameters and they could in principle be reliably determined from lattice calculations. To proceed, we will assume that all $a_i=1$. In this case the potential takes the more compact form
\begin{equation}
V(\phi) = \frac{m^4 N^2}{16\pi^2} \left( e^{\frac{4\pi \phi}{Nm}} - \frac{4\pi \phi}{Nm} - 1 \right) \ .
\end{equation}
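One can verify that this closed form indeed resums Eq.~(\ref{VlargeN}) with all $a_i=1$: the coefficient of $\phi^k$ must equal $(1/k!)(4\pi/N)^{k-2}m^{4-k}$. A short symbolic check (illustrative):
\begin{verbatim}
import sympy as sp

phi, m, N = sp.symbols('phi m N', positive=True)
g = 4 * sp.pi / N                        # large-N coupling 4*pi/N
V = m**4 / g**2 * (sp.exp(g * phi / m) - g * phi / m - 1)

series = sp.series(V, phi, 0, 6).removeO().expand()
for k in range(2, 6):   # compare with (a_k/k!) g^{k-2} m^{4-k} phi^k
    expected = g**(k - 2) * m**(4 - k) / sp.factorial(k)
    assert sp.simplify(series.coeff(phi, k) - expected) == 0
print("exponential form matches the large-N expansion with a_k = 1")
\end{verbatim}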
We repeat the derivations similar to the $\phi^4$ potential case, and reach the following coupled equations, in analogy to Eqs.~(\ref{kg4}, \ref{e3}, \ref{e4}),
\begin{eqnarray}
&&\frac{\Omega^2}{B(x_*)} \simeq { }_pF_q\left[ \left\{1/2 \right\}, \left\{ 1, 3/2 \right\}, 4\pi^2 \sigma_*(x_*)^2 \rule{0mm}{4mm}\right] \ , \label{20} \\
&&\mathcal{M}'_*(x_*) \simeq x_*^2 \left[ \frac{\Omega^2}{4B(x_*)} \sigma_*(x_*)^2 + \frac{1}{16\pi^2} \left( I_0 \left(4\pi \sigma_*(x_*)\rule{0mm}{3.5mm}\right) -1 \rule{0mm}{4mm}\right) \right] \ , \label{21} \\
&&\frac{B'(x_*)}{x_* B(x_*)} \left( 1 - \frac{2 \mathcal{M}_*(x_*)}{x_*} \right) - \frac{2 \mathcal{M}_*(x_*)}{x_*^3} \simeq \frac{\Omega^2}{2B(x_*)}\sigma_*(x_*)^2 -
\frac{1}{8\pi^2} \left( {\rm I}_0 \left(4\pi \sigma_*(x_*)\rule{0mm}{3.5mm}\right) -1 \rule{0mm}{4mm}\right) \ ,
\end{eqnarray}
where the field and parameter redefinitions are similar to above, except that here $\Lambda = 1/ (4\pi G m^2 N^2)$. Throughout the parameter space of interest to this study, $\Lambda \gg 1$. The function $I_0$ is the modified Bessel function, and ${ }_pF_q$ is the generalized hypergeometric function.
In deriving Eqs.~(\ref{20}) and (\ref{21}), we have used the relations
\begin{eqnarray}
\left\langle \frac{e^{\frac{4\pi \Phi \cos\omega t}{Nm}} -1}{\cos\omega t} \right\rangle = \frac{4\pi \Phi}{N m} { }_pF_q\left[ \left\{1/2 \right\}, \left\{ 1, 3/2 \right\}, \frac{4\pi^2 \Phi^2}{N^2 m^2} \rule{0mm}{4mm}\right] \ , \hspace{0.5cm}
\left\langle e^{\frac{4\pi \Phi \cos\omega t}{Nm}}\right\rangle = I_0 \left( \frac{4\pi \Phi}{Nm} \right) \ .
\end{eqnarray}
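Both relations amount to averages over one oscillation period and are easy to confirm numerically. Writing $a=4\pi\Phi/(Nm)$, a sketch using \texttt{mpmath} reads (the test value of $a$ is arbitrary):
\begin{verbatim}
import mpmath as mp

a = mp.mpf('1.3')   # a = 4*pi*Phi/(N m), arbitrary test value

# period averages <.> over t in [0, 2*pi] (cos is symmetric about pi)
avg1 = mp.quad(lambda t: mp.expm1(a * mp.cos(t)) / mp.cos(t),
               [0, mp.pi]) / mp.pi
avg2 = mp.quad(lambda t: mp.exp(a * mp.cos(t)), [0, mp.pi]) / mp.pi

rhs1 = a * mp.hyper([mp.mpf(1) / 2], [1, mp.mpf(3) / 2], a**2 / 4)
rhs2 = mp.besseli(0, a)
print(mp.chop(avg1 - rhs1), mp.chop(avg2 - rhs2))   # both ~ 0
\end{verbatim}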
\begin{figure}[t]
\centerline{\includegraphics[width=8cm]{MassE_Phi.pdf}}
\centerline{\includegraphics[width=8cm]{RadiusE_Phi.pdf}}
\centerline{\includegraphics[width=8cm]{RatioE_Phi.pdf}}
\caption{Properties of the DSS by numerically solving the coupled classical Einstein-Klein-Gordon equations with a scalar glueball potential following from large $N$ counting, as described in Section B. The upper plot shows the quantity $\mathcal{M}_*(x_R)$, the DSS mass in units of $M_{\rm pl}^3/\left(\sqrt{4\pi} N m^2 \right)$, as a function of the boundary condition $B(0)/\Omega^2$. The middle plot shows the quantity $x_R$, the DSS radius in units of $M_{\rm pl}/\left(\sqrt{4\pi} N m^2 \right)$, as a function of $B(0)/\Omega^2$.
The lower plot shows the ratio $\mathcal{M}_*(x_R)/x_R^3$ as a function of $B(0)/\Omega^2$.
The blue shaded region is not accessible through the accretion process.}\label{e^phi}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=12cm]{LIGO_Reach.pdf}}
\caption{The LIGO experiment could probe the SU($N$) glueball dark matter parameter space, assuming binary DSS exist.
In the red region, the highest frequency of gravitational wave radiation, calculated based on the large $N$ glueball potential in section B, lies between 50 and 1000\,Hz; thus this region is potentially within the LIGO sensitivity.
In the magenta region, the highest gravitational wave frequency from binary DSS is between 0.03\,mHz and 0.1\,Hz and could be probed by the future LISA/eLISA project.
The region between the dashed lines corresponds to the case of the $\phi^4$ potential (discussed in section A).
For the same SU($N$) model, in the yellow band the $3\to2$ annihilation enables the lightest scalar glueball to have the proper free-streaming length to be a warm dark matter candidate, while in the blue band, the $2\to2$ elastic scattering of the glueball dark matter is large enough for it to be a self-interacting dark matter candidate~\cite{Soni:2016gzf}. In the lower left corner, the gray region is already ruled out by the bullet cluster and Lyman alpha observations.
}\label{LIGO}
\end{figure}
The numerical results are shown in Fig.~\ref{e^phi}. Compared to the $\phi^4$ case, if $\lambda \simeq 1/N^2$, we find that, by varying $B(0)/\Omega^2$, the DSS is allowed to be more massive and at the same time much larger in radius. As a result, the gravitational wave radiated from equal-mass binary DSS has lower frequency.
We find the highest gravitational wave frequency that can be radiated by the binary glueball dark star system corresponds to $B(0)/\Omega^2=0.32$. At this point, the star mass reaches its maximum, with $\mathcal{M}_*(x_R)=0.74$ and $x_R=4.7$. The corresponding mass and radius of DSS are,
\begin{eqnarray}\label{MR2}
\begin{split}
M &= \sqrt{\frac{1}{4\pi}} \frac{M_{pl}^3}{N m^2} \mathcal{M}_*(x_R) = \left( \frac{1}{N} \right) \left( \frac{0.6\,\rm GeV}{m} \right)^2 M_{\odot} \ , \\
R &= \sqrt{\frac{1}{4\pi}} \frac{M_{pl}}{N m^2} x_R = \left( \frac{1}{N} \right)\left( \frac{0.6\,\rm GeV}{m} \right)^2 \times 10 \,{\rm km} \ .
\end{split}
\end{eqnarray}
Interestingly, if $m\sim 1\,$GeV and $N\sim\mathcal{O}(1)$, the DSS has the typical mass of a massive compact halo object (MACHO).
On the other hand, \cite{Soni:2016gzf} showed that, for the dark glueball to be both a self-interacting and a warm dark matter candidate, the favored ranges of parameters are $m\sim 0.01-10\,$keV, $N\sim 10^6-10^3$. Following (\ref{MR2}), this corresponds to the highest DSS mass in the range $10^6-10^9M_{\odot}$ and the lowest DSS radius in the range $10^2-10^5R_\odot$, where the solar radius is $R_\odot = 7\times10^5\,$km.
With ${\mathcal{M}_*(x_R)}/{x_R^3}\simeq0.007$, we find the highest gravitational wave frequency is given by
\begin{eqnarray}\label{fmax2}
f_{\rm max} = \frac{m^2}{2\pi M_{\rm pl}} \sqrt{4\pi N^2 {\rm Max}\left[\frac{\mathcal{M}_*(x_R)}{x_R^3}\right]}
\simeq 50\, {\rm Hz} \times N \times \left( \frac{m}{0.09\,\rm GeV} \right)^2 \ .
\end{eqnarray}
The frequency window most sensitive to the LIGO experiment is 50--1000\,Hz. Therefore, if the $f_{\rm max}$ derived in Eqs.~(\ref{fmax1}) or (\ref{fmax2}) lies within this window and if the DSS pair is located close enough to the earth, LIGO has the potential to detect the gravitational waves.
For the SU($N$) glueball model, the $m$-$N$ parameter space that can potentially be probed by the future running of LIGO is shown in Fig.~\ref{LIGO} by the red shaded band (for the large $N$ potential) and by the region between the dark-red dashed lines (for the $\phi^4$ potential). Here, we have set the value $\lambda=8\pi^2/(3N^2)$, which follows from comparing Eqs.~(\ref{V1}) and (\ref{VlargeN}). In this case, the two potentials make very similar predictions on the gravitational wave frequency.
Also shown in Fig.~\ref{LIGO} are the region of parameter space which allows the SU($N$) glueball to be a warm dark matter candidate (yellow band) or self-interacting dark matter (blue band), as discussed in detail in Ref.~\cite{Soni:2016gzf}. In particular, the dark matter self-interaction cross section is given by $\sigma_{2\to2}\sim\lambda^2/m^2$~\cite{Soni:2016gzf}, and here we assume the order one parameter $a_4$ in Eq.~(\ref{VlargeN}) to be in the range $1/3<a_4<3$.
On the other hand, the warm dark matter scenario can be achieved through the $3\to2$ annihilation process among the glueball particles (possible with the large $N$ potential) and the collisional damping~\cite{Soni:2016gzf}.
In the lower left corner of Fig.~\ref{LIGO}, the glueball dark matter has either too strong self-interaction or too large damping scale in the power spectrum, and the gray region is already ruled out by the bullet cluster and Lyman-$\alpha$ forest observations.
Interestingly, the current LIGO experiment is already probing the self-interacting glueball dark matter with mass scale around 0.1\,GeV. The future gravitational wave observatories that are sensitive to lower frequencies (for example, LISA/eLISA could probe 0.03\,mHz to 0.1\,Hz~\cite{lisa, Seoane:2013qna}) will be able to further probe the parameter space of a lighter (between keV and MeV) glueball dark matter, and even that of a warm dark matter. The region that could potentially be probed by eLISA is shown by the magenta band in Fig.~\ref{LIGO}.
\subsection{\large Conclusion and Outlook}
To summarize, in this work we explored a natural and important consequence of having glueball dark matter from a hidden sector with pure SU($N$) gauge symmetry --- the formation of dark SU($N$) stars (DSS). We solved the classical Einstein-Klein-Gordon equations for the mass-radius relations of the DSS. Because the dark glueball is a real scalar, the DSS configuration in general is time dependent and oscillates with a frequency given by the glueball mass. In our calculation, we take advantage of the large hierarchy where each DSS oscillation frequency is much higher than the gravitational wave frequency radiated by binary DSS (that can be observed, for example, by LIGO), and semi-analytically solve for the time-averaged DSS configuration.
Based on this calculation, we derive the frequency of gravitational waves radiated by binary DSS systems, as a function of the only two parameters in this simple model, the glueball mass $m$ and the number of colors $N$. We confront the model predictions to the frequency window sensitive to LIGO and future gravitational wave observatories, and find the regions of parameter space which could potentially be probed. This model offers an exciting connection between the gravitational wave radiation on compact dark stellar scales and the dark matter self-interaction on (dwarf-) galactic scales. Our main results are summarized in the key plot Fig.~\ref{LIGO}.
There are several further comments in order.
\begin{itemize}
\item Throughout our discussions the scalar glueball dark matter is treated effectively as a scalar field. On the theoretical side, it is known to be difficult to generate a small mass for a fundamental scalar particle. A much more appealing way is to generate the mass dynamically in an $SU(N)$ gauge theory, where dimensional transmutation is a well-understood effect. On cosmological scales, at the moment we do not know how to differentiate our glueball from a fundamental scalar. This is an interesting and open issue to which we may return in the future.
\item We find the ratio of the DSS radius to its mass to be $(R/M)_{\rm DSS} = {x_R}/({M_{pl}^2 \mathcal{M}_*(x_R)})$. In contrast, the ratio for a Schwarzschild black hole is simply $(R/M)_{\rm BH} = {2}/{M_{pl}^2}$. From our numerical calculation, we find that $x_R/\mathcal{M}_*(x_R)>2$ always holds; thus the radius of the DSS is larger than the Schwarzschild radius of a black hole of equal mass. Therefore, a DSS will not collapse into a black hole.
\item Because the mass-radius relation of our glueball dark star is different from that of a black hole, it may be possible to distinguish the binary glueball dark star from a binary black hole as the source for gravitational waves. The useful information in the gravitational wave spectrum for this precision measurement includes the time dependence of gravitational wave frequency, the amplitude, as well as the maximum frequency.
\item One might also consider a binary system made of one black hole and one glueball dark star. The highest orbiting angular frequency (just before merging), $\omega$, satisfies $\omega^2 \simeq {2 G (M_{\rm BH} + M_{\rm DSS})}/{(R_{\rm BH} + R_{\rm DSS})^3}$.
Clearly, in the case when $M_{\rm BH} \gg M_{\rm DSS}$ and $R_{\rm BH} \gg R_{\rm DSS}$ (or the other way around),
it is similar to the highest frequency for the case of binary black hole (binary DSS) merging.
\item Another potential way to distinguish the glueball dark stars and black holes is to study the gamma ray portion in the total energy loss (compared to the energy carried away by gravitational waves) during the merging process. This may be significant if the dark glueball is light, so that its occupation number is high in the DSS, and if the dark glueball has strong enough coupling to the SM photon via higher dimensional operators. The gamma ray emissions can serve as a point-like source for locating the position of the DSS merger. We leave a more quantitative calculation of this interesting possibility to a future work.
\item Last but not least, it is also an interesting and relevant question to investigate the formation and accretion process of the dark stars in the early universe and during galaxy formation, which will inform us of the fraction of glueball dark matter in the form of the dark SU($N$) stars. The abundance of DSS depends on the primordial density perturbations in the glueball dark matter, and is therefore sensitive to the history of the very early universe. The relic abundance of the very massive DSS can impact the dynamics of (dwarf) galaxy formation and is strongly constrained~\cite{Brandt:2016aco}.
\end{itemize}
As an added note, a first-order phase transition of the dark $SU(N)$ sector in the early universe could also source gravitational waves~\cite{Schwaller:2015tja}. The corresponding frequency is typically much lower than that from binary dark stars considered in this work.
\bigskip
\noindent{\it Acknowledgement.} We would like to thank John Terning and Enrico Rinaldi for discussions.
The work of AS is supported in part by the DOE Grant No. DE-AC-02-98CH10886.
The work of YZ is supported by the DOE Grant No. DE-SC0010143.
Y.Z. acknowledges the hospitality of the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1066293.
In this section, we implement the APG-restart algorithm with the different restart schemes listed in \Cref{table: 1} to corroborate our theory that APG-restart has guaranteed convergence with any restart scheme. Specifically, for the fixed restart scheme we set the restart period to be $q=10,30,50$, respectively.
We first solve two smooth nonconvex problems, i.e., the logistic regression problem with a nonconvex regularizer (i.e., $g(x):=\alpha \sum_{i=1}^{d} \frac{x_i^2}{1+x_i^2}$) and the robust linear regression problem. For the logistic regression problem, we adopt the cross-entropy loss and set $\alpha=0.01$, and for the robust linear regression problem, we adopt the robust nonconvex loss $\ell(s):= \log(\frac{s^2}{2}+1)$. We test both problems on two LIBSVM datasets: a9a and w8a \cite{Chang_2011}. We use the stepsizes $\beta_k = 1, \lambda_{k}=(1+\alpha_{k+1})\beta_{k}$ for APG-restart, as suggested by our theorems. We note that in these experiments, we plot the loss gap versus the number of iterations for all algorithms. The comparison in terms of running time is similar, as all the algorithms require the same computation per iteration.
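For reference, the nonconvex regularizer and the robust loss used above, together with their gradients, can be coded as follows (a minimal sketch; the data loading and the training loop are omitted):
\begin{verbatim}
import numpy as np

ALPHA = 0.01

def g_nonconvex(x):
    """Nonconvex regularizer g(x) = alpha * sum_i x_i^2/(1 + x_i^2)."""
    return ALPHA * np.sum(x**2 / (1.0 + x**2))

def g_nonconvex_grad(x):
    return 2.0 * ALPHA * x / (1.0 + x**2)**2

def robust_loss(w, A, b):
    """Robust regression: sum_i l(a_i^T w - b_i), l(s) = log(s^2/2 + 1)."""
    s = A @ w - b
    return np.sum(np.log(0.5 * s**2 + 1.0))

def robust_loss_grad(w, A, b):
    s = A @ w - b
    return A.T @ (2.0 * s / (s**2 + 2.0))
\end{verbatim}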
\begin{figure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{figs/E1_a9a_Loss.jpg}
\caption{logistic regression \\ a9a}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{figs/E1_w8a_Loss.jpg}
\caption{logistic regression \\ w8a}
\end{subfigure}%
\\
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{figs/E2_a9a_Loss.jpg}
\caption{robust regression \\ a9a}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{figs/E2_w8a_Loss.jpg}
\caption{robust regression \\ w8a}
\end{subfigure}%
\caption{Comparison of different restart schemes in smooth nonconvex optimization.} \label{Experment_1}
\end{figure}
\Cref{Experment_1} shows the experiment results of APG-restart with the fixed scheme (constant $q$), the function value scheme (FS), the gradient mapping scheme (GS) and the non-monotone scheme (NS). It can be seen that APG-restart under the function scheme performs the best among all restart schemes. In fact, the function scheme restarts the APG algorithm most often in these experiments. The gradient mapping scheme and the non-monotone scheme have very similar performance, and both of them perform slightly worse than the function scheme. Moreover, the fixed restart schemes have the worst performance. In particular, the performance of the fixed scheme gets better as the restart period $q$ decreases (i.e., more restarts take place).
Next, we further add a nonsmooth $\ell_1$ norm regularizer to the objective functions of all the problems mentioned above, and apply APG-restart with different restart schemes to solve them. The results are shown in \Cref{Experment_3}. One can see that for the nonsmooth logistic regression, all the non-fixed restart schemes have comparable performance and they perform better than the fixed restart schemes. For the nonsmooth robust linear regression, both the gradient mapping scheme and the non-monotone scheme outperform the other schemes. In this experiment, the function scheme has a degraded performance that is comparable to the fixed restart schemes. This is possibly due to the highly nonconvex loss landscape.
\begin{figure}
\centering
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{figs/E3_a9a_Loss.jpg}
\caption{logistic regression \\a9a}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{figs/E3_w8a_Loss.jpg}
\caption{logistic regression \\ w8a}
\end{subfigure}%
\\
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{figs/E4_a9a_Loss.jpg}
\caption{robust regression \\ a9a}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{figs/E4_w8a_Loss.jpg}
\caption{robust regression \\ w8a}
\end{subfigure}%
\caption{Comparison of different restart schemes in {\em nonsmooth} nonconvex optimization.} \label{Experment_3}
\end{figure}
\section{Introduction}
Training modern machine learning models in real applications typically involves highly nonconvex optimization, with important examples including deep learning \cite{RELU}, natural language processing and computer vision. To solve these nonconvex optimization problems, gradient-based algorithms \cite{Nesterov2014} are popular choices due to their simplicity, effectiveness as well as well-understood convergence guarantees.
In practical training of machine learning models, momentum has been a successful and widely applied optimization trick that facilitates the convergence of gradient-based algorithms. Various types of momentum schemes have been developed, e.g., \cite{Nesterov2014,Beck2009,Tseng2010,Ghadimi2016b,Li:2015}, and have been shown to improve the order of convergence rates of gradient-based algorithms in solving convex and strongly convex optimization problems. In specific, gradient descent algorithms with momentum have been shown to achieve the complexity lower bound for convex optimization \cite{Nesterov2014,Beck2009} and have guaranteed convergence in nonconvex optimization \cite{Ghadimi2016b,Li:2015}.
Despite the theoretical advantages of momentum acceleration schemes, they do not fully exploit the potential for acceleration. For example, the basic momentum scheme \cite{Nesterov2014,Beck2009} adopts a diminishing momentum coefficient for accelerating smooth convex optimization, and it does not provide much momentum acceleration after a large number of iterations. Also, for accelerating strongly convex optimization, the choice of momentum coefficient requires knowledge of the condition number of the Hessian matrix, which is typically unknown a priori. To resolve these issues and further facilitate the practical convergence of gradient algorithms with momentum, various types of {\em parameter restart} techniques have been proposed, e.g., \cite{Donoghue2015,Fercoq2016,Fercoq2017,Giselsson2014,Kim2018,Liang2017,Lin2015,Liu2017,Renegar2018,Roulet2017}.
In these works, it has been demonstrated that restarting algorithm parameters (i.e., variables and momentum coefficient) periodically can suppress the oscillations of the training loss induced by the extrapolation step and improve the practical convergence in {\em convex} optimization. Specifically, parameter restart is typically triggered by certain occurrences that may slow down the convergence, such as function value divergence \cite{Donoghue2015,Renegar2018} and gradient mismatch \cite{Donoghue2015}. Therefore, parameter restart can reduce the instability and oscillations caused by momentum. However, in {\em nonconvex} optimization, applying parameter restart to gradient algorithms with momentum requires addressing the following open issues.
{\em (a)} While gradient algorithms with momentum and parameter restart have been well explored in {\em convex} optimization, they lack theoretical understanding in {\em nonconvex} optimization, which is important for modern machine learning. {\em (b)} Previous works on gradient algorithms with momentum and restart for convex optimization are based on very specific restart schemes in order to guarantee convergence, but in practice the best restart scheme can be problem-dependent. Therefore, it is much desired to design a momentum scheme that allows flexible parameter restart schemes with theoretical convergence guarantees.
{\em (c)} The existing gradient algorithms with momentum for nonconvex optimization have convergence guarantees at the cost of either introducing extra computation steps \cite{Li:2015,Li2017} or imposing restrictions on the objective function \cite{Ghadimi2016b}. It is important to explore whether parameter restart can help alleviate these costs or restrictions.
Considering all the issues above, we are motivated to design a gradient algorithm with momentum and parameter restart that {\em (a)} has convergence guarantee in nonconvex optimization, {\em (b)} allows applying flexible restart schemes in practice and {\em (c)} avoids the existing weaknesses and restrictions in the design of accelerated methods for nonconvex optimization. We summarize our contributions as follows.
\subsection{Our Contributions}
We consider the problem of minimizing a smooth nonconvex function plus a (non)smooth regularizer. To solve such a class of problems, we propose APG-restart: a momentum-accelerated proximal gradient algorithm with parameter restart (see \Cref{alg: Acc-PGD}) and show that APG-restart satisfies the following properties.
\begin{itemize}[leftmargin=*,topsep=0pt]
\item APG-restart allows for adopting any parameter restart scheme (hence covers many existing ones). In particular, it guarantees to make monotonic progress on function value between successive restart periods of iterations.
\item The design of the proximal momentum component in APG-restart leverages the notion of generalized gradient mapping (see \cref{eq: grad_map}), which leads to convergence guarantee in nonconvex optimization. Also, APG-restart does not require extra computation steps compared to other accelerated algorithms for nonconvex optimization \cite{Li2017,Li:2015}, and removes the restriction of bounded domain on the regularizer function in existing works \cite{Ghadimi2016b}.
\item APG-restart achieves the stationary condition at a global sublinear convergence rate (see \Cref{lemma: Acc-PGD dynamic}).
\item Under the Kurdyka-{\L}ojasiewicz (K{\L}) property of nonconvex functions (see \Cref{def: KL}), the variable sequence generated by APG-restart is guaranteed to converge to a critical point. Moreover, the asymptotic convergence rates of function value and variable sequences generated by APG-restart are fully characterized by the parameterization of the K{\L}~ property of the objective function. This work is the first study of gradient methods with momentum and parameter restart under the K{\L}~ property.
\end{itemize}
\subsection{Related Works}
\paragraph{Gradient algorithms with momentum and parameter restart:} Various types of parameter restart schemes have been proposed for accelerated gradient-based algorithms for convex optimization. Specifically, \cite{Donoghue2015} proposed to restart the accelerated gradient descent algorithm whenever certain function value-based criterion or gradient-based criterion is violated. These restart schemes were shown to achieve the optimal convergence rate without prior knowledge of the condition number of the function. \cite{Giselsson2014} further proposed an accelerated gradient algorithm with restart and established formal convergence rate analysis for smooth convex optimization. \cite{Lin2015} proposed a restart scheme that automatically estimates the strong convexity parameter and achieves a near-optimal iteration complexity. \cite{Fercoq2016,Fercoq2017} proposed a restart scheme for accelerated algorithms that achieves a linear convergence in convex optimization under the quadratic growth condition. \cite{Liu2017,Roulet2017} studied convergence rate of accelerated algorithms with restart in convex optimization under the error bound condition and the {\L}ojasiewicz condition, respectively. \cite{Renegar2018} proposed a restart scheme that is based on achieving a specified amount of decrease in function value. All these works studied accelerated gradient algorithms with restart in convex optimization, whereas this work focuses on nonconvex optimization.
\paragraph{Nonconvex optimization under K{\L}~ property:} The Kurdyka-{\L}ojasiewicz property is a generalization of the {\L}ojasiewicz gradient inequality for smooth analytic functions to nonsmooth sub-analytic functions. Such a local property was then widely applied to study the asymptotic convergence behavior of various gradient-based algorithms in nonconvex optimization \cite{Attouch2009,Bolte2014,Zhou2016,Zhou_2017a}. The K{\L}~ property has also been applied to study convergence properties of accelerated gradient algorithms \cite{Li2017,Li:2015} and heavy-ball algorithms \cite{Ochs2018,Liang2016} in nonconvex optimization. Some other works exploited the K{\L}~ property to study the convergence of second-order algorithms in nonconvex optimization, e.g., \cite{Yi2018}.
\section{Conclusion}
In this paper, we propose a novel accelerated proximal gradient algorithm with parameter restart for nonconvex optimization. Our proposed APG-restart allows for adopting any parameter restart scheme and has guaranteed convergence. We establish both the global convergence rate and various types of asymptotic convergence rates of the algorithm, and we demonstrate its effectiveness via numerical experiments. We expect that such a parameter-restart algorithmic framework can inspire new designs of optimization algorithms with faster convergence for solving nonconvex machine learning problems.
{\small
\section*{Acknowledgment}
The work of Z. Wang, K. Ji and Y. Liang was supported in part by the U.S. National Science Foundation under the grants CCF-1761506, CCF-1909291 and CCF-1900145.
\bibliographystyle{named}
\section{Preliminaries}
In this section, we introduce some definitions that are useful in our analysis later.
Consider a proper\footnote{An extended real-valued function $h$ is proper if its domain $\mathop{\mathrm{dom}} h := \{ x: h(x) < \infty \}$ is nonempty.} and lower-semicontinuous function $h:\mathbb{R}^d \to \mathbb{R}$ which is {\em not} necessarily smooth or convex. We introduce the following generalized notion of derivative for the function $h$.
\begin{definition}(Subdifferential and critical point, \cite{vari_ana})\label{def:sub}
The Frech\'et subdifferential $\widehat\partial h$ of function $h$ at $x\in \mathop{\mathrm{dom}} h$ is the set of $u\in \mathbb{R}^d$ defined as
\begin{align*}
\widehat\partial h(x) := \bigg\{u: \liminf_{z\neq x, z\to x} \frac{h(z) - h(x) - u^\intercal(z-x)}{\|z-x\|} \ge 0 \bigg\},
\end{align*}
and the limiting subdifferential $\partial h$ at $x\in\mathop{\mathrm{dom}} h$ is the graphical closure of $\widehat\partial h$ defined as:
\begin{align*}
\partial h(x) := \{ u: \exists x_k \to x, h(x_k) \to h(x), u_k \in \widehat{\partial} h(x_k) \to u \}.
\end{align*}
The set of critical points of $h$ is defined as $\mathbf{\mathop{\mathrm{crit}}}\!~h := \{ x: \mathbf{0}\in\partial h(x) \}$.
\end{definition}
Note that when the function $h$ is continuously differentiable, the limiting sub-differential $\partial h$ reduces to the usual notion of gradient $\nabla h$.
Next, we introduce the Kurdyka-{\L}ojasiewicz (K{\L}) property of a function $h$. Throughout, we define the distance between a point $x\in \mathbb{R}^d$ and a set $\Omega \subseteq \mathbb{R}^d$ as $\mathrm{dist}_\Omega(x) := \inf_{w\in \Omega} \|x - w\|$.
\begin{definition}(K{\L}~ property, \cite{Bolte2014})\label{def: KL}
A proper and lower-semicontinuous function $h$ is said to satisfy the K{\L}~ property if for every compact set $\Omega\subset \mathop{\mathrm{dom}} h$ on which $h$ takes a constant value $h_\Omega \in \mathbb{R}$, there exist $\varepsilon, \lambda >0$ such that for all $x \in \{z\in \mathbb{R}^d : \mathrm{dist}_\Omega(z)<\varepsilon, h_\Omega < h(z) <h_\Omega + \lambda\}$, the following inequality is satisfied
\begin{align}\label{eq: KL}
\varphi' \left(h(x) - h_\Omega\right) \mathrm{dist}_{\partial h(x)}(\mathbf{0}) \ge 1,
\end{align}
where $\varphi'$ is the derivative of function $\varphi: [0,\lambda) \to \mathbb{R}_+$, which takes the form $\varphi(t) = \frac{c}{\theta} t^\theta$ for some $c>0, \theta\in (0,1]$.
\end{definition}
To elaborate, consider the case where $h$ is differentiable. Then, the K{\L}~ property in \cref{eq: KL} can be rewritten as
\begin{align}
h(x) - h_\Omega \le C \|\nabla h(x)\|^{p} \label{eq: KLsimple}
\end{align}
for some constant $C>0$ and $p\in (1, +\infty)$. In fact, \Cref{eq: KLsimple} can be viewed as a generalization of the gradient dominance condition
that corresponds to the special case of $p = 2$. A large class of functions have been shown to satisfy the K{\L}~ property, e.g., sub-analytic functions, logarithm and exponential functions, etc \cite{Bolte2007}. These function classes cover most nonconvex objective functions encountered in practical applications, e.g., logistic loss, vector and matrix norms, rank, and polynomial functions. Please refer to \cite[Section 5]{Bolte2014} and \cite[Section 4]{Attouch2010} for more example functions.
To handle non-smooth objective functions, we introduce the following notion of proximal mapping.
\begin{definition}(Proximal mapping)\label{def:prox}
For a proper and lower-semicontinuous function $h$, its proximal mapping at $x\in \mathbb{R}^d$ with parameter $\eta > 0$ is defined as:
\begin{align}
\prox{\eta h}(x) := \mathop{\mathrm{argmin}}_{z\in \mathbb{R}^d} \bigg\{h(z) + \frac{1}{2\eta}\|z - x\|^2\bigg\}.
\end{align}
\end{definition}
\section{APG-restart for Nonsmooth \& Nonconvex Optimization}
In this section, we propose a novel momentum-accelerated proximal gradient with parameter restart (referred to as APG-restart) for solving nonsmooth and nonconvex problems.
Consider the composite optimization problem of minimizing a smooth and nonconvex function $f:\mathbb{R}^d \to \mathbb{R}$ plus a possibly nonsmooth and convex function $g:\mathbb{R}^d \to \mathbb{R}$, which is written as
\begin{align*}
\min_{x\in \mathbb{R}^d} F(x):= f(x) + g(x). \tag{P}
\end{align*}
We adopt the following standard assumptions on the objective function $F$ in the problem (P).
\begin{assum}\label{assum: f+g}
The objective function $F$ in the problem $\mathrm{(P)}$ satisfies:
\begin{enumerate}[leftmargin=*]
\item Function $F$ is bounded below, i.e., $F^*:=\inf_{x\in \mathbb{R}^d} F(x) > -\infty$;
\item For any $\alpha \in \mathbb{R}$, the level set $\{x: F(x) \le \alpha\}$ is compact;
\item The gradient of $f$ is $L$-Lipschitz continuous and $g$ is lower-semicontinuous and convex.
\end{enumerate}
\end{assum}
Under \Cref{assum: f+g}, we further introduce the following mapping for any $\eta>0$ and $x, u \in \mathbb{R}^d$:
\begin{align}
G_\eta(x,u) := \frac{1}{\eta} \big(x-\mathrm{prox}_{\eta g}(x-\eta u) \big). \label{eq: grad_map}
\end{align}
Such a mapping is well-defined and single-valued due to the convexity of $g$. Moreover,
the critical points of function $F$ (cf., \Cref{def:sub}) in the problem (P) can be alternatively characterized as $\mathbf{\mathop{\mathrm{crit}}}\!~F := \{ x: \mathbf{0}\in G_\eta(x,\nabla f(x)) \}$. Therefore, $G_\eta(x,\nabla f(x))$ serves as a type of `gradient' at point $x$, and we refer to such a mapping as {\em gradient mapping} in the rest of the paper. In particular, the gradient mapping reduces to the usual notion of gradient when the nonsmooth part $g\equiv 0$.
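For instance, with the $\ell_1$ regularizer $g(x)=\alpha\|x\|_1$, the proximal mapping is the soft-thresholding operator and the gradient mapping in \cref{eq: grad_map} takes the following concrete form (a minimal sketch; the value of $\alpha$ is an illustrative choice):
\begin{verbatim}
import numpy as np

def prox_l1(v, eta, alpha=0.01):
    """prox_{eta*g}(v) for g = alpha*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - eta * alpha, 0.0)

def gradient_mapping(x, u, eta, prox=prox_l1):
    """G_eta(x, u) = (x - prox_{eta g}(x - eta*u))/eta; it reduces to u
    when g = 0 and vanishes exactly at critical points of F."""
    return (x - prox(x - eta * u, eta)) / eta
\end{verbatim}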
\begin{algorithm}[H]
\caption{APG-restart for nonconvex optimization}
\label{alg: Acc-PGD}
{\bf Input:} $K \in \mathbb{N}$, restart periods $q_0=0, \{q_{t}\}_{t\ge 1} \in \mathbb{N}$, stepsizes $\{\lambda_k\}_{k}, \{\beta_k\}_{k} >0.$
{\bf Define:} $Q_t:= \sum_{\ell=0}^{t}q_\ell$.
{\bf Initialize:} $x_{-1} \in \mathbb{R}^d$.
\For{$k=0, 1, \ldots, K$}
{
Denote $t$ the largest integer such that $Q_t \le k$,\\
Set: $\alpha_k = \frac{2}{k-Q_t+2}$,\\
\If{$k= Q_t~$ for some $t\in \mathbb{N}$}
{
Reset: $x_{k} = y_k = x_{k-1},$
}
$z_{k} = (1-\alpha_{k+1})y_{k} + \alpha_{k+1} x_{k}$, \\
$x_{k+1} = x_k - \lambda_{k} G_{\lambda_k}(x_k, \nabla f(z_k))$, \\
$y_{k+1} = z_{k} - \beta_{k} G_{\lambda_k}(x_k, \nabla f(z_k))$.
}
{\textbf{Output:} $x_K$.}
\end{algorithm}
To solve the nonsmooth and nonconvex problem (P), we propose the APG-restart algorithm that is presented in \Cref{alg: Acc-PGD}. APG-restart consists of a new design of momentum schemes for updating the variables $x_k$ and $y_k$, an extrapolation step for updating the variable $z_k$, where $\alpha_{k+1}$ denotes the associated momentum coefficient, and the restart periods $\{q_t\}_t$. We next elaborate on the two major ingredients of APG-restart: the new momentum design and flexible restart scheduling with convergence guarantee.
\paragraph{New momentum design:}
We adopt new momentum steps in APG-restart for updating the variables $x_k$ and $y_k$, which are different from those of the AG method in \cite{Ghadimi2016b}; we compare our update rules with theirs as follows.
\begin{equation}
\text{(APG-restart):}
\left\{
\begin{aligned}
\!x_{k+1} \!=\! x_k \!-\! \lambda_{k} G_{\lambda_k}(x_k, \!\nabla f(z_k)), \\
\!y_{k+1} \!=\! z_{k} \!-\! \beta_{k} G_{\lambda_k}(x_k, \!\nabla f(z_k)).
\end{aligned}
\right.
\end{equation}
\begin{equation}
\text{(AG):}
\left\{
\begin{aligned}
\!x_{k+1} \!=\! \mathrm{prox}_{\lambda_k g}(x_k\!-\!\lambda_k \nabla f(z_k)) \\
\!y_{k+1} \!=\! \mathrm{prox}_{\lambda_k g}(z_k\!-\!\beta_k \nabla f(z_k))
\end{aligned}
\right.
\end{equation}
It can be seen from the above comparison that our APG-restart uses the same gradient mapping term $G_{\lambda_k}(x_k, \nabla f(z_k))$ to update both of the variables $x_k$ and $y_k$, while the AG algorithm in \cite{Ghadimi2016b} updates them using different proximal gradient terms. Consequently, our APG-restart is more computationally efficient, as it requires computing one gradient mapping per iteration while the AG algorithm needs to perform two proximal updates.
On the other hand, the update rules of the AG algorithm guarantee convergence in nonconvex optimization only for regularizers $g$ with bounded domain \cite{Ghadimi2016b}. Such a restriction rules out regularization functions with unbounded domain, which are commonly used in practical applications, e.g., $\ell_1, \ell_2$ regularization, elastic net, etc. In comparison, as we show in the analysis later, the update rules of APG-restart have guaranteed convergence in nonconvex optimization and do not require the regularizer $g$ to be domain-bounded.
\paragraph{Guarantee for any restart scheduling:} APG-restart retains the convergence guarantee with any restart scheduling. Specifically, by specifying an arbitrary sequence of iteration periods $\{q_{t}\}_t \in \mathbb{N}$, APG-restart calls the restart operation at the end of each period (i.e., whenever $k=Q_t$ for some $t$). Upon restart, both $x_k$ and $y_k$ are reset to the variable $x_{k-1}$ generated at the previous iteration, and the momentum coefficient $\alpha_k$ is reset to 1. In the subsequent iterations, the momentum coefficient is diminished inversely proportionally to the number of iterations within the restart period.
Since our APG-restart retains the convergence guarantee for any restart periods $\{q_t \}_t$, it can implement any criterion for deciding when to perform the parameter restart without losing this guarantee (see our analysis later). We list in \Cref{table: 1} some popular restart criteria from the existing literature (an implementation sketch follows the table) and compare their practical performance under our APG-restart framework in the experiment section later. We note that the restart criterion of the gradient mapping scheme implicitly depends on the gradient mapping, as $y_{k+1}-z_{k} \propto G_{\lambda_k}(x_k, \nabla f(z_k))$ from the update rule in \Cref{alg: Acc-PGD}.
\begin{table*}[ht]
\setlength{\tabcolsep}{4pt}
\center
{\small
\begin{tabular}{ccccc}
\toprule
\begin{tabular}{@{}c@{}} Restart \\ scheme \end{tabular}
& \begin{tabular}{@{}c@{}} Fixed restart \\ \cite{Nesterov07}\end{tabular}
& \begin{tabular}{@{}c@{}} Function value \\ \cite{Donoghue2015}\\\cite{Giselsson2014}\\\cite{Kim2018} \end{tabular}
& \begin{tabular}{@{}c@{}} Gradient mapping \\ \cite{Donoghue2015}\\\cite{Giselsson2014}\\\cite{Kim2018} \end{tabular}
& \begin{tabular}{@{}c@{}} Non-monotonic \\ \cite{Giselsson2014} \end{tabular} \\ \midrule
\begin{tabular}{@{}c@{}} Check \\ condition \end{tabular}
& \begin{tabular}{@{}c@{}} $q_t\equiv q \in \mathbb{N}$ \\ for all $t$ \end{tabular}
& \begin{tabular}{@{}c@{}} restart whenever \\ $F(x_k)>F(x_{k-1})$ \end{tabular}
& \begin{tabular}{@{}c@{}} restart whenever \\ $\inner{z_{k}\!-\!y_{k}}{y_{k+1}\!-\!z_{k}}$ \\ $\ge 0$ \end{tabular}
& \begin{tabular}{@{}c@{}} restart whenever \\ $\inner{z_{k}\!-\!y_{k}}{y_{k+1}\!-\!\frac{z_{k}\!+\!x_k}{2}}$\\ $\ge 0$ \end{tabular} \\
\bottomrule
\end{tabular}
}
\caption{Restart conditions for different parameter restart schemes.}\label{table: 1}
\end{table*}
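To make the scheduling concrete, the following sketch implements \Cref{alg: Acc-PGD} in event-driven form with the function value and gradient mapping criteria of \Cref{table: 1} built in (a minimal illustration: \texttt{prox\_g} denotes any proximal operator of $g$, and the stepsizes follow the choice $\beta_k\equiv 1/(8L)$ and $\lambda_k=(1+\alpha_{k+1})\beta_{k}$ used in our analysis):
\begin{verbatim}
import numpy as np

def apg_restart(grad_f, prox_g, F, x0, L, K, scheme="gradient"):
    """A minimal sketch of APG-restart (Algorithm 1)."""
    beta = 1.0 / (8.0 * L)
    x = y = x_prev = x0.copy()
    j = 0                                 # iterations since last restart
    for _ in range(K):
        if j == 0:                        # restart checkpoint k = Q_t:
            x = y = x_prev.copy()         # reset x_k = y_k = x_{k-1}
        alpha_next = 2.0 / (j + 3.0)      # alpha_{k+1} = 2/(k - Q_t + 3)
        lam = (1.0 + alpha_next) * beta
        z = (1.0 - alpha_next) * y + alpha_next * x
        G = (x - prox_g(x - lam * grad_f(z), lam)) / lam  # grad. mapping
        x_prev, x_new = x, x - lam * G
        y_new = z - beta * G
        do_restart = False
        if j > 0:                         # skip check right after a reset
            if scheme == "function":      # Table 1: function value scheme
                do_restart = F(x_new) > F(x_prev)
            else:                         # Table 1: gradient mapping scheme
                do_restart = float(np.dot(z - y, y_new - z)) >= 0.0
        x, y = x_new, y_new               # a fixed scheme would instead
        j = 0 if do_restart else j + 1    # use: do_restart = (j+1 == q)
    return x
\end{verbatim}
Any other criterion, e.g., the fixed or the non-monotone scheme, can be substituted for the trigger without affecting the convergence guarantees established in the next section.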
Performing parameter restart has appealing benefits. First, synchronizing the variables $x_k$ and $y_k$ periodically can suppress the deviation between them caused by the extrapolation step. This further helps to reduce the oscillation of the generated function value sequence. Furthermore,
restarting the momentum coefficient $\alpha_k$ periodically injects more momentum into the algorithm dynamic, and therefore facilitates the practical convergence of the algorithm.
\section{Convergence Analysis of APG-restart}
In this section, we study the convergence properties of APG-restart in solving nonconvex and nonsmooth optimization problems.
We first characterize the algorithm dynamic of APG-restart.
\begin{restatable}{lemma}{LemmaDynamicPGD}[Algorithm dynamic]\label{lemma: Acc-PGD dynamic}
Let \Cref{assum: f+g} hold and apply \Cref{alg: Acc-PGD} to solve the problem (P). Set $\beta_k \equiv \frac{1}{8L}$ and $\lambda_k \in [\beta_k, (1+\alpha_{k+1})\beta_{k}]$. Then, the sequence $\{x_k \}_k$ generated by APG-restart satisfies: for all $t=1,2,...$
\begin{align}
F(x_{Q_t}) \le &F(x_{Q_{t-1}}) - \frac{L}{4}\sum_{k=Q_{t-1}}^{Q_{t}-1} \|x_{k+1} - x_k\|^2, \label{eq: 9}\\
\mathrm{dist}_{\partial F(x_{Q_t})}^2(\mathbf{0}) &\le 162L^2 {\sum_{k=Q_{t-1}}^{Q_t-1}\|x_{k+1} - x_k\|^2}. \label{eq: 10}
\end{align}
\end{restatable}
\Cref{lemma: Acc-PGD dynamic} characterizes the period-wise algorithm dynamic of APG-restart. Specifically, \cref{eq: 9} shows that the function value sequence generated by APG-restart is guaranteed to decrease between two adjacent restart checkpoints (i.e., $Q_{t-1}$ and $Q_t$), and the corresponding progress $F(x_{Q_{t-1}})- F(x_{Q_t})$ is bounded below by the square length of the iteration path between the restart checkpoints, i.e., $\sum_{k=Q_{t-1}}^{Q_t-1} \|x_{k+1} - x_k\|^2$. On the other hand, \cref{eq: 10} shows that the norm of the subdifferential at the $t$-th restart checkpoint is bounded by the square length of the same iteration path. In summary, the algorithm dynamic of APG-restart is different from that of traditional gradient-based algorithms in several aspects: First, the dynamic of APG-restart is characterized at the restart checkpoints, while the dynamic of gradient descent is characterized iteration-wise \cite{Attouch2009,Attouch2013}. As we elaborate later, such a property makes the convergence analysis of APG-restart more involved. Second, APG-restart makes monotonic progress on the function value between two adjacent restart checkpoints. In other accelerated gradient algorithms, such a monotonicity property is achieved by introducing a function value check step \cite{Li:2015} or an additional proximal gradient step \cite{Li2017}.
Based on the algorithm dynamic in \Cref{lemma: Acc-PGD dynamic}, we obtain the following global convergence rate of APG-restart for nonconvex and nonsmooth optimization. Throughout the paper, we denote $f(n) = \Theta(g(n))$ if and only if for some $0<c_1<c_2$, $c_1 g(n) \le f(n) \le c_2 g(n)$ for all $n\ge n_0$.
\begin{restatable}{thm}{TheoremGlobal}[Global convergence rate]\label{thm: global}
Under the same conditions as those of \Cref{lemma: Acc-PGD dynamic}, the sequence $\{z_k \}_k$ generated by APG-restart satisfies: for all $K=1,2,...$
\begin{align*}
\min_{0\le k\le K-1}\|G_{\lambda_k}(z_k, \nabla f(z_k))\|^2 \le \Theta \Big({\frac{L\big(F(x_{0}) - F^*\big)}{K}}\Big).
\end{align*}
\end{restatable}
\Cref{thm: global} establishes the global convergence rate of APG-restart in terms of the gradient mapping, which we recall characterizes the critical point of the nonconvex objective function $F$. In particular, the order of the above global convergence rate matches that of other accelerated gradient algorithms \cite{Ghadimi2016b} for nonconvex optimization, and APG-restart further benefits from the flexible parameter restart scheme that provides extra acceleration in practice (as we demonstrate via experiments later).
\Cref{thm: global} does not fully capture the convergence properties of APG-restart. To elaborate, convergence of the gradient mapping in \Cref{thm: global} does not necessarily guarantee the convergence of the {\em variable sequence} generated by APG-restart. Moreover, the convergence rate estimate is based on the global Lipschitz condition of the objective function, which may not capture the local geometry of the function around critical points and can therefore yield a coarse rate estimate in the asymptotic regime. To explore stronger convergence results of APG-restart, we next exploit the ubiquitous Kurdyka-{\L}ojasiewicz (K{\L}) property (cf., \Cref{def: KL}) of nonconvex functions. We make the following assumption.
\begin{assum}\label{assum: KL}
The objective function $F$ in the problem $\mathrm{(P)}$ satisfies the K{\L}~ property.
\end{assum}
Based on the algorithm dynamic in \Cref{lemma: Acc-PGD dynamic} and further leveraging the K{\L}~ property of the objective function, we obtain the following convergence result of APG-restart in nonconvex optimization.
\begin{restatable}{thm}{TheoremVariable}[Variable convergence]\label{thm: Acc-GD variable}
Let Assumptions \ref{assum: f+g} and \ref{assum: KL} hold and apply \Cref{alg: Acc-PGD} to solve the problem (P). Set $\beta_k \equiv \frac{1}{8L}$ and $\lambda_k \in [\beta_k, (1+\alpha_{k+1})\beta_{k}]$. Define the length of the iteration path of the $t$-th restart period as
$ L_t:= \sqrt{\sum_{k=Q_t}^{Q_{t+1}-1} \|x_{k+1}-x_k\|^2}.$
Then, the sequence $\{L_t \}_t$ generated by APG-restart has finite total length, i.e.,
\begin{align}
\sum_{t=0}^\infty L_t < +\infty. \label{eq: finite_len}
\end{align}
Consequently, the variable sequences $\{x_k \}_k, \{y_k \}_k, \{z_k \}_k$ generated by APG-restart converge to the same critical point of the problem (P), i.e.,
\begin{align}
x_k,y_k,z_k \overset{k}{\to} x^* \in \mathbf{\mathop{\mathrm{crit}}}\!~F.
\end{align}
\end{restatable}
\Cref{thm: Acc-GD variable} establishes the formal convergence of APG-restart in nonconvex optimization. We note that such a convergence guarantee holds for any parameter restart scheme, which demonstrates the flexibility and generality of our algorithm. Also, unlike other accelerated gradient-type algorithms that guarantee only convergence of the function value \cite{Li:2015,Li2017}, APG-restart is guaranteed to generate variable sequences that converge to a critical point in nonconvex optimization.
To highlight the proof technique, we first exploit the dynamic of APG-restart in \Cref{lemma: Acc-PGD dynamic} to characterize the limit points of the sequences $\{x_{Q_t} \}_t, \{F(x_{Q_t}) \}_t$ that are indexed by the restart checkpoints. Then, we further show that the entire sequences $\{x_{k} \}_k, \{F(x_{k}) \}_k$ share the same limiting properties, which in turn guarantees that the sequences eventually enter a local region of the objective function where the K{\L}~ property can be exploited. Taking advantage of the K{\L}~ property, we are able to show that the length of the optimization path is finite as iteration $k\to \infty$. Consequently, the generated variable sequences can be shown to converge to a certain critical point of the problem (P).
Besides the variable convergence guarantee under the K{\L}~ property, we also obtain various types of convergence rate estimates of APG-restart depending on the specific parameterization of the local K{\L}~ property of the objective function. We obtain the following results.
\begin{restatable}{thm}{TheoremRates}[Convergence rate of function value]\label{thm: Acc-GD rates}
Let Assumptions \ref{assum: f+g} and \ref{assum: KL} hold and apply \Cref{alg: Acc-PGD} to solve the problem (P). Set $\beta_k \equiv \frac{1}{8L}$ and $\lambda_k \in [\beta_k, (1+\alpha_{k+1})\beta_{k}]$. Suppose the algorithm generates a sequence $\{x_k \}_k$ that converges to a certain critical point $x^*$ where the K{\L} property holds with parameter $\theta\in (0,1]$. Then, there exists a sufficiently large $t_0\in \mathbb{N}$ such that for all $t\ge t_0$,
\begin{enumerate}[leftmargin=*]
\item If $\theta = 1$, then $F(x_{Q_t}) \downarrow F(x^*)$ within a finite number of periods of iterations;
\item If $\theta \in [\frac{1}{2}, 1)$, then $F(x_{Q_t}) \downarrow F(x^*)$ linearly as
$F(x_{Q_t}) - F(x^*) \le \exp \big(-\Theta(t-t_0)\big);$
\item If $\theta \in (0, \frac{1}{2})$, then $F(x_{Q_t}) \downarrow F(x^*)$ sub-linearly as
$F(x_{Q_t}) - F(x^*) \le \Theta \Big((t-t_0)^{-\frac{1}{1-2\theta}}\Big).$
\end{enumerate}
\end{restatable}
\begin{restatable}{thm}{TheoremRatesVariable}[Convergence rate of variable]\label{thm: Acc-GD var_rates}
Under the same conditions as those of \Cref{thm: Acc-GD rates}, suppose APG-restart generates a sequence $\{x_k \}_k$ that converges to a certain critical point $x^*$ where the K{\L} property holds with parameter $\theta\in (0,1]$. Then, there exists a sufficiently large $t_0\in \mathbb{N}$ such that for all $t\ge t_0$,
\begin{enumerate}[leftmargin=*]
\item If $\theta = 1$, then $x_{Q_t} \overset{t}{\to} x^*$ within a finite number of periods of iterations;
\item If $\theta \in [\frac{1}{2}, 1)$, then $x_{Q_t} \overset{t}{\to} x^*$ linearly as
$\|x_{Q_t} - x^*\| \le \exp \big(-\Theta(t-t_0)\big);$
\item If $\theta \in (0, \frac{1}{2})$, then $x_{Q_t} \overset{t}{\to} x^*$ sub-linearly as $\|x_{Q_t} - x^*\| \le\Theta \Big((t-t_0)^{-\frac{\theta}{1-2\theta}}\Big)$.
\end{enumerate}
\end{restatable}
\Cref{thm: Acc-GD rates} and \Cref{thm: Acc-GD var_rates} establish the asymptotic convergence rates of the function value sequence and the variable sequence generated by APG-restart, respectively. Intuitively, after a sufficiently large number of iterations, APG-restart enters a local neighborhood of a certain critical point. In such a case, the global convergence rate characterized in \Cref{thm: global} can be a coarse estimate because it exploits only the global Lipschitz property of the function. On the contrary, the local K{\L}~ property characterizes the function geometry in a more accurate way and leads to the above tighter convergence rate estimates. In particular, the K{\L}~ parameter $\theta$ captures the `sharpness' of the local geometry of the function, i.e., a larger $\theta$ induces a faster convergence rate.
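As a purely numerical aside (not part of the analysis), the role of the K{\L}~ parameter $\theta$ in \Cref{thm: Acc-GD rates} can be illustrated by simulating the gap recursion $r_{t-1} - r_t \ge c\, r_t^{2(1-\theta)}$ that drives the proofs; the constant $c$, the initial gap and the one-step approximation in the sketch below are arbitrary choices.
\begin{verbatim}
# Toy simulation of the gap recursion r_{t-1} - r_t >= c * r^{2(1-theta)}.
# theta >= 1/2 gives (at least) linear decay; theta < 1/2 gives the
# sub-linear t^{-1/(1-2*theta)} behaviour; theta close to 1 terminates fast.
import numpy as np

def simulate(theta, c=0.1, r0=1.0, T=200):
    gaps = [r0]
    r = r0
    for _ in range(T):
        # approximate the tightest allowed decrease with r_t ~ r_{t-1} on
        # the right-hand side (a single fixed-point pass)
        r = max(r - c * r ** (2.0 * (1.0 - theta)), 0.0)
        gaps.append(r)
    return np.array(gaps)

for theta in (0.25, 0.5, 0.9):
    g = simulate(theta)
    print(f"theta={theta}: r_50={g[50]:.2e}, r_200={g[200]:.2e}")
\end{verbatim}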
\section{Proof of \Cref{lemma: Acc-PGD dynamic}}
\LemmaDynamicPGD*
\begin{proof}
The proof utilizes some intermediate results developed in \cite{Ghadimi2016b}, and we include the proof of these results for completeness of presentation. Throughout, we define ${\Gamma}_0 = 0, {\Gamma}_1 = 1, {\Gamma}_k = (1-{\alpha}_k) {\Gamma}_{k-1}$ for $k=2,3,...$, with the recursion restarting (i.e., $\Gamma_{Q_t}=0$ and $\Gamma_{Q_t+1}=1$) at each restart checkpoint. By the restarting nature of $\alpha_{k}$, it is easy to check that ${\Gamma}_k = 0$ whenever $k=Q_t$ for some $t$, and $\Gamma_{k}=\frac{2}{(k-Q_t)(k-Q_t+1)}$ otherwise, where $Q_t$ denotes the latest restart checkpoint preceding $k$.
Let the sequences $\{x_k\}_k, \{y_k\}_k, \{z_k\}_k$ be generated by \Cref{alg: Acc-PGD}. Let us first analyze a certain $(t-1)$-th restart period, which consists of the iterations $\{\ell: Q_{t-1}\le \ell \le Q_t-1\}$. We first bound the term $\|y_k - x_k\|$ for any iteration $k$ within this restart period. By the update rule of the momentum scheme in \Cref{alg: Acc-PGD}, we obtain that
\begin{align}
y_k - x_k &= z_{k-1} - \beta_{k-1} G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1})) - (x_{k-1} - \lambda_{k-1} G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))) \nonumber\\
&= (1-{\alpha}_k) (y_{k-1} - x_{k-1}) + (\lambda_{k-1} - \beta_{k-1}) G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1})).
\end{align}
Dividing both sides by ${\Gamma}_k$ and noting that $\frac{1-{\alpha}_k}{{\Gamma}_k} = \frac{1}{{\Gamma}_{k-1}}$, we further obtain that
\begin{align}
\frac{y_k - x_k}{{\Gamma}_k} &= \frac{y_{k-1} - x_{k-1}}{{\Gamma}_{k-1}} + \frac{\lambda_{k-1} - \beta_{k-1}}{{\Gamma}_k} G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1})).
\end{align}
Telescoping the above equality over the iterations $Q_{t-1},...,k$ { within this restart period}, noting that $x_{Q_{t-1}}=y_{Q_{t-1}}$ by restart and rearranging, we obtain that
\begin{align}
\|y_k - x_k \|^2 &= \|{\Gamma}_{k} \sum_{\ell=Q_{t-1}}^{k-1} \frac{\lambda_{\ell} - \beta_{\ell}}{{\Gamma}_{\ell+1}} G_{\lambda_{\ell}}(x_{\ell}, \nabla f(z_{\ell}))\|^2 \nonumber \\
&= \|{\Gamma}_{k} \sum_{\ell=Q_{t-1}}^{k-1} \frac{{\alpha}_{\ell+1}}{{\Gamma}_{\ell+1}} \frac{\lambda_{\ell} - \beta_{\ell}}{{\alpha}_{\ell+1}} G_{\lambda_{\ell}}(x_{\ell}, \nabla f(z_{\ell}))\|^2 \nonumber \\
&\overset{(i)}{\le} {\Gamma}_{k} \sum_{\ell=Q_{t-1}}^{k-1} \frac{{\alpha}_{\ell+1}}{{\Gamma}_{\ell+1}} \frac{(\lambda_{\ell} - \beta_{\ell})^2}{{\alpha}_{\ell+1}^2} \|G_{\lambda_{\ell}}(x_{\ell}, \nabla f(z_{\ell}))\|^2 \nonumber \\
&= {\Gamma}_{k} \sum_{\ell=Q_{t-1}}^{k-1} \frac{(\lambda_{\ell} - \beta_{\ell})^2}{{\Gamma}_{\ell+1}{\alpha}_{\ell+1}} \|G_{\lambda_{\ell}}(x_{\ell}, \nabla f(z_{\ell}))\|^2, \label{eq: 1}
\end{align}
where (i) uses the facts that $\{{\Gamma}_k\}_k$ is a decreasing sequence within one restart period, $\sum_{\ell=Q_{t-1}}^{k-1} \frac{{\alpha}_{\ell+1}}{{\Gamma}_{\ell+1}} = \frac{1}{{\Gamma}_k}$ for all $k\le Q_t-1$ and Jensen's inequality.
We also need the following lemma, which was established as Lemma 1 and Proposition 1 in \cite{Ghadimi2016}.
\begin{lemma}(Lemma 1 and Proposition 1, \cite{Ghadimi2016})\label{aux: 4}
Let $g$ be a proper and closed convex function. Then, for all $u, v, x\in \mathbb{R}^d$ and $\eta>0$, the following statements hold:
\begin{align*}
&\inner{u}{G_{\eta}(x,u)} \ge \|G_{\eta}(x,u)\|^2 + \frac{1}{\eta} \big(g(\mathrm{prox}_{\eta g}(x-\eta u)) - g(x) \big), \\
&\|G_{\eta}(x,u) - G_{\eta}(x,v)\| \le \|u-v\|.
\end{align*}
\end{lemma}
Next, we further bound the function value gap $F(x_k)-F(x_{k-1})$ of iteration $k$ {within this restart period}. By the Lipschitz continuity of $\nabla f$ in item 3 of \Cref{assum: f+g}, we obtain that
\begin{align}
f(x_k) &\le f(x_{k-1}) + \inner{\nabla f(x_{k-1})}{x_k - x_{k-1}} + \frac{L}{2}\|x_k - x_{k-1}\|^2 \nonumber \\
&= f(x_{k-1}) + \inner{\nabla f(x_{k-1})}{- \lambda_{k-1} G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))} + \frac{L\lambda_{k-1}^2}{2}\|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|^2 \nonumber \\
&= f(x_{k-1}) - \lambda_{k-1} \inner{\nabla f(x_{k-1})-\nabla f(z_{k-1})}{G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))} \nonumber\\
&\quad- \lambda_{k-1} \inner{\nabla f(z_{k-1})}{G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))}
+ \frac{L\lambda_{k-1}^2}{2}\|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|^2 \nonumber\\
&\overset{(i)}{\le} f(x_{k-1}) - \lambda_{k-1} \inner{\nabla f(x_{k-1})-\nabla f(z_{k-1})}{G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))} - \lambda_{k-1} \|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|^2 \nonumber\\
&\quad - \big(g(\mathrm{prox}_{\lambda_{k-1} g}(x_{k-1}-\lambda_{k-1} \nabla f(z_{k-1}))) - g(x_{k-1}) \big) + \frac{L\lambda_{k-1}^2}{2}\|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|^2 \nonumber\\
&= f(x_{k-1}) - \lambda_{k-1} \inner{\nabla f(x_{k-1})-\nabla f(z_{k-1})}{G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))} - \lambda_{k-1} \|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|^2 \nonumber\\
&\quad - \big(g(x_k) - g(x_{k-1}) \big) + \frac{L\lambda_{k-1}^2}{2}\|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|^2, \nonumber
\end{align}
where (i) follows from \Cref{aux: 4}. Rearranging the above inequality and using the Cauchy-Schwarz inequality yields that
\begin{align}
F(x_k) &\le F(x_{k-1}) - \lambda_{k-1}(1-\frac{L\lambda_{k-1}}{2}) \|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|^2 \nonumber\\
&\quad + \lambda_{k-1} \|\nabla f(x_{k-1})-\nabla f(z_{k-1})\| \|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|. \label{eq: 15}
\end{align}
Also, note that
\begin{align}
\|\nabla f(x_{k-1}) - \nabla f(z_{k-1})\| &\le L\|x_{k-1} - z_{k-1}\| \overset{(i)}{\le} L(1-{\alpha}_k) \|y_{k-1} - x_{k-1}\| \nonumber,
\end{align}
where (i) follows from the update rule of the momentum scheme. Substituting the above inequality into \cref{eq: 15} yields that
\begin{align}
F(x_k) &\le F(x_{k-1}) - \lambda_{k-1}(1-\frac{L\lambda_{k-1}}{2})\|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|^2 \nonumber\\
&\quad+ L\lambda_{k-1}(1-{\alpha}_k) \|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|\|y_{k-1} - x_{k-1}\| \nonumber\\
&\le F(x_{k-1}) - \lambda_{k-1}(1-\frac{L\lambda_{k-1}}{2})\|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|^2 \nonumber\\
&\quad+ \frac{L\lambda_{k-1}^2}{2} \|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|^2 + \frac{L(1-{\alpha}_k)^2}{2}\|y_{k-1} - x_{k-1}\|^2 \nonumber\\
&= F(x_{k-1}) - \lambda_{k-1}(\frac{1}{2}-L\lambda_{k-1}) \|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|^2 + \frac{L(1-{\alpha}_k)^2}{2}\|y_{k-1} - x_{k-1}\|^2 \nonumber\\
&\le F(x_{k-1}) - \lambda_{k-1}(\frac{1}{2}-L\lambda_{k-1}) \|G_{\lambda_{k-1}}(x_{k-1}, \nabla f(z_{k-1}))\|^2 \nonumber\\
&\quad+ \frac{L{\Gamma}_{k-1}}{2}\sum_{\ell=Q_{t-1}}^{k-2} \frac{\lambda_{\ell} - \beta_{\ell}}{{\alpha}_{\ell+1} {\Gamma}_{\ell+1}} \|G_{\lambda_{\ell}}(x_{\ell}, \nabla f(z_{\ell}))\|^2, \label{eq: 12}
\end{align}
where the last inequality uses \cref{eq: 1} and the fact that $0<{\alpha}_k <1$. Next, telescoping the above inequality over the iterations $Q_{t-1},...,k$ { within this restart period}, we further obtain that
\begin{align}
F(x_k) &\le F(x_{Q_{t-1}}) - \sum_{j=Q_{t-1}}^{k-1} \lambda_{j}(\frac{1}{2}- L\lambda_{j}) \|G_{\lambda_{j}}(x_{j}, \nabla f(z_{j}))\|^2 \nonumber\\
&\quad + \sum_{j=Q_{t-1}}^{k-1} \frac{L{\Gamma}_{j}}{2} \sum_{\ell=Q_{t-1}}^{j-1} \frac{(\lambda_{\ell} - \beta_{\ell})^2}{{\Gamma}_{\ell+1}{\alpha}_{\ell+1}} \|G_{\lambda_{\ell}}(x_{\ell}, \nabla f(z_{\ell}))\|^2 \nonumber\\
&= F(x_{Q_{t-1}}) - \sum_{j=Q_{t-1}}^{k-1} \lambda_{j}(\frac{1}{2}- L\lambda_{j}) \|G_{\lambda_{j}}(x_{j}, \nabla f(z_{j}))\|^2 \nonumber\\
&\quad+ \frac{L}{2} \sum_{\ell=Q_{t-1}}^{k-1} \frac{(\lambda_{\ell} - \beta_{\ell})^2}{{\Gamma}_{\ell+1}{\alpha}_{\ell+1}} \|G_{\lambda_{\ell}}(x_{\ell}, \nabla f(z_{\ell}))\|^2 (\sum_{j=\ell-1}^{k-1} {\Gamma}_{j}) \nonumber\\
&\overset{(i)}{\le} F(x_{Q_{t-1}}) - \sum_{j=Q_{t-1}}^{k-1} \lambda_{j}(\frac{1}{2}- L\lambda_{j}) \|G_{\lambda_{j}}(x_{j}, \nabla f(z_{j}))\|^2 \nonumber\\
&\quad+ \frac{L}{2} \sum_{\ell=Q_{t-1}}^{k-1} \frac{2(\lambda_{\ell} - \beta_{\ell})^2}{(\ell-Q_{t-1}-1) {\Gamma}_{\ell+1}{\alpha}_{\ell+1}} \|G_{\lambda_{\ell}}(x_{\ell}, \nabla f(z_{\ell}))\|^2 \nonumber\\
&= F(x_{Q_{t-1}}) - \sum_{j=Q_{t-1}}^{k-1} \bigg[\lambda_{j}(\frac{1}{2}- L\lambda_{j}) - \frac{L(\lambda_{j} - \beta_{j})^2}{(j-Q_{t-1}-1) {\Gamma}_{j+1}{\alpha}_{j+1}} \bigg] \|G_{\lambda_{j}}(x_{j}, \nabla f(z_{j}))\|^2 \nonumber\\
&\overset{(ii)}{\le} F(x_{Q_{t-1}}) - \sum_{j=Q_{t-1}}^{k-1} \big[\frac{\lambda_{j}}{4} - \frac{L{\alpha}_{j+1} \beta_{j}^2}{(j-Q_{t-1}-1) {\Gamma}_{j+1}} \big] \|G_{\lambda_{j}}(x_{j}, \nabla f(z_{j}))\|^2 \nonumber\\
&\le F(x_{Q_{t-1}}) - \sum_{j=Q_{t-1}}^{k-1} \big[\frac{\beta_{j}}{4} - \frac{\beta_{j}}{8} \big] \|G_{\lambda_{j}}(x_{j}, \nabla f(z_{j}))\|^2 \nonumber\\
&\le F(x_{Q_{t-1}}) - \sum_{j=Q_{t-1}}^{k-1} \frac{1}{64L\lambda_{j}^2} \|x_{j+1} - x_j\|^2 \nonumber\\
&\le F(x_{Q_{t-1}}) - \sum_{j=Q_{t-1}}^{k-1} \frac{L}{4} \|x_{j+1} - x_j\|^2, \label{eq: 8}
\end{align}
where (i) follows from the fact that $\sum_{j=\ell-1}^{k-1} {\Gamma}_{j} = 2\sum_{j=\ell-1}^{k-1} \big(\frac{1}{j-Q_{t-1}} - \frac{1}{j-Q_{t-1}+1}\big) \le \frac{2}{\ell-Q_{t-1}-1}$, and (ii) uses the facts that $L\lambda_{j} \le 2L\beta_j \le \frac{1}{4}$ and $\lambda_{j}-\beta_{j} \le {\alpha}_{j+1} \beta_{j}$. Then, setting $k$ in the above inequality to be the last iteration $Q_t-1$ within this restart period and noting that $x_{Q_t} = x_{Q_t-1}$, we obtain that
\begin{align}
F(x_{Q_t}) &\le F(x_{Q_{t-1}}) - \frac{L}{4}\sum_{k=Q_{t-1}}^{Q_t-1} \|x_{k+1} - x_k\|^2. \label{eq: 7}
\end{align}
The first inequality is proved.
To prove the second inequality, by the optimality condition of the proximal gradient update for $x_{k}$, we obtain that
\begin{align}
-\nabla f(z_{k-1}) - \frac{1}{\lambda_{k-1}}(x_{k}-x_{k-1}) \in \partial g(x_{k}), \nonumber
\end{align}
which further implies that
\begin{align}
\nabla f(x_{k})-\nabla f(z_{k-1}) - \frac{1}{\lambda_{k-1}}(x_{k}-x_{k-1}) \in \partial F(x_{k}). \nonumber
\end{align}
Note that $\mathrm{dist}_{\partial F(x_{k})}(\mathbf{0}) \le \|u_k\|$ for any $u_k \in \partial F(x_{k})$. Therefore, the above relation further implies that
\begin{align}
\mathrm{dist}_{\partial F(x_{k})}(\mathbf{0}) &\le \|\nabla f(x_{k})-\nabla f(z_{k-1}) - \frac{1}{\lambda_{k-1}}(x_{k}-x_{k-1})\| \nonumber\\
&\le \|\nabla f(x_{k})- \nabla f(x_{k-1})\| + \|\nabla f(x_{k-1}) -\nabla f(z_{k-1})\| + \frac{1}{\lambda_{k-1}} \|x_{k}-x_{k-1}\| \nonumber\\
&\overset{(i)}{\le} 9L \|x_k - x_{k-1}\| + L \sqrt{{\Gamma}_{k-1} \sum_{\ell=Q_{t-1}}^{k-2} \frac{(\lambda_{\ell} - \beta_{\ell})^2}{{\Gamma}_{\ell+1}{\alpha}_{\ell+1}} \|G_{\lambda_{\ell}}(x_{\ell}, \nabla f(z_{\ell}))\|^2}, \label{eq: 13}
\end{align}
where (i) uses the Lipschitz gradient property, the update rule of \Cref{alg: Acc-PGD} and \cref{eq: 1}. Squaring both sides of the above inequality and rearranging, we further obtain that
\begin{align}
\mathrm{dist}_{\partial F(x_{k})}^2(\mathbf{0}) &\le 162L^2 \|x_k - x_{k-1}\|^2 + 2L^2{\Gamma}_{k-1} \sum_{\ell=Q_{t-1}}^{k-2} \frac{(\lambda_{\ell} - \beta_{\ell})^2}{{\Gamma}_{\ell+1}{\alpha}_{\ell+1}} \|G_{\lambda_{\ell}}(x_{\ell}, \nabla f(z_{\ell}))\|^2 \nonumber\\
&\le 162L^2 \|x_k - x_{k-1}\|^2 + 2L^2{\Gamma}_{k-1} \sum_{\ell=Q_{t-1}}^{k-2} \frac{{\alpha}_{\ell+1} \beta_{\ell}^2}{{\Gamma}_{\ell+1} \lambda_{\ell}^2} \|x_{\ell+1} - x_{\ell}\|^2 \nonumber\\
&\le 162L^2 \|x_k - x_{k-1}\|^2 + 2L^2{\Gamma}_{k-1} \sum_{\ell=Q_{t-1}}^{k-2} \frac{{\alpha}_{\ell+1} }{{\Gamma}_{\ell+1}} \|x_{\ell+1} - x_{\ell}\|^2 \nonumber\\
&\le 162L^2 \|x_k - x_{k-1}\|^2 + 2L^2\frac{2}{(k-Q_{t-1}-1)(k-Q_{t-1}-2)} \sum_{\ell=Q_{t-1}}^{k-2} (\ell+1-Q_{t-1}) \|x_{\ell+1} - x_{\ell}\|^2 \nonumber\\
&\le 162L^2 \sum_{\ell=Q_{t-1}}^{k-1} \|x_{\ell+1} - x_{\ell}\|^2. \nonumber
\end{align}
Then, setting $k$ in the above inequality to be the last iteration $Q_t-1$ within this restart period and noting that $x_{Q_t} = x_{Q_t-1}$, we obtain that
\begin{align}
\mathrm{dist}_{\partial F(x_{Q_t})}^2(\mathbf{0}) \le 162L^2 \sum_{k=Q_{t-1}}^{Q_t-1} \|x_{k+1} - x_{k}\|^2. \nonumber
\end{align}
The second inequality is proved.
\end{proof}
\section{Proof of \Cref{thm: global}}
\TheoremGlobal*
\begin{proof}
Consider any iteration $K$ and its closest preceding restart checkpoint $Q_t$ (for some $t$). In the proof of \Cref{lemma: Acc-PGD dynamic}, we have shown that (see \cref{eq: 8})
\begin{align}
F(x_{K}) &\le F(x_{Q_{t}}) - \frac{L}{4}\sum_{j=Q_{t}}^{K-1} \|x_{j+1} - x_j\|^2. \label{eq: 11}
\end{align}
On the other hand, by \Cref{lemma: Acc-PGD dynamic} we know that for all $\ell=1,..., t$,
\begin{align}
F(x_{Q_\ell}) &\le F(x_{Q_{\ell-1}}) - \frac{L}{4}\sum_{k=Q_{\ell-1}}^{Q_{\ell}-1} \|x_{k+1} - x_k\|^2.
\end{align}
Telescoping the above inequality over $\ell=1,..., t$, combining with \cref{eq: 11}, and noting that $Q_0=q_0=0$ and $x_{Q_\ell} = x_{Q_{\ell}-1}$ for all $\ell$, we obtain that
\begin{align*}
F(x_{K}) &\le F(x_{0}) - \frac{L}{4}\sum_{j=0}^{K-1} \|x_{j+1} - x_j\|^2 = F(x_{0}) - \frac{L}{4}\sum_{j=0}^{K-1} \lambda_{j}^2 \|G_{\lambda_j}(z_j, \nabla f(z_j))\|^2.
\end{align*}
Note that $\lambda_j \ge \beta_{j} = \frac{1}{8L}$. Then, the above inequality further implies that
\begin{align*}
\frac{1}{256L}\sum_{j=0}^{K-1} \|G_{\lambda_j}(z_j, \nabla f(z_j))\|^2 \le F(x_{0}) - F(x_{K}) \le F(x_{0}) - F^*.
\end{align*}
Ignoring the universal constants in the above inequality and taking the minimum, we obtain that
\begin{align*}
\min_{0\le k\le K-1}\|G_{\lambda_k}(z_k, \nabla f(z_k))\|^2 \le \Theta\bigg({\frac{L\big(F(x_{0}) - F^*\big)}{K}}\bigg).
\end{align*}
\end{proof}
\section{Proof of \Cref{thm: Acc-GD variable}}
\TheoremVariable*
\begin{proof}
Recall that the length of the iteration path of the $t$-th restart period is defined as
\begin{align}
L_t:= \sqrt{\sum_{k=Q_t}^{Q_{t+1}-1} \|x_{k+1}-x_k\|^2}.
\end{align}
Then, we can rewrite the results of \Cref{lemma: Acc-PGD dynamic} as
\begin{align}
F(x_{Q_t}) &\le F(x_{Q_{t-1}}) - \frac{L}{4}L_{t-1}^2, \label{eq: 2}\\
\mathrm{dist}_{\partial F(x_{Q_t})}^2(\mathbf{0}) &\le 162L^2 L_{t-1}^2.\label{eq: 3}
\end{align}
By \cref{eq: 2}, the function value sequence $\{F(x_{Q_t}) \}_t$ decreases monotonically period-wise. Since the objective function $F$ is bounded below (item 1 of \Cref{assum: f+g}), we conclude that $\{F(x_{Q_t})\}_t$ converges to a certain finite limit $F^*$. Also, since $F$ has bounded sub-level sets (item 2 of \Cref{assum: f+g}), \cref{eq: 2} further implies that the sequence $\{x_{Q_t} \}_t$ is bounded.
The above proof shows that $F(x_{Q_t}) \downarrow F^*$ and $\{x_{Q_t}\}_t$ is bounded. Next, we further show that the entire sequences $\{F(x_{k})\}_k, \{x_k\}_k$ share the same properties. Telescoping \cref{eq: 2} over $t=1,2,...T$ yields that: for any $T\in \mathbb{N}$,
\begin{align}
\sum_{t=0}^{T-1} L_{t}^2 \le \frac{4}{L}\big(F(x_0) - F(x_{Q_T})\big) \le \frac{4}{L}\big(F(x_0) - \inf_{x\in \mathbb{R}^d} F(x) \big) < +\infty.
\end{align}
Letting $T\to \infty$ we conclude that $\sum_{t=0}^{\infty} L_{t}^2 < +\infty$ and therefore $L_t \overset{t}{\to} 0$.
Since each restart period contains a uniformly bounded number of iterations, this further implies that $\lim_{k\to \infty} \|x_{k+1} - x_k\| = 0$. Therefore, the entire sequence $\{x_k \}_k$ is bounded and we denote $\omega$ as its set of limit points ($\omega$ is a compact set). Also, by the fact that $\lim_{k\to \infty} \|x_{k+1} - x_k\| = 0$ and \cref{eq: 12}, we conclude that $\lim_{k\to \infty} \big(F(x_{k+1}) - F(x_k)\big) = 0$. Since $F(x_{Q_t}) \downarrow F^*$, we conclude that $F(x_k) \to F^*$. To this end, we have shown that the entire sequence $\{x_k \}_k$ has a limit point set $\omega$ and the entire sequence $\{F(x_k) \}_k$ converges to a certain finite limit $F^*$.
Now consider any limit point $x^*\in \omega$ and without loss of generality assume that $x_{k} \overset{k}{\to} x^*$ along a suitable subsequence. By the proximal gradient update step of $x_k$ we obtain that
\begin{align}
g(x_{k}) + \frac{1}{2\lambda_{k-1}} \|x_k - x_{k-1}\|^2 &+ \inner{\nabla f(z_{k-1})}{x_k - x_{k-1}} \nonumber\\
&\le g(x^*) + \frac{1}{2\lambda_{k-1}} \|x^* - x_{k-1}\|^2 + \inner{\nabla f(z_{k-1})}{x^* - x_{k-1}}. \nonumber
\end{align}
Taking limsup on both sides of the above inequality and noting that $\{x_k\}_k$ is bounded, $\|x_k - x_{k-1}\| \to 0$ and $x_k \to x^*$, we conclude that $\limsup_k g(x_{k}) \le g(x^*)$. Since $g$ is lower-semicontinuous, we know that $\liminf_k g(x_{k}) \ge g(x^*)$. Combining these two inequalities yields that $\lim_k g(x_{k}) = g(x^*)$. By continuity of $f$, we further conclude that $\lim_k F(x_{k}) = F(x^*)$. Since we have shown that the entire sequence $\{F(x_{k})\}_k$ converges to a certain finite limit $F^*$, we conclude that $F(x^*)\equiv F^*$ for all $x^*\in \omega$. Also, \cref{eq: 13} and the fact that $\|x_{k+1} - x_k\| \to 0$ further imply that $\mathrm{dist}_{\partial F(x_{k})}(\mathbf{0}) \overset{k}{\to} 0$.
To this end, we have shown that for every subsequence $x_{k} \to x^* \in \omega$ we have $F(x_{k}) \to F(x^*)$ and $\mathrm{dist}_{\partial F(x_{k})}(\mathbf{0}) \to 0$. Recalling the definition of the limiting sub-differential, we conclude that every limit point $x^*$ of $\{x_k\}_k$ is a critical point, i.e., $\mathbf{0}\in \partial F(x^*)$.
Next, we show that the sequence $\{x_k\}_k$ has a unique limit point under the K{\L}~ property.
Consider any limit point $x^*\in \omega$. We have shown that 1) $F(x^*)\equiv F^*$ for all $x^*\in\omega$; 2)
$F(x_{Q_t}) \downarrow F^*$; and 3) $\mathrm{dist}_{\partial F(x_{k})}(\mathbf{0}) \to 0$. Collecting these facts, we are ready to apply the K{\L}~ property for all sufficiently large $t$. Specifically, by the K{\L}~ property of the objective function, we obtain that: for all $t\ge t_1$ where $t_1$ is a sufficiently large integer,
\begin{align}
\varphi'(F(x_{Q_t}) - F^*) \ge \frac{1}{\mathrm{dist}_{\partial F(x_{Q_t})}(\mathbf{0})} \overset{(i)}{\ge} \frac{1}{15 L \cdot L_{t-1}}, \label{eq: 4}
\end{align}
where (i) follows from \cref{eq: 3}. Then, by concavity of $\varphi$ and \cref{eq: 2,eq: 4}, we further obtain that
\begin{align}
\varphi(F(x_{Q_t}) - F^*) - \varphi(F(x_{Q_{t+1}}) - F^*) &\ge \varphi'(F(x_{Q_t}) - F^*) \big(F(x_{Q_t}) - F(x_{Q_{t+1}})\big) \nonumber\\
&\ge \frac{L_t^2}{60 L_{t-1}}. \label{eq: 5}
\end{align}
Rearranging the above inequality yields that
\begin{align}
L_t^2 \le 60 L_{t-1} \big[\varphi(F(x_{Q_t}) - F^*) - \varphi(F(x_{Q_{t+1}}) - F^*) \big]. \nonumber
\end{align}
Taking square root of both sides of the above inequality and using the fact that $\sqrt{ab} \le \frac{a+b}{2}$ for $a,b>0$, we obtain that
\begin{align}
2L_t \le L_{t-1} + 60\big[\varphi(F(x_{Q_t}) - F^*) - \varphi(F(x_{Q_{t+1}}) - F^*) \big].
\end{align}
Telescoping the above inequality over $t=t_1+1,...T$ yields that
\begin{align}
2\sum_{t=t_1+1}^T L_t &\le \sum_{t=t_1+1}^T L_t + L_{t_1} + 60\big[\varphi(F(x_{Q_{t_1+1}}) - F^*) - \varphi(F(x_{Q_{T+1}}) - F^*) \big] \nonumber\\
&\le \sum_{t=t_1+1}^T L_t + L_{t_1} + 60\varphi(F(x_{Q_{t_1+1}}) - F^*), \nonumber
\end{align}
where the last inequality follows from the fact that $F(x_{Q_t}) \ge F^*$ for all $t\ge t_1$ and $\varphi(s)>0$ for all $s>0$. Rearranging the above inequality yields that: for all $T \ge t_1$
\begin{align}
\sum_{t=t_1+1}^T L_t &\le L_{t_1} + 60\varphi(F(x_{Q_{t_1+1}}) - F^*)<+\infty. \nonumber
\end{align}
Letting $T\to \infty$ and noting that $t_1$ is a finite integer, we finally conclude that
\begin{align}
\sum_{t=0}^\infty L_t < +\infty. \nonumber
\end{align}
To further prove the convergence of the variable sequence, note that by the Cauchy-Schwarz inequality, $L_t := \sqrt{\sum_{k=Q_t}^{Q_{t+1}-1} \|x_{k+1}-x_k\|^2}\ge \frac{1}{\sqrt{Q_{t+1}-Q_t}} \sum_{k=Q_t}^{Q_{t+1}-1} \|x_{k+1}-x_k\|$. Substituting into the above inequality yields that
\begin{align}
\sum_{k=0}^\infty \|x_{k+1}-x_k\| = \sum_{t=0}^\infty \sum_{k=Q_t}^{Q_{t+1}-1} \|x_{k+1}-x_k\| \le \Big(\max_t \sqrt{Q_{t+1}-Q_t}\Big) \sum_{t=0}^\infty L_t < +\infty, \nonumber
\end{align}
where the last inequality uses the fact that all restart periods contain a uniformly bounded number of iterations.
Therefore, the sequence $\{\|x_{k+1}-x_k\| \}_k$ is absolutely summable and this implies that $\{x_k \}_k$ is a convergent Cauchy sequence. Since we have shown that all the limit points of $\{x_k \}_k$ are critical points, we conclude that $\{x_k \}_k$ converges to a certain critical point of $F$. Lastly, it is clear from the previous results that $\|x_k-y_k\|\to 0, \|x_k-z_k\|\to 0$, which imply that both $\{y_k\}_k$ and $\{z_k\}_k$ converge to the same limit.
\end{proof}
\section{Proof of \Cref{thm: Acc-GD rates} and \Cref{thm: Acc-GD var_rates}}
\TheoremRates*
\TheoremRatesVariable*
\begin{proof}
Consider any $t$-th restart period and denote $r_t:= F(x_{Q_t}) - F(x^*)$ as the function value gap. Then, we can rewrite \cref{eq: 5} as: for all sufficiently large $t\ge t_0$,
\begin{align}
60\big(\varphi(r_t) - \varphi(r_{t+1}) \big) \ge \frac{L_t^2}{L_{t-1}}.
\end{align}
Next, fix $\gamma \in (0,1)$ and consider any $t\ge t_0$. If $L_t\ge \gamma L_{t-1}$, then the above inequality implies that
\begin{align}
L_t \le \frac{60}{\gamma^2} \big(\varphi(r_t) - \varphi(r_{t+1}) \big).
\end{align}
Otherwise, $L_t\le \gamma L_{t-1}$ holds by definition. Combining the two cases yields that
\begin{align}
L_t \le \gamma L_{t-1} + \frac{60}{\gamma^2} \big(\varphi(r_t) - \varphi(r_{t+1}) \big).
\end{align}
Summing the above inequality over $t=t_0,...,T$ yields that
\begin{align}
\sum_{t=t_0}^{T} L_t &\le \gamma \sum_{t=t_0}^{T}L_{t-1} + \frac{60}{\gamma^2} \big(\varphi(r_{t_0}) - \varphi(r_{T+1}) \big) \nonumber\\
&\le \gamma \Big[\sum_{t=t_0}^{T}L_{t} + L_{t_0-1} \Big] + \frac{60}{\gamma^2} \varphi(r_{t_0}). \nonumber
\end{align}
Rearranging the above inequality yields that: for all $T\ge t_0$,
\begin{align}
\sum_{t=t_0}^{T} L_t &\le \frac{\gamma}{1-\gamma} L_{t_0-1} + \frac{60}{\gamma^2(1-\gamma)} \varphi(r_{t_0}). \nonumber
\end{align}
Next, define $\Delta_{t}:= \sum_{s=t}^{\infty} L_s$, which is well-defined due to \cref{eq: finite_len}. Then, letting $T\to \infty$ in the above inequality (with $t_0$ replaced by $t$) and noting that $\varphi(s)=Cs^\theta$, we obtain that for all sufficiently large $t$,
\begin{align}
\Delta_{t} &\le \frac{\gamma}{1-\gamma} (\Delta_{t-1} - \Delta_{t}) + \frac{C}{\gamma^2(1-\gamma)} r_{t}^{\theta} \nonumber \\
&\overset{(i)}{\le} \frac{\gamma}{1-\gamma} (\Delta_{t-1} - \Delta_{t}) + \frac{CL^{\frac{\theta}{1-\theta}}}{\gamma^2(1-\gamma)} \Big[\sum_{k=Q_{t-1}}^{Q_t-1}\|x_{k+1} - x_k\|^2\Big]^{\frac{\theta}{2(1-\theta)}} \nonumber \\
&= \frac{\gamma}{1-\gamma} (\Delta_{t-1} - \Delta_{t}) + \frac{CL^{\frac{\theta}{1-\theta}}}{\gamma^2(1-\gamma)} (\Delta_{t-1} - \Delta_{t})^{\frac{\theta}{1-\theta}}, \nonumber
\end{align}
where (i) uses the K{\L}~ property and the dynamics of APG-restart in \Cref{lemma: Acc-PGD dynamic}, i.e., $r_{t} \le C\mathrm{dist}_{\partial F(x_{Q_t})}^{\frac{1}{1-\theta}}(\mathbf{0}) \le C \Big[L^2 \sum_{k=Q_{t-1}}^{Q_t-1}\|x_{k+1} - x_k\|^2\Big]^{\frac{1}{2(1-\theta)}}$. It has been shown in \cite{Attouch2009} that any sequence $\{\Delta_t\}_t$ satisfying the above inductive property converges to zero at different rates depending on $\theta$, as stated in the theorem. Finally, we note that the triangle inequality and the Cauchy-Schwarz inequality imply that $\|x_{Q_t} - x^*\| \le \max_s \sqrt{Q_{s+1}-Q_s}\; \Delta_t$, and the result follows.
To prove the convergence rates of the function value gap, recall that $r_t:= F(x_{Q_t}) - F(x^*)$ denotes the function value gap at the $t$-th restart checkpoint. As we have shown that $r_t \overset{t}{\to} 0$, for all sufficiently large $t$ we can apply the K{\L}~ property and obtain that: for some universal constant $C>0$,
\begin{align}
r_t &\le C\mathrm{dist}_{\partial F(x_{Q_t})}^{\frac{1}{1-\theta}}(\mathbf{0}) = C\sqrt{\mathrm{dist}_{\partial F(x_{Q_t})}^{\frac{2}{1-\theta}}(\mathbf{0})} \nonumber\\
&\overset{(i)}{\le} C\sqrt{\Big(L^2\sum_{k=Q_{t-1}}^{Q_t-1} \|x_{k+1} - x_{k}\|^2 \Big)^{\frac{1}{1-\theta}}} \nonumber\\
&\overset{(ii)}{\le} CL^{\frac{1}{2(1-\theta)}} \big(r_{t-1} -r_t \big)^{\frac{1}{2(1-\theta)}}, \nonumber
\end{align}
where the constant $C$ may vary from line to line in the above derivation, (i) and (ii) follow from \Cref{lemma: Acc-PGD dynamic}. Rearranging the above inequality yields that: for all sufficiently large $t$,
\begin{align}
1\le CL\,r_t^{2(\theta -1)} (r_{t-1} - r_t). \nonumber
\end{align}
It has been shown in (Frankel et al., 2015; Li \& Lin, 2015) that any sequence $\{r_t\}_t$ satisfying the above inductive property converges to zero at different rates depending on $\theta$, as stated in the theorem.
\end{proof}
\section{Restart Conditions Used in the Experiments}\label{app: exp}
\begin{enumerate}
\item For the fixed restart scheme, we set the restart period to $q=10$, $30$ and $50$, respectively;
\item For the function scheme, we relax the condition to be
\begin{align*}
F(x_k) > 0.8F(x_{k-1}).
\end{align*}
\item For the gradient mapping scheme, we relax the condition to be
\begin{align*}
\inner{z_{k}-y_{k}}{y_{k+1}-z_{k}} \ge -0.2 \|z_{k}-y_{k}\|\|y_{k+1}-z_{k}\|.
\end{align*}
\item For the non-monotone scheme, we relax the condition to be
\begin{align*}
\inner{z_{k}-y_{k}}{y_{k+1}-\frac{z_{k}+x_k}{2}} \ge -0.2\|z_{k}-y_{k}\|\Big\|y_{k+1}-\frac{z_{k}+x_k}{2}\Big\|.
\end{align*}
\end{enumerate}
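For reference, the four relaxed tests above can be collected into a single dispatch routine. The sketch below is a hypothetical helper (the function name and the default period $q$ are illustrative); the iterates $x_k$, $y_k$, $y_{k+1}$, $z_k$ and the function values are supplied by the main APG-restart loop.
\begin{verbatim}
# Hypothetical restart tests used in the experiments; returns True when the
# momentum should be restarted under the chosen scheme.
import numpy as np

def should_restart(scheme, k, q=30, F_k=None, F_prev=None,
                   x_k=None, y_k=None, y_next=None, z_k=None):
    if scheme == "fixed":                      # restart every q iterations
        return k > 0 and k % q == 0
    if scheme == "function":                   # relaxed function value test
        return F_k > 0.8 * F_prev
    if scheme == "gradient":                   # relaxed gradient mapping test
        u, v = z_k - y_k, y_next - z_k
        return u @ v >= -0.2 * np.linalg.norm(u) * np.linalg.norm(v)
    if scheme == "non-monotone":               # relaxed non-monotone test
        u, v = z_k - y_k, y_next - (z_k + x_k) / 2.0
        return u @ v >= -0.2 * np.linalg.norm(u) * np.linalg.norm(v)
    raise ValueError(f"unknown scheme: {scheme}")
\end{verbatim}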
\makeatletter
\def\mysect#1{\section{\hskip -1em.~#1}}
\def\myssect#1{\section*{#1}}
\def\sect{\@ifstar\myssect\mysect}
\def\subsection{\@startsection {subsection}{2}{\z@}%
  {8pt plus 2pt minus 2pt}{6pt} {\elvbf}}
\def\myssubsect#1{\subsection*{#1}}
\def\mysubsect#1{\subsection{\hskip -1em.~#1}}
\def\subsect{\@ifstar\myssubsect\mysubsect}
\makeatother
\renewcommand{\textfraction}{0.01}
\renewcommand{\floatpagefraction}{0.99}
\renewcommand{\topfraction}{0.99}
\renewcommand{\bottomfraction}{0.99}
\renewcommand{\dblfloatpagefraction}{0.99}
\renewcommand{\dbltopfraction}{0.99}
\setcounter{totalnumber}{99}
\setcounter{topnumber}{99}
\setcounter{bottomnumber}{99}
\usepackage{times}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{xfrac}
\usepackage{tabularx}
\usepackage{fancyhdr}
\fancypagestyle{plain}{
\renewcommand{\headrulewidth}{0pt}
\fancyhf{}
\fancyhead[C]{MANUSCRIPT IN PRESS 2015 INTERNATIONAL CONFERENCE ON COMPUTER VISION}
}
\makeatletter\let\captiontemp\@makecaption\makeatother
\usepackage[font=footnotesize]{subcaption}
\makeatletter\let\@makecaption\captiontemp\makeatother
\usepackage[pagebackref=true,breaklinks=true,colorlinks,bookmarks=false]{hyperref}
\pagestyle{empty}
\graphicspath{{./images/}}
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\makeatletter
\let\OldStatex\Statex
\renewcommand{\Statex}[1][3]{
\setlength\@tempdima{\algorithmicindent}
\OldStatex\hskip\dimexpr#1\@tempdima\relax}
\makeatother
\newcommand{\etal}{\mbox{\emph{et al.\ }}}
\hyphenation{regi-s-t-ra-tion}
\begin{document}
\setlength{\abovedisplayskip}{9.0pt plus 2.0pt minus 5.0pt}
\setlength{\belowdisplayskip}{9.0pt plus 2.0pt minus 5.0pt}
\title{An Adaptive Data Representation for Robust Point-Set Registration and Merging}
\author{Dylan Campbell and Lars Petersson\\
Australian National University\,\,\,\,\,\,\,\,\,\,National ICT Australia (NICTA)%
\thanks{\tiny NICTA is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence Program.}\\
{\tt\small \{dylan.campbell,lars.petersson\}@nicta.com.au}
}
\maketitle
\begin{abstract}
This paper presents a framework for rigid point-set registration and merging using a robust continuous data representation. Our point-set representation is constructed by training a one-class support vector machine with a Gaussian radial basis function kernel and subsequently approximating the output function with a Gaussian mixture model. We leverage the representation's sparse parametrisation and robustness to noise, outliers and occlusions in an efficient registration algorithm that minimises the $L_2$ distance between our support vector--parametrised Gaussian mixtures. In contrast, existing techniques, such as Iterative Closest Point and Gaussian mixture approaches, manifest a narrower region of convergence and are less robust to occlusions and missing data, as demonstrated in the evaluation on a range of 2D and 3D datasets. Finally, we present a novel algorithm, GMMerge, that parsimoniously and equitably merges aligned mixture models, allowing the framework to be used for reconstruction and mapping.
\end{abstract}
\vspace{-6pt}
\sect{Introduction}
\label{sec:introduction}
Point-set registration, the problem of finding the transformation that best aligns one point-set with another, is fundamental in computer vision, robotics, computer graphics and medical imaging.
A general-purpose point-set registration algorithm operates on unstructured point-sets and may not assume other information is available, such as labels or mesh structure.
Applications include merging multiple partial scans into a complete model~\cite{huber2003fully}; using registration results as fitness scores for object recognition~\cite{belongie2002shape}; registering a view into a global coordinate system for sensor localisation~\cite{nuchter20076d};
and finding relative poses between sensors~\cite{yang2013single}.
The dominant solutions are the Iterative Closest Point (ICP) algorithm~\cite{besl1992method} and its variants, owing to their conceptual simplicity, usability and good performance in practice. However, these are local techniques that are very susceptible to local minima and outliers and require a significant amount of overlap between point-sets.
To mitigate the problem of local minima, other solutions have widened the region of convergence~\cite{fitzgibbon2003robust}, performed heuristic global search~\cite{sandhu2010point}, used feature-based coarse alignment~\cite{rusu2009fast} or used branch-and-bound techniques to find the global minimum~\cite{yang2013goicp}.
Our method widens the region of convergence and is robust to occlusions and missing data, such as those arising when an object is viewed from different locations. The central idea is that the robustness of registration is dependent on the data representation used. We present a framework for robust point-set registration and merging using a continuous data representation, a Support Vector--parametrised Gaussian Mixture (SVGM). A discrete point-set is mapped to the continuous domain by training a Support Vector Machine (SVM) and mapping it to a Gaussian Mixture Model (GMM).
Since an SVM is parametrised by a sparse intelligently-selected subset of data points, an SVGM is compact and robust to noise, fragmentation and occlusions~\cite{nguyen2013support}, crucial qualities for efficient and robust registration.
The motivation for a continuous representation is that a typical scene comprises a single, seldom-disjoint continuous surface, which cannot be fully modelled by a discrete point-set sampled from the scene.
Our Support Vector Registration (SVR) algorithm minimises an objective function based on the $L_2$ distance between SVGMs.
Unlike the benchmark GMM registration algorithm GMMReg~\cite{jian2011robust}, SVR uses an adaptive and sparse representation with non-uniform and data-driven mixture weights, enabling faster performance and improving the robustness to outliers, occlusions and partial overlap.
Finally, we propose a novel merging algorithm, GMMerge, that parsimoniously and equitably merges aligned mixtures. Merging SVGM representations is useful for applications where each point-set may contain unique information, such as reconstruction and mapping. Our registration and merging framework is visualised in Figure~\ref{fig:framework}.
\begin{figure*}[!t]
\centering
\setlength{\unitlength}{496.85625pt}
\begin{picture}(1, 0.32)(0, 0)
\put(0.00, 0){\frame{\includegraphics[width=0.16\textwidth, trim=0 0 3pt 3pt, clip]{butterfly_y.pdf}}}
\put(0.21, 0){\frame{\includegraphics[width=0.16\textwidth, trim=0 0 3pt 3pt, clip]{butterfly_sy.pdf}}}
\put(0.42, 0){\frame{\includegraphics[width=0.16\textwidth, trim=0 0 3pt 3pt, clip]{butterfly_gy.pdf}}}
\put(0.63, 0){\frame{\includegraphics[width=0.16\textwidth, trim=0 0 3pt 3pt, clip]{butterfly_gy.pdf}}}
\put(0.00, 0.16){\frame{\includegraphics[width=0.16\textwidth, trim=0 0 3pt 3pt, clip]{butterfly_xr.pdf}}}
\put(0.21, 0.16){\frame{\includegraphics[width=0.16\textwidth, trim=0 0 3pt 3pt, clip]{butterfly_sxr.pdf}}}
\put(0.42, 0.16){\frame{\includegraphics[width=0.16\textwidth, trim=0 0 3pt 3pt, clip]{butterfly_gxr.pdf}}}
\put(0.63, 0.16){\frame{\includegraphics[width=0.16\textwidth, trim=0 0 3pt 3pt, clip]{butterfly_gx.pdf}}}
\put(0.84, 0.08){\frame{\includegraphics[width=0.16\textwidth, trim=0 0 3pt 3pt, clip]{butterfly_gmmerge.pdf}}}
\thicklines
\put(0.16, 0.08){\vector(1,0){0.05}}
\put(0.37, 0.08){\vector(1,0){0.05}}
\put(0.58, 0.08){\vector(1,0){0.05}}
\put(0.79, 0.08){\vector(1,1){0.05}}
\put(0.16, 0.24){\vector(1,0){0.05}}
\put(0.37, 0.24){\vector(1,0){0.05}}
\put(0.58, 0.24){\vector(1,0){0.05}}
\put(0.79, 0.24){\vector(1,-1){0.05}}
\put(0.00, 0.32){\makebox(0.16, 0.02)[t]{Point-Set}}
\put(0.21, 0.32){\makebox(0.16, 0.02)[t]{SVM}}
\put(0.42, 0.32){\makebox(0.16, 0.02)[t]{Misaligned SVGM}}
\put(0.63, 0.32){\makebox(0.16, 0.02)[t]{Aligned SVGM}}
\put(0.84, 0.24){\makebox(0.16, 0.02)[t]{Merged SVGM}}
\put(0.16, 0.24){\makebox(0.05, 0.02)[t]{(a)}}
\put(0.37, 0.24){\makebox(0.05, 0.02)[t]{(b)}}
\put(0.58, 0.24){\makebox(0.05, 0.02)[t]{(c)}}
\put(0.79, 0.23){\makebox(0.05, 0.02)[t]{(d)}}
\end{picture}
\caption{Robust point-set registration and merging framework. An $n$D point-set is represented as an SVGM by training a one-class SVM (a) and then mapping it to a GMM (b). The SVR algorithm is used to minimise the $L_2$ distance between two SVGMs in order to align the densities (c). Finally, the GMMerge algorithm is used to parsimoniously fuse the two mixtures. The SVMs are visualised as support vector points scaled by mixture weight and the SVGMs are coloured by probability value. Best viewed in colour.}
\label{fig:framework}
\end{figure*}
\sect{Related Work}
\label{sec:related_work}
The large volume of work published on ICP, its variants and other registration techniques precludes a comprehensive list; however, the reader is directed to recent surveys on ICP variants~\cite{pomerleau2013comparing} and on 3D point-set and mesh registration techniques~\cite{tam2013registration} for additional background. Of relevance to our work are extensions that improve ICP's robustness to occlusions, such as trimming~\cite{chetverikov2005robust}. Local methods that seek to widen ICP's basin of convergence and reduce its sensitivity to outliers include LM-ICP~\cite{fitzgibbon2003robust}, which uses a distance transform to optimise the ICP error without establishing explicit point correspondences.
Another family of approaches, to which ours belongs, is based on the Gaussian Mixture Model (GMM) and show an improved robustness to poor initialisations, noise and outliers. Notable GMM algorithms for rigid and non-rigid registration include Robust Point Matching~\cite{chui2003new}, using soft assignment and deterministic annealing, Coherent Point Drift~\cite{myronenko2010point}, Kernel Correlation~\cite{tsin2004correlation} and GMMReg~\cite{jian2011robust}. The latter two do not establish explicit point correspondences and both minimise a distance measure between mixtures. GMMReg~\cite{jian2011robust} defines an equally-weighted Gaussian at every point in the set with identical and isotropic covariances and minimises the $L_2$ distance between mixtures.
The Normal Distributions Transform (NDT) algorithm~\cite{magnusson2007scan} is a similar method, defining Gaussians for every cell in a grid discretisation and estimating full data-driven covariances, like~\cite{xiong2013study}. Unlike our method, however, it imposes external structure on the scene and uses uniform mixture weights.
In contrast, globally-optimal techniques avoid local minima by searching the entire transformation space. Existing 3D methods~\cite{li20073d,yang2013goicp} are often very slow or make restrictive assumptions about the point-sets or transformations.
There are also many heuristic or stochastic methods for global alignment that are not guaranteed to converge, such as particle filtering~\cite{sandhu2010point}, genetic algorithms~\cite{silva2005precision} and feature-based alignment~\cite{rusu2009fast}.
A recent example is \textsc{Super 4PCS}, a four-points congruent sets method that exploits a clever data structure to achieve linear-time performance~\cite{mellado2014super}.
The rest of the paper is organised as follows: we present the SVGM representation, its properties and implementation in Section~\ref{sec:point-set_representation}, we develop a robust framework for SVGM registration in Section~\ref{sec:svmreg}, we propose an algorithm for merging SVGMs in Section~\ref{sec:merging}, we experimentally demonstrate the framework's effectiveness in Section~\ref{sec:results} and we discuss the results and conclude in Sections~\ref{sec:discussion} and~\ref{sec:conclusion}.
\sect{Adaptive Point-Set Representation}
\label{sec:point-set_representation}
A central idea of our work is that the robustness of point-set registration is dependent on the data representation used. Robustness to occlusions or missing data, more so than noise, is of primary concern, because point-sets rarely overlap completely, such as when an object is sampled from a different sensor location.
Another consideration is the class of optimisation problem a particular representation admits. Framing registration as a continuous optimisation problem involving continuous density functions may make it more tractable than the equivalent discrete problem~\cite{jian2011robust}.
Consequently, we represent discrete point-sets with Gaussian Mixture Models (GMMs). Crucially, we first train a Support Vector Machine (SVM) and then transform this into a GMM. Since the output function of the SVM only involves a sparse subset of the data points, the representation is compact and robust to noise, fragmentation and occlusions~\cite{nguyen2013support}, attributes that persist through the GMM transformation.
\subsect{One-Class Support Vector Machine}
\label{sec:one-class_svm}
The output function of an SVM can be used to approximate the surface described by noisy and incomplete point-set data, providing a continuous implicit surface representation.
Nguyen and Porikli \cite{nguyen2013support} demonstrated that this representation is robust to noise, fragmentation, missing data and other artefacts for 2D shapes, with the same behaviour expected in 3D.
An SVM classifies data by constructing a hyperplane that separates data of two different classes, maximising the margin between the classes while allowing for some mislabelling \cite{cortes1995support}. Since point-set data contains only positive examples, one-class SVM \cite{scholkopf2001estimating} can be used to find the hyperplane that maximally separates the data points from the origin or viewpoint in feature space. The training data is mapped to a higher-dimensional feature space, where it may be linearly separable from the origin, with a non-linear kernel function.
The output function $f(\mathbf{x})$ of one-class SVM is given by
\begin{equation}
\label{eqn:svm_output_function}
f(\mathbf{x}) = \sum_{i = 1}^{\ell} \alpha_{i} K(\mathbf{x}_{i}, \mathbf{x}) - \rho
\end{equation}
where $\mathbf{x}_i$ are the point vectors, $\alpha_i$ are the weights, $\mathbf{x}$ is the input vector, $\rho$ is the bias, $\ell$ is the number of training samples and $K$ is the kernel function
that evaluates the inner product of data vectors mapped to a feature space.
We use a Gaussian Radial Basis Function (RBF) kernel
\begin{equation}
\label{eqn:GRBF}
K(\mathbf{x}_{i}, \mathbf{x}) = \exp \left(-\gamma \left\| \mathbf{x}_{i} - \mathbf{x} \right\|_2^2 \right)
\end{equation}
where $\gamma$ is the Gaussian kernel width.
The optimisation formulation in~\cite{scholkopf2001estimating} has a parameter $\nu \in (0, 1]$ that controls the trade-off between training error and model complexity. It is a lower bound on the fraction of support vectors and an upper bound on the misclassification rate~\cite{scholkopf2001estimating}. The data points with non-zero weights $\alpha_{i}^{\mathrm{SV}}$ are the support vectors $\mathbf{x}_{i}^{\mathrm{SV}} \in \{\mathbf{x}_{i} : \alpha_{i} > 0, i = 1, \ldots, \ell\}$.
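As a concrete illustration, the output function (\ref{eqn:svm_output_function}) can be recovered from the support vectors, weights and bias of an off-the-shelf one-class SVM. The sketch below uses scikit-learn (an assumed implementation choice, not necessarily the one used in this work), whose \texttt{dual\_coef\_}, \texttt{support\_vectors\_} and \texttt{intercept\_} attributes correspond to $\alpha_{i}^{\mathrm{SV}}$, $\mathbf{x}_{i}^{\mathrm{SV}}$ and $-\rho$ respectively.
\begin{verbatim}
# Fit a one-class SVM with a Gaussian RBF kernel and verify that its
# decision function matches f(x) = sum_i alpha_i K(x_i, x) - rho.
import numpy as np
from sklearn.svm import OneClassSVM

X = np.random.default_rng(0).standard_normal((200, 2))  # toy 2-D point-set
gamma, nu = 0.5, 0.1
svm = OneClassSVM(kernel="rbf", gamma=gamma, nu=nu).fit(X)

sv = svm.support_vectors_            # x_i^SV
alpha = svm.dual_coef_.ravel()       # alpha_i^SV
rho = -svm.intercept_[0]             # bias term

x = np.zeros(2)
f_manual = alpha @ np.exp(-gamma * np.sum((sv - x) ** 2, axis=1)) - rho
assert np.isclose(f_manual, svm.decision_function(x[None, :])[0])
\end{verbatim}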
We estimate the kernel width $\gamma$ automatically for each point-set by noting that it is inversely proportional to the square of the scale $\sigma$. For an $\ell \times D$ point-set $\mathbf{X}$ with mean $\bar{\mathbf{x}}$, the estimated scale $\hat{\sigma}$ is proportional to the $2D$th root of the generalised variance
\begin{equation}
\label{eqn:sigma_hat}
\hat{\sigma} \propto
\left|
\frac{1}{\ell - 1} (\mathbf{X} - \mathbf{1} \bar{\mathbf{x}}^{\intercal})^{\intercal} (\mathbf{X} - \mathbf{1} \bar{\mathbf{x}}^{\intercal})
\right|
^{\sfrac{1}{2D}}.
\end{equation}
If a training set is available, better performance can be achieved by finding $\gamma$ using cross-validation, imposing a constraint on the registration accuracy and searching in the neighbourhood of $\sfrac{1}{2\hat{\sigma}^2}$.
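In code, the estimate (\ref{eqn:sigma_hat}) and the resulting kernel width are straightforward to compute; the sketch below takes the proportionality constant to be one (an assumption made for illustration).
\begin{verbatim}
# Estimate gamma = 1/(2*sigma_hat^2) from the generalised variance of an
# ell x D point-set, with the proportionality constant taken as one.
import numpy as np

def estimate_gamma(points):
    ell, D = points.shape
    centred = points - points.mean(axis=0)
    cov = centred.T @ centred / (ell - 1)        # D x D sample covariance
    sigma_hat = np.linalg.det(cov) ** (1.0 / (2 * D))
    return 1.0 / (2.0 * sigma_hat ** 2)
\end{verbatim}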
\subsect{Gaussian Mixture Model Transformation}
\label{sec:gmm}
In order to make use of the trained SVM for point-set registration, it must first be approximated as a GMM. We use the transformation identified by Deselaers \mbox{\emph{et al.\ }} \cite{deselaers2010object} to represent the SVM in the framework of a GMM, without altering the decision boundary.
A GMM converted from an SVM will necessarily optimise classification performance instead of data representation, since SVMs are discriminative models, unlike standard generative GMMs.
This allows it to discard redundant data and reduces its susceptibility to varying point densities, which are prevalent in real datasets.
The decision function of an SVM with a Gaussian RBF kernel can be written as
\begin{equation}
\label{eqn:svm_decision}
r(\mathbf{x}) = \argmax_{k \in \{ -1,1\}} \left\{ \sum_{i = 1}^{\ell^{\mathrm{SV}}} k\alpha_{i}^{\mathrm{SV}} \mathrm{e}^{-\gamma \left\| \mathbf{x}_{i}^{\mathrm{SV}} - \mathbf{x} \right\|_2^2} - k\rho \right\}
\end{equation}
where $\ell^{\mathrm{SV}}$ is the number of support vectors and $k$ is the class, positive for inliers and negative otherwise for one-class SVM. The GMM decision function can be written as
\begin{equation}
\label{eqn:gmm_decision}
r'(\mathbf{x}) = \argmax_{k \in \{ -1,1\}} \left\{ \sum_{i = 1}^{I_{k}} p(k)p(i|k) \mathcal{N} \left( \mathbf{x} \middle| \boldsymbol{\mu}_{ki} , \sigma_{k}^{2} \right) \right\}
\end{equation}
where $I_{k}$ is the number of clusters for class $k$, $p(k)$ is the prior probability of class $k$, $p(i|k)$ is the cluster weight of the $i$th cluster of class $k$ and $\mathcal{N} \left( \mathbf{x} \middle| \boldsymbol{\mu}_{ki} , \sigma_{k}^{2} \right)$ is the Gaussian representing the $i$th cluster of class $k$ with mean $\boldsymbol{\mu}_{ki}$ and variance $\sigma_{k}^2$, given by
\begin{equation}
\label{eqn:gmm_normal_dist}
\mathcal{N} \left( \mathbf{x} \middle| \boldsymbol{\mu}_{ki} , \sigma_{k}^{2} \right) = \frac{1}{(2 \pi \sigma_{k}^{2})^{\sfrac{D}{2}}} \exp \left(- \frac{\left\| \mathbf{x} - \boldsymbol{\mu}_{ki} \right\|_{2}^{2}}{2\sigma_{k}^{2}} \right).
\end{equation}
Noting the similarity of (\ref{eqn:svm_decision}) and (\ref{eqn:gmm_decision}), the mapping
\begin{align}
\boldsymbol{\mu}_{ki} &=
\begin{cases}
\mathbf{x}_{i}^{\mathrm{SV}} & \text{if } k = +1\\
\mathbf{0} & \text{else}
\end{cases}
\label{eqn:transform_mu} \\
\sigma_{k}^{2} &=
\begin{cases}
\sfrac{1}{2\gamma} & \text{if } k = +1\\
N_{\infty} & \text{else}
\end{cases}
\label{eqn:transform_sigma} \\
\phi_{i} = p(k)p(i|k) &=
\begin{cases}
\alpha_{i}^{\mathrm{SV}} (2\pi\sigma_{k}^{2})^{\sfrac{D}{2}} & \text{if } k = +1\\
\rho (2\pi\sigma_{k}^{2})^{\sfrac{D}{2}} & \text{else}
\end{cases}
\label{eqn:transform_p}
\end{align}
can be applied,
where $\phi_{i}$ is the mixture weight, that is, the prior probability of the $i$th component. The bias term $\rho$ is approximated by an additional density given to the negative class with arbitrary mean, very high variance $N_{\infty}$ and a cluster weight proportional to $\rho$. We omit this term from the registration framework because it does not affect the optimisation. The resulting GMM is parametrised by
\begin{equation}
\label{eqn:gmm_set}
\mathcal{G} = \left\{ \boldsymbol{\mu}_{i} , \; \sigma^{2} , \; \phi_{i} \right\}_{i = 1}^{\ell^{\mathrm{SV}}}.
\end{equation}
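In code, the mapping (\ref{eqn:transform_mu})--(\ref{eqn:transform_p}) reduces to a few lines; the sketch below is a hypothetical helper operating on a fitted scikit-learn \texttt{OneClassSVM}, with the negative-class density dropped as described above.
\begin{verbatim}
# Map a fitted one-class SVM to SVGM parameters (means, shared variance,
# mixture weights); the bias/negative-class density is omitted.
import numpy as np

def svm_to_svgm(svm, gamma, D):
    mu = svm.support_vectors_                    # component means
    sigma2 = 1.0 / (2.0 * gamma)                 # shared isotropic variance
    alpha = svm.dual_coef_.ravel()               # support-vector weights
    phi = alpha * (2.0 * np.pi * sigma2) ** (D / 2.0)  # mixture weights
    return mu, sigma2, phi
\end{verbatim}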
While we transform an SVM into a GMM, there are many other ways to construct a GMM from point-set data. Kernel Density Estimation (KDE) with identically-weighted Gaussian densities has frequently been used for this purpose, including fixed-bandwidth KDE with isotropic covariances~\cite{jian2011robust,detry2009probabilistic}, variable-bandwidth KDE with non-identical covariances~\cite{comaniciu2003algorithm} and non-isotropic covariance KDE~\cite{xiong2013study}.
The primary disadvantage of these methods is that the number of Gaussian components is equal to the point-set size, which can be very large for real-world datasets. In contrast, our work intelligently selects a sparse subset of the data points to locate the Gaussian densities and weights them non-identically, making it more robust to occlusions and missing data, as demonstrated in Figure~\ref{fig:butterfly_occlusion}.
\begin{figure}[!t]
\centering
\begin{subfigure}[]{0.32\columnwidth}
\includegraphics[width=\columnwidth]{butterfly_data.pdf}
\caption{Point-Set A}
\label{fig:butterfly_occlusion_A}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.32\columnwidth}
\includegraphics[width=\columnwidth]{butterfly_heat_GMM_g250.pdf}
\caption{KDE-GMM A}
\label{fig:butterfly_occlusion_kdeA}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.32\columnwidth}
\includegraphics[width=\columnwidth]{butterfly_heat_g250.pdf}
\caption{SVGM A}
\label{fig:butterfly_occlusion_svmA}
\end{subfigure}
\begin{subfigure}[]{0.32\columnwidth}
\includegraphics[width=\columnwidth]{butterfly_occl3_data.pdf}
\caption{Point-Set B}
\label{fig:butterfly_occlusion_B}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.32\columnwidth}
\includegraphics[width=\columnwidth]{butterfly_occl3_heat_GMM_g250.pdf}
\caption{KDE-GMM B}
\label{fig:butterfly_occlusion_kdeB}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.32\columnwidth}
\includegraphics[width=\columnwidth]{butterfly_occl3_heat_g250.pdf}
\caption{SVGM B}
\label{fig:butterfly_occlusion_svmB}
\end{subfigure}
\caption{The effect of significant occlusion on two point-set representations, using the same parameters for both. Our SVGM representation is, qualitatively, almost identical when occluded (\subref{fig:butterfly_occlusion_svmB}) and unoccluded (\subref{fig:butterfly_occlusion_svmA}), whereas the fixed-bandwidth KDE representation is much less robust to occlusion (\subref{fig:butterfly_occlusion_kdeB}).
Best viewed in colour.}
\label{fig:butterfly_occlusion}
\end{figure}
Expectation Maximisation (EM)~\cite{dempster1977maximum} can also be used to construct a GMM with fewer components than KDE. EM finds the maximum likelihood estimates of the GMM parameters, where the number of densities is specified a priori, unlike our method. To initialise the algorithm, the means can be chosen at random or using the k-means algorithm; or, an initial Gaussian can be iteratively split and re-estimated until the number of densities is reached~\cite{deselaers2010object}. However, deliberately inflating the number of components can be slow and sensitive to initialisation~\cite[p. 326]{scott2001kernels}.
\sect{Support Vector Registration}
\label{sec:svmreg}
Once the point-sets are in mixture model form, the registration problem can be posed as minimising the distance between mixtures.
Like Jian and Vemuri \cite{jian2011robust}, we use the $L_2$ distance, which can be expressed in closed-form.
The $L_2 E$ estimator minimises the $L_2$ distance between densities and is known, counter-intuitively, to be inherently robust to outliers \cite{scott2001parametric}, unlike the maximum likelihood estimator that minimises the Kullback-Leibler divergence.
Let $\mathcal{X}$ be the moving model point-set, $\mathcal{Y}$ be the fixed scene point-set, $\mathcal{G}_{\mathcal{X}}$ and $\mathcal{G}_{\mathcal{Y}}$ be GMMs converted from SVMs trained on $\mathcal{X}$ and $\mathcal{Y}$ respectively, and $T(\mathcal{G},\boldsymbol{\theta})$ be the transformation model parametrised by $\boldsymbol{\theta}$. The $L_2$ distance between transformed $\mathcal{G}_{\mathcal{X}}$ and $\mathcal{G}_{\mathcal{Y}}$ is given by
\begin{equation}
\label{eqn:l2_distance}
D_{L_{2}}(\mathcal{G}_{\mathcal{X}}, \mathcal{G}_{\mathcal{Y}}, \boldsymbol{\theta}) = \int_{\mathbb{R}^{D}} \left( p\left( \mathbf{x} \middle| T(\mathcal{G}_{\mathcal{X}}, \boldsymbol{\theta}) \right) - p\left( \mathbf{x} \middle| \mathcal{G}_{\mathcal{Y}} \right) \right)^{2}\,\mathrm{d}\mathbf{x}
\end{equation}
where $p\left( \mathbf{x} \middle| \mathcal{G} \right)$ is the probability of observing a point $\mathbf{x}$ given a mixture model $\mathcal{G}$ with $\ell$ components, that is
\begin{equation}
\label{eqn:gmm_probability}
p\left( \mathbf{x} \middle| \mathcal{G} \right) = \sum_{i = 1}^{\ell} \phi_{i} \mathcal{N} \left( \mathbf{x} \middle| \boldsymbol{\mu}_{i} , \sigma^{2} \right).
\end{equation}
Expanding (\ref{eqn:l2_distance}), the last term is independent of $\boldsymbol{\theta}$ and the first term is invariant under rigid transformations. Both are therefore removed from the objective function. The middle term is the inner product of two Gaussian mixtures and has a closed form that can be derived by applying the identity
\begin{multline}
\label{eqn:gmm_identity}
\int_{\mathbb{R}^{D}} \mathcal{N} \left( \mathbf{x} \middle| \boldsymbol{\mu}_{1} , \sigma_{1}^{2} \right) \mathcal{N} \left( \mathbf{x} \middle| \boldsymbol{\mu}_{2} , \sigma_{2}^{2} \right) \,\mathrm{d}\mathbf{x}\\
= \mathcal{N} \left( \mathbf{0} \middle| \boldsymbol{\mu}_{1} - \boldsymbol{\mu}_{2} , \sigma_{1}^{2} + \sigma_{2}^{2} \right).
\end{multline}
Therefore, noting that $\sigma_{\mathcal{X}}^{2} = \sigma_{\mathcal{Y}}^{2}$ in our formulation, the objective function for rigid registration is defined as
\begin{equation}
\label{eqn:objective_function}
f\left(\boldsymbol{\theta} \right) = - \sum_{i = 1}^{m} \sum_{j = 1}^{n} \phi_{i, \mathcal{X}} \phi_{j, \mathcal{Y}} \mathcal{N} \left( \mathbf{0} \middle| \boldsymbol{\mu}_{i, \mathcal{X}}' - \boldsymbol{\mu}_{j, \mathcal{Y}} , 2\sigma^{2} \right)
\end{equation}
where $m$ and $n$ are the number of components in $\mathcal{G}_{\mathcal{X}}$ and $\mathcal{G}_{\mathcal{Y}}$ respectively and $\boldsymbol{\mu}_{i, \mathcal{X}}' = T(\boldsymbol{\mu}_{i, \mathcal{X}}, \boldsymbol{\theta})$. This can be expressed in the form of a discrete Gauss transform, which has a computational complexity of $\mathcal{O}(mn)$, or the fast Gauss transform \cite{greengard1991fast}, which scales as $\mathcal{O}(m + n)$.
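As a minimal sketch of this computation (our illustration; the function and variable names are hypothetical), the objective~(\ref{eqn:objective_function}) can be evaluated as a discrete Gauss transform in $\mathcal{O}(mn)$ time with a few lines of vectorised Python:

\begin{verbatim}
import numpy as np

def svr_objective(mu_x, phi_x, mu_y, phi_y, sigma2):
    # mu_x: (m, D) transformed model means; mu_y: (n, D) scene means
    # phi_x: (m,), phi_y: (n,) mixture weights; sigma2: shared variance
    D = mu_x.shape[1]
    d2 = ((mu_x[:, None, :] - mu_y[None, :, :])**2).sum(-1)  # (m, n)
    norm = (4.0 * np.pi * sigma2)**(-D / 2.0)  # N(0 | ., 2 sigma^2)
    f_ij = -np.outer(phi_x, phi_y) * norm * np.exp(-d2 / (4.0 * sigma2))
    return f_ij.sum(), f_ij  # objective f and its summands
\end{verbatim}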
The gradient vector is derived as in~\cite{jian2011robust}. Let $\mathbf{M}_0 = \left[ \boldsymbol{\mu}_{1,\mathcal{X}}, \dots , \boldsymbol{\mu}_{m,\mathcal{X}}\right]^{\intercal}$ be the $m \times D$ matrix of the means from $\mathcal{G}_{\mathcal{X}}$ and $\mathbf{M} = T(\mathbf{M}_0, \boldsymbol{\theta})$ be the transformed matrix, parametrised by $\boldsymbol{\theta}$. Using the chain rule, the gradient is $\frac{\partial f}{\partial \boldsymbol{\theta}} = \frac{\partial f}{\partial \mathbf{M}} \frac{\partial \mathbf{M}}{\partial \boldsymbol{\theta}}$.
Let $\mathbf{G} = \frac{\partial f}{\partial \mathbf{M}}$ be an $m \times D$ matrix, which can be found while evaluating the objective function by
\begin{equation}
\label{eqn:gradient_G}
\mathbf{G}_{i} = - \frac{1}{2\sigma^2} \sum_{j = 1}^{n} f_{ij} \left( \boldsymbol{\mu}_{i, \mathcal{X}}' - \boldsymbol{\mu}_{j, \mathcal{Y}} \right)
\end{equation}
where $\mathbf{G}_{i}$ is the $i$th row of $\mathbf{G}$ and $f_{ij}$ is a summand of $f$.
For rigid motion, $\mathbf{M} = \mathbf{M}_0 \mathbf{R}^{\intercal} + \mathbf{t}$ where $\mathbf{R}$ is the rotation matrix and $\mathbf{t}$ is the translation vector. The gradients with respect to each motion parameter are given by
\begin{align}
\frac{\partial f}{\partial \mathbf{t}} &= \mathbf{G}^{\intercal} \mathbf{1}_{m}
\label{eqn:gradient_translation} \\
\frac{\partial f}{\partial r_{i}} &= \mathbf{1}_{D}^{\intercal} \left( \left( \mathbf{G}^{\intercal} \mathbf{M}_{0} \right) \circ \frac{\partial \mathbf{R}}{\partial r_{i}} \right) \mathbf{1}_{D}
\label{eqn:gradient_rotation}
\end{align}
where $\mathbf{1}_{i}$ is the $i$-dimensional column vector of ones, $\circ$ is the Hadamard
product and $r_{i}$ are the elements parametrising $\mathbf{R}$: rotation angle $\alpha$ for 2D and a unit quaternion for 3D.
For the latter, the quaternion is projected back to the space of valid rotations after each update by normalisation.
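Continuing the sketch above (again with hypothetical names; our released implementation is in MATLAB), the objective and gradient for the 2D rigid case follow directly from (\ref{eqn:gradient_G})--(\ref{eqn:gradient_rotation}):

\begin{verbatim}
import numpy as np

def rigid_gradient_2d(M0, mu_y, phi_x, phi_y, sigma2, alpha, t):
    # M0: (m, 2) untransformed model means; mu_y: (n, 2) scene means
    c, s = np.cos(alpha), np.sin(alpha)
    R = np.array([[c, -s], [s, c]])
    M = M0 @ R.T + t                              # transformed means
    diff = M[:, None, :] - mu_y[None, :, :]       # (m, n, 2)
    d2 = (diff**2).sum(-1)
    norm = 1.0 / (4.0 * np.pi * sigma2)           # D = 2
    f_ij = -np.outer(phi_x, phi_y) * norm * np.exp(-d2 / (4.0 * sigma2))
    G = -(f_ij[:, :, None] * diff).sum(1) / (2.0 * sigma2)  # (m, 2)
    grad_t = G.sum(0)                             # G^T 1_m
    dR = np.array([[-s, -c], [c, -s]])            # dR/dalpha
    grad_alpha = ((G.T @ M0) * dR).sum()          # Hadamard-product form
    return f_ij.sum(), grad_alpha, grad_t
\end{verbatim}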
Since the objective function is smooth, differentiable and convex in the neighbourhood of the optimal motion parameters, gradient-based numerical optimisation methods can be used, such as nonlinear conjugate gradient or quasi-Newton methods. We use an interior-reflective Newton method \cite{coleman1996interior} since it is time and memory efficient and scales well.
However, since the objective function is non-convex over the search space, this approach is susceptible to local minima, particularly for large motions and point-sets with symmetries.
A multi-resolution approach can be adopted, increasing $\gamma$ at each iteration and initialising with the currently optimal transformation.
SVR is outlined in Algorithm~\ref{alg:SVR}.
\begin{algorithm}
\begin{algorithmic}[1]
\Require model point-set $\mathcal{X} = \{\mathbf{x}_{i} \}_{i = 1}^{\ell_{\mathcal{X}}}$, scene point-set $\mathcal{Y} = \{\mathbf{y}_{i} \}_{i = 1}^{\ell_{\mathcal{Y}}}$, transformation model $T$ parametrised by $\boldsymbol{\theta}$, initial parameter $\boldsymbol{\theta}_{0}$ such as the identity transformation
\Ensure locally optimal transformation parameter $\boldsymbol{\theta}^*$ such that $T(\mathcal{X}, \boldsymbol{\theta}^*)$ is best aligned with $\mathcal{Y}$
\State Select $\nu$ and $\gamma$ by estimation or cross-validation
\State Initialise transformation parameter: $\boldsymbol{\theta} \gets \boldsymbol{\theta}_{0}$
\Repeat
\State Train SVMs:
\Statex[1] $\mathcal{S}_{\mathcal{X}} = \left\{ \mathbf{x}_{i}^{\mathrm{SV}}, \; \alpha_{i,\mathcal{X}}^{\mathrm{SV}} \right\}_{i = 1}^{m} \gets \mathrm{trainSVM}(\mathcal{X}, \nu, \gamma)$
\Statex[1] $\mathcal{S}_{\mathcal{Y}} = \left\{ \mathbf{y}_{i}^{\mathrm{SV}}, \; \alpha_{i,\mathcal{Y}}^{\mathrm{SV}} \right\}_{i = 1}^{n} \gets \mathrm{trainSVM}(\mathcal{Y}, \nu, \gamma)$
\State Convert SVMs to GMMs using (\ref{eqn:transform_mu}), (\ref{eqn:transform_sigma}) and (\ref{eqn:transform_p}):
\Statex[1] $\mathcal{G}_{\mathcal{X}} = \left\{ \boldsymbol{\mu}_{i, \mathcal{X}} , \; \sigma^{2} , \; \phi_{i, \mathcal{X}} \right\}_{i = 1}^{m} \gets \mathrm{toGMM}(\mathcal{S}_{\mathcal{X}}, \gamma)$
\Statex[1] $\mathcal{G}_{\mathcal{Y}} = \left\{ \boldsymbol{\mu}_{i, \mathcal{Y}} , \; \sigma^{2} , \; \phi_{i, \mathcal{Y}} \right\}_{i = 1}^{n} \gets \mathrm{toGMM}(\mathcal{S}_{\mathcal{Y}}, \gamma)$
\State Optimise the objective function $f$~(\ref{eqn:objective_function}) using the \Statex[1] gradient~(\ref{eqn:gradient_translation}), (\ref{eqn:gradient_rotation}) with a trust region algorithm
\State Update the parameter $\boldsymbol{\theta} \gets \argmin_{\boldsymbol{\theta}} f\left(\boldsymbol{\theta} \right)$
\State Anneal: $\gamma \gets \delta\gamma$
\Until{the change in $f$ or the iteration count meets a stopping criterion}
\end{algorithmic}
\caption{Support Vector Registration (SVR): A robust algorithm for point-set registration using one-class SVM}
\label{alg:SVR}
\end{algorithm}
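To make Algorithm~\ref{alg:SVR} concrete, the following Python sketch implements the 2D loop end-to-end. It is illustrative only and rests on several assumptions: scikit-learn's \texttt{OneClassSVM}, mixture weights obtained by normalising the dual coefficients, $\sigma^{2} = \sfrac{1}{2\gamma}$ as in the SVM-to-GMM conversion, and L-BFGS in place of the interior-reflective Newton method used in our implementation. It reuses \texttt{rigid\_gradient\_2d} from the sketch above.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from sklearn.svm import OneClassSVM

def to_gmm(points, nu, gamma):
    svm = OneClassSVM(nu=nu, gamma=gamma).fit(points)
    phi = np.ravel(svm.dual_coef_)
    return svm.support_vectors_, phi / phi.sum(), 1.0 / (2.0 * gamma)

def svr_2d(X, Y, nu=0.01, gamma=1.0, delta=10.0, iters=3):
    theta = np.zeros(3)                  # [alpha, tx, ty]
    for _ in range(iters):
        mu_x, phi_x, s2 = to_gmm(X, nu, gamma)
        mu_y, phi_y, _ = to_gmm(Y, nu, gamma)
        def cost(th):  # objective and gradient, eqs. above
            f, ga, gt = rigid_gradient_2d(mu_x, mu_y, phi_x, phi_y,
                                          s2, th[0], th[1:])
            return f, np.concatenate(([ga], gt))
        theta = minimize(cost, theta, jac=True, method='L-BFGS-B').x
        gamma *= delta                   # anneal
    return theta
\end{verbatim}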
\@ifstar\myssect\mysect{Merging Gaussian Mixtures}
\label{sec:merging}
For an SVGM to be useful for applications where each point-set may contain unique information, such as mapping, an efficient method of merging two aligned mixtures is desirable.
A na\"{i}ve approach is to use a weighted sum of the Gaussian mixtures \cite{deselaers2010object}, however, this would result in an unnecessarily high number of components with substantial redundancy. Importantly, the probability of regions not observed in both point-sets would decrease, meaning that regions that are often occluded would disappear from the model as more mixtures were merged. While the time-consuming process of sampling the combined mixture and re-estimating it with EM would eliminate redundancy, it would not alleviate the missing data problem. The same applies to faster sample-free variational-Bayes approaches \cite{bruneau2010parsimonious}. Sampling (or merging the point-sets) and re-estimating an SVGM would circumvent this problem, since the discriminative framework of the SVM is insensitive to higher-density overlapping regions, but this is not time efficient.
Algorithm~\ref{alg:GMMerge} outlines GMMerge, our efficient algorithm for parsimoniously approximating the merged mixture without weighting the intersection regions disproportionately. Each density of $\mathcal{G}_{\mathcal{X}}$ is re-weighted using a sparsity-inducing piecewise linear function. The parameter $t \in [0, \infty)$ controls how many densities are added. For $t = 0$, $\mathcal{G}_{\mathcal{XY}}$ contains only $\mathcal{G}_{\mathcal{Y}}$. As $t \to \infty$, $\mathcal{G}_{\mathcal{XY}}$ additionally contains every non-redundant density from $\mathcal{G}_{\mathcal{X}}$. Figure~\ref{fig:merge} shows the SVGM representations of two 2D point-sets, the na\"{i}vely merged mixture and the GMMerge mixture.
\begin{algorithm}
\begin{algorithmic}[1]
\Require aligned mixture models with unknown overlap $\mathcal{G}_{\mathcal{X}}$ and $\mathcal{G}_{\mathcal{Y}}$, parametrised by means $\boldsymbol{\mu}$, variances $\sigma^2$ and mixture weights $\phi$, and merging parameter $t$
\Ensure merged model $\mathcal{G}_{\mathcal{XY}}$
\State Initialise merged model: $\mathcal{G}_{\mathcal{XY}} \gets \mathcal{G}_{\mathcal{Y}}$
\For{$i = 1 \dots m$}
\State For the $i$th density of $\mathcal{G}_{\mathcal{X}}$, calculate:
\Statex[1] $\Delta = p \left( \boldsymbol{\mu}_{i,\mathcal{X}} \middle| \mathcal{G}_{i, \mathcal{X}} \right) - p \left( \boldsymbol{\mu}_{i,\mathcal{X}} \middle| \mathcal{G}_{\mathcal{Y}} \right)$
\State Update weight using sparsity-inducing function:
\Statex[1] $\phi_{i, \mathcal{X}} \gets \phi_{i, \mathcal{X}} \max \left( 0 , \min \left( 1, t \Delta \right) \right)$
\If{$\phi_{i, \mathcal{X}} > 0$}
\State Add to merged mixture: $\mathcal{G}_{\mathcal{XY}} \gets \mathcal{G}_{i, \mathcal{X}} \cdot \mathcal{G}_{\mathcal{XY}}$
\EndIf
\EndFor
\State Renormalise $\mathcal{G}_{\mathcal{XY}}$
\end{algorithmic}
\caption{GMMerge: An algorithm for parsimonious Gaussian mixture merging}
\label{alg:GMMerge}
\end{algorithm}
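A compact sketch of GMMerge for isotropic mixtures follows (our illustration with hypothetical names; we read $p(\boldsymbol{\mu}_{i,\mathcal{X}} \,|\, \mathcal{G}_{i,\mathcal{X}})$ as the $i$th density's own value at its mean, $\phi_{i,\mathcal{X}}(2\pi\sigma^{2})^{-D/2}$):

\begin{verbatim}
import numpy as np

def gmm_pdf(x, mu, phi, sigma2):
    # x: (k, D) query points; mu: (n, D) means; phi: (n,) weights
    D = mu.shape[1]
    d2 = ((x[:, None, :] - mu[None, :, :])**2).sum(-1)
    n = (2.0 * np.pi * sigma2)**(-D / 2.0)
    return (phi[None, :] * n * np.exp(-d2 / (2.0 * sigma2))).sum(1)

def gmmerge(mu_x, phi_x, mu_y, phi_y, sigma2, t):
    D = mu_x.shape[1]
    peak = phi_x * (2.0 * np.pi * sigma2)**(-D / 2.0)
    delta = peak - gmm_pdf(mu_x, mu_y, phi_y, sigma2)
    w = phi_x * np.clip(t * delta, 0.0, 1.0)  # sparsifying re-weighting
    keep = w > 0
    mu = np.vstack([mu_y, mu_x[keep]])
    phi = np.concatenate([phi_y, w[keep]])
    return mu, phi / phi.sum()                # renormalise
\end{verbatim}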
\begin{figure*}[!t]
\centering
\begin{subfigure}[]{0.19\textwidth}
\includegraphics[width=\columnwidth]{butterfly_gx.pdf}
\caption{Aligned mixture $\mathcal{G}_{\mathcal{X}}$}
\label{fig:merge_gx}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.19\textwidth}
\includegraphics[width=\columnwidth]{butterfly_gy.pdf}
\caption{Aligned mixture $\mathcal{G}_{\mathcal{Y}}$}
\label{fig:merge_gy}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.19\textwidth}
\includegraphics[width=\columnwidth]{butterfly_naivemerge.pdf}
\caption{Na\"{i}ve merge}
\label{fig:merge_naivemerge}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.19\textwidth}
\includegraphics[width=\columnwidth]{butterfly_gmmerge.pdf}
\caption{GMMerge}
\label{fig:merge_gmmerge}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.19\textwidth}
\includegraphics[width=\columnwidth]{butterfly_gtmerge.pdf}
\caption{Ground truth merge}
\label{fig:merge_gtmerge}
\end{subfigure}
\caption{Merging Gaussian mixtures (\subref{fig:merge_gx}) and (\subref{fig:merge_gy}) with a na\"{i}ve weighted sum (\subref{fig:merge_naivemerge}) and GMMerge (\subref{fig:merge_gmmerge}). The mixture produced by GMMerge is almost identical to the ground truth (\subref{fig:merge_gtmerge}), while the na\"{i}ve approach over-emphasises overlapping regions. Best viewed in colour.}
\label{fig:merge}
\end{figure*}
\@ifstar\myssect\mysect{Experimental Results}
\label{sec:results}
SVR was tested using many different point-sets, including synthetic and real datasets in 2D and 3D, at a range of motion scales and outlier, noise and occlusion fractions.
In all experiments, the initial transformation parameter $\boldsymbol{\theta}$ was the identity, $\nu$ was 0.01 and $\gamma$ was selected by cross-validation, except where otherwise noted.
For all benchmark methods, parameters were chosen using a grid search.
\@ifstar\myssubsect\mysubsect{2D Registration}
\label{sec:results_2d}
To test the efficacy of SVR for 2D registration, the four point-sets in Figure~\ref{fig:results_2D_datasets} were used: \textsc{road}\footnote{Point-set from Tsin and Kanade \cite{tsin2004correlation}, available at \nolinkurl{http://www.cs.cmu.edu/~ytsin/KCReg/KCReg.zip}}, \textsc{contour}, \textsc{fish} and \textsc{glyph}\footnote{Point-sets from Chui and Rangarajan \cite{chui2003new}, available at \nolinkurl{http://cise.ufl.edu/~anand/students/chui/rpm/TPS-RPM.zip}}. Three benchmark algorithms were chosen: Gaussian Mixture Model Registration (abbreviated to GMR) \cite{jian2011robust}, Coherent Point Drift (CPD) \cite{myronenko2010point} and Iterative Closest Point (ICP) \cite{besl1992method}.
Annealing was applied for both SVR ($\delta = 10$) and GMR.
Note that the advantages of SVR manifest themselves more clearly on denser point-sets.
\begin{figure}[!t]
\centering
\begin{subfigure}[]{0.49\columnwidth}
\includegraphics[height=2.0cm]{road_motion.pdf}
\caption{\textsc{road} with rotation}
\label{fig:results_2D_road}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.49\columnwidth}
\includegraphics[height=2.0cm]{contour_outlier.pdf}
\caption{\textsc{contour} with outliers}
\label{fig:results_2D_contour}
\end{subfigure}
\begin{subfigure}[]{0.49\columnwidth}
\includegraphics[height=2.0cm]{fish_noise.pdf}
\caption{\textsc{fish} with noise}
\label{fig:results_2D_fish}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.49\columnwidth}
\includegraphics[height=2.0cm]{glyph_occlusion.pdf}
\caption{\textsc{glyph} with occlusion}
\label{fig:results_2D_glyph}
\end{subfigure}
\caption{Sample scene (left) and model (right) point-sets from each 2D dataset, undergoing a range of perturbations.}
\label{fig:results_2D_datasets}
\end{figure}
The range of motions for which a correct registration result was attained was tested by rotating the model point-set by $\alpha \in [-3.14, 3.14]$ radians with a step size of $0.01$.
In Table~\ref{tab:results_2d_motion_convergence_range}, we report the range of contiguous initial rotations for which the algorithm converged, defined as a rotation error $\leq 1^{\circ}$. The results show that SVR has a wider basin of convergence than the other methods, even for sparse point-sets.
\begin{table}[!t]
\centering
\caption{Convergence range (in radians). All rotation initialisations within these ranges converged (rotation error $\leq 1^{\circ}$).}
\label{tab:results_2d_motion_convergence_range}
\newcolumntype{C}{>{\centering\arraybackslash}X}
\begin{tabularx}{\columnwidth}{l C C C C}
\hline
\textbf{Point-Set} & \textbf{SVR} & \textbf{GMR} & \textbf{CPD} & \textbf{ICP}\\
\hline
\textsc{road} & \textbf{-3.1--3.1} & -3.0--3.0 & -1.6--1.6 & -0.8--0.8\\
\textsc{contour} & \textbf{-1.6--1.6} & -1.5--1.5 & -1.5--1.5 & -0.1--0.1\\
\textsc{fish} & \textbf{-1.6--1.6} & -1.5--1.5 & -1.2--1.3 & -0.4--0.5\\
\textsc{glyph} & \textbf{-1.6--1.6} & \textbf{-1.6--1.6} & -1.6--1.5 & -0.4--0.4\\
\hline
\end{tabularx}
\end{table}
To test the algorithm's robustness to outliers, additional points, drawn uniformly at random, were concatenated with the model and scene point-sets separately. To avoid bias, the outliers were sampled from within the minimum covering circle of the point-set. The motion was fixed to a rotation of $1$~radian ($57^{\circ}$) and the experiment was repeated $50$ times with different outliers each time. The mean rotation error for a range of outlier fractions is shown in Figure~\ref{fig:results_2D_outlier}, indicating that the proposed method is more robust than the others for large outlier fractions.
\begin{figure}[!t]
\centering
\begin{subfigure}[]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{2D_outliers_mean_mean_nolegend.pdf}
\caption{Rotation error vs outlier fraction}
\label{fig:results_2D_outlier}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{2D_noise_mean_mean_nolegend.pdf}
\caption{Rotation error vs noise fraction}
\label{fig:results_2D_noise}
\end{subfigure}
\begin{subfigure}[]{0.51\columnwidth}
\includegraphics[width=\columnwidth]{2D_occlusion_mean_mean_nolegend.pdf}
\caption{Rotation error vs occluded fraction}
\label{fig:results_2D_occlusion}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.48\columnwidth}
\includegraphics[width=\columnwidth, trim=0 0.2cm 0 0, clip]{2D_legend_box.pdf}
\label{fig:results_2D_legend}
\end{subfigure}
\caption{Outlier, noise and occlusion results for the 2D point-sets. The mean rotation error (in radians) of 50 repetitions is reported for each and the results show that SVR is relatively robust to a large range of perturbations commonly found in real data.}
\label{fig:results_2D}
\end{figure}
To test for robustness to noise, Gaussian noise sampled from the distribution $\mathcal{N} ( \mathbf{0}, ( \lambda \hat{\sigma} )^2 )$ was added to each point of the model point-set, where $\lambda$ is the noise fraction and $\hat{\sigma}$ is the estimated generalised standard deviation across the entire point-set (\ref{eqn:sigma_hat}).
A fixed rotation of $1$~radian was used and the experiment was repeated $50$ times, resampling each time. The average rotation error for a range of noise fractions is shown in Figure~\ref{fig:results_2D_noise} and indicates that SVR is comparable to the other methods.
To test for robustness to occlusions, we selected a random seed point and removed a fraction of the model point-set using $k$-nearest neighbours.
A fixed rotation of $1$~radian was used and the experiment was repeated $50$ times with different seed points. The mean rotation error for a range of occlusion fractions is shown in Figure~\ref{fig:results_2D_occlusion} and indicates that the algorithm is more robust to occlusion than the others.
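The three perturbation models are straightforward to reproduce. A sketch follows (our illustration with hypothetical helper names; the minimum covering circle is assumed to be given rather than computed):

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

def add_outliers(P, frac, centre, radius):
    k = max(1, int(frac * len(P)))
    r = radius * np.sqrt(rng.random(k))   # uniform over the disc
    a = 2.0 * np.pi * rng.random(k)
    out = centre + np.c_[r * np.cos(a), r * np.sin(a)]
    return np.vstack([P, out])

def add_noise(P, lam, sigma_hat):
    return P + rng.normal(0.0, lam * sigma_hat, P.shape)

def occlude(P, frac):
    k = max(1, int(frac * len(P)))        # remove k-NN of a seed point
    seed = P[rng.integers(len(P))]
    _, idx = cKDTree(P).query(seed, k=k)
    return np.delete(P, np.atleast_1d(idx), axis=0)
\end{verbatim}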
\@ifstar\myssubsect\mysubsect{3D Registration}
\label{sec:results_3d}
The advantages of SVR are particularly apparent with dense 3D point-sets. For evaluation, we used \textsc{dragon-stand}\footnote{Point-set from Brian Curless and Marc Levoy, Stanford University, at \nolinkurl{http://graphics.stanford.edu/data/3Dscanrep/}}, \textsc{aass-loop}\footnote{Point-set from Martin Magnusson, \"{O}rebro University, at \nolinkurl{http://kos.informatik.uni-osnabrueck.de/3Dscans/}} and \textsc{hannover2}\footnote{Point-set from Oliver Wulf, Leibniz University, at \nolinkurl{http://kos.informatik.uni-osnabrueck.de/3Dscans/}}
and seven benchmark algorithms: GMMReg (abbreviated to GMR) \cite{jian2011robust}, CPD \cite{myronenko2010point}, ICP~\cite{besl1992method}, NDT Point-to-Distribution (NDP) \cite{magnusson2007scan}, NDT Distribution-to-Distribution (NDD) \cite{stoyanov2012fast}, Globally-Optimal ICP (GOI) \cite{yang2013goicp} and \textsc{Super 4PCS} (S4P) \cite{mellado2014super}.
Annealing was used only where indicated.
To evaluate the performance of the algorithm with respect to motion scale, we replicated the experiment in \cite{jian2011robust} using the \textsc{dragon-stand} dataset. This contains 15 self-occluding scans of the dragon model acquired from different directions.
We registered all 30 point-set pairs with a relative rotation of $\pm 24^{\circ}$ and repeated this for $\pm 48^{\circ}$, $\pm 72^{\circ}$ and $\pm 96^{\circ}$. As per \cite{jian2011robust}, the criterion for convergence was $\hat{q} \cdot q > 0.99$, where $\hat{q}$ and $q$ are the estimated and ground truth quaternions respectively.
While $\gamma$ was selected by cross-validation, using the estimate $\hat{\sigma}$ yielded a very similar result.
The number of correctly converged registrations is reported in Table~\ref{tab:results_3d_motion_fraction}, showing that SVR has a significantly larger basin of convergence than the other local methods and is competitive with the slower global methods.
\begin{table}[!t]
\centering
\caption{Number of point-set pairs that converged for a range of relative poses. Mean computation time in seconds is also reported.}
\label{tab:results_3d_motion_fraction}
\newcolumntype{C}{>{\centering\arraybackslash}X}
\begin{tabularx}{\columnwidth}{C C C C C | C C}
\hline
& \multicolumn{4}{c|}{\textbf{Local}} & \multicolumn{2}{c}{\textbf{Global}}\\
\textbf{Pose} & \textbf{SVR} & \textbf{GMR} & \textbf{CPD} & \textbf{ICP} & \textbf{GOI} & \textbf{S4P}\\
\hline
$\pm 24^{\circ}$ & \textbf{30} & 29 & 26 & 28 & \textbf{30} & 29\\
$\pm 48^{\circ}$ & \textbf{29} & 20 & 18 & 19 & 27 & 24\\
$\pm 72^{\circ}$ & 16 & 13 & 14 & 13 & \textbf{18} & 17\\
$\pm 96^{\circ}$ & 4 & 2 & 3 & 1 & 10 & \textbf{13}\\
\hline
Runtime & 0.2 & 19.2 & 5.7 & \textbf{0.04} & 1407 & 399\\
\hline
\end{tabularx}
\end{table}
A representative sensitivity analysis is shown in Figure~\ref{fig:results_3D_sensitivity} for the \textsc{dragon-stand} dataset. It indicates that rotation error is quite insensitive to perturbations in $\gamma$ and is very insensitive to $\nu$, justifying the choice of fixing this parameter.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{3D_sensitivity_median_24.pdf}
\caption{Sensitivity analysis for $\gamma$ and $\nu$. The median rotation error (in radians) of all \textsc{dragon-stand} point-sets with $\pm 24^{\circ}$ pose differences is plotted with respect to multiples of $\hat{\gamma} = \sfrac{1}{2\hat{\sigma}^2}$.}
\label{fig:results_3D_sensitivity}
\end{figure}
To evaluate occlusion robustness, the same procedure was followed as for 2D, using the \textsc{dragon-stand} dataset. The mean rotation error (in radians) and the fraction of correctly converged point-set pairs with respect to the fraction of occluded points is shown in Figure~\ref{fig:results_3D_occlusion}, for relative poses of $\pm 24^{\circ}$ and $\pm 48^{\circ}$. The results show that SVR is significantly more robust to occlusion than the other methods.
\begin{figure}[!t]
\centering
\begin{subfigure}[]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{3D_occluded_mean_24_legend.pdf}
\caption{Mean rotation error for $\pm 24^{\circ}$}
\label{fig:results_3D_occlusion_mean_24}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{3D_occluded_mean_48_nolegend.pdf}
\caption{Mean rotation error for $\pm 48^{\circ}$}
\label{fig:results_3D_occlusion_mean_48}
\end{subfigure}
\begin{subfigure}[]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{3D_occluded_nSuccesses_24_nolegend.pdf}
\caption{Convergence rate for $\pm 24^{\circ}$}
\label{fig:results_3D_occlusion_nSuccesses_24}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{3D_occluded_nSuccesses_48_nolegend.pdf}
\caption{Convergence rate for $\pm 48^{\circ}$}
\label{fig:results_3D_occlusion_nSuccesses_48}
\end{subfigure}
\caption{Mean rotation error (in radians) and convergence rate of all \textsc{dragon-stand} point-sets with $\pm 24^{\circ}$ and $\pm 48^{\circ}$ pose differences, with respect to the fraction of occluded points.}
\label{fig:results_3D_occlusion}
\end{figure}
Finally, we report registration results on two large real-world 3D datasets shown in Figure~\ref{fig:results_3D_aerial}: \textsc{aass-loop} ($60$ indoor point-sets with ${\sim}13\,500$ points on average) and \textsc{hannover2} ($923$ outdoor point-sets with ${\sim}10\,000$ points on average), after downsampling using a 0.1~m grid. Both were captured using a laser scanner and ground truth was provided.
These are challenging datasets because sequential point-sets overlap incompletely and occluded regions are present.
The results for registering adjacent point-sets are shown in Table~\ref{tab:results_3d_large_aass} for \textsc{aass-loop} and Table~\ref{tab:results_3d_large_hannover2} for \textsc{hannover2}. The ICP and annealed NDT results are reported directly from Stoyanov~\mbox{\emph{et al.\ }} \cite{stoyanov2012fast} and we use their criteria for a successful registration (inlier): a translation error less than 0.5~m and a rotation error less than 0.2~radians. SVR outperforms the other methods by a significant margin, even more so when annealing ($\delta = 2$) is applied (SVR\textsuperscript{+}).
\begin{figure}[!t]
\centering
\begin{subfigure}[]{0.66\columnwidth}
\includegraphics[width=\columnwidth]{aass-loop.png}
\caption{\textsc{aass-loop}}
\label{fig:results_3D_aerial_aass}
\end{subfigure}
\hfill
\begin{subfigure}[]{0.32\columnwidth}
\includegraphics[width=\columnwidth]{hannover2.png}
\caption{\textsc{hannover2}}
\label{fig:results_3D_aerial_hannover2}
\end{subfigure}
\caption{Two large-scale 3D datasets.}
\label{fig:results_3D_aerial}
\end{figure}
\begin{table}[!t]
\centering
\caption{Registration results for \textsc{aass-loop}. While mean translation error (in metres) and rotation error (in radians) are commonly reported, the percentage of inliers (successful registrations) is a more useful metric for comparison. The mean computation time (in seconds) is also reported. SVR\textsuperscript{+} is SVR with annealing.}
\label{tab:results_3d_large_aass}
\newcolumntype{C}{>{\centering\arraybackslash}X}
\begin{tabularx}{\columnwidth}{l C C C C C C C}
\hline
\textbf{Metric} & \textbf{SVR} & \textbf{SVR\textsuperscript{+}} & \textbf{GMR} & \textbf{ICP} & \textbf{NDP} & \textbf{NDD} & \textbf{S4P}\\
\hline
Transl. & 0.95 & \textbf{0.67} & 1.61 & 0.99 & 1.10 & 0.85 & 0.71\\
Rotation & 0.08 & 0.06 & 0.12 & 0.04 & \textbf{0.02} & 0.06 & 0.32\\
Inlier \% & 81.4 & \textbf{86.4} & 18.6 & 55.2 & 50.0 & 63.8 & 78.0\\
\hline
Runtime & 3.43 & 29.7 & 599 & 10.8 & 9.12 & \textbf{1.02} & 60.7\\
\hline
\end{tabularx}
\end{table}
\begin{table}[!t]
\centering
\caption{Registration results for \textsc{hannover2}. The mean translation error (in metres), rotation error (in radians), inlier percentage and mean runtime (in seconds) are reported. SVR\textsuperscript{+} uses annealing.}
\label{tab:results_3d_large_hannover2}
\newcolumntype{C}{>{\centering\arraybackslash}X}
\begin{tabularx}{\columnwidth}{l C C C C C C C}
\hline
\textbf{Metric} & \textbf{SVR} & \textbf{SVR\textsuperscript{+}} & \textbf{GMR} & \textbf{ICP} & \textbf{NDP} & \textbf{NDD} & \textbf{S4P}\\
\hline
Transl. & 0.10 & \textbf{0.09} & 1.32 & 0.43 & 0.79 & 0.40 & 0.40\\
Rotation & \textbf{0.01} & \textbf{0.01} & 0.05 & 0.05 & 0.05 & 0.05 & 0.03\\
Inlier \% & \textbf{99.8} & \textbf{99.8} & 8.88 & 74.4 & 54.2 & 76.4 & 75.0\\
\hline
Runtime & 14.0 & 32.6 & 179 & 5.68 & 4.03 & \textbf{0.51} & 39.7\\
\hline
\end{tabularx}
\end{table}
The mean computation times of the experiments, regardless of convergence, are reported in Tables~\ref{tab:results_3d_motion_fraction}, \ref{tab:results_3d_large_aass} and~\ref{tab:results_3d_large_hannover2}. All experiments were run on a PC with a 3.4~GHz quad-core CPU and 8~GB of RAM.
The SVR code is written in unoptimised MATLAB, except for a cost function in C++, and uses the LIBSVM \cite{chang2011libsvm} library. The benchmarking code was provided by the respective authors, except for ICP, for which a standard MATLAB implementation with k-d tree nearest-neighbour queries was used.
For the \textsc{dragon-stand} speed comparison, all point-sets were randomly downsampled to $2\,000$ points, because GMR, CPD, GOI and S4P were prohibitively slow for larger point-sets.
\@ifstar\myssect\mysect{Discussion}
\label{sec:discussion}
The results show that SVR has a larger region of convergence than the other methods and is more robust to occlusions. This is an expected consequence of the SVGM representation, since it is demonstrably robust to missing data.
In addition, the computation time results show that it scales well with point-set size, unlike GMR and CPD, largely due to the data compression property of the one-class SVM. There is a trade-off, controlled by the parameter $\gamma$, between registration accuracy and computation time.
For the application of accurate reconstruction using our framework, the one-class SVM may be replaced with a two-class SVM to better model the fine details of a scene.
To generate negative class (free space) training points, surface points were displaced along their approximated normal vectors by a fixed distance $d$ and then those points that were closer than $0.9d$ to their nearest surface point were discarded. The SVGMs constructed using this approach may be fused using GMMerge. However, for the purposes of registration, capturing fine detail in this way is unnecessary, counter-productive and much less efficient.
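A sketch of this free-space sampling step (illustrative only; point normals are assumed to be given) is:

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def free_space_points(surface, normals, d):
    # Displace along normals by d; discard candidates closer than
    # 0.9 * d to the nearest surface point.
    candidates = surface + d * normals
    dist, _ = cKDTree(surface).query(candidates, k=1)
    return candidates[dist >= 0.9 * d]
\end{verbatim}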
While SVR is a local algorithm, it can still outperform global algorithms on a number of measures, particularly speed, for certain tasks.
In Section~\ref{sec:results_3d}, we compared SVR with the guaranteed-optimal method Globally-Optimal ICP (GOI)~\cite{yang2013goicp} and the faster but not optimal method \textsc{Super 4PCS} (S4P) \cite{mellado2014super}. The motion scale results of GOI were comparable to those of our method, while its average runtime was four orders of magnitude longer. Note that, for point-sets with missing data or partial overlap, a globally-optimal alignment is not necessarily correct. S4P had a more favourable runtime--accuracy trade-off but was nonetheless outperformed by SVR.
\@ifstar\myssect\mysect{Conclusion}
\label{sec:conclusion}
In this paper, we have presented a framework for robust point-set registration and
merging using a continuous data representation. Our point-set representation is constructed by training a one-class SVM and then approximating the output function with a GMM.
This representation is sparse
and robust to occlusions and missing data, which are crucial attributes for efficient and robust registration.
The central algorithm, SVR, outperforms state-of-the-art approaches in 2D and 3D rigid registration, exhibiting a larger basin of convergence. In particular, we have shown that it is robust to occlusion and missing data and is computationally efficient.
The GMMerge algorithm complements the registration algorithm by providing a parsimonious and equitable method of merging aligned mixtures, which can subsequently be used as an input to SVR.
There are several areas that warrant further investigation. Firstly, there is significant scope for optimising the algorithm using, for example, approximations like the improved fast Gauss transform \cite{yang2003improved} or faster optimisation algorithms that exploit an analytic Hessian. Secondly, non-rigid registration is a natural extension to this work and should benefit from the robustness of SVR to missing data.
It may also be useful to train the SVM with full data-driven covariance matrices \cite{abe2005training} and use the full covariances for registration \cite{stoyanov2012fast}.
Finally, methods of constructing tight bounds for an efficient branch-and-bound framework based on SVR could be investigated in order to implement a globally-optimal registration algorithm.
{\small
\bibliographystyle{ieee}
\section{Introduction} \label{sec:intro}
\input{01-introduction.tex}
\vspace{-0.1cm}
\section{Problem Statement} \label{sec:problem}
\input{02-problem.tex}
\vspace{-0.3cm}
\section{Taxonomy of Approaches} \label{sec:approaches}
\input{03-approaches.tex}
\vspace{-0.3cm}
\section{Application Domains} \label{sec:applications}
\input{04-applications.tex}
\input{041-perception.tex}
\input{042-planning.tex}
\input{043-control.tex}
\input{07-tables.tex}
\input{044-otherApplications.tex}
\vspace{-0.4cm}
\section{Discussion}
\label{sec:open}
\input{05-open.tex}
\vspace{-0.3cm}
\section{Acknowledgments} \label{sec:acks}
We thank James Paulos for fruitful discussions in the early stages of this work.
\vspace{-0.3cm}
\bibliographystyle{bibfiles/IEEEtran}
\subsection{The Need for Resilient Multi-Robot Systems}
With progress we also face new challenges. We now depend on connected automated systems to provide key infrastructural services, such as logistics~\cite{tilley_Automation_2017,kamagaew_Concept_2011}, resource distribution~\cite{enright_Optimization_2011b, ma_Lifelong_2017a}, transport systems~\cite{hyldmar_Fleet_2019b, dressler_Intervehicle_2014a, ferreira_Selforganized_2010a}, manufacturing~\cite{cherubini_Collaborative_2016}, and agriculture~\cite{noguchi_Robot_2011, albani_Monitoring_2017}.
The usage of multiple connected robots over a single robot provides evident gains (e.g., work distribution, spatial coverage, specialization).
However, as connections are established, information is shared, and dependencies are created, these systems give rise to new vulnerabilities and threats.
Rodin's book on resilience provides ample real-world evidence that shows how \textit{the failure of a single entity can disrupt operations to leave dependencies unanswered and fundamental necessities unfulfilled}~\cite{rodin_Resilience_2014a}. The book argues that principles such as readiness, responsiveness, and revitalization would lead to resilience, but it is not always clear how such principles can be transformed into actionable plans. Also, while such general guidelines hold in
social systems, it is not clear
how the field of automation and robotics would be able to leverage them to increase system resilience.
We focus on the domain of networked robotic systems---multi-robot systems, in short---wherein individual autonomous machines work together in pursuit of higher-order missions and goals.
The virtue we seek to characterize and acquire is \textit{resilience}. Yet how is it defined, and how may it be measured? How do we build resilient automated systems?
This survey article aims at providing answers to these questions. By doing so, our argument develops to state that {\bf{\emph{resilience must become a central engineering paradigm}}}.
\vspace{-0.3cm}
\subsection{From Robustness to Resilience}
\label{subsec:history}
While robustness is a classic theme in systems engineering, resilience has emerged as an important new paradigm, and publication trends reveal this \textit{shift in focus towards resilience}. Figure \ref{fig:resilience_pubs_xplore} plots annual publications from IEEExplore\footnote{\url{https://ieeexplore.ieee.org}}, here showing the percentage difference (relative to the year 2000) of papers with keywords `resilient' or `resilience' and of papers with keywords `robustness' or `robust'.
Papers addressing robustness grew from just under 5,000 in 2000 to over 17,000 in 2021 (a three-fold increase), while publications considering resilience grew from 150 in 2000 to over 2,200 in 2021 (a fifteen-fold increase).
\begin{figure}[tb]
\centering
\includegraphics[width = 0.75\columnwidth]{figures/pubs_percentdiff.pdf}
\caption{\edited{Publication trends reveal the rapid increase in the study of resilience, plotted as the percent difference of resilience and robustness relative to their respective number of publications in year 2000.}}
\label{fig:resilience_pubs_xplore}
\vspace{-0.3cm}
\end{figure}
Robustness is central to robotics. One of robotics' most influential papers, published by R. Brooks in 1985 (with more than 10,000 citations), is concerned with a robust robot control architecture~\cite{Brooks85}. While robustness is not explicitly tested for nor measured, given an adequate amount of over-provisioning in the robot's design, Brooks' architecture allows for real-time adjustments to internal robot component failures.
Other early work on robustness was heavily inspired by the influence of control theorists~\cite{Slotine91book-appliedNonlinearControl}. The domain of robust robot manipulators enjoyed significant attention early on,
e.g.,\xspace~\cite{slotine_Robust_1985, lim_Robust_1987, colgate_Robust_1988}, yet provided a very narrow lens on robustness through the consideration of \textit{parametric uncertainty} alone. Later work began to generalize robust measures to address causes such as imperfect motion, environmental dynamics~\cite{luders_Chance_2010}, and adversarial attacks~\cite{basilico_Extending_2009}. The commonality of these methods is their reliance on models of uncertainty and adversity, with solutions often shown to be robust under the action of \textit{bounded disturbance}. These conditions are idealistic; real-world stories abound~\cite{taleb_black_2007}. While the field of \textit{adaptive control} aims at providing better ways of adapting to changing process dynamics through online tuning of model parameters, these methods, too, are burdened by design assumptions (e.g., gain scheduling) that restrict operations to well-defined conditions~\cite{astrom_Adaptive_2013}.
\edited{In a series of papers, Doyle and collaborators argue that any attempt to maximize robustness leads to fragility~\cite{doyle_Rules_2007, doyle_robust_2005}. This is best seen through the simple example of a linear system with a feedback loop, in which any attempt to reduce error within a range of frequencies results in an increase of error in another frequency range~\cite{olsman_Hard_2019}.
These effects are more prominent in networked systems including cellular/molecular networks in biology~\cite{carlson_Complexity_2002} and the internet~\cite{doyle_robust_2005}.
The best designed complex networked systems are robust to random component failures, but remarkably fragile to targeted out-of-distribution attacks.}
\edited{This `robust-yet-fragile' behavior is typical of the multi-robot domain:} the failure of just one robot may cascade and consequently undermine the performance of the system as a whole (e.g., see~\cite{bradshaw_Ocado_2021}). A solution to this problem was first proposed by Parker through the ALLIANCE architecture~\cite{parker_ALLIANCE_1998} that solves multi-task problems through a distributed program that allows robots to select their actions as a function of their own internal state as well as environmental conditions. This notion of \textit{fault-tolerant} multi-robot systems was refined in a subsequent body of literature, considering specific components in the autonomy pipeline, i.e., perception, planning, and control---each of which are reviewed in depth in Sec.~\ref{sec:applications}.
The realization of diverse failure modes in the multi-robot domain instigates a delineation between robustness and resilience. We understand \textit{robustness} to be the ability to withstand or overcome adverse conditions or rigorous testing without any structural changes in the multi-robot system. Robustness accommodates uncertainty and risk, but refers to the sensitivity of a particular desirable system output in response to parametric changes or bounded disturbances.
In contrast, \textit{resilience} refers to the capability of withstanding or overcoming adverse conditions or shocks, and unknown, unmodeled disturbances.
Quintessentially, resilient systems \textit{relax assumptions on expected conditions}.
Robust behaviors are mitigating, whereby actions or design choices are taken in advance of disruption (e.g., through pre-planned methods for the rejection of disturbances).
During disruption, the system remains in the same state, as no structural transformations take place. Resilient behaviors, instead, incorporate agile policies that allow the system to transform itself, to adapt to newly perceived conditions.
Hence, providing resilience often involves system-wide \textit{re-organization}, \textit{adaptation} and \textit{growth}.
The following definition summarizes this:
\begin{definition}[Resilient multi-robot system]
A resilient multi-robot system is capable of withstanding or overcoming {unexpected} adverse conditions or shocks, and unknown, unmodeled disturbances. The property of resilience is associated with a system-wide transformation (e.g., reconfiguration, adaptation, or growth), and refers to the contingent nature of the robots' behaviors
\edited{that is aimed at preserving the existence of functionality, or at minimizing the intervals in which functionality is compromised.}
\end{definition}
This definition of resilience highlights the relevance of \textit{interaction} between robots so that the system as a whole can leverage emanating \textit{capabilities} with the goal of retaining some task-level \textit{performance}.
These three concepts are dealt with in more detail in Sections~\ref{sec:capabilities}, \ref{sec:interaction}, and~\ref{sec:performance}.
\vspace{-0.3cm}
\subsection{Problem Domains}
\label{subsec:domains}
In this survey, we consider three classical application domains within robotics research: perception, planning, control.
Perception is the creation of an internal model of the world given sensor data and priors.
Planning uses the robot's internal world model to plan a course of action to achieve a desired goal.
Control ensures that the course of action is correctly executed. These domains constitute three key robotic research
areas and form the building blocks of modern autonomous systems. In this context, we use the term `perception' in a broad sense, encompassing 2D vision (e.g.,\xspace object detection and pose estimation),
3D localization and mapping, sensor fusion, and high-level scene understanding.
Similarly, `planning' includes both motion and task planning, as well as task allocation, while
`control' spans topics from traditional control theory to networked control and consensus.
\vspace{-0.3cm}
\subsection{Contributions of this Survey}
\label{subsec:contributions}
This article aims not only to delineate robustness vs. resilience and to contextualize this within the multi-robot domain, but also to provide actionable open problems that define directions where, we believe, we should be heading next. Our contributions are listed as follows:
\begin{itemize}
\item We provide new \textit{definitions and terminology} that constitute the resilience problem.
\item We provide the first \textit{formal model} of resilience engineering. We introduce notation to support the model.
\item We provide a \textit{taxonomy} of approaches towards resilience in multi-robot systems.
\item We introduce key \textit{labels} that facilitate an analysis of the body of existing work and review existing papers with respect to our taxonomy and formalization.
\item We provide an enumeration of \textit{open problems} that encompass key challenges and areas of future work.
\end{itemize}
\edited{Following an early seminal work surveying multi-robot systems~\cite{parker_Multiple_2016},} several recent surveys on multi-robot systems have been released~\cite{Halsted21arxiv-multiRobotSurvey,Kegeleirs21frontier-multiRobotSLAMSurvey,Dorigo21ieee-multiRobotSurvey,Lajoie21arxiv-multiRobotSLAMSurvey,zhou_Multirobot_2021}, while none of them share our focus on resilience.
Halsted et al.~\cite{Halsted21arxiv-multiRobotSurvey} focus on distributed optimization techniques for multi-robot systems.
Kegeleirs et al.~\cite{Kegeleirs21frontier-multiRobotSLAMSurvey} focus on SLAM and point out the importance of decentralized approaches as opposed to more traditional centralized multi-robot SLAM.
Lajoie et al.~\cite{Lajoie21arxiv-multiRobotSLAMSurvey} provide a more in-depth discussion on multi-robot SLAM, including
mathematical formulations and discussion about open problems.
Dorigo et al.~\cite{Dorigo21ieee-multiRobotSurvey} complement this survey by reviewing history and new applications of swarm robotics.
Finally, a recent survey provided in~\cite{zhou_Multirobot_2021} has similar interests, yet focuses on a narrower segment of multi-robot approaches (i.e., mainly coordination), and does not provide formal taxonomies.
\subsection{Capabilities and Constraints}
\label{sec:capabilities}
In robotics, there are four fundamental \textbf{\textit{capability}} classes: \textit{(i)} sensing, \textit{(ii)} computation, \textit{(iii)} actuation, \textit{(iv)} communication~\cite{siciliano_Springer_2008}. While sensors, actuators, and computation are well-studied components that underpin an individual robot's perception-action loop, in a \textit{multi}-robot system, the action loop needs to be closed over a communication channel.
Explicit communication (e.g., via narrowband communication channels) facilitates robot interaction through the dissemination of hidden and unobservable values, giving rise to perception-action-communication loops that provide feedback to local agent controllers~\cite{yang_grand_2018a, fink_Robust_2011}.
Critical parameters include connectivity and range~\cite{mosteo_Multirobot_2008, fink_robust_2013}, topology-dependent delay~\cite{schwager_Time_2011}, and bandwidth~\cite{trawny_Cooperative_2009a,nerurkar_communicationbandwidthaware_2013}.
While we are tempted to design in capabilities based on what components can do, we often find ourselves limited by what they cannot do, i.e., by their \textit{constraints}.
Constraints are most commonly formulated as energy budgets~\cite{robinson_efficient_2018,Tzoumas20tac-sLQG}, but specific formulations can vary: the work in~\cite{prorok_Redundant_2019a, malencia_fair_2021} includes budgets on the number of redundant robots;~\cite{setter_Energyconstrained_2016, notomista_resilient_2021} considers monitoring battery levels;~\cite{best_Online_2018} considers maximum travel time budgets;~\cite{dimario_Distributed_2015,Carlone18tro-attentionVIN} considers computing budgets; and ~\cite{liu_Optimal_2013} considers budgets that represent affordable `prices' in multi-robot auctions.
Accurately modeling constraints is reminiscent of the problem at hand, i.e.,\xspace resilience engineering. If constraints are characterized a priori, we can design our systems to operate accordingly. The challenge, however, lies in discovering, adapting to, and overcoming new constraints.
\vspace{-0.3cm}
\subsection{Types of Robot Interaction}
\label{sec:interaction}
\begin{figure*}[tb]
\centering
\includegraphics[width = \textwidth]{figures/Figure3Cs_combined.pdf}
\caption{\edited{The \textit{three Cs} of robot interaction: coordination, cooperation, collaboration. Coordination \textit{(left)} such as multi-robot coverage, has sub-linear gains in the number of robots, where at best the gains remain proportional to the number of robots. Cooperation \textit{(middle)} achieves super-additive gains but may depend on a threshold number of robots; with a large enough team, multi-agent search and tracking problem is enabled through multi-hop communications. Collaboration \textit{(right)} involves heterogeneous agents, where performance improves with the number of species, and the corresponding complementarity of the species' capabilities.
}}
\label{fig:three_cs}
\vspace{-0.3cm}
\end{figure*}
The strength of multi-robot systems lies in the robots' ability to work together.
Networks of agents provide key infrastructural services successfully by leveraging their system-wide complementarity, diversity, and redundancy. However, not all systems interact in the same way, as dependencies arise from a variety of conditions (i.e., spatial, temporal or functional relationships).
Different types of interaction may create different vulnerabilities or lead to different capabilities at a systems level. Here, we classify various types of multi-robot interactions into three main groups: coordination, cooperation, collaboration. We refer to these as the \textit{\underline{three Cs} of robot interaction}, illustrated in Fig.~\ref{fig:three_cs}.
\textbf{\textit{Coordination}} seeks \edited{\textit{additive} performance gains} by minimizing interference within a system, such as avoiding collisions (e.g., in multi-robot path planning) or avoiding duplicate work (e.g., in multi-robot coverage). For example, in a warehouse setting, the number of boxes that a team of robots can move per hour increases linearly as more robots join the team, as long as the team coordinates their actions. Similarly, in distributed coverage tasks, the amount of time it takes for a team of robots to cover the full area decreases linearly as more robots join the team, as long as the team coordinates their motion.
Coordinating agents need not share goals (though they often do) because agents are awarded for their individual local performance.
\edited{However, team performance may at times only exhibit \textit{subadditive} gains as the number of coordinating agents increases, in particular when considering coordination among an increasingly redundant set of robots (e.g.,~\cite{prorok_Redundant_2019a}).}
\textbf{\textit{Cooperation}} considers teamwork where the system can achieve \edited{\textit{superadditive}} improvement, i.e., where the `whole is greater than the sum of its parts.’ Cooperating agents share goals and leverage teammates' help to improve task performance as a system, rather than just minimizing interference among agents as seen in coordination. \edited{Cooperation, however, may depend on threshold numbers.
Consider a multi-agent search and tracking problem in which sensed information needs to be communicated to a base-station. In addition to coordinating to enable coverage, it may be necessary for some agents to relay information using multi-hop communications. Even if more agents are recruited for the task, the performance may not increase until a communication network can be established, which may require a threshold to be exceeded.
Thus, the performance in a team of cooperative agents may not change significantly with an increase in team size until a threshold is reached, upon which, there can be a dramatic improvement in performance.
Similarly, in cooperative driving, a single vehicle gains no benefits on its own. As surrounding vehicles participate in a shared cooperative driving style, each vehicle gains efficiency with the help of the cooperating agents as well as gains from the increased traffic throughput that results from the reduced system-wide congestion.
On the other hand, tasks like cooperative manipulation and object transport can exhibit superadditive gains with increasing team size.}
\textbf{\textit{Collaboration}} involves \textit{heterogeneous} team interaction where agents leverage \textit{complementary} capabilities, \edited{also leading to \textit{superadditive} performance gains}. This differs from cooperation in that there is a need for specific types of agents to work together due to task requirements and inherent agent constraints~\cite{prorok_Impact_2017a}.
The resulting performance is a step function: task performance only reaches a satisfactory level when all capabilities are present.
For example, a team of agents searching for targets in a forest might leverage teamwork between aerial as well as ground vehicles. The aerial vehicles' capabilities are used to map large areas from a birds-eye view and inform exploration strategies, whereas the ground vehicles collect close-up first-person view information, or retrieve targets.
\vspace{-0.3cm}
\subsection{Performance}
\label{sec:performance}
There is abundant literature within the multi-robot field that deals with the development of methods that strive to reach and maintain efficiency (e.g., stability around an equilibrium), across a vast variety of target functionalities~\cite{siciliano_Springer_2008}.
Yet, while evidence suggests that efficiency-driven objectives lead to brittle performance under perturbation~\cite{lechner_Adversarial_2021, tsipras_Robustness_2019}, there is a dearth of work that discusses how to maintain \textit{existence} instead of \textit{efficiency} of functionality.
Classically, robustness is measured by how much the system loses in terms of performance during disruptions. In many cases, it measures how well stability is maintained near an equilibrium state, through either the speed of return to that equilibrium or through the resistance to disturbance.
Analogously, resilience could be measured by the magnitude of disturbance that can be absorbed before the system needs to change its structure by changing the variables and processes that control behavior. But, as pointed out by Holling~\cite{holling_Engineering_1996}, there are systems that are able to maintain functionality by transitioning between multi-stable states---if there is more than one objective function, then where is the optimum, and what methods should we use to reach it?
The tension between \textit{efficiency of functionality} (e.g., thriving) and \textit{existence of functionality} (e.g., surviving) is still poorly understood, and few measures exist to quantify it. The dichotomy between robustness and resilience is illustrated in Fig.~\ref{fig:resilience_vs_robustness}.
A few recent works propose domain-specific measures of resilience. For example, in transport engineering, resilience is estimated as the change in efficiency resulting from roadway disruptions~\cite{ganin_Resilience_2017}. Areas such as biology~\cite{pimm_complexity_1984}, health~\cite{zautra_Resilience_2010, davydov_Resilience_2010}, and the built environment~\cite{hassler_Resilience_2014} have also dedicated a decade of research into this broad question; but tying together formalisms in a cross-disciplinary manner proves hard, if not impossible, due to incompatible quantities of interest. While preceding ideas may, at the very least, inspire resilience measures in the multi-robot systems domain, further dimensions must be considered: e.g.,\xspace the various behavioral changes that occur over time to stymie a disruption, or the time it takes for the system to reach a new steady state, or the performance of the system at the equilibrium after disruption, or even the number of possible equilibria that ensure system functionality.
\begin{openProblem} [Measurement of Resilience]
If the property of resilience is associated with a system-wide transformation (e.g., reconfiguration, adaptation, or growth), then new multi-dimensional measures need to be developed that account for these changes holistically.
\label{op:resilience_measure}
\end{openProblem}
\subsection{Stressors and Key Variables}
Networked robotic systems encounter myriad adverse conditions, such as distributional noise whose instantiation is a priori unknown~\cite{lajoie_DOORSLAM_2020,Mangelson18icra}, and disturbances outside the robots' world model~\cite{lechner_Adversarial_2021}. We also consider targeted disturbances, such as adversaries intent on disrupting the system (e.g.,~\cite{saulnier_Resilient_2017a, mitchell2020gaussian}) and non-cooperative agents competing for resources~\cite{lowe_MultiAgent_2017a, blumenkamp_Emergence_2020b}. Resilience is achieved by withstanding and overcoming such adverse conditions. We identify two main \textit{stressor} dimensions that delineate whether the stressor is stochastic or out-of-distribution, and whether the stressor is targeted or not. Each stressor type is defined by \textit{key variables,} i.e., the models and parameters that characterize it.
\textbf{ \textit{Stochastic}} stressors entail the noise and uncertainty that is present throughout robotics systems. The key variables of stochastic stressors are hyper-parameters of a disturbance model. For example, the key variables of a Gaussian stochastic stressor are mean and standard deviation. When dealing with stochastic stressors, the system may possess a priori knowledge of the type of model that characterizes the disturbance, but may also be capable of adapting or updating the model parameters in-situ, during operation. Even with a perfect model and well-tuned key variables, robotic systems are challenged because the exact, true instantiation of such stressors is rarely known a priori.
\textbf{ \textit{Out-of-distribution}} stressors are disturbances that are not captured by the robot systems' model. Similar to stochastic stressors, the exact, true instantiation of an out-of-distribution stressor is rarely known a priori. However, out-of-distribution stressors are more challenging because the disturbance is not even probabilistically known beforehand; in other words, the disturbance is unknown to the model. Therefore, the key variable of out-of-distribution stressors is the model itself, together with its hyper-parameters (which may change when the model changes).
\textbf{ \textit{Targeted}} stressors are distinguished by the existence of an \textit{intent}: targeted stressors are goal-oriented and are therefore often functions of the robotic system itself, capable of adapting to changes in the robotic system. A common example of targeted stressors are adversarial disturbances, which are intent on disrupting the robotic system. But not all targeted stressors are adversarial---external agents (that have their own goals and are therefore non-cooperative) compete for the resources that are shared with the robotic system in question. For example, connected autonomous multi-vehicle systems that share road-space with human drivers must deal with their potentially non-cooperative driving behavior. The humans' behavior is targeted (i.e., through egocentric driving goals), yet non-adversarial.
The distinction of targeted and untargeted stressors does not impact the key variables. As seen in Fig.~\ref{fig:stressors}, targeted and untargeted stressors are both further categorized as stochastic versus out-of-distribution stressors which in turn define the key variables (as seen above).
\begin{figure}
\centering
\includegraphics[scale=0.35]{figures/FigureStressors_1.pdf}
\caption{The classification of stressor types, showing that the stressors can be either targeted or un-targeted. Further, the stressors can be classified as stochastic, if the robots have a model of the stressor, or out-of-distribution otherwise.}
\label{fig:stressors}
\vspace{-0.3cm}
\end{figure}
\vspace{-0.3cm}
\subsection{Approaches}
There are multiple types of approaches by which a robotic system can withstand or overcome stressors. Stressor types, as described above, are defined by their key variables. Similarly, approach types are defined by how robotic systems interact with key stressors variables. Specifically, a stressor type defines \textit{what} the key variables are, whereas an approach type defines \textit{how} the key variables are tuned. In the following, we introduce three approach types, \textit{\textbf{pre-}}, \textit{\textbf{intra-}}, and \textit{\textbf{post-operative}}, which are agnostic to the stressor type.
To help define these approaches, we make use of some notation. In robotics applications ranging from estimation to planning, the goal is to optimize an objective function\footnote{We illustrate this optimization with the \textit{max} function, though any optimization function can be used.} (e.g., localization accuracy or coverage) over some time horizon. Let $f$ represent the \textbf{objective function}, \edited{which takes a general form here to encapsulate the various manifestations of resilience seen in Fig.~\ref{fig:resilience_vs_robustness} and the problem domains considered in this survey}. Let $x$ represent the system decision variable over which this function is optimized, and let $T$ be the time horizon (in non-sequential optimization, $T$ is a single step). Resilience in these systems is concerned with the impact of the key variables of the stressors, denoted $\phi$, on the system performance, i.e., on the system's ability to preserve its functionality despite the stressors.
\textbf{ \textit{Pre-operative}} approaches pre-determine the key variables of the stressor \edited{a priori, and provide resilience \textit{by design}}. In the context of the robotics system, this means the objective function is optimized with respect to the decision variable assuming given (or pre-calculated) key variables.
\begin{definition}[Pre-operative] \label{def:pre-operative}
Pre-operative approaches optimize the system objective function, $f$, over the system decision variables, $x$, with respect to given key variable values, $\phi$:
\[
\displaystyle{ \max_{x_0, \ldots, x_T} \ f(x \mid \phi).}
\]
\end{definition}
Because these approaches are offline with respect to the stressor, they often provide resilience against an expected or worst-case disturbance\footnote{This definition of pre-operative approaches is reminiscent of \textit{robustness}.}.
One trait of pre-operative approaches is that their resilient actions are taken regardless of the presence of the stressor. For example, a system that is designed to be resilient to five adversaries will take the same actions no matter how many adversaries are present. Another common aim of pre-operative approaches is to achieve robustness or resilience through over-provisioning and redundancy.
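A hedged toy instance of Definition~\ref{def:pre-operative} follows (in Python; the objective $f$ below is an illustrative stand-in trading nominal performance against noise amplification, not a model from any cited work). The decision variable is optimized offline against a fixed key variable $\phi$:
\begin{verbatim}
import numpy as np

phi = 0.5   # given (pre-calculated) key variable: noise level

def f(x, phi):
    # toy trade-off: nominal performance vs. noise amplification
    return -((1.0 - x) ** 2 + (x * phi) ** 2)

xs = np.linspace(0.0, 1.0, 101)
x_star = xs[np.argmax([f(x, phi) for x in xs])]
print(f"pre-operative decision: x* = {x_star:.2f} for phi = {phi}")
\end{verbatim}
Note that the resulting $x^*$ is fixed regardless of whether the stressor actually materializes, reflecting the trait discussed above.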
\textbf{ \textit{Intra-operative}} approaches are online with respect to the key variables of the stressor, \edited{and provide resilience \textit{through adaptation}}. During operation, changes to the key variables are made, therefore, the stressor key variables $\phi$ become decision variables in the optimization.
\begin{definition}[Intra-operative] \label{def:intra-operative} Intra-operative approaches optimize the system objective function, $f$, over \textit{both} the system decision variables, $x$, and stressor key variable values, $\phi$, such that the stressor is addressed online:
\[
\displaystyle{\max_{\phi, \, x_i, \ldots, x_{i+T}} \ f(x, \phi) }.
\]
\end{definition}
Intra-operative approaches often use online algorithms with respect to the decision variable $x$ (i.e., both the system decision variable and the stressor key variables are updated online), but note that the opposite is not necessarily true. Some algorithms may be online with respect to the system variable, $x$, but assume a fixed key variable, $\phi$, throughout; therefore, they are pre-operative because they are offline with respect to the stressor. In contrast to pre-operative approaches, intra-operative approaches will not take resilient actions when there is no stressor present.
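A matching toy sketch of the intra-operative setting (same illustrative objective as above; the sliding-window length and the initial guess are arbitrary): the key variable $\phi$ is re-estimated online from observed disturbances, and $x$ is re-optimized against the current estimate at every step.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
true_phi = 0.8                       # unknown to the system
window, phi_hat = [], 0.1

def best_x(phi):
    xs = np.linspace(0.0, 1.0, 101)
    return xs[np.argmin((1.0 - xs) ** 2 + (xs * phi) ** 2)]

for t in range(100):
    window.append(rng.normal(0.0, true_phi))  # observe disturbance
    window = window[-20:]                     # sliding window
    if len(window) > 1:
        phi_hat = float(np.std(window, ddof=1))
    x_t = best_x(phi_hat)                     # re-optimize online

print(f"phi_hat = {phi_hat:.2f}, x_t = {x_t:.2f}")
\end{verbatim}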
\textbf{ \textit{Post-operative}} approaches update the key variables of the stressor using past data (often in batch), \edited{and provide resilience \textit{through learning}}. Therefore, the system variable $x$ is no longer a decision variable. Rather, the optimization seeks the best key variables given a set of data.
\begin{definition}[Post-operative] \label{def:post-operative} Post-operative approaches optimize the stressor key variables, $\phi$, given past data of the system objective function, $f$, and system decision variables, $x$:
\[
\displaystyle{\mathrm{argmax}_{\phi} \ f(\phi \mid x_{i-T}, \ldots, x_i).}
\]
\end{definition}
Similar to pre-operative approaches, post-operative approaches are offline with respect to the stressor, with the difference that tuning is executed \textit{post-factum}. In other words, post-operative approaches seek improvement in future trials, \textit{after} the robot system's performance on relevant tasks has been experienced and measured. The unique feature of post-operative approaches is that they facilitate the discovery of out-of-distribution stressors. Examples of post-operative approaches include co-design, evolutionary optimization, and learning techniques (e.g., off-policy, lifelong, reinforcement learning).
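A hedged sketch of Definition~\ref{def:post-operative}: logged disturbance data from a past deployment (synthetic here) is used in batch to re-fit the stressor key variable, in this case by maximum likelihood over a grid of candidate values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
# Logged disturbance data from a past deployment (synthetic here).
data = rng.normal(0.0, 0.6, size=500)

def log_likelihood(phi, data):
    # zero-mean Gaussian log-likelihood with std phi (up to constants)
    return float(np.sum(-0.5 * (data / phi) ** 2 - np.log(phi)))

candidates = np.linspace(0.1, 2.0, 50)
phi_star = max(candidates, key=lambda p: log_likelihood(p, data))
print(f"post-operative key-variable update: phi* = {phi_star:.2f}")
\end{verbatim}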
\edited{
\begin{remark}[Objective function notation]
The general notation used for the objective functions $f$ in Definitions~\ref{def:pre-operative}--\ref{def:post-operative} can represent the many manifestations of resilience seen in Fig.~\ref{fig:resilience_vs_robustness}. Current work, and many of the citations in Table~\ref{taxonomy}, often interpret these objective functions in the form of optimization objectives where resilience is scalarized, as is the case with measures of efficiency. However, the general notation of $f$ encapsulates a broad class of functions, with the ability to incorporate constraints; model hybrid systems and indicator functions; capture system survival, such as defining a survival threshold $f_0$ and an objective function of the form $f(x, \phi) \ge f_0$; and represent the measures of resilience that may result from future research addressing Open Problem~\ref{op:resilience_measure}.
\end{remark}}
\vspace{-0.3cm}
\subsection{Stressor and Approach Relationship}
The relationship between stressor types and approach types is straightforward: the approach type defines how the key variables are updated, while the stressor type defines which key variables are of interest. For clarity and comprehensiveness, we build on the notation used above to define the approach types and introduce new notation for the stressors. While $\phi$ abstractly represents any stressor, we introduce more specific notation to distinguish between the different stressor types: let $\theta$ be the hyper-parameters of the stressor and $g$ be the model of the stressor, both of which apply to targeted and untargeted stressors.
Fig.~\ref{fig:key_var} shows that stochastic stressors are defined and tuned by their hyper-parameters $\theta$ while out-of-distribution stressors are governed by their model $g$ as well as the hyper-parameters. Furthermore, out-of-distribution stressors cannot be addressed by pre-operative approaches.
For a pre-operative approach to apply, the stressor model would need to encompass the disturbance a priori: if the model does contain the disturbance, then the stressor is stochastic rather than out-of-distribution; if the model does not contain the disturbance, then no resilience has been achieved and the disturbance must be handled in an intra- or post-operative manner.
\begin{figure}[]
\centering
\begin{tabular}{c|cc}
\hline
Approach & Stochastic & Out-of-Distribution \\
\hline
Pre-operative & Offline $\theta$ & N/A \\
Intra-operative & Online $\theta$ & Online $g,\theta$\\
Post-operative & Update $\theta$ & Update $g, \theta$ \\
\hline
\end{tabular}
\caption{The relationship between approach types and stressor types, showing how each approach changes or uses the key variables for each stressor type.}
\label{fig:key_var}
\vspace{-0.3cm}
\end{figure}
\subsection{Perception and Estimation}
\label{subsec:perception}
Perception ---the robot’s ability to sense and understand the surrounding environment--- is a key enabler for autonomous systems’ operation in complex environments, and provides functionalities such as estimating the location of the robot, building a map of obstacles in its surroundings, detecting, classifying, and tracking objects.
This capability is even more crucial for multi-robot systems, where a shared understanding of the world is
a key requirement for successful
interaction.
However, multi-robot systems pose new challenges to perception:
(i) the sensor data is collected independently by multiple robots, possibly equipped with different sensor suites and with limited onboard compute,
(ii) the team needs to form a shared world model in the face of communication constraints (e.g.,\xspace bandwidth, communication range, privacy), and (iii) the scale of multi-robot perception problems exacerbates
the limitations that already arise in single-robot perception (e.g.,\xspace scalability, noisy and out-of-distribution measurements).
In the following, we review perception problems arising in multi-robot systems, spanning several
subdomains (e.g.,\xspace
low-level perception and 2D vision,
localization and mapping,
and high-level scene understanding).
Note that we restrict our focus to \emph{spatial} perception (i.e.,\xspace we are mainly concerned with estimating
quantities that live in 3D space), and do not cover other perception problems (e.g.,\xspace action and emotion recognition) nor
prediction problems.
\textbf{Pre-operative approaches.} We review pre-operative approaches for
(i) low-level perception, (ii) localization, mapping, and estimation, and (iii) describe open problems in high-level
multi-robot learning and real-time scene understanding.
\subsubsection{Low-level Perception and 2D Vision}
Low-level perception focuses on image ---or, more generally, sensor signal--- processing and typically aims at
detecting features or objects, performing pixel-wise semantic segmentation,
and recognizing known places, among other problems. Low-level perception methods are often referred to as the \emph{perception front-end}~\cite{Cadena16tro-SLAMsurvey}. In these problems, common stressors include
illumination and viewpoint changes, presence of unexpected dynamic elements in the scene, and in general the presence of
nuisances that are irrelevant for the perception task~\cite{Soatto14iclr}.
The following approaches are classified as pre-operative since they do not adapt during operation,
are often designed for the worst case, or do not explicitly deal with stressors.
Multi-robot research has extensively investigated \emph{distributed place recognition}, where
robots in a team have to detect whether they are observing the same place; place recognition
enables re-localization, and loop closure detection in Simultaneous Localization and Mapping (SLAM)~\cite{Cadena16tro-SLAMsurvey}.
In a centralized setup, a common way to obtain loop closures is to use visual
place recognition methods, which compare compact image descriptors to find
potential loop closures. This is traditionally done with global visual
features~\cite{Oliva01ijcv,Ulrich00icra,Arandjelovic16cvpr-netvlad}, or local visual
features~\cite{Lowe99iccv,Bay06eccv} which can be quantized in a bag-of-word
model~\cite{Sivic03iccv}. The feature descriptors are designed to gain robustness
to the stressors (e.g.,\xspace viewpoint changes).
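As a hedged illustration of this pipeline (the descriptors below are random stand-ins for learned global descriptors, and the acceptance threshold is arbitrary), candidate loop closures can be proposed by comparing L2-normalized descriptors via cosine similarity:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
# Random stand-ins for compact global descriptors of past keyframes.
db = rng.normal(size=(100, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)

query = db[42] + 0.02 * rng.normal(size=128)  # noisy revisit of place 42
query /= np.linalg.norm(query)

scores = db @ query                           # cosine similarities
best = int(np.argmax(scores))
if scores[best] > 0.9:                        # illustrative threshold
    print(f"loop-closure candidate: keyframe {best}, "
          f"score {scores[best]:.2f}")
\end{verbatim}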
Distributed loop closure detection aims at detecting loop closures without exchanging raw data,
a desirable feature when the robots operate under range and bandwidth constraints.
Tardioli~\emph{et al.}~\cite{Tardioli15iros} use visual vocabulary indexes instead of descriptors to reduce the required bandwidth.
Cieslewski and Scaramuzza~\cite{Cieslewski18icra} propose distributed
and scalable solutions for place recognition in a fully connected team of
robots, using bag-of-words of visual features~\cite{Cieslewski17ral-bow} or
full-image NETVLAD descriptors~\cite{Cieslewski17mrs-netvlad}.
Tian~\emph{et al.}~\cite{Tian18rss,Tian19arxiv}, Lajoie~\emph{et al.}~\cite{Lajoie20ral-doorSLAM}, and
Giamou~\emph{et al.}~\cite{Giamou18icra} propose approaches to coordinate the
data exchange during the geometric verification step.
Recent effort in computer vision has focused on \emph{segmentation and recognition problems}.
Liu~\emph{et al.}~\cite{Liu20cvpr-when2com,Liu20arxiv-who2com} learn to construct
communication groups and decide when to communicate to complete
multi-agent semantic segmentation and 3D shape recognition tasks.
Reinartz~\emph{et al.}~\cite{Reinartz13cg-distributedRecognition}
develop a multi-agent object recognition system based on satellite imagery.
Wu~\emph{et al.}~\cite{Wu19iccv-RLVideoRecognition} use multi-agent reinforcement learning to
sample frames that maximize the accuracy of video recognition.
Mousavi~\emph{et al.}~\cite{Mousavi19iros-multiAgentImageClassification} propose
a multi-agent image classification approach based on RL and generalized policy gradient.
Male\v{s}~\emph{et al.}~\cite{Males19esa-multiAgentFaceTracking} propose a hierarchical architecture for
multi-agent face tracking in video sequences.
Earlier work has focused on distributed object detection, including work on
linear support vector machines~\cite{Pang14tc-distributedSVM}; in this case the goal is to parallelize the
computation to speed up processing.
Low-level perception problems are often performed using learning-based techniques, including
descriptor learning~\cite{Choy19iccv-FCGF,Sarlin20cvpr-superglue} and place recognition~\cite{Arandjelovic16cvpr-netvlad}.
While currently less used in robotics, the growing field of \emph{federated learning} investigates how to train machine learning models in a distributed fashion, with the goal of preserving privacy of the agents
in the team and sharing computational resources~\cite{Yang19acm-federatedML,McMahan17aistats-federatedML,Konecny16arxiv-federatedML,Konecny18iclr-federatedML}.
This field is still in its infancy, with recent effort being devoted to dealing with unreliable agents~\cite{Li17arxiv-federatedML} and unreliable connectivity among agents~\cite{Salehi21toc-federatedML}.
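A minimal federated-averaging-style sketch (hypothetical: the local models are simple least-squares fits on synthetic private data, and the sample-count weighting follows the FedAvg idea) illustrates how only model parameters ---never raw data--- are shared:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(9)
true_w = np.array([2.0, -1.0])       # shared underlying model

def local_update(n):
    # each agent fits a linear model on its private data
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n

updates = [local_update(n) for n in (50, 80, 120)]  # three agents
total = sum(n for _, n in updates)
global_w = sum(w * n for w, n in updates) / total   # weighted average
print(np.round(global_w, 2))
\end{verbatim}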
While most approaches listed in this section do not formally address the presence of stressors, the
literature on \emph{adversarial learning}~\cite{Madry17arxiv-adversarialML} attempts to quantify and improve the robustness of a neural network to {perturbations}~\cite{Bastani16nips-robustNN,Cheng17springer-maximum,huang17iccav-safety,Kolter17icml-provableDefense,Pei17-DeepXplore,Ma18ieee-deepgauge,Dvijotham18-dual_approach_verification,Wong18neurips-provableDefense}.
Most of these approaches consider simple perturbations (e.g.,\xspace additive pixel-wise noise on an image) that lead the network to produce incorrect classification results.
\subsubsection{Localization, Mapping, and Estimation}
Here we briefly review distributed estimation techniques and then focus on their applications in multi-robot teams.
Estimation techniques typically constitute the \emph{perception back-end}~\cite{Cadena16tro-SLAMsurvey}, in that
they take intermediate representations produced by low-level perception processes (the perception front-end)
and use them to estimate the state of the system (e.g.,\xspace the pose of the robots, the 3D location and velocity of
objects in the environment).
In these problems, typical stressors include measurement noise, out-of-distribution data (typically produced by incorrect processing at the front-end or by off-nominal sensor behavior), as well as intermittent communication.
Early work focuses on estimation with Gaussian noise, a setup that builds on well-established estimation-theoretic methods~\cite{Mendel95book,Dellaert17fnt-factorGraph} for which efficient solvers are now available (e.g.,\xspace~\cite{gtsam,Agarwal12-ceres}).
Distributed estimation in multi-agent systems has been also extensively investigated in robotics and
sensor networks, with the goal of developing methods that converge to optimal estimates while only requiring local
communication~\cite{Garin10ncs-surveyDistributedEstimation,Carli08jsac-distributedKF,Barooah07csm}
and are possibly robust to unreliable communication channels~\cite{Schenato07ieee,Sinopoli04tac-KFintermittent,Schenato08tac}.
Multi-robot research
investigates multi-robot localization with different estimation
techniques, including Extended Kalman filters~\cite{Roumeliotis02tra,Zhou06iros},
information filters~\cite{Thrun03isrr}, and particle filters~\cite{Howard06ieee,Carlone10jirs-multiRobotSLAM}.
Maximum a posteriori and maximum likelihood estimation have recently been
adopted as a general and accurate framework for robotics; in SLAM problems,
these frameworks lead to well-studied optimization problems, including \emph{pose graph optimization} (PGO)~\cite{Cadena16tro-SLAMsurvey} or \emph{factor graph optimization}~\cite{Dellaert17fnt-factorGraph}.
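As a hedged toy instance of this class of problems, the following snippet solves a 1-D pose graph with a loop closure as a linear least-squares problem (real pose graph optimization involves rotations and is nonconvex; this sketch only conveys the problem structure):
\begin{verbatim}
import numpy as np

# (i, j, z): measured offset from pose i to pose j, incl. a loop closure
edges = [(0, 1, 1.1), (1, 2, 0.9), (2, 3, 1.0), (0, 3, 2.8)]
n = 4
A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for k, (i, j, z) in enumerate(edges):
    A[k, i], A[k, j], b[k] = -1.0, 1.0, z
A[-1, 0] = 1.0                 # prior anchoring pose 0 at the origin
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 2))          # estimated 1-D poses
\end{verbatim}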
Early literature on multi-robot PGO focused on centralized approaches, where measurements
are collected at a central station, which computes the trajectory estimates for all the
robots~\cite{Andersson08icra,Kim10icra,Bailey11icra,Lazaro11icra,Dong15icra}.
Since the computation workload and the communication bandwidth of a
centralized approach grow with the number of robots, related work has
explored \emph{distributed techniques}, in which robots perform local communication and share
the computational workload~\cite{Aragues11icra-distributedLocalization,Cunningham10iros,Cunningham13icra,Choudhary17ijrr-distributedPGO3D,tian_Asynchronous_2020,Tian19arxiv-distributedSEsync}; these techniques leverage problem structure and distributed optimization methods
to obtain optimal estimates from partial information exchange. The works~\cite{Forster13iros-airGroundLocalization,Michael14fr-airGroundMapping} consider a collaborative setup with ground and aerial robots.
Recent work on multi-robot localization, mapping, and estimation has focused on
the realistic case where some of the measurements used by the back-end are \emph{outliers}
(i.e.,\xspace they are affected by severe unmodeled noise).
The fundamental problem of robust estimation has a long history and there are
well established frameworks to model estimation problems with outliers, including
M-estimation~\cite{Huber81} and consensus
maximization~\cite{Chin18ECCV-robustFitting,Chin17slcv-maximumConsensusAdvances}.
However, these frameworks typically lead to hard optimization
problems~\cite{Chin18ECCV-robustFitting,Antonante21tro-outlierRobustEstimation}, and developing
fast and effective solvers is still an active research area~\cite{Yang20neurips-certifiablePerception,Yang20ral-GNC,Barron19cvpr-generalAdaptiveLoss,Chebrolu2020arxiv-adaptiveRobustKernels}.
We remark that these approaches are still pre-operative (according to Definition~\ref{def:pre-operative}): for instance
M-estimators can be understood as maximum-likelihood estimators over heavy-tailed (but known) noise models.
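For concreteness, a minimal M-estimation sketch via iteratively reweighted least squares with a Huber loss (data and threshold are illustrative): outliers receive small weights, so the estimate stays close to the inlier consensus.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
inliers = rng.normal(5.0, 0.1, size=50)
outliers = rng.uniform(-20.0, 20.0, size=10)
z = np.concatenate([inliers, outliers])  # measurements with outliers

k = 1.345                                # illustrative Huber threshold
x = np.median(z)                         # robust initialization
for _ in range(20):
    r = np.abs(z - x)
    w = np.minimum(1.0, k / np.maximum(r, 1e-12))  # Huber weights
    x = np.sum(w * z) / np.sum(w)        # weighted least squares

print(f"robust estimate: {x:.2f} (sample mean: {z.mean():.2f})")
\end{verbatim}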
Robust estimation is particularly important in multi-robot localization and mapping where
incorrect measurements among the robots are more difficult to detect when the robots do not share a common reference frame.
Centralized outlier rejection techniques for multi-robot SLAM include
voting schemes~\cite{Indelman14icra} and graph-theoretic methods~\cite{Mangelson18icra};
the Pairwise Consistency Maximization approach of~\cite{Mangelson18icra} has been particularly successful, has
also been implemented in~\cite{Ebadi20icra-LAMP}, and is
amenable to a distributed implementation~\cite{Lajoie20ral-doorSLAM,Chang21icra-KimeraMulti}.
More recently, Tian~\emph{et al.}~\cite{Tian21arxiv-KimeraMulti} propose a distributed M-estimation approach based on graduated non-convexity~\cite{Yang20ral-GNC} which is shown to lead to more accurate trajectory and map estimates.
\begin{openProblem}[Learning in teams] \label{op:team_learning}
Federated and adversarial learning have the potential to enhance multi-robot operation
but have found limited use in robotics.
Open challenges in federated learning for multi-robot systems include improving
communication bandwidth, energy efficiency, and security~\cite{Yu21arxiv-federatedML}.
Regarding adversarial learning, robotics and computer vision applications
require going beyond simple additive perturbation models, which are
not well-suited to capture nuisances arising in real perception problems~\cite{Poursaeed18cvpr-adversarialML}.
\end{openProblem}
\begin{openProblem} [Distributed real-time scene understanding]
\label{op:scene_understanding}
While multi-robot SLAM can be considered a mature field of research, the goal of achieving human-level
understanding of the environment is still out of reach for a robot.
Despite the growing literature on single-robot metric-semantic understanding~\cite{Rosinol20icra-Kimera,Salas-Moreno13cvpr,McCormac17icra-semanticFusion,Tateno15iros-metricSemantic,McCormac183dv-fusion++,Runz17icra-cofusion,Xu19icra-midFusion,Wald18ral-metricSemantic,Narita19arxiv-metricSemantic,Grinvald19ral-voxbloxpp,Tateno17cvpr-CNN-SLAM,Lianos18eccv-VSO,Behley19iccv-semanticKitti,Bowman17icra} and
3D scene graph representations~\cite{Rosinol20rss-dynamicSceneGraphs,Armeni19iccv-3DsceneGraphs},
few papers have considered metric-semantic multi-robot mapping~\cite{Chang21icra-KimeraMulti,Tian21arxiv-KimeraMulti,Tchuiev20ral-semanticMultiRobotMapping,Yue20iros-semanticMultiRobotMapping}.
Infusing semantic and high-level understanding in localization and mapping problems creates novel opportunities to
improve resilience since a team of robots can dynamically adjust depending on the external context or the semantic
elements in the scene (e.g.,\xspace presence of a threat). Moreover, it allows creating a distributed \emph{spatial knowledge base}, which can support several tasks from human-robot interaction to long-term autonomy.
\end{openProblem}
\textbf{Intra-operative approaches.}
The literature on intra-operative approaches to perception is more sparse but growing.
\subsubsection{Low-level Perception and 2D Vision}
Learning-based approaches for low-level perception (e.g.,\xspace object detection) are challenged by
(i) a potential shift between the training and testing distributions, and (ii) test instances
that belong to the tails of the distribution (e.g.,\xspace rare examples) for which little training data is available.
Intra-operative approaches include methods that deal with these challenges online during operation.
Kalal~\emph{et al.}~\cite{Kalal12pami-onlineLearningDetection} and
Prasad~\emph{et al.}~\cite{Prasad18pcs-onlineLearningDetection} investigate online learning
to detect and track objects in videos.
The survey by Abass~\emph{et al.}~\cite{Abbass20vc-onlineLearningTracking} provides a more extensive review of
online learning for visual tracking.
Recent work in robotics and autonomous vehicles uses learning-based methods to detect out-of-distribution
examples online during operation.
Rahman~\emph{et al.}~\cite{RahmanIROS19-falseNegative} process the hidden layer outputs of a neural
network to predict when a traffic sign detection network outputs false negatives.
Cheng~\emph{et al.}~\cite{Cheng18date} and Henzinger~\emph{et al.}~\cite{Henzinger20ecai} observe neuron activation patterns to monitor when the network is operating on inputs unlike the data seen during training.
Hendrycks~\emph{et al.}~\cite{Hendrycks16iclr} develop a method of monitoring network confidence based on softmax probabilities. Gupta and Carlone~\cite{Gupta20itsc-atom} propose
Adversarially-Trained Online Monitor (ATOM) to detect incorrect detections of pedestrians in self-driving applications.
System-level monitoring approaches are studied in~\cite{Antonante20tr-perSysMonitoring2} to detect off-nominal behaviors of perception modules.
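A hedged sketch of the max-softmax confidence baseline (the logits and threshold below are illustrative stand-ins for a trained detector's outputs):
\begin{verbatim}
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def is_out_of_distribution(logits, threshold=0.7):
    # flag inputs whose top softmax probability is low
    return softmax(logits).max() < threshold

print(is_out_of_distribution(np.array([8.0, 0.1, 0.2])))  # False
print(is_out_of_distribution(np.array([1.0, 0.9, 1.1])))  # True
\end{verbatim}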
\subsubsection{Localization, Mapping, and Estimation}
We start by remarking that several approaches for robust estimation admit an alternative interpretation
as intra-operative approaches. For instance, approaches based on graduated non-convexity, reweighted least squares,
and dynamic covariance scaling~\cite{Yang20ral-GNC,Antonante21tro-outlierRobustEstimation,Sunderhauf13icra,Agarwal13icra}
can be understood as approaches to adjust the measurement covariances (a key stressor variable, see Fig.~\ref{fig:key_var}) online, to down-weight out-of-distribution measurements. The connection between robust estimation and measurement weighting is a well-understood one and goes back to the seminal work of Black and Rangarajan~\cite{Black96ijcv-unification}, with more recent multi-robot applications in~\cite{Tian21arxiv-KimeraMulti}.
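To illustrate this reweighting view, a hedged sketch with a dynamic-covariance-scaling-style rule (the constant and the residuals below are illustrative, not taken from the cited works):
\begin{verbatim}
import numpy as np

def scale(residual_sq, Phi=1.0):
    # weight is ~1 for small residuals, tends to 0 for gross outliers
    return min(1.0, 2.0 * Phi / (Phi + residual_sq))

residuals = np.array([0.1, 0.3, 8.0, 0.2, 15.0])
weights = np.array([scale(r ** 2) for r in residuals])
print(np.round(weights, 3))   # outliers receive near-zero weight
\end{verbatim}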
More recent work on robust estimation for robust localization, mapping, and learning goes further and
explicitly tackles online adaptation. Antonante~\emph{et al.}~\cite{Antonante21tro-outlierRobustEstimation} propose minimally tuned robust estimation algorithms that can learn the inlier noise statistics online.
Barron~\cite{Barron19cvpr-generalAdaptiveLoss} and Chebrolu~\emph{et al.}~\cite{Chebrolu2020arxiv-adaptiveRobustKernels}
adjust the choice of robust loss function (within a parametric family) using an automatic online tuning procedure.
Other potential stressors include sensor mis-calibrations and sensor failures.
Online calibration has been extensively investigated in the context of
kinematic odometry~\cite{Roy99icra-onlineCalibration},
visual-inertial~\cite{Lee20iros-onlineCalibration,Geneva20icra-openVINS} and lidar-inertial odometry~\cite{Tagliabue20iser-lion},
and SLAM~\cite{Nobre17iser-selfCalibration}.
System-integration efforts have also investigated system reconfiguration in response to sensor
failures~\cite{Palieri21ral-locus}. A general framework for sensor selection based on resilient submodular maximization is investigated in~\cite{Tzoumas18cdc}.
\begin{openProblem}[Resilience and reasoning over failures]
\label{op:failure_reasoning}
At the algorithmic level, resilient perception is still in its infancy: most perception frameworks
are ``rigid'' and target robustness rather than resilience and online adaptability.
It is desirable for future perception algorithms to perform automatic parameter tuning to adjust to
heterogeneous environmental conditions.
At the system level, monitoring of perception systems is also a largely unexplored topic: how to detect failures
of perception algorithms? how to detect that the world models built by different robots in a team
are inconsistent with each other?
More importantly, robot perception currently aims at \emph{detecting and isolating} off-nominal data (e.g.,\xspace outliers, sensor failures,
algorithmic failures) rather than reasoning on the cause of those failures and learning how to avoid them in the
future.
\end{openProblem}
\begin{openProblem} [Task-dependent perception and active perception]
\label{op:active_perception}
As already stressed in~\cite{Cadena16tro-SLAMsurvey}, an open challenge is to develop a tractable
and general framework for \emph{task-driven perception}, which can guide sensing and perception processes to
maximize a task-driven performance metric (e.g.,\xspace obstacle avoidance) while minimizing computation, sensing, or communication. This is particularly important in multi-robot teams, where --under communication constraints-- it is desired for the robots to exchange the minimum amount of information to guarantee successful completion of a task.
Conversely, resilience also requires active perception, i.e.,\xspace how to actively plan and control the robot to
minimize the impact of environmental stressors.
\end{openProblem}
\textbf{Post-operative approaches.}
Post-operative approaches use batch training data collected by a robot over multiple deployments to identify stressors.
Approaches for offline system identification and sensor calibration fall in this category,
see, e.g.,\xspace~\cite{Furgale13iros,Rehder16icra-extending}; in this case, the training is often augmented with external sensors (e.g.,\xspace a Vicon system) to increase the observability of the resulting parameter estimation problem.
More recently, post-operative approaches have focused on learning-based methods that can improve and adapt after multiple
executions. In this sense, post-operative approaches are related to \emph{domain adaptation and transfer learning} in machine learning, where the goal is to allow a network --trained on a given training distribution-- to transfer to a different test distribution.
The surveys~\cite{Zhuang19arxiv-domainAdaptation,Wang18neurocomputing-domainAdaptation}
cover techniques for domain adaptation and perception applications including
image classification, face recognition, semantic segmentation, and object detection.
Domain adaptation can rely on external supervision, but can also be semi-supervised or unsupervised.
For future operation of multi-robot teams, the unsupervised setup is particularly appealing since
it avoids massive human annotations~\cite{Jing20pami-selfsupervised}.
The unsupervised (or self-supervised) setup relies on automatically generated labels
produced by image generation techniques~\cite{Ledig17cvpr-superresolutionGAN},
classical vision algorithms~\cite{Li16cvpr-unsupervisedEdges,Jiang18eccv-selfsupervisedDepth},
or simulation~\cite{Dosovitskiy17arxiv-carla,Richter17iccv}.
Self-supervision has been proven useful to learn depth~\cite{Wang18cvpr-selfsupervisedDepth,Godard19iccv-selfsupervisedDepth},
optical flow~\cite{Liu19cvpr-selflow},
visual odometry~\cite{Zhou17cvpr-selfsupervisedDepth, Yang20cvpr-D3VO},
and feature descriptors for scan matching~\cite{Yew18eccv-3dfeatnet, Choy19iccv-FCGF, Bai20cvpr-D3Feat}.
In this landscape, \emph{self-training} methods start from a pre-trained model (using manual labels) and then
build a larger dataset by automatically building \textit{pseudo-labels}
for a larger body of unlabeled data~\cite{Lee13icmlws-pseudolabel,Xie20cvpr-selftraining,Zoph20arxiv-selftraining,Wei20arxiv-selftraining,Yang21cvpr-sgp}.
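A hedged sketch of the self-training loop (the ``model'' is a placeholder scoring function and the confidence threshold is arbitrary): confident predictions on unlabeled data become pseudo-labels for the next training round.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
unlabeled = rng.normal(size=(1000, 2))

def predict_proba(x):
    # placeholder for a pre-trained classifier (sigmoid on one feature)
    p = 1.0 / (1.0 + np.exp(-x[:, 0]))
    return np.stack([1.0 - p, p], axis=1)

proba = predict_proba(unlabeled)
confidence = proba.max(axis=1)
keep = confidence > 0.9                    # arbitrary threshold
pseudo_labels = proba.argmax(axis=1)[keep]
print(f"kept {keep.sum()} / {len(unlabeled)} pseudo-labeled examples")
\end{verbatim}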
\begin{openProblem}[Tuning and reconfiguration]
\label{op:tuning_reconfig}
While offline calibration is well understood, there are currently no efficient and automatic ways
to tune parameters and potentially reconfigure components in complex perception pipelines.
For instance, modern SLAM and VIO pipelines include tens to hundreds of tunable configuration parameters (e.g.,\xspace number of features,
type of feature descriptors, etc.) that impact performance,
are scenario-dependent, and rely on manual tuning from an expert.
The large number of parameters (and the potential lack of ground-truth information) quickly makes brute-force and black-box
approaches for tuning (e.g.,\xspace Bayesian optimization) impractical.
This adds to the combinatorial complexity of choosing how to combine different algorithmic blocks comprising the robot
perception system: which object detector? which 3D object pose estimation and tracking approach? which SLAM pipeline?
\end{openProblem}
\subsection{Planning and Task Assignment}
\label{subsec:planning}
Planning and task assignment are fundamental problems in multi-robot systems. Teams of robots must collectively optimize the assignment of mobile robots to tasks~\cite{kuhn_hungarian_1955}, plan schedules and action sequences that are conflict-free~\cite{torreno_cooperative_2017}, and route individual agents along collision-free paths \cite{atzmon_probabilistic_2020}. These planning problems arise in many applications, including
product pickup and delivery~\cite{grippa_Drone_2019a, jorgensen_team_2017}, item retrieval in warehouses~\cite{enright_Optimization_2011b, peltzer_stt-cbs_2020}, and mobility-on-demand services~\cite{alonso-mora_Ondemand_2017a, salzman_research_2020}.
Planning entails optimizing higher level goals, such as minimizing the cost of an assignment \cite{nam_analyzing_2017} or the average travel time among agents \cite{prorok_redundant_2019}. To orchestrate this coordination, centralized communication architectures have become the norm in various instances; a centralized unit collects all costs (e.g., expected travel times) to determine the optimal plan or assignment (e.g., through search algorithms such as RRT or the Hungarian algorithm). However, \emph{the optimality of this assignment hinges on the accuracy of the assignment cost estimates}.
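For concreteness, a minimal centralized assignment sketch (in Python, assuming \texttt{scipy}; the cost matrix holds synthetic stand-ins for estimated travel times):
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(6)
costs = rng.uniform(1.0, 10.0, size=(4, 4))   # estimated travel times

robots, tasks = linear_sum_assignment(costs)  # Hungarian-style solver
print(list(zip(robots, tasks)))
print(f"total estimated cost: {costs[robots, tasks].sum():.2f}")
\end{verbatim}
If the cost estimates are wrong, the returned assignment can be arbitrarily far from the truly optimal one, which motivates the uncertainty-aware approaches below.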
Despite best efforts to model any uncertainties, discrepancies between model assumptions and real-life dynamics may arise \cite{nam_when_2015}. For example, in transport scenarios, a robot may encounter an unexpectedly blocked path and consequently take significantly longer to reach its destination than anticipated \cite{prorok_Redundant_2019a}. Travel time uncertainty also arises from degraded positioning accuracy (e.g., GNSS service deterioration). Furthermore, recent methods consider it \emph{desirable} to actively obfuscate true robot state information (e.g., robot positioning), to ensure privacy across a variety of applications~\cite{prorok_Privacypreserving_2017a}.
Irrespective of the source of uncertainty, it follows that any discrepancies around true robot states cause a degradation in the system's overall performance, and can lead to cascading effects. Furthermore, multi-robot systems face uncertainties beyond individual robot states: the strengths of multi-robot collaboration come with added sources of disruption. To achieve resilient performance, networked robotic systems must not only cope with a higher likelihood that individual robots among a large team fail, but also with compounding uncertainties among team members, second-order effects, and the impact of real-world complexities on collective planning.
\textbf{Pre-operative approaches.}
Multi-agent planning can generally be categorized into assignment \cite{yang_algorithm_2020}, routing \cite{toth_vehicle_2002}, or path planning \cite{stern_multi-agent_2019}. In each of these categories, pre-operative approaches create team plans offline, making decisions about the whole team's actions a priori. Although these pre-operative approaches create plans before the occurrence of any disturbance, they can still plan for disturbances that they have modeled. Take, for example, the multi-agent path planning problem. Following the notation in Section~\ref{sec:approaches}, the planning algorithm seeks to optimize the average travel time $f$ given a model of uncertainty due to traffic $\phi$ by searching over the space of paths $x$.
\subsubsection{Assignment}
Assignments under random costs have gained considerable attention~\cite{nam_When_2015a, nam_Analyzing_2017a, ponda_distributed_2012, shang_stochastic_2020}. The focus has primarily been on providing analyses of the performance under noisy conditions. Prorok and Kumar consider privacy in mobility-on-demand systems by obfuscating passenger destination locations. Because there exists unused supply even at peak demand, multiple vehicles with noisy origin locations are assigned to passengers through an iterative Hungarian algorithm \cite{prorok_privacy-preserving_2017}.
In other work, the authors develop a complementary method that provides robustness to noisy travel time estimates by making use of \emph{robot redundancy}~\cite{prorok_Redundant_2019a, prorok_Robust_2020, malencia_fair_2021}. In other words, the core idea of those works is to exploit redundancy to counter uncertainty and recover performance.
Although the idea of engineering robust systems with redundant resources is not new in a broad sense~\cite{kulturel-konak_Efficiently_2003a, ghare_Optimal_1969}, these works consider redundant mechanisms for the problem of mobile robot assignment under uncertainty, with arbitrary and potentially correlated probability distributions.
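A hedged sketch of the redundancy idea (travel times drawn from an illustrative log-normal distribution): assigning $r$ redundant robots to the same task reduces the expected waiting time, i.e., the minimum over the assigned robots' arrival times.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
# Sampled travel times of 4 candidate robots for a single task.
samples = rng.lognormal(mean=1.0, sigma=0.5, size=(10000, 4))

for r in range(1, 5):
    expected_wait = samples[:, :r].min(axis=1).mean()
    print(f"r={r} redundant robots -> expected wait {expected_wait:.2f}")
\end{verbatim}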
\subsubsection{Routing}
The quintessential routing problem is the Multi Traveling Salesperson Problem, where a team of agents must collectively visit a set of locations while minimizing the total amount of travel time \cite{lawler_traveling_1985, balasubramanian_risk-aware_2020}. Traditional approaches that assume a known fixed time of travel between any two cities, i.e., the graph edges have fixed costs, are fragile in scenarios that involve stochasticity, partial information, and modeling errors \cite{bektas_multiple_2006}. Pre-operative approaches to routing explicitly consider uncertainties such as robot failures, environmental dynamics, and changing task definitions when solving the Multiple Traveling Robot Problem \cite{sariel-talay_multiple_2009}.
A closely related routing problem is the Orienteering Problem, where the robot team seeks to maximize the reward that can be collected at different nodes but is not required to visit all nodes (whereas traveling salesperson problems require that all locations be covered) \cite{vansteenwegen_orienteering_2011}.
In the Multiple-Path Orienteering Problem, an adversary is capable of attacking a subset of the robot team which plans to maximize their reward under this threat \cite{shi_robust_2020}.
In the Team Surviving Orienteers Problem, edge weights represent the probability of a robot surviving the traversal of that edge, and there are constraints on each agent's probability of survival over its full path \cite{jorgensen_team_2017}.
\subsubsection{Path-planning}
Recent works in resilient multi-agent path planning include planning under uncertain costs or times \cite{yakovlev_prioritized_2019, atzmon_robust_2020}, privacy \cite{zhang_privacy-preserving_2021}, and disruptions or attacks \cite{zhou_approximation_2019, chasparis_lp-based_2008}. In these pre-operative approaches, there is a known disturbance model and the plan is created with respect to this model such that the impact of the disturbance on the multi-robot system is minimized.
Wagner and Choset \cite{wagner_path_2017} model agents with dilated sizes according to the uncertainty in their poses and then plan conflict free trajectories for these dilated agents. Whilst this work models uncertainty in the pose of the robots (which impact travel time), others directly model stochastic travel times by representing delays as either gamma distributions \cite{peltzer_stt-cbs_2020} or number of time steps \cite{atzmon_robust_2018}.
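To make the gamma-delay model concrete (parameters are illustrative and not taken from \cite{peltzer_stt-cbs_2020}), one can estimate how likely the accumulated delay along a path is to exceed a planned buffer; for independent gamma delays sharing a common scale, the total delay is again gamma-distributed.
\begin{verbatim}
from scipy.stats import gamma

# Hypothetical per-edge delay shapes with a common scale theta;
# the sum of Gamma(k_i, theta) variables is Gamma(sum k_i, theta).
shapes, theta = [1.5, 2.0, 0.5], 2.0
total_shape = sum(shapes)

buffer = 12.0  # slack added to the schedule by the robust planner
p_late = gamma.sf(buffer, a=total_shape, scale=theta)
print("P(total delay > buffer) =", round(p_late, 3))
\end{verbatim}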
In these pre-operative approaches, incorporating uncertainty creates more expressive models than deterministic approaches, but there are still limits because these models are assumed to fully and correctly model disturbances. Additionally, pre-operative planning approaches create changes to the system even without the presence of a disruption. This more conservative approach readies the system for disruption and thus may perform suboptimally when disruptions do not occur.
\begin{openProblem}[Planning over uncertainty]
\label{op:plan_pre}
Pre-operative approaches to planning handle stochastic stressors through redundancy and risk-averse measures (e.g., conditional value at risk). While these methods provide robustness, and are complementary to each other in handling risk, they are conservative.
More work is required to understand how best to plan under modeled uncertainty, e.g., recent works have only just begun studying risk adaptive approaches~\cite{rudolph_desperate_2021}.
\end{openProblem}
\textbf{Intra-operative approaches.}
Multi-agent planning algorithms that adapt when a disturbance occurs are considered intra-operative approaches. Some intra-operative methods can identify disturbances and respond accordingly \cite{talebpour_adaptive_2019} while others identify degradation in system performance and adapt without knowledge of the source or type of disturbance \cite{ramachandran_resilience_2021, ramachandran_resilient_2020}.
\subsubsection{Assignment}
Assignment algorithms that are able to adapt to real-time disturbances have received recent attention \cite{he_data-driven_2020, emam_adaptive_2020}. The high computational cost of calculating optimal assignments prevents the continuous calculation of assignments during runtime; therefore, intra-operative assignment methods often work to identify when re-assignment is worth the expense. Zhou et al.~\cite{zhou_risk-aware_2020} present an event-driven algorithm that recomputes an assignment only when certain conditions are met, e.g., there exists a path that is both shorter and has less uncertainty in travel time. Mayya et al.~\cite{mayya_resilient_2021} measure the degradation of robot capabilities due to disturbances such as fog or mud and re-assign robots to tasks that have experienced performance drops due to these degraded capabilities.
In the language of our Section~\ref{sec:approaches} notation, this work measures capabilities $\phi$ (which capture unmodeled disturbances) and re-assigns agent-task pairings $x$ to maximize the average task performance~$f$.
\edited{Similarly, \cite{ramachandran_resilience_2019} reconfigure their heterogeneous team when failures occur to maintain a communication graph.}
In mobility on demand applications, it is expected that new demand is continuously added. To address the uncertainty in both future demand and vehicle supply, He et al.~\cite{he_data-driven_2020} present a receding horizon algorithm that solves a distributionally robust optimization problem at each time step to calculate vehicle load balancing, while Alonso-Mora et al.~\cite{alonso-mora_predictive_2017} decouple vehicle routing and passenger assignment to account for uncertain future demand.
\subsubsection{Path-planning}
Similar to some pre-operative planning approaches, Zhou et al.~\cite{zhou_distributed_2020} assume the worst case adversarial attack is known to have at most $\alpha$ adversaries. However, instead of using fixed pre-planned routes, this intra-operative approach considers the worst-case attack at each time step when re-planning. Other intra-operative planning methods include navigation in human workspaces \cite{lo_towards_nodate} and warehouses \cite{honig_persistent_2019}.
\begin{openProblem}[Off-task planning]
\label{op:off_task}
Many homogeneous and heterogeneous planning applications involve some extent of redundancy, whether explicitly in the number of robots, or implicitly in robots with capabilities that are not in current use, or capability-complementarities that are not currently exploited. These redundant resources often lie dormant until needed. Instead, researchers should be asking how to account for these unused resources with respect to possible future use. For example, in mobility on demand, agents that are currently not assigned to riders can move to locations that decrease the uncertainty or wait times of possible future riders. Or, in assignment, how do current coalitions impact the space of complementary resources available for possible future coalitions?
\end{openProblem}
\textbf{Post-operative approaches.}
Current work in multi-agent planning addresses planning for single instances/missions. However, future applications will involve robot teams completing repeated or continual missions (e.g., agricultural robotics or automated construction teams).
The resilience needed for the long-term success of these teams could be achieved through post-operative multi-agent planning. Rather than adapting the plan when encountering disturbance, as in an intra-operative approach, post-operative planning would adapt the algorithm over the long term based on the experience of repeated episodes of missions. For example, after servicing a single farm, a team could optimize $\phi$, its model of environmental disturbances (e.g., mud), given the data from the plan $x$ that the team followed and the performance $f$ that it achieved.
A stream of pickup and delivery tasks creates a nearly endless multi-agent planning sequence. While pre- and intra-operative planning can adapt to disturbance during a given trip or day, these systems may be fragile over the long term as disturbance characteristics change. For example, robotic systems in agricultural settings must adapt to seasonal changes in disturbance as well as longer-term changes due to climate change. While post-operative planning methods are just beginning to receive attention (e.g., lifelong path planning~\cite{ma_lifelong_2019}), this nascent literature is far from creating the long-term autonomous robotic systems required to solve complex continuous missions across industries like transportation, manufacturing, and agriculture. To bridge this gap, new tools must be brought into multi-agent planning for such life-long learning, which has been identified by others as an open challenge in this space \cite{salzman_research_2020}.
\begin{openProblem}[Long-term survival]
\label{op:survival}
Current planning approaches focus on optimizing an objective function such as efficiency. However, for long duration applications such as oceanic and extraterrestrial exploration, survival is a more important objective than efficiency. While constraint based approaches consider single agent survival \cite{egerstedt_robot_2018}, researchers must study the `survival' of multi-agent teams, with questions such as: How does one define a multi-agent optimization objective when planning for survival? And how do individual agents' actions harm or benefit the survival of their teammates, and the survival of the team?
\end{openProblem}
\begin{openProblem}[Reliance on a world model]
\label{op:world_model}
Planning algorithms rely on a model of the world and its uncertainties. However, these approaches still fail because of model inaccuracies, such as black swan events. Researchers should investigate whether it is possible to create an accurate world model that captures black swan events and the ways the world changes with respect to the robot system's actions. Or, if this model is not possible, how to be resilient to such unknowns despite having no model of them. If it is the latter, this planning problem is related to determining what it means for the robot system to be resilient (Open Problem 1).
\end{openProblem}
\subsection{Control}
\label{subsec:control}
Multi-robot control strategies facilitate the organization of multiple robots to solve team-level, global tasks using local interaction rules. In this section, we review methods across control applications, including motion coordination, coverage control, formation control, and control for information gathering and surveillance.
This section also includes the general problem of ensuring cooperative computation in multi-robot networks---a problem to which consensus-based approaches are often the answer~\cite{pasqualetti_Consensus_2012, cortes2017coordinated}. We focus on failure-prone or adversarial environments that lead to malfunctioning robots, or compromised communication channels, resulting in disruptions to the collective task. In other words, the stressors $\phi$ are typically misbehaving or adversarial robots, and protective (resilient) mechanisms are required to deal with (mis-)information being disseminated by these robots.
\textbf{Pre-operative approaches.}
Cooperative control algorithms for robot teams are underpinned by the general assumption that all entities are indeed cooperative.
This, however, cannot be generally guaranteed, as robots break, are compromised, or fail to process and interpret sensor information. As such, the robots themselves become the stressors ($\phi$) of the system.
\subsubsection{Robot formations for resilient consensus} Building resilient formations ($x$) provides a precautionary means of overcoming such non-cooperative or faulty robots. This line of work borrows from seminal results in network science that define the notion of {resilient communication graphs} through a property widely referred to as \textit{r-robustness}~\cite{leblanc_Resilient_2013a,sundaram_Distributed_2011a}.
Pre-operative approaches apply these concepts to the domain of robotics by considering physically embedded multi-agent systems, with constrained communication and dynamic behaviors.
The challenge is that testing these networks for \textit{r-robustness} is computationally demanding, and requires global knowledge of the topology.
By constructing \textit{resilient robot formations}, authors have demonstrated that distributed consensus algorithms converge safely, regardless of what non-cooperative robots are communicating~\cite{saldana_Triangular_2016a, saldana_Resilient_2017a, guerrero-bonilla_Formations_2017}. One of the earliest works in this domain demonstrates that the most basic resilient formation can be built via {triangular networks}~\cite{saldana_Triangular_2016a}. This topology has the attractive property that it can be constructed incrementally and verified in a decentralized manner, in polynomial time. Further work builds on this foundation:~\cite{guerrero-bonilla_Formations_2017} accounts for any number of non-cooperative robots,~\cite{guerrero-bonilla_Design_2018} presents sufficient conditions on the robot communication range to guarantee resilient consensus, and~\cite{guerrero-bonilla_Dense_2020} addresses three-dimensional space through cubic lattice-based formations.
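The consensus protocols underpinning these formation results are typically of the Weighted Mean-Subsequence-Reduced (W-MSR) type~\cite{leblanc_Resilient_2013a}: each cooperative robot discards the most extreme neighbor values before averaging. A minimal sketch of one update (topology and values invented for illustration):
\begin{verbatim}
def wmsr_update(own, neighbor_values, F):
    # One W-MSR step: discard the (up to) F largest neighbor values
    # above one's own and the (up to) F smallest below it, then
    # average the remainder together with one's own value.
    higher = sorted(v for v in neighbor_values if v > own)
    lower  = sorted(v for v in neighbor_values if v < own)
    equal  = [v for v in neighbor_values if v == own]
    kept = lower[F:] + equal + higher[:max(len(higher) - F, 0)]
    vals = kept + [own]
    return sum(vals) / len(vals)

# A faulty neighbor broadcasts an outlier (100.0); with F = 1 its
# influence is filtered out, and the update stays near 0.5.
print(wmsr_update(0.5, [0.4, 0.6, 100.0], F=1))  # -> 0.55
\end{verbatim}
\textit{r-robustness} of the underlying graph is precisely the condition under which such trimmed updates still drive all cooperative robots to agreement.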
\subsubsection{Pre-planned consensus policies}
Mobile robot teams have communication graphs that, generally, vary over time, and as a consequence, the (rigid) resilient formations introduced in the prior paragraph are not necessarily maintained.
By implementing connectivity management policies, one can ensure that resilience is guaranteed as the network topologies undergo change. To address this problem, authors use measures of the resilience of the communication graph, characterized by the algebraic connectivity~\cite{saulnier_Resilient_2017a, saldana_Resilient_2017a}, and by Tverberg partitions~\cite{park2017fault}.
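Algebraic connectivity is directly computable as the second-smallest eigenvalue $\lambda_2$ of the graph Laplacian; the toy comparison below (a line versus a ring over four robots) is purely illustrative of why it serves as a resilience measure.
\begin{verbatim}
import numpy as np

def algebraic_connectivity(adjacency):
    # lambda_2 of the graph Laplacian L = D - A.
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

line = [[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]]
ring = [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]]
print(algebraic_connectivity(line))  # ~0.586: weakly connected
print(algebraic_connectivity(ring))  # 2.0: the extra link helps
\end{verbatim}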
Resilience in the sense of \textit{r-robustness} has also been quantified probabilistically, as shown in~\cite{wehbe_Probabilistic_2021}, assuming that robot communication is subject to random failures that can be modeled using a probability distribution. Robots with access to such an estimate can evaluate how their future actions may affect the system's resilience.
When connectivity constraints cannot be satisfied due to hard physical constraints, we need to resort to additional methods. In~\cite{saldana_Resilient_2017a}, authors develop a {sliding window consensus protocol} that provably guarantees resilience when the union of communication graphs over a bounded period of time jointly satisfies robustness properties. Their policy selectively activates communication links to attain resilience while solving tasks that require the robot team to cover wide-spread areas (e.g., perimeter surveillance).
Other work considers the applications of formation control~\cite{usevitch2018resilient, guerrero2019realization} and leader-follower systems~\cite{usevitch2018finite}, whereby reference values are time-varying. Wang et al.~\cite{wang2019resilient} propose event-triggered update rules that can mitigate the influence of faulty or malicious agents.
We note that in the aforementioned approaches, while the robot team is adaptive with respect to its communication topology and motion strategy, there is no adaptation with respect to the stressors $\phi$, i.e., the assumed number of non-cooperative or faulty robots is fixed---hence the \textit{pre-operative} classification of these approaches.
\subsubsection{Optimization-based trajectory control}
The use of model-predictive (e.g.,\xspace receding horizon) control for coordinating multi-robot systems consists of continuously finding paths for all robots in the system, such that a global objective is optimized (such as traffic throughput or overall fuel consumption), subject to certain constraints (e.g.,\xspace no vehicle's path collides with another path, nor with any fixed or moving obstacle).
Coupled centralized approaches, which consider the joint configuration space of all involved vehicles, have the advantage of producing optimal and complete plans~\cite{kavraki:1996,kant:1986, schouwenaars:2001}.
However, such methods rely on the fact that all vehicles cooperate in the globally determined plans~\cite{chen:2015, kant:1986}. Consequently, these approaches are notoriously brittle and susceptible to individual robot failures and non-cooperation. The work in~\cite{kuwata_Cooperative_2011a} shows that a monotonic cost reduction of global objectives can be achieved, even in non-cooperative settings. This feat, however, relies on the fact that neighboring vehicles reliably execute the agreed upon maneuvers up to a known error bound.
\subsubsection{Control barrier functions}
Reliability can also be achieved by defining a desirable subset of the robots' state space, and then generating control inputs that render this subset forward-invariant. Control barrier functions (CBFs)~\cite{borrmann_Control_2015} are a framework for establishing such forward invariance, hence providing the sought-after robustness. In one of the earliest works in this vein, Usevitch et al.~\cite{usevitch_Adversarial_2021} present a method for guaranteeing forward invariance of sets in sampled-data multi-agent systems in the presence of a set of worst-case adversarial agents (whereby the identities of the adversarial agents are known to the normal agents).
While CBFs provide a computationally efficient tool to guarantee safety in multi-agent environments, they generally assume perfect knowledge of other agents’ dynamics and behaviors (e.g.,~\cite{chen_Guaranteed_2020}).
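As a single-constraint illustration (the dynamics, gains, and closed-form projection below are simplifications; practical CBF controllers solve a quadratic program over all pairwise constraints), consider single-integrator robots with the pairwise barrier $h = \lVert p_i - p_j\rVert^2 - d^2$ and a static neighbor:
\begin{verbatim}
import numpy as np

def cbf_filter(p_i, p_j, u_des, d=1.0, alpha=1.0):
    # Minimally invasive safety filter for a single integrator:
    # enforce hdot + alpha*h >= 0 for h = ||p_i - p_j||^2 - d^2,
    # assuming the neighbor j is static (u_j = 0). With one
    # half-space constraint a.u >= b, the QP has a closed form.
    h = np.dot(p_i - p_j, p_i - p_j) - d**2
    a = 2.0 * (p_i - p_j)            # gradient of h w.r.t. p_i
    b = -alpha * h
    if a @ u_des >= b:               # nominal input already safe
        return u_des
    return u_des + (b - a @ u_des) / (a @ a) * a

p_i, p_j = np.array([1.2, 0.0]), np.array([0.0, 0.0])
u_nom = np.array([-1.0, 0.0])        # drives robot i toward j
print(cbf_filter(p_i, p_j, u_nom))   # braked to [-0.183, 0.0]
\end{verbatim}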
\subsubsection{Combinatorial approaches}
Providing resilience to any number of robot drop-outs (e.g., due to denial-of-service attacks or failures) is a computationally challenging task, since one would need to account for all possible removals of robots from the joint planning task, which is a problem of combinatorial complexity. The work in~\cite{rabban2020improved} defines a resilient coverage maximization problem, in which the objective is to select a trajectory for each robot such that target coverage is maximized in the case of a worst-case failure of $\alpha$ robots. While it is assumed that at most $\alpha$ robots may fail, it is unknown which robots are going to fail. A similar assumption is made in~\cite{schlotfeldt2018resilient} for the case of active information gathering scenario, namely, multi-robot target tracking.
\subsubsection{Protective approaches} The topic of privacy remains poorly addressed within robotics at large. Yet, privacy can be an important facet of defence against active adversaries for many types of robotics applications. Using privacy as a defence mechanism is particularly relevant for collaborative robot teams, where individual robots assume different roles with varying degrees of specialization. As a consequence, specific robots may be critical to securing the system’s ability to operate without failure. The premise is that a robot’s motion may reveal sensitive information about its role within the team.
Privacy preserving control methods, hence, tackle the problem of \textit{preventing an adversary from being able to distinguish the role} of one robot from that of another.
In~\cite{prorok_macroscopic_2016}, the authors consider collaboration across heterogeneous robot teams; their method builds on the theory of differential privacy to quantify how easy it is for an adversary to identify the \textit{type} of any robot in the group, based on an observation of the robot group’s dynamic state. Note that a similar, yet post-operative, approach is taken in~\cite{zheng_adversarial_2020}.
\textbf{Intra-operative approaches.}
These approaches are dynamic, with decision variables $x$ adapting to changes perceived in $\phi$; e.g., measurements from non-attacked robots can be used to observe ongoing failures or newly perceived obstacles.
\subsubsection{Obstacle avoidance and adaptive navigation} In contrast to the coupled (centralized) trajectory control methods introduced above, {decentralized} approaches consider the generation of collision-free paths for individual robots that cooperate only with immediate neighbors~\cite{desaraju:2012,kuwata:2007}, or with no other vehicles at all~\cite{alonso:2012, vandenberg:2005, wang_Mobile_2020}. Hence, coordination is reduced to the problem of dynamically (and reciprocally) avoiding other vehicles (and obstacles), and can generally be solved without the use of explicit communication. Although such approaches are resilient to communication-based faults and attacks, the key disadvantage is that the optimality of global objectives (such as overall traffic efficiency) can generally not be guaranteed as robots follow ad-hoc policies.
The work in~\cite{csenbacslar2019robust} combines the best of both worlds, presenting a hybrid planning strategy employing both discrete planning and trajectory optimization with a dynamic receding horizon approach. Although pre-planned trajectories form the initial coordinated trajectory plan, their method allows for adaptation to dynamic changes, including newly appearing obstacles, robots breaking down, and imperfect motion execution.
Also adapting to stressors in an online manner, the work in~\cite{cheng_Safe_2020} learns high-confidence bounds for dynamic uncertainties. This robust CBF formulation maintains safety with a high probability and adapts to the learned uncertainties.
\subsubsection{Security}
While most works addressing multi-robot fault tolerance through robust consensus policies make use of worst-case assumptions, approaches toward spoof detection make use of \textit{independent} physical channel observations (i.e., signal profiles), created by complex multi-path fading~\cite{wheeler2019switching,gil_guaranteeing_2017,renganathan_Spoof_2017}.
The methods differ, e.g.,~\cite{wheeler2019switching} determines which edges in the network to switch on or off over the evolution of the consensus in order to eliminate spoofed node influence, whereas~\cite{gil_guaranteeing_2017} assigns robot confidence values (signifying robot legitimacy).
The work in~\cite{mallmann-trenn_Crowd_2021} leverages a probabilistic measure of trustworthiness to find and eliminate adversarial robots in the presence of a Sybil attack.
\subsubsection{Combinatorial approaches}
The approaches introduced in the \textit{pre-operative} section above consider worst-case failures and over-provision for robustness (e.g., see~\cite{schlotfeldt2018resilient}). Differently, the work in~\cite{schlotfeldt2021resilient} continuously takes measurements from all non-attacked robots to observe ongoing failures (in an active information gathering task). The control algorithms, therefore, are calculated based on the actual observed stressors (i.e., attacked robots).
In a similar vein, Tzoumas et al.~\cite{tzoumas2018resilient} consider a similar scenario (i.e., fault-tolerant robot navigation with sensor scheduling), whereby at each time step, the algorithm selects system elements based on the history of inflicted attacks, deletions, or failures; this allows for guarantees of resiliency to any number of robot failures.
\begin{openProblem} [Design of signal complementarity for system resilience]
\label{op:signal_complementarity}
The commonality of many intra-operative approaches is that they leverage some independent signal, e.g., a physical observation or separate communication channel, which facilitates the online adaptation to stressors (e.g., see the need for an `eye-in-the-sky' in~\cite{saulnier_Resilient_2017a}). This, in turn, promotes the design of heterogeneous teams, that can provide the necessary complementary information. However, thus far, heterogeneous systems have been hand-designed, \edited{and their optimal control policies are hard to come by~\cite{prorok_Impact_2017a}. This compounds the problem of devising methods that incorporate heterogeneous modalities.}
\end{openProblem}
\begin{openProblem} [Co-design of control and communications]
\label{op:co_design_control_comms}
Time-varying and unreliable connectivity compounds the difficulty of
\edited{resilient group coordination and control. Joint networking and control designs are needed that exploit evolving cognitive communications, provide self-healing network topology adaptation, and guarantee privacy and security. Enhanced perception-action-communication loop designs are also needed, that provide relevant signals and feedback to local agent controllers \cite{yang_grand_2018a, fink_Robust_2011}. }
\end{openProblem}
\textbf{Post-operative approaches.}
Learning-based methods have proven effective at designing robot control policies for an increasing number of multi-robot tasks, whereby Imitation Learning (IL) (e.g.,~\cite{tolstaya_Learning_2019a, li_Messageaware_2021}) and Reinforcement Learning (RL) (e.g.,~\cite{wang_Mobile_2020}) are currently the leading paradigms. In both cases, the learning procedure leverages information that is accumulated within robot neighborhoods, composing batches of data that are learned from \textit{post factum}.
\subsubsection{Multi-Agent Reinforcement Learning (MARL)}
Learning to interact in multi-agent systems is challenged by the non-stationarity of the environment, as agents learn concurrently to coordinate their actions, and continually change their decision-making policies~\cite{papoudakis_Dealing_2019}. An actor-critic method is presented in~\cite{lowe_MultiAgent_2017a} that successfully learns policies that require complex multi-agent coordination, discovering various physical and informational coordination strategies.
The work in \cite{omidshafiei_Deep_2017} introduces a decentralized single-task learning approach that is robust to concurrent interactions of teammates. It presents an approach for distilling single-task policies into a unified policy that performs well across multiple related tasks, without explicit provision of task identity.
The work in~\cite{zhang_Robust_2020a} studies the MARL problem with model uncertainty. The authors pose the problem as a robust Markov game, where the goal of all agents is to find policies such that no agent has the incentive to deviate, i.e., reach some equilibrium point, which is also robust to the possible uncertainty of the MARL model.
The work in~\cite{cheng_general_2021} proposes a framework that uses an epistemic logic to quantify trustworthiness of agents, and embed the use of quantitative trustworthiness values into control and coordination policies.
\subsubsection{Imitation Learning (IL)} The idea behind IL is to start with simple (small-scale) problems and use corresponding (optimal) solutions as examples to approach more complex, large-scale problems. This progression from example to application is crucial to mitigating the shortcomings of decentralized approaches in solving challenging multi-robot problems.
Bridging the gap between the qualities of centralized and decentralized approaches, IL-based methods promise to find solutions that \textit{balance optimality and real-world efficiency}, as demonstrated in recent works, e.g.,~\cite{tolstaya_Learning_2019a, li_Graph_2020a, li_Messageaware_2021}. Although generalization to unseen cases has been successfully demonstrated, these approaches remain brittle due to their dependency on expert demonstrations during learning.
\subsubsection{Graph Neural Networks (GNNs)} While centralized-training, decentralized-execution (CTDE)~\cite{oliehoek_Optimal_2008} is the \edited{typical} paradigm for multi-agent RL and multi-agent IL, the underlying machine learning framework can vary.
\edited{Graph Neural Networks (GNNs) have} shown remarkable performance across a number of multi-robot problems~\cite{khan_2020, tolstaya_Learning_2019a, li_Graph_2020a, kortvelesy_ModGNN_2021,Talak21arxiv-neuralTree,Ravichandran21arxiv-RLwithSceneGraphs}.
\edited{Graph nodes represent robots and edges model communication links between them~\cite{prorok_Graph_2018a}. GNNs provide a general learning framework for perception-action-communication loops that incorporates network topology, distributed processing, and control~\cite{hu_Scalable_2021}. Global state information can be distilled and shared through neighbor data exchange.
GNNs, like conventional NNs, may be susceptible to adversarial attack, e.g., malicious agents can learn to manipulate or outperform other agents sharing the same communication channel~\cite{blumenkamp_Emergence_2020b}. A countermeasure was proposed in~\cite{mitchell2020gaussian}, providing a probabilistic model that allows agents to compute confidence values quantifying the truthfulness of any given communication partner. This confidence can be used to suppress suspicious information. Yet, as noted in Open Problem~\ref{op:signal_complementarity}, this idea leans on information complementarity---future work should look to specifying the requirements needed, and guarantees that can be provided.
}
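For concreteness, a bare-bones message-passing layer of the kind GNN policies stack is sketched below (weights are random placeholders; learned implementations train them end-to-end and repeat the exchange over several rounds).
\begin{verbatim}
import numpy as np

def gnn_layer(X, A, W_self, W_neigh):
    # One round of neighbor data exchange: each robot combines its
    # own feature row with the mean of its neighbors' features.
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    neigh_mean = (A @ X) / deg
    return np.tanh(X @ W_self + neigh_mean @ W_neigh)

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 3))                  # 4 robots, 3 features
A = np.array([[0,1,0,1],[1,0,1,0],
              [0,1,0,1],[1,0,1,0]], float)   # ring communication graph
H = gnn_layer(X, A, rng.normal(size=(3,3)), rng.normal(size=(3,3)))
print(H.shape)  # (4, 3): each output uses one-hop information only
\end{verbatim}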
\subsubsection{Adversarial training}
Learning to deal with adversarial input or disruption during training is a promising approach to providing for resilience. In~\cite{zheng_adversarial_2020}, Zheng et al. leverage data-driven adversarial co-optimization, and design a mechanism that optimizes a flock's motion control parameters, such that the risk of flock \textit{leader identification} is minimized. This approach is reminiscent of the ideas in~\cite{prorok_macroscopic_2016} that aim to preserve role privacy. While the work in~\cite{blumenkamp_Emergence_2020b} first shows that an adversary can learn to exploit other agents' behaviors to better its own reward, it also shows that when the learning is alternated, cooperative agents are able to learn to recoup their performance losses. This line of work was extended in~\cite{mitchell2020gaussian}, where a local filter is trained to detect implausible communication, allowing agents to cooperate more robustly.
The work in~\cite{stone_collaborative_1998} shows the necessity of performing both collaborative and adversarial learning, resulting in successful team performance that can withstand opponent attacks.
Yet, recent work~\cite{lechner_Adversarial_2021} argues that adversarial training can introduce novel error profiles in robot learning schemes, and more work is required to fully understand how the method can be leveraged for safety-critical applications.
\begin{openProblem} [Quick vs. slow learning]
\label{op:quick_v_slow}
\edited{Once deployed, a policy may become stale and should be updated based on newly collected data. If updated too soon, noisy data may lead to overfitting and poor generalization. }
Conversely, not updating the policy often enough can lead to catastrophic failures and an inability to adapt.
\end{openProblem}
\begin{openProblem} [Unsupervised resilience learning]
\label{op:unsupervised_resilience}
Supervised learning, including reinforcement learning and imitation learning, requires a priori specification of rewards / cost functions, or access to expert data.
Resilience, however, requires autonomous identification and diagnosis of failure to inform how robot policies and configurations should change post-operatively. It is currently unclear how this is to be achieved without supervisory intervention.
\end{openProblem}
\begin{openProblem} [Interpretability of multi-agent policies]
\label{op:interpret}
Time-varying and unpredictable connectivity patterns complicate the task of explaining and guaranteeing performance of multi-agent policies---literature on visualizing/interpreting multi-agent communication is sparse, with recent solutions designed specifically for the task at hand (e.g.,~\cite{blumenkamp_Emergence_2020b}).
\end{openProblem}
\vspace{-0.3cm}
\subsection{Other Applications}
\label{subsec:others}
\subsubsection{Robot Co-design}
Co-design problems aim at jointly designing sensing, computation, control and other algorithmic aspects
that enable robots to perform a given task. Most of the approaches in this section are pre-operative, in the sense that they design for the worst case, but some evolutionary approaches can be considered post-operative since they evolve the system design after multiple executions.
Traditional control-theoretic approaches study sensor selection~\cite{Faming11icmsi,Joshi09tsp-sensorSelection,Gupta06automatica,Leny11tac-scheduling,Jawaid15automatica-scheduling,Zhao16cdc-scheduling,Tzoumas16acc-sensorScheduling,Carlone18tro-attentionVIN,Summers16tcns-sensorScheduling,Nozari17acc-scheduling,Summers17arxiv,Summers17arxiv2,Golovin10icipsn-sensorSelection}, while more modern techniques co-design sensing and control~\cite{tanaka15cdc-sdplqg,Tatikonda04tac-limitedCommControl,Tzoumas18acc-sLQG,Tzoumas20tac-sLQG}.
A main limitation of this line of work is that the pursuit of theoretical guarantees limits these
papers to focus on linear dynamical systems, a representation that struggles to capture the nonlinear and
possibly discrete nature of perception and sensing in real-world robotics.
Evolutionary approaches~\cite{Lipson00nature-codesign,Hornby03tro-codesign,Lipson16ecal-codesign,Cheney18jrsi-codesign}
provide a powerful paradigm that can indeed be understood in terms of Reinforcement Learning;
this approach has not been applied to sensing and perception aspects, due to the size of the search space (e.g.,\xspace choice of algorithms, parameters, and computation) and the difficulty of designing differentiable perception modules (e.g.,\xspace~\cite{Brachmann17CVPR-DSAC}).
Similar considerations hold for modular languages and modularity-based approaches~\cite{Mehta15jmr-codesign,Hornby03tro-codesign,Ramos18icae}, where perception is typically simplified to reduce the size of the library or language and make the design tractable.
Only a few optimization-based co-design approaches have explicitly tackled sensing and perception.
Among those, Zhang~\emph{et al.}~\cite{Zhang17rss-vioChip} investigate hardware-and-algorithms co-design
for visual-inertial odometry, and provide a heuristic approach to explore the search space.
Zardini~\emph{et al.}~\cite{Zardini21arxiv-codesign} leverage Censi's monotone co-design theory~\cite{Censi15arxiv-codesign} to design hardware and software for an autonomous vehicle.
The work~\cite{Carlone19icra-codesign} designs sensing and hardware for a multi-robot team in charge of a
collective transport task using integer linear programming.
\begin{openProblem}[Multi-Robot Co-design]
The literature on co-design is in its infancy, and the current tools for automated design
still fall short of providing a satisfactory design tool for real-world robotics problems.
E.g., none of the existing approaches is able to tame the complexity of a modern SLAM
pipeline, due to their scalability limitations and underlying assumptions.
In particular, co-design approaches neglect resilience altogether (while robustness is investigated in~\cite{Censi17ral-codesign}) and only a few
involve multi-robot systems~\cite{Carlone19icra-codesign}.
\label{op:co-design}
\end{openProblem}
\subsubsection{Co-optimization of environment and multi-robot policies}
Current approaches to the design of mobile robot systems consider the environment as a fixed constraint~\cite{gombolay_Fast_2013a, smith_Estimating_1990a, prorok_Multilevel_2011a}. In the case of navigation, structures and obstacles must be circumnavigated; in this process, mobile agents engage in negotiations for right-of-way, driven by local incentives to minimize individual delays. Even in cooperative systems, environmental constraints can lead to dead-locks, live-locks, and prioritization conflicts~\cite{bennewitz_Finding_2002a, jager_Decentralized_2001a}.
Despite the obvious influence of spatial constraints on agent interactions~\cite{boudet_collections_2021b}, the optimization of mobile robot systems and their immediate environment has, traditionally, been \textit{disjoint}, and little thought is given to what would make an artificial environment \textit{conducive} to effective and efficient collaboration, cooperation and coordination within mobile robot systems.
As we progress with automated, roboticized systems, we must jointly re-evaluate the shape, form, and function of the environments that we operate in. Ultimately, this approach will allow us to overcome incremental research results, based on solutions that consider the environment as a fixed constraint, to provide for robustness and resilience in a holistic way.
\begin{openProblem}[Co-optimization of robots and their environment]
Concurrent optimization of robot policies and the environment they operate in has received little attention thus far (although evidence suggests significant benefits, e.g.,~\cite{cap_Asynchronous_2013,saunders_Teaching_2006a}). Such approaches are particularly applicable in man-made workspaces (e.g., factories, warehouses and urban settings), especially when stressors originate in the environment.
\label{op:co-opt}
\end{openProblem}
\subsection{Grand Challenges}
\label{subsec:open_perception}
\textit{Introspective,~Resilient,~Multi-Robot~High-level~Understanding:}
We believe a grand challenge in multi-robot perception is to develop multi-robot teams that can build a human-level shared representation of the environment (encompassing geometry, semantics, relations among entities in the scene, and more) in real-time and under computation and communication constraints. A second grand challenge is the design of truly resilient perception algorithms: we believe that the first step towards this goal is to develop \emph{introspection techniques} that can reason over failures, rather than just trying to avoid failures at all costs; the second step would then be to understand how automated system tuning and reconfiguration would impact the system performance in response to a failure.
\textit{Redundancy vs Complementarity:}
\edited{The open problems in Sections~\ref{subsec:planning} and~\ref{subsec:control} highlight the challenge of including adequate levels of complementarity and/or redundancy in system designs, for example through the provision of orthogonal sensing capabilities, distributed across the robot team, or redundant numbers of robots.
This pre-operative approach relates to the idea of \textit{anticipatory} resilience (cf. Fig.~\ref{fig:resilience_vs_robustness}). While redundancy is reminiscent of over-provisioning (and classical notions of robustness), its purpose in this context is to target unexpected disruptions (in contrast to modeled disruptions).
The grand challenge consists of devising foundational methods that inform which capabilities are to be integrated, and through which interaction paradigms.
}
\textit{Inter-disciplinary Resilience:}
\edited{The works in Table~\ref{taxonomy} highlight resilience research in the domains of perception, planning, and control. However, the complexity and inter-disciplinary nature of future applications of multi-agent systems requires that we investigate resilience at the \textit{intersection} of robotics domains.
A grand challenge is to investigate the complex interplay and second order effects of stressors across the domains of perception, planning, and control. For example, failures due to stressors on an agent's perception could be addressed through planning by leveraging a heterogeneous teammate's complementary sensor system that is more resilient to the targeted stressor that is encountered (see Open Problem~\ref{op:signal_complementarity}).
A second grand challenge, aligned with Open Problem \ref{op:resilience_measure}, is to develop interdisciplinary measures of resilience.}
\vspace{-0.4cm}
\subsection{Survivorship Bias}
\label{subsec:bias}
Current research practice and publication standards pressure the community to report successes only, which leads to a culture wherein failures and mistakes may be poorly documented, undisclosed, and consequently, not discussed publicly. Operating in this manner reinforces a \textit{survivorship bias}\footnote{\url{https://en.wikipedia.org/wiki/Survivorship_bias}}, which stymies learning from errors and controversially, leads to fragile designs~\cite{mangel_Abraham_1984}. Changes in publication culture, including more venues targeting negative results, would help accelerate progress towards resilient solutions.
|
2,869,038,156,905 | arxiv | \section{Introduction}
In the last decade or so, non-Fermi liquid (NFL) behaviors around a quantum critical point (QCP) have been one of the main issues in physics, not only in heavy fermion systems\cite{lohneysen}, but also in those exhibiting the Mott transition\cite{imada}.
Of these NFL behaviors, those of heavy fermion systems with f$^2$-configuration form a kind of subclass in which the QCP is triggered by local criticalities, such as the two-channel Kondo effect (TCKE) due to the non-Kramers doublet state\cite{cox,cox1}, and that caused by the competition between the crystalline-electric field (CEF) singlet and the Kondo-Yosida (K-Y) singlet states\cite{yotsuhashi,hattori}.
The former TCKE was reported to be observed in La$_{1-x}$Pr$_x$Pb$_3$ that has a $\Gamma_3$ non-Kramers doublet ground state in the cubic symmetry\cite{kawae}.
The NFL behaviors in Th$_{1-x}$U$_x$Ru$_2$Si$_2$ were understood in a unified way by assuming that the system is located near the phase boundary between the CEF singlet and the K-Y singlet states\cite{yotsuhashi}.
However, a detailed study of the magnetic field dependence of NFL behaviors has not been performed so far. \par
In the present paper, we investigate the magnetic field dependence of NFL behaviors in the specific heat $C_{\rm imp}(T)$ and the entropy $S_{\rm imp}(T)$ due to f-electrons with the two-orbital impurity Anderson model in a tetragonal symmetry with the CEF singlet ground state on the basis of the numerical renormalization group (NRG) method\cite{wilson,krishna}.
We discuss how the magnetic field, $H_z$, changes the characteristic temperature, $T_{\rm F}^{*}$, which is defined as the temperature at which the temperature derivative of the entropy, $\partial S_{\rm imp}(T)/\partial (\log\,T)$, takes its maximum value as $S_{\rm imp}(T)$ approaches 0 for $T \rightarrow 0$.
In the vicinity of the QCP, $T_{\rm F}^{*}$ is suppressed by the effect of the competition between the CEF singlet and the K-Y singlet states for $H_z=0$, and the NFL behaviors occur at $T_{\rm F}^{*} < T < T_{\rm K2}$, where $T_{\rm K2}$ is the lower Kondo temperature of the two orbitals, as in the case of the TCKE.
The magnetic field is shown not to affect $T_{\rm F}^{*}$ up to a certain value $H_z^{*}$ which is determined approximately by the condition that the effect of the magnetic field, destroying a criticality of the TCKE type, becomes comparable to the effect of the deviation from the criticality at $H_z=0$.
$H_z^{*}$ so determined is far larger than $T_{\rm F}^{*}(H_z=0)$ for a reasonable set of parameters.
As a result, the NFL behaviors become robust against the magnetic field up to $H_z^{*} \sim T_{\rm K2}$ which is about hundred times larger than $T_{\rm F}^{*}(H_z=0)$.\par
This paper is organized as follows.
In \S 2, the model Hamiltonian is introduced and transformed into a form suitable for the NRG calculation.
In \S 3, we discuss how the characteristic temperature $T_{\rm F}^{*}$ is affected by the effect of the competition between the CEF singlet and the K-Y singlet states in the case of $H_z =0$.
In \S 4, we demonstrate the magnetic field dependence of $T_{\rm F}^{*}$ and $\gamma_{\rm imp}(T) = C_{\rm imp}/T$.
In the vicinity of the QCP, there are parameter regions where the $-\log\,T$ behavior of $\gamma_{\rm imp}$, at temperatures $T_{\rm F}^{*} < T < {\rm min}(T_{\rm K},\Delta)$, is robust against the magnetic field.
In \S 5, we investigate how such an anomalous NFL is affected by the change of the characteristic energy scale of two singlet states.
In \S 6, we summarize our results and discuss their applicability for understanding the magnetically robust NFL behaviors observed in UBe$_{13}$, because such an NFL that is robust against the magnetic field can arise in systems with other symmetries if the K-Y singlet state and the CEF singlet state compete for the ground state.
\section{Model Hamiltonian}
In this section, we recapitulate discussions of ref.\citen{yotsuhashi} about how to derive the model Hamiltonian for discussing the competition between the K-Y singlet and the CEF singlet states in f$^2$-configuration on the basis of the $j-j$ coupling scheme in the tetragonal symmetry.
We restrict the $f^1$ state to the two low-lying doublet states out of the three doublets of the $j=5/2$ orbitals, and assign the pseudospin representation to these states as follows:
\begin{eqnarray}
\label{2.1a}
\vert \Gamma_{7+}^{(2)} \rangle &=& \frac{3}{\sqrt{14}} \vert + \frac{5}{2} \rangle - \sqrt{\frac{5}{14}} \vert -\frac{3}{2} \rangle \equiv \vert \uparrow, 0 \rangle, \\
\vert \Gamma_{7-}^{(2)} \rangle &=& -\frac{3}{\sqrt{14}} \vert -\frac{5}{2} \rangle + \sqrt{\frac{5}{14}} \vert +\frac{3}{2} \rangle \equiv \vert \downarrow, 0 \rangle, \\
\vert \Gamma_{6,+} \rangle &=& \vert + \frac{1}{2} \rangle \equiv \vert 0, \uparrow \rangle, \\
\label{2.1d}
\vert \Gamma_{6,-} \rangle &=& \vert - \frac{1}{2} \rangle \equiv \vert 0, \downarrow \rangle.
\end{eqnarray}
Here, for example, $\vert \hspace{-2.0mm}\uparrow,0 \rangle$ represents the state where orbital 1 ($\Gamma_{7}^{(2)}$) with up pseudospin is occupied and orbital 2 ($\Gamma_6$) is empty.
We also restrict the $f^2$ state to four low-lying states out of the states allowed in the $J=4$ manifold, and construct these four states from the direct product of $f^1$ states.
Here, we have discarded states where two f-electrons occupy the same orbital, $\vert \hspace{-1.0mm} \uparrow \downarrow, 0 \rangle$, $\vert 0, \uparrow \downarrow \rangle$, because the intra-orbital Coulomb repulsion is larger than the inter-orbital one.
Then, low-lying four $f^2$ states are expressed as
\begin{eqnarray}
\label{2.2a}
\vert \Gamma_4 \rangle &=& \frac{1}{\sqrt{2}} \left( \vert +2 \rangle - \vert -2 \rangle \right) =\frac{1}{\sqrt{2}} \left( \vert \hspace{-1.0mm} \downarrow, \uparrow \rangle - \vert \hspace{-1.0mm} \uparrow, \downarrow \rangle \right),\ \ \ \\
\label{2.2b}
\vert \Gamma_3 \rangle &=& \frac{1}{\sqrt{2}} \left( \vert +2 \rangle + \vert -2 \rangle \right) =\frac{1}{\sqrt{2}} \left( \vert \hspace{-1.0mm} \uparrow, \downarrow \rangle + \vert \hspace{-1.0mm} \downarrow, \uparrow \rangle \right),\ \ \ \\
\label{2.2c}
\vert \Gamma_{5,+}^{(2)} \rangle &=& \beta \vert +3 \rangle - \alpha \vert -1 \rangle = \vert \hspace{-1.0mm} \uparrow, \uparrow \rangle,\\
\label{2.2d}
\vert \Gamma_{5,-}^{(2)} \rangle &=& \beta \vert -3 \rangle - \alpha \vert +1 \rangle = \vert \hspace{-1.0mm} \downarrow, \downarrow \rangle.
\end{eqnarray}
It is noted that we cannot determine the coefficients $\alpha$ and $\beta$, because we have discarded one of the doublets in the f$^1$-configuration.
Therefore, in this paper, we take the $j$-$j$ coupling representation for the f$^2$ states with $\Gamma_5^{(2)}$ symmetry, as shown in the Appendix together with the derivation of eqs. (\ref{2.1a})-(\ref{2.2d}).\par
We assume that the CEF ground state is the singlet ($\Gamma_4$), the first excited CEF state is the magnetic doublet ($\Gamma_5$) with the excitation energy $\Delta$, and the second excited CEF state is the singlet ($\Gamma_3$) with the excitation energy $K$, as shown in Fig.\ref{fig1}.
\begin{figure}[t]
\begin{center}
\includegraphics[width = 0.35\textwidth]{arxivfig1.eps}
\caption{CEF level scheme of low-lying $f^2$ states and their eigenstates. }
\label{fig1}
\end{center}
\vspace{-8mm}
\end{figure}
Such a CEF level scheme can be reproduced by introducing the ``antiferromagnetic Hund's-rule coupling'' for the pseudospin as
\begin{equation}
\mathcal{H}_{\rm Hund} = \frac{J_{\perp}}{2} \left[ S_1^{+}S_{2}^{-} +S_1^{-}S_{2}^{+} \right] + J_z S_{1}^{z}S_{2}^{z},
\label{2.3}
\end{equation}
where coupling constants are defined as $J_{\perp} =K$ and $J_z = 2\Delta -K$, respectively, and $\vec{S}_i$ is a pseudospin operator of the localized electron in the orbital $i$ defined as
\begin{equation}
\vec{S}_i = \frac{1}{2}\sum_{\sigma\sigma^{'}}f_{i\sigma}^{\dagger} \vec{\sigma}_{\sigma\sigma^{'}} f_{i\sigma^{'}}.
\label{2.4}
\end{equation}
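The statement that this coupling reproduces the level scheme of Fig.\ref{fig1} can be checked directly by diagonalizing the $4\times 4$ two-pseudospin Hamiltonian; the short verification sketch below (in Python, with the parameter values used later in this paper) prints the excitation energies $\Delta$ (doubly degenerate) and $K$.
\begin{verbatim}
import numpy as np

Sx = np.array([[0, 1], [1, 0]]) / 2
Sy = np.array([[0, -1j], [1j, 0]]) / 2
Sz = np.array([[1, 0], [0, -1]]) / 2

def hund_levels(K, Delta):
    # Eigenvalues of eq. (2.3) on the two-pseudospin space,
    # with J_perp = K and J_z = 2*Delta - K.
    Jp, Jz = K, 2 * Delta - K
    Sp, Sm = Sx + 1j * Sy, Sx - 1j * Sy
    H = (Jp / 2) * (np.kron(Sp, Sm) + np.kron(Sm, Sp)) \
        + Jz * np.kron(Sz, Sz)
    return np.linalg.eigvalsh(H)

E = hund_levels(K=0.16, Delta=0.112)
print(np.round(E - E[0], 3))
# [0, 0.112, 0.112, 0.16]: Gamma_4, Gamma_5 doublet, Gamma_3
\end{verbatim}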
Furthermore, we assume that the f-electrons constructing the f$^2$ state hybridize with conduction electrons that have the same symmetry as each $f^1$ state.
Then the system can be described by the two-orbital impurity Anderson model with the ``antiferromagnetic Hund's-rule coupling'' as follows:
{\small
\begin{align}
\label{2.5a}
\mathcal{H} &= \mathcal{H}_{\rm c} + \mathcal{H}_{\rm hyb} + \mathcal{H}_{\rm f} + \mathcal{H}_{\rm Hund},\\
\label{2.5b}
\mathcal{H}_{\rm c} &=\sum_{i=1,2} \sum_{\vec{k}\sigma} \varepsilon_{\vec{k}} c_{\vec{k}i\sigma}^{\dagger} c_{\vec{k}i\sigma},\\
\label{2.5c}
\mathcal{H}_{\rm hyb} &= \sum_{i=1,2} \sum_{\vec{k}\sigma} \left( V_{i\vec{k}} c_{\vec{k}i\sigma}^{\dagger} f_{i\sigma} + {\rm h.c.} \right),\\
\label{2.5d}
\mathcal{H}_{\rm f} &=\sum_{i=1,2}\sum_{\sigma}E_{fi} f_{i\sigma}^{\dagger}f_{i\sigma} + \sum_{i=1,2}\sum_{\sigma}\frac{U_i}{2}f_{i\sigma}^{\dagger}f_{i \bar{\sigma}}^{\dagger} f_{i\bar{\sigma}}f_{i\sigma},
\end{align}
}
where $f_{i\sigma}$ ($f_{i\sigma}^{\dagger}$) is the annihilation (creation) operator of the f-electron in orbital $i$ with energy $E_{fi}$, and $c_{\vec{k}i\sigma}$ ($c_{\vec{k}i\sigma}^{\dagger}$) is that of the conduction electron with wave vector $\vec{k}$ which hybridizes, with strength $V_{i \vec{k}}$, with the f-electron of the orbital-$i$ symmetry.
Here, the on-site intra-orbital Coulomb repulsion $U_i$ is explicitly taken into account, while other Coulomb repulsion terms like the inter-orbital or the exchange interaction, are implicitly included in the ``antiferromagnetic Hund's-rule coupling'' of (\ref{2.3}).\par
To analyze properties of the system described by the Hamiltonian (\ref{2.5a}) by the Wilson NRG method \cite{wilson,krishna}, we transform the conduction electron part as usual.
For simplicity, we take conduction bands to be isotropic in momentum space, i.e. the hybridization depends only on the orbital $i$, $V_{i\vec{k}} \equiv V_{i}$, and symmetric in the energy space (with an extent from $-D$ to $D$) about the Fermi level.
We discretize the conduction bands logarithmically with the discretization parameter $\Lambda$, and perform the unitary transformation assuming the density of states of the conduction bands to be constant.
Thus, eqs. (\ref{2.5b}) and (\ref{2.5c}) can be rewritten as
{\small
\begin{align}
\mathcal{H}_{\rm c} &= \sum_{i,\sigma} \sum_{n=0}^{\infty} \Lambda^{-n/2} t_n \left( f_{i,n\sigma}^{\dagger} f_{i,n+1\sigma} + f_{i,n+1\sigma}^{\dagger} f_{i,n\sigma}\right),\\
\mathcal{H}_{\rm hyb} &= \sum_{i,\sigma} V_i \left( f_{i,0\sigma}^{\dagger} f_{i,-1\sigma} + f_{i,-1\sigma}^{\dagger} f_{i,0\sigma} \right),
\label{2.6}
\end{align}
}
where $f_{i,n}$ ($f_{i,n}^{\dagger}$) is the annihilation (creation) operator of the conduction electron in the shell orbital whose extent is $k_{\rm F} \Lambda^{n/2}$ and $f_{i,-1\sigma} \equiv f_{i\sigma}$.
The hopping integral between $n$-th and $(n+1)$-th shell states, $t_n$, is expressed as
\begin{equation}
t_n = \frac{D(1+\Lambda^{-1})(1-\Lambda^{-n-1})}{2\sqrt{(1-\Lambda^{-2n-1})(1-\Lambda^{-2n-3})}}.
\label{2.7}
\end{equation}
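For reference, the exponential decay of these hoppings, which provides the separation of energy scales exploited by the NRG, can be tabulated directly from eq. (\ref{2.7}); the short script below (with $D=1$ and $\Lambda=3$ as used in this paper) is a straightforward transcription.
\begin{verbatim}
import math

def hopping(n, Lam=3.0, D=1.0):
    # Wilson-chain hopping t_n of eq. (2.7).
    num = D * (1 + 1 / Lam) * (1 - Lam**(-n - 1))
    den = 2 * math.sqrt((1 - Lam**(-2*n - 1)) * (1 - Lam**(-2*n - 3)))
    return num / den

for n in range(5):
    # In H_N, shell n enters with the exponentially small scale
    # Lam**(-n/2) * t_n, which the iterative diagonalization resolves.
    print(n, round(hopping(n), 5), round(3.0**(-n/2) * hopping(n), 5))
\end{verbatim}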
Then, we define $\mathcal{H}_N$ which approaches $\mathcal{H}/( D(1 + \Lambda^{-1})/2 )$ in the limit $N \rightarrow \infty$ as follows:
{\small
\begin{eqnarray}
\notag&& \mathcal{H}_N = \Lambda^{(N-1)/2} \left[ \tilde{\mathcal{H}}_{\rm f} + \sum_{i,\sigma} \tilde{V}_i\left( f_{i,0\sigma}^{\dagger} f_{i,-1\sigma} + f_{i,-1\sigma}^{\dagger} f_{i,0\sigma} \right) \right.\\
\label{2.9}
&& \left. + \sum_{i,\sigma} \sum_{n=0}^{N-1} \Lambda^{-n/2} \tilde{t}_n \left( f_{i,n\sigma}^{\dagger} f_{i,n+1\sigma} + f_{i,n+1\sigma}^{\dagger} f_{i,n\sigma}\right) \right],
\end{eqnarray}
}
where the tilde indicates that energies are measured in a unit of $D(1+\Lambda^{-1})/2$.
The Hamiltonian (\ref{2.9}) satisfies the recursion relation
\begin{equation}
\mathcal{H}_{N+1} = \Lambda^{1/2}\mathcal{H}_N + \sum_{i\sigma} \tilde{t}_N \left( f_{i,N\sigma}^{\dagger} f_{i,N+1\sigma} + f_{i,N+1\sigma}^{\dagger} f_{i,N\sigma}\right).
\label{2.10}
\end{equation}
We solve the whole sequence of Hamiltonians ($\mathcal{H}_{N}$) by using the recursive form (\ref{2.10}), keeping up to 1500 states at each iteration step, and use $\Lambda=3.0$ in all the calculations below unless explicitly stated.
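The structure of this iterative diagonalization is illustrated by the sketch below for a single spinless Wilson chain (the actual calculation carries two spinful orbitals, the impurity terms of eq. (\ref{2.5a}), and conserved quantum numbers, and keeps 1500 states rather than 64); each step enlarges the chain by one shell, rescales by $\sqrt{\Lambda}$ as in eq. (\ref{2.10}), diagonalizes, and truncates.
\begin{verbatim}
import numpy as np

def ttilde(n, Lam):
    # Dimensionless hopping of eq. (2.9): t_n / (D(1+1/Lam)/2).
    return ((1 - Lam**(-n - 1)) /
            np.sqrt((1 - Lam**(-2*n - 1)) * (1 - Lam**(-2*n - 3))))

def nrg_sketch(n_iter=12, Lam=3.0, keep=64):
    c = np.array([[0.0, 1.0], [0.0, 0.0]])  # annihilator on a new shell
    H = np.zeros((2, 2))                    # H_0: first site alone
    fdag = c.T.copy()                       # newest-site f^dagger
    P = np.diag([1.0, -1.0])                # fermion parity (sign rule)
    for n in range(n_iter):
        hop = np.kron(fdag @ P, c)          # f_n^dag f_{n+1}, JW sign
        Hn = (np.sqrt(Lam) * np.kron(H, np.eye(2))
              + ttilde(n, Lam) * (hop + hop.T))
        E, U = np.linalg.eigh(Hn)
        U = U[:, :min(keep, len(E))]        # keep the lowest states
        H = np.diag(E[:U.shape[1]] - E[0])  # energies from ground state
        fdag = U.T @ np.kron(P, c.T) @ U
        P = U.T @ np.kron(P, np.diag([1.0, -1.0])) @ U
        print(n, np.round(E[:4] - E[0], 4)) # even/odd alternating flow

nrg_sketch()
\end{verbatim}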
\section{NFL Behavior due to Competition between CEF and K-Y singlets}
In this section, we discuss the effect of the competition between the CEF singlet and the K-Y singlet states, which can give rise to a NFL state.
It is already known that the system described by the Hamiltonian (\ref{2.5a}) has the competition between the K-Y singlet and the $f^2$-CEF singlet states\cite{yotsuhashi}.
In general, the energy level and the strength of hybridization with conduction electrons are different for each f-orbital.
In the present paper, we take parameters so that the Kondo temperature of orbital 2 is always lower than that of orbital 1: i.e., we set the parameters of the two-orbital impurity Anderson model, eq.~(\ref{2.5a}), as $E_{f1}= E_{f2}=-0.4, U_1=U_2=1.0, V_1=0.45,$ and $V_2=0.3$.
Hereafter, the unit of energy is taken as $D$ unless stated explicitly.
In the case of $K=\Delta=0$, the model Hamiltonian, eq. (\ref{2.5a}), reduces to two independent impurity Anderson models.
The Kondo temperature of each orbital can be determined by Wilson's definition, $4T_{\rm K}\chi_{\rm imp}(T=0) = 0.413$, for the conventional Anderson model as $T_{\rm K1}= 6.10 \times 10^{-2}$ and $T_{\rm K2}= 6.01 \times 10^{-3}$, respectively.\par
For the finite value of CEF parameters, $(K,\Delta)$, there are two stable Fermi Liquid (FL) fixed points corresponding to two singlet ground states as shown in Fig.\ref{fig2}: the K-Y singlet (filled circles) and the CEF singlet (open circles) fixed points.
\begin{figure}[t]
\begin{center}
\includegraphics[width = 0.40\textwidth]{arxivfig2.eps}
\caption{Phase diagram of the ground state in $K-\Delta$ plane. Filled circles represent the K-Y singlet fixed point and open circles represent the $f^2$-CEF singlet fixed point. Parameter set is $E_{f1}=E_{f2}=-0.4, U_1=U_2=1.0, V_1=0.45,$ and $V_2=0.3$. }
\label{fig2}
\end{center}
\vspace{-8mm}
\end{figure}
At the boundary of these two regions of FL fixed points, there exists a curve of critical points, across which the energy spectra for even and odd iterations interchange, and NFL behaviors appear in the vicinity of the boundary.
To analyze further, we fix one of the CEF parameters as $K=0.16$, and calculate the physical properties for a series of values of the CEF splitting parameter $\Delta$.
Analyzing the vicinity of the critical point in more detail, we determine the critical value of $\Delta$ as $\Delta^{*} \simeq 0.112$ for $K=0.16$.\par
Fig.\ref{fig3} shows the result of the temperature dependence of $S_{\rm imp}(T)$, the entropy due to f-electrons, near the critical point.
\begin{figure}[t]
\begin{center}
\includegraphics[width = 0.40\textwidth]{arxivfig3.eps}
\caption{Temperature dependence of the entropy due to f-electrons in systems near the critical point. The parameter set is the same as that used in Fig.\ref{fig2}. In order to obtain the result with a higher accuracy, 3000 states are kept in each step of the NRG. The ground state of each system is indicated by open symbols for the K-Y singlet, and filled symbols for the CEF singlet. The characteristic temperature $T_{\rm F}^{*}$ is given by the temperature at which $\partial S_{\rm imp}(T)/\partial ( \log\,T)$ becomes maximum on the lower temperature side.}
\label{fig3}
\end{center}
\vspace{-8mm}
\end{figure}
As mentioned above, the characteristic temperature $T_{\rm F}^{*}$ is defined as the temperature at which the temperature derivative of the entropy, $\partial S_{\rm imp}(T)/\partial (\log\,T)$, takes its maximum value just before $S_{\rm imp}(T)$ approaches 0 as $T \rightarrow 0$.
As seen in Fig.\ref{fig3}, $T_{\rm F}^{*}$ is drastically suppressed by the effect of the competition near the critical value of CEF splitting $\Delta =0.112 \simeq \Delta^{*}$.\par
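Operationally, once the NRG supplies $S_{\rm imp}$ on a logarithmic temperature grid, $T_{\rm F}^{*}$ is read off as the maximizer of the logarithmic derivative; the post-processing sketch below uses synthetic crossover data (a smoothed step of height $\log 2$ around a scale $T_0$) purely for illustration.
\begin{verbatim}
import numpy as np

def t_fstar(T, S):
    # T_F*: temperature maximizing dS/d(log T) on the low-T
    # shoulder where S_imp falls toward zero; T is increasing.
    dS = np.gradient(S, np.log(T))
    return T[np.argmax(dS)]

T0 = 1e-4
T = np.logspace(-7, -1, 200)
S = np.log(2) / (1 + (T0 / T)**2)   # toy entropy crossover
print(t_fstar(T, S))                # close to T0
\end{verbatim}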
Fig.\ref{fig4} shows the $\Delta$ dependence of $T_{\rm F}^{*}$ which is obtained by numerical calculations of $S_{\rm imp}(T)$.
\begin{figure}[tb]
\begin{center}
\includegraphics[width = 0.38\textwidth]{arxivfig4.eps}
\caption{$\Delta$ dependence of $T_{\rm F}^{*}$. The effect of the competition between two singlet states suppresses $T_{\rm F}^{*}$, and in particular $T_{\rm F}^{*} =0$ at the critical point $\Delta = 0.112 \simeq \Delta^{*}$. }
\label{fig4}
\end{center}
\vspace{-6mm}
\end{figure}
In the case of $\Delta < \Delta^{*}$, the K-Y singlet state is the ground state, and two localized moments $\vec{S}_1$ and $\vec{S}_2$ are screened out independently by corresponding conduction electrons, where each Kondo temperature is affected by the interaction between f-electrons.
In this case, the total phase shift of conduction electrons characterizing this fixed point is $\delta=\pi\ (\delta_1=\pi/2, \delta_2 =\pi/2)$, and $T_{\rm F}^{*}$ is given by a value slightly lower than the Kondo temperature $T_{\rm K2}$, if $\Delta$ is much smaller than $\Delta^{*}$.
On the other hand, in the case of $\Delta >\Delta^{*}$, the CEF splitting (antiferromagnetic interaction between f-electrons in the model Hamiltonian, (\ref{2.5a})) is so large compared to the energy gain related to the formation of K-Y singlet states that the CEF singlet becomes the ground state.
In this case, the remaining conduction electrons are not scattered by f-electrons, and as a result the total phase shift is $\delta=0\ (\delta_1=0,\delta_2=0)$.
When $\Delta \gg \Delta^{*}$, $T_{\rm F}^{*}$ becomes close to the excitation energy $K$ between two singlet states.\par
Such an interchange of the ground state can be understood by considering that the increase of $\Delta$ causes the stabilization of the level of the CEF singlet state as shown in Fig.\ref{fig5}.
\begin{figure}[t]
\begin{center}
\includegraphics[width = 0.30\textwidth]{arxivfig5.eps}
\caption{Schematic energy levels of two singlet ground states. The CEF singlet state is stabilized relative to the K-Y singlet state as $\Delta$ increases. }
\label{fig5}
\end{center}
\vspace{-7mm}
\end{figure}
In the case of $\Delta \sim \Delta^{*}$, $T_{\rm F}^{*}$ is determined not by characteristic energies of the K-Y singlet and the CEF singlet states, but by the energy splitting between two singlet states, $\Delta E$: i.e., $T_{\rm F}^{*} \sim \Delta E$.
Particularly, at the critical point, the degeneracy of the K-Y singlet and the CEF singlet states is not lifted even at $T =0$, making $T_{\rm F}^{*}=0$ and $\lim_{T \rightarrow 0}S_{\rm imp} = 0.5 \log\,2$.
In other words, at low enough temperatures, the localized moment $\vec{S}_1$ of orbital 1 has already been screened out by conduction electrons in orbital 1 below $T_{\rm K1}$, while $\vec{S}_2$ of orbital 2 still retains its degree of freedom as a localized moment.
Therefore, the effective Hamiltonian of (\ref{2.5a}) near the fixed point behaves as the two-channel Kondo model (TCKM) \cite{cragg,pang}, because $\vec{S}_2$ interacts with two ``conduction'' electron channels: one is the conduction electrons on orbital 2, and the other is a complex of the conduction electrons on orbital 1 and the screened $\vec{S}_{1}$, as discussed in ref. 10.\par
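This residual entropy is the hallmark of the two-channel Kondo fixed point, whose effective ground-state degeneracy is $g = \sqrt{2}$:
\[
\lim_{T \rightarrow 0} S_{\rm imp} = \log g = \log \sqrt{2} = 0.5\,\log 2,
\]
in agreement with the value quoted above.\par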
Fig.\ref{fig6} shows the $\Delta$ dependence of the Sommerfeld coefficient, $\gamma_{\rm imp}(T) \equiv C_{\rm imp}(T)/T$, due to f-electrons for various temperatures.
\begin{figure}[t]
\begin{center}
\includegraphics[width = 0.4\textwidth]{arxivfig6.eps}
\caption{$\Delta$ dependence of the Sommerfeld coefficient $\gamma_{\rm imp} (T) \equiv C_{\rm imp}(T)/T$ due to the f-electrons for various temperatures. The ground state switches at $\Delta =\Delta^{*} \simeq 0.112$ from the K-Y singlet for $\Delta < \Delta^{*}$ to the CEF singlet for $\Delta > \Delta^{*}$. }
\label{fig6}
\end{center}
\vspace{-7mm}
\end{figure}
For all $\Delta$ shown in Fig.\ref{fig6}, $\gamma_{\rm imp}(T)$ increases monotonically down to $T=7.0\times 10^{-7}$ as $T$ decreases.
At $\Delta = \Delta^{*} \simeq 0.112$, the increase of $\gamma_{\rm imp}(T)$ does not stop, and $\gamma_{\rm imp}(T)$ diverges in the limit $T \rightarrow 0$ because the structure of the fixed point is the same as that of the TCKM, as discussed above.
For $\Delta$ off the critical value $\Delta^{*}$, the increase of $\gamma_{\rm imp}(T)$ stops around the characteristic temperature $T_{\rm F}^{*}$, leading to Fermi liquid behavior at $T < T_{\rm F}^{*}$.
$\gamma_{\rm imp}(T)$ exhibits a dip around $\Delta \sim \Delta^{*}$ in the higher-temperature region.
This is because $S_{\rm imp}(T)$ has only a weak $T$ dependence in a wide temperature range $0 \sim T_{\rm F}^{*} < T < T_{\rm K2}$ for $\Delta$ around $\Delta \simeq \Delta^{*}$, as can be seen in Fig.\ref{fig3}.\par
It is remarked that the enhancement of $\gamma_{\rm imp}(T)$ near $\Delta \sim \Delta^{*}$ in the low-temperature limit, relative to the background at $\vert \Delta - \Delta^{*} \vert \gg \Delta^{*}$, arises from the effect of the competition between the K-Y singlet and the CEF singlet states.
The background is essentially given by the inverse of $T_{\rm K2}$ or $\Delta$, and is overwhelmed by the enhanced part near $\Delta \sim \Delta^{*}$.
Note that the ordinate of Fig.\ref{fig6} is on a logarithmic scale.\par
Although we take $\Delta$ as a control parameter here, we can expect similar behavior of $\gamma_{\rm imp}$ through other parameters, such as the hybridizations $V_1$ and $V_2$, which can also control the competition between the levels of the two singlet states.
It is also remarked that such an anomalous behavior of $\gamma_{\rm imp}$ can be realized in systems with other symmetries: e.g., in UBe$_{13}$ with cubic symmetry\cite{ott,ott2, gegenwart}.
In this material, $\gamma$ shows behavior similar to that in Fig.\ref{fig6} through the change of the lattice constant, $a_{0}$, which is controlled by partially replacing the U atoms with other nonmagnetic elements.
It is remarkable that $\gamma$ takes a maximum value at $a_{0} = a_{0}^{*}$, which is approximately the same as the lattice constant of UBe$_{13}$\cite{kim}.
Experimentally, in a series of materials with $a_{0} < a_{0}^{*}$, a Kondo-like upturn is observed in the resistivity in the low-temperature region, while in those with $a_{0} > a_{0}^{*}$, the temperature dependence of the resistivity can be explained by the effect of the CEF with the singlet ground state.
Then, we expect that UBe$_{13}$ is located near the critical point in this series of materials.
\section{Magnetic Field Dependence of Non-Fermi Liquid Behavior}
In this section, we discuss the magnetic field dependence of the NFL behavior of $\gamma_{\rm imp}(T)$.
The effect of the magnetic field on $f^1$ states is taken into account through the Zeeman term for the total angular momentum, $\mathcal{H}_{\rm Zeeman}(f^1) = - g_j \mu_{\rm B} j_z H_z$, with $j=5/2$ and $g_j=6/7$.
That on $f^2$ states arises from the diagonal (for the $\Gamma_5^{(2)}$ doublet) and the off-diagonal (for the $\Gamma_3$ and $\Gamma_4$ singlets) matrix elements of the two-electron Zeeman term $\mathcal{H}_{\rm Zeeman} (f^2)$; e.g., $\langle \Gamma_{5 \pm }^{(2)} \vert \mathcal{H}_{\rm Zeeman} (f^2) \vert \Gamma_{5 \pm}^{(2)} \rangle=\mp 11 g_j \mu_{\rm B} H_z /7$, and $\langle \Gamma_3 \vert \mathcal{H}_{\rm Zeeman} (f^2) \vert \Gamma_4 \rangle = -2 g_j \mu_{\rm B} H_z$.
Here, $\mathcal{H}_{\rm Zeeman}(f^2)$ is the sum of two single-electron terms of the form $\mathcal{H}_{\rm Zeeman}(f^1)$.\par
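As a check on this coupling, the quoted value $g_j = 6/7$ follows from the Land\'e formula with $l=3$, $s=1/2$, $j=5/2$:
\[
g_j = 1 + \frac{j(j+1) + s(s+1) - l(l+1)}{2j(j+1)} = 1 + \frac{35/4 + 3/4 - 12}{35/2} = 1 - \frac{1}{7} = \frac{6}{7}.
\]
\par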
In Fig.\ref{fig7}, we show the magnetic field dependence of the characteristic temperature $T_{\rm F}^{*}$ near the critical point; i.e., $\Delta=0.106, 0.108, 0.110, 0.112\ (\simeq \Delta^{*}), 0.114, 0.116$, and $0.118$.
\begin{figure}[t]
\begin{center}
\includegraphics[width = 0.45\textwidth]{arxivfig7.eps}
\caption{Magnetic field dependence of $T_{\rm F}^{*}$ near the critical point. The parameters related to f-electrons are the same as in Fig.\ref{fig2}. Circles indicate the characteristic magnetic fields $H_z^{*}$.}
\label{fig7}
\end{center}
\vspace{-7mm}
\end{figure}
It is noted that $T_{\rm F}^{*}(H_z)$ remains constant for $H_z$ less than the characteristic magnetic field $H_z^{*}$, defined approximately as the field above which $T_{\rm F}^{*}( H_z)$ starts to increase with increasing $H_z$ (as shown by circles in Fig.\ref{fig7}).
Explicitly, the characteristic magnetic fields $H_{z}^{*}$ are given as $H_z^{*} \simeq 3 \times 10^{-4}$ for $\Delta = 0.106$ and $0.118$, $H_z^{*} \simeq 2 \times 10^{-4}$ for $\Delta = 0.108$ and $0.116$, $H_{z}^{*} \simeq 3 \times 10^{-5}$ for $\Delta = 0.110$ and $0.114$, and $H_z^{*} \simeq 1 \times 10^{-5}$ for $\Delta=0.112$.
$H_z^{*}$ tends to zero as the critical fixed point is approached, i.e., as $\Delta \rightarrow \Delta^{*}$.
For the values of $\Delta$ shown in Fig.\ref{fig7}, $H_z^{*}$ is much smaller than the lower Kondo temperature $T_{\rm K2} \simeq 6.01 \times 10^{-3}$, so that a magnetic field $H_z < H_{z}^{*}$ has little influence on the K-Y singlet state.
Then, $H_{z}^{*}$ is considered to be determined by a competition between two effects which destroy the TCKM-type NFL fixed point: one is the distance of $\Delta$ from $\Delta^{*}$, and the other is the magnetic field, which breaks the degeneracy corresponding to $S_{\rm imp}(T=0) = 0.5 \log\,2$ due to the two-channel Kondo effect, the origin of the TCKM-type NFL fixed point.
Namely, $H_z^{*}$ is the energy scale characterizing the crossover from the TCKM-type NFL behavior to the polarized Fermi liquid behavior, beyond the effect of the distance of $\Delta$ from the critical value $\Delta^{*}$.
Since $\gamma_{\rm imp}(T)$ exhibits a divergent increase around $\Delta \sim \Delta^{*}$ as $T$ decreases in the temperature region $T > T_{\rm F}^{*}(H_z)$, it exhibits NFL behavior in the same region.
Since $T_{\rm F}^{*}(H_z)$ remains almost unchanged up to $H_z = H_z^{*}$, the NFL behavior is expected to remain robust even under magnetic fields $H_z > T_{\rm F}^{*}(H_z)$, so long as $H_z < T_{\rm K2}$.
This behavior is reproduced by explicit calculations of $\gamma_{\rm imp}(T)$ under various magnetic fields as shown below.\par
In Fig.\ref{fig8}, we show the temperature dependence of $\gamma_{\rm imp}(T)$ for $\Delta= 0.112$ $(\simeq \Delta^{*})$ and $\Delta=0.118$ under various magnetic fields of up to $H_z = 1.2 \times 10^{-3}$.
\begin{figure}[t]
\begin{center}
\includegraphics[width = 0.4\textwidth]{arxivfig8.eps}
\caption{Temperature dependence of $\gamma$ for (a) $\Delta= 0.112$ ($\simeq \Delta^{*}$) and (b) $\Delta=0.118$ under various magnetic fields. In the case of (b), the NFL behavior of $\gamma$ is robust against a magnetic field of up to $H_z = 1.2 \times 10^{-3}$ for $T>3.0 \times 10^{-5}$ in spite of $T_{\rm F}^{*} \simeq 1.69 \times 10^{-5}$. The parameters related to f-electrons are the same as those used in Fig.\ref{fig2}. }
\label{fig8}
\end{center}
\vspace{-7mm}
\end{figure}
Extremely close to criticality, at $\Delta = 0.112 \simeq \Delta^{*}$, $\gamma_{\rm imp}(T)$ is enhanced by the magnetic field, as shown in Fig.\ref{fig8}(a).
This is because $T_{\rm F}^{*}(H_z)$ increases appreciably, from $10^{-7}$ to $10^{-5}$, as the magnetic field $H_z$ increases from $10^{-4}$ to $10^{-3}$, resulting in an increase of $\partial S_{\rm imp}(T)/ \partial (\log\,T) = C_{\rm imp}(T)$, and hence of $\gamma_{\rm imp}(T)$, at $T > 10^{-5}$.
On the other hand, at $\Delta = 0.118$ slightly off the criticality, $\gamma_{\rm imp}(T)$ is robust against the magnetic field up to $H_z =1.2 \times 10^{-3}$ for the temperature region $T>3 \times 10^{-5}$ as shown in Fig.\ref{fig8}(b).
This is because $T_{\rm F}^{*}(H_z)$ remains almost unchanged up to $H_z = H_z^{*} \sim 10^{-3}$ so that $\gamma_{\rm imp}(T)$ remains the same as that at $H_z = 0$ for $T > T_{\rm F}^{*} \simeq 10^{-5}$.\par
These kinds of NFL behaviors arise also in the region of the K-Y singlet state, i.e., $\Delta < \Delta^{*}$, although we do not show the results explicitly.
\section{Kondo-Temperature Dependence of Non-Fermi Liquid Behavior under Magnetic Field}
In this section, we investigate the properties of the NFL behavior of $\gamma_{\rm imp}(T)$ under magnetic fields for systems with other values of $T_{\rm K2}$, obtained by changing the hybridization to $V_2=0.25$ and $0.20$, for various sets of the CEF parameters $(K,\Delta)$.
Other parameters are set to be the same as those in the previous section: i.e., $E_{f1}= E_{f2} =-0.4, U_1=U_2=1.0$, and $V_1=0.45$.
In the case of $K=\Delta=0$, the lower Kondo temperature is again determined by Wilson's definition as $T_{ {\rm K2}}=1.27 \times 10^{-3}$ for $V_2=0.25$ and $T_{\rm K2}=8.92 \times 10^{-5}$ for $V_2=0.20$, respectively.
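As a rough consistency check on these scales (a minimal estimate, assuming the standard exponential form $T_{\rm K} \propto \exp[-1/(2\rho J)]$ with an effective exchange coupling $J \propto V_2^{2}$; the proportionality constants are not determined by our calculation), $\log T_{\rm K2}$ should be approximately linear in $1/V_2^{2}$. Indeed, using also $T_{\rm K2} \simeq 6.01 \times 10^{-3}$ for $V_2=0.30$, the successive slopes are
\[
\frac{\ln(1.27 \times 10^{-3}) - \ln(6.01 \times 10^{-3})}{0.25^{-2} - 0.30^{-2}} \simeq -0.32, \qquad \frac{\ln(8.92 \times 10^{-5}) - \ln(1.27 \times 10^{-3})}{0.20^{-2} - 0.25^{-2}} \simeq -0.30,
\]
which agree at the ten-percent level.\par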
To analyze further, we also fix one of the CEF parameters as $K=0.16$ and calculate $\gamma_{\rm imp}(T)$ for a series of $\Delta$ under various magnetic fields.
It is natural that $\Delta^{*}$ (corresponding to the critical point) becomes small with decreasing $T_{\rm K2}$ because the energy gain due to the formation of the K-Y singlet state decreases with a smaller $V_2$.
The critical value of $\Delta$ is determined as $\Delta^{*} \simeq 0.054$ for $V_2=0.25$ and $\Delta^{*} \simeq 0.024$ for $V_2=0.20$, respectively.
Fig.\ref{fig9} shows the temperature dependence of $\gamma_{\rm imp}(T)$ for systems with the CEF singlet ground state: (a) $\Delta = 0.062 > \Delta^{*} \simeq 0.054$ for $V_2 = 0.25$ and (b) $\Delta = 0.032 > \Delta^{*} \simeq 0.024$ for $V_2=0.20$.
In the former case (a), the NFL behavior remains robust against the magnetic field (up to $H_z \simeq H_z^{*}$) in the temperature region $T> T_{\rm F}^{*}$, while in the latter case (b), the magnetic field has considerable influence on the NFL behavior.\par
For $H_z =0$, $T_{\rm F}^{*}$ is suppressed in the vicinity of the critical point $\Delta \sim \Delta^{*}$, as in the case of $V_2=0.30$.
It is noted that the decrease of $T_{\rm K2}$ and $\Delta$ does not appreciably affect $T_{\rm F}^{*}$, i.e., $T_{\rm F}^{*} \sim 10^{-5}$ for both cases (a) and (b), as determined from calculations corresponding to Fig.\ref{fig3}.
This is because $T_{\rm F}^{*}$ is determined by the energy splitting between the K-Y singlet and the CEF singlet states, and does not depend on the characteristic energy scale of each singlet state.
Under the magnetic field, the effect on the NFL behavior is markedly different in the two cases (a) and (b).
\begin{figure}[t]
\begin{center}
\includegraphics[width = 0.4\textwidth]{arxivfig9.eps}
\caption{Temperature dependence of $\gamma_{\rm imp}(T)$ for a series of magnetic fields in systems with hybridization (a) $V_2=0.25$ and (b) $V_2=0.20$. }
\label{fig9}
\end{center}
\vspace{-6mm}
\end{figure}
In the case of (a) with $V_2 = 0.25$, the NFL behavior of $\gamma_{\rm imp}(T)$ is rather robust against the magnetic field (up to $H_z^{*}$) in a wide temperature range ($T>T_{\rm F}^{*}$) as in the case of $V_2 = 0.30$, while in the case of (b) with $V_2=0.20$, $\gamma_{\rm imp}(T)$ is sensitive to the magnetic field because the characteristic magnetic field $H_z^{*}$ is comparable to the lower Kondo temperature, $T_{\rm K2}$.
Namely, in the case of $V_2=0.20$, the magnetic field $H_z > T_{\rm K2} \simeq 8.92 \times 10^{-5}$ suppresses $\gamma_{\rm imp}(T)$ by breaking the K-Y singlet ground state.
It is noted that in the case $T_{\rm K2} > \Delta$, a suppression of $\gamma_{\rm imp}(T)$ similar to that in Fig.\ref{fig8}(b) is expected for $H_z > \Delta$, through the breaking of the CEF singlet states.
Thus, the magnetic field dependence of the NFL behavior of $\gamma_{\rm imp}(T)$ is determined not by the characteristic temperature $T_{\rm F}^{*}$, but by the characteristic magnetic field $H_z^{*}$ which is determined by the characteristic energy scale of each singlet state, $T_{\rm K2}$ and $\Delta$, or the distance from the critical point.
\section{Conclusion and Discussion}
We have investigated the effect of the magnetic field on the NFL behaviors due to the competition between the K-Y singlet and the CEF singlet states in f$^2$-based heavy fermion systems with tetragonal symmetry.
The effect of the competition suppresses the characteristic temperature $T_{\rm F}^{*}$, corresponding to a peak of the specific heat $C_{\rm imp}(T)$, to a value much smaller than the characteristic energy scale of each singlet state: i.e., $T_{\rm K2}$, the lower Kondo temperature, and $\Delta$, the energy splitting between the CEF singlet ground state and the first excited doublet states.
$T_{\rm F}^{*}$ is determined approximately by $\Delta E$, the energy difference between the two singlet states, and there exist two-channel Kondo model (TCKM) type NFL behaviors at $T_{\rm F}^{*} < T < T_{\rm K2}$.
Namely, near the critical point, $\Delta \sim \Delta^{*}$, the Sommerfeld coefficient $\gamma_{\rm imp}(T)$ exhibits a NFL behavior ($\gamma_{\rm imp}(T) \propto -\log\,T$) at $T > T_{\rm F}^{*}$.\par
In the vicinity of the critical point, $T_{\rm F}^{*}$ was shown not to be affected by the magnetic field up to a certain value $H_z^{*}$, while it increases for $H_z^{*} < H_z < {\rm min}(T_{\rm K2}, \Delta)$.
As a result, the NFL behavior of $\gamma_{\rm imp}$ at $T>T_{\rm F}^{*}$ is robust against magnetic fields $H_z<H_{z}^{*}$.
Then, for reasonable sets of parameters, NFL behaviors robust against magnetic fields up to $H_{z}^{*}$ can occur in an observable temperature range.
Thus, the magnetic field dependence of this NFL is characterized by $H_z^{*}$ which is determined by the characteristic energy scales of two singlet states and the distance from the critical point.\par
In the present paper, we have discussed physical properties in the tetragonal symmetry.
However, also in the case of other crystal symmetries, it is expected that there remains the effect of the competition between the K-Y singlet and the CEF singlet states, leading to the NFL behaviors similar to the present case.
One example would be the case of the cubic system UBe$_{13}$ which seems to be located near the phase boundary between the K-Y singlet and the CEF singlet states, according to a series of experiments of $\lim_{T \sim 0} C(T)/T $ for systems of solid solution, U$_{1-x}$T$_{x}$Be$_{13}$, where the lattice constant $a_0$ is changed in a wide range covering both the K-Y singlet and the CEF singlet ground states\cite{kim}.
Moreover, pure UBe$_{13}$ exhibits the NFL behavior, $C(T)/T \sim -\log\,T$ up to $H_z = 12$ Tesla\cite{gegenwart}.
Of course, precisely speaking, results of the present paper are for the system of $f^2$-impurity so that we should be careful in deriving a solid conclusion.
Indeed, an approach based on the dynamical mean-field concept is indispensable for deriving a solid conclusion for lattice systems, in which the present results would be inherited by the solver of the impurity problem.
Nevertheless, we expect that the effect of the competition plays an important role in allowing UBe$_{13}$ to exhibit such NFL behavior, rather robust against magnetic fields larger than the effective Fermi energy inferred from the value of $\lim_{T \sim 0}C(T)/T$.
Namely, the lower Kondo temperature would be larger than 12 K, given that the NFL behavior $\lim_{T\sim 0}C(T)/T \sim - \log\,T$ in UBe$_{13}$ is robust against magnetic fields up to at least 12 Tesla\cite{gegenwart}.
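(As a minimal order-of-magnitude conversion, using $\mu_{\rm B}/k_{\rm B} \simeq 0.67$ K/T: an $f^1$ Zeeman scale $g_j \mu_{\rm B} j_z H_z/k_{\rm B}$ with $g_j = 6/7$ and $j_z = 5/2$ at $H_z = 12$ T is of order $(6/7)(5/2)(0.67)(12) \simeq 17$ K, so a lower Kondo temperature of order 10 K or more is indeed required for the screening to survive such fields.)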
Were it not for the superconducting state at $T<T_{\rm c} \simeq 1$ K, there would exist a peak in the specific heat near $T = T^{*}_{\rm F}$.
Predictions of the present paper may be checked by experiments in some U-diluted system of UBe$_{13}$ near the phase boundary between the K-Y singlet and the CEF singlet states under pressures and/or magnetic fields.
\section*{Acknowledgements}
We are grateful to K. Hattori and S. Yotsuhashi for stimulating conversations and discussions.
One of the authors (K.M.) is grateful to T. Kasuya for directing his attention to ref.15 on an occasion of the workshop of a Grant-in-Aid for Scientific Research on Priority Areas ``Filled Skutterudites'' held at Tokyo Metropolitan University in November, 2003.
S.N. and H.M. are supported by the Global COE program (G10) from The Japan Society for the Promotion of Science.
This work is supported by a Grant-in-Aid for Scientific Research on Innovative Area ``Heavy Electrons'' (No.20102008) from the Ministry of Education, Culture, Sports, Science and Technology.
\vspace{-4mm}
|
2,869,038,156,906 | arxiv | \section{Introduction}
The goal of this paper is to argue that certain properties of three-dimensional Chern-Simons theory can be understood in a unified way by regarding the theory as an effective description of an $\mathcal{N} = 2$ supersymmetric completion. To an optimist, such a viewpoint might represent a particular instance of a more general program of using supersymmetry to elucidate aspects of quantum field theories without manifest supersymmetry.
The application of supersymmetry to topological field theories is far from new. For instance, both the topological invariance and semiclassical exactness of observables in Witten-type (cohomological) TQFTs have long been recognized as consequences of a fermionic BRST symmetry \cite{Birmingham:1991ty}. After a suitable topological twist, gauge-fixed Chern-Simons theory itself furnishes an example of a Witten-type TQFT \cite{Kallen:2011ny}. The BRST supersymmetry is a restatement of the underlying general covariance of the theory: the subtraction of ghost degrees of freedom guarantees the absence of excited states. By contrast, our approach relies on a further auxiliary supersymmetry. The relevant fermions obey the spin-statistics theorem. At finite Yang-Mills coupling, they result in an infinite tower of states with equal numbers of bosonic and fermionic degrees of freedom, which make no net contribution to supersymmetric observables. However, they have the additional effect of shifting the number of vacuum states. We will argue that this shift, combined with the localization principle afforded by the auxiliary fermionic symmetry, provides a natural framework in which to understand some features of correlation functions in bosonic Chern-Simons theory that are obscure from the point of view of perturbation theory.
It has long been understood that induced Chern-Simons terms are one-loop exact because higher-order corrections (via an expansion in $\hbar\sim 1/k$) cannot, in general, respect the quantization condition on the level \cite{Chen:1992ee, Witten:1999ds} (see \cite{Coleman:1985zi} for a diagrammatic proof in the abelian case, and \cite{Closset:2012vg, Closset:2012vp} for a modern perspective). One manifestation of this fact is that quantum observables in pure Chern-Simons theory with simple gauge group $G$ and level $k > 0$, possibly involving Wilson loops in irreducible representations of $G$ labeled by highest weights $\lambda$, are naturally viewed as functions not of the ``bare'' parameters (suitably defined), but of
\begin{equation}
k\to k + h, \quad \lambda\to \lambda + \rho
\label{shifts}
\end{equation}
where $h$ is the dual Coxeter number and $\rho$ is the Weyl vector of $G$. For example, when $G = SU(2)$, the shifts read $k\to k + 2$ and $j\to j + 1/2$, and the latter appears at the level of representation theory in the $SU(2)$ Weyl character
\begin{equation}
\chi_j(\theta) = \sum_{m=-j}^j e^{im\theta} = \frac{\sin[(j + 1/2)\theta]}{\sin(\theta/2)},
\label{su2Weylcharacter}
\end{equation}
which (up to a $j$-independent prefactor) takes the form of a sum over $m = \pm(j + 1/2)$, as familiar from equivariant localization formulas. These shifts can be thought of as quantum corrections. While $\lambda$, unlike $k$, does not appear in the bulk Lagrangian, the associated shift similarly lends itself to a Lagrangian point of view via an auxiliary system attached to the Wilson line, obtained by quantizing the coadjoint orbit of $\lambda$.
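For completeness, \eqref{su2Weylcharacter} is just a finite geometric series:
\begin{equation*}
\sum_{m=-j}^{j} e^{im\theta} = e^{-ij\theta}\,\frac{e^{i(2j+1)\theta} - 1}{e^{i\theta} - 1} = \frac{e^{i(j+1/2)\theta} - e^{-i(j+1/2)\theta}}{e^{i\theta/2} - e^{-i\theta/2}} = \frac{\sin[(j + 1/2)\theta]}{\sin(\theta/2)},
\end{equation*}
making explicit how the $2j + 1$ weights collapse to the two boundary terms $m = \pm(j + 1/2)$.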
It is likewise well-known that correlation functions in pure $\mathcal{N} = 2$ and $\mathcal{N} = 0$ Chern-Simons coincide up to a shift of the above form:\footnote{The 3D Lorentzian spin group $SL(2, \mathbb{R})$ has Majorana representations, while the Euclidean version $SU(2)$ does not; hence the minimal amount of SUSY in three Euclidean dimensions is that associated with a single two-component complex spinor, and Euclidean supersymmetric partition functions can only be calculated for $\mathcal{N}\geq 2$. 3D $\mathcal{N}\geq 2$ theories are precisely those whose holomorphy properties allow them to be constrained by non-renormalization theorems \cite{Aharony:1997bx}.} in the Chern-Simons action for an $\mathcal{N} = 2$ vector multiplet at level $k + h$, all superpartners of the gauge field (real scalars $\sigma, D$ and a gaugino $\lambda$ -- not to be confused with the weight $\lambda$ of the previous paragraph) are auxiliary, and performing the Gaussian path integral over these fields leads to an effective $\mathcal{N} = 0$ Chern-Simons action at level $k$. In practice, this can be understood for sufficiently large $k$ by regulating the $\mathcal{N} = 2$ theory with an irrelevant Yang-Mills term (so that the path integral converges absolutely), which introduces a scale that masses up all fields, and then integrating out the gaugino in the Wilsonian sense.
However, to make a merely perturbative analogy between $\mathcal{N} = 0$ and $\mathcal{N} = 2$ Chern-Simons theory is slightly misleading. While the renormalized parameters in \eqref{shifts} are one-loop exact, general observables in the $\mathcal{N} = 0$ theory are not, reflecting the fact that Chern-Simons theory is conventionally formulated as a Schwarz-type rather than a Witten-type TQFT. The real power of supersymmetry lies in its ability to explain how the shifts \eqref{shifts} persist nonperturbatively in a wide class of observables. Enhancing both the 3D Chern-Simons action and the 1D coadjoint orbit action for Wilson loops with $\mathcal{N} = 2$ supersymmetry gives one access to a localization argument that ensures that correlation functions depend only on the bare couplings appearing in the respective actions. This is a sort of non-renormalization principle. These two supersymmetrizations are not independent, as there exists a precise map between fields in the bulk and fields on the line. The supersymmetric, coupled 3D-1D path integral can be evaluated exactly, and after adjusting for parity anomalies\footnote{We are abusing terminology here: by this, we simply mean the trading of a parity-violating fermion mass for a parity-violating Chern-Simons term. The induced Chern-Simons terms that we obtain from integrating out massive fermions will always be properly quantized, so we will not encounter any actual parity anomalies (the situation is different when $\mathcal{N} = 1$ \cite{Witten:1999ds}).} from integrating out the auxiliary fermions (in 3D and in 1D), we immediately deduce the exact result in the corresponding bosonic theory, including the famous shifts. In this way, a one-loop supersymmetric localization computation reproduces an all-loop result in the bosonic theory. This line of reasoning leads to a conceptually simpler explanation for \eqref{shifts} than that originally obtained from anomalies in the coherent state functional integral \cite{Elitzur:1989nr}.
The non-renormalization of the level in $\mathcal{N}\geq 2$ Chern-Simons theory is often acknowledged in the localization literature (such as when performing supersymmetric tests of non-supersymmetric dualities \cite{Kapustin:2010mh, Aharony:2015mjs}), but the non-renormalization of the weight(s) is seldom mentioned. This omission may make the latter point seem pedantic, but it is in fact essential for a consistent mapping of line operators between the bosonic theory and its $\mathcal{N} = 2$ cousin.
Making the above statements precise requires fixing unambiguous physical definitions of the ``bare'' parameters $k$ and $\lambda$ -- for example, via the coefficient of the two-point function in the associated 2D current algebra and canonical quantization of the coadjoint orbit theory, respectively.\footnote{An intrinsically bulk definition of $k$ is as follows. For positive integer $k$, the Hilbert space of Chern-Simons theory with simply connected $G$ on a Riemann surface $\Sigma$ is isomorphic to $H^0(\mathcal{M}, \mathcal{L}^k)$ where $\mathcal{M}$ is the moduli space of flat $G$-connections on $\Sigma$ and $\mathcal{L}$ is the basic line bundle over $\mathcal{M}$ in the sense of having positive curvature and that all other line bundles over $\mathcal{M}$ take the form $\mathcal{L}^n$ for some integer $n$ \cite{Witten:1999ds}. For example, for simple, connected, simply connected $G$ and $\Sigma = T^2$, $\mathcal{M}$ is a weighted projective space of complex dimension $\operatorname{rank} G$ and $\mathcal{L} = \mathcal{O}(1)$ (whose sections are functions of degree one in homogeneous coordinates on $\mathcal{M}$). In the $\mathcal{N} = 1$ and $\mathcal{N} = 2$ settings, fermions have the effect of tensoring $\mathcal{L}^k$ with $K^{1/2}$ or $K$ to give $\mathcal{L}^{k - h/2}$ or $\mathcal{L}^{k - h}$, respectively, where $K = \mathcal{L}^{-h}$ is the canonical bundle of $\mathcal{M}$. Note that these fermions effectively implement the metaplectic correction in geometric quantization \cite{Mikhaylov:2014aoa, Schottenloher:2008zz}.} Having done so, the shifts in $k$ and $\lambda$ arise in a unified fashion from jointly supersymmetrizing the 3D bulk theory and the 1D coadjoint orbit theory, giving rise to three equivalent descriptions of the same theory:
\begin{enumerate}
\item The bosonic Chern-Simons theory has level $k$ and Wilson loops
\begin{equation}
\operatorname{Tr}_\lambda P\exp\left(i\oint A_\mu dx^\mu\right).
\end{equation}
\item The supersymmetric Chern-Simons theory has level $k + h$ and Wilson loops
\begin{equation}
\operatorname{Tr}_\lambda P\exp\left[i\oint (A_\mu dx^\mu - i\sigma ds)\right].
\label{superloop}
\end{equation}
\item The coadjoint orbit description of half-BPS Wilson loops coupled to the bulk supersymmetric theory has level $k + h$ and weight $\lambda + \rho$ from the start; these parameters are not renormalized. The trace in \eqref{superloop} is replaced by an appropriate supertrace in a 1D theory containing an auxiliary complex fermion $\psi$. In the standard presentation of a supersymmetric Wilson loop, the fermion $\psi$ has already been integrated out.
\end{enumerate}
One would in principle expect to be able to match \emph{all} observables between these descriptions, not only those that are protected (BPS) and hence calculable using supersymmetric localization, because the path integral over the auxiliary fermions and the scalar $D$ can be performed exactly (shifting $(k + h, \lambda + \rho)\to (k, \lambda)$ and setting $\sigma = 0$, respectively). The main limitation of our analysis is that we are able to demonstrate this equivalence only for correlation functions of Wilson loops that are BPS with respect to the bulk supersymmetry (for which the integration contour implicit in \eqref{superloop} is subject to certain constraints).
The shifts \eqref{shifts} can be thought of as fundamentally representation-theoretic in nature, with the correspondence between 3D Chern-Simons theory with compact $G$ and 2D RCFT placing them in a physical setting: Wilson loops encode Weyl characters of $G$, and quantizing the theory on various manifolds makes contact with the representation theory of the corresponding affine Kac-Moody algebra. Many of the relevant statements regarding character formulas and their associated Weyl shifts have been known since the early days of equivariant localization and index theorems, with the notion of hidden supersymmetry being a common thread. For a sample of the relevant literature, see the reviews \cite{Szabo:1996md, Pestun:2016zxk} and references therein. Part of our aim is to review some of these old localization results in light of new ones, while emphasizing that in the supersymmetric context, the essential mechanism for the shifts is identical in 3D and in 1D.
The essence of the 1D localization argument can be seen in the prototypical system of a massless charged particle on $S^2$ in the field of a magnetic monopole, which we refer to as the ``monopole problem.'' Indeed, part of our discussion involves giving a slightly more modern formulation of the treatment of the monopole problem in \cite{Stone:1988fu}, while embedding it into Chern-Simons theory. In \cite{Stone:1988fu}, it is shown using a hidden supersymmetry that the semiclassical approximation to the path integral for the monopole problem is exact: rather than taking the zero-mass limit, one can introduce a fermionic superpartner so that the contributions of all excited states to the partition function cancel regardless of the mass. The upshot is a derivation of the Weyl character formula for $SU(2)$ from supersymmetric quantum mechanics, which provides a physical interpretation of the Duistermaat-Heckman formula. The same strategy of localizing an apparently purely bosonic theory has many modern incarnations: see, for example, \cite{Stanford:2017thb}.
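To make the Duistermaat-Heckman statement concrete in this example (a minimal computation, in conventions where the orbit is a sphere of radius $s = j + 1/2$ with Liouville form $\omega = s\sin\vartheta\, d\vartheta\wedge d\varphi$ and Hamiltonian $H = s\cos\vartheta$):
\begin{equation*}
\int_{S^2} e^{i\theta H}\, \frac{\omega}{2\pi} = s\int_0^\pi e^{i\theta s\cos\vartheta}\sin\vartheta\, d\vartheta = \frac{e^{is\theta} - e^{-is\theta}}{i\theta} = \frac{\sin(s\theta)}{\theta/2},
\end{equation*}
which is precisely the sum over the two fixed points $\vartheta = 0, \pi$ of the rotation generated by $H$. Multiplying by the one-loop factor $(\theta/2)/\sin(\theta/2)$ recovers the Weyl character \eqref{su2Weylcharacter}, in accord with the Kirillov character formula.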
Passing to 3D, exact results for Chern-Simons theory have been obtained by a variety of methods: aside from surgery and 2D CFT, these include abelianization \cite{Blau:1993tv, Blau:2006gh}, nonabelian localization \cite{Beasley:2005vf, Beasley:2009mb}, and supersymmetric localization \cite{Kallen:2011ny}.\footnote{Abelianization is a common theme in Chern-Simons theory: it reduces to an abelian theory in the semi\-classical limit \cite{Witten:1988hf}, to an abelian effective quantum mechanics problem in canonical quantization \cite{Elitzur:1989nr}, and to an abelian matrix model in localization.} Our goal is to explain why the supersymmetric localization approach provides a structural understanding of these exact results.
In \cite{Blau:1993tv}, Chern-Simons theory on $\Sigma\times S^1$ was reduced to an abelian $BF$-type theory on $\Sigma$ where the role of $B$ is played by a compact scalar (in this way, $k$ cannot be scaled away, and one obtains a sum over integrable representations at level $k$). In \cite{Blau:2006gh}, the technique of abelianization was extended to Chern-Simons theory on nontrivial circle bundles. The final abelianized expression for the partition function, obtained by integrating over all connections in the 2D $BF$ theory on the base, takes the form of an integral over the Cartan of $G$ and incorporates the shift in $k$. If one had first integrated over $B$, one would have recovered the result of \cite{Beasley:2005vf} obtained by nonabelian localization, which involves not only an integral over the Cartan, but also over the moduli space of flat connections on $M^3$ to which the Chern-Simons path integral localizes. In \cite{Beasley:2009mb}, the techniques of \cite{Beasley:2005vf} were extended to compute the expectation values of Wilson loops along the $U(1)$ fibers.
Our approach involves introducing an auxiliary fermionic symmetry with the aid of generalized Killing spinors, allowing the localization procedure to be carried out on arbitrary Seifert manifolds. The underlying geometric structure that makes this possible is a transversely holomorphic foliation, or THF \cite{Closset:2012ru, Closset:2013vra}. It is worth contrasting this approach with that of \cite{Kallen:2011ny}, which avoids assuming the existence of Killing spinors by using a contact structure to define the requisite fermionic symmetry. A contact structure exists on any compact, orientable three-manifold. It is, locally, a one-form $\kappa$ for which $\kappa\wedge d\kappa\neq 0$; a metric can always be chosen for which $\kappa\wedge d\kappa$ is the corresponding volume form, i.e., such that $\ast 1 = \kappa\wedge d\kappa$ and $\ast\kappa = d\kappa$. The dual vector field $v$ such that $\iota_v\kappa = 1$ and $\iota_v d\kappa = 0$ is known as the Reeb vector field. It was found in \cite{Kallen:2011ny} that to carry out the localization, the corresponding Reeb vector field must be a Killing vector field, which restricts this approach to Seifert manifolds (as in \cite{Beasley:2009mb}); this approach was generalized in \cite{Ohta:2012ev} to Chern-Simons theories with matter. Therefore, while the geometric basis for our approach differs from that for the cohomological localization of \cite{Kallen:2011ny, Ohta:2012ev}, the domain of applicability is the same. Our focus, however, is different: the compensating level shift from auxiliary fermions was ignored in \cite{Kallen:2011ny}, noted in \cite{Ohta:2012ev}, and essential in neither.
We begin by reviewing some background material and setting our conventions in Sections \ref{N0CS} and \ref{wilsonandcoadjoint}. We then carry out the analysis for Wilson lines very explicitly for $G = SU(2)$ in Section \ref{N2CS} (we comment briefly on the generalization to arbitrary $G$ at the end of the paper). Using the description of these lines as 1D $\mathcal{N} = 2$ sigma models, we compute the effective action for fermions at both zero and finite temperature, canonically quantize the system, and present the localization argument in 1D.
In Section \ref{couplingtobulk}, we show how to embed this story in bulk 3D $\mathcal{N} = 2$ Chern-Simons theory. Crucially, while we expect $\mathcal{N} = 0$ and $\mathcal{N} = 2$ Chern-Simons to be equivalent by integrating out the extra fields in the vector multiplet, the equivalence only holds if we take into account \emph{both} the shift of the level and the weight (as discussed further in Section \ref{matching}).
In Section \ref{3Dloc}, we describe how to generalize the aforementioned analysis of a Wilson line in flat space, either straight or wrapping a compact direction, to various classes of compact three-manifolds. We also give some examples of the observables that we can compute. Both the $\mathcal{N} = 0$ and $\mathcal{N} = 2$ theories are topological, so their observables are metric-independent. In the $\mathcal{N} = 0$ case, the introduction of a metric is usually regarded as a ``necessary evil'' for the purposes of gauge-fixing and regularization. In the $\mathcal{N} = 2$ case, the metric plays a more essential role in computing observables because it determines which observables are compatible with supersymmetry and therefore accessible to localization techniques. Seifert loops (i.e., Wilson loops along the Seifert fiber direction) can give different knots depending on the choice of Seifert fibration. For instance, depending on the choice of Seifert fibration on $S^3$, the half-BPS sector can contain Wilson loop configurations with the topology of Hopf links or torus links \cite{Beasley:2009mb}.
We review in Appendix \ref{quantization} the necessary elements of the quantization of Chern-Simons theory to which we refer throughout the paper. In Appendix \ref{surgvsloc}, we comment on SUSY as an alternative to surgery computations in some situations.
\section{\texorpdfstring{$\mathcal{N} = 0$}{N = 0} Chern-Simons Theory} \label{N0CS}
Let $M^3$ be a compact, oriented three-manifold and let $G$ be a simple, compact, connected, simply connected Lie group. The latter two assumptions on $G$ ensure that any principal $G$-bundle $P$ over $M^3$ is trivial, so that the Chern-Simons gauge field $A$ is a connection on all of $P$. It then suffices to define the Lorentzian $\mathcal{N} = 0$ $G_{k > 0}$ Chern-Simons action by
\begin{equation}
S_\text{CS} = \frac{k}{4\pi}\int_{M^3} \operatorname{Tr}\left(AdA - \frac{2i}{3}A^3\right).
\label{N0SCS}
\end{equation}
We normalize the trace such that the norm squared of the longest root is two (for example, when $G = SU(N)$, the trace is taken in the fundamental representation and $k$ is integrally quantized). In more general settings (e.g., $G$ non-simply connected), the quantization of $k$ would depend on additional data, such as whether we choose a spin structure on $M^3$ \cite{Dijkgraaf:1989pz}.
In flat space, we work in Lorentzian signature, except when computing the supersymmetric index in Section \ref{indexshift}. In curved space (Section \ref{3Dloc}), we work in Euclidean signature. In flat Minkowski space, we have the $\mathcal{N} = 2$ Lagrangians
\begin{align}
\mathcal{L}_\text{CS}|_{\mathbb{R}^{1, 2}} &= \frac{k}{4\pi}\operatorname{Tr}\left[\epsilon^{\mu\nu\rho}\left(A_\mu\partial_\nu A_\rho - \frac{2i}{3}A_\mu A_\nu A_\rho\right) - 2i\lambda\bar{\lambda} - 2D\sigma\right], \label{LCS} \\
\mathcal{L}_\text{YM}|_{\mathbb{R}^{1, 2}} &= \frac{1}{g^2}\operatorname{Tr}\left(-\frac{1}{4}F_{\mu\nu}F^{\mu\nu} - \frac{1}{2}D_\mu\sigma D^\mu\sigma + \frac{1}{2}D^2 + i\bar{\lambda}\gamma^\mu D_\mu\lambda - i\bar{\lambda}[\sigma, \lambda]\right). \label{LYM}
\end{align}
These are written in the convention where the generators $T^a$ are Hermitian, which we use throughout this paper.\footnote{Writing $\lambda = (\eta + i\tilde{\eta})/\sqrt{2}$ with $\eta, \tilde{\eta}$ real adjoint Majorana fermions reproduces the $\mathcal{N} = 2$ expressions of \cite{Kao:1995gf}. WLOG, we may take $k > 0$ because time reversal (equivalently, spacetime orientation reversal in Euclidean signature) flips the overall sign of \eqref{LCS}, i.e., the sign of the bosonic Chern-Simons term, the sign of the gaugino mass term, \emph{and} the sign of the pseudoscalar $\sigma$.}
\subsection{Perturbation Theory}
The level of the pure $\mathcal{N} = 2$ CS theory whose correlation functions reproduce those of the corresponding $\mathcal{N} = 0$ theory is $k_{\mathcal{N} = 2} = k_{\mathcal{N} = 0} + h$ ($k_{\mathcal{N} = 0} > 0$ by assumption). This we refer to as the ``fermionic shift.'' A quick way to justify this shift in flat space is as follows. Consider, in generality, some fermions in a representation $R$ of $G$, minimally coupled to the gauge field and with a negative real mass term:
\begin{equation}
\frac{i}{g^2}\operatorname{Tr}(\bar{\lambda}\gamma^\mu D_\mu\lambda - m\lambda\bar{\lambda}) = \frac{i}{g^2}(\bar{\lambda}^i)^\alpha(\delta^{ij}(\gamma^\mu)_\alpha{}^\beta\partial_\mu - i(\gamma^\mu)_\alpha{}^\beta A_\mu^a(T_R^a)^{ij} + m\delta^{ij}\delta_\alpha{}^\beta)\lambda_\beta^j,
\end{equation}
where $i, j = 1, \ldots, \dim R$ and $D_\mu = \partial_\mu - iA_\mu^a T_R^a$. These $\dim R$ complex fermions can be thought of as $2\dim R$ Majorana fermions for $R$ real. Ignoring the vacuum energy, the only terms in the one-loop effective action $iS_\text{eff}[A, m]$ that survive the IR limit (in which $m\to\infty$ and the external momenta $p\to 0$) are those quadratic and cubic in $A$. In this limit, the parity-odd parts of these terms are
\begin{equation}
-\frac{i}{8\pi}\frac{m}{|m|}\operatorname{Tr}(T_R^a T_R^b)\int d^3 x\, \epsilon^{\mu\nu\rho}\left(A_\mu^a\partial_\nu A_\rho^b + \frac{1}{3}f^{acd}A_\mu^b A_\nu^c A_\rho^d\right).
\end{equation}
Keeping the parity-even parts leads to a linearly divergent mass term for $A_\mu$, which can be regularized by subtracting the parity-even effective action at $m = 0$ \cite{Redlich:1983kn, Redlich:1983dv}. Recall that $\operatorname{Tr}(T_R^a T_R^b)$ is scaled up relative to $\operatorname{Tr}(T_\text{fund}^a T_\text{fund}^b)$ by $\smash{\frac{C(R)}{C(\text{fund})}}$ where $C(R)$ is the Dynkin index of $R$. In our conventions, $C(\text{fund}) = 1/2$ and $C(\text{adj}) = h$, where the latter follows from our normalization of long roots. Hence $S_\text{eff}[A, m]$ for two Majorana fermions in $R = \text{adj}$ is $S_\text{CS}$ at level $-h\operatorname{sign}(m)$.\footnote{In the case of an \emph{odd} number of Majorana fermions, this is the basic mechanism of the parity anomaly, wherein gauge invariance requires that the UV Lagrangian contain parity-violating local counterterms to compensate for the gauge non-invariance of the fermion determinant. This requirement holds regardless of whether the fermions themselves have bare masses in the UV.}
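As a quick specialization: for $G = SU(2)$ one has $h = 2$, so $\operatorname{Tr}_\text{adj}(T^a T^b) = 2\delta^{ab}$ in our normalization, and the gaugino loop induces a Chern-Simons term at level $-2\operatorname{sign}(m)$; this is consistent with the net shift of $-h$ from the gauginos discussed below, i.e., $k + 2\to k$ for $SU(2)$, as quoted in the introduction.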
Now consider the sum of the Lagrangians \eqref{LCS} and \eqref{LYM}. The resulting theory has a mass gap of $m = kg^2/2\pi$. At large $k$ ($m\gg g^2$), we may integrate out all massive superpartners of the gauge field. Assuming unbroken supersymmetry, the result is the low-energy effective theory of zero-energy supersymmetric ground states. Of course, the fact that integrating out $\lambda$ induces $\mathcal{L}_\text{CS}^{\mathcal{N} = 0}$ at level $-h$ (among other interactions), along with the assumption that $\mathcal{N} = 2$ SUSY is preserved quantum-mechanically, is only a heuristic justification for the renormalization of the coefficient of $\mathcal{L}_\text{CS}^{\mathcal{N} = 2}$ to $k - h$. This expectation is borne out by computing the one-loop perturbative renormalization of couplings \cite{Kao:1995gf}.
The fermionic shift discussed above is entirely separate from any ``bosonic shift'' that might arise from gauge dynamics (as found in, e.g., \cite{Pisarski:1985yj}, which effectively integrates out the topologically massive $W$-boson). Such a shift does not affect the number of vacuum states. Indeed, it is an artifact of regularization scheme: in the YM-CS regularization (which preserves supersymmetry, and which we use throughout this paper), the IR level is shifted by $+h$ relative to the bare level, while dimensional regularization yields no such shift \cite{Chen:1992ee}. It is, nonetheless, a convenient conceptual slogan that $k$ is renormalized to $k + h$ at one loop in $\mathcal{N} = 0$ YM-CS, so that $k$ is not renormalized in $\mathcal{N}\geq 2$ YM-CS. The important point is that for $\mathcal{N}\geq 2$ supersymmetry, integrating out the gauginos in the 3D YM-CS Lagrangian yields a shift of $-h$, which is twice the shift of $-h/2$ in the $\mathcal{N} = 1$ case \cite{Kao:1995gf}.
Given a precise physical definition of the level $k$, such as those presented in the introduction, a more substantive ``bosonic'' shift of the form mentioned above is that exhibited by correlation functions of $\mathcal{N} = 0$ Chern-Simons theory as functions of $k$. This can already be seen in the semiclassical limit \cite{Witten:1988hf}. At large $k$, we may expand \eqref{N0SCS} to quadratic order around a flat connection $A_0$. The semiclassical path integral evaluates to its classical value weighted by the one-loop contribution $e^{i\pi\eta(A_0)/2}T(A_0)$ where $T(A_0)$ is the Ray-Singer torsion of $A_0$ (a topological invariant). The APS index theorem implies that the relative $\eta$-invariant
\begin{equation}
\frac{1}{2}(\eta(A_0) - \eta(0)) = \frac{h}{\pi}I(A_0),
\end{equation}
where $I(A_0)\equiv \frac{1}{k}S_\text{CS}(A_0)$, is a topological invariant. The large-$k$ partition function is then
\begin{equation}
Z = e^{i\pi\eta(0)/2}\sum_\alpha e^{i(k + h)I(A_0^{(\alpha)})}T(A_0^{(\alpha)}),
\end{equation}
where the sum (assumed finite) runs over gauge equivalence classes of flat connections. This is how the shift $k\to k + h$, which persists in the full quantum answer, appears perturbatively. The phase $\eta(0)$ depends on the choice of metric. However, given a trivialization of the tangent bundle of $M^3$, the gravitational Chern-Simons action $I_\text{grav}(g)$ has an unambiguous definition, and upon adding a counterterm $\frac{\dim G}{24}I_\text{grav}(g)$ to the action, the resulting large-$k$ partition function is a topological invariant of the framed, oriented three-manifold $M^3$ \cite{Witten:1988hf}.
Thus a framing of $M^3$ fixes the phase of $Z$. Aside from the framing anomaly of $M^3$ itself, there exists a framing ambiguity of links within it. This framing ambiguity appears in the computation of Wilson loop expectation values because the conventional regularization of overlapping integrals of fields along the loop involves a choice of self-linking number, which is not a topologically invariant notion. This point will be important in our application: the supersymmetric framing of a BPS Wilson loop differs from the canonical framing, when it exists, because the point splitting must be performed with respect to another BPS loop \cite{Kapustin:2009kz}.
To make concrete the utility of supersymmetry in light of these perturbative considerations, take as an example $\mathcal{N} = 0$ $SU(2)_k$ on $S^3$. A typical observable in this theory receives contributions from all loops. For example, the full nonperturbative result for the partition function is
\begin{equation}
Z(S^3) = \sqrt{\frac{2}{k + 2}}\sin\left(\frac{\pi}{k + 2}\right).
\label{ZN0}
\end{equation}
Suppose we were to compute the logarithm of this quantity (the free energy on $S^3$) in perturbation theory as the sum of connected vacuum bubbles, without recourse to 2D conformal field theory. Expanding around the trivial flat connection, the one-loop factor is simply the large-$k$ limit of the exact result:
\begin{equation}
Z_\text{1-loop} = \exp(\bigcirc) = \frac{\sqrt{2}\pi}{(k + 2)^{3/2}}.
\end{equation}
The reconstruction of the exact result from summing trivalent graphs is far from obvious, regardless of whether the expansion parameter is $k^{-1}$ or $(k + 2)^{-1}$ (the necessity of doing perturbation theory in the renormalized level has historically been a point of contention in the literature; for a review of early references on large-$k$ asymptotics of Chern-Simons invariants, see \cite{Marino:2002fk}). On the other hand, a one-loop supersymmetric localization computation in $\mathcal{N} = 2$ $SU(2)_{k+2}$ on $S^3$ (with the level adjusted to account for the fermionic shift, suitably generalized to curved space) handily yields the all-loop non-supersymmetric result \eqref{ZN0}, up to a framing phase given in Appendix \ref{surgvsloc}. The bulk of our discussion will focus on more complicated observables that include Wilson loops.
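Two quick checks of \eqref{ZN0}: at $k = 1$,
\begin{equation*}
Z(S^3) = \sqrt{\frac{2}{3}}\sin\left(\frac{\pi}{3}\right) = \sqrt{\frac{2}{3}}\cdot\frac{\sqrt{3}}{2} = \frac{1}{\sqrt{2}},
\end{equation*}
while at large $k$, $Z(S^3)\approx \sqrt{2}\pi/(k + 2)^{3/2}\sim k^{-3/2} = k^{-\dim G/2}$ for $\dim SU(2) = 3$, consistent with the general scaling noted below.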
\subsection{Beyond Perturbation Theory}
As known since \cite{Witten:1988hf}, there exist completely general nonperturbative techniques for computing observables in the $\mathcal{N} = 0$ theory, and thus checks of any results obtained via supersymmetry. These techniques rely on essentially two ingredients. The first is the fact that $Z(\Sigma\times_K S^1) = \operatorname{Tr}_{\mathcal{H}_\Sigma}(K)$, where the mapping torus $\Sigma\times_K S^1$ is obtained by identifying the ends of the cylinder $\Sigma\times [0, 1]$ by a diffeomorphism $K$ of $\Sigma$. The second is the fundamental surgery formula
\begin{equation}
Z(\tilde{M}; R_i) = \sum_j K_i{}^j Z(M; R_j),
\end{equation}
where $M$ contains an arbitrary Wilson loop in the representation $R_i$ (possibly trivial) and $\tilde{M}$ is the result of gluing a tubular neighborhood of this loop back into $M$ with a diffeomorphism $K$ on its boundary. Topologically equivalent surgeries on three-manifolds may have different effects on framing.
To give a few examples of nonperturbative results computed by these means (stated in the canonical framing), consider $G_{k > 0}$ on $S^3$. Let $S_{ij}$ be the representation of the modular transformation $S$ on $T^2$ in the Verlinde basis for $\mathcal{H}_{T^2}$. Then
\begin{equation}
Z(S^3) = S_{00} = \frac{1}{(k + h)^{\operatorname{rank} G/2}}\left(\frac{\operatorname{vol} \Lambda_W}{\operatorname{vol} \Lambda_R}\right)^{1/2}\prod_{\alpha > 0} 2\sin\left(\frac{\pi\alpha(\rho)}{k + h}\right),
\label{ZS3}
\end{equation}
while for an unknotted Wilson loop in an irreducible representation $R_i$,
\begin{equation}
\langle W\rangle = \frac{Z(S^3; R_i)}{Z(S^3)} = \frac{S_{0i}}{S_{00}} = \prod_{\alpha > 0} \frac{\sin(\pi\alpha(\lambda + \rho)/(k + h))}{\sin(\pi\alpha(\rho)/(k + h))}.
\label{WS3}
\end{equation}
Here, $\alpha$ runs over positive roots and $\lambda$ is the highest weight of $R_i$. As $k\to\infty$, $Z(S^3)\sim k^{-\dim G/2}$ and $\langle W\rangle\to \dim R_i$, the latter of which justifies the nomenclature ``quantum dimension.'' The expressions in terms of $S$-matrix elements were deduced in \cite{Witten:1988hf}, while the explicit formulas in \eqref{ZS3} and \eqref{WS3} are consequences of the Weyl denominator and character formulas \cite{Marino:2005sj}.\footnote{The result for $Z(S^3)$ follows from consistency between two different ways of gluing together two copies of a solid torus $D^2\times S^1$: one trivially to get $S^2\times S^1$, and another with an $S$ transformation on the boundary to get $S^3$. More generally, by inserting Wilson lines in these solid tori, one obtains the expectation value of the Hopf link as a normalized $S$-matrix element.
For any $G$, the modular transformation $T$ is represented in the Verlinde basis by a diagonal matrix with $T_i{}^i = e^{2\pi i(h_i - c/24)}$ where $h_i$ is the conformal weight of the primary field in the representation $R_i$ and $c$ is the central charge of $\smash{\widehat{G}_k}$.} In particular, for $SU(2)_k$,
\begin{equation}
S_{ij} = \sqrt{\frac{2}{k + 2}}\sin\left[\frac{(2i + 1)(2j + 1)\pi}{k + 2}\right]
\label{Smatrix}
\end{equation}
where $i, j$ label the spins of the corresponding representations (thus giving \eqref{ZN0}), and for an unknotted Wilson loop in the spin-$j$ representation,
\begin{equation}
\langle W\rangle = \frac{S_{0j}}{S_{00}} = \frac{q^{j + 1/2} - q^{-(j + 1/2)}}{q^{1/2} - q^{-1/2}} = \frac{\sin((2j + 1)\pi/(k + 2))}{\sin(\pi/(k + 2))}
\end{equation}
where $q = e^{2\pi i/(k + 2)}$.
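For example, at $k = 2$ and $j = 1/2$ (so $q = e^{i\pi/2}$),
\begin{equation*}
\langle W\rangle = \frac{\sin(\pi/2)}{\sin(\pi/4)} = \sqrt{2},
\end{equation*}
the quantum dimension of the spin-$1/2$ anyon of $SU(2)_2$, which approaches $\dim R_i = 2$ only as $k\to\infty$.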
In some observables, highest weights of integrable representations of the $G_k$ theory appear not due to explicit Wilson loop insertions, but rather because they are summed over. Indeed, the shift in $\lambda$ already appears in the partition function on $\Sigma\times S^1$, which computes the dimension of the Hilbert space of the Chern-Simons theory on $\Sigma$ and hence the number of conformal blocks in the corresponding 2D RCFT. The answer is famously given by the Verlinde formula, which for arbitrary compact $G$, reads \cite{Blau:1993tv}
\begin{equation}
\dim V_{g, k} = (|Z(G)|(k + h)^{\operatorname{rank} G})^{g-1}\sum_{\lambda\in \Lambda_k} \prod_\alpha (1 - e^{2\pi i\alpha(\lambda + \rho)/(k + h)})^{1-g}
\label{Verlinde}
\end{equation}
where $g$ is the genus of $\Sigma$ and $\Lambda_k$ denotes the set of integrable highest weights of $\smash{\widehat{G}_k}$. For the $SU(2)_k$ WZW model, it becomes
\begin{equation}
\dim V_{g, k} = \left(\frac{k + 2}{2}\right)^{g-1}\sum_{m=0}^k \sin\left[\frac{(m + 1)\pi}{k + 2}\right]^{2-2g},
\end{equation}
where the RHS reduces to $k + 1$ for $g = 1$. While our focus is on Wilson loops, it turns out that the appearance of $\lambda + \rho$ in $Z(\Sigma\times S^1)$ comes ``for free'' in our approach, without the need to adjust for any 1D fermionic shifts, which is consistent with the fact that the weights in \eqref{Verlinde} are not associated with Wilson loops. This fact has already been appreciated in prior literature, as we briefly review in Section \ref{3Dloc}.
\section{Wilson Loops and Coadjoint Orbits} \label{wilsonandcoadjoint}
\subsection{The Orbit Method}
A central ingredient in our analysis is the fact that a Wilson loop over a curve $\gamma$ in $M^3$ is a path integral for a 1D Chern-Simons theory whose classical phase space is a coadjoint orbit of $G$, with the corresponding representation $R$ arising by the orbit method \cite{Witten:1988hf}. We will be interested in the case of compact $G$, where this construction is also known as Borel-Weil-Bott quantization. The philosophy is that one can eliminate both the trace and the path ordering from the definition of a Wilson loop in a nonabelian gauge theory at the cost of an additional path integral over all gauge transformations along $\gamma$.
To make this description explicit, we draw from the exposition of \cite{Beasley:2009mb}. We would like to interpret a Wilson loop as the partition function of a quantum-mechanical system on $\gamma$ with time-dependent Hamiltonian. In the Hamiltonian formalism, this is a matter of writing
\begin{equation}
W_R(\gamma) = \operatorname{Tr}_R P\exp\left(i\oint_\gamma A\right) = \operatorname{Tr}_{\mathcal{H}} T\exp\left(-i\oint_\gamma H\right)
\end{equation}
where the Hilbert space $\mathcal{H}$ is the carrier space of the representation $R$, $H$ generates translations along $\gamma$, and the time evolution operator is the holonomy of the gauge field. In the path integral formalism, this becomes
\begin{equation}
W_R(\gamma) = \int DU\, e^{iS_\lambda(U, A|_\gamma)}
\label{Upathintegral}
\end{equation}
where $U$ is an auxiliary bosonic field on $\gamma$, $\lambda$ is the highest weight of $R$, and the restriction of the bulk gauge field $A|_\gamma$ is a background field in the (operator-valued) path integral over $U$. Since the definition of a Wilson loop is independent of any metric on $\gamma$,\footnote{This is not true of its supersymmetric counterparts.} it is not surprising that the action $S_\lambda$ will turn out to describe a topological sigma model.
The Borel-Weil-Bott theorem identifies the irreducible representation $R$ with the space of holomorphic sections of a certain line bundle over the coadjoint orbit $\mathcal{O}_\lambda\subset \mathfrak{g}^\ast$ of $\lambda$, which (in the generic case) is isomorphic to the flag manifold $G/T$ where $T$ is a maximal torus of $G$. In physical terms, it states that $R$ is the Hilbert space obtained by quantizing $\mathcal{O}_\lambda$. We are therefore led to consider the quantum mechanics of a particle on $\mathcal{O}_\lambda$ given by a 1D sigma model of maps $U : S^1\to \mathcal{O}_\lambda$, where the compact worldline is identified with $\gamma\subset M^3$. To ensure that $\mathcal{O}_\lambda$ (rather than $T^\ast\mathcal{O}_\lambda$) appears as the classical phase space, the action for $U$ must be first-order in the time derivative along $S^1$. Moreover, on general grounds, it should be independent of the metric on $S^1$.
There is an essentially unique choice of action that fulfills these wishes. For convenience, we identify $\lambda$ via the Killing form as an element of $\mathfrak{g}$ rather than $\mathfrak{g}^\ast$, so that $\mathcal{O}_\lambda\subset \mathfrak{g}$ is the corresponding adjoint orbit (henceforth, we shall not be careful to distinguish $\mathfrak{g}$ and $\mathfrak{g}^\ast$). We assume that $\lambda$ is a regular weight, so that $\mathcal{O}_\lambda\cong G/G_\lambda$ where $G_\lambda\cong T$. The (left-invariant) Maurer-Cartan form $\theta$ is a distinguished $\mathfrak{g}$-valued one-form on $G$ that satisfies $d\theta + \theta\wedge\theta = 0$. We obtain from it two natural forms on $G$, namely the real-valued presymplectic one-form $\Theta_\lambda$ and the coadjoint symplectic two-form $\nu_\lambda$:
\begin{equation}
\theta = g^{-1}dg\in \Omega^1(G)\otimes \mathfrak{g}, \qquad \Theta_\lambda = i\operatorname{Tr}(\lambda\theta)\in \Omega^1(G), \qquad \nu_\lambda = d\Theta_\lambda\in \Omega^2(G).
\label{forms}
\end{equation}
Both $\Theta_\lambda$ and $\nu_\lambda$ descend to forms on $\mathcal{O}_\lambda$. The weight $\lambda$ naturally determines a splitting of the roots of $G$ into positive and negative, positive roots being those having positive inner product with $\lambda$. Endowing $\mathcal{O}_\lambda$ with the complex structure induced by this splitting makes $\mathcal{O}_\lambda$ a K\"ahler manifold, with K\"ahler form $\nu_\lambda$ of type $(1, 1)$.\footnote{This is usually phrased as a choice of Borel subalgebra $\mathfrak{b}\supset \mathfrak{t}$, so that the coadjoint orbit is isomorphic to $G_\mathbb{C}/B$ where $B$ is the corresponding Borel subgroup and the roots of $B$ are defined to be the positive roots of $G$; then representations are labeled by their lowest weights. We instead adhere to the ``highest weight'' conventions of \cite{Beasley:2009mb}.} Now consider the action
\begin{equation}
S_\lambda(U) = \oint_{S^1} U^\ast(\Theta_\lambda) = \oint_{S^1} (\Theta_\lambda)_m\frac{dU^m}{d\tau}\, d\tau.
\end{equation}
The second expression (written in local coordinates $U^m$ on $\mathcal{O}_\lambda$) is indeed first-order in derivatives, so that the solutions to the classical EOMs are constant maps $U$, as desired.
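To see this explicitly, note that under a variation $\delta U$, the exact piece of $\delta(U^\ast\Theta_\lambda)$ integrates to zero around $S^1$, leaving
\begin{equation}
\delta S_\lambda = \oint_{S^1} U^\ast(\iota_{\delta U}\nu_\lambda) = \oint_{S^1} (\nu_\lambda)_{mn}\, \delta U^m\, \frac{dU^n}{d\tau}\, d\tau,
\end{equation}
so the equation of motion is $(\nu_\lambda)_{mn}\dot{U}^n = 0$. Since the symplectic form $\nu_\lambda$ is nondegenerate, this forces $\dot{U} = 0$.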
To be concrete, we may think of $U$ as parametrizing gauge transformations. Using the isomorphism $G/G_\lambda\xrightarrow{\sim} \mathcal{O}_\lambda$ given by $gG_\lambda\mapsto g\lambda g^{-1}$, we lift $U$ to a map $g : S^1\to G$, so that
\begin{equation}
S_\lambda(U) = i\oint_{S^1} \operatorname{Tr}(\lambda g^{-1}dg).
\label{Uactiong}
\end{equation}
From \eqref{Uactiong}, we see very explicitly that the canonical symplectic form $\nu_\lambda$ on $\mathcal{O}_\lambda$, given in \eqref{forms}, takes the form $d\pi_g\wedge dg$ where the components of $g$ are canonical coordinates. The fact that $\lambda\in \mathfrak{g}$ is quantized as a weight of $G$ implies that \eqref{Uactiong} is independent of the choice of lift from $\mathcal{O}_\lambda$ to $G$. Namely, $g$ is only determined by $U$ up to the right action of $G_\lambda$; under a large gauge transformation $g\mapsto gh$ where $h : S^1\to G_\lambda$, the integrand of \eqref{Uactiong} changes by the total derivative $i\,d\operatorname{Tr}(\lambda\log h)$ and the action changes by an integer multiple of $2\pi$.\footnote{From the geometric quantization point of view, the quantization of $\lambda$ is necessary for the existence of a prequantum line bundle $\mathcal{L}(\lambda)$ over $\mathcal{O}_\lambda$, with curvature $\nu_\lambda$. Each $\lambda$ in the weight lattice gives a homomorphism $\rho_\lambda : T\to U(1)$, which can be used to construct an associated line bundle $\mathcal{L}(\lambda) = G\times_{\rho_\lambda} \mathbb{C}$ over $G/T$, so that the Hilbert space is the space of holomorphic sections of $\mathcal{L}(\lambda)$. Then $\Theta_\lambda$ is a connection on $\mathcal{L}(\lambda)$.} Thus $\Theta_\lambda$ descends (up to exact form) to $\mathcal{O}_\lambda$. The path integral \eqref{Upathintegral} is over all maps $U$ in $L\mathcal{O}_\lambda$, or equivalently, over all maps $g$ in $LG/LG_\lambda$ (accounting for the gauge redundancy).
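As a simple illustration (anticipating the $SU(2)$ conventions of the next section), take $\lambda = -j\sigma_3$ and $h = e^{i\varphi(\tau)\sigma_3}$ with winding number $n$, i.e., $\varphi(\beta) - \varphi(0) = 2\pi n$. Then
\begin{equation}
\Delta S_\lambda = i\oint_{S^1} d\operatorname{Tr}(\lambda\log h) = 2j\oint_{S^1} d\varphi = 2\pi(2j)n,
\end{equation}
which is an integer multiple of $2\pi$ precisely because $2j\in \mathbb{Z}$.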
To couple \eqref{Uactiong} to the bulk gauge field, we simply promote $dg$ to $d_A g = dg - iA|_\gamma\cdot g$:
\begin{equation}
S_\lambda(U, A|_\gamma) = i\oint_{S^1} \operatorname{Tr}(\lambda g^{-1}d_A g).
\end{equation}
Prescribing the correct gauge transformations under $G\times T$ (with $T$ acting on the right and $G$ acting on the left), the 1D Lagrangian transforms by the same total derivative as before.
The first-order action \eqref{Uactiong}, in the absence of a background gauge field, can be thought of as describing the IR limit of a charged particle on $\mathcal{O}_\lambda$ in a magnetic field $\nu_\lambda$. In complete analogy to 3D Chern-Simons theory, the irrelevant two-derivative kinetic terms have the effect of renormalizing $\lambda$ to $\lambda + \rho$ at one loop, and upon supersymmetrizing the theory, the fermion effective action provides a compensating shift by $-\rho$.\footnote{As in 3D, the effect of these fermions can be compared to that of the metaplectic correction in geometric quantization, which states that wavefunctions should not be viewed as sections of $\mathcal{L}(\lambda)$, but rather as half-densities valued in $\mathcal{L}(\lambda)$, meaning that they belong to $\mathcal{L}(\lambda)\otimes K^{1/2}\cong \mathcal{L}(\lambda - \rho)$ where $K^{1/2}$ is a square root of the canonical bundle of $\mathcal{O}_\lambda$ \cite{Mikhaylov:2014aoa}.} We will substantiate this interpretation for $G = SU(2)$ in exhaustive detail.
\subsection{Wilson/'t Hooft Loops in Chern-Simons Theory} \label{loopsinCS}
While the coadjoint representation of a Wilson loop holds in any gauge theory, it is especially transparent in Chern-Simons theory, where it can be derived straightforwardly via a surgery argument \cite{Moore:1989yh}. Consider Chern-Simons on $S^1\times \mathbb{R}^2$, where the Wilson line wraps the $S^1$ at a point on the $\mathbb{R}^2$. Cutting out a small tube around $\gamma$ and performing a gauge transformation $\tilde{g}$, the action changes by\footnote{By Appendix \ref{gaugetransformations}, varying the bulk action gives a boundary term of $-\frac{ik}{4\pi}d(A\tilde{g}^{-1}d\tilde{g})$; the Pontryagin density term does not contribute because $G$ is assumed simply connected. By Appendix \ref{boundaryconditions}, specifying nonzero $A_t$ on the boundary requires adding a boundary term of $\frac{k}{4\pi}\int_{\partial M^3} d^2 x \operatorname{Tr}(A_t A_\phi)$ to the action, whose variation under a $\phi$-dependent gauge transformation $\tilde{g}$ gives another contribution of $\smash{-\frac{ik}{4\pi}\int_{\partial M^3} d^2 x \operatorname{Tr}(A_t\tilde{g}^{-1}\partial_\phi\tilde{g})}$.}
\begin{equation}
\Delta S = -\frac{ik}{2\pi}\int_{\partial M^3} \operatorname{Tr}(A\tilde{g}^{-1}d\tilde{g}).
\end{equation}
Set $\tilde{g} = e^{i\alpha\phi}$ where $e^{2\pi i\alpha} = 1$ (this gauge transformation is singular along the loop; $t$ is the coordinate along $\gamma$ and $\phi$ the coordinate around it). To define a gauge-invariant operator, average over $\tilde{g}\to g\tilde{g}$ and $A\to gAg^{-1} - idgg^{-1}$ where $g = g(t)$, whereupon this becomes
\begin{equation}
\Delta S = ik\int_\gamma \operatorname{Tr}(\alpha g(\partial_t - iA_t)g^{-1})\, dt,
\end{equation}
where we have performed the $\phi$ integral and shrunk the boundary to a point. Finally, replace $g$ by $g^{-1}$. Hence $k\alpha$ must be quantized as a weight $\lambda$.\footnote{We have corrected the transformation rule $\tilde{g}\to g\tilde{g}g^{-1}$ and a spurious factor of $\frac{1}{2}$ in \cite{Moore:1989yh}.} This derivation illustrates that Wilson and 't Hooft loops are equivalent in pure Chern-Simons theory.
To summarize, consider a bulk theory with gauge group $G$ and the 1D Lagrangian
\begin{equation}
\mathcal{L}_\text{1D} = i\operatorname{Tr}[\lambda g^{-1}(\partial_t - iA)g]
\label{Lforg}
\end{equation}
where $g\in G$, $A\equiv A|_\gamma$, and $\lambda\in \mathfrak{t}$ (properly, $\lambda\in \mathfrak{t}^\ast$). Since $\lambda$ is Hermitian in our conventions, the factor of $i$ ensures that the coadjoint orbit action is real. The Lagrangian \eqref{Lforg} transforms by a total derivative under $t$-dependent $G\times T$ gauge transformations
\begin{equation}
g\to h_\ell gh_r, \quad A\to h_\ell Ah_\ell^{-1} - i\partial_t h_\ell h_\ell^{-1},
\label{GtimesT}
\end{equation}
namely $i\operatorname{Tr}(\lambda\partial_t\log h_r)$, where $h_\ell$ is the restriction of a $G$-gauge transformation in the bulk and $h_r\in T$. Hence $\lambda$ is quantized to be a weight of $G$. The $T$-gauge symmetry restricts the degrees of freedom in $g$ to $G/T$. Quantizing $g$ in this Lagrangian leads to the Wilson line.
Strictly speaking, the global symmetry of the model \eqref{Uactiong} that we gauge to obtain \eqref{Lforg} is $G/Z(G)$, since the center is already gauged. This should be contrasted with the global symmetry $G\times G/Z(G)$ of a particle on a group manifold with the usual kinetic term $\operatorname{Tr}((g^{-1}\dot{g})^2)$, which consists of isometries of the bi-invariant Killing metric on $G$.
\section{Wilson Loops in \texorpdfstring{$\mathcal{N} = 2$}{N = 2} Chern-Simons Theory} \label{N2CS}
We now show that properly defining half-BPS Wilson loops in $\mathcal{N} = 2$ Chern-Simons theory ensures that their weights are not renormalized, in direct parallel to the non-renormalization of the bulk Chern-Simons level. This involves enhancing the sigma model of the previous section with 1D $\mathcal{N} = 2$ supersymmetry in a way compatible with bulk 3D $\mathcal{N} = 2$ supersymmetry.
\subsection{Shift from Line Dynamics}
\subsubsection{\texorpdfstring{$\mathcal{N} = 2$}{N = 2} Coadjoint Orbit}
We work in Lorentzian 1D $\mathcal{N} = 2$ superspace with coordinates $(t, \theta, \theta^\dag)$ (see Appendix \ref{1DN2}). Implicitly, we imagine a quantum-mechanical system on a line embedded in $\mathbb{R}^{1, 2}$, but we will not need to pass to 3D until the next section. Our primary case study is $G = SU(2)$. We first construct, without reference to the 3D bulk, an $SU(2)$-invariant and supersymmetric coadjoint orbit Lagrangian from the 1D $\mathcal{N} = 2$ chiral superfield
\begin{equation}
\Phi = \phi + \theta\psi - i\theta\theta^\dag\dot{\phi}
\end{equation}
descending from bulk super gauge transformations and the 1D $\mathcal{N} = 2$ vector superfields
\begin{equation}
V_i = a_i + \theta\psi_i - \theta^\dag\psi_i^\dag + \theta\theta^\dag A_i
\end{equation}
obtained from restrictions of the bulk fields to the Wilson line, which extends along the 0 direction in flat space. Here, $i = 1, 2, 3$ label the $\mathfrak{su}(2)$ components in the $\vec{\sigma}/2$ basis; $\phi$ is a complex scalar and $\psi$ is a complex fermion; $a_i, A_i$ are real scalars and $\psi_i$ are complex fermions; and the relevant SUSY transformations are given in \eqref{1Dsusyvector} and \eqref{1Dsusychiral}.
We begin by writing \eqref{Lforg} in a form more amenable to supersymmetrization, namely in terms of a complex scalar $\phi$ whose two real degrees of freedom come from those in $g\in G = SU(2)$ minus those in $h\in T = U(1)$. Along with its conjugate $\phi^\dag$, it parametrizes the phase space $SU(2)/U(1)\cong \mathbb{CP}^1$. Take $\lambda = -j\sigma_3$ with $j\in \frac{1}{2}\mathbb{Z}_{\geq 0}$, which fixes a Cartan; then
\begin{equation}
g = \left(\begin{array}{cc} a & b \\ -\bar{b} & \bar{a} \end{array}\right), \quad |a|^2 + |b|^2 = 1
\end{equation}
is subject to a $U(1)$ gauge redundancy $g\sim ge^{i\theta\sigma_3}$. We identify variables via the Hopf map $SU(2)\to S^2$, followed by stereographic projection:
\begin{equation}
\phi = -\frac{a}{\bar{b}}.
\end{equation}
This map respects the chosen $U(1)$ gauge equivalence: $(a, b)\to (ae^{i\theta}, be^{-i\theta})$. Let us gauge-fix the $U(1)$ action on the right by taking $b = r$ real. Since $|a|^2 + r^2 = 1$, $r$ is only determined by $a$ up to a sign (reflecting the ambiguity in the action of $SU(2)$ on $S^2$). Note that the gauge fixing breaks down when $|a| = 1$ ($r = 0$). Accounting for the sign ambiguity, we have
\begin{equation}
\phi = -\frac{a}{\pm\sqrt{1 - |a|^2}} \implies a = \mp\frac{\phi}{\sqrt{1 + |\phi|^2}}, \mbox{ } r = \pm\frac{1}{\sqrt{1 + |\phi|^2}}.
\label{gtophi}
\end{equation}
The relative minus sign is important for ensuring equivariance of the map from $a$ to $\phi$ with respect to the action of $SU(2)$. Let us fix the overall sign to ``$(a, r) = (+, -)$.'' This is a one-to-one map between the interior of the unit disk $|a| < 1$ and the $\phi$-plane that takes the boundary of the disk to the point at infinity. To couple the $\phi$ degrees of freedom to the gauge field, we work in the basis $\vec{\sigma}/2$, so that
\begin{equation}
A = \frac{1}{2}\left(\begin{array}{cc} A_3 & A_1 - iA_2 \\ A_1 + iA_2 & -A_3 \end{array}\right)
\end{equation}
where the three $\mathfrak{su}(2)$ components $A_{1, 2, 3}$ are real (note that $A$ has only one spacetime component). Then the non-supersymmetric 1D coadjoint orbit Lagrangian \eqref{Lforg} can be written as $\mathcal{L}_\text{1D} = j\mathcal{L}$ where $\mathcal{L} = \mathcal{L}_0 + \mathcal{L}_A$ and
\begin{align}
\mathcal{L}_0 &= \frac{i}{j}\operatorname{Tr}(\lambda g^{-1}\partial_t g) = \frac{i(\phi\dot{\phi}^\dag - \phi^\dag\dot{\phi})}{1 + |\phi|^2}, \label{L0} \\
\mathcal{L}_A &= \frac{1}{j}\operatorname{Tr}(\lambda g^{-1}Ag) = -\left[\frac{(A_1 + iA_2)\phi + (A_1 - iA_2)\phi^\dag - A_3(1 - |\phi|^2)}{1 + |\phi|^2}\right]. \label{LA}
\end{align}
Note that with Hermitian generators, the Killing form given by $\operatorname{Tr}$ is positive-definite.
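These expressions can be cross-checked mechanically. The following NumPy sketch (a numerical aid, not part of the derivation) compares the group-theoretic and $\phi$-space forms of \eqref{L0} and \eqref{LA}, using an arbitrary smooth path $\phi(t)$, arbitrary constants $A_i$, and the sign choice ``$(a, r) = (+, -)$'' in \eqref{gtophi}:
\begin{verbatim}
# Check (i/j) Tr(lambda g^{-1} dg/dt) and (1/j) Tr(lambda g^{-1} A g)
# against the phi-space expressions (L0) and (LA).
import numpy as np

j = 1.5
lam = -j*np.diag([1.0, -1.0])                  # lambda = -j sigma_3

def lift(phi):                                 # lift with (a, r) = (+, -)
    s = 1/np.sqrt(1 + abs(phi)**2)
    a, r = phi*s, -s
    return np.array([[a, r], [-r, np.conj(a)]])

phi = lambda t: (0.3 + 0.2j)*np.exp((0.1 + 1.3j)*t)
t0, eps = 0.37, 1e-6
g = lift(phi(t0))
dg = (lift(phi(t0 + eps)) - lift(phi(t0 - eps)))/(2*eps)
p = phi(t0)
dp = (phi(t0 + eps) - phi(t0 - eps))/(2*eps)

L0_g = (1j/j)*np.trace(lam @ np.linalg.inv(g) @ dg)
L0_p = 1j*(p*np.conj(dp) - np.conj(p)*dp)/(1 + abs(p)**2)
assert abs(L0_g - L0_p) < 1e-6                 # finite-difference error

A1, A2, A3 = 0.7, -0.4, 1.1
A = 0.5*np.array([[A3, A1 - 1j*A2], [A1 + 1j*A2, -A3]])
LA_g = (1/j)*np.trace(lam @ np.linalg.inv(g) @ A @ g)
LA_p = -((A1 + 1j*A2)*p + (A1 - 1j*A2)*np.conj(p)
         - A3*(1 - abs(p)**2))/(1 + abs(p)**2)
assert abs(LA_g - LA_p) < 1e-12
\end{verbatim}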
By promoting $\phi$ to $\Phi$, we find that the supersymmetric completion of $\mathcal{L}_0$ (the coadjoint orbit Lagrangian with vanishing background gauge field, i.e., the pullback of the presymplectic one-form for $SU(2)$) is
\begin{equation}
\tilde{\mathcal{L}}_0 = \int d^2\theta\, K = \frac{i(\phi\dot{\phi}^\dag - \phi^\dag\dot{\phi})}{1 + |\phi|^2} - \frac{\psi^\dag\psi}{(1 + |\phi|^2)^2}, \quad K\equiv \log(1 + |\Phi|^2).
\label{K}
\end{equation}
We have covered $\mathbb{CP}^1$ with the standard patches having local coordinates $\Phi$ and $1/\Phi$, so that $K$ is the K\"ahler potential for the Fubini-Study metric in the patch containing the origin.
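As a consistency check on the normalization, for $\lambda = -j\sigma_3$ the presymplectic one-form whose pullback appears in \eqref{L0} is $\Theta_\lambda = ij(\phi\, d\phi^\dag - \phi^\dag d\phi)/(1 + |\phi|^2)$, so that
\begin{equation}
\nu_\lambda = d\Theta_\lambda = \frac{2ij\, d\phi\wedge d\phi^\dag}{(1 + |\phi|^2)^2}, \qquad \int_{\mathbb{CP}^1} \nu_\lambda = 4\pi j,
\end{equation}
which is $2j$ times the Fubini-Study form associated to $K$; the flux quantization $\int_{\mathbb{CP}^1}\nu_\lambda/2\pi = 2j\in \mathbb{Z}$ is the statement that $\lambda$ is a weight.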
To gauge $\tilde{\mathcal{L}}_0$ in a supersymmetric way and thereby obtain the supersymmetric completion of $\mathcal{L}$ requires promoting the $A_i$ to $V_i$, which is more involved. Having eliminated the integration variable $g$ in favor of $\phi$, let us denote by $g$ what we called $h_\ell$ in \eqref{GtimesT}. Writing finite and infinitesimal local $SU(2)$ transformations as
\begin{equation}
g = \left(\begin{array}{cc} a & b \\ -\bar{b} & \bar{a} \end{array}\right)\sim \left(\begin{array}{cc} 1 + \frac{i\epsilon_3}{2} & \frac{i\epsilon_1 + \epsilon_2}{2} \\ \frac{i\epsilon_1 - \epsilon_2}{2} & 1 - \frac{i\epsilon_3}{2} \end{array}\right),
\label{infinitesimal}
\end{equation}
finite and infinitesimal gauge transformations take the form
\begin{align}
A\to gAg^{-1} - i\dot{g}g^{-1} &\Longleftrightarrow \delta_{SU(2)}A_i = \epsilon_{ijk}A_j\epsilon_k + \dot{\epsilon}_i, \label{gaugeA} \\
\Phi\to \frac{a\Phi + b}{-\bar{b}\Phi + \bar{a}} &\Longleftrightarrow \delta_{SU(2)}\Phi = \epsilon_i X_i, \mbox{ } (X_1, X_2, X_3)\equiv \frac{1}{2}(i(1 - \Phi^2), 1 + \Phi^2, 2i\Phi),
\end{align}
where the holomorphic $SU(2)$ Killing vectors $X_i$ satisfy $[X_i\partial_\Phi, X_j\partial_\Phi] = \epsilon_{ijk}X_k\partial_\Phi$. Then
\begin{equation}
\delta_{SU(2)}K = \epsilon_i(\mathcal{F}_i + \bar{\mathcal{F}}_i), \quad (\mathcal{F}_1, \mathcal{F}_2, \mathcal{F}_3)\equiv \frac{1}{2}(-i\Phi, \Phi, i)
\end{equation}
(any purely imaginary $\mathcal{F}_3$ would do, but our choice leads to the ``canonical'' Noether currents transforming in the adjoint representation). To implement the Noether procedure, we promote the real $\epsilon_i$ to complex chiral superfields $\Lambda_i$:
\begin{equation}
\delta_{SU(2)}\Phi = \Lambda_i X_i.
\end{equation}
The corresponding change in $\tilde{\mathcal{L}}_0$ can be read off from
\begin{equation}
\delta_{SU(2)}K = \Lambda_i\mathcal{F}_i + \bar{\Lambda}_i\bar{\mathcal{F}}_i - i(\Lambda_i - \bar{\Lambda}_i)J_i
\label{deltaK}
\end{equation}
where the $SU(2)$ Noether currents (Killing potentials) are the real superfields
\begin{equation}
J_i = \frac{iX_i\Phi^\dag}{1 + |\Phi|^2} - i\mathcal{F}_i \implies (J_1, J_2, J_3) = \frac{1}{2}\left(-\frac{\Phi + \Phi^\dag}{1 + |\Phi|^2}, -\frac{i(\Phi - \Phi^\dag)}{1 + |\Phi|^2}, \frac{1 - |\Phi|^2}{1 + |\Phi|^2}\right),
\label{Noether}
\end{equation}
which satisfy $J_i^2 = 1/4$ and
\begin{equation}
\delta_{SU(2)}J_i = -\frac{1}{2}\epsilon_{ijk}(\Lambda_j + \bar{\Lambda}_j)J_k + i(\Lambda_j - \bar{\Lambda}_j)J_j J_i - \frac{i}{4}(\Lambda_i - \bar{\Lambda}_i).\footnote{Under $\Phi\to 1/\Phi$, we have $J_1 + iJ_2\leftrightarrow J_1 - iJ_2$ and $J_3\to -J_3$. The difference between $\mathcal{F}_3 = i/2$ and $\mathcal{F}_3 = 0$ ($J_3$ and $J_3 - 1/2$) is a $U(1)$ Chern-Simons term in the third component of the gauge field, which is singled out by our conventions for the maximal torus (Chern-Simons terms for simple gauge groups do not exist in 1D). The $J_i$ are only defined up to additive constants, but for nonabelian gauge groups, these constants can be fixed by choosing the $J_i$ to transform in the adjoint representation; in our case,
\[
X_i\partial_\Phi J_j - X_j\partial_\Phi J_i = (X_i\partial_\Phi + \bar{X}_i\partial_{\Phi^\dag})J_j = \epsilon_{ijk}J_k.
\]
For each $U(1)$ factor of the gauge group, there is one undetermined constant (corresponding to an FI term).}
\label{deltaJ}
\end{equation}
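These statements follow from straightforward algebra; as a mechanical check, the following SymPy sketch (treating $\Phi$ and $\Phi^\dag$ as independent variables, as usual in K\"ahler geometry) verifies the $\mathfrak{su}(2)$ algebra of the Killing vectors, the variation of $K$, and the properties of the Noether currents, including $\sum_i J_i^2 = \frac{1}{4}$:
\begin{verbatim}
# SymPy check: su(2) algebra of the Killing vectors, the variation of K,
# and the Noether currents, with Phi and Phib treated as independent.
import sympy as sp

Phi, Phib = sp.symbols('Phi Phib')
X  = [sp.I*(1 - Phi**2)/2, (1 + Phi**2)/2, sp.I*Phi]
Xb = [-sp.I*(1 - Phib**2)/2, (1 + Phib**2)/2, -sp.I*Phib]
F  = [-sp.I*Phi/2, Phi/2, sp.I/2]
Fb = [sp.I*Phib/2, Phib/2, -sp.I/2]
K  = sp.log(1 + Phi*Phib)

for i in range(3):                 # [X_i d, X_j d] = eps_{ijk} X_k d
    for jj in range(3):
        comm = X[i]*sp.diff(X[jj], Phi) - X[jj]*sp.diff(X[i], Phi)
        target = sum(sp.LeviCivita(i + 1, jj + 1, k + 1)*X[k]
                     for k in range(3))
        assert sp.simplify(comm - target) == 0

J = []
for i in range(3):                 # delta K = eps_i (F_i + Fb_i)
    dK = X[i]*sp.diff(K, Phi) + Xb[i]*sp.diff(K, Phib)
    assert sp.simplify(dK - (F[i] + Fb[i])) == 0
    J.append(sp.simplify(sp.I*X[i]*sp.diff(K, Phi) - sp.I*F[i]))

assert sp.simplify(J[2] - (1 - Phi*Phib)/(2*(1 + Phi*Phib))) == 0
assert sp.simplify(sum(Ji**2 for Ji in J) - sp.Rational(1, 4)) == 0
\end{verbatim}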
The variation \eqref{deltaJ} generalizes $\delta_{SU(2)}J_i = -\epsilon_{ijk}\epsilon_j J_k$ for real $\epsilon_i$. Now, if we could find a counterterm $\Gamma$ such that $\delta_{SU(2)}\Gamma = i(\Lambda_i - \bar{\Lambda}_i)J_i$, then we would be done: the supersymmetric completion of $\mathcal{L}$ would be the minimally gauged supersymmetric $\mathbb{CP}^1$ model $\smash{\tilde{\mathcal{L}} = \tilde{\mathcal{L}}_0 + \tilde{\mathcal{L}}_A}$ where
\begin{equation}
\tilde{\mathcal{L}}_A = \int d^2\theta\, \Gamma, \quad \delta_{SU(2)}\Gamma = i(\Lambda_i - \bar{\Lambda}_i)J_i.
\end{equation}
Note that $\tilde{\mathcal{L}}$ is invariant under local $SU(2)$ because, in light of \eqref{deltaK}, the total variation of $K + \Gamma$ takes the form of a K\"ahler transformation. There exists a standard procedure for constructing such a $\Gamma$ \cite{Wess:1992cp}, which we review in Appendix \ref{higherGamma}. Its exact form is
\begin{equation}
\Gamma = 2\int_0^1 d\alpha\, e^{i\alpha V_i O_i}V_j J_j
\end{equation}
where $O_i = X_i\partial_\Phi - \bar{X}_i\partial_{\Phi^\dag}$. For our purposes, it suffices to work in Wess-Zumino gauge, where the bulk vector superfield is nilpotent of degree three ($V_\text{3D}^3 = 0$) and its restriction to the line is nilpotent of degree two ($V_\text{1D}^2 = 0$): namely, $V_i = \theta\theta^\dag A_i$. In this gauge, we have $\Gamma = 2V_i J_i$, so that $\tilde{\mathcal{L}}$ reduces to the non-manifestly supersymmetric Lagrangian $\tilde{\mathcal{L}}_0 + \mathcal{L}_A$. In arbitrary gauge, $\tilde{\mathcal{L}}$ contains terms of arbitrarily high order in the dimensionless bottom component of $V$.\footnote{With $V = V_i T_i$ and $\Lambda = \Lambda_i T_i$ where $T_i = \sigma_i/2$, we have in Wess-Zumino gauge that a 1D super gauge transformation truncates to
\begin{align}
e^{2V}\to e^{i\Lambda}e^{2V}e^{-i\bar{\Lambda}} &\Longleftrightarrow V\to V + \frac{i}{2}(\Lambda - \bar{\Lambda}) - \frac{i}{2}[V, \Lambda + \bar{\Lambda}] \nonumber \\
&\Longleftrightarrow \delta_{SU(2)}V_i = \frac{i}{2}(\Lambda_i - \bar{\Lambda}_i) + \frac{1}{2}\epsilon_{ijk}V_j(\Lambda_k + \bar{\Lambda}_k) \label{gaugeV}
\end{align}
(note that the order of chiral and antichiral parameters is opposite to that in 3D due to our conventions for 1D $\mathcal{N} = 2$ superspace). Wess-Zumino gauge is preserved under super gauge transformations with parameters $\Lambda_i = \epsilon_i - i\theta\theta^\dag\dot{\epsilon}_i$ where $\epsilon_i\in \mathbb{R}$ (i.e., where the lowest component of $\Lambda$ is real and the fermionic component vanishes). For such $\Lambda_i$, \eqref{gaugeV} is precisely equivalent to \eqref{gaugeA}.}
An important point is the following. There are two standard ways of geometrizing the action of $SU(2)$ on $S^2$, both of which can be found in the literature. These two conventions differ by signs, leading to slightly different $SU(2)$ Noether currents. First, the action of $SU(2)$ on $S^2$ descends from the adjoint action of $SU(2)$ on $\mathfrak{su}(2)\cong \mathbb{R}^3$, which preserves the Killing form (hence $S^2\subset \mathbb{R}^3$). This convention is used in, e.g., \cite{Wess:1992cp}, corresponding to
\begin{equation}
(J_1, J_2, J_3)_\text{other} = \frac{1}{2}\left(\frac{\Phi + \Phi^\dag}{1 + |\Phi|^2}, -\frac{i(\Phi - \Phi^\dag)}{1 + |\Phi|^2}, -\frac{1 - |\Phi|^2}{1 + |\Phi|^2}\right)
\end{equation}
(differing from our \eqref{Noether} by an overall sign together with a flip of the sign of $J_2$). Second, $SU(2)$ acts on $\mathbb{CP}^1$ by linear fractional transformations. We use the latter convention unless stated otherwise. For further details, see Appendix \ref{sigmadetails}.
\subsubsection{Effective Action}
To compute the effective action generated by integrating out $\psi$, we add an $SU(2)$-invariant kinetic term for $\psi$ (with an implicit dimensionful coefficient) as a UV regulator:
\begin{equation}
\mathcal{L}' = \int d^2\theta\, K' = -\frac{i(\psi^\dag\dot{\psi} - \dot{\psi}^\dag\psi) + 4\dot{\phi}\dot{\phi}^\dag}{(1 + |\phi|^2)^2} - \frac{2i(\dot{\phi}^\dag\phi - \phi^\dag\dot{\phi})\psi^\dag\psi}{(1 + |\phi|^2)^3}, \quad K'\equiv \frac{D^\dag\Phi^\dag D\Phi}{(1 + |\Phi|^2)^2}.
\label{Kprime}
\end{equation}
Note that since $D\Phi = \psi - 2i\theta^\dag\dot{\phi} + i\theta\theta^\dag\dot{\psi}$ transforms in the same way under $SU(2)$ as its bottom component $\psi$, $K'$ is automatically invariant under global $SU(2)$. We want to gauge $K'$. With chiral superfield gauge transformation parameters, we have (note $DX_i = 2\mathcal{F}_i D\Phi$)
\begin{equation}
\delta_{SU(2)}K' = -i(\Lambda_i - \bar{\Lambda}_i)J_i' - i(D\Lambda_i I_i - D^\dag\bar{\Lambda}_i\bar{I}_i)
\label{deltaKprime}
\end{equation}
where $J_i'$ are the bosonic Noether currents associated to $K'$ and the $I_i$ are fermionic:
\begin{equation}
J_i' = -2K' J_i, \quad I_i = \frac{iX_i(D\Phi)^\dag}{(1 + |\Phi|^2)^2}.
\label{Kprimecurrents}
\end{equation}
There exists a counterterm $\Gamma'$ satisfying
\begin{equation}
\delta_{SU(2)}\Gamma' = i(\Lambda_i - \bar{\Lambda}_i)J_i' + i(D\Lambda_i I_i - D^\dag\bar{\Lambda}_i\bar{I}_i),
\label{Gammaprimecondition}
\end{equation}
which takes the form
\begin{equation}
\int d^2\theta\, \Gamma' = \frac{2\psi^\dag\psi}{(1 + |\phi|^2)^2}\left[\frac{(A_1 + iA_2)\phi + (A_1 - iA_2)\phi^\dag - A_3(1 - |\phi|^2)}{1 + |\phi|^2}\right] + \cdots
\end{equation}
in Wess-Zumino gauge, such that the Lagrangian
\begin{equation}
\tilde{\mathcal{L}}' = \int d^2\theta\, (K' + \Gamma')\equiv \mathcal{L}_\psi - \frac{4\dot{\phi}\dot{\phi}^\dag}{(1 + |\phi|^2)^2} + \cdots
\end{equation}
(written in Wess-Zumino gauge) is invariant under local $SU(2)$, where
\begin{equation}
\mathcal{L}_\psi = -\frac{i(\psi^\dag\dot{\psi} - \dot{\psi}^\dag\psi)}{(1 + |\phi|^2)^2} - \frac{2\mathcal{L}\psi^\dag\psi}{(1 + |\phi|^2)^2}
\end{equation}
is itself invariant under local $SU(2)$ (we construct $\Gamma'$ in Appendix \ref{higherGammap} using a general pres\-cription for the full nonlinear gauging of supersymmetric sigma models with higher-derivative terms). Thus the ``${\cdots}$'' in $\tilde{\mathcal{L}}'$ contains only dimension-two terms not involving $\psi$, namely the couplings to $A_i$ necessary to make the two-derivative term in $\phi$ invariant under local $SU(2)$. Making the scale $\mu$ of the higher-dimension terms explicit, consider
\begin{equation}
\tilde{\mathcal{L}}_\text{tot} = j\tilde{\mathcal{L}} - \frac{1}{2\mu}\tilde{\mathcal{L}}'\equiv j\mathcal{L} + \psi^\dag\mathcal{D}\psi + \frac{2\dot{\phi}\dot{\phi}^\dag}{\mu(1 + |\phi|^2)^2} + \cdots,
\end{equation}
where we have integrated by parts. Performing the path integral over $\psi$ generates the one-loop effective action
\begin{equation}
\operatorname{tr}\log\mathcal{D} = \pm\frac{i}{2}\int dt\, \mathcal{L},
\label{effectiveaction}
\end{equation}
as derived in Appendix \ref{effactiondetails}. The regularization-dependent sign is fixed to ``$-$'' by canonical quantization, leading to a shift $j\to j - 1/2$. The ``${\cdots}$'' terms in $\tilde{\mathcal{L}}'$ decouple at low energies ($\mu\to\infty$).
The full component-wise Lagrangian $\tilde{\mathcal{L}}'$ in Wess-Zumino gauge is $\tilde{\mathcal{L}}' = \mathcal{L}_\psi - 4\mathcal{L}_\phi$ where
\begin{equation}
\mathcal{L}_\phi\equiv F - \frac{1}{2}(A_1 - iA_2)F_- - \frac{1}{2}(A_1 + iA_2)F_+ - A_3 F_3 - \frac{1}{4}\mathcal{L}_A^2 + \frac{1}{4}A_i^2
\end{equation}
and we have defined
\begin{equation}
F = \frac{\dot{\phi}\dot{\phi}^\dag}{(1 + |\phi|^2)^2}, \quad F_+ = F_-^\dag = \frac{-i(\dot{\phi} + \phi^2\dot{\phi}^\dag)}{(1 + |\phi|^2)^2}, \quad F_3 = \frac{i(\phi\dot{\phi}^\dag - \phi^\dag\dot{\phi})}{(1 + |\phi|^2)^2}.
\end{equation}
Note that $\mathcal{L}_A = 2A_i J_i$ where $J_i$ denotes the lowest component. One can check that $\mathcal{L}_\phi$, hence $\tilde{\mathcal{L}}'$, is invariant under local $SU(2)$. We have $\delta_{SU(2)}\mathcal{L}_\psi = \delta_{SU(2)}\mathcal{L}_\phi = 0$ exactly (not up to total derivatives), which is a consequence of the fact that $\delta_{SU(2)}\Gamma'$ cancels $\delta_{SU(2)}K'$ exactly.
\subsection{Shift from Canonical Quantization}
Canonical quantization of the $\mathcal{N} = 2$ quantum mechanics provides another perspective on the shift in $j$. Here, we set $A_i = 0$, whence
\begin{equation}
\tilde{\mathcal{L}}|_{A_i = 0} = \tilde{\mathcal{L}}_0, \quad \tilde{\mathcal{L}}'|_{A_i = 0} = \mathcal{L}',
\end{equation}
so that the full Lagrangian is $\smash{\tilde{\mathcal{L}}_\text{tot}|_{A_i = 0}} = j\tilde{\mathcal{L}}_0 - \smash{\frac{1}{2\mu}}\mathcal{L}' = L_B + L_F$ where $L_B$ and $L_F$ describe 1D sigma models with $S^2$ target space:
\begin{align}
L_B &= \frac{ij(\phi\dot{\phi}^\dag - \phi^\dag\dot{\phi})}{1 + |\phi|^2} + \frac{i\alpha}{2}\left(\frac{\dot{\phi}}{\phi} - \frac{\dot{\phi}^\dag}{\phi^\dag}\right) + \frac{2\dot{\phi}\dot{\phi}^\dag}{\mu(1 + |\phi|^2)^2}, \label{LB} \\
L_F &= -\left[j - \frac{i(\phi\dot{\phi}^\dag - \phi^\dag\dot{\phi})}{\mu(1 + |\phi|^2)}\right]\frac{\psi^\dag\psi}{(1 + |\phi|^2)^2} + \frac{i(\psi^\dag\dot{\psi} - \dot{\psi}^\dag\psi)}{2\mu(1 + |\phi|^2)^2}.
\end{align}
For later convenience, we have added a total derivative, parametrized by $\alpha\in \mathbb{R}$, to $L_B$. Its meaning is as follows: $L_B$ describes an electrically charged particle on $S^2$ in the field of a magnetic monopole of charge $\propto j$ at the center, with the scale $\mu\in \mathbb{R}_{\geq 0}$ (the spectral gap) proportional to its inverse mass and $\alpha$ parametrizing the longitudinal gauge of the monopole vector potential. We define the gauges $S$, $E$, and $N$ by setting $\alpha = (0, j, 2j)$, respectively. We refer to $L_B$ as the ``bosonic system'' and to $L_B + L_F$ as the corresponding ``supersymmetric system.'' We now summarize the results of quantizing the theories $L_B$ and $L_B + L_F$: details are given in Appendix \ref{canonicalquantization}. As when computing the effective action, we use the $\mu$-suppressed kinetic terms as a technical aid; they have the effect of enlarging the phase space.
\subsubsection{Bosonic System}
As a warmup, consider $L_B$ alone. At finite $\mu$, the phase space is $(2 + 2)$-dimensional and the quantum Hamiltonian can be written as
\begin{equation}
H_j = \frac{\mu}{2}(\vec{L}^2 - j^2) = \frac{\mu}{2}(\ell(\ell + 1) - j^2).
\label{Hj}
\end{equation}
Here, $\smash{\vec{L}}^2 = \frac{1}{2}(L_+ L_- + L_- L_+) + L_3^2$ and we have defined the operators
\begin{align}
L_+ &= -\phi^2\frac{\partial}{\partial\phi} - \frac{\partial}{\partial\phi^\dag} + \frac{2j|\phi|^2 + \alpha(1 - |\phi|^2)}{2|\phi|^2}\phi, \nonumber \\
L_- &= \frac{\partial}{\partial\phi} + (\phi^\dag)^2\frac{\partial}{\partial\phi^\dag} + \frac{2j|\phi|^2 + \alpha(1 - |\phi|^2)}{2|\phi|^2}\phi^\dag, \label{Lquantumwithmu} \\
L_3 &= \phi\frac{\partial}{\partial\phi} - \phi^\dag\frac{\partial}{\partial\phi^\dag} - (j - \alpha), \vphantom{\frac{\phi^\dag}{2}} \nonumber
\end{align}
which satisfy $[L_3, L_\pm] = \pm L_\pm$, $[L_+, L_-] = 2L_3$. The spectrum is constrained to $\ell\geq j$ by an $L_3$ selection rule,\footnote{Considering the matrix element $\langle\theta = 0|L_3|\ell, m'\rangle$ shows that $\langle\theta = 0|\ell, m'\rangle = 0$ for $m'\neq -j$ (note that while the position eigenstate $|\theta = 0\rangle$ is $\varphi$-independent in the gauge $\alpha = 0$, it acquires a phase factor of $e^{-i\alpha\varphi}$ for general $\alpha$). In particular, since $|\theta, \varphi\rangle$ is related to $|\theta = 0\rangle$ by a rotation, $\langle\theta, \varphi|\ell, m\rangle = 0$ unless $\ell\geq j$.} with each level $\ell$ appearing once; the eigenfunctions of the associated generalized angular momentum are monopole spherical harmonics. As $\mu\to\infty$, all states except those with $\ell = j$ decouple (after adding $-j\mu/2$ to $H_j$). Alternatively, rather than taking the decoupling limit $\mu\to\infty$ in $L_B$ (which projects out all but the spin-$j$ states), one can set $j = 0$, which yields the rigid rotor. Its Hamiltonian is given in terms of the Laplace-Beltrami operator $\Delta_{S^2}$, whose spectrum is $-\ell(\ell + 1)$ with degeneracy $2\ell + 1$ for $\ell\geq 0$.
The bosonic theory with $\mu = \infty$ ($L_B = j\mathcal{L}_0$, in $S$ gauge) is the well-known Wess-Zumino term for quantization of spin. The action computes the solid angle enclosed by a trajectory on the sphere, and the Dirac quantization condition requires that the coefficient $j$ be a half-integer. Quantizing the compact phase space $S^2$ yields $2j + 1$ states $|j, m\rangle$, all eigenstates of $L_3$. Indeed, at $\mu = \infty$, the phase space is $(1 + 1)$-dimensional and we can write
\begin{equation}
L_+ = -\phi^2\partial_\phi + (2j - \alpha)\phi, \quad L_- = \partial_\phi + \frac{\alpha}{\phi}, \quad L_3 = \phi\partial_\phi - (j - \alpha).
\label{Lquantumnomu}
\end{equation}
The wavefunctions are $\phi^{-\alpha}, \ldots, \phi^{2j - \alpha}$, the $L_3$ eigenvalues range from $-j$ to $j$ in integer steps, and $\smash{\vec{L}}^2 = j(j + 1)$.
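Concretely, representing \eqref{Lquantumnomu} in the $S$ gauge on the monomial basis $\{1, \phi, \ldots, \phi^{2j}\}$ gives $(2j + 1)$-dimensional matrices; the following NumPy sketch confirms the $\mathfrak{su}(2)$ algebra and the Casimir (the matrices are not Hermitian because the monomial basis is not orthonormal):
\begin{verbatim}
# Matrices of L+, L-, L3 on the monomial basis {1, phi, ..., phi^(2j)}
# in the S gauge (alpha = 0).
import numpy as np

def su2_from_monomials(j):
    n = int(2*j) + 1
    Lp, Lm, L3 = (np.zeros((n, n)) for _ in range(3))
    for k in range(n):                  # column k <-> wavefunction phi^k
        L3[k, k] = k - j
        if k > 0:
            Lm[k - 1, k] = k            # L- phi^k = k phi^(k-1)
        if k < n - 1:
            Lp[k + 1, k] = 2*j - k      # L+ phi^k = (2j - k) phi^(k+1)
    return Lp, Lm, L3

for j in (0.5, 1.0, 1.5, 2.0):
    Lp, Lm, L3 = su2_from_monomials(j)
    assert np.allclose(L3 @ Lp - Lp @ L3, Lp)
    assert np.allclose(L3 @ Lm - Lm @ L3, -Lm)
    assert np.allclose(Lp @ Lm - Lm @ Lp, 2*L3)
    C = (Lp @ Lm + Lm @ Lp)/2 + L3 @ L3  # quadratic Casimir
    assert np.allclose(C, j*(j + 1)*np.eye(int(2*j) + 1))
\end{verbatim}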
\subsubsection{Supersymmetric System}
For $L_B + L_F$, let us keep $\mu$ finite (work in the full phase space) and set $\alpha = 0$. Write
\begin{equation}
\chi = \frac{\psi}{\sqrt{\mu}(1 + |\phi|^2)},
\label{chi}
\end{equation}
which satisfies $\{\chi, \chi^\dag\} = 1$ upon quantization. The supercharges are represented by differential operators as
\begin{equation}
Q = \psi\left(\frac{\partial}{\partial\phi} - \frac{(j + 1/2)\phi^\dag}{1 + |\phi|^2}\right), \quad Q^\dag = \psi^\dag\left(-\frac{\partial}{\partial\phi^\dag} - \frac{(j - 1/2)\phi}{1 + |\phi|^2}\right),
\label{Qquantum}
\end{equation}
which are adjoints with respect to the Fubini-Study measure. The Hamiltonian is
\begin{equation}
H' = \frac{1}{2}\{Q, Q^\dag\} = H_{j + \chi^\dag\chi - 1/2} + j\mu\chi^\dag\chi - \frac{\mu}{2}(j - 1/2) = \frac{\mu}{2}(\vec{L}_f^2 - (j + 1/2)(j - 1/2))
\label{Hp}
\end{equation}
where $\vec{L}_f = \vec{L}|_{j + \chi^\dag\chi - 1/2}$. On the Hilbert space $(L^2(S^2, \mathbb{C})\otimes |0\rangle)\oplus (L^2(S^2, \mathbb{C})\otimes \chi^\dag|0\rangle)$,
\begin{align}
H' &= \left(\begin{array}{cc} H_{j - 1/2} - \mu(j - 1/2)/2 & 0 \\ 0 & H_{j + 1/2} + \mu(j + 1/2)/2 \end{array}\right) \\
&= \frac{\mu}{2}\left(\begin{array}{cc} \ell_b(\ell_b + 1) - (j - 1/2)(j + 1/2) & 0 \\ 0 & \ell_f(\ell_f + 1) - (j - 1/2)(j + 1/2) \end{array}\right) \label{Hpmatrix}
\end{align}
where $\ell_b\geq j - 1/2$ and $\ell_f\geq j + 1/2$. There are $2j$ bosonic ground states at $\ell_b = j - 1/2$. This fixes the sign of the previous path integral calculation. As a further check, the quantum representations of the fermionic monopole angular momenta $(L_f)_i$ are presented in \eqref{Lfquantum}. Their classical counterparts \eqref{Lfclassical} reduce to the classical $L_i$ with $j$ replaced by $j - 1/2$ as $\mu\to\infty$.
\subsection{Shift from 1D Supersymmetric Index} \label{indexshift}
To make contact with bulk Wilson loops, we compute both the non-supersymmetric twisted partition function and the flavored Witten index
\begin{equation}
I_{\mathcal{N} = 0} = \operatorname{Tr}(e^{-\beta H}e^{izL_3}), \quad I_{\mathcal{N} = 2} = \operatorname{Tr}[(-1)^F e^{-\beta H'}e^{iz(L_f)_3}]
\label{indices}
\end{equation}
by working semiclassically in the Euclidean path integral. Let
\begin{align}
L_{B, E} &= \frac{j(\phi\dot{\phi}^\dag - \phi^\dag\dot{\phi})}{1 + |\phi|^2} + \frac{\alpha}{2}\left(\frac{\dot{\phi}}{\phi} - \frac{\dot{\phi}^\dag}{\phi^\dag}\right) + \frac{2\dot{\phi}\dot{\phi}^\dag}{\mu(1 + |\phi|^2)^2}, \label{LBE} \\
L_{F, E} &= \left[j + \frac{\phi\dot{\phi}^\dag - \phi^\dag\dot{\phi}}{\mu(1 + |\phi|^2)}\right]\frac{\psi^\dag\psi}{(1 + |\phi|^2)^2} + \frac{\psi^\dag\dot{\psi} - \dot{\psi}^\dag\psi}{2\mu(1 + |\phi|^2)^2}
\end{align}
denote the Euclideanized versions of $L_B$ and $L_F$, with dots denoting $\tau$-derivatives. Then
\begin{equation}
I_{\mathcal{N} = 0} = \int D\phi^\dag D\phi\, e^{-\int_0^\beta d\tau\, L_{B, E}}, \quad I_{\mathcal{N} = 2} = \int D\phi^\dag D\phi D\psi^\dag D\psi\, e^{-\int_0^\beta d\tau\, (L_{B, E} + L_{F, E})},
\end{equation}
with boundary conditions twisted by $e^{izL_3}$ or $e^{iz(L_f)_3}$ as appropriate. While both $I_{\mathcal{N} = 0}$ and $I_{\mathcal{N} = 2}$ are known from canonical quantization, our goal here is to introduce the localization argument via what amounts to a derivation of the Weyl character formula \eqref{su2Weylcharacter}: a sum of two terms, one from each classical saddle point, multiplied by a spin-independent prefactor coming from the one-loop determinants. For our precise normalization conventions in what follows, see Appendix \ref{quantummechanics}.
We first compute $I_{\mathcal{N} = 0}$ in the bosonic problem. Set $\mu = \infty$ and work in the $E$ gauge (not to be confused with ``$E$ for Euclidean''), where
\begin{equation}
L_{B, E} = \frac{j(\phi\dot{\phi}^\dag - \phi^\dag\dot{\phi})}{1 + |\phi|^2} + \frac{j}{2}\partial_\tau\log\left(\frac{\phi}{\phi^\dag}\right).
\label{LBE-Egauge}
\end{equation}
We restrict the path integral to field configurations satisfying $\phi(\tau + \beta) = e^{iz}\phi(\tau)$, for which
\begin{equation}
\int_0^\beta d\tau\, \partial_\tau\log\left(\frac{\phi}{\phi^\dag}\right) = 2iz.
\end{equation}
With this restriction, the action is extremized when $\phi = \phi_\text{cl}\in \{0, \infty\}$ (the two fixed points of the $L_3$ action). We see that $L_{B, E}|_0 = ijz/\beta$ and $L_{B, E}|_\infty = -ijz/\beta$. First expand around $\phi_\text{cl} = 0$ with perturbation $\Delta$: $\phi = \phi_\text{cl} + \Delta = \Delta$, where $\Delta$ satisfies the twisted boundary condition. Its mode expansion takes the form
\begin{equation}
\Delta = \frac{1}{\sqrt{\beta}}\sum_{n=-\infty}^\infty \Delta_n e^{i(2\pi n + z)\tau/\beta},
\end{equation}
from which we obtain simply
\begin{equation}
\int_0^\beta d\tau\, L_{B, E}|_{O(\Delta^2)} = j\int_0^\beta d\tau\, (\Delta\dot{\Delta}^\dag - \Delta^\dag\dot{\Delta}) = -\frac{2ij}{\beta}\sum_{n=-\infty}^\infty (2\pi n + z)|\Delta_n|^2.
\end{equation}
Thus the one-loop factor from expanding around $\phi_\text{cl} = 0$ is
\begin{equation}
Z_\text{1-loop}|_0 = \exp\left[-\sum_{n=-\infty}^\infty \log(2\pi n + z)\right] = \frac{e^{az + b}}{\sin(z/2)} = -\frac{e^{-iz/2}}{2i\sin(z/2)}
\label{abconstants}
\end{equation}
where the integration constants $a, b$ parametrize the counterterms by which different regularization schemes differ. We present several ways to fix the values of $a$ and $b$ to those written above. First, \eqref{abconstants} is the only choice consistent with canonical quantization. Second, Hurwitz zeta function regularization yields
\begin{equation}
\sum_{n=-\infty}^\infty \log(An + B) = \log(1 - e^{2\pi iB/A}) \implies Z_\text{1-loop}|_0 = \frac{1}{1 - e^{iz}} = -\frac{e^{-iz/2}}{2i\sin(z/2)}.
\end{equation}
Third, performing free-field subtraction (normalizing the functional determinant, sans zero mode) at finite $\mu$ and then taking $\mu\to\infty$ yields the same answer. Indeed, accounting for the $1/\mu$ term in \eqref{LBE}, the kinetic operator for bosonic fluctuations $\Delta$ is $-2(j\partial_\tau + \partial_\tau^2/\mu)$ where the eigenvalues of $\partial_\tau$ are $i(2\pi n + z)/\beta$, giving the regularized product
\begin{align}
Z_\text{1-loop}|_0 = \frac{1}{\det(j\partial_\tau + \partial_\tau^2/\mu)} = -\frac{\sinh(\beta\mu j/2)}{2i\sin(z/2)\sinh((\beta\mu j + iz)/2)}\xrightarrow{\beta\mu\to\infty} -\frac{e^{-(j/|j|)iz/2}}{2i\sin(z/2)}. \label{bosdeterminant}
\end{align}
Now note that taking $\phi\to 1/\phi$ leaves $L_{B, E}$ in $E$ gauge \eqref{LBE-Egauge} invariant (with the $1/\mu$ term in \eqref{LBE} being invariant by itself) while taking $z\to -z$ in the boundary condition for the path integral. Hence
\begin{equation}
Z_\text{1-loop}|_\infty = (Z_\text{1-loop}|_0)|_{z\to -z} = \frac{e^{iz/2}}{2i\sin(z/2)},
\end{equation}
and it follows that
\begin{equation}
I_{\mathcal{N} = 0} = \sum_{0, \infty} e^{-\beta L_{B, E}}Z_\text{1-loop} = \frac{e^{i(j + 1/2)z} - e^{-i(j + 1/2)z}}{2i\sin(z/2)} = \frac{\sin((j + 1/2)z)}{\sin(z/2)}.
\label{IN0}
\end{equation}
This is, of course, a special case of the Duistermaat-Heckman formula for longitudinal rotations of $S^2$, with the contribution from each fixed point weighted by the appropriate sign. As a consequence, the index is an even function of $z$ (invariant under the Weyl group $\mathbb{Z}_2$), as it must be, because the Hilbert space splits into representations of $SU(2)$.\footnote{That the index is even in $z$, as implied by canonical quantization, fixes potential multiplicative ambiguities in the path integral computation. For example, regardless of $\alpha$ in \eqref{LBE}, $L_3$ in \eqref{Lquantumwithmu} satisfies $[L_3, \phi] = \phi$ and hence implements the same twisted boundary condition $\phi(\tau + \beta) = e^{iz}\phi(\tau)$. However, to obtain an answer that is even in $z$ requires implementing the boundary condition using the operator $L_3|_{\alpha = j}$. In this way, the constant shift in $L_3$ relative to $L_3|_{\alpha = j}$ gives an overall phase of $e^{-i(j - \alpha)z}$, which combines with the classical contributions $e^{-\beta L_{B, E}}|_0 = e^{-i\alpha z}$ and $e^{-\beta L_{B, E}}|_\infty = e^{i(2j - \alpha)z}$ to produce the gauge-independent result \eqref{IN0}. To avoid this complication, we have chosen to work in the $E$ gauge from the beginning.}
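As a direct numerical cross-check against canonical quantization, the following NumPy snippet confirms that \eqref{IN0} is nothing but the character $\operatorname{Tr}_j e^{izL_3} = \sum_{m=-j}^{j} e^{imz}$:
\begin{verbatim}
# Check: sum_{m=-j}^{j} e^{i m z} = sin((j+1/2)z)/sin(z/2).
import numpy as np

z = 0.7361                              # arbitrary generic twist
for j in (0.5, 1.0, 1.5, 3.0):
    ms = np.arange(-j, j + 1)           # m = -j, ..., j
    char = np.sum(np.exp(1j*ms*z))
    assert abs(char - np.sin((j + 0.5)*z)/np.sin(z/2)) < 1e-12
\end{verbatim}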
We now compute $I_{\mathcal{N} = 2}$, keeping $\mu$ finite. In the supersymmetric problem, the $E$ gauge corresponds to choosing the K\"ahler potential $\log(1 + |\Phi|^2) - \frac{1}{2}\log|\Phi|^2$, which is invariant under $\Phi\to 1/\Phi$. In component fields, the Lagrangian is $L_{B, E} + L_{F, E}$ with $\alpha = j$. Expanding in both bosonic fluctuations $\Delta$ and fermionic fluctuations $\Xi$ ($\psi = \psi_\text{cl} + \Xi = \Xi$) gives
\begin{equation}
(L_{B, E} + L_{F, E})|_{O(\Delta^2 + \Xi^2)} = j(\Delta\dot{\Delta}^\dag - \Delta^\dag\dot{\Delta} + \Xi^\dag\Xi) + \frac{2}{\mu}\dot{\Delta}\dot{\Delta}^\dag + \frac{1}{2\mu}(\Xi^\dag\dot{\Xi} - \dot{\Xi}^\dag\Xi).
\label{quadterms}
\end{equation}
The part of the Lagrangian quadratic in fluctuations, as written above, is supersymmetric by itself.\footnote{We have that
\[
\delta(\Delta\dot{\Delta}^\dag - \Delta^\dag\dot{\Delta} + \Xi^\dag\Xi) = -\partial_\tau(\epsilon\Xi\Delta^\dag + \epsilon^\dag\Xi^\dag\Delta), \quad \textstyle \delta[2\dot{\Delta}\dot{\Delta}^\dag + \frac{1}{2}(\Xi^\dag\dot{\Xi} - \dot{\Xi}^\dag\Xi)] = \partial_\tau(\epsilon\Xi\dot{\Delta}^\dag - \epsilon^\dag\Xi^\dag\dot{\Delta})
\]
under the (global) Euclidean SUSY variations $(\delta\Delta, \delta\Xi) = (\epsilon\Xi, 2\epsilon^\dag\dot{\Delta})$ and $(\delta\Delta^\dag, \delta\Xi^\dag) = (-\epsilon^\dag\Xi^\dag, -2\epsilon\dot{\Delta}^\dag)$.} Twisted boundary conditions in the path integral are implemented by $(L_f)_3$, which satisfies $[(L_f)_3, \phi] = \phi$ and $[(L_f)_3, \psi] = \psi$. The moding for the fermionic fluctuations
\begin{equation}
\Xi = \frac{1}{\sqrt{\beta}}\sum_{n=-\infty}^\infty \Xi_n e^{i(2\pi n + z)\tau/\beta}
\end{equation}
is integral because at $z = 0$, the insertion of $(-1)^F$ would require periodic boundary conditions for fermions on the thermal circle. Hence the fermions contribute a factor of
\begin{equation}
\exp\left[\sum_{n=-\infty}^\infty \log\left(\frac{2\pi n + z}{\beta\mu} - ij\right)\right]
\end{equation}
to $Z_\text{1-loop}|_0$ (to obtain a nontrivial functional determinant, we cannot neglect the fermion kinetic term, which is why we have kept $\mu$ finite). Hurwitz zeta function regularization alone does not suffice for taking the $\beta\mu\to\infty$ limit, so we instead perform free-field subtraction (divide by a fiducial functional determinant):
\begin{equation}
\det(j + \partial_\tau/\mu) = \prod_{n=-\infty}^\infty \frac{(2\pi n + z)/\beta\mu - ij}{2\pi n/\beta\mu - ij} = \frac{\sin((i\beta\mu j - z)/2)}{\sin(i\beta\mu j/2)} \xrightarrow{\beta\mu\to\infty} e^{(j/|j|)iz/2}.
\label{ferdeterminant}
\end{equation}
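The infinite product in \eqref{ferdeterminant} converges absolutely upon pairing the modes $n$ and $-n$, so it can also be checked numerically with a symmetric truncation; a minimal NumPy sketch, with arbitrary generic values of $z$, $\beta\mu$, and $j$:
\begin{verbatim}
# Symmetric truncation of the mode product vs. the closed form
# sin((i*bmu*j - z)/2)/sin(i*bmu*j/2).
import numpy as np

z, bmu, j = 0.4, 3.0, 1.0
N = 200000
n = np.arange(-N, N + 1)
num = (2*np.pi*n + z)/bmu - 1j*j
den = (2*np.pi*n)/bmu - 1j*j
prod = np.prod(num/den)
target = np.sin((1j*bmu*j - z)/2)/np.sin(1j*bmu*j/2)
assert abs(prod - target) < 1e-3        # tail error falls off like 1/N
\end{verbatim}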
Taking $j$ positive, \eqref{ferdeterminant} reduces to a phase of $e^{iz/2}$. By similar reasoning to that in the bosonic case, we conclude that
\begin{equation}
I_{\mathcal{N} = 2} = \frac{\sin(jz)}{\sin(z/2)}.
\label{IN2}
\end{equation}
Again, this is the only answer consistent with canonical quantization. Thus in the supersymmetric theory, the one-loop shift of $j$ due to the bosons ($+1/2$) exactly cancels that due to the fermions ($-1/2$).
\subsubsection{Localization in 1D}
In both the bosonic and supersymmetric theories, direct comparison to canonical quantization shows that the semiclassical (one-loop) approximation for the index is exact. It is natural to ask why this should be so, and supersymmetry provides an answer. While the exactness in the bosonic case can only be heuristically justified by the Dirac quantization condition on $j$, it can be rigorously justified by appealing to the supersymmetric case.
In its most basic form, the localization principle starts from the fact that a Euclidean partition function deformed by a total variation of some nilpotent symmetry $\delta$ ($\delta^2 = 0$) of both the action and the measure is independent of the coefficient of this deformation:
\begin{equation}
Z(t) = \int \mathcal{D}\Phi\, e^{-S[\Phi] + t\delta V} \implies \frac{dZ(t)}{dt} = \int \mathcal{D}\Phi\, \delta\left(e^{-S[\Phi] + t\delta V}V\right) = 0.
\end{equation}
If the bosonic part of $\delta V$ is positive-semidefinite, then as $t\to\infty$, the path integral localizes to $\delta V = 0$. For a given field configuration with $\delta V = 0$, one can compute a semiclassical path integral for fluctuations on top of this background, and then integrate over all such backgrounds to obtain the exact partition function.
In our case, the quadratic terms arising from perturbation theory are already $(Q + Q^\dag)$-exact, without the need to add any localizing terms. Indeed, we compute that\footnote{Here, we again use that in Euclidean signature,
\[
\delta\phi = \delta_\epsilon\phi = \epsilon\psi, \quad \delta\phi^\dag = \delta_{\epsilon^\dag}\phi^\dag = -\epsilon^\dag\psi^\dag, \quad \delta\psi = \delta_{\epsilon^\dag}\psi = 2\epsilon^\dag\dot{\phi}, \quad \delta\psi^\dag = \delta_\epsilon\psi^\dag = -2\epsilon\dot{\phi}^\dag
\]
where $\delta\mathcal{O}\equiv [\epsilon Q + \epsilon^\dag Q^\dag, \mathcal{O}]$ and $\delta_{\epsilon, \epsilon^\dag}$ are Grassmann-even.}
\begin{align}
\delta(\delta(\phi^\dag\phi)) &= 2\epsilon^\dag\epsilon(\phi\dot{\phi}^\dag - \phi^\dag\dot{\phi} + \psi^\dag\psi), \\
\delta(\delta(\psi^\dag\psi)) &= 2\epsilon^\dag\epsilon(4\dot{\phi}\dot{\phi}^\dag + \psi^\dag\dot{\psi} - \dot{\psi}^\dag\psi).
\end{align}
Up to overall factors, these are precisely the quadratic expressions \eqref{quadterms} that we integrate over the fluctuations $\Delta, \Xi$ to compute the one-loop factors in the index $I_{\mathcal{N} = 2}$. As we take the coefficient of either the $\delta(\delta(\phi^\dag\phi))$ term or the $\delta(\delta(\psi^\dag\psi))$ term to infinity, the original Lagrangian $L_{B, E} + L_{F, E}$ becomes irrelevant for the one-loop analysis, but since these terms have the same critical points as the original Lagrangian, the result of the localization analysis coincides with that of the original Lagrangian, proving that the path integral for the latter is one-loop exact.\footnote{Note that the bosonic part of the $\delta(\delta(\phi^\dag\phi))$ term is not positive-semidefinite; indeed, it is imaginary. We are implicitly using a stationary phase argument.} Furthermore, the final result is independent of the coefficient of either term. This has a simple explanation: the regularized bosonic and fermionic functional determinants \eqref{bosdeterminant} and \eqref{ferdeterminant} have a product which is independent of $\beta\mu$, namely
\begin{equation}
\frac{\det(j + \partial_\tau/\mu)}{\det(j\partial_\tau + \partial_\tau^2/\mu)} = -\frac{1}{2i\sin(z/2)}.
\end{equation}
Hence the one-loop factor has the same limit whether $\beta\mu\to\infty$ or $\beta\mu\to 0$.\footnote{To complete the argument, one should check that the path integral measure is invariant under $Q$ (and/or $Q^\dag$). While we have assumed that this measure reduces to $D\Delta^\dag D\Delta\, D\Xi^\dag D\Xi$ for fluctuations ($D$ here should not be confused with a superderivative), the full nonperturbative path integral measure (i.e., the supersymmetrized Fubini-Study measure) must be invariant under both SUSY and global $SU(2)$.}
\subsubsection{Finite Temperature}
We have shown in Lorentzian signature and at zero temperature that integrating out the fermions in the supersymmetric theory with isospin $J$ ($2J$ bosonic ground states) yields an effective bosonic theory with isospin $j = J - 1/2$ ($2j + 1$ bosonic ground states), which is consistent with the equality of $I_{\mathcal{N} = 0}(j)$ in \eqref{IN0} and $I_{\mathcal{N} = 2}(J)$ in \eqref{IN2}.
The index, however, is computed at finite temperature. The temperature can only enter the effective action through the dimensionless combination $\beta\mu$, and this dependence must disappear in the limit $\mu\to\infty$. Therefore, the statement of the preceding paragraph must be independent of temperature. Let us show this directly at finite temperature by mimicking the index computation, thereby giving an alternative and cleaner derivation of \eqref{effectiveaction}.
We first perform a field redefinition $\psi' = \psi/(1 + |\phi|^2)$ (the associated Jacobian determinant cancels in regularization). Integrating by parts then gives
\begin{equation}
L_{F, E} = \psi'^\dag\mathcal{D}\psi', \quad \mathcal{D}\equiv \frac{\partial_\tau + \mathcal{L}_{0, E}}{\mu} + j, \quad \mathcal{L}_{0, E}\equiv \frac{\phi\dot{\phi}^\dag - \phi^\dag\dot{\phi}}{1 + |\phi|^2}.
\end{equation}
In Euclidean signature, the eigenfunctions of $\mathcal{D}$ are simple:
\begin{equation}
f(\tau) = \exp\left[(\lambda - j)\mu\tau - \int^\tau d\tau'\, \mathcal{L}_{0, E}\right].
\end{equation}
With periodic (supersymmetric) boundary conditions for the fermions, the eigenvalues are
\begin{equation}
\lambda_n = j + \frac{2\pi in + \mathcal{A}}{\beta\mu}, \quad \mathcal{A} = \int_0^\beta d\tau\, \mathcal{L}_{0, E}, \quad n\in \mathbb{Z}.
\end{equation}
Free-field subtraction then gives
\begin{equation}
\frac{\det(\partial_\tau/\mu + j + \mathcal{L}_{0, E}/\mu)}{\det(\partial_\tau/\mu + j)} = \prod_{n=-\infty}^\infty \frac{j + (2\pi in + \mathcal{A})/\beta\mu}{j + 2\pi in/\beta\mu} = \frac{e^{-\mathcal{A}/2}(1 - e^{\mathcal{A} + \beta\mu j})}{1 - e^{\beta\mu j}}.
\end{equation}
Upon taking $\mu\to\infty$, this becomes $e^{(j/|j|)\mathcal{A}/2}$, whose exponent has the correct sign because the Euclidean action appears with a minus sign in the path integral.
Note that while this computation seemingly fixes the sign outright, our regularization crucially assumes a positive sign for $\mu$. Moreover, different regularization schemes lead to different global anomalies in the effective action \cite{Dunne:1998qy, Elitzur:1985xj}. For instance, using Hurwitz zeta function regularization \emph{before} free-field subtraction would give
\begin{equation}
\frac{1 - e^{\mathcal{A} + \beta\mu j}}{1 - e^{\beta\mu j}} \xrightarrow{\mu\to\infty} e^{(1 + j/|j|)\mathcal{A}/2}.
\end{equation}
These ambiguities can be phrased as a mixed anomaly between the ``charge conjugation'' symmetry taking $z\to -z$ and invariance under global gauge transformations $z\to z + 2\pi n$ for $n\in \mathbb{Z}$ \cite{Elitzur:1985xj} (as we will see shortly, $z$ can be interpreted as a background gauge field). Indeed, in terms of the effective bosonic system, $I_{\mathcal{N} = 0}$ in \eqref{IN0} is invariant under $z\to z + 2\pi n$ for integer $j$ but picks up a sign of $(-1)^n$ for half-integer $j$. On the other hand, in an alternate regularization where $I_{\mathcal{N} = 0}\to e^{i(j + 1/2)z}I_{\mathcal{N} = 0}$, $I_{\mathcal{N} = 0}$ is no longer even in $z$ but picks up a sign of $(-1)^n$ for all $j$. To fix the sign of the shift unambiguously (i.e., such that the effective action computation is consistent with the index), we appeal to canonical quantization. In other words, in the Hamiltonian formalism, we demand that the $SU(2)$ symmetry be preserved quantum-mechanically.
\subsubsection{Background Gauge Field}
The quantities \eqref{indices} are useful because the twisted index with vanishing background gauge field is in fact equivalent to the \emph{un}twisted index with \emph{arbitrary} constant background gauge field. To see this, set $\mu = \infty$ for simplicity. To restore the background gauge field, we simply take $L_B\to L_B + j\mathcal{L}_A$, or equivalently
\begin{equation}
L_{B, E}\to L_{B, E} - j\mathcal{L}_A,
\end{equation}
with $\mathcal{L}_A$ in \eqref{LA} (note that $\mathcal{L}_{A, E} = -\mathcal{L}_A$, where the gauge field is always written in Lorentzian conventions). With $A_i = 0$, the bosonic index $I_{\mathcal{N} = 0}$ corresponds to the partition function for $L_{B, E}$ on $S^1$ with twisted boundary conditions implemented by the quantum operator $L_3$, whose classical expression is given in \eqref{Lclassical}. Clearly, $I_{\mathcal{N} = 0}$ can also be viewed as a thermal partition function for a deformed Hamiltonian with periodic boundary conditions:
\begin{equation}
I_{\mathcal{N} = 0} = \operatorname{Tr}(e^{-\beta H_z}), \quad H_z\equiv H - \frac{izL_3}{\beta} = H + \frac{ijz}{\beta}\frac{1 - |\phi|^2}{1 + |\phi|^2}.
\end{equation}
This corresponds to a path integral with the modified Lagrangian
\begin{equation}
L_{B, E} + \frac{ijz}{\beta}\frac{1 - |\phi|^2}{1 + |\phi|^2}.
\end{equation}
Setting $z = i\beta A_3$, we recover precisely $(L_{B, E} - j\mathcal{L}_A)|_{A_1 = A_2 = 0}$, so we deduce from \eqref{IN0} that
\begin{equation}
\int D\phi^\dag D\phi\, e^{-\int_0^\beta d\tau\, (L_{B, E} - j\mathcal{L}_A)|_{A_1 = A_2 = 0}} = \frac{\sinh((j + 1/2)\beta A_3)}{\sinh(\beta A_3/2)},
\end{equation}
with periodic boundary conditions implicit. But for a constant gauge field, we can always change the basis in group space to set $A_1 = A_2 = 0$: under a finite global $SU(2)$ transformation $\phi\to (a\phi + b)/(-\bar{b}\phi + \bar{a})$, the measure is invariant, the single-derivative Wess-Zumino term changes by a total derivative, and the Noether currents \eqref{Noether} rotate into each other. Letting $|A| = \sqrt{\sum_i A_i^2}$ denote the norm in group space, we conclude that
\begin{equation}
\int D\phi^\dag D\phi\, e^{-\int_0^\beta d\tau\, (L_{B, E} - j\mathcal{L}_A)} = \frac{\sinh((j + 1/2)\beta|A|)}{\sinh(\beta|A|/2)}.\footnote{While we inferred this result from the $SU(2)$ symmetry of the twisted partition function, it can also be seen directly from a semiclassical analysis of the Euclidean Lagrangian. Setting $\mu = \infty$ and imposing periodic boundary conditions, the critical points of $L_{B, E} - j\mathcal{L}_A$ occur at the constant values
\[
\phi_\text{cl} = \frac{A_3\pm |A|}{A_1 + iA_2} \implies (L_{B, E} - j\mathcal{L}_A)|_{\phi_\text{cl}} = \pm j|A|.
\]
Including the one-loop determinants around these classical contributions gives the expected answer.}
\end{equation}
Setting $L_{B, E}^\text{WZ}\equiv j\mathcal{L}_{0, E}$ and noting that $\operatorname{Tr}_j e^{\pm\beta A_i J_i} = \operatorname{Tr}_j e^{\beta|A|J_3}$, this result can be written more suggestively as
\begin{equation}
\int D\phi^\dag D\phi \exp\left[-\int_0^\beta d\tau\, (L_{B, E}^\text{WZ} - 2jA_i J_i)\right] = \operatorname{Tr}_j e^{-\beta A_i J_i}
\end{equation}
where on the left, the $J_i$ are interpreted as classical Noether currents and on the right, they are interpreted as quantum non-commuting matrices (the Hermitian generators of $SU(2)$ in the spin-$j$ representation). Hence the path integral for the 1D quantum mechanics with constant background gauge field computes a Wilson loop of spin $j$ with constant gauge field along the $S^1$, i.e., the character of the spin-$j$ representation. This identification holds even for arbitrary background gauge field because one can always choose a time-dependent gauge such that the gauge field is constant along the loop; the only invariant information is the conjugacy class of the holonomy around the loop. Indeed, a Wilson loop can be thought of as a dynamical generalization of a Weyl character.
We can now be even more explicit about the relation between the standard path-ordered definition of a Wilson loop and the coadjoint orbit description, with path ordering identified with time ordering in the quantum mechanics on the line and noncommutativity arising as a quantum effect. In this way, we derive Kirillov's character formula from the partition function of the quantum mechanics \cite{Noma:2009dq}. Take the gauge field along the $S^1$ to be time-dependent and consider the path-ordered exponential
\[
P\equiv P\exp\left[\int_0^T dt\, (A + B(t))\right] = \sum_{n=0}^\infty \int_0^T dt_1\int_0^{t_1} dt_2\cdots \int_0^{t_{n-1}} dt_n\, (A + B(t_1))\cdots (A + B(t_n))
\]
where $A, B$ are matrices and $A$ is constant. Observe that $P = P'$ where
\[
P' = \sum_{n=0}^\infty \int_0^T dt_1\int_0^{t_1} dt_2\cdots \int_0^{t_{n-1}} dt_n\, e^{(T - t_1)A}B(t_1)e^{(t_1 - t_2)A}B(t_2)\cdots e^{(t_{n-1} - t_n)A}B(t_n)e^{t_n A}
\]
because $P$ and $P'$ both satisfy the differential equation $f'(T) = (A + B(T))f(T)$ subject to the initial condition $f(0) = 1$. Now consider a Euclideanized Wilson loop wrapping the $S^1$ and split the gauge field into a fiducial time-independent part and the remainder:
\begin{equation}
\operatorname{Tr}_j P\exp\left[-\int_0^\beta d\tau\, (A_i^c + A_i^\tau)J_i\right]\equiv \operatorname{Tr}_j P\exp\left[\int_0^\beta d\tau\, (A + \mathcal{A})\right]\equiv \sum_{n=0}^\infty P_n.
\end{equation}
One can view the terms $P_n$ as operator insertions inside
\begin{equation}
\int D\phi^\dag D\phi \exp\left[-\int_0^\beta d\tau\, (L_{B, E}^\text{WZ} + 2jA)\right] = \operatorname{Tr}_j e^{\beta A}
\end{equation}
(where the implicit $J_i$ are classical on the left and quantum on the right) as follows:
\begin{align*}
P_n &= \int_0^\beta d\tau_1\int_0^{\tau_1} d\tau_2\cdots \int_0^{\tau_{n-1}} d\tau_n \operatorname{Tr}_j(e^{(\beta - \tau_1)A}\mathcal{A}(\tau_1)e^{(\tau_1 - \tau_2)A}\mathcal{A}(\tau_2)\cdots e^{(\tau_{n-1} - \tau_n)A}\mathcal{A}(\tau_n)e^{\tau_n A}) \\
&= \int_0^\beta d\tau_1\int_0^{\tau_1} d\tau_2\cdots \int_0^{\tau_{n-1}} d\tau_n\int D\phi^\dag D\phi\left[\prod_{i=1}^n -2j\mathcal{A}(\tau_i)\right]\exp\left[-\int_0^\beta d\tau\, (L_{B, E}^\text{WZ} + 2jA)\right] \\
&= \frac{1}{n!}\int D\phi^\dag D\phi\left[\prod_{i=1}^n -2j\int_0^\beta d\tau_i\, \mathcal{A}(\tau_i)\right]\exp\left[-\int_0^\beta d\tau\, (L_{B, E}^\text{WZ} + 2jA)\right].
\end{align*}
In the last step, we have used that the $\mathcal{A}(\tau_i)$ are classical quantities inside the path integral. Hence the sum exponentiates to
\begin{equation}
\operatorname{Tr}_j P\exp\left[-\int_0^\beta d\tau\, (A_i^c + A_i^\tau)J_i\right] = \int D\phi^\dag D\phi \exp\left[-\int_0^\beta d\tau\, (L_{B, E}^\text{WZ} - 2j(A_i^c + A_i^\tau)J_i)\right]
\end{equation}
where again, the $J_i$ on the left and right have different meanings.
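The identity $P = P'$ underlying this manipulation can itself be verified numerically: rewriting $P' = e^{TA}\, P\exp[\int_0^T dt\, B_I(t)]$ with the interaction-picture field $B_I(t) = e^{-tA}B(t)e^{tA}$, the following sketch (with random $2\times 2$ matrices and an arbitrary time dependence for $B$) integrates both sides and compares:
\begin{verbatim}
# Check P = P': integrate f' = (A + B(t)) f directly, and compare with
# e^{TA} times the ordered exponential of B_I(t) = e^{-tA} B(t) e^{tA}.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
B0, B1 = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
B = lambda t: B0 + np.sin(t)*B1         # arbitrary time dependence
T, steps = 1.0, 4000
dt = T/steps

P = np.eye(2)                           # direct ordered exponential
Pint = np.eye(2)                        # interaction-picture factor
for k in range(steps):
    t = (k + 0.5)*dt                    # midpoint rule, O(dt^2) accurate
    P = expm(dt*(A + B(t))) @ P         # later times act from the left
    BI = expm(-t*A) @ B(t) @ expm(t*A)
    Pint = expm(dt*BI) @ Pint
assert np.allclose(P, expm(T*A) @ Pint, atol=1e-4)
\end{verbatim}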
The above arguments can be carried over wholesale to the supersymmetric index $I_{\mathcal{N} = 2}$, since the $(L_f)_i$ rotate into each other under global $SU(2)$. The fermions modify the representation in which the trace is taken, and (as we will see) the fact that a particular linear combination of the bulk gauge field and the auxiliary scalar $\sigma$ appears in the quantum mechanics is reflected in the appearance of these fields in the supersymmetric path-ordered expression.
\section{Coupling to the Bulk} \label{couplingtobulk}
We now take a top-down approach to the quantum mechanics on the line by restricting the 3D $\mathcal{N} = 2$ multiplets to 1D $\mathcal{N} = 2$ multiplets closed under SUSY transformations that generate translations along the line, which we take to extend along the 0 direction in $\mathbb{R}^{1, 2}$ (as in the previous section, aside from Section \ref{indexshift}, we work in Lorentzian signature). We thus identify the components of the 1D vector multiplet with restrictions of the bulk fields; in principle, the 1D chiral multiplet $\Phi$ of the previous section descends from bulk super gauge transformations.
Our conventions for SUSY in $\mathbb{R}^{1, 2}$ are given in Appendix \ref{3DN2}. The linear combination of supercharges that generates translations along the line is $\Omega\equiv (Q_1 + iQ_2)/\sqrt{2}$ (any choice $\Omega = c_1 Q_1 + c_2 Q_2$ with $|c_1|^2 = |c_2|^2 = 1/2$ and $c_1 c_2^\ast$ purely imaginary would suffice), which satisfies $\{\Omega, \Omega^\dag\} = -2P_0 = 2H$ for vanishing central charge. Therefore, to restrict to the line, we choose the infinitesimal spinor parameter $\xi$ such that
\begin{equation}
\xi Q = \xi_1 Q_2 - \xi_2 Q_1 = \omega\Omega \implies (\xi_1, \xi_2) = \frac{1}{\sqrt{2}}(i\omega, -\omega)
\end{equation}
where $\omega$ is some fiducial Grassmann parameter (note that $\xi Q$ has suppressed spinor indices, while $\omega\Omega$ does not). In terms of the linear representations of the supercharges on 3D and 1D $\mathcal{N} = 2$ superspace (\eqref{diffops3D} and \eqref{diffops1D}, respectively), we compute that for superfields whose only spacetime dependence is on the 0 direction, 1D $\mathcal{N} = 2$ SUSY transformations are implemented by $\xi\mathcal{Q} - \bar{\xi}\bar{\mathcal{Q}} = \omega\hat{Q} + \bar{\omega}\hat{Q}^\dag$ with $\theta = \frac{1}{\sqrt{2}}(\theta^1 - i\theta^2)$ and $\partial_\theta = \frac{1}{\sqrt{2}}(\partial_{\theta^1} + i\partial_{\theta^2})$.
\subsection{Linearly Realized SUSY on the Line}
With all auxiliary fields necessary to realize SUSY transformations linearly, a 3D $\mathcal{N} = 2$ vector multiplet ($V = V^\dag$) takes the form
\begin{equation}
\begin{aligned}
V &= C + \theta\chi - \bar{\theta}\bar{\chi} + \frac{1}{2}\theta^2(M + iN) - \frac{1}{2}\bar{\theta}^2(M - iN) - i\theta\bar{\theta}\sigma - \theta\gamma^\mu\bar{\theta}A_\mu \\
&\phantom{==} + i\theta^2\bar{\theta}\left(\bar{\lambda} - \frac{1}{2}\gamma^\mu\partial_\mu\chi\right) - i\bar{\theta}^2\theta\left(\lambda - \frac{1}{2}\gamma^\mu\partial_\mu\bar{\chi}\right) + \frac{1}{2}\theta^2\bar{\theta}^2\left(D - \frac{1}{2}\partial^2 C\right)
\end{aligned}
\label{3DN2vector}
\end{equation}
where $V = V^a T^a$, etc., and all bosonic components are real. A 3D $\mathcal{N} = 2$ chiral multiplet ($\bar{D}_\alpha\Phi = 0$) takes the form
\begin{equation}
\Phi = A - i\theta\gamma^\mu\bar{\theta}\partial_\mu A - \frac{1}{4}\theta^2\bar{\theta}^2\partial^2 A + \sqrt{2}\theta\psi - \frac{i}{\sqrt{2}}\theta^2\bar{\theta}\gamma^\mu\partial_\mu\psi + \theta^2 F
\label{3DN2chiral}
\end{equation}
where the scalar components are complex. Bulk (3D) SUSY acts on the vector and chiral multiplets as in \eqref{3Dsusyvector} and \eqref{3Dsusychiral}. For $f$ any complex 3D fermion, it is convenient to set
\begin{equation}
f'\equiv \frac{f_1 + if_2}{\sqrt{2}}, \quad f''\equiv \frac{f_1 - if_2}{\sqrt{2}}.
\label{linenotation}
\end{equation}
We find that the 3D $\mathcal{N} = 2$ vector multiplet restricts to the following 1D $\mathcal{N} = 2$ multiplets:
\begin{itemize}
\item a 1D vector $\{-C, \chi', \sigma + A_0\}$,
\item a 1D chiral $\{(N + iM)/2, \lambda' - i\partial_0\bar{\chi}''\}$ (and its conjugate antichiral),
\item and a 1D chiral $\{(iD - \partial_0\sigma)/2, \partial_0\bar{\lambda}''\}$ (and its conjugate antichiral).
\end{itemize}
We find that the 3D $\mathcal{N} = 2$ chiral multiplet restricts to the following 1D $\mathcal{N} = 2$ multiplets:
\begin{itemize}
\item a 1D chiral $\{A, -\sqrt{2}\psi'\}$
\item and a 1D antichiral $\{F, -\sqrt{2}\partial_0\psi''\}$.
\end{itemize}
The above 1D $\mathcal{N} = 2$ multiplets transform according to \eqref{1Dsusyvector} and \eqref{1Dsusychiral} with $\epsilon = \omega$. Note that $\chi, \lambda, \psi$ in 3D each restrict to two independent complex fermions in 1D.
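As a quick check of these identifications, consider the lowest component of the chiral multiplet, whose variation $\delta A = -\sqrt{2}\xi\psi$ takes the same form as in \eqref{3Dsusypchiral} below. Using the contraction convention $\xi\psi = \xi_1\psi_2 - \xi_2\psi_1$ and $(\xi_1, \xi_2) = \frac{1}{\sqrt{2}}(i\omega, -\omega)$,
\[
\delta A = -\sqrt{2}(\xi_1\psi_2 - \xi_2\psi_1) = -\sqrt{2}\cdot\frac{\omega}{\sqrt{2}}(\psi_1 + i\psi_2) = \omega(-\sqrt{2}\psi'),
\]
consistent with $\delta\phi = \epsilon\psi$ for the 1D chiral $\{A, -\sqrt{2}\psi'\}$ with $\epsilon = \omega$.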
\subsection{Nonlinearly Realized SUSY on the Line}
The most direct way to see how $\Phi$ in the coadjoint orbit Lagrangian arises from bulk super gauge transformations would be to perform the supersymmetric analogue of the derivation of Section \ref{loopsinCS} by cutting out a tubular neighborhood of the line and examining the effect of a bulk super gauge transformation on the resulting boundary (an action is induced on the line after integrating over all such transformations and taking the radius to zero). For this derivation, it would not suffice to work in Wess-Zumino gauge.\footnote{The conditions on the chiral superfield transformation parameter $\Lambda$ to preserve Wess-Zumino gauge are that $A = -A^\ast$, $\psi = 0$, and $F = 0$, in which case the super gauge transformation $e^{2V}\to e^{\bar{\Lambda}}e^{2V}e^\Lambda$ reduces to an ordinary gauge transformation with parameter $-A$: $V\to V - i\theta\gamma^\mu\bar{\theta}\partial_\mu A + [V, A]$. These conditions preclude the possibility of inducing fermions on the line.} Therefore, let us not presuppose a gauge. In superspace, the 3D $\mathcal{N} = 2$ Chern-Simons Lagrangian
\begin{equation}
\mathcal{L}_\text{CS} = \frac{k}{4\pi i}\int d^4\theta\int_0^1 dt\, \operatorname{Tr}[V\bar{D}_\alpha(e^{-2tV}D^\alpha e^{2tV})]
\end{equation}
is invariant under (linearly realized) SUSY and reduces to \eqref{LCS} in Wess-Zumino gauge. To see the effect of a super gauge transformation (following \cite{Ivanov:1991fn}), consider more generally
\begin{equation}
\mathcal{L}_\text{CS} = \frac{k}{8\pi i}\int d^4\theta\int_0^1 dt\, \ell_\text{CS}, \quad \ell_\text{CS} = \operatorname{Tr}[(e^{-2V(t)}\partial_t e^{2V(t)})\bar{D}_\alpha(e^{-2V(t)}D^\alpha e^{2V(t)})]
\end{equation}
with boundary conditions $V(0) = 0$ and $V(1) = V$.\footnote{The integration can only be explicitly performed in the abelian case: upon integrating by parts,
\[
\ell_\text{CS} = 2\partial_t(V(t)\bar{D}_\alpha D^\alpha V(t)) \implies \mathcal{L}_\text{CS} = \frac{k}{4\pi}\int d^4\theta\, V\Sigma \xrightarrow{\text{WZ gauge}} \frac{k}{4\pi}(\epsilon^{\mu\nu\rho}A_\mu\partial_\nu A_\rho - 2i\lambda\bar{\lambda} - 2D\sigma)
\]
where $\Sigma = -i\epsilon^{\alpha\beta}\bar{D}_\alpha D_\beta V$ is the linear superfield associated to $V$. In Wess-Zumino gauge \eqref{WZgauge},
\begin{gather*}
\Sigma = -2\sigma + 2\bar{\theta}\lambda - 2\theta\bar{\lambda} + 2i\theta\bar{\theta}D - \epsilon^{\mu\nu\rho}\theta\gamma_\rho\bar{\theta}F_{\mu\nu} + i\bar{\theta}^2\theta\gamma^\mu\partial_\mu\lambda - i\theta^2\bar{\theta}\gamma^\mu\partial_\mu\bar{\lambda} + \frac{1}{2}\theta^2\bar{\theta}^2\partial^2\sigma.
\end{gather*}
The superspace Lagrangian transforms by a total spinor derivative under $V\to V + \Phi + \bar{\Phi}$, by virtue of the relations $\{D_\alpha, \bar{D}_\beta\} = -2i\gamma_{\alpha\beta}^\mu\partial_\mu$ and hence $\{D^\alpha, \bar{D}_\alpha\} = 0$.} Under a super gauge transformation $e^{2V(t)}\to e^{\bar{\Phi}(t)}e^{2V(t)}e^{\Phi(t)}$, we have $\ell_\text{CS}\to \ell_\text{CS} + \delta'\ell_\text{CS}$ where
\begin{equation}
\delta'\ell_\text{CS} = \operatorname{Tr}[D^\alpha(e^{-\bar{\Phi}(t)}\partial_t e^{\bar{\Phi}(t)}e^{2V(t)}\bar{D}_\alpha e^{-2V(t)}) + \bar{D}_\alpha(\partial_t e^{\Phi(t)}e^{-\Phi(t)}e^{-2V(t)}D^\alpha e^{2V(t)})].
\end{equation}
Obtaining an explicit expression for this total derivative (in particular, for $\Phi(t)$ when $V(t) = tV$) is prohibitively complicated. Thus, rather than imitating the derivation of \cite{Moore:1989yh}, we will arrive at a bulk interpretation of the quantum-mechanical variables $\phi, \psi$ in Wess-Zumino gauge, which partially fixes ``super gauge'' while retaining the freedom to perform ordinary gauge transformations. To this end, it is useful to work in terms of the corresponding nonlinearly realized supersymmetry (SUSY') transformations.
In Wess-Zumino gauge, a 3D $\mathcal{N} = 2$ vector multiplet takes the form
\begin{equation}
V|_\text{WZ} = -i\theta\bar{\theta}\sigma - \theta\gamma^\mu\bar{\theta}A_\mu + i\theta^2\bar{\theta}\bar{\lambda} - i\bar{\theta}^2\theta\lambda + \frac{1}{2}\theta^2\bar{\theta}^2 D.
\label{WZgauge}
\end{equation}
Bulk (3D) SUSY' acts on the vector multiplet as
\begin{align}
\delta'\sigma &= -(\xi\bar{\lambda} - \bar{\xi}\lambda), \nonumber \\
\delta' A_\mu &= i(\xi\gamma_\mu\bar{\lambda} + \bar{\xi}\gamma_\mu\lambda), \nonumber \\
\delta'\lambda &= \textstyle -i\xi D - i\gamma^\mu\xi D_\mu\sigma - \frac{1}{2}\epsilon^{\mu\nu\rho}\gamma_\rho\xi F_{\mu\nu}, \label{3Dsusypvector} \\
\delta'\bar{\lambda} &= \textstyle i\bar{\xi}D + i\gamma^\mu\bar{\xi}D_\mu\sigma - \frac{1}{2}\epsilon^{\mu\nu\rho}\gamma_\rho\bar{\xi}F_{\mu\nu}, \nonumber \\
\delta' D &= -(\xi\gamma^\mu D_\mu\bar{\lambda} - \bar{\xi}\gamma^\mu D_\mu\lambda) + [\xi\bar{\lambda} + \bar{\xi}\lambda, \sigma] \nonumber
\end{align}
where $D_\mu(\cdot) = \partial_\mu(\cdot) - i[A_\mu, (\cdot)]$ and $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu - i[A_\mu, A_\nu]$. Bulk (3D) SUSY' acts on a fundamental chiral multiplet as
\begin{align}
\delta' A &= -\sqrt{2}\xi\psi, \nonumber \\
\delta'\psi &= -\sqrt{2}\xi F + i\sqrt{2}\gamma^\mu\bar{\xi}D_\mu A + i\sqrt{2}\bar{\xi}\sigma A, \label{3Dsusypchiral} \\
\delta' F &= i\sqrt{2}\bar{\xi}\gamma^\mu D_\mu\psi - i\sqrt{2}\sigma\bar{\xi}\psi - 2i\bar{\xi}\bar{\lambda}A \nonumber
\end{align}
where $D_\mu(\cdot) = \partial_\mu(\cdot) - iA_\mu(\cdot)$. SUSY' transformations close off shell into the algebra
\begin{equation}
[\delta_\zeta', \delta_\xi'](\cdot) = -2i(\xi\gamma^\mu\bar{\zeta} + \bar{\xi}\gamma^\mu\zeta)D_\mu(\cdot) - 2i(\xi\bar{\zeta} - \bar{\xi}\zeta)\sigma\cdot (\cdot)
\end{equation}
on gauge-covariant fields where, e.g., $\sigma\cdot (\cdot)\equiv [\sigma, (\cdot)]$ for $\sigma, F_{\mu\nu}, \lambda, \bar{\lambda}, D$ and $\sigma\cdot (\cdot)\equiv \sigma(\cdot)$ for $A, \psi, F$. The above transformation laws and commutators can be obtained by dimensional reduction from 4D (set $\partial_3 = 0$).
The 3D SUSY' transformations restrict to the line as follows. We again use the notation \eqref{linenotation}. For the vector multiplet, defining the SUSY'-covariant derivative $D_0'(\cdot)\equiv D_0(\cdot) - i[\sigma, (\cdot)] = \partial_0(\cdot) - i[\sigma + A_0, (\cdot)]$, which satisfies $\delta' D_0'(\cdot) = D_0'\delta'(\cdot)$ and $D_0'\sigma = D_0\sigma$, we obtain the following (rather degenerate) restricted multiplets in 1D:
\begin{itemize}
\item a 1D vector $\{0, 0, \sigma + A_0\}$,
\item a 1D adjoint chiral $\{0, \lambda'\}$ (and its complex conjugate),
\item and a 1D adjoint chiral $\{(iD - D_0'\sigma)/2, D_0'\bar{\lambda}''\}$ (and its complex conjugate).
\end{itemize}
For a fundamental chiral multiplet, defining the SUSY'-covariant derivative $D_0'(\cdot)\equiv D_0(\cdot) - i\sigma(\cdot) = \partial_0(\cdot) - i(\sigma + A_0)(\cdot)$, which satisfies $\delta' D_0'(\cdot) = D_0'\delta'(\cdot)$, we obtain a single restricted multiplet in 1D, namely
\begin{itemize}
\item a 1D fundamental chiral $\{A, -\sqrt{2}\psi'\}$,
\end{itemize}
whose scalar component is associated with bulk gauge transformations. All of the above 1D $\mathcal{N} = 2$ chiral multiplets transform according to \eqref{1Dsusypchiral} with $\epsilon = \omega$ and $D_0\to D_0'$. Note that the putative 1D fundamental antichiral $\{F, -\sqrt{2}D_0'\psi''\}$ transforms according to
\begin{align*}
\delta' F &= -\bar{\omega}(-\sqrt{2}D_0'\psi'') - 2iA\bar{\omega}\bar{\lambda}', \\
\delta'(-\sqrt{2}D_0'\psi'') &= 2i\omega D_0' F,
\end{align*}
which is incompatible with 1D SUSY'. On a 1D chiral multiplet, the 1D SUSY' algebra is realized as
\begin{equation}
[\delta_\eta', \delta_\epsilon'](\cdot) = -2i(\epsilon\eta^\dag + \epsilon^\dag\eta)D_0'(\cdot)
\end{equation}
for $(\cdot) = \phi, \psi$, while $\delta'$ acts trivially on a 1D vector multiplet in Wess-Zumino gauge.
One would expect to write a coupled 3D-1D action
\begin{equation}
S_\text{3D-1D} = \int d^3 x\, \mathcal{L}_\text{CS} + j\int dt\, \tilde{\mathcal{L}}
\end{equation}
that is both supersymmetric and gauge-invariant (under SUSY' and ordinary gauge transformations), with the transformation of the 1D action compensating for any boundary terms induced along the line in the transformation of the 3D action. However, in Wess-Zumino gauge, $\mathcal{L}_\text{CS}$ in \eqref{LCS} has the following SUSY' variation:
\begin{equation}
\delta'\mathcal{L}_\text{CS} = \frac{k}{4\pi}\partial_\mu\operatorname{Tr}[i\epsilon^{\mu\nu\rho}(\xi\gamma_\nu\bar{\lambda} + \bar{\xi}\gamma_\nu\lambda)A_\rho + 2(\xi\gamma^\mu\bar{\lambda} - \bar{\xi}\gamma^\mu\lambda)\sigma].
\label{LCS-boundary}
\end{equation}
This induces a boundary term along the line only if the fields are singular as the inverse of the radial distance to the line. Since they are not, it suffices to show that the 1D action is itself invariant under appropriately defined 1D SUSY' transformations.
\subsection{Nonlinearly Realized SUSY in the Sigma Model}
To carry out this last step, we specialize to $SU(2)$. For the vector multiplet, the bulk and line variables are identified as $a_i = -C_i$, $\psi_i = \chi_i'$, $A_i = \sigma_i + (A_0)_i$. The quantum mechanics $\tilde{\mathcal{L}}|_\text{WZ} = \tilde{\mathcal{L}}_0 + \mathcal{L}_A$ is invariant under the 1D SUSY' transformations
\begin{align}
\delta'\phi &= \epsilon\psi, \nonumber \\
\delta'\psi &= \textstyle -2i\epsilon^\dag(\dot{\phi} - \frac{i}{2}A_1(1 - \phi^2) - \frac{1}{2}A_2(1 + \phi^2) - iA_3\phi),
\end{align}
which satisfy the algebra
\begin{align}
[\delta_\eta', \delta_\epsilon']\phi &= \textstyle -2i(\epsilon\eta^\dag + \epsilon^\dag\eta)(\dot{\phi} - \frac{i}{2}A_- + \frac{i}{2}A_+\phi^2 - iA_3\phi), \nonumber \\
[\delta_\eta', \delta_\epsilon']\psi &= -2i(\epsilon\eta^\dag + \epsilon^\dag\eta)(\dot{\psi} + i(A_+\phi - A_3)\psi), \\
[\delta_\eta', \delta_\epsilon']\psi^\dag &= -2i(\epsilon\eta^\dag + \epsilon^\dag\eta)(\dot{\psi}^\dag - i(A_-\phi^\dag - A_3)\psi^\dag). \nonumber
\end{align}
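The first of these follows by composing the transformations above: using $\delta_\epsilon'\phi = \epsilon\psi$ and the Grassmann flip $\eta\epsilon^\dag = -\epsilon^\dag\eta$,
\[
[\delta_\eta', \delta_\epsilon']\phi = \epsilon\,\delta_\eta'\psi - \eta\,\delta_\epsilon'\psi = -2i(\epsilon\eta^\dag + \epsilon^\dag\eta)\left(\dot{\phi} - \tfrac{i}{2}A_1(1 - \phi^2) - \tfrac{1}{2}A_2(1 + \phi^2) - iA_3\phi\right),
\]
which reproduces the first line in the notation $A_\pm = A_1 \pm iA_2$.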
The adjoint action of $SU(2)$ on its Lie algebra induces an action on $S^2$, which explains the appearance of the $SU(2)$ Killing vectors in $\delta'\psi$. Explicitly, at the level of scalar components, the map between the adjoint (gauge parameter) chiral superfield $S = s + \theta f - i\theta\theta^\dag\dot{s} = S^a\sigma^a/2$ and the (scalar) $SU(2)/U(1)$ coset chiral superfield $\Phi = \phi + \theta\psi - i\theta\theta^\dag\dot{\phi}$ is
\begin{equation}
\frac{1}{|s|}\left(\begin{array}{c} s^1 \\ -s^2 \\ s^3 \end{array}\right)\leftrightarrow \left(\begin{array}{c} \sin\theta\cos\varphi \\ \sin\theta\sin\varphi \\ \cos\theta \end{array}\right)\leftrightarrow \phi = \frac{s^1 - is^2}{|s| - s^3}
\label{superfieldmap}
\end{equation}
by stereographic projection (note that this only makes sense for $s$ real). In terms of angles,
\begin{equation}
e^{i\varphi} = \frac{s^1 - is^2}{\sqrt{|s|^2 - (s^3)^2}}, \quad \tan(\theta/2) = \sqrt{\frac{|s| - s^3}{|s| + s^3}}.
\end{equation}
Keep in mind that to translate between the adjoint action and linear fractional transformations, one must flip the sign of the second Killing vector: that is, one must identify $\vec{\sigma}/2$ with $(\vec{e}_1, -\vec{e}_2, \vec{e}_3)$. The action of $SU(2)$ is as expected: writing $\epsilon = \epsilon^a\sigma^a/2$ and $s = s^a\sigma^a/2$, we have with $\epsilon^i$ infinitesimal that
\begin{equation}
g = 1 + i\epsilon \implies gsg^{-1} = s + i[\epsilon, s] \implies \delta_{SU(2)}s^i = \epsilon^{ijk}s^j\epsilon^k.
\end{equation}
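For example, for a rotation about the third axis ($\epsilon^1 = \epsilon^2 = 0$), this gives $\delta s^1 = s^2\epsilon^3$, $\delta s^2 = -s^1\epsilon^3$, $\delta s^3 = 0$, so
\[
\delta\phi = \frac{\delta s^1 - i\,\delta s^2}{|s| - s^3} = i\epsilon^3\,\frac{s^1 - is^2}{|s| - s^3} = i\epsilon^3\phi,
\]
a phase rotation of the stereographic coordinate, as expected for a rotation fixing the poles.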
Under the given map \eqref{superfieldmap}, this is equivalent to $\delta_{SU(2)}\phi = \epsilon_i x_i$. Now we check that SUSY' acts correctly. Na\"ively, we have for the components of $S$ that (with $A = A^a\sigma^a/2$)
\begin{align}
\delta' s &= \epsilon f, \nonumber \\
\delta' f &= -2i\epsilon^\dag(\dot{s} - i[A, s]),
\end{align}
but to make sense of SUSY' transformations for real $s$, we must take $f$ real and $\epsilon$ purely imaginary (though $S$ itself is not real):
\begin{align}
\delta' s &= i\epsilon f, \nonumber \\
\delta' f &= -2\epsilon(\dot{s} - i[A, s]),
\end{align}
where $\epsilon, f$ are real Grassmann variables. In terms of chiral superfields, the desired map is
\begin{equation}
\Phi = \frac{S^1 - iS^2}{|S| - S^3} = \phi + \theta\psi - i\theta\theta^\dag\dot{\phi}.
\end{equation}
Upon substituting for $\delta' s^a$ and $\delta' f^a$, the $\delta'$ variations of $\phi = \phi(s^a, f^a)$ and $\psi = \psi(s^a, f^a)$ are
\begin{align}
\delta'\phi &= i\epsilon\psi, \nonumber \\
\delta'\psi &= \textstyle -2\epsilon(\dot{\phi} - \frac{i}{2}A^1(1 - \phi^2) - \frac{1}{2}A^2(1 + \phi^2) - iA^3\phi),
\end{align}
as expected (for our choice of $\epsilon$). It would be interesting to clarify the interpretation of the coadjoint orbit theory as resulting from promoting the gauge transformation parameter in the group element $g$ to a chiral superfield ($g = e^{iS^a\sigma^a/2}$).
\section{Localization in 3D} \label{3Dloc}
\subsection{Overview}
We now examine how the understanding achieved for a straight line in $\mathbb{R}^{1, 2}$ can be extended to compact Euclidean spaces. We will describe shortly the backgrounds to which our analysis generalizes, but some general considerations are as follows.
A supersymmetric field theory minimally coupled to a curved metric is invariant under variations with covariantly constant spinors. Going beyond the minimal coupling paradigm, one can preserve supersymmetry by generalizing the spinor condition $\nabla_\mu\xi = 0$ in various ways. It is convenient to start by assuming superconformal symmetry, which requires only the existence of conformal Killing spinors. Under this assumption, we construct in Appendix \ref{3DN2} the curved-space SUSY' transformations \eqref{3Dsusypvector-M3} and \eqref{3Dsusypchiral-M3}, in which the eight independent superconformal symmetries associated to $\xi, \tilde{\xi}$ generate the 3D $\mathcal{N} = 2$ superconformal algebra $\mathfrak{osp}(2|2, 2)$. These transformations turn out to be a special case of a more general set of transformations derived from the ``new minimal'' supergravity background (see \cite{Willett:2016adv}, which we follow closely in this section). Of course, not all of the backgrounds that we are interested in are conformally flat: having derived the SUSY' transformations with conformal Killing spinor parameters, we restrict to those spinors that generate a suitable non-conformal subalgebra; the resulting transformations, for suitably generalized Killing spinors, pertain to all of the backgrounds that we consider. The results coincide with the supergravity background perspective that we describe in the next subsection.
Having placed the theory supersymmetrically on a curved space, the next question is that of computability. The fact that the Chern-Simons partition function on Seifert manifolds can be written as a matrix model is well-known from \cite{Marino:2002fk, Aganagic:2002wv} (see \cite{Marino:2005sj} for a review), and has been discussed in the framework of nonabelian localization in \cite{Beasley:2009mb, Beasley:2005vf}. By now, the computation of observables in $\mathcal{N} = 2$ Chern-Simons theory via supersymmetric localization \cite{Kapustin:2009kz} is also a well-established technique. The original approach of \cite{Kapustin:2009kz} applies to SCFTs (of which $\mathcal{N} = 2$ Chern-Simons-matter theories with no superpotential, and particularly pure $\mathcal{N} = 2$ Chern-Simons theory, are examples \cite{Gaiotto:2007qi}), but it can be generalized to non-conformal theories with a $U(1)_R$ symmetry \cite{Hama:2010av, Jafferis:2010un}.
The basic approach is as follows. In Euclidean signature, we regard all fields as complexified and the path integration cycle as middle-dimensional. To use $\delta V$ as a localizing term while preserving $\delta$-invariance of the theory (and correlation functions of $\delta$-closed operators), we choose $\delta$ to square to (if not zero) a bosonic symmetry under which $V$ is invariant up to total derivatives, and we choose the bosonic part of $\delta V$ to be positive-semidefinite to ensure convergence of the path integral. On the backgrounds of interest to us, the Euclidean Yang-Mills action may be conveniently chosen to play the role of $\delta V$. The localization locus $\mathcal{F}$, comprised of field configurations that contribute to the path integral in the $t\to\infty$ limit ($t$ being the coefficient of $\delta V$), is the intersection of the BPS field configurations with the saddle points of $\delta V$ (for gauge theories, we mean that the gauge-fixed action localizes to $\mathcal{F}$; properly, one would form an extended cochain complex with respect to the nilpotent $\delta + \delta_\text{BRST}$ \cite{Pestun:2007rz}). To avoid a potential confusion about order of limits when deriving the absence of the shift in $\mathcal{N}\geq 2$ Chern-Simons theories \cite{Tanaka:2012nr}, one should integrate out the gauginos at any finite value for the coefficient of the localizing term, before taking it to infinity (which sets the gauginos to zero).
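Schematically, the $t$-independence of correlators of $\delta$-closed (bosonic) operators follows from the one-line Ward identity
\[
\frac{d}{dt}\langle\mathcal{O}\rangle_t = -\langle\mathcal{O}\,\delta V\rangle_t = -\langle\delta(\mathcal{O}V)\rangle_t = 0,
\]
valid when the measure is $\delta$-invariant; the exact answer can therefore be extracted at $t\to\infty$, where the path integral collapses onto the zeros of the bosonic part of $\delta V$.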
Passing to compact Euclidean three-manifolds allows us to compute correlation functions of nontrivially linked, mutually half-BPS Wilson loops. On $S^3$, for example, we can access links whose components are fibers of the same Hopf fibration and are therefore unknots with mutual linking number one (the simplest example is the Hopf link). One can also squash the $S^3$ to obtain the Berger sphere with $SU(2)\times U(1)$ isometry (where the Killing vector has closed orbits and points along the Seifert fiber) \cite{Hama:2011ea, Imamura:2011wg} or the ellipsoid with $U(1)\times U(1)$ isometry (where the Killing vector does not, generically, point along the fiber) \cite{Closset:2012ru}; on the latter background, one can compute expectation values of nontrivial torus knots \cite{Tanaka:2012nr, Kapustin:2013hpk}. One can also consider lens spaces \cite{Gang:2009wy} and more general Seifert manifolds \cite{Kallen:2011ny, Ohta:2012ev, Closset:2016arn, Closset:2017zgf}. With appropriate boundary conditions, localizing on a solid torus $D^2\times S^1$ \cite{Sugishita:2013jca, Yoshida:2014ssa} makes contact with supersymmetric analogues of the gluing and Heegaard decompositions usually encountered in the context of Chern-Simons theory \cite{Pasquetti:2011fj, Beem:2012mb, Fujitsuka:2013fga, Benini:2013yva}. We examine the quantum mechanics on Wilson loops in these general backgrounds.
\subsection{Supergravity Background} \label{sugra}
For the sake of a unified presentation, we first review the relevant aspects of the background supergravity formalism of \cite{Festuccia:2011ws}, following \cite{Willett:2016adv, Closset:2012ru}. The idea is that one can systematically formulate quantum field theories preserving some rigid supersymmetry as BPS configurations of off-shell supergravity theories to which they couple via a chosen multiplet containing both the supercurrent and the energy-momentum tensor. For a given theory, different supercurrent multiplets lead to different off-shell supergravities, which have different rigid limits.
In the 3D $\mathcal{N} = 2$ context, this approach allows for the construction of a scalar supercharge by partially topologically twisting the $U(1)_R$ symmetry of the $\mathcal{N} = 2$ algebra. Namely, suppose that $M^3$ admits a transversely holomorphic foliation (THF), which consists of a nowhere vanishing unit vector field $v^\mu$ and a complex structure $J$ on the two-dimensional leaves transverse to $v^\mu$ such that $\mathcal{L}_v J = 0$. Then, very roughly speaking, one may twist the spatial rotations in the ``planes'' transverse to the ``time'' direction \cite{Closset:2012ru}. This construction subsumes both the round sphere and squashed sphere backgrounds. The relevant supergravity theory is ``new minimal'' supergravity, defined as the off-shell formulation of 3D supergravity that couples to the $\mathcal{R}$-multiplet of a 3D $\mathcal{N} = 2$ quantum field theory with a $U(1)_R$ symmetry. For the supersymmetry algebra and multiplets resulting from the rigid limit of new minimal supergravity, see Section 6 of \cite{Closset:2012ru}. The bosonic fields in new minimal supergravity are the metric $g_{\mu\nu}$, the $R$-symmetry gauge field $\smash{A_\mu^{(R)}}$, a two-form gauge field $B_{\mu\nu}$, and the central charge symmetry gauge field $C_\mu$. It is convenient to let $H$ and $V_\mu$ denote the Hodge duals of the field strengths of $B_{\mu\nu}$ and $C_\mu$, respectively.
For 3D $\mathcal{N} = 2$ theories with a $U(1)_R$ symmetry, \cite{Closset:2012ru} classifies the backgrounds that preserve some supersymmetry. In particular, to preserve two supercharges of opposite $R$-charge, the three-manifold $M^3$ must admit a nowhere vanishing Killing vector $K^\mu$. If $K^\mu$ is real, then $M^3$ is necessarily an orientable Seifert manifold. An example with $K^\mu$ complex is the $S^2\times S^1$ background of \cite{Imamura:2011su} relevant to computing the superconformal index (as opposed to the topologically twisted index of \cite{Benini:2015noa}). We focus on the case of a real, nowhere vanishing Killing vector $K^\mu$, but we do not restrict the orbit to be a Seifert fiber. Under these assumptions, it suffices to consider backgrounds with $V_\mu = 0$, so that the conditions for the existence of a rigid supersymmetry are
\begin{equation}
\begin{aligned}
(\nabla_\mu - iA_\mu^{(R)})\xi &= -\frac{1}{2}H\gamma_\mu\xi, \\
(\nabla_\mu + iA_\mu^{(R)})\tilde{\xi} &= -\frac{1}{2}H\gamma_\mu\tilde{\xi}.
\end{aligned}
\label{generalizedKS}
\end{equation}
These are the generalized Killing spinor equations, under which $\xi$ and $\tilde{\xi}$ have $R$-charges $\pm 1$, respectively. The corresponding SUSY' transformations with $V_\mu = 0$ \cite{Willett:2016adv} are
\begin{align}
\delta'\sigma &= -(\xi\tilde{\lambda} - \tilde{\xi}\lambda), \nonumber \\
\delta' A_\mu &= i(\xi\gamma_\mu\tilde{\lambda} + \tilde{\xi}\gamma_\mu\lambda), \nonumber \\
\delta'\lambda &= \textstyle -i\xi(D - \sigma H) - i\gamma^\mu\xi D_\mu\sigma - \frac{i}{2}\sqrt{g}^{-1}\epsilon^{\mu\nu\rho}\gamma_\rho\xi F_{\mu\nu}, \vphantom{\tilde{\xi}} \label{V0susypvector} \\
\delta'\tilde{\lambda} &= \textstyle i\tilde{\xi}(D - \sigma H) + i\gamma^\mu\tilde{\xi}D_\mu\sigma - \frac{i}{2}\sqrt{g}^{-1}\epsilon^{\mu\nu\rho}\gamma_\rho\tilde{\xi}F_{\mu\nu}, \nonumber \\
\delta' D &= \textstyle -D_\mu(\xi\gamma^\mu\tilde{\lambda} - \tilde{\xi}\gamma^\mu\lambda) + [\xi\tilde{\lambda} + \tilde{\xi}\lambda, \sigma] + H(\xi\tilde{\lambda} - \tilde{\xi}\lambda) \nonumber
\end{align}
for the vector multiplet and
\begin{align}
\delta' A &= -\sqrt{2}\xi\psi, \nonumber \\
\delta'\psi &= \textstyle -\sqrt{2}\xi F + i\sqrt{2}\gamma^\mu\tilde{\xi}D_\mu A + i\sqrt{2}\tilde{\xi}\sigma A - i\sqrt{2}\Delta H\tilde{\xi}A, \label{V0susypchiral} \\
\delta' F &= \textstyle i\sqrt{2}D_\mu(\tilde{\xi}\gamma^\mu\psi) - i\sqrt{2}\sigma\tilde{\xi}\psi - 2i\tilde{\xi}\tilde{\lambda}A + i\sqrt{2}(\Delta - 2)H\tilde{\xi}\psi \nonumber
\end{align}
for a fundamental chiral multiplet of dimension $\Delta$ (here, the dimensions coincide with the $R$-charges, differing by a sign for antichiral multiplets). The covariant derivative is now
\begin{equation}
D_\mu = \nabla_\mu - iA_\mu - irA_\mu^{(R)}
\label{DmuwithR}
\end{equation}
where $r$ is the $R$-charge of the field on which it acts. The transformations \eqref{V0susypvector} and \eqref{V0susypchiral} furnish a representation of the algebra $\mathfrak{su}(1|1)$. Taking into account the generalized Killing spinor equations \eqref{generalizedKS} and replacing
\begin{equation}
\nabla_\mu\xi\to D_\mu\xi = (\nabla_\mu - i\smash{A_\mu^{(R)}})\xi, \quad \nabla_\mu\tilde{\xi}\to D_\mu\tilde{\xi} = (\nabla_\mu + i\smash{A_\mu^{(R)}})\tilde{\xi}
\end{equation}
in the curved-space SUSY' transformations \eqref{3Dsusypvector-M3} and \eqref{3Dsusypchiral-M3} results in precisely the $V_\mu = 0$ SUSY' transformations above. The latter transformations satisfy the same algebra as \eqref{3Dsusypvector-M3} and \eqref{3Dsusypchiral-M3}, given in Appendix \ref{3DN2}, but with parameters
\begin{equation}
U^\mu = 2i\xi\gamma^\mu\tilde{\xi}, \quad \epsilon_{\mu\nu} = 2H\sqrt{g}\epsilon_{\mu\nu\rho}\xi\gamma^\rho\tilde{\xi}, \quad \rho = 0, \quad \alpha = 2iH\xi\tilde{\xi}
\label{V0susypparameters}
\end{equation}
and $D_\mu$ as in \eqref{DmuwithR}. We also have at our disposal
\begin{gather}
\delta_\xi'\delta_{\tilde{\xi}}'\operatorname{Tr}\left(\frac{1}{2}\lambda\tilde{\lambda} - iD\sigma\right) = \xi\tilde{\xi}\mathcal{L}_\text{YM}, \label{V0LYM} \\
\mathcal{L}_\text{YM} = \operatorname{Tr}\left[\frac{1}{4}F_{\mu\nu}F^{\mu\nu} + \frac{1}{2}D_\mu\sigma D^\mu\sigma - \frac{1}{2}(D - \sigma H)^2 - i\tilde{\lambda}\gamma^\mu D_\mu\lambda + i\tilde{\lambda}[\sigma, \lambda] + \frac{i}{2}H\lambda\tilde{\lambda}\right] \nonumber
\end{gather}
as a convenient localizing term, where we have omitted the Yang-Mills coupling.
If the generalized Killing spinor equations have at least one solution, then $M^3$ admits a THF. The existence of solutions to both equations implies that $K^\mu\equiv \xi\gamma^\mu\tilde{\xi}$ is a nowhere vanishing Killing vector. Assuming that $K^\mu$ is real, we can find local (``adapted'') coordinates $(\tilde{\psi}, z, \bar{z})$ such that $K = \partial_{\tilde{\psi}}$ and
\begin{equation}
ds^2 = (d\tilde{\psi} + a(z, \bar{z})\, dz + \bar{a}(z, \bar{z})\, d\bar{z})^2 + c(z, \bar{z})^2\, dz\, d\bar{z}
\label{standardmetric}
\end{equation}
where $a$ is complex and $c$ is real (following \cite{Willett:2016adv}, we have normalized the metric such that $|K|^2 = 1$, which does not affect results for supersymmetric observables \cite{Closset:2013vra}; see also \cite{Alday:2013lba}). Coordinate patches are related by transformations of the form $\tilde{\psi}' = \tilde{\psi} + \alpha(z, \bar{z})$, $z' = \beta(z)$, $\bar{z}' = \bar{\beta}(\bar{z})$ with $\alpha$ real and $\beta$ holomorphic. We choose the vielbein
\begin{equation}
e^1 = \frac{1}{2}c(z, \bar{z})(dz + d\bar{z}), \quad e^2 = \frac{i}{2}c(z, \bar{z})(dz - d\bar{z}), \quad e^3 = d\tilde{\psi} + a(z, \bar{z})\, dz + \bar{a}(z, \bar{z})\, d\bar{z},
\label{standardvielbein}
\end{equation}
for which the corresponding spin connection (determined from $de^a + \omega^a{}_b\wedge\smash{e^b} = 0$) is
\begin{equation}
\omega^{12} = -\omega^{21} = F_a e^3 + (\omega_\text{2D})^{12}, \quad \omega^{23} = -\omega^{32} = -F_a e^1, \quad \omega^{31} = -\omega^{13} = -F_a e^2
\end{equation}
where we have defined
\begin{equation}
F_a(z, \bar{z})\equiv \frac{i(\partial_{\bar{z}}a - \partial_z\bar{a})}{c^2}, \quad (\omega_\text{2D})^{12} = -(\omega_\text{2D})^{21} = -\frac{i}{c}(\partial_z c\, dz - \partial_{\bar{z}}c\, d\bar{z})
\end{equation}
with $\omega_\text{2D}$ being the spin connection associated to $e^1, e^2$ for the 2D metric $c^2\, dz\, d\bar{z}$. Note that $F_a$ is independent of the choice of chart, while $\omega_\text{2D}$ is not. We have on spinors that
\begin{equation}
\nabla_\mu = \partial_\mu - \frac{i}{2}F_a\gamma_\mu\cdot {} + i\left(F_a e_\mu^3 + \frac{1}{2}(\omega_\text{2D})_\mu^{12}\right)\gamma^3\cdot {}
\end{equation}
(cf.\ \eqref{nablamuonspinors}), where the dots indicate matrix multiplication rather than spinor contraction (see Appendix \ref{3DN2}). Hence if we take
\begin{equation}
H = -iF_a, \quad A^{(R)} = -\left(F_a e^3 + \frac{1}{2}(\omega_\text{2D})^{12}\right),
\label{standardbkgd}
\end{equation}
then the generalized Killing spinor equations \eqref{generalizedKS} are solved by
\begin{equation}
\xi = x\left(\begin{array}{c} 1 \\ 0 \end{array}\right), \quad \tilde{\xi} = x\left(\begin{array}{c} 0 \\ 1 \end{array}\right)
\label{generalizedKSsolns}
\end{equation}
in a basis where $\gamma^a = x\sigma^a x^{-1}$ (here, as in the definition of $K^\mu$, we really mean the commuting spinors $\xi|_0$ and $\tilde{\xi}|_0$). In particular, $\xi, \tilde{\xi}$ are constant in the chosen frame, and since $x\in SU(2)$, we have both $\tilde{\xi} = \xi^\dag$ and $\xi|_0\xi^\dag|_0 = 1$. Regardless of basis, we have
\begin{equation}
K^a = (\xi|_0)\gamma^a(\tilde{\xi}|_0) = \left(\begin{array}{cc} 0 & -1 \end{array}\right)\gamma^a\left(\begin{array}{c} 0 \\ 1 \end{array}\right) = \delta^{a3},
\end{equation}
so that $K = \partial_{\tilde{\psi}}$.
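As a consistency check on the frame and spin connection above, note that
\[
de^3 = (\partial_z\bar{a} - \partial_{\bar{z}}a)\, dz\wedge d\bar{z} = -2F_a\, e^1\wedge e^2,
\]
using $e^1\wedge e^2 = -\frac{i}{2}c^2\, dz\wedge d\bar{z}$, in agreement with $de^3 = -\omega^3{}_b\wedge e^b$. In particular, $e^3\wedge de^3 = -2F_a\, e^1\wedge e^2\wedge e^3$, so $F_a$ measures the non-integrability of the planes transverse to $K$.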
\subsection{Localizing ``Seifert'' Loops}
We now describe how bulk $V_\mu = 0$ SUSY' restricts to BPS Wilson loops. To summarize, our assumption that $M^3$ admits a real, nowhere vanishing Killing vector restricts it to be a Seifert manifold. On any such manifold, it is possible to define a 3D $\mathcal{N} = 2$ supergravity background with $V_\mu = 0$, in which the Killing spinors take a simple form. Namely, we work in local coordinates $(\tilde{\psi}, z, \bar{z})$ such that $K = \smash{\partial_{\tilde{\psi}}}$ and the metric takes the standard form \eqref{standardmetric}: upon choosing the frame \eqref{standardvielbein} and the background fields $H$ and $\smash{A^{(R)}}$ as in \eqref{standardbkgd}, the generalized Killing spinor equation $D_\mu\eta = -\frac{1}{2}H\gamma_\mu\eta$ (with $D_\mu$ as in \eqref{DmuwithR}) has solutions $\eta = \smash{\xi, \tilde{\xi}}$ of $R$-charge $\pm 1$ as in \eqref{generalizedKSsolns}.
However, the integral curves of the Killing vector field may not be compact. Therefore, local coordinates adapted to the Killing vector do not necessarily define a Seifert fibration of $M^3$. Thus the Wilson loops that we consider, while supported on the Seifert manifold $M^3$, are not necessarily Seifert loops. The quotation marks in the title of this subsection serve to emphasize that the term ``Seifert loop'' (in the sense of \cite{Beasley:2009mb}) is a misnomer.
To begin, consider a Euclidean 3D $\mathcal{N} = 2$ Wilson loop along a curve $\gamma$ \cite{Gaiotto:2007qi, Kapustin:2013hpk}:
\begin{equation}
W = \operatorname{Tr}_R P\exp\left[i\oint_\gamma (A_\mu dx^\mu - i\sigma ds)\right] = \operatorname{Tr}_R P\exp\left[i\oint_\gamma d\tau\, (A_\mu\dot{x}^\mu - i\sigma|\dot{x}|)\right].
\label{3DN2loop}
\end{equation}
The BPS conditions following from \eqref{V0susypvector} take the same form on any background geometry:
\begin{equation}
n^\mu\gamma_\mu\xi - \xi = 0, \quad n^\mu\gamma_\mu\tilde{\xi} + \tilde{\xi} = 0,
\label{bpsconditions}
\end{equation}
with $n^\mu = \dot{x}^\mu/|\dot{x}|$ being the unit tangent vector to $\gamma$. They are satisfied when $n^\mu = -K^\mu$. Hence a BPS Wilson loop preserving both supercharges under consideration lies along an integral curve of $K^\mu$.\footnote{These conditions would be equivalent in Lorentzian signature: a Lorentzian 3D $\mathcal{N} = 2$ Wilson loop
\[
W = \operatorname{Tr}_R P\exp\left[i\oint_\gamma (A_\mu dx^\mu + \sigma ds)\right] = \operatorname{Tr}_R P\exp\left[i\oint_\gamma dt\, (A_\mu\dot{x}^\mu + \sigma|\dot{x}|)\right]
\]
is locally half-BPS in $\mathbb{R}^{1, 2}$ (using \eqref{3Dsusypvector}) if we choose
\[
-\frac{\dot{x}^\mu}{|\dot{x}|}\gamma_\mu\xi + i\xi = 0 \Longleftrightarrow -\frac{\dot{x}^\mu}{|\dot{x}|}\gamma_\mu\bar{\xi} - i\bar{\xi} = 0.
\]
If the line extends along the 0 direction, then these conditions reduce to $i\xi_1 = \xi_2$, as in Section \ref{couplingtobulk}.}
To determine how bulk SUSY' restricts to these BPS Wilson loops, note that even after demanding that the Killing spinors $\xi, \tilde{\xi}$ be properly normalized, we still have the freedom to introduce a relative phase between them (the overall phase is immaterial). Therefore, let us keep $\xi$ as in \eqref{generalizedKSsolns}, with $K^\mu = \xi\gamma^\mu\xi^\dag$, and write
\begin{equation}
\tilde{\xi} = \rho x\left(\begin{array}{c} 0 \\ 1 \end{array}\right) = \rho\xi^\dag, \quad |\rho| = 1.
\end{equation}
The linear combinations of 3D fermions that appear in the 1D multiplets depend on the gamma matrix conventions. For simplicity, we work in the basis $\gamma^a = \sigma^a$ ($a = 1, 2, 3$). According to the above discussion, we fix $(n^1, n^2, n^3) = (0, 0, -1)$. Restoring Grassmann parameters, we have
\begin{equation}
\left(\begin{array}{c} \xi_1 \\ \xi_2 \end{array}\right) = \left(\begin{array}{c} \omega \\ 0 \end{array}\right), \quad \left(\begin{array}{c} \tilde{\xi}_1 \\ \tilde{\xi}_2 \end{array}\right) = \left(\begin{array}{c} 0 \\ \rho\bar{\omega} \end{array}\right).
\end{equation}
To restrict the SUSY' transformations \eqref{V0susypvector} and \eqref{V0susypchiral}, we drop dependence on the 1 and 2 directions and consider only the component of the gauge field along the loop. Along the loop, frame and spacetime indices are equivalent since $e_3^3 = 1$. For the vector multiplet, it is convenient to define the 1D SUSY'-covariant derivative
\begin{equation}
D_3'(\cdot)\equiv \partial_3(\cdot) - i[A_3 + i\sigma, (\cdot)]
\label{susypcovariantvector}
\end{equation}
on \emph{both} scalars and spinors, which satisfies $\delta' D_3'(\cdot) = D_3'\delta'(\cdot)$ and $D_3'\sigma = D_3\sigma$. Note that $D_3'(\cdot)$ and $D_3(\cdot) + [\sigma, (\cdot)]$ coincide on scalars, but not on spinors; note also that in 1D, we need not diffeomorphism-covariantize the derivative acting on spinors because the spin connection is trivial. In our supergravity background and frame, we have on spinors that
\begin{equation}
\nabla_i = \partial_i - \frac{1}{2}H\gamma_i + iA_i^{(R)}\gamma_3 \implies \nabla_3\psi_{i_\perp} = \partial_3\psi_{i_\perp} - (-1)^{i_\perp}\left(\frac{1}{2}H - iA_3^{(R)}\right)\psi_{i_\perp}
\label{intermediate1}
\end{equation}
where $i_\perp = 1, 2$. Moreover, it follows from the $V_\mu = 0$ SUSY' algebra that the gauginos $\lambda, \tilde{\lambda}$ have $R$-charges $\mp 1$, so \eqref{DmuwithR} and \eqref{intermediate1} give
\begin{equation}
D_3\lambda_1 = \partial_3\lambda_1 + \frac{1}{2}H\lambda_1 - i[A_3, \lambda_1], \quad D_3\tilde{\lambda}_2 = \partial_3\tilde{\lambda}_2 - \frac{1}{2}H\tilde{\lambda}_2 - i[A_3, \tilde{\lambda}_2].
\label{intermediate2}
\end{equation}
Specializing to our specific $\xi, \tilde{\xi}$, we obtain from \eqref{V0susypvector} (using \eqref{susypcovariantvector} and \eqref{intermediate2}) that
\begin{align}
\delta'\sigma &= -(\omega\tilde{\lambda}_2 + \rho\bar{\omega}\lambda_1), \nonumber \\
\delta' A_3 &= i(\omega\tilde{\lambda}_2 + \rho\bar{\omega}\lambda_1), \nonumber \\
\delta'\lambda_1 &= -i\omega D + i\omega D_3'\sigma + i\omega\sigma H, \vphantom{\tilde{\xi}} \\
\delta'\tilde{\lambda}_2 &= i\rho\bar{\omega}D + i\rho\bar{\omega}D_3'\sigma - i\rho\bar{\omega}\sigma H, \nonumber \\
\delta' D &= -(\omega D_3'\tilde{\lambda}_2 - \rho\bar{\omega}D_3'\lambda_1), \nonumber
\end{align}
with $\delta'\lambda_2 = \delta'\tilde{\lambda}_1 = 0$. We thus obtain the following restricted multiplets in 1D:
\begin{itemize}
\item A 1D vector $\{0, 0, A_3 + i\sigma\}$ where $\delta'(A_3 + i\sigma) = 0$.
\item Two independent 1D adjoint chirals (not related by complex conjugation) $\{0, \lambda_2\}$ and $\{0, \tilde{\lambda}_1\}$ where $\delta'\lambda_2 = \delta'\tilde{\lambda}_1 = 0$.
\end{itemize}
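Both statements are immediate from the transformations above; in particular, the combination that enters the Wilson loop is SUSY'-invariant:
\[
\delta'(A_3 + i\sigma) = i(\omega\tilde{\lambda}_2 + \rho\bar{\omega}\lambda_1) - i(\omega\tilde{\lambda}_2 + \rho\bar{\omega}\lambda_1) = 0.
\]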
The remaining fields do not comprise good multiplets. Namely, we have:
\begin{itemize}
\item A putative 1D adjoint chiral $\{(D + D_3'\sigma)/2, -D_3'\tilde{\lambda}_2\}$ where
\begin{align*}
\delta'((D + D_3'\sigma)/2) &= \omega(-D_3'\tilde{\lambda}_2), \\
\delta'(-D_3'\tilde{\lambda}_2) &= -2i\rho\bar{\omega}D_3'((D + D_3'\sigma)/2) + i\rho\bar{\omega}H D_3'\sigma.
\end{align*}
\item A putative 1D adjoint antichiral $\{(D - D_3'\sigma)/2, iD_3'\lambda_1\}$ where
\begin{align*}
\delta'((D - D_3'\sigma)/2) &= -i\rho\bar{\omega}(iD_3'\lambda_1), \\
\delta'(iD_3'\lambda_1) &= 2\omega D_3'((D - D_3'\sigma)/2) - \omega HD_3'\sigma.
\end{align*}
\end{itemize}
These do not comprise good multiplets for two reasons. First, the Euclidean SUSY' transformation rules of a 1D chiral are $\delta\phi = \epsilon\psi$, $\delta\psi = 2\epsilon^\dag\dot{\phi}$, and those of a 1D antichiral are $\delta\phi' = -\epsilon^\dag\psi'$, $\delta\psi' = 2\epsilon\dot{\phi}'$; hence the above transformation rules do not close for nonzero $H$. Second, even for $H = 0$, it is impossible to choose the phase $\rho$ such that both sets of transformation rules close (if $\rho = i$, then the chiral closes while the antichiral does not, while if $\rho = -i$, then the opposite is true). We will see that for a 3D chiral to restrict to a 1D chiral, we must choose $\rho = i$. Indeed, consider a fundamental chiral multiplet of dimension $\Delta$. The corresponding 1D SUSY'-covariant derivative is
\begin{equation}
D_3'(\cdot)\equiv \partial_3(\cdot) - i(A_3 + i\sigma)(\cdot),
\label{susypcovariantchiral}
\end{equation}
which satisfies $\delta' D_3'(\cdot) = D_3'\delta'(\cdot)$. From the $V_\mu = 0$ SUSY' algebra, we see that $A, \psi, F$ have $R$-charges $-\Delta, 1 - \Delta, 2 - \Delta$, respectively, so that
\begin{equation}
\begin{aligned}
D_3 A &= (\partial_3 - iA_3 + i\Delta A_3^{(R)})A, \\
D_3\psi_1 &= \nabla_3\psi_1 - iA_3\psi_1 - i(1 - \Delta)A_3^{(R)}\psi_1
\end{aligned}
\end{equation}
with $\nabla_3\psi_1$ as in \eqref{intermediate1}. Substituting our specific $\xi, \tilde{\xi}$ into \eqref{V0susypchiral} and using \eqref{susypcovariantchiral} then gives the restricted transformation rules
\begin{align}
\delta' A &= -\sqrt{2}\omega\psi_2, \nonumber \\
\delta'\psi_1 &= -\sqrt{2}\omega F, \nonumber \\
\delta'\psi_2 &= i\sqrt{2}\rho\bar{\omega}(D_3' + i\Delta A_3^{(R)} - \Delta H)A, \\
\delta' F &= i\sqrt{2}\rho\bar{\omega}(D_3' - i(2 - \Delta)A_3^{(R)} + H)\psi_1 + 2i\rho\bar{\omega}\tilde{\lambda}_1 A - i\sqrt{2}\Delta\rho\bar{\omega}\psi_1 H. \nonumber
\end{align}
Choosing both $\rho = i$ and $\Delta = 0$, we obtain a single restricted multiplet in 1D, namely a 1D fundamental chiral $\{A, -\sqrt{2}\psi_2\}$ where
\begin{equation}
\begin{aligned}
\delta' A &= \omega(-\sqrt{2}\psi_2), \\
\delta'(-\sqrt{2}\psi_2) &= 2\bar{\omega}D_3' A.
\end{aligned}
\end{equation}
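Matching against the Euclidean 1D chiral rule $\delta\psi = 2\epsilon^\dag\dot{\phi}$ quoted earlier shows why the phase is forced: with $\Delta = 0$,
\[
\delta'(-\sqrt{2}\psi_2) = -2i\rho\,\bar{\omega}D_3' A \stackrel{!}{=} 2\bar{\omega}D_3' A \implies \rho = i.
\]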
The remaining transformation rules can be written as
\begin{align*}
\delta' F &= \bar{\omega}(-\sqrt{2}(D_3' - 2iA_3^{(R)} + H)\psi_1) - 2\bar{\omega}\tilde{\lambda}_1 A, \\
\delta'(-\sqrt{2}D_3'\psi_1) &= 2\omega D_3' F,
\end{align*}
which superficially resemble those of a 1D fundamental antichiral, but which do not close and do not have the correct relative sign.
The key point is that the transformation rules for the restricted 1D multiplets are independent of the supergravity background fields and take exactly the same form as in flat space. The fields are \emph{a priori} complex, and $D_3'$ is not Hermitian because it involves a complexified gauge field. After imposing reality conditions in the path integral, we want $\sigma$ purely imaginary, $A_3$ purely real, and $D$ purely imaginary; the fermions remain independent.
\subsubsection{Example: \texorpdfstring{$S^3$}{S3}}
Let us see how this setup works in the familiar setting of $S^3$, whose radius we take to be $\ell$. This is a special case owing to its high degree of symmetry, so we first make some comments on the geometry of $S^3$. We coordinatize $S^3$ by an element $g\in SU(2)$, which admits both left and right actions of $SU(2)$. Frame indices are identified with $\mathfrak{su}(2)$ indices in the basis $T^a = \sigma^a/2$.
\begin{equation}
\Omega_L\equiv g^{-1}dg = i(\Omega_L)^a T^a, \quad \Omega_R\equiv dgg^{-1} = i(\Omega_R)^a T^a
\end{equation}
where $(\Omega_L)^a\equiv (\Omega_L)_\mu^a\, dx^\mu$ and $(\Omega_R)^a\equiv (\Omega_R)_\mu^a\, dx^\mu$, which satisfy the Maurer-Cartan equations
\begin{equation}
d(\Omega_L)^a - \frac{1}{2}\epsilon^{abc}(\Omega_L)^b\wedge (\Omega_L)^c = 0, \quad d(\Omega_R)^a + \frac{1}{2}\epsilon^{abc}(\Omega_R)^b\wedge (\Omega_R)^c = 0.
\end{equation}
The bi-invariant Riemannian metric (i.e., the metric on $S^3$ induced by its embedding in $\mathbb{C}^2$ with coordinates $(a, b)$ via $g = (\begin{smallmatrix} a & b \\ -\bar{b} & \bar{a} \end{smallmatrix})$) is
\begin{equation}
ds^2 = \frac{\ell^2}{2}\operatorname{Tr}(dg\otimes dg^{-1}) = -\frac{\ell^2}{2}\operatorname{Tr}(\Omega_L^{\otimes 2}) = -\frac{\ell^2}{2}\operatorname{Tr}(\Omega_R^{\otimes 2}).
\end{equation}
In terms of Euler angles $\theta\in [0, \pi)$, $\phi\in [0, 2\pi)$, $\psi\in [0, 4\pi)$, we have
\begin{equation}
g = \begin{pmatrix} \cos\frac{\theta}{2}e^{i(\phi + \psi)/2} & i\sin\frac{\theta}{2}e^{i(\phi - \psi)/2} \\[5 pt] i\sin\frac{\theta}{2}e^{-i(\phi - \psi)/2} & \cos\frac{\theta}{2}e^{-i(\phi + \psi)/2} \end{pmatrix}, \mbox{ } ds^2 = \frac{\ell^2}{4}(d\theta^2 + d\phi^2 + d\psi^2 + 2\cos\theta\, d\phi\, d\psi).
\label{eulerangles}
\end{equation}
The frame one-forms are
\begin{equation}
e_L = \frac{\ell}{2}\Omega_L, \quad e_R = \frac{\ell}{2}\Omega_R,
\label{frameoneforms}
\end{equation}
which satisfy $e_\mu^a e_\nu^b\delta_{ab} = g_{\mu\nu}$. With the canonical orientation $\sqrt{\det g}\epsilon_{\theta\phi\psi} = \ell^3\sin\theta/8$ (writing $\det g$ in full to avoid confusion with the $SU(2)$ coordinate $g$), the volume form is
\begin{equation}
\sqrt{\det g}\, d\theta\wedge d\phi\wedge d\psi = -(e_L)^1\wedge (e_L)^2\wedge (e_L)^3 = -(e_R)^1\wedge (e_R)^2\wedge (e_R)^3,
\end{equation}
giving $\operatorname{vol}(S^3) = 2\pi^2\ell^3$. From the Maurer-Cartan equations, we read off the spin connection $\omega^{ab}\equiv \omega_\mu^{ab}\, dx^\mu$ in the two frames: $(\omega_L)^{ab} = \frac{1}{\ell}\epsilon^{abc}(e_L)^c$ and $(\omega_R)^{ab} = -\frac{1}{\ell}\epsilon^{abc}(e_R)^c$. Hence on spinors (see \eqref{nablamuonspinors}), we have
\begin{equation}
\nabla_\mu|_\text{LI} = \partial_\mu + \frac{i}{2\ell}\gamma_\mu\cdot {}, \quad \nabla_\mu|_\text{RI} = \partial_\mu - \frac{i}{2\ell}\gamma_\mu\cdot {}
\end{equation}
in the left- and right-invariant frames, respectively. The left- and right-invariant vector fields that generate the left and right actions of $SU(2)$ via $L^a g = -T^a g$ and $R^a g = gT^a$ are dual to $\Omega_{R, L}$ in that their components are given by the inverse vielbeins:
\begin{equation}
L_a = \frac{i\ell}{2}(e_R)_a^\mu\partial_\mu, \quad R_a = -\frac{i\ell}{2}(e_L)_a^\mu\partial_\mu,
\label{LandR}
\end{equation}
with normalizations chosen such that they satisfy the algebra $\mathfrak{su}(2)_\ell\oplus \mathfrak{su}(2)_r$:
\begin{equation}
[L_a, L_b] = i\epsilon_{abc}L_c, \quad [R_a, R_b] = i\epsilon_{abc}R_c.
\end{equation}
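The first of these follows directly from the defining action on $g$: since $L_a(L_b g) = L_a(-T_b g) = T_b T_a g$,
\[
[L_a, L_b]\, g = (T_b T_a - T_a T_b)\, g = -[T_a, T_b]\, g = -i\epsilon_{abc}T_c\, g = i\epsilon_{abc}L_c\, g,
\]
and similarly for the $R_a$.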
Their actions on $g$ imply that
\begin{equation}
L^a(\Omega_R)^b = i\epsilon^{abc}(\Omega_R)^c, \quad R^a(\Omega_L)^b = i\epsilon^{abc}(\Omega_L)^c, \quad L^a(\Omega_L)^b = R^a(\Omega_R)^b = 0.
\end{equation}
The four $\mathbb{C}$-linearly independent conformal Killing spinor fields on $S^3$ can be constructed by taking $\xi$ constant in the left-invariant frame or constant in the right-invariant frame:
\begin{equation}
\nabla_\mu\xi = \pm\frac{i}{2\ell}\gamma_\mu\cdot\xi = \mp\frac{i}{2\ell}\gamma_\mu\xi \Longleftrightarrow \xi' = \mp\frac{i}{2\ell}\xi,
\end{equation}
with two solutions for each sign. Keeping in mind that \eqref{CKScondition} means $\nabla_\mu\xi = -\gamma_\mu\cdot\xi'$, we refer to spinors with $\xi' = -\smash{\frac{i}{2\ell}\xi}$ as ``positive'' and those with $\xi' = \smash{\frac{i}{2\ell}\xi}$ as ``negative.'' These conformal Killing spinors are genuine Killing spinors, for which $\xi'\propto \xi$ (on maximally symmetric spaces with nonzero curvature, there always exists a basis for the space of conformal Killing spinors consisting of Killing spinors). For these spinors, the condition \eqref{extracondition} for closure of the superconformal algebra (involving $R = 6/\ell^2$ and $h = -9/4\ell^2$) is automatically satisfied. Using \eqref{frameoneforms}, the Killing spinors can be written more explicitly as follows. Let $\xi_0$ denote a constant spinor, and write $\gamma^a = x\sigma^a x^{-1}$ for constant $x\in SU(2)$ parametrizing the basis. In the left-invariant frame, ``positive'' and ``negative'' Killing spinors satisfy
\begin{equation}
\partial_\mu\xi = 0, \quad \partial_\mu\xi = -\frac{i}{\ell}\gamma_\mu\cdot\xi = -\frac{i}{\ell}(e_L)_\mu^a\gamma^a\cdot\xi,
\end{equation}
respectively. The first equation is solved by $\xi = \xi_0$, and the second by $\xi = xg^{-1}x^{-1}\cdot\xi_0$. In the right-invariant frame, ``positive'' and ``negative'' Killing spinors satisfy
\begin{equation}
\partial_\mu\xi = \frac{i}{\ell}\gamma_\mu\cdot\xi = \frac{i}{\ell}(e_R)_\mu^a\gamma^a\cdot\xi, \quad \partial_\mu\xi = 0,
\end{equation}
respectively. The first equation is solved by $\xi = xgx^{-1}\cdot\xi_0$, and the second by $\xi = \xi_0$.
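These solutions follow directly from the Maurer-Cartan form. For instance, in the basis $x = 1$, the ``negative'' left-invariant-frame spinor $\xi = g^{-1}\xi_0$ satisfies
\[
\partial_\mu(g^{-1}\xi_0) = -g^{-1}(\partial_\mu g)\, g^{-1}\xi_0 = -\frac{i}{2}(\Omega_L)_\mu^a\sigma^a\, g^{-1}\xi_0 = -\frac{i}{\ell}(e_L)_\mu^a\gamma^a\cdot\xi,
\]
using $g^{-1}\partial_\mu g = \frac{i}{2}(\Omega_L)_\mu^a\sigma^a$ and $e_L = \frac{\ell}{2}\Omega_L$.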
By taking the spinor parameters $\xi, \tilde{\xi}$ of the superconformal algebra both to be positive or negative (hence constant in the left- or right-invariant frame), the dilatation parameter $\rho = \frac{2i}{3}\nabla_\mu(\xi\gamma^\mu\tilde{\xi})$ vanishes and we restrict to either of the non-conformal subalgebras
\[
\mathfrak{osp}(2|2)_\text{left}\times \mathfrak{su}(2)_\text{right}, \mbox{ } \mathfrak{su}(2)_\text{left}\times \mathfrak{osp}(2|2)_\text{right}\subset \mathfrak{osp}(2|2, 2),
\]
which are $S^3$ analogues of $\mathcal{N} = 2$ Poincar\'e supersymmetry in $\mathbb{R}^3$. The Killing vector $\smash{\xi\gamma^\mu\tilde{\xi}}$ is likewise constant in the appropriate frame. The supercharges generate $\mathfrak{osp}(2|2)$ (left or right), whose bosonic subalgebra contains the $\mathfrak{su}(2)$ (left or right) isometry and the $\mathfrak{u}(1)_R$. The SUSY' transformations become
\begin{equation}
\begin{aligned}
\delta'\sigma &= -(\xi\tilde{\lambda} - \tilde{\xi}\lambda), \\
\delta' A_\mu &= i(\xi\gamma_\mu\tilde{\lambda} + \tilde{\xi}\gamma_\mu\lambda), \\
\delta'\lambda &= \textstyle -i\xi D - i\gamma^\mu\xi D_\mu\sigma - \frac{i}{2}\sqrt{g}^{-1}\epsilon^{\mu\nu\rho}\gamma_\rho\xi F_{\mu\nu}\mp \frac{1}{\ell}\sigma\xi, \vphantom{\tilde{\xi}} \\
\delta'\tilde{\lambda} &= \textstyle i\tilde{\xi}D + i\gamma^\mu\tilde{\xi}D_\mu\sigma - \frac{i}{2}\sqrt{g}^{-1}\epsilon^{\mu\nu\rho}\gamma_\rho\tilde{\xi}F_{\mu\nu}\pm \frac{1}{\ell}\sigma\tilde{\xi}, \\
\delta' D &= \textstyle -(\xi\gamma^\mu D_\mu\tilde{\lambda} - \tilde{\xi}\gamma^\mu D_\mu\lambda) + [\xi\tilde{\lambda} + \tilde{\xi}\lambda, \sigma]\mp \frac{i}{2\ell}(\xi\tilde{\lambda} - \tilde{\xi}\lambda)
\end{aligned}
\label{S3susypvector}
\end{equation}
for the vector multiplet and
\begin{equation}
\begin{aligned}
\delta' A &= -\sqrt{2}\xi\psi, \\
\delta'\psi &= \textstyle -\sqrt{2}\xi F + i\sqrt{2}\gamma^\mu\tilde{\xi}D_\mu A + i\sqrt{2}\tilde{\xi}\sigma A\pm \frac{\sqrt{2}\Delta}{\ell}\tilde{\xi}A, \\
\delta' F &= \textstyle i\sqrt{2}\tilde{\xi}\gamma^\mu D_\mu\psi - i\sqrt{2}\sigma\tilde{\xi}\psi - 2i\tilde{\xi}\tilde{\lambda}A\mp \frac{\sqrt{2}(\Delta - 1/2)}{\ell}\tilde{\xi}\psi
\end{aligned}
\label{S3susypchiral}
\end{equation}
for a fundamental chiral multiplet of dimension $\Delta$, with the top and bottom signs corresponding to $\xi, \tilde{\xi}$ positive and $\xi, \tilde{\xi}$ negative, respectively. The non-conformal $\mathcal{N} = 2$ Yang-Mills Lagrangian is invariant under this restricted supersymmetry. Indeed, we find that \eqref{V0LYM} holds, where $\mathcal{L}_\text{YM}|_{S^3}$ corresponds to taking $H = \pm i/\ell$ with the top and bottom signs as above:
\begin{equation}
\mathcal{L}_\text{YM}|_{S^3} = \operatorname{Tr}\left[\frac{1}{4}F_{\mu\nu}F^{\mu\nu} + \frac{1}{2}D_\mu\sigma D^\mu\sigma - \frac{1}{2}\left(D\mp \frac{i\sigma}{\ell}\right)^2 - i\tilde{\lambda}\gamma^\mu D_\mu\lambda + i\tilde{\lambda}[\sigma, \lambda]\mp \frac{1}{2\ell}\lambda\tilde{\lambda}\right].
\label{LYM-S3}
\end{equation}
\emph{A fortiori}, $\mathcal{L}_\text{YM}|_{S^3}$ is supersymmetric with respect to the restricted SUSY'.
Let us restrict to the subalgebra of positive (``left-invariant'') spinors and use as a localizing term $\mathcal{L}_\text{YM}|_{S^3}$ with $H = i/\ell$. A Wilson loop with unit tangent vector $n^\mu$ is locally half-BPS if we choose $n^\mu$ such that \eqref{bpsconditions} holds (these are half-BPS rather than BPS conditions due to the extra symmetry of $S^3$). Since $\xi, \tilde{\xi}$ are constant in the left-invariant frame, the loop can only be globally half-BPS if $n^\mu$ is a constant linear combination of the $(e_L)_a^\mu$: $n^\mu = n^a(e_L)_a^\mu$. The positive spinors $\xi, \tilde{\xi}$ each belong to a two-complex-dimensional space of two-component complex spinors; the Wilson loop selects a one-complex-dimensional line inside each of these spaces, so it preserves a single complex supercharge. In practice, one localizes with respect to a single real supercharge, which further restricts us to one of the two left-invariant Killing spinors selected by the Wilson loop. Regardless of the basis for the gamma matrices, the property $\sigma_2\sigma_a^\ast\sigma_2 = -\sigma_a$ implies that the BPS conditions require that $\tilde{\xi}\propto \sigma_2\cdot\xi^\ast\propto \xi^\dag$. Now define the Killing vector
\begin{equation}
K^\mu = (\xi\gamma^\mu\xi^\dag)|_0 = (\xi|_0)\gamma^\mu(\xi^\dag|_0).
\end{equation}
Normalizing $\xi$ by setting $\xi|_0\xi^\dag|_0 = 1$, we compute using Fierz identities for mutually commuting spinors (see Appendix \ref{3DN2}) that $K^\mu = (K^\mu)^\ast$, $K_\mu K^\mu = 1$, and
\begin{equation}
K^\mu\gamma_\mu\xi = -\xi, \quad K^\mu\gamma_\mu\xi^\dag = \xi^\dag.
\end{equation}
Thus for a properly normalized, positive Killing spinor $\xi$ with corresponding $K^\mu$, the BPS equation in $\xi$ for a Wilson loop with $n^\mu = -K^\mu$ is automatically satisfied, while the BPS equation in $\tilde{\xi}$ requires that $\tilde{\xi}\propto \xi^\dag$. The supercharge with respect to which we localize is defined by the choice of $\xi$ (for which $n^\mu = -K^\mu$) and $\tilde{\xi} = 0$.
To determine how bulk SUSY' restricts to half-BPS Wilson loops, note that the left-invariant SUSY' algebra on $S^3$, corresponding to \eqref{S3susypvector} and \eqref{S3susypchiral} with the top sign, takes the same form as the $V_\mu = 0$ SUSY' algebra with parameters \eqref{V0susypparameters}, but with $H = i/\ell$ and $\smash{A_\mu^{(R)}} = 0$. We work in the basis $\gamma^a = \sigma^a$. Any constant $n^a$ defines a family of Wilson loops; WLOG, we fix $(n^1, n^2, n^3) = (0, 0, -1)$, giving the normalized Killing spinors $(\xi_1, \xi_2) = (\omega, 0)$ and $(\tilde{\xi}_1, \tilde{\xi}_2) = (0, \rho\bar{\omega})$, where we have fixed a convenient phase for $\xi$ and let $\rho$ denote the relative phase between $\tilde{\xi}, \xi^\dag$. The corresponding Killing vector is $(K^1, K^2, K^3) = (0, 0, 1)$. The rest of the analysis proceeds in the same way as for a general Seifert manifold, but with $H = i/\ell$ and $\smash{A_\mu^{(R)}} = 0$: we obtain the same restricted multiplets.\footnote{Choosing $(n^1, n^2, n^3) = (0, 0, 1)$ would change the 1D gauge field to $A_3 - i\sigma$, with 1D SUSY' transformations modified such that $\delta'(A_3 - i\sigma) = 0$.}
Let us see how the general construction of Section \ref{sugra} reduces to the known one in the case of $S^3$. In a form that makes the Hopf fibration manifest \cite{Drukker:2012sr}, the metric \eqref{eulerangles} on $S^3$ is
\begin{equation}
ds^2 = \frac{\ell^2}{4}(d\theta^2 + \sin^2\theta\, d\phi^2 + (d\psi + \cos\theta\, d\phi)^2),
\end{equation}
where the first two terms are the round metric on $S^2$ of radius $\ell/2$.\footnote{The integral curves of Killing vectors on $S^3$ are great circles corresponding to fibers of the Hopf fibration $S^3\to S^2$. The fiber over a given point on $S^2$ is the locus with fixed $(\theta, \phi)$ and arbitrary $\psi$. These are great circles because such circles have $ds = \ell\, d\psi/2$ and hence circumference $2\pi\ell$, and when parametrized by arc length $s$, they are clearly integral curves of $(e_L)_3^\mu\partial_\mu = \frac{2}{\ell}\partial_\psi$ with unit tangent vector.} Defining the dimensionful variable $\tilde{\psi} = \ell\psi/2$, so that $\tilde{\psi}/\ell\in [0, 2\pi)$, we have $K = \smash{\partial_{\tilde{\psi}}}$. Stereographic projection gives the relation $z = \ell e^{i\phi}/\tan(\theta/2)$ between complex coordinates $z, \bar{z}$ and spherical coordinates $(\theta, \phi)$ on $S^2$. To go from the patch containing the origin to the patch containing $\infty$ on $S^2$, we simply take $z' = \ell^2/z$ ($\tilde{\psi}$ does not transform). These correspond to adapted coordinates \eqref{standardmetric} on $S^3$ with
\[
a = \frac{i\ell}{4z}\left(\frac{1 - |z|^2/\ell^2}{1 + |z|^2/\ell^2}\right), \mbox{ } c = \frac{1}{1 + |z|^2/\ell^2}\implies H = -\frac{i}{\ell}, \mbox{ } A^{(R)} = -\frac{d\tilde{\psi}}{\ell} - \frac{i}{4}\left(\frac{dz}{z} - \frac{d\bar{z}}{\bar{z}}\right).
\]
Although the resulting SUSY' algebra takes the same form as in the right-invariant frame, our standard frame \eqref{standardvielbein} for the adapted coordinates \eqref{standardmetric} is \emph{neither} the left- nor the right-invariant frame on $S^3$. Indeed, in the $L$ and $R$ frames, we may choose $A^{(R)} = 0$. To reproduce the analysis in the left- or right-invariant frame directly from adapted coordinates, one can work in toroidal rather than Hopf coordinates, as we do on $S_b^3$.\footnote{By contrast, \cite{Closset:2013vra} uses the following adapted Hopf coordinates for the round sphere:
\[
ds^2 = \frac{dz\, d\bar{z}}{(1 + |z|^2/\ell^2)^2} + \left(d\tilde{\tau} + \frac{i}{2\ell}\frac{\bar{z}\, dz - z\, d\bar{z}}{1 + |z|^2/\ell^2}\right)^2,
\]
where $\tilde{\tau}/\ell\in [0, 2\pi)$ and $K = \smash{\partial_{\tilde{\tau}}}$. To go from the patch containing the origin to the patch containing $\infty$, we take $z' = \ell^2/z$ and $\tilde{\tau}' = \tilde{\tau} - \frac{i\ell}{2}\log\frac{\bar{z}}{z}$.}
Finally, we recall the computation of Wilson loop expectation values in $\mathcal{N} = 2$ Chern-Simons theory on $S^3$ in the left-invariant frame, where $\mathcal{L}_\text{CS}|_{S^3}$ is as in \eqref{LCS-curved} and $\mathcal{L}_\text{YM}|_{S^3}$ is given in \eqref{LYM-S3}. For left-invariant $\xi, \tilde{\xi}$, the BPS equations are
\begin{equation}
D - \frac{i\sigma}{\ell} = 0, \quad D_\mu\sigma\pm \frac{1}{2}\sqrt{g}\epsilon_{\mu\nu\rho}F^{\nu\rho} = 0
\label{S3BPSintermediate}
\end{equation}
where the top and bottom signs correspond to setting $\delta'\lambda = 0$ or $\delta'\tilde{\lambda} = 0$ in \eqref{S3susypvector}, respectively. Since we localize with respect to a supercharge with $\tilde{\xi} = 0$, we would in principle impose only $\delta'\lambda = 0$; however, since $\mathcal{L}_\text{YM}|_{S^3}$ is both $\delta_\xi$- and $\delta_{\tilde{\xi}}$-exact, and the operators whose expectation values we compute preserve both of the corresponding supercharges, the BPS conditions really require both gaugino variations to vanish. Hence we see that the solutions to \eqref{S3BPSintermediate}, namely
\begin{equation}
F_{\mu\nu} = 0, \quad D_\mu\sigma = 0, \quad D - \frac{i\sigma}{\ell} = 0,
\label{S3BPSequations}
\end{equation}
are precisely the minima (zeros) of \eqref{LYM-S3}, and \emph{a fortiori} saddle points thereof. In other words, the localization locus coincides with the BPS locus. In fact, one sees directly that the localization locus is simply the zero locus of the Yang-Mills action because all other saddle points are infinitely suppressed in the limit of zero Yang-Mills coupling.
The BPS equations require that $A_\mu$ be pure gauge, and since $S^3$ is simply connected, the zero modes $V_0$ of the vector multiplet fields are given by
\begin{equation}
A_\mu = 0, \quad \sigma = -i\ell D = \sigma_0, \quad \lambda = \tilde{\lambda} = 0
\end{equation}
(we write $\sigma_0\equiv \hat{\sigma}_0/\ell$ for constant $\hat{\sigma}_0\in \mathfrak{g}$). The partition function can be evaluated in the $t\to\infty$ limit by expanding in fluctuations transverse to $V_0$ in field space. It reduces to an integral over the finite-dimensional space of zero modes:
\begin{equation}
Z = \int_{\mathfrak{g}}\, d\hat{\sigma}_0\, e^{-S_\text{CS}[V_0]}Z_\text{1-loop}^\text{vector}[\hat{\sigma}_0]\equiv \int_{\mathfrak{g}}\, d\hat{\sigma}_0\, Z_\text{cl}[\hat{\sigma}_0]Z_\text{1-loop}^\text{vector}[\hat{\sigma}_0],
\end{equation}
where the classical contribution is
\[
S_\text{CS}[V_0] = \frac{k}{2\pi}\int d^3 x\, \sqrt{g}\operatorname{Tr}(i\sigma_0^2/\ell) = k\pi i\operatorname{Tr}\hat{\sigma}_0^2.
\]
WLOG, we may fix $\hat{\sigma}_0$ in a Cartan subalgebra $\mathfrak{t}\subset \mathfrak{g}$ (which introduces a Jacobian factor), and fixing the residual Weyl symmetry gives
\begin{equation}
Z = \frac{1}{|\mathcal{W}|}\int_{\mathfrak{t}}\, da\, \left|\prod_\alpha \alpha(\hat{\sigma}_0)\right|Z_\text{cl}[\hat{\sigma}_0]Z_\text{1-loop}^\text{vector}[\hat{\sigma}_0]
\end{equation}
where the real scalars $a_i$ ($i = 1, \ldots, \operatorname{rank} G$) parametrize $\mathfrak{t}$, $\alpha$ ranges over all roots of $\mathfrak{g}$, and $\hat{\sigma}_0$ is a function of the $a_i$. The integration measure, including the Vandermonde determinant $|{\prod_\alpha \alpha(\hat{\sigma}_0)}|$, is manifestly Weyl-invariant. Making a Cartan decomposition of the transverse (divergenceless) component of the gauge field and of the fermions, we find that only the components in the direction of the root spaces contribute nontrivially to $Z_\text{1-loop}^\text{vector}[\hat{\sigma}_0]$. By the standard spectral analysis,\footnote{This is aided by the $L^a$ and $R^a$ in \eqref{LandR}, which generate the $SU(2)_\text{left}\times SU(2)_\text{right}$ isometry group of $S^3$. The scalar Laplacian is $\nabla^2 = -\frac{4}{\ell^2}L_a L^a = -\frac{4}{\ell^2}R_a R^a$. The Dirac operator can be written as
\[
-i\gamma^\mu\nabla_\mu|_\text{LI} = \frac{2}{\ell}S^a(2R_a + S_a), \quad -i\gamma^\mu\nabla_\mu|_\text{RI} = -\frac{2}{\ell}S^a(2L_a + S_a),
\]
where $S^a\equiv \frac{1}{2}\gamma^a$ are the generators of $SU(2)_\text{spin}$ and matrix multiplication (as opposed to spinor contraction) is understood.} one finds, up to an overall power of $\ell$,
\begin{equation}
Z_\text{1-loop}^\text{vector}[\hat{\sigma}_0] = \prod_\alpha \frac{2\sinh(\pi\alpha(\hat{\sigma}_0))}{\alpha(\hat{\sigma}_0)}.
\end{equation}
The full partition function of $\mathcal{N} = 2$ $G_k$ on $S^3$ is therefore (up to a dimensionful constant)
\begin{equation}
Z = \frac{1}{|\mathcal{W}|}\int da\, e^{-k\pi i\operatorname{Tr}\hat{\sigma}_0^2}\prod_\alpha 2\sinh(\pi\alpha(\hat{\sigma}_0))\equiv \frac{1}{|\mathcal{W}|}\int da\, e^{-k\pi i\operatorname{Tr}(a^2)}\operatorname{det}_\text{Ad} 2\sinh(\pi a).
\end{equation}
Half-BPS Wilson loops simply give insertions of $\operatorname{Tr}_R(e^{2\pi a})$:
\begin{equation}
\langle W_{R_1}\cdots W_{R_n}\rangle = \frac{1}{|\mathcal{W}|Z}\int da\, e^{-k\pi i\operatorname{Tr}(a^2)}\operatorname{Tr}_{R_1}(e^{2\pi a})\cdots \operatorname{Tr}_{R_n}(e^{2\pi a})\operatorname{det}_\text{Ad} 2\sinh(\pi a).
\label{matrixmodelS3}
\end{equation}
The Weyl character formula leaves us with a sum of Gaussian integrals, which can be evaluated to reproduce the $\mathcal{N} = 0$ result, up to a framing phase and a level shift.
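As a quick illustration (a sketch; for $G = SU(2)$ we parametrize $a = \operatorname{diag}(\hat{a}, -\hat{a})$, so that $\operatorname{Tr}(a^2) = 2\hat{a}^2$ and, up to a constant sign, the measure is $4\sinh^2(2\pi\hat{a})$), write the spin-$j$ insertion as $\operatorname{Tr}_j(e^{2\pi a}) = (x^{2j + 1} - x^{-(2j + 1)})/(x - x^{-1})$ with $x = e^{2\pi\hat{a}}$. One factor of $x - x^{-1}$ cancels against the measure, leaving four Gaussians, and completing the square gives
\[
\langle W_j\rangle = e^{-2\pi ij(j + 1)/k}\,\frac{\sin((2j + 1)\pi/k)}{\sin(\pi/k)},
\]
namely the $\mathcal{N} = 0$ quantum dimension at shifted level ($k_{\mathcal{N} = 0} + 2\to k$ for $SU(2)$), dressed by the framing phase $e^{-2\pi ih_j}$ with $h_j = j(j + 1)/k$.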
\subsubsection{Example: \texorpdfstring{$S_b^3$}{Sb3}}
The backgrounds that we consider include $U(1)$ fibrations over arbitrary Riemann surfaces $\Sigma$, where the integral curves of $K$ are the Seifert fibers. When $\Sigma$ has genus zero, the base has additional isometries (this comment applies also to genus one), allowing for squashed sphere backgrounds where $K$ generates an isometry that does not point along the Seifert fiber. Consider parametrizing the round sphere by toroidal coordinates, which manifest $S^3$ as a torus fibered over a closed interval \cite{Willett:2016adv, Drukker:2012sr}. As in the previous section, we regard $S^3$ as the locus $(u, v)\in \mathbb{C}^2$ satisfying $|u|^2 + |v|^2 = 1$, but rather than parametrizing $u, v$ using Euler angles, we use coordinates $\chi\in [0, \pi/2]$ and $\phi_1, \phi_2\in [0, 2\pi)$:
\begin{equation}
u = \cos\chi e^{i\phi_1}, \quad v = \sin\chi e^{i\phi_2}, \quad ds^2 = \ell^2(d\chi^2 + \cos^2\chi\, d\phi_1^2 + \sin^2\chi\, d\phi_2^2).
\label{toroidal}
\end{equation}
This metric clearly admits two independent $U(1)$ isometries. More generally, consider the following squashed-sphere metric, which preserves a $U(1)\times U(1)$ subgroup of the $SU(2)\times SU(2)$ isometry group of $S^3$:
\begin{equation}
ds^2 = f(\chi)^2\, d\chi^2 + \ell_1^2\cos^2\chi\, d\phi_1^2 + \ell_2^2\sin^2\chi\, d\phi_2^2.
\label{Sb3metric}
\end{equation}
Here, $\ell_1, \ell_2 > 0$ and $f$ is a smooth, positive function on $[0, \pi/2]$ satisfying $f(0) = \ell_2$ and $f(\pi/2) = \ell_1$ to avoid conical singularities along the $\phi_1$ and $\phi_2$ circles, respectively. The squashing parameter of this geometry that enters supersymmetric observables is $b\equiv \sqrt{\ell_1/\ell_2}$. For example, the metric on $S_b^3$ induced by its embedding as the locus $(u, v)\in \mathbb{C}^2$ satisfying $b^{-2}|u|^2 + b^2|v|^2 = 1$ where $u = b\cos\chi e^{i\phi_1}$ and $v = b^{-1}\sin\chi e^{i\phi_2}$ takes the above form, with
\[
f(\chi) = \sqrt{\ell_1^2\sin^2\chi + \ell_2^2\cos^2\chi}
\]
(cf.\ \cite{Hama:2011ea, Kapustin:2013hpk}). A generic Killing vector is a linear combination of the $U(1)$ generators: $K = \alpha\partial_{\phi_1} + \beta\partial_{\phi_2}$. To use the 3D $\mathcal{N} = 2$ supergravity background described above, we take
\begin{equation}
|K|^2 = \alpha^2\ell_1^2\cos^2\chi + \beta^2\ell_2^2\sin^2\chi = 1 \Longleftrightarrow K = \ell_1^{-1}\partial_{\phi_1} + \ell_2^{-1}\partial_{\phi_2}
\label{Sb3Killing}
\end{equation}
and define local coordinates ($z = x + iy$)
\begin{equation}
x = \int^\chi \frac{f(\chi')}{\sin\chi'\cos\chi'}\, d\chi', \quad y = \ell_1\phi_1 - \ell_2\phi_2, \quad \tilde{\psi} = \ell_1\phi_1\cos^2\chi + \ell_2\phi_2\sin^2\chi,
\end{equation}
in terms of which the metric \eqref{Sb3metric} on $S_b^3$ can be written in the standard form \eqref{standardmetric} with
\begin{equation}
a = 2\sin\chi\cos\chi(\ell_1\phi_1 - \ell_2\phi_2)\, d\chi, \quad c = \sin\chi\cos\chi.
\end{equation}
Note that by (locally) inverting the relation between $x$ and $\chi$, we can write $a = yF(x)\, dx$ for some $F$ and $c = G(x)$ for some $G$; hence $a, c$ are independent of $\tilde{\psi}$ and $d\tilde{\psi}$, as required. In addition, we see that $K = \partial_{\tilde{\psi}}$. In these coordinates, the background supergravity formalism \eqref{standardvielbein} yields the vielbein
\begin{equation}
e^1 = f(\chi)\, d\chi, \,\,\, e^2 = -\sin\chi\cos\chi(\ell_1\, d\phi_1 - \ell_2\, d\phi_2), \,\,\, e^3 = \ell_1\cos^2\chi\, d\phi_1 + \ell_2\sin^2\chi\, d\phi_2.
\label{Sb3vielbein}
\end{equation}
We compute that
\begin{equation}
F_a = -\frac{1}{f(\chi)}, \quad (\omega_\text{2D})^{12} = -(\omega_\text{2D})^{21} = \frac{\cos^2\chi - \sin^2\chi}{f(\chi)}(\ell_1\, d\phi_1 - \ell_2\, d\phi_2).
\end{equation}
The corresponding background field configuration is
\begin{equation}
H = \frac{i}{f}, \quad V_\mu = 0, \quad A^{(R)} = \frac{\ell_1\, d\phi_1 + \ell_2\, d\phi_2}{2f}.
\label{Sb3bkgd}
\end{equation}
We may fix a gauge in which $A^{(R)}$ is regular everywhere (i.e., such that when the $\phi_i$ circle shrinks, the coefficient of $d\phi_i$ vanishes):
\begin{equation}
A^{(R)} = \frac{1}{2}\left(\frac{\ell_1}{f} - 1\right)d\phi_1 + \frac{1}{2}\left(\frac{\ell_2}{f} - 1\right)d\phi_2.
\end{equation}
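Indeed, at $\chi = 0$ it is the $\phi_2$ circle that shrinks, and $f(0) = \ell_2$ ensures that the coefficient $\frac{1}{2}(\ell_2/f - 1)$ of $d\phi_2$ vanishes there; likewise, at $\chi = \pi/2$ the $\phi_1$ circle shrinks and $f(\pi/2) = \ell_1$ kills the coefficient of $d\phi_1$.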
Performing the corresponding $R$-symmetry gauge transformation on the Killing spinors in \eqref{generalizedKSsolns}, we have in the chosen frame and gauge that
\begin{equation}
\xi = e^{i(\phi_1 + \phi_2)/2}\left(\begin{array}{c} 1 \\ 0 \end{array}\right), \quad \tilde{\xi} = e^{-i(\phi_1 + \phi_2)/2}\left(\begin{array}{c} 0 \\ 1 \end{array}\right).
\label{Sb3Killingspinors}
\end{equation}
For generic $b$, the metric in $\tilde{\psi}, z, \bar{z}$ coordinates does not define an $S^1$ fibration because the coordinate $\tilde{\psi}$ is not periodic: namely, we see from \eqref{Sb3Killing} that the integral curves of $K$ do not close on the tori at $\chi\neq 0, \pi/2$ unless $b^2 = \ell_1/\ell_2$ is rational. If $b^2 = m/n$ with $m, n$ relatively prime integers, then the integral curves for $\chi\neq 0, \pi/2$ are $(n, m)$ torus knots wrapping the $\phi_1$ cycle $n$ times and the $\phi_2$ cycle $m$ times.\footnote{In general, the $(n, m)$ torus link has $\operatorname{gcd}(n, m)$ components.} These curves have length
\[
2\pi(\ell_1^2 n^2\cos^2\chi + \ell_2^2 m^2\sin^2\chi)^{1/2} = 2\pi\sqrt{\ell_1\ell_2 mn},
\]
regardless of $\chi$: indeed, $b^2 = \ell_1/\ell_2 = m/n$ implies $\ell_1^2 n^2 = \ell_2^2 m^2 = \ell_1\ell_2 mn$, so the $\chi$-dependence under the square root cancels. On the other hand, at $\chi = 0, \pi/2$, where the tori degenerate, the integral curves of $K$ are circles (regardless of $b$) of lengths $2\pi\ell_1$ and $2\pi\ell_2$, respectively. One can insert supersymmetric Wilson loops along these knots or circles.
The round-sphere limit is given by $f(\chi) = \ell_1 = \ell_2 = \ell$. In this limit, we have $H = i/\ell$ and $V_\mu = \smash{A_\mu^{(R)}} = 0$, and both the generalized Killing spinor equations and the $V_\mu = 0$ SUSY' transformations reduce to those for the left-invariant frame on $S^3$. Note that the $\mathfrak{su}(1|1)$ algebra contains half of the Killing spinors that generate $\mathfrak{osp}(2|2)_\text{left}$ on $S^3$; the existence of two additional left-invariant Killing spinors is due to the extra symmetry of $S^3$. Recall that on $S^3$, the available spinors in $\mathfrak{osp}(2|2)_\text{left}$ are halved by the BPS condition for a half-BPS Wilson loop; it then suffices to use one of the two remaining supercharges for localization.
As on $S^3$, the vector multiplet localization locus on $S_b^3$ can be read off from $\mathcal{L}_\text{YM}$ \eqref{V0LYM}:
\begin{equation}
\sigma = D/H = \sigma_0\equiv \hat{\sigma}_0/\ell,
\end{equation}
with $\hat{\sigma}_0\in \mathfrak{g}$ constant and all other fields vanishing (here, $\ell\equiv \sqrt{\ell_1\ell_2}$ and $H = i/f$). Unlike on $S^3$, the spectra of the relevant differential operators are in general infeasible to compute exactly. Nonetheless, cohomological \cite{Hama:2011ea, Hosomichi:2014hja} or index theory \cite{Drukker:2012sr, Alday:2013lba} arguments that identify and keep only unpaired bosonic and fermionic eigenmodes can be used to extract the one-loop determinants in this situation (similar cancellations arise due to BRST symmetry in topological field theories \cite{Blau:1993tv}). It suffices to show that for a chiral multiplet of $R$-charge $r$ transforming in the representation $R$ of $G$,
\begin{equation}
Z_\text{1-loop}^\text{chiral}[\hat{\sigma}_0] = \prod_{\rho\in R} Z_\text{chiral}^r[\rho(\hat{\sigma}_0)], \quad Z_\text{chiral}^r[\rho(\hat{\sigma})]\equiv s_b(iQ(1 - r)/2 - \rho(\hat{\sigma}))
\end{equation}
where the product is taken over the weights $\rho$ of $R$, $Q = b + b^{-1}$, and $s_b$ is the double sine function. The corresponding result for the vector multiplet then follows from a standard Higgsing argument \cite{Willett:2016adv, Benini:2015noa}. First observe that (up to constant factors)
\begin{equation}
Z_\text{chiral}^r[\rho(\hat{\sigma})]Z_\text{chiral}^{2-r}[-\rho(\hat{\sigma})] = 1
\label{Higgsing1}
\end{equation}
because two chirals that may be coupled through a superpotential mass term do not contribute to the partition function.\footnote{In this case, the formula reduces to the identity $s_b(x)s_b(-x) = e^{i\pi(1 - Q^2/2)/6}$.} Now suppose that we have fixed $\hat{\sigma}_0$ to lie in a Cartan subalgebra of $\mathfrak{g}$, which incurs a Vandermonde determinant $\mathcal{V}[\hat{\sigma}_0]$. Write
\begin{equation}
\mathcal{V}[\hat{\sigma}_0]Z_\text{1-loop}^\text{vector}[\hat{\sigma}_0]\equiv \tilde{Z}_\text{1-loop}^\text{vector}[\hat{\sigma}_0] = \prod_\alpha \tilde{Z}_\text{vector}[\alpha(\hat{\sigma}_0)]
\end{equation}
where the product is taken over roots $\alpha$ of $G$. As usual, the Cartan components of the transverse vector multiplet modes contribute only constant factors. The contribution of a mode in the direction of a root $\alpha$ is determined by the Higgs mechanism to be
\begin{equation}
\tilde{Z}_\text{vector}[\alpha(\hat{\sigma})]Z_\text{chiral}^0[-\alpha(\hat{\sigma})] = 1
\label{Higgsing2}
\end{equation}
(again, up to constant factors), since a massive vector multiplet contributes trivially to the partition function: the massless chiral multiplet that is eaten has no flavor or $R$-charges, so that its VEV breaks no global symmetries. From \eqref{Higgsing1} and \eqref{Higgsing2}, we infer that
\begin{equation}
\tilde{Z}_\text{vector}[\alpha(\hat{\sigma})] = Z_\text{chiral}^2[\alpha(\hat{\sigma})] \implies \tilde{Z}_\text{1-loop}^\text{vector}[\hat{\sigma}_0] = \prod_\alpha Z_\text{chiral}^2[\alpha(\hat{\sigma}_0)].
\end{equation}
Specializing to $S_b^3$, this gives\footnote{Here, we use $s_b(x) = \prod_{p, q = 0}^\infty \frac{pb + qb^{-1} + Q/2 - ix}{pb + qb^{-1} + Q/2 + ix}$ and zeta function regularization (see, e.g., \cite{Hama:2011ea}).}
\begin{equation}
\tilde{Z}_\text{1-loop}^\text{vector}[\hat{\sigma}_0] = \prod_\alpha s_b(-iQ/2 - \alpha(\hat{\sigma}_0)) = \prod_{\alpha > 0} 4\sinh(\pi b\alpha(\hat{\sigma}_0))\sinh(\pi b^{-1}\alpha(\hat{\sigma}_0)).
\end{equation}
When $b = 1$, this reduces to the expected result on $S^3$ (up to phases):
\begin{equation}
\tilde{Z}_\text{1-loop}^\text{vector}[\hat{\sigma}_0] = \prod_\alpha 2\sinh(\pi\alpha(\hat{\sigma}_0))\Longleftrightarrow Z_\text{1-loop}^\text{vector}[\hat{\sigma}_0] = \prod_\alpha \frac{2\sinh(\pi\alpha(\hat{\sigma}_0))}{\alpha(\hat{\sigma}_0)}.
\end{equation}
On $S_b^3$, as on $S^3$, the classical contribution is $Z_\text{cl}[\hat{\sigma}_0] = e^{-S_\text{CS}[V_0]}$ where
\[
S_\text{CS}[V_0] = \frac{k}{2\pi\ell^2}\operatorname{Tr}(\hat{\sigma}_0^2)\int d^3 x\, \sqrt{g}H = \frac{2\pi i\ell_1\ell_2 k}{\ell^2}\operatorname{Tr}(\hat{\sigma}_0^2)\int_0^{\pi/2} d\chi\, \cos\chi\sin\chi = k\pi i\operatorname{Tr}(\hat{\sigma}_0^2).
\]
Thus the partition function $Z = \frac{1}{|\mathcal{W}|}\int_{\mathfrak{t}}\, d\hat{\sigma}_0\, Z_\text{cl}[\hat{\sigma}_0]\tilde{Z}_\text{1-loop}^\text{vector}[\hat{\sigma}_0]$ of pure Chern-Simons theory is
\begin{equation}
Z_{S_b^3} = \frac{1}{|\mathcal{W}|}\int_{\mathfrak{t}}\, d\hat{\sigma}_0\, e^{-k\pi i\operatorname{Tr}\hat{\sigma}_0^2}\prod_{\alpha > 0} 4\sinh(\pi b\alpha(\hat{\sigma}_0))\sinh(\pi b^{-1}\alpha(\hat{\sigma}_0)).
\end{equation}
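For instance (a sketch, in the same $SU(2)$ conventions as on $S^3$, with $\hat{\sigma}_0 = \operatorname{diag}(\hat{a}, -\hat{a})$), this reads
\[
Z_{S_b^3}^{SU(2)_k} = \frac{1}{2}\int d\hat{a}\, e^{-2\pi ik\hat{a}^2}\, 4\sinh(2\pi b\hat{a})\sinh(2\pi b^{-1}\hat{a}) \propto e^{-\frac{i\pi(b^2 + b^{-2})}{2k}}\sin(\pi/k),
\]
where the last step again follows by completing the square: up to an overall phase, the answer is independent of the squashing, as expected for a topological theory.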
Again by \eqref{bpsconditions}, BPS Wilson loops lie along integral curves of $K^\mu$ and thus correspond to insertions of
\[
W[\hat{\sigma}_0] = \operatorname{Tr}_R\exp\left(\frac{\hat{\sigma}_0}{\ell}\oint_\gamma ds\right) = \sum_{\rho\in R} \begin{cases} e^{2\pi b\rho(\hat{\sigma}_0)} & \text{if $\chi = 0$}, \\ e^{2\pi b^{-1}\rho(\hat{\sigma}_0)} & \text{if $\chi = \pi/2$}, \\ e^{2\pi\sqrt{mn}\rho(\hat{\sigma}_0)} & \text{if $b^2 = m/n$, $\gcd(m, n) = 1$, $\chi\neq 0, \pi/2$} \end{cases}
\]
in the partition function. Clearly, this matrix model reduces to that for $S^3$ when $b = 1$. One can use this matrix model to calculate the Jones polynomial for torus knots and torus links \cite{Tanaka:2012nr} (see also \cite{Beasley:2009mb}). To do so, one must account for supersymmetric framing: the self-linking number at generic $\chi$ is given by the linking number of nested torus knots, while circular Wilson loops at $\chi = 0, \pi/2$ have fractional self-linking number (as seen from the phase relative to the expectation value of an unknot in bosonic Chern-Simons). Explicit formulas for torus knots in $S^3$ can be found in \cite{Marino:2005sj}.
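For example (a sketch, in the $SU(2)$ conventions used above, $\hat{\sigma}_0 = \operatorname{diag}(\hat{a}, -\hat{a})$), a spin-$j$ loop along an $(n, m)$ torus knot inserts
\[
W_j[\hat{\sigma}_0] = \sum_{s = -j}^{j} e^{4\pi\sqrt{mn}\, s\hat{a}},
\]
and the resulting sum of Gaussians assembles into the known torus knot invariants once the supersymmetric framing is taken into account.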
\subsubsection{Non-Example: (Squashed) Lens Spaces}
One might wish to apply the formalism of Section \ref{sugra} to more general Seifert manifolds (for useful references, see \cite{Beasley:2005vf, Beasley:2009mb, Kallen:2011ny}). Lens spaces provide a nice class of examples: these are Seifert manifolds that admit infinitely many distinct Seifert fibrations.
Two convenient definitions of the lens space $L(p, q)$, for relatively prime integers $p, q$, are as follows. First, it is the quotient of $S^3$, namely the locus $(u, v)\in \mathbb{C}^2$ where $|u|^2 + |v|^2 = 1$, by the free $\mathbb{Z}_p$ action
\[
(u, v)\mapsto (e^{2\pi i/p}u, e^{-2\pi iq/p}v).
\]
Second, it is the result of gluing two solid tori by a homeomorphism $g : T^2\to T^2$ of their boundaries, specified up to isotopy by its $SL(2, \mathbb{Z})$ action on the homology of $T^2$: $g = (\begin{smallmatrix} m & n \\ p & q \end{smallmatrix})$. More generally, one obtains a space $L_b(p, q)$ that is topologically $L(p, q)$ by starting with a metric \eqref{Sb3metric} for $S_b^3$ and imposing the identifications
\begin{equation}
(\chi, \phi_1, \phi_2)\sim (\chi, \phi_1 + 2\pi/p, \phi_2 - 2\pi q/p).
\label{Zpsquashed}
\end{equation}
As examples, $S^3$ and $S^2\times S^1$ are equivalent to $L(1, 1)$ and $L(0, 1)$, respectively (note that $L(1, 0)$ is also $S^3$, as can be seen from the gluing definition). Aside from the case of $L(p, -1)$ considered in \cite{Closset:2017zgf}, however, lens spaces cannot be written as Seifert manifolds in such a way that the base is a smooth Riemann surface, without orbifold points (the analysis of \cite{Ohta:2012ev} is also restricted to Seifert manifolds with a smooth base).
The only case in which it is possible to localize on a squashed lens space $L_b(p, q)$ using the $V_\mu = 0$ supergravity background \eqref{Sb3bkgd} on $S_b^3$ is that of $L_b(p, 1)$, because only in this case does the $\mathbb{Z}_p$ action \eqref{Zpsquashed} preserve the Killing spinors \eqref{Sb3Killingspinors}. More general $L_b(p, q)$ would require a different supergravity background. For example, localization on $L(p, -1)$ is performed in \cite{Closset:2017zgf} using a background in which the $R$-symmetry gauge field has a holonomy around the nontrivial cycle, leading to integrally quantized $R$-charges.
On $L_b(p, 1)$, the values of the supergravity background fields and the BPS equations are the same as on $S_b^3$. However, the localization locus is different because $\pi_1(L_b(p, 1)) = \mathbb{Z}_p$, so flat connections on $L_b(p, 1)$ are labeled by $g\in G$ satisfying $g^p = 1$, modulo conjugation. Fixing $g$ to lie in a maximal torus, we may write
\begin{equation}
g = e^{2\pi i\mathfrak{m}/p}, \quad \mathfrak{m}\in \frac{\Lambda_W^\vee}{p\Lambda_W^\vee}
\end{equation}
where $\Lambda_W^\vee\subset \mathfrak{t}$ is the coweight lattice of $G$ and $\mathfrak{t}$ is the chosen Cartan subalgebra. The remaining BPS equations in \eqref{S3BPSequations} impose that
\begin{equation}
\sigma = -i\ell D = \sigma_0\equiv \hat{\sigma}_0/\ell, \quad [\hat{\sigma}_0, \mathfrak{m}] = 0,
\end{equation}
with $\hat{\sigma}_0\in \mathfrak{g}$ constant; the latter condition requires that $\hat{\sigma}_0\in \mathfrak{t}$. Thus the space of BPS configurations is $(\mathfrak{g}\times \Lambda_W^\vee/p\Lambda_W^\vee)/\mathcal{W}$; i.e., fixing $\hat{\sigma}_0\in \mathfrak{g}$ selects a Cartan subalgebra $\mathfrak{t}$, after which one has a choice of $\mathfrak{m}$ and a residual Weyl symmetry. The classical contribution from the $\mathcal{N} = 2$ Chern-Simons action in the holonomy-$\mathfrak{m}$ sector is $Z_\text{cl}[\hat{\sigma}_0] = e^{-S_\text{CS}[V_0]}$ where
\[
S_\text{CS}[V_0] = S_\text{CS}[\hat{\sigma}_0, \mathfrak{m}] = \frac{k\pi i}{p}\operatorname{Tr}(\hat{\sigma}_0^2 - \mathfrak{m}^2).
\]
The first term is the contribution from the scalars, which is identical to that on $S_b^3$ (or on $S^3$) up to a division by $p$; the second term is the contribution from the flat connection \cite{Gang:2009wy, Alday:2012au}. The one-loop determinants are obtained by keeping only $\mathbb{Z}_p$-covariant modes on $S_b^3$. In terms of a suitably modified double sine function, we have \cite{Willett:2016adv}
\begin{align}
Z_\text{chiral}^r[\rho(\hat{\sigma}_0), \mathfrak{m}] &= s_b^{(p)}(iQ(1 - r)/2 - \rho(\hat{\sigma}_0); \rho(\mathfrak{m})), \\
s_b^{(p)}(x; u) &\equiv \prod_{(\ast)} \frac{mb + nb^{-1} + Q/2 - ix}{mb + nb^{-1} + Q/2 + ix} \nonumber
\end{align}
where $(\ast)$ means $m, n\geq 0$ and $m - n\equiv u\text{ (mod $p$)}$ (we write the second argument of $s_b^{(p)}$ as $u$ to avoid confusion with the Chern-Simons level $k$).\footnote{The results for $Z_\text{cl}[\hat{\sigma}_0]$ and $Z_\text{chiral}^r[\rho(\hat{\sigma}_0), \mathfrak{m}]$ need to be dressed by nontrivial signs to ensure factorization of the $L_b(p, 1)$ partition function of general $\mathcal{N} = 2$ theories into holomorphic blocks (see \cite{Willett:2016adv} and references therein).} With holonomy $\mathfrak{m}$, the Vandermonde determinant appearing in
\begin{equation}
\mathcal{V}[\hat{\sigma}_0, \mathfrak{m}]Z_\text{1-loop}^\text{vector}[\hat{\sigma}_0, \mathfrak{m}] = \tilde{Z}_\text{1-loop}^\text{vector}[\hat{\sigma}_0, \mathfrak{m}] = \prod_\alpha Z_\text{chiral}^2[\alpha(\hat{\sigma}_0), \mathfrak{m}]
\end{equation}
is $\mathcal{V}[\hat{\sigma}_0, \mathfrak{m}] = \smash{|{\prod_{\alpha(\mathfrak{m}) = 0} \alpha(\hat{\sigma}_0)}|}$ because the generators of $\mathfrak{g}$ with $\alpha(\mathfrak{m})\neq 0$ are broken. Upon summing over holonomies (gauge bundles), the partition function of $\mathcal{N} = 2$ Chern-Simons theory on $L_b(p, 1)$ is (by similar simplifications as for $S_b^3$)
\begin{equation}
Z_{L_b(p, 1)} = \frac{1}{|\mathcal{W}|}\sum_{\mathfrak{m}}\int_{\mathfrak{t}}\, d\hat{\sigma}_0\, e^{-\frac{k\pi i}{p}\operatorname{Tr}(\hat{\sigma}_0^2 - \mathfrak{m}^2)}\prod_{\alpha > 0}\prod_\pm 2\sinh(\pi b^{\pm 1}\alpha(\hat{\sigma}_0\pm i\mathfrak{m})/p).
\end{equation}
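As a consistency check, at $p = 1$ the holonomy sum trivializes ($\mathfrak{m} = 0$) and the integrand collapses to
\[
e^{-k\pi i\operatorname{Tr}\hat{\sigma}_0^2}\prod_{\alpha > 0} 4\sinh(\pi b\alpha(\hat{\sigma}_0))\sinh(\pi b^{-1}\alpha(\hat{\sigma}_0)),
\]
recovering $Z_{S_b^3}$, as it must since $L_b(1, 1) = S_b^3$.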
One can obtain from this matrix model the expectation value of a Wilson loop wrapping the non-contractible cycle, corresponding to the generator of $\pi_1$ (see \cite{Gang:2009wy} and references therein).
Now suppose we want to localize Wilson loops on $L_b(p, 1)$ with $b^2$ rational, so that the orbits of the Killing vector close into torus knots. For any Wilson loop along a torus knot in $S_b^3$, one can easily compute the expectation value of its image in $L_b(p, 1)$ by writing down the appropriate matrix model. One could in principle take a different approach to localizing Wilson loops on $L_b(p, q)$, which is more akin to that in \cite{Closset:2017zgf}: instead of viewing $L_b(p, q)$ as a quotient of $S_b^3$, exhibit it directly as a Seifert fibration and consider loops wrapping the circle fibers (for rational $b^2$, the $L_b(p, q)$ are Seifert fibrations over $S^2$ with two singular fibers). This was accomplished in \cite{Closset:2017zgf} for the case of round $L(p, -1)$. Moreover, whereas the lens spaces $L(p, 1), L(p, -1), L(p, p - 1)$ are all homeomorphic, the latter two are defined by the same $\mathbb{Z}_p$ action and therefore have the same induced THF.
As for the special case of $S^2\times S^1 = L(0, 1)$, our $V_\mu = 0$ background (with $K$ generating translations along the $S^1$) computes the topologically twisted index \cite{Benini:2015noa} with a negative unit of $R$-symmetry flux through the $S^2$. Taking the $S^1$ to have circumference $\beta$ and the $S^2$ to have radius $\ell$, we have
\[
ds^2 = \beta^2\, dt^2 + \ell^2(d\theta^2 + \sin^2\theta\, d\phi^2)
\]
where $t\sim t + 1$. Putting $z = 2\ell e^{i\phi}/\tan(\theta/2)$ and following the standard recipe \eqref{standardvielbein} (with $\tilde{\psi} = \beta t$), we choose the vielbein $e^3 = \beta\, dt$ and
\begin{align*}
e^1 = -\ell(\cos\phi\, d\theta + \sin\theta\sin\phi\, d\phi), \quad e^2 = \ell(\sin\phi\, d\theta - \sin\theta\cos\phi\, d\phi).
\end{align*}
We then compute that $H = 0$ and $A^{(R)} = \cos^2(\theta/2)\, d\phi$. Note that our vielbein differs from that of \cite{Benini:2015noa} (namely, $e^1 = \ell\, d\theta$ and $e^2 = \ell\sin\theta\, d\phi$) in the $1, 2$ directions, and our $A^{(R)}$ differs from theirs (namely, $A^{(R)} = \frac{1}{2}\cos\theta\, d\phi$) by an $R$-symmetry gauge transformation. One can place Wilson loops along the $S^1$ over any point on the $S^2$.
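As a quick check that this background indeed carries a negative unit of $R$-symmetry flux through the $S^2$, note that
\[
\frac{1}{2\pi}\int_{S^2} dA^{(R)} = \frac{1}{2\pi}\int_{S^2}\left(-\frac{1}{2}\sin\theta\right)d\theta\wedge d\phi = -1.
\]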
\subsubsection{Non-Example: 3D Supersymmetric Index}
The above framework is well-suited to the computation of the 3D supersymmetric index, which has already been discussed from various points of view in the literature and which we will therefore not describe in any great detail. Due to its significance in Chern-Simons theory (for which it yields the ground-state degeneracy), we will simply review the known results for the index while paying special attention to signs, as a prelude to our subsequent localization computation on a solid torus (which gives a partial result for the wavefunctions of Chern-Simons theory).
In the case of finite ground-state degeneracy, the Witten index can be computed as the Euclidean partition function on $T^3$ with periodic boundary conditions for fermions along each $S^1$ to preserve supersymmetry:
\begin{equation}
I(k)\equiv \operatorname{Tr}_{\mathcal{H}_{T^2}} (-1)^F.
\end{equation}
The supersymmetric index of pure Chern-Simons theory suggests that supersymmetry is spontaneously broken when $|k| < h/2$ for $\mathcal{N} = 1$ \cite{Witten:1999ds} and when $|k| < h$ for $\mathcal{N} = 2$ \cite{Intriligator:2013lca}. The standard mnemonic of simply accounting for the level shift after integrating out gauginos at large $|k|$ and plugging the result into the genus-one Verlinde formula applies, but the derivation for arbitrary $k$ requires care. Namely, for $k\geq 0$, the pure $G_k$ theory has index
\[
I_{\mathcal{N} = 1}(k) = \begin{cases} J_G(k - h/2) & \text{for } k\geq h/2, \\ 0 & \text{for } k < h/2, \end{cases} \qquad I_{\mathcal{N} = 2}(k) = \begin{cases} J_G(k - h) & \text{for } k\geq h, \\ 0 & \text{for } k < h, \end{cases}
\]
where $J_G(k')$ is the number of primary operators in the $G_{k'}$ WZW model. The indices for $k < 0$ are related to those for $k > 0$ as follows:
\begin{equation}
I_{\mathcal{N} = 1}(-k) = (-1)^r I_{\mathcal{N} = 1}(k), \qquad I_{\mathcal{N} = 2}(-k) = I_{\mathcal{N} = 2}(k),
\end{equation}
where $r = \operatorname{rank} G$. The reason is that the sign of the operator $(-1)^F$ is potentially ambiguous in finite volume. From a microscopic point of view (i.e., using a Born-Oppenheimer approximation in the regime $g^2 k\ll 1/L$, $L$ being the size of the $T^2$), the $\mathcal{N} = 1$ index can be computed by quantizing $r$ pairs of fermion zero modes $\eta_\pm^a$, where $a = 1, \ldots, r$ (as in \cite{Witten:1999ds}). There are two choices $|\Omega_\pm\rangle$ for the zero-mode Fock vacuum, depending on which chirality ($\pm$) we choose for the creation or annihilation operators. They are related by
\begin{equation}
|\Omega_+\rangle = \prod_{a=1}^r \eta_+^a|\Omega_-\rangle.
\end{equation}
Taking $k\to -k$ is akin to a reversal of spacetime orientation and hence exchanges $|\Omega_+\rangle$ with $|\Omega_-\rangle$. But fixing the statistics ($(-1)^F$-eigenvalue) of one of these vacua also fixes that of the other to be $(-1)^r$ times this value. This explains the sign in the $\mathcal{N} = 1$ case. On the other hand, when $\mathcal{N} = 2$, there are $2r$ pairs of fermion zero modes $\eta_\pm^a, \tilde{\eta}_\pm^a$ and four possibilities for the zero-mode Fock vacuum: $|\Omega_\pm\rangle\otimes |\smash{\tilde{\Omega}_{\pm'}}\rangle$, where $\pm$ and $\pm'$ are independent pairs of signs. Regardless of which vacuum state one chooses, orientation reversal does not change its statistics under $(-1)^F$. This explains the absence of the sign in the $\mathcal{N} = 2$ case.
For example, consider $SU(N)$, for which $h = N$ and
\begin{equation}
J_{SU(N)}(k') = \binom{N + k' - 1}{N - 1}.
\end{equation}
Note that while this function gives the dimension of the space of $SU(N)_{k'}$ conformal blocks only when $k'\geq 0$, it has an analytic continuation to all $k'$. Hence we may write succinctly
\begin{equation}
I_{\mathcal{N} = 1}(k) = J_{SU(N)}(k - N/2) = \frac{1}{(N - 1)!}\prod_{j=-N/2+1}^{N/2-1} (k - j),
\end{equation}
which indeed vanishes when $|k| < N/2$ and satisfies $I_{\mathcal{N} = 1}(-k) = (-1)^{N-1}I_{\mathcal{N} = 1}(k)$. On the other hand, we have that
\begin{equation}
I_{\mathcal{N} = 2}(k) = J_{SU(N)}(|k| - N) = \frac{1}{(N - 1)!}\prod_{j=1}^{N-1} (|k| - j) \text{ for } k\neq 0, \quad I_{\mathcal{N} = 2}(0) = 0,
\end{equation}
which vanishes when $|k| < N$ and satisfies $I_{\mathcal{N} = 2}(-k) = I_{\mathcal{N} = 2}(k)$.
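For concreteness, these formulas are easy to tabulate; the following snippet (a sketch, with helper names that are ours rather than from any reference) checks the threshold and sign properties with exact arithmetic:
\begin{verbatim}
from math import comb, factorial

def index_N1(N, k):
    """N=1 index of SU(N)_k: J_SU(N)(k - N/2), analytically continued.
    For odd N, k should be a half-integer so that k - N/2 is an integer."""
    prod = 1
    # Product over j = -N/2 + 1, ..., N/2 - 1 in unit steps.
    for i in range(N - 1):
        prod *= k - (-N / 2 + 1 + i)
    return prod / factorial(N - 1)

def index_N2(N, k):
    """N=2 index of SU(N)_k: J_SU(N)(|k| - N) for k != 0, else 0."""
    if k == 0:
        return 0
    prod = 1
    for j in range(1, N):
        prod *= abs(k) - j
    return prod // factorial(N - 1)

# Vanishing below threshold, sign under k -> -k, and a known value:
assert index_N1(2, 1) == 1 and index_N1(2, -1) == -1  # (-1)^(N-1) = -1
assert index_N2(3, 2) == 0                            # |k| < N = h
assert index_N2(3, 4) == comb(3, 2)                   # J_SU(3)(1) = 3
\end{verbatim}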
Supersymmetric localization has the benefit of quantitatively justifying why the irrelevant Yang-Mills term does not affect the number of vacua and hence the index, even beyond the regime where semiclassical reasoning applies: it is $Q$-exact. The genus-zero Verlinde formula ($Z(S^2\times S^1) = 1$ for all $G, k$) was computed by localization in \cite{Benini:2015noa}, where the Verlinde algebra was also obtained from correlation functions of Wilson loops wrapping the $S^1$ above arbitrary points on the $S^2$. The Verlinde formula for $\Sigma\times S^1$ in arbitrary genus $g$ was later computed by localization in \cite{Benini:2016hjo, Closset:2016arn} using a background $R$-symmetry flux of $g - 1$ through $\Sigma$, which imposes a quantization condition of $q_R(g - 1)\in \mathbb{Z}$ on the $R$-charges; in particular, in the case $g = 1$ where no twisting is necessary, this computation reproduces the result of \cite{Blau:1993tv} for $Z_{T^3} = \dim\mathcal{H}_{T^2}$. One can also compute by these means correlation functions of Wilson loops over arbitrary points on $\Sigma$, leading to the algebra of Wilson loops in arbitrary genus \cite{Closset:2016arn}, which dimensionally reduces to the twisted chiral ring of a 2D $(2, 2)$ theory on $\Sigma$ and generalizes the results of \cite{Kapustin:2013hpk}. This approach was further generalized to nontrivial circle bundles \cite{Closset:2017zgf}. An alternative approach to the supersymmetric index of $\mathcal{N} = 2$ Chern-Simons theory in arbitrary genus, in the spirit of \cite{Kallen:2011ny}, is presented in \cite{Ohta:2012ev}.
\subsection{Localization on Solid Torus}
So far, we have considered various compact manifolds, but in all cases, the half-BPS sector includes a very limited subset of the possible loop observables. In pursuit of greater generality, it is natural to ask whether the localization argument can be applied to a basic ``building block'' that can then be used to assemble Wilson loop configurations of more complicated topologies. Since every closed, orientable, connected three-manifold can be obtained from $S^3$ by surgery on knots \cite{Marino:2005sj, Guadagnini:1994cc}, the natural candidate for such a building block is a solid torus. Here, we take only the most tentative possible step toward making this program precise by localizing on a solid torus dressed by a Wilson loop: we do not address the gluing procedure at all, much less how to make it supersymmetric. This constructive approach has appeared in the supersymmetric context in the guise of holomorphic blocks \cite{Pasquetti:2011fj, Beem:2012mb}, and also as a gluing formula of a suggestively similar form to surgery in pure Chern-Simons theory \cite{Closset:2017zgf}; however, both of these incarnations depend on more than topological data.
Localization on a solid torus is a simple application of the results of \cite{Sugishita:2013jca}. Their calculations fit easily into our framework for Seifert manifolds, as we now describe. We restrict our attention to Dirichlet boundary conditions, as considered there. These boundary conditions eliminate half of the fermions and, with the appropriate choice of localizing term, do not require introducing boundary degrees of freedom. For pure $\mathcal{N} = 2$ gauge theory without matter, we can preserve two supersymmetries on the solid torus, which gives us precisely the 1D $\mathcal{N} = 2$ supersymmetry required for cancellation of the line shift.
The solid torus is constructed by starting with toroidal coordinates \eqref{toroidal} on round $S^3$ and restricting to $\chi\in [0, \chi_0]$ with $\chi_0 < \pi/2$. In the standard supergravity background \eqref{Sb3bkgd} and frame \eqref{Sb3vielbein} (with $f = \ell_1 = \ell_2 = \ell$), we may choose a gauge in which $A^{(R)} = 0$ and the Killing spinors are as in \eqref{Sb3Killingspinors}. Seeing as $\gamma^\chi = \frac{1}{\ell}\gamma^1$ in our frame, we have
\begin{equation}
-\ell e^{-i(\phi_1 + \phi_2)}\gamma^\chi\xi = \tilde{\xi}
\label{property}
\end{equation}
(recall that $-\gamma^\chi\xi = \gamma^\chi\cdot\xi$). Up to a slight difference in conventions, this is precisely the condition satisfied by Killing spinors preserved by the boundary conditions of \cite{Sugishita:2013jca} on the solid torus. Specifically, the boundary conditions that we wish to impose are
\begin{equation}
A_{\tilde{\mu}}| = a_{\tilde{\mu}}, \quad \sigma| = \sigma_0, \quad \ell e^{-i(\phi_1 + \phi_2)}\gamma^\chi\lambda| = \tilde{\lambda}|
\label{torusbcs}
\end{equation}
where $\tilde{\mu}\in \{\phi_1, \phi_2\}$ denotes the directions tangent to the boundary $T^2$, ``$|$'' denotes restriction to the boundary $\chi = \chi_0$, and $a_{\tilde{\mu}}$ and $\sigma_0$ are constants. The fields $A_\chi$ and $D$ are left free. These boundary conditions and the property \eqref{property} of the Killing spinors imply that
\begin{equation}
\xi\tilde{\lambda}| = \tilde{\xi}\lambda|, \quad \xi\gamma^\chi\tilde{\lambda}| = \tilde{\xi}\gamma^\chi\lambda|, \quad \xi\gamma^{\tilde{\mu}}\tilde{\lambda}| = -\tilde{\xi}\gamma^{\tilde{\mu}}\lambda|.
\end{equation}
Given these properties and \eqref{property}, one easily sees that the boundary conditions are compatible with the relevant SUSY' transformations, which are given by \eqref{S3susypvector} with the top sign (note, however, that we are \emph{not} working in the left-invariant frame).
The boundary terms in the SUSY' variation of the curved-space Chern-Simons action \eqref{LCS-curved} are given by \eqref{LCS-curved-boundary}. For the Yang-Mills action on the solid torus, we write the fermion kinetic terms symmetrically between $\lambda$ and $\tilde{\lambda}$ (following \cite{Sugishita:2013jca}) to ensure that the boundary conditions kill surface terms without the need to introduce a compensating boundary action. Namely, we use as a localizing term
\begin{equation}
\mathcal{L}_\text{(YM)}|_{S^3}\equiv \mathcal{L}_\text{YM}|_{S^3, \, \tilde{\lambda}\gamma^\mu D_\mu\lambda\to \frac{1}{2}(\tilde{\lambda}\gamma^\mu D_\mu\lambda + \lambda\gamma^\mu D_\mu\tilde{\lambda})},
\end{equation}
where $\mathcal{L}_\text{YM}|_{S^3}$ is defined by choosing the top sign in \eqref{LYM-S3} (the parentheses in (YM) are a mnemonic for symmetrization). One computes that the corresponding boundary terms are
\begin{align*}
\delta'\mathcal{L}_\text{(YM)}|_{S^3} &= \frac{1}{2}\nabla_\mu\operatorname{Tr}\bigg[\frac{1}{2}\sqrt{g}^{-1}\epsilon^{\mu\nu\rho}F_{\nu\rho}(\xi\tilde{\lambda} + \tilde{\xi}\lambda) + i\sqrt{g}^{-1}\epsilon^{\mu\nu\rho}D_\nu\sigma(\xi\gamma_\rho\tilde{\lambda} - \tilde{\xi}\gamma_\rho\lambda) \\
&\hspace{1 cm} + iF^{\mu\nu}(\xi\gamma_\nu\tilde{\lambda} + \tilde{\xi}\gamma_\nu\lambda) - D^\mu\sigma(\xi\tilde{\lambda} - \tilde{\xi}\lambda) + \left(D - \frac{i\sigma}{\ell}\right)(\xi\gamma^\mu\tilde{\lambda} - \tilde{\xi}\gamma^\mu\lambda)\bigg].
\end{align*}
By the boundary conditions \eqref{torusbcs}, all of the Chern-Simons and Yang-Mills boundary terms vanish. Finally, one can also show along the lines of \cite{Sugishita:2013jca} that the Yang-Mills action written in this way remains $Q$-exact with these boundary conditions, again without the need to add a compensating boundary action (see also the useful summary of boundary terms in \cite{Drukker:2012sr}).
The BPS equations are the same as on $S^3$, \eqref{S3BPSequations}. Given the boundary conditions \eqref{torusbcs}, one can choose a gauge in which the saddle points of the Yang-Mills action are given by $A_\chi = 0$, $A_{\tilde{\mu}} = a_{\tilde{\mu}}$, $\sigma = \sigma_0$ where the constants $a_{\tilde{\mu}}$ and $\sigma_0$ satisfy $[a_{\tilde{\mu}}, \sigma_0] = 0$. Moreover, regularity of the gauge field at $\chi = 0$ requires $a_{\phi_2} = 0$; as long as $\chi_0 < \pi/2$, $a_{\phi_1}$ can be nonzero, and it is only this component on which a localized Wilson loop depends. Namely, a BPS Wilson loop along a curve $\gamma$ at fixed $\chi\in [0, \chi_0]$ localizes as follows:
\begin{equation}
W = \operatorname{Tr}_R P\exp\left[i\oint_\gamma d\tau\, (A_\mu\dot{x}^\mu - i\sigma|\dot{x}|)\right]\to \langle W\rangle = \operatorname{Tr}_R e^{2\pi i\ell(a_{\phi_1} - i\sigma_0)}.
\label{localizedloop}
\end{equation}
For $\chi\in (0, \chi_0]$, $\gamma$ is an unknot represented as a $(1, 1)$ torus knot. At the core of the torus ($\chi = 0$), $\gamma$ is a $(1, 0)$ torus knot. For any $\chi$, $\gamma$ has length $2\pi\ell$. Because the value of $\sigma$ at the saddle point is fixed rather than integrated over, the expectation value of a Wilson loop is trivial to compute: both the classical contribution and the one-loop determinants cancel in the normalized expectation value.
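For example (a sketch, writing $a_{\phi_1} - i\sigma_0 = u\operatorname{diag}(1, -1)$ for $G = SU(2)$), the localized spin-$j$ loop is an ordinary character,
\[
\langle W\rangle = \sum_{s = -j}^{j} e^{4\pi i\ell su} = \frac{\sin(2\pi\ell(2j + 1)u)}{\sin(2\pi\ell u)},
\]
which depends on the boundary data only through the supersymmetric combination $a_{\phi_1} - i\sigma_0$.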
Note that $\chi$ parametrizes the time direction in Hamiltonian quantization on $T^2$. Hence we would like to interpret \eqref{localizedloop} as the wavefunction of a state in the Hilbert space on the boundary $T^2$. The latter is more properly identified with the \emph{unnormalized} expectation value of a Wilson loop threading the torus, which is obtained by dressing \eqref{localizedloop} with factors of $Z_\text{cl}$ and $Z_\text{1-loop}^\text{vector}$ from \cite{Sugishita:2013jca}: we have
\[
Z_\text{cl} = e^{-\frac{ik}{2\pi\ell}\operatorname{Tr}(\sigma_0)^2 V(\chi_0)}
\]
where $V(\chi_0) = 2\pi^2\ell^3\sin^2\chi_0$ is the volume of the torus and
\[
Z_\text{1-loop}^\text{vector} = \prod_{\alpha > 0}\prod_{n\in \mathbb{Z}} (n - \ell\alpha(a_{\phi_1} - i\sigma_0)) = \prod_{\alpha > 0} 2\sinh(\pi\ell\alpha(ia_{\phi_1} + \sigma_0))
\]
up to an overall dimensionful factor, where we have used the zeta-regularized product $\prod_{n\in\mathbb{Z}}(n + x) = 2i\sin(\pi x)$, so that $\prod_{n\in \mathbb{Z}}(n - \ell\alpha(a_{\phi_1} - i\sigma_0)) = -2\sinh(\pi\ell\alpha(ia_{\phi_1} + \sigma_0))$, with the sign absorbed into the overall factor. Apart from the classical contribution, the boundary gauge field and $\sigma$ appear in the expected supersymmetric combination. The localized Wilson loop trivially gives a character, but its interpretation as a wavefunction is unclear. One obvious shortcoming is that with the half-BPS Dirichlet boundary conditions used for localization, one cannot obtain an arbitrary wavefunctional of the boundary fields from the path integral on the torus; rather, it is evaluated at particular boundary values of the gauge field. It is tempting to compare this result to that obtained from holomorphic (coherent state) quantization \cite{Elitzur:1989nr, Bos:1989kn}. There, one obtains affine (Weyl-Kac) characters at level $k$, which are both Weyl-invariant and modular-invariant. Recall that an affine character is a combination of a Virasoro character and an ordinary Weyl character: for $\smash{\widehat{G}_k}$ with vacuum representation of highest weight $\lambda$,
\begin{equation}
\chi_\lambda^{(k)}(\tau, \theta^i) = q^{-c/24}\operatorname{tr}_\lambda^{(k)}(q^{L_0}e^{i\theta^i H^i})
\end{equation}
where $q = e^{2\pi i\tau}$ and the $H^i$ are Cartan generators. However, to arrive at an answer that is a function (rather than a functional) of the constant $\phi_1$-component, we have effectively chosen a real polarization \cite{Elitzur:1989nr}; moreover, the localization calculation implicitly fixes a complex structure on the $T^2$ rather than allowing it to be arbitrary.\footnote{The flat metric on the boundary $T^2$ is $ds^2 = \ell^2\cos^2\chi_0\, d\phi_1^2 + \ell^2\sin^2\chi_0\, d\phi_2^2$, from which we identify the modular parameter as $i\operatorname{max}(\tan^2\chi_0, \cot^2\chi_0)$ up to a modular transformation (as usual, choose $\operatorname{Im}\tau > 0$).} Hence we cannot hope to reproduce the Kac-Moody part of the affine character, which depends on the modular parameter $\tau$; only the Weyl character for compact $G$ is visible in this calculation. What we \emph{can} do is isolate the shift in the highest weight $\lambda$, because the Weyl character already fixes $\lambda$. This is enough for our purposes. The constant Cartan element in the supersymmetric boundary condition for the gauge field is identified with the Cartan parameter of the Hodge decomposition on the torus \eqref{flatonT2}, which determines the Cartan angles that appear in the Weyl character. Note that after fixing a gauge, we must still add a boundary term to make the variational principle well-defined in our real polarization (see Appendix \ref{boundaryconditions}); this term vanishes on the localization locus because it involves the product of both components of the gauge field along the boundary torus.
To be slightly more precise, the wavefunctions \eqref{ansatzT2} obtained via holomorphic quantization are functionals of $A_z$; restricting them to functions of $A_z = a_z$ where $a_z$ is a constant in the Cartan subalgebra shows that a basis for the physical wavefunctions is given by Weyl-Kac characters at level $k$, labeled by distinct $\lambda\in \Lambda_W/(\mathcal{W}\ltimes k\Lambda_R)$:
\begin{equation}
\psi_\lambda(a_z)\equiv e^{-\frac{k\operatorname{Im}\tau}{\pi}\operatorname{Tr} a_z^2}\chi_\lambda^{(k)}(\bar{\tau}, u), \quad u\equiv -\frac{\operatorname{Im}\tau}{\pi}a_z.
\label{wavefunctionsT2}
\end{equation}
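For $G = SU(2)$, for example, these are the $k + 1$ classes $\lambda\in \{0, 1, \ldots, k\}$, i.e., spins $j = \lambda/2 = 0, \ldots, k/2$, matching the dimension of the Chern-Simons Hilbert space on $T^2$.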
For constant $a_z$, one might na\"ively make a change of variables to interpret the wavefunctions as functions of the coordinates $a_1$. However, it is precisely the passage from holomorphic to real polarization that involves nontrivial Jacobians and leads to the famous shifts \cite{Elitzur:1989nr}. Instead of simply setting $A_z = a_z$, one should integrate out all modes other than the constant Cartan modes to obtain an effective wavefunction for $a_z$; the resulting effective wavefunction in $a_1$ coincides with the na\"ive one up to precisely these shifts (see Appendix \ref{holomorphicpol}).
Finally, let us comment on the broader context. For a large class of 3D $\mathcal{N} = 2$ theories that preserve a $U(1)_R$ symmetry, the squashed lens space partition functions (including those on squashed spheres and various supersymmetric indices) factorize as
\begin{equation}
Z_{M^3}(m_\alpha) = \sum_\alpha B_\alpha(x_\alpha; q)\tilde{B}_\alpha(\tilde{x}_\alpha; \tilde{q})
\label{factorization}
\end{equation}
where $\alpha$ labels vacua of the mass-deformed theory on $T^2$, $x_\alpha, \tilde{x}_\alpha$ are $U(1)$ flavor symmetry fugacities, $(q, \tilde{q}) = (e^{2\pi ib^2}, e^{2\pi ib^{-2}})$, and the holomorphic blocks $B_\alpha, \tilde{B}_\alpha$ are intrinsic to the theory but independent of $M^3$ ($Z_{M^3}$ depends on $M^3$ only through how the blocks are glued together) \cite{Pasquetti:2011fj, Beem:2012mb}. More precisely,
\begin{equation}
B_\alpha(x_\alpha; q) = Z_{D^2\times_q S^1}(x_\alpha; \alpha)
\end{equation}
where the theory on the ``Melvin cigar'' $D^2\times_q S^1$ (the $D^2$ being fibered over the $S^1$ with holonomy $q$) is topologically twisted such that the partition function is independent of the metric of the $D^2$, with $\alpha$ determined by boundary conditions. If $M^3$ admits a Heegaard splitting into two solid tori glued together by an element of $SL(2, \mathbb{Z})$, and if both pieces can be deformed to a Melvin cigar in a $Q$-exact manner, then we have the desired factorization \eqref{factorization}. The $M^3$ partition function can be exhibited directly in factorized form through Higgs branch localization \cite{Fujitsuka:2013fga, Benini:2013yva}, and the individual blocks can be computed by localization on the solid torus $D^2\times S^1$ \cite{Yoshida:2014ssa}.
The computation of \cite{Yoshida:2014ssa} does not fit neatly into our framework for Seifert manifolds because it uses both a different metric and different boundary conditions than \cite{Sugishita:2013jca}. Nonetheless, it is also found in \cite{Yoshida:2014ssa} that the $\mathcal{N} = 2$ Yang-Mills action can be written in a $Q$-exact form without surface terms. On the other hand, the $\mathcal{N} = 2$ Chern-Simons action is invariant under neither SUSY nor gauge transformations in the presence of a boundary. Two proposals are given in \cite{Yoshida:2014ssa} for maintaining gauge invariance, namely that the compensating boundary action should contain either a chiral 2D $(0, 2)$ theory (namely, $(0, 2)$ matter multiplets to cancel gauge anomalies) or a trivially supersymmetric gauged chiral WZW model obtained by viewing gauge parameters as physical fields on the boundary.\footnote{In fact, one can consider possibilities intermediate between these two extremes, involving a chiral WZW model in which only a subgroup $H\subset G$ is gauged, along with $(0, 2)$ matter multiplets coupled to the $H$-gauge field.} The second option makes direct contact with the Chern-Simons wavefunction computed in holomorphic polarization: the K\"ahler potential in the inner product \eqref{innerproduct} of coherent states appears from the localization point of view as a boundary term necessary to preserve half-BPS boundary conditions (again, the localization computation selects a constant gauge field, for which the wavefunction becomes a Weyl-Kac character). However, as in holomorphic quantization, one obtains a character in the representation $\lambda$ by taking Weyl-invariant linear combinations of generalized theta functions rather than by directly evaluating the path integral with a Wilson loop in the representation $\lambda$; it is not clear how such a Wilson loop would fit into the approach of \cite{Yoshida:2014ssa}.\footnote{The computation of \cite{Yoshida:2014ssa} can be viewed as that of a half-index with Neumann boundary conditions for the vector multiplet. For a discussion of how to recover Weyl-Kac characters from the half-index with Dirichlet $(0, 2)$ boundary conditions in $\mathcal{N} = 2$ Chern-Simons theories, see \cite{Dimofte:2017tpi}.}
\section{Matching \texorpdfstring{$\mathcal{N} = 0$}{N = 0} and \texorpdfstring{$\mathcal{N} = 2$}{N = 2} Line Operators} \label{matching}
So far, we have explained the quantum-mechanical non-renormalization of the weight only for certain classes of BPS observables in pure $\mathcal{N} = 2$ Chern-Simons, which can be computed via localization on three-manifolds that admit a real, nowhere vanishing Killing vector. This amounts to an explanation of the renormalization of the weight for a similarly restricted set of observables in the corresponding $\mathcal{N} = 0$ theory. The correspondence is as follows: embed the $\mathcal{N} = 0$ theory in an $\mathcal{N} = 2$ theory with the appropriate level (``integrate in'' auxiliary fields), and then for those links that are deformable to a BPS configuration, deform them to said configuration and enrich them with $\sigma$ as in \eqref{3DN2loop}. This operation is trivial at the level of the functional integral. Clearly, this procedure is not possible for arbitrary links, even though all observables are explicitly computable by topological means. While we cannot explain the non-renormalization of the weight for completely general observables, we have sketched how one might approach a more general understanding by localizing on a solid torus. In this section, we make some further comments on the correspondence between $\mathcal{N} = 0$ and $\mathcal{N} = 2$ observables.
\subsection{(Non-)Renormalization}
To substantiate the claim that the natural UV completion of Chern-Simons theory should have $\mathcal{N} = 2$ supersymmetry, it is (as mentioned in the introduction) important to fix unambiguous definitions of the level $k$ and the representation $\lambda$. Throughout, we have used the canonical definition that the $k$ in $\mathcal{N} = 0$ $G_k$ Chern-Simons theory is the level of the corresponding 2D WZW model, where it appears in correlation functions and has a precise physical meaning. Relative to this definition, the level that appears in the Chern-Simons braiding matrix with parameter $q$ is $k + h$.\footnote{Monodromy matrices in Chern-Simons theory follow from $R$-matrices in braid representations, and by ``braiding matrix,'' we mean the half-monodromy matrix \cite{Guadagnini:1994cc}.} This shift is independent of regularization scheme, i.e., the question of how the renormalized coupling depends on the bare coupling. Said differently, our $k\equiv k_\text{phys}$ is what determines the dimension of the Hilbert space and changes sign under time reversal, while $k_\text{phys} + h$ is what appears in correlation functions. The relation of $k_\text{phys}$ to some UV parameter $k_\text{bare}$ (e.g., via $k_\text{phys} = k_\text{bare} + h$ or $k_\text{phys} = k_\text{bare}$) is a question of regularization scheme and not physically meaningful.
On the other hand, $\lambda$ determines the conjugacy class of the holonomy around a Wilson loop to be $e^{-2\pi i\lambda/k}$, as measured by another loop that links it. This relation, derived from the classical EOMs (see Appendix \ref{realpol}), receives quantum corrections. For example, in the case of $SU(2)$ (and using our convention for $\lambda$ from Section \ref{N2CS}), the classical and quantum holonomies are $e^{2\pi ij\sigma_3/k}$ and $e^{2\pi i(j + 1/2)\sigma_3/(k + 2)}$, respectively. To interpret the statement that ``$\lambda$ is not renormalized'' in the $\mathcal{N} = 2$ setting, it should be kept in mind that Wilson loops are typically written not in terms of the \emph{bare} $\lambda$, but rather in terms of an effective $\lambda$ that corresponds to having integrated out the fermions along the line.\footnote{One can compare the supersymmetric case to the complex (analytically continued) case, where there are also no shifts. Assuming the standard integration cycle over real connections, the Chern-Simons path integral is oscillatory: the level shift can be attributed to a Wick rotation in the space of connections that renders the integral absolutely convergent (see \cite{Mikhaylov:2014aoa} and references therein). Since analytically continued Chern-Simons theory requires no further regularization, it is free of the attendant shift ambiguities (which is fortunate because, for instance, the lack of a Killing form of definite signature means that deforming by an irrelevant Yang-Mills term no longer gives rise to a sensible quantum field theory).}
\subsection{3D Point of View}
For completeness, let us comment on how the differences between the bosonic and supersymmetric theories bear on the mapping of line operators between the two theories. These subtleties do not affect our conclusions.
An obvious difference is that the $\mathcal{N} = 2$ theory contains extra bulk fields, as well as both supersymmetric and non-supersymmetric line operators. Wilson loops in $\mathcal{N} = 0$ CS are functions of the gauge field $A$, while Wilson loops in $\mathcal{N} = 2$ CS are (schematically) functions of the combination $A + \sigma$. A collection of the former loops in an arbitrary smooth configuration and a collection of the latter loops in the same configuration have identical correlation functions in the respective theories, up to an appropriate identification of parameters. This is true even if the configuration is not BPS from the point of view of the latter theory, hence not calculable using localization, as one sees by integrating out $\sigma$. Moreover, in the latter theory, correlation functions of non-intersecting loops not involving local operators constructed from the extra bulk fields are independent of whether the loop operators are written as functions of $A$ or of $A + \sigma$. However, such correlation functions can still have contact terms with integrated local operators; these contact terms differ for $A$ and $A + \sigma$ loops. The Schwinger-Dyson equation for $A$ says that both $\mathcal{N} = 0$ and $\mathcal{N} = 2$ loops have contact terms with the equation of motion for $A$. The Schwinger-Dyson equation for $\sigma$ says that only $\mathcal{N} = 2$ loops have contact terms with the auxiliary scalar $D$ in the $\mathcal{N} = 2$ vector multiplet. At finite Yang-Mills coupling, the $A$ and $\sigma$ EOMs involve fermionic sources, but these irrelevant terms are $Q$-exact, so do not affect correlation functions of BPS loops. If one is only interested in correlators of non-intersecting loops, as we are, then these issues are not relevant. For a related discussion of the loop equation for BPS Wilson loops, see Section 4 of \cite{Drukker:1999zq}.
To properly define the localizing term requires both a metric and a spin structure (the latter because the fermions become dynamical at finite Yang-Mills coupling), neither of which seem to be necessary to define the bosonic theory (which, for $G$ connected and simply connected, is independent of spin structure \cite{Dijkgraaf:1989pz}).\footnote{The Killing spinor equations \eqref{generalizedKS} require only a spin$^c$ structure, which exists on any orientable three-manifold \cite{Closset:2012ru}.} But recall that a metric is already needed both to regularize and to gauge-fix the bosonic theory \cite{Witten:1988hf}. Moreover, recall that computing observables in $\mathcal{N} = 0$ Chern-Simons requires choosing a framing of $M^3$, which automatically fixes a spin structure: every orientable three-manifold is spin, hence parallelizable, and a spin structure is specified by a homotopy class of trivializations of the tangent bundle over the one-skeleton that extends over the two-skeleton.\footnote{More precisely, only a two-framing, or a framing of $TM^3\oplus TM^3$, is required to define the phase of the partition function. Every three-manifold admits a canonical two-framing. In fact, every Seifert fibration $\pi : M^3\to \Sigma$ also determines a two-framing on $M^3$, which in general differs from the canonical one \cite{Beasley:2005vf}.} Therefore, even at finite Yang-Mills coupling, the regularized pure $\mathcal{N} = 2$ Chern-Simons theory does not depend on any additional geometric data beyond that required to compute observables in the $\mathcal{N} = 0$ theory.\footnote{There are at least two qualifications to this statement. First, while the non-topological localizing terms are $Q$-exact, the metric still enters into the computation of observables in a more essential way in the $\mathcal{N} = 2$ theory because BPS Wilson loops must lie along isometries. However, just as in the $\mathcal{N} = 0$ theory, smoothly deforming links in the (non-manifestly topological) $\mathcal{N} = 2$ theory leaves correlation functions unchanged (to see this, set the coefficient of the localizing term to zero and integrate out the extra bulk fields). Second, the localization procedure determines the framing of knots, which is additional data beyond the framing of the three-manifold (the latter cancels in normalized expectation values).}
Finally, we reviewed in Section \ref{loopsinCS} the well-known fact that in bosonic Chern-Simons theory, Wilson and vortex ('t Hooft) loops are equivalent \cite{Moore:1989yh}, the latter being defined by their holonomy. The same is true in $\mathcal{N} = 2$ Chern-Simons theory \cite{Kapustin:2012iw, Drukker:2012sr, Hosomichi:2014hja}, where a vortex loop is defined by a vector multiplet in a singular BPS configuration. In this case, vortex loops entail nontrivial background profiles for $\sigma, D$, but the path integral with appropriate boundary conditions and boundary actions completely decouples between these fields and $A$. Hence, modulo the exceptional situations discussed above for Wilson loops, supersymmetric vortex loops in pure Chern-Simons are equivalent to their non-supersymmetric counterparts. Note that in \emph{abelian} theories, vortex loops for gauge (rather than flavor) connections are trivial in the sense that up to an overall factor (the classical contribution from the Chern-Simons action), a vortex loop insertion simply results in an imaginary shift of the Coulomb branch parameters that is integrated over in the matrix model and can therefore be absorbed into a redefinition of the integration contour.
\subsection{A Quasi-2D Point of View}
We now show that there exists a one-to-one correspondence between line operators in the bosonic and supersymmetric theories that is clear only if we take into account both shifts. Given our assumptions on $G$, this correspondence can be viewed as a restatement of well-known facts about 2D rational conformal field theory.
As can be seen in canonical quantization, the distinct Wilson lines in pure Chern-Simons theory are in one-to-one correspondence with the ground states of the theory on a (spatial) torus. To explain what ``distinct'' means, we must identify the precise equivalence classes of Wilson lines that map to these ground states. $SU(2)_k$ Chern-Simons on a torus has $k + 1$ ground states labeled by half-integers $j = 0, \ldots, k/2$. These can equivalently be viewed as the $k + 1$ primary operators in the $SU(2)_k$ WZW model, where the truncation of $SU(2)$ representations to integrable representations is a selection rule that follows from combining two different $\mathfrak{su}(2)$ subalgebras of $\smash{\widehat{SU}(2)_k}$. From the 3D point of view, however, a Wilson line can carry any $SU(2)$ representation (half-integer $j$). To respect the 2D truncation, all such lines fall into equivalence classes labeled by the basic lines $j = 0, \ldots, k/2$. The equivalence relations turn out to be a combination of Weyl conjugation and translation \cite{Elitzur:1989nr}:
\begin{equation}
j\sim -j, \quad j\sim j + k.
\label{N0relations}
\end{equation}
As reviewed in Appendix \ref{realpol}, the story is similar for general $G$. Line operators are subject to equivalence relations given by the action of the affine Weyl group at level $k$ ($\mathcal{W}\ltimes k\Lambda_R^\vee$ where $\Lambda_R^\vee$ is the coroot lattice of $G$), whose fundamental domain we refer to as an affine Weyl chamber or a Weyl alcove and which contains all inequivalent weights (corresponding to integrable representations of $\smash{\widehat{G}_k}$).\footnote{The equivalence classes of Wilson lines in abelian Chern-Simons can be found in Appendix C of \cite{Seiberg:2016rsg}.}
Now consider the correlation functions of these lines. Two basic observables of $SU(2)_k$ Chern-Simons on $S^3$ are the expectation value of an unknotted spin-$j$ Wilson loop and the expectation value of two Wilson loops of spins $j, j'$ in a Hopf link:
\begin{equation}
\langle W_j\rangle_{\mathcal{N} = 0} = \frac{S_{0j}}{S_{00}}, \quad \langle W_j W_{j'}\rangle_{\mathcal{N} = 0} = \frac{S_{jj'}}{S_{00}}.
\label{basiccorrelators}
\end{equation}
Recall that the modular $S$-matrix of $SU(2)_k$ is given by \eqref{Smatrix} in a basis of integrable representations. The formulas \eqref{basiccorrelators} apply only to Wilson loops with $j$ within the restricted range $0, \ldots, k/2$. Indeed, \eqref{Smatrix} is not invariant under the equivalence relations \eqref{N0relations}. Nonetheless, let us na\"ively extend these formulas to arbitrary $j, j'$. The first positive value of $j$ for which $\langle W_j\rangle = 0$ is that immediately above the truncation threshold: $j = (k + 1)/2$. More generally, from \eqref{basiccorrelators}, it is clear that a line of spin $j$ and a line of spin $j + k + 2$ have identical correlation functions, while lines with $j = n(k/2 + 1) - 1/2$ for any integer $n$ vanish identically. Here, one should distinguish the \emph{trivial} line $j = 0$, which has $\langle W_0\rangle = 1$ and trivial braiding with all other lines, from \emph{nonexistent} lines, which have $\langle W_j\rangle = 0$ and vanishing correlation functions with all other lines. On the other hand, a line with $j$ and a line with $j + k/2 + 1$ have the same expectation value and braiding, \emph{up to a sign}. In other words, at the level of correlation functions, $SU(2)_k$ Wilson lines are antiperiodic with period $k/2 + 1$.
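These statements are elementary trigonometric identities and are easily checked numerically. The following minimal Python sketch (our own illustration; the function name and tolerances are arbitrary) tabulates the na\"ively extended $S$-matrix and verifies the vanishing, periodicity, and antiperiodicity properties quoted above:
\begin{verbatim}
import numpy as np

def S(j, jp, k):
    # Naive extension of the SU(2)_k modular S-matrix to arbitrary spins
    return np.sqrt(2.0/(k + 2))*np.sin((2*j + 1)*(2*jp + 1)*np.pi/(k + 2))

k, j = 4, 1.0
assert abs(S((k + 1)/2, 0, k)) < 1e-12                 # first nonexistent line
assert abs(S(j + k + 2, 0, k) - S(j, 0, k)) < 1e-12    # periodicity
assert abs(S(j + k/2 + 1, 0, k) + S(j, 0, k)) < 1e-12  # antiperiodicity
\end{verbatim}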
An analogous antiperiodicity phenomenon holds for arbitrary simple, compact $G$. In the WZW model, the fusion rule eigenvalues (computed from the $S$-matrix elements) are equal to the finite Weyl characters of $G$, evaluated on some special Cartan angles that respect the truncation of the relevant representations \cite{Verlinde:1988sn}. For example, in $SU(2)_k$, $\smash{\lambda_\ell^{(n)}} = S_{\ell n}/S_{0n}$ is the Weyl character $\chi_\ell(\theta)$ in \eqref{su2Weylcharacter} evaluated at $\theta/2 = (2n + 1)\pi/(k + 2)$ for $n = 0, \ldots, k/2$, chosen such that the Weyl character of spin $\ell = (k + 1)/2$ vanishes.
The (anti)periodicity of $S$ under $j\to j + (k + 2)/2$ can be understood in terms of the renormalized parameters $K = k + 2$ and $J = j + 1/2$.\footnote{See also \cite{Mikhaylov:2014aoa}. I thank V. Mikhaylov for correspondence on this point.} In the $\mathcal{N} = 2$ theory, a $J$ Wilson line has holonomy $e^{2\pi iJ\sigma_3/K}$, so the equivalence relations are
\begin{equation}
J\sim -J, \quad J\sim J + K \quad \Longleftrightarrow \quad j\sim -1 - j, \quad j\sim j + k + 2.
\label{N2relations}
\end{equation}
The inequivalent values of $j$ are $-1/2, \ldots, (k + 1)/2$. The extremal values $j = -1/2$ and $j = (k + 1)/2$ correspond to identically zero line operators, and the remaining values are the same as in the $\mathcal{N} = 0$ formulation. In other words, in contrast to \eqref{basiccorrelators} for $\mathcal{N} = 0$ $SU(2)_k$ on $S^3$, we have for $\mathcal{N} = 2$ $SU(2)_K$ on $S^3$ that
\begin{equation}
\langle W_J\rangle_{\mathcal{N} = 2} = \frac{S_{\frac{1}{2}J}}{S_{\frac{1}{2}\frac{1}{2}}}, \quad \langle W_J W_{J'}\rangle_{\mathcal{N} = 2} = \frac{S_{JJ'}}{S_{\frac{1}{2}\frac{1}{2}}}, \quad S_{JJ'}\equiv \sqrt{\frac{2}{K}}\sin\left[\frac{(2J)(2J')\pi}{K}\right],
\end{equation}
where the \emph{bare} $J$ must satisfy $J\geq 1/2$ for supersymmetry not to be spontaneously broken. In the supersymmetric theory, labeling lines by $J = 1/2, \ldots, (k + 1)/2$, the $J = 0$ line does not exist due to the vanishing Grassmann integral over the zero modes of the fermion in the $\mathcal{N} = 2$ coadjoint orbit sigma model. The conclusion is that the $\mathcal{N} = 2$ theory has the same set of independent line operators as the $\mathcal{N} = 0$ theory. In the $\mathcal{N} = 2$ formulation, the $S$-matrix $S_{jj'}$ is explicitly invariant under the equivalence relations \eqref{N2relations}.
For general $G$, let $\Lambda = \lambda + \rho$ and $K = k + h$. Then $\Lambda$, modulo the action of the affine Weyl group at level $K$, takes values in an affine Weyl chamber at level $K$. Those $\lambda = \Lambda - \rho$ for $\Lambda$ at the boundary of the chamber correspond to nonexistent lines, while those for $\Lambda$ in the interior are in one-to-one correspondence with weights in the affine Weyl chamber at level $k$ (for further details, see Appendix \ref{holomorphicpol}).
It would be interesting to understand both shifts from an intrinsically 2D point of view. The shift in $k$ is, as in the 3D case, transparent; the shift in $\lambda$ is less so. As the ring of line operators in Chern-Simons with compact gauge group is the fusion ring of the corresponding WZW model, one would like to translate the equivalence between $(J, K)$ and $(j, k)$ into an equivalence between ordinary and super WZW models. One can impose half-BPS boundary conditions to obtain both $(1, 1)$ and $(0, 2)$ WZW models on the boundary of bulk $\mathcal{N} = 2$ CS \cite{Faizal:2016skd}. It appears that $(1, 1)$ is relevant to the level shift while $(0, 2)$ is relevant to holomorphic blocks \cite{Yoshida:2014ssa}. It is well-known that after a suitable redefinition of the super Kac-Moody currents, the $(1, 1)$ super WZW model at level $k$ (for compact, connected, simply connected $G$) is equivalent to a bosonic WZW model at level $k - h$ (with central charge $(1 - \frac{h}{k})\dim G$) plus decoupled free Majorana fermions in the adjoint representation (with central charge $\frac{1}{2}\dim G$), resulting in a super Virasoro algebra with central charge $\hat{c} = \frac{2}{3}c = (1 - \frac{2h}{3k})\dim G$ \cite{DiVecchia:1984nyg, Fuchs:1988gm}. On the other hand, just as pure $\mathcal{N} = 2$ Chern-Simons is the bosonic theory plus some decoupled auxiliary fields, the corresponding $(0, 2)$ WZW model in 2D is the bosonic WZW model plus some decoupled fields; its left-moving sector is simply a non-supersymmetric chiral WZW model.
Finally, it is interesting to note that similar truncations of Wilson loop representations exist due to quantum relations in $\mathcal{N} = 2$ Chern-Simons-matter theories, even though we do not expect an equivalence to a WZW model in this case \cite{Kapustin:2013hpk}.
\section{Discussion}
Using $SU(2)$ as a case study, we have supersymmetrized the coadjoint orbit quantum mechanics on a Wilson line in flat space from both intrinsically 1D and 3D points of view, providing several complementary ways to understand the shift in the representation $j$. We have described how to extend this understanding to certain compact Euclidean manifolds. For some classes of observables in Chern-Simons theory, the existence of an auxiliary supersymmetry lends itself not only to conceptual unity, but also to increased computability.
For arbitrary simple groups, one has both generic and degenerate coadjoint orbits, corresponding to quotienting $G$ by the maximal torus $T$ or by a Levi subgroup $L\supset T$ (see \cite{Bernatska:2008} and references therein). For example, the gauge group $SU(N + 1)$ has for a generic orbit the phase space $SU(N + 1)/U(1)^N$, a flag manifold with real dimension $N^2 + N$ (corresponding to a regular weight); on the other hand, the most degenerate orbit is $SU(N + 1)/(SU(N)\times U(1))\cong S^{2N + 1}/S^1\cong \mathbb{CP}^N$, which has $2N$ real dimensions and a simple K\"ahler potential (corresponding to a weight that lies in the most symmetric part of the boundary of the positive Weyl chamber).\footnote{A regular weight $\lambda$ satisfies $\lambda(\alpha)\neq 0$ for all roots $\alpha$; otherwise, the coadjoint orbit is isomorphic to $G/L$ where the Lie algebra of the Levi subgroup $L$ is that of $T$ adjoined with all roots $\alpha$ such that $\lambda(\alpha) = 0$.} The quantization of the phase space $\mathbb{CP}^N$ is well-known and can be made very explicit in terms of coherent states (see, e.g., Section 5 of \cite{Szabo:1996md}). The Fubini-Study metric for $\mathbb{CP}^N$ follows from covering the manifold with $N + 1$ patches with the K\"ahler potential in each patch being the obvious generalization of that for $SU(2)$. In principle, one can carry out a similar analysis with $SU(N + 1)$ Killing vectors. We will not attempt the full analysis in this paper. We simply remark that in general, the shift of a fundamental weight by the Weyl vector (half-sum of positive roots, or sum of fundamental weights) is no longer a fundamental weight, so one would need a qualitatively different sigma model than the original to describe the coadjoint orbit of the shifted weight. An option is not to work in local coordinates at all, along the lines of \cite{Beasley:2009mb} (however, this approach seems harder to supersymmetrize).
Finally, perhaps this story is more natural in a setting with twice as much supersymmetry (3D $\mathcal{N} = 4$), where one has the option of twisting spatial rotations by either of the $SU(2)$ $R$-symmetry groups, allowing for the construction of $1/4$-BPS Wilson or vortex loops supported on arbitrary curves \cite{Assel:2015oxa}.
\section*{Acknowledgements}
I thank B. Le Floch, S. Giombi, I. Klebanov, V. Mikhaylov, S. Pufu, P. Putrov, N. Seiberg, H. Verlinde, B. Willett, E. Witten, and I. Yaakov for helpful discussions, correspondence, and (in some cases) comments on a preliminary draft. I would like to give a special thanks to N. Seiberg for suggesting this line of investigation. I am also grateful to M. Dedushenko, S. Pufu, and R. Yacoby for collaboration on related work. I am supported in part by the NSF GRFP under Grant No.\ DGE-1656466 and by the Graduate School at Princeton University.
|
2,869,038,156,907 | arxiv | \section{Introduction}
One flagship of AGN studies resides in the unique opportunity that X-rays offer to probe directly the innermost regions around the central black holes.
In fact, AGN X-ray spectra, which exhibit numerous emission and absorption lines, combined with the observed fast variability of both continuum and lines, provide unique tools to measure the velocities, ionization states, time variations and geometries of the accretion/ejection flows surrounding supermassive black holes (SMBHs) \citep{done_2010_lessons_arxiv, fabian_2016_reverb, Kaastra_2017_AN, reynolds_2019_nat}.
With the advent of the high throughput, high spectral resolution imaging and grating spectrometers on-board XMM-Newton and Chandra, the field has seen a number of detailed spectral and timing studies flourish, addressing two broad topics: i) the SMBH-accretion disk systems via the relativistic reflection component from the accretion disk and the inferred SMBH spins (e.g. \citet{tanaka_1995,nandra_1997,brenneman_2006_apj,fabian_2009_nature, zoghbi_2010_firstlag, demarco_2013_mnras, brenneman_2013, fabian_2017_AN, kara_2016_AN,garcia_2019_wpusds,zoghbi_2019_wpusds}), and ii) the study of AGN-driven outflows (from low velocity warm absorbers to more extreme ultra-fast outflows, hereafter UFOs) thought to originate from the accretion disk, the inner broad line region and/or the inner torus via a yet unknown physical process \citep{reeves_2003_outflowpds456, pounds_2003_pg1211, blustin_2005_aa, tombesi_2010_sampleufo, kaastra_2014, mehdipour_2015, nardini_2015, cappi_2016, parker_2018_b}.
Progress on both topics is of great importance and has wide-ranging implications. For example, black hole spin encodes information about the black hole growth history, may play an active role in launching relativistic jets and energetic outflows that shape the evolution of the host galaxy, determines the radiative accretion efficiency, and sets the magnitude of some of the most extreme general relativistic phenomena observable in the Universe, such as gravitational redshift and light bending \citep{reynolds_2019_nat}.
Similarly, massive outflows at very high velocity (up to $\sim 0.3~\rm c$), whose mere existence has been a challenge to standard theoretical models of wind formation, may well be responsible for the so-called AGN feedback and explain the AGN-host galaxy co-evolution (e.g. \citet{tombesi_2019_wpusds,laha_2019_wpusds}, and references therein).
Remarkably, both phenomena (relativistic reflection and fast, massive outflows) are often seen together \citep{gallo_2019,walton_2019}. Consequently, understanding and disentangling their precise contributions has often been a matter of debate \citep{boller_2002, tanaka_2004, fabian_2009_nature, miller_2010, zoghbi_2011}. This is because the emission and absorption features imprinted by partial covering, multiple ionization absorbers can mimic, within available data, those expected from reflection components, and vice versa \citep{done_2007, miller_2009_mnras,gallo_and_fabian_2011, gallo_and_fabian_2013,miller_2013_apj}.
In order to shed light on these studies, the key is to obtain a combination of high spectral resolution (to detect lines) and high throughput (to fill all energy channels), such as that provided by the X-IFU instrument onboard Athena \citep{barret_2018_spie}. It is therefore not a surprise that the two above topics are core science for Athena \citep{nandra_2013_sp,dovciak_2013,cappi_2013_sp}. The X-IFU combines a better than 2.5 eV spectral resolution up to 7 keV, a peak effective area of $\sim 1$ m$^2$ at 1 keV, and the capability to observe the brightest known AGN with $\sim 100$\% throughput \citep{barret_2018_spie}.
Here we investigate for the first time the accuracy reached by X-IFU observations in measuring black hole spins, the geometry of the reflection and the parameters of the absorbers, considering realistic, though complex, multi-component AGN spectra. In the following, we first present our methodology to model, simulate and fit spectra (\S \ref{methodology}), present the results of a set of representative simulations highlighting some key parameters of the model (\S\ref{simulations}), describe the prospects of measuring ultra fast outflows in high redshift objects (\S\ref{ufo}), and introduce a method to estimate the systematic uncertainties induced by calibration errors (\S\ref{calib}). The results are discussed in \S\ref{discussion}, where a comparison with similar studies is presented.
\section{Methodology}
\label{methodology}
\subsection{Model settings}
To simulate AGN X-IFU spectra, we assume an underlying continuum X-ray emission which consists of a hard cutoff power law component and its relativistically smeared ionized reflection, assuming a lamp post geometry for the irradiating source ({\it relxilllp}\footnote{The model can be downloaded from \url{http://www.sternwarte.uni-erlangen.de/~dauser/research/relxill/}} in XSPEC) \citep{dauser_2013_mnras,garcia_2014_apj}. The free parameters of the model are the photon index ($\gamma$) of the incident continuum, the height of the lamp post (h, in units of gravitational radius, R$_{\rm g} = GM/c^2$), the black hole spin parameter (a), and the inclination (Incl), ionization ($\log \xi$), and iron abundance (A$_{\rm Fe}$\ in solar units) of the accretion disk. The high-energy cutoff of the power law is fixed to 300 keV, meaning that there is no curvature in the X-IFU energy range. When simulating the spectrum, the reflection fraction ({\it reflfrac}, hereafter R$_{\rm f}$) can be either forced to a specified value (setting the model parameter {\it fixReflFrac} to 0) or computed self-consistently within the model and fixed to the lamp post value (setting {\it fixReflFrac} to 1), as defined in \citet{dauser_2016_aa}. The inner disk extends from the radius of the (spin dependent) innermost stable circular orbit up to 400 gravitational radii. The height of the compact source is constrained to lie within 3 and 10 R$_{\rm g}$, consistent with the observations \citep{fabian_2009_nature,demarco_2013_mnras,emmanopulous_2014_mnras,gallo_2015_mnras}. Here we consider only positive black hole spin values, limited to 0.998.
The primary power law plus disk reflected emission are then seen through three absorbers of varying column density (N$_{\rm H~1,2,3}$), ionization parameter ($\log \xi_{1,2,3} $) and covering factor ($cvf_{1,2,3}$). The absorber thought to be closer to the black hole has a higher column density and higher ionization, and the other two have non overlapping, though contiguous, N$_{\rm H}$\ and ionization parameter ranges. Here we arbitrarily constrain the covering factor to range between 0.4 and 0.9. The absorbers are modeled with {\it zxipcf}\ in XSPEC. {\it zxipcf}\ uses a grid of XSTAR photoionised absorption models (calculated assuming a micro-turbulent velocity of 200 km/s) for the absorption \citep{kallman_2001,reeves_2008}. Making the turbulent velocity a free parameter of the model will soon be implemented (C. Done, private communication).
Reflection on cold distant neutral material is also accounted for, and is itself subject to obscuration by the two most distant, less ionized absorbers. Cold reflection is modeled using the {\it xillver} model \citep{garcia_2013_apj} (but see e.g. \citet{tanimoto_2019_apj} for a recent discussion on torus based models). Only the reflected component is computed, setting {\it reflfrac}=-1. The power law index of the irradiating source is tied to that of the primary emission (with again the same high energy cutoff set to 300 keV). The iron abundance of the reflector is set to 1 and its ionization parameter to $\xi = 1$ ($\log \xi$=0), meaning an almost neutral reflector. It is known that the resolution of the {\it xillver} model, currently 17 eV at 6 keV \citep{garcia_2010_apj}, is significantly worse than the 2.5 eV X-IFU spectral resolution. However, for the prime focus of this paper, this is unlikely to be an issue, as we are interested primarily in the relativistically smeared reflection component, whose broadening exceeds the resolution of the model. Reflection models with finer resolution, allowing the X-IFU capabilities to be fully exploited, may become available (J. Garcia, private communication).
\begin{table}
\caption{Model name, parameter and range of values (between Min and Max) assumed in the simulations (see text for details), where N$_{\rm{H}}$, $\xi$, height (h), and inclination (Incl) are given in units of 10$^{22}$cm$^{-2}$, erg cm s$^{-1}$, R$_{\rm g}$, and degrees, respectively.}
\label{table:1}
\centering
\begin{tabular}{c c c c }
\hline\hline
Model & Parameter & Min & Max \\
\hline
TBabs & N$_{\rm H}$\ & 0.01 & 0.05 \\
\hline
& N$_{\rm{H 1}}$ & 0.3 & 0.6 \\
$\rm zxipcf_1$ & $\log \xi_1$ & 0.5 & 1.5 \\
& $cvf_1$ & 0.4 & 0.9 \\
\hline
& N$_{\rm{H 2}}$ & 0.6 & 1.0 \\
$\rm zxipcf_2$ & $\log \xi_2$ & 1.5 & 3.0 \\
& $cvf_2$ & 0.4 & 0.9 \\
\hline
& N$_{\rm{H 3}}$ & 5 & 10 \\
$\rm zxipcf_3$ & $\log \xi_3$ & 2.5 & 4.0 \\
& $cvf_3$ & 0.4 & 0.9 \\
\hline
& a & 0.05 & 0.95 \\
&$h$& 3.0 & 10.0 \\
& $\gamma_{relxilllp}$ & 1.7 & 2.2 \\
$\rm relxilllp $& $\log \xi $ & 2.0 &\ldots \\
& A$_{\rm Fe}$ & 1.0 & 2.0 \\
& Incl & 30 & \ldots \\
& R$_{\rm f}$ & \ldots & 2.0 \\
\hline
& A$_{\rm Fe}$ & 1.0 & \ldots \\
$\rm xillver $& $\log \xi $ & 0 &\ldots \\
& $\gamma$ & $\gamma_{relxilllp}$ & \ldots \\
& Incl & 30. & \ldots \\
\hline
\end{tabular}
\end{table}
We first consider a system inclination of 30 degrees and a redshift of all the components set to 0 (unless mentioned otherwise). Galactic absorption is modeled through TBabs \citep{verner_1993,wilms_2000}, with N$_{\rm H}$ allowed to vary between 1 and $5 \times 10^{20}$ cm$^{-2}$. In XSPEC terminology, the model considered is $\rm TBabs \times (\rm zxipcf_1 \times zxipcf_2 \times zxipcf_3 \times relxilllp + zxipcf_1\times zxipcf_2\times xillver)$. The main parameters of the model are presented in Table \ref{table:1} together with their allowed range of variations. These values are estimated from \cite{delacalle_2010, tombesi_2010_sampleufo, laha_2014}. They can be considered representative of typical values of nearby Seyfert 1 galaxies. An example of a simulated X-IFU spectrum, highlighting the imprint of the various absorbers and the contribution of the different reflectors, is shown in Figure \ref{eeufspec}. Beside the forest of absorption lines present in the spectrum, it is worth noticing that the shape of the ionized reflection component shows multiple bumpy features below $\sim 2.5$ keV (see bottom left panel of Figure \ref{eeufspec}), which are key in constraining the black hole spin, in support of the constraints provided by the broad iron line above 6 keV.
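For concreteness, a minimal PyXspec sketch of how such a model can be assembled is given below. This is schematic only: the parameter names follow the usual {\it relxill}/XSPEC conventions, the suffixes that PyXspec assigns to repeated components may differ, and the parameters of the two absorbers multiplying the {\it xillver} branch must be linked to those of the {\it relxilllp}\ branch, so that each physical absorber carries a single set of parameters.
\begin{verbatim}
import xspec

expr = ("TBabs*(zxipcf*zxipcf*zxipcf*relxilllp"
        " + zxipcf*zxipcf*xillver)")
m = xspec.Model(expr)

# Illustrative parameter settings (see Table 1 for the allowed ranges)
m.relxilllp.a = 0.5        # black hole spin
m.relxilllp.h = 5.0        # lamp post height (Rg)
m.relxilllp.logxi = 2.0    # disk ionization
# Each absorber in the xillver branch is then linked to its counterpart
# in the relxilllp branch, e.g. par.link = "<parameter number>"
\end{verbatim}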
\subsection{Simulation setup and fitting}
The simulations are performed with the PyXspec interface to the XSPEC spectral-fitting program \citep{arnaud}. We use here version 12.10.1 of XSPEC. For simulations intended to sample the spin parameter space, we consider 50 values ranging from 0 to 0.995 in regular spacing. All the other free physical model parameters are drawn from a uniform distribution bounded by their allowed interval of variations (listed in Table \ref{table:1}). The overall model is first normalized to correspond to an absorbed flux equivalent to a 1 mCrab source in the 2-10 keV range (or an absorbed flux of $2 \times 10^{-11}$ ergs/cm$^2$/s). The sample size of 50 is commensurate with the number of known unabsorbed (typically type 1) AGN of similar brightness, which currently populate the Athena mock observing plan\footnote{The Athena mock observing plan can be downloaded from \url{http://www.isdc.unige.ch/athena/document-repository/category/192-general-interest.html}}, and as can be found in common catalogues such as 3XMM-DR8 \citep{rosen_2019}, Chandra \citep{evans_2019} and Swift \citep{oh_2018}.
The normalization of the cold reflection {\it xillver} component is realistically assumed to be one fifth of the {\it relxilllp}\ component. As a sanity check of this assumption, we fitted the 2-10 keV spectrum with a simple power-law model plus three Gaussian lines; the best fit equivalent widths of the emission lines at $\sim$ 6.2, 6.4 and 7.1 keV were 70, 120 and 20 eV, respectively. These values are broadly consistent with typical values measured for the redshifted, neutral and K$_{\beta}$ components of the Fe K line, e.g. \cite{guainazzi_2006_an,nandra_2007_mnras, delacalle_2010}. The spectra are then generated using the latest response matrices of the X-IFU \citep{barret_2018_spie} and the latest background files\footnote{Available for download from \url{http://x-ifu.irap.omp.eu/resources-for-users-and-x-ifu-consortium-members/}. The response files used here are named XIFU\_CC\_BASELINECONF\_2018\_10\_10, which correspond to the configuration of the X-IFU presented at the Instrument Preliminary Requirement Review.}. Note that the background rate for a point source with an extraction radius of 5 arc seconds is less than $2 \times 10^{-4}$ counts/s and is negligible in the simulations (a one mCrab source generates about 100 counts/s). For grouping the spectral bins, we consider the optimal binning scheme of \citet{kaastra_2016}, using the ftool {\it ftgrouppha}. The scheme accounts in particular for the energy dependent spectral resolution of the instrument and the statistics of the spectrum (narrower bins near high count regions and wider bins near low count regions). Depending on the model parameters simulated (e.g. slope of the power law, absorber column densities), the 1 mCrab count rate varies from $\sim 50$ counts/s to $\sim 120$ counts/s over the considered X-IFU fitting energy range (0.3 keV to 11.5 keV). When grouped, the mean energy bin width is less than 3 eV. There are 16 free parameters for the model considered here, and more than 6000 degrees of freedom (for a source of 1 mCrab brightness).
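In practice, the simulation and grouping steps can be scripted along the following lines (a sketch; the response and output file names are placeholders for the actual X-IFU matrices):
\begin{verbatim}
import subprocess, xspec

fs = xspec.FakeitSettings(response="xifu.rmf", arf="xifu.arf",
                          exposure=100000.0, fileName="agn_sim.pha")
xspec.AllData.fakeit(1, fs)            # Poisson realization of the model

# Optimal binning (Kaastra & Bleeker 2016) with the ftgrouppha ftool
subprocess.run(["ftgrouppha", "infile=agn_sim.pha",
                "outfile=agn_opt.pha", "grouptype=opt",
                "respfile=xifu.rmf"], check=True)

xspec.AllData.clear()
spec = xspec.Spectrum("agn_opt.pha")
spec.ignore("**-0.3 11.5-**")          # fit between 0.3 and 11.5 keV
\end{verbatim}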
We use the so-called cstat\ metric in fitting the spectra \citep{cash_1979_apj,kaastra_2017_aa} (see however section \ref{appendix_a} for an illustrative example of the biases introduced by the use of the $\chi^2$ statistic). To fit a spectrum, since with real data we will not know what the model parameters are, we do not initialize the fit with the input model parameters. Instead, we draw tens (up to 50) of sets of randomly distributed parameters in their allowed interval of variations. For the normalization of the primary emission and cold reflection components, we draw two numbers from a uniform distribution bounded as $\pm 50$\% of the input normalizations. To speed up the fitting, we constrain the fit to converge in 50 iterations, with a critical change in the fit statistic $\Delta \rm cstat=0.1$, assuming that any better fit will be found during the error computation (performed on all free model parameters). As some of these starting parameters may be far off from the input spectral parameters, the fit may not reach an acceptable solution within its 50 iterations and is then simply ignored (of the 50 initial sets of parameters, at least one set leads to an acceptable fit from which to launch the error computation). This method has the advantage that it sweeps well over the parameter space, so as to avoid the fit getting trapped in a local minimum.
For computing the errors on the best fit parameters, we consider the set of best fit parameters which provided the lowest cstat. For the parameter of interest, the positive and negative errors are computed by varying its value around its best fit value, freezing it, and fitting the spectrum with all the other free parameters allowed to vary. The value is incremented until the fit statistic exceeds a critical threshold ($\Delta \rm{cstat}=2.706$ for 90\% confidence level errors). The parameter value at $\Delta \rm{cstat}=2.706$ is obtained through interpolation of the $\Delta \rm{cstat}$ curve. This is equivalent to the recommended {\it steppar} procedure in XSPEC. If a new best fit is found along the error computation, the procedure aborts and then restarts on the first free parameter from the newly found best fit. Computing the errors further sweeps over the parameter space, and is often used to get away from local minima in the fitting statistics, e.g. \cite{hurkett_2008}. With the method used to initialize the fit described above, and considering the simulation 1 described below, the mean decrease of cstat\ over 50 simulations along the error computation is $\sim 0.1$ (for a mean value of $\sim 6405$), indicating that the global minimum, hence the best fit, was likely found. This is further supported by computing the goodness of the fit. Following \cite{kaastra_2017_aa}, we compute the goodness of the fit from the expected value and expected variance of the cstat. Again, in simulation 1 described hereafter, the measured cstat\ values for the best fits are all within $\pm 2\sigma$ of their expected values, indicating that the spectral model is acceptable. A visual inspection of the $\Delta \rm{cstat}$ curves was performed to check the behavior of some sensitive model parameters, such as the black hole spin.
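The fit and error computation described above translate into a few PyXspec calls, sketched below (the parameter range in the error command stands for the 16 free parameters of the model):
\begin{verbatim}
xspec.Fit.statMethod = "cstat"
xspec.Fit.nIterations = 50
xspec.Fit.criticalDelta = 0.1
xspec.Fit.query = "yes"
xspec.Fit.perform()
# 90% confidence intervals (delta cstat = 2.706) on all free parameters;
# the error run is restarted whenever a new minimum is found
xspec.Fit.error("2.706 1-16")
\end{verbatim}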
\section{X-IFU spectral simulations in representative configurations}
\label{simulations}
In the simulations, we assume an effective integration time of 100 ks (unless mentioned otherwise), with the implicit assumption that all model parameters remain constant within that duration (e.g. the height of the compact irradiating source, the parameters of the absorbers, etc.).
\subsection{The most conservative case: R$_{\rm f}$=1, A$_{\rm Fe}$=1 for a 1 mCrab source (configuration 1)}
To demonstrate the power of the X-IFU to constrain black hole spins, the very first simulation to be conducted assumes the most conservative case, in which the iron abundance and the reflection fraction are both set to 1, and we consider a mildly ionized disk with the ionization parameter set to $\log \xi$=2. We assume a 1 mCrab source. We simulate 50 spectra with positive spins ranging from 0 to 0.995 in regular spacing, to make sure that the whole spin range is covered. The mean errors on the best fit parameters for the three absorbers and the reflection component are listed in Table \ref{table:2}. As can be seen, the statistical error on the spin parameter is on average $\le 0.1$, and the error on the height of the irradiating source is $\sim 0.3$R$_{\rm g}$.
\subsection{Another conservative case: fixed lamp post geometry with A$_{\rm Fe}$=1 for a 1 mCrab source (configuration 2)}
In this other simulation run, the reflection fraction is computed self-consistently by the model from the current parameter configuration in the lamp post geometry \citep{dauser_2016_aa}, namely from the height of the irradiating source and the spin. R$_{\rm f}$\ is then left as a free parameter of the fit. The iron abundance is conservatively set to 1 and the ionization parameter of the accretion disk remains set to 2. We again consider 50 spectra with positive spins ranging from 0 to 0.995 in regular spacing, while for each spin, $h$ is drawn from a uniform distribution between 3 and 10 R$_{\rm g}$. The mean errors on the best fit parameters for the three absorbers and the reflection component are listed in Table \ref{table:2}. The best fit parameters (a, $h$, R$_{\rm f}$) are shown in Figure \ref{conf2}. The reflection fraction computed from the model is everywhere smaller than 2, but larger than in the most conservative case discussed above. It goes from $\sim 1.2$ for low spin values up to $\sim 1.9$ at the highest spins. The higher reflection fraction at high spins compensates for the increased smearing of the reflection features, likely explaining why the error bars on the spin remain similar across the spin range. As listed in Table \ref{table:2}, the fitted spin values have a mean error of $\sim 0.05$ across the spin range considered, while the height of the irradiating source is accurate to $\sim 0.2$R$_{\rm g}$.
\subsection{Setting R$_{\rm f}$=2, A$_{\rm Fe}$=2 for a 1 mCrab source (configuration 3)}
Although its physical origin has often been debated (but see the hypothesis of radiative levitation by \cite{reynolds_2012}), iron overabundance and large R$_{\rm f}$\ have often been reported from AGN X-ray spectra, with values reaching 10 and 5, respectively, in the most extreme cases \citep[and references therein]{fabian_2009_nature, risaliti_2013, parker_2018}. Similarly, iron overabundance is also inferred from fitting binary black hole spectra \citep{garcia_2018_aspc}, with values several times solar being routinely found. Here we assume a reflection fraction of 2 and an iron overabundance by a factor of 2 at maximum, which may not be considered such an extreme case after all. R$_{\rm f}$\ is fixed in faking the spectra, and then left as a free parameter of the fit. 50 sets of the remaining 15 parameters are drawn from their uniform distributions, with the spin ranging from 0 to 0.995 in regular spacing. The mean errors on the best fit parameters for the three absorbers and the reflection component are listed in Table \ref{table:2}. The best fit parameters against the input values of the model are presented in Figure \ref{conf3}. There are several noticeable features in this figure. First, the spin parameter is very well constrained, and as expected, as the smearing increases with the spin, the error on the recovered spin increases towards the highest spin values (this is not compensated by a larger R$_{\rm f}$\ as it is fixed to 2 in the simulations). Nevertheless, the mean error on the spin parameter is $\lesssim 0.05$ across the range of spins considered. Second, in the lamp post geometry, the height of the compact source is also very well constrained, with a mean error of the order of $\sim 0.2 $ R$_{\rm g}$. R$_{\rm f}$\ as recovered by the fit is shown in Figure \ref{conf3_ref}. R$_{\rm f}$\ is again determined with an accuracy of $\sim 2$\%. Similarly, the power law index of the hard irradiating source has a negligible error (0.003), showing no bias against its input value (see section \ref{appendix_a} for the bias that using $\chi^2$ as the fitting metric would introduce). Finally, the parameters of the three absorber components are very well recovered, most notably the ionization parameters (typical errors of $\sim 2$\% over their range of variation). As expected, at the highest ionization the best fit error increases, as there are fewer lines in the spectra to hook the fit. A degeneracy between N$_{\rm H}$\ and the covering factor may be expected (a higher N$_{\rm H}$\ with a smaller covering factor can produce a fit equivalent to a smaller N$_{\rm H}$\ with a higher covering factor). Despite this, the recovered values are all consistent or very close to their input values, demonstrating the power of high resolution and high throughput spectroscopy to decipher multiple components, narrow and broad, in complex AGN spectra. Not shown on this summary plot, it should be added that the normalizations of the two reflection components and the galactic N$_{\rm H}$\ are also consistent with their input values.
\begin{table*}[!t]
\caption{Mean 90\% confidence level one sided errors from the best fit spectral parameters of the various configurations of the simulations. Only the parameters of the three absorbers and those of the reflection component are listed. The mean one sided errors are the mean of the positive and negative errors. Note that the positive error on the spin is bounded by the maximum spin value of 0.998. Configuration 1b corresponds to the same simulations as for configuration 1 but using the WFI responses \citep{meidinger_2018_spie}. Configuration 3b is identical to configuration 3 but a 5\% energy independent systematic error has been added to the data. The integration time for configurations 1 to 3 and 7 to 9 is 100 ks, for configuration 4 it is 150 ks, and for configurations 5 and 6 it is 25 ks. The source intensity is 1 mCrab in configurations 1 to 3, 0.1 mCrab in configuration 4, and 0.5 mCrab in configurations 5 to 9. Configurations 7 to 9 are identical to configuration 3 but the iron abundance, the system inclination and the disk ionization are allowed to vary, as indicated in Table \ref{table:3}.}
\label{table:2}
\centering
\begin{tabular}{cccccccccccccc}
\hline\hline
Conf. & $\Delta$N$_{\rm{H 1}}$ & $\Delta\log\xi_1$ & $\Delta cvf_1$ &
$\Delta$N$_{\rm{H 2}}$ & $\Delta\log\xi_2$ & $\Delta cvf_2$ &
$\Delta$N$_{\rm{H 3}}$ & $\Delta\log\xi_3$ & $\Delta cvf_3$ &
$\Delta h$ & $\Delta a$ & $\Delta \gamma$ & $\Delta$ R$_{\rm f}$ \\
\hline
1 & 0.011 & 0.013 & 0.010 & 0.052 & 0.036 & 0.032 & 0.498 & 0.024 & 0.027 & 0.296 & 0.083 & 0.003 & 0.027 \\
1b & 0.044 & 0.053 & 0.049 & 0.164 & 0.074 & 0.081 & 1.579 & 0.063 & 0.097 & 0.592 & 0.148 & 0.004 & 0.042 \\
2 & 0.012 & 0.014 & 0.012 & 0.056 & 0.036 & 0.034 & 0.475 & 0.022 & 0.023 & 0.217 & 0.048 & 0.003 & 0.036 \\
3 & 0.013 & 0.018 & 0.015 & 0.058 & 0.036 & 0.034 & 0.595 & 0.027 & 0.026 & 0.178 & 0.049 & 0.003 & 0.038 \\
3b & 0.016 & 0.022 & 0.019 & 0.065 & 0.035 & 0.030 & 0.521 & 0.021 & 0.023 & 0.180 & 0.045 & 0.003 & 0.042 \\
4 & 0.043 & 0.050 & 0.040 & 0.174 & 0.086 & 0.077 & 1.574 & 0.066 & 0.068 & 0.595 & 0.168 & 0.009 & 0.119 \\
5 & 0.043 & 0.049 & 0.036 & 0.169 & 0.076 & 0.068 & 1.506 & 0.066 & 0.078 & 0.610 & \ldots & 0.009 & 0.066 \\
6 & 0.041 & 0.044 & 0.035 & 0.112 & 0.061 & 0.055 & 2.162 & 0.077 & 0.082 & 0.604 & \ldots & 0.008 & 0.063 \\
\hline
7 & 0.027 & 0.028 & 0.026 & 0.090 & 0.045 & 0.040 & 0.753 & 0.031 & 0.031 & 0.495 & 0.113 & 0.004 & 0.070 \\
8 & 0.027 & 0.027 & 0.030 & 0.074 & 0.045 & 0.046 & 0.767 & 0.044 & 0.041 & 0.543 & 0.107 & 0.004 & 0.051 \\
9 & 0.021 & 0.025 & 0.017 & 0.063 & 0.037 & 0.028 & 0.660 & 0.026 & 0.029 & 0.370 & 0.118 & 0.005 & 0.054 \\
\hline
\end{tabular}
\end{table*}
\subsection{Setting R$_{\rm f}$=2, A$_{\rm Fe}$=2 for a 0.1 mCrab source (configuration 4)}
Given the small uncertainties on the spin determination in Figure \ref{conf3}, it is tempting to investigate how the X-IFU would perform on sources ten times weaker, opening the possibility to explore the spin distribution of weaker Seyfert galaxies and/or more distant quasars. So we repeated the simulations above, but assuming a source flux of 0.1 mCrab, equivalent to $2 \times 10^{-12}$ ergs/cm$^2$/s\ (2-10 keV), allowing the integration time to increase to 150 ks (to compensate partly for the ten times lower brightness). The mean errors on the best fit parameters for the three absorbers and the reflection component are listed in Table \ref{table:2}. The results of the simulations are shown in Figure \ref{conf4} for a sample of 20 simulations. As can be seen, the spin determination is less accurate, yet the error bars on the spin parameter are typically $\sim 0.17$, with the same tendency as above for lower spin values to be determined more accurately, as expected given the sharper line features then.
\subsection{Recovering the height of the compact source on shorter timescales (configuration 5)}
Assuming that the spin of the black hole is known, it is worth investigating how the height of the compact source could be measured on timescales of the order of 25 ks, thought to be commensurate with the characteristic variability timescale of these sources, i.e. typically bright nearby Seyfert galaxies \citep{ponti_2012}. We consider here a 0.5 mCrab source with a spin parameter of 0.5. We assume a conservative iron abundance for the disk, an ionization $\log\xi =$ 2, and R$_{\rm f}$\ computed self-consistently from the {\it relxill}\ model in the lamp post geometry, allowing the height of the irradiating source to vary between 3 and 10 R$_{\rm g}$. We simulate 25 X-IFU spectra, with different corona heights, power law indices, and absorber parameters. We leave the reflection fraction a free parameter of the fit. The mean errors on the best fit parameters for the three absorbers and the reflection component are listed in Table \ref{table:2}. The best fit results for the compact source height and the reflection fraction are shown in Figure \ref{conf5}. On a 25 ks timescale, the accuracy with which $h$ can be measured is about $0.6$ R$_{\rm g}$, while R$_{\rm f}$\ is determined with an accuracy of 5\%. Combining such spectral information with timing analysis (e.g. measuring time lags) would enable a detailed mapping of the accretion geometry around the black hole.
\subsection{Recovering the parameters of the UFOs (configuration 6)}
We have shown previously the capability of the X-IFU to separate the three absorbers imprinting on a complex reflection spectrum. Next we focus on the third, high density, high ionization absorber, when blueshifted. In low resolution AGN X-ray spectra, these absorbers manifest as narrow, blueshifted Fe K-shell absorption lines from Fe XXV/XXVI, with inferred radial velocities between 0.03 and 0.3 c \citep{tombesi_2010_sampleufo, gofford_2013}. As the blueshift increases, the strongest high energy absorption lines due to iron get shifted towards higher energies and separate clearly from the relativistic iron line, but fall in an energy range where the effective area of the X-IFU decreases sharply. Yet, as shown below, constraints on the parameters of this absorber are expected to come also from the low-energy absorption lines, which are well resolved by the X-IFU.
We carry out a set of simulations, considering a fixed black hole spin (0.5), the reflection computed in a fixed lamp post configuration, and blueshifting the third absorber component with velocities ranging between $-0.3$ and $-0.05$ times the speed of light, with the parameters of the three absorbers again drawn from uniform distributions bounded in their interval of variations listed in Table \ref{table:1} (the redshift of the two other absorbers remains 0). We keep the same {\it zxipcf}\ model for the absorber, but we smear it with a Gaussian (gsmooth in XSPEC), to account for an additional broadening of 1000 km/s at 6 keV, consistent with UFO observations \citep{tombesi_2011}. This smears out the absorption features, thus reducing the benefit of a high resolution spectrometer for that component, while the high resolution remains crucial to separate the other absorbers. The XSPEC model used is $\rm TBabs \times (\rm zxipcf_1 \times zxipcf_2 \times (zxipcf_3 \otimes gsmooth) \times relxilllp + zxipcf_1\times zxipcf_2\times xillver)$.
We assume a 0.5 mCrab source and 25 ks for the spectrum integration time, again because one would be interested to probe those UFOs on the shortest possible timescales. The mean errors on the best fit parameters for the three absorbers and the reflection component are listed in Table \ref{table:2}. The results of the fit for the case of a turbulent velocity of 1000 km/s are shown in Figure \ref{conf6}. As can be seen, the accuracy with which the absorber parameters are recovered has decreased, due to the smearing, although the redshift (i.e. velocity) of the absorber is measured with a very high accuracy.
\subsection{Varying other key parameters: iron abundance, system inclination, disk ionization (configurations 7 to 9)}
\label{otherparameters}
The above simulations have all considered a fixed iron abundance (A$_{\rm Fe}$=1 or 2), a fixed inclination (30 degrees) and a fixed ionization ($\log \xi$=2) of the disk. It is interesting to see how those parameters can be constrained, if varying and left as free parameters of the model, and which impact this would have on the spin measurement accuracy. We have simulated 3 sets of 25 spectra for a 0.5 mCrab source, the spin covering 0 to 0.995 and an exposure time of 100 ks (R$_{\rm f}$=2, A$_{\rm Fe}$=2). The ranges of allowed variations for A$_{\rm Fe}$, the system inclination and the disk ionization are listed in Table \ref{table:3}. The mean error on each of these parameters is given in the last column of Table \ref{table:3}. The mean errors on the other best fit parameters are reported in Table \ref{table:2}, in the lines corresponding to configurations 7 to 9. As can be seen, those parameters are recovered with high accuracy, most notably the varying disk ionization parameter, which is one of the key parameters defining the reflection spectrum. Leaving those parameters free in the fit degrades the accuracy with which the spin and the height of the irradiating source are measured by about a factor of 2.
\begin{table}
\caption{Allowed range of variations for the iron abundance, the system inclination and the ionization parameter of the disk. The mean one sided 90\% errors on these parameters, from the fitting of 25 simulated spectra, are listed. Each simulation considers a 0.5 mCrab source observed for 100 ks, a reflection fraction of 2, and a uniform distribution of the varying parameter in its allowed range of variations. The errors on the other parameters are listed in Table \ref{table:2}, including the errors on the spin and the height of the irradiating source.}
\label{table:3}
\centering
\begin{tabular}{c l c c }
\hline\hline
Configuration & Parameter & Range & Mean error \\
\hline
7 & A$_{\rm Fe}$ & 1-10 & 0.269 \\
8 & Inclination (deg.) & 10-70 & 0.123 \\
9 & $\log\xi$ & 1-3 & 0.015 \\
\hline\hline
\end{tabular}
\end{table}
\subsection{Soft excess}
The measurement of the reflection parameters relies on the broad band coverage of the X-IFU, not only the iron K$\alpha$ line (6-7 keV), but also the relativistically smeared features below 2 keV or so (see Figure \ref{eeufspec}, bottom left panel). As a matter of fact, a simple test of ignoring the data below 2 keV in the fit shows that it would significantly reduce the accuracy of the best fit parameters for the complex model considered here. Below 2 keV is an energy range in which a soft excess (in addition to the underlying power law component) is often required by the data, with a significant contribution to the total flux. Its origin is still debated. Hypotheses include an extra warm Comptonization, complex partial covering, disk blackbody emission, a reprocessed reflection component and even relativistically blurred high-density reflection \citep{magdziarz_1998, crummy_2006,gierlinski_2006, mehdipour_2015, petrucci_2018, middei_2019,garcia_2019_apj}. Setting aside the fact that the X-IFU, with its unprecedented sensitivity at $\sim 1$ keV, will provide critical insights into the origin of the soft excess, for this paper it is important to test whether the presence of such a soft component, if not related to relativistic reflection, could affect the accuracy with which the reflection parameters and the black hole spin are measured.
We have thus simulated a set of 50 spectra in configuration 3 (R$_{\rm f}$=2, A$_{\rm Fe}$=2, $\log \xi$=2), adding to the input model a steep power law with a photon index ranging between 2.5 and 3.5. Alternative models for the soft excess would be a thermal Comptonization ({\it comptt}-like) or a blackbody model. The exact shape of the soft excess does not matter here: what matters is the number of counts added on top of the reflection spectrum. The corresponding XSPEC model is then $\rm TBabs \times (\rm zxipcf_1 \times zxipcf_2 \times zxipcf_3 \times (relxilllp + powerlaw) + zxipcf_1\times zxipcf_2\times xillver)$. The power law normalization is such that the 0.5-2 keV flux of the power law component is conservatively set to 50\% of the total flux in that energy range. This can be considered conservative, as on average it leads to a 0.5-2 keV unabsorbed flux higher than the 2-10 keV flux (by $\sim 10$\%), while the two are generally found to be comparable, e.g. \citet{miniutti_2009_mnras}. As we are only interested in the errors on the spin and height, the fit starts with the input model parameters and the errors are computed on these two parameters only. We find a mean error on the spin of $\sim 0.13$ and of $\sim 0.36$ R$_{\rm g}$\ on the height. This is to be compared with the values of $\sim 0.05$ and $\sim 0.18$ reported in Table \ref{table:2}. As expected, the accuracy of the spin and height measurements has decreased, because the broad features of the relativistic reflection below 2 keV are diluted by the soft excess. They remain however acceptable in the conservative setting used for the model.
\section{Beyond the local Universe: a z=2.5 AGN}
\label{ufo}
We have demonstrated above that even for moderately bright sources (0.1 mCrab) the X-IFU is able to characterize both the absorption components and the reflection spectra simultaneously and with great precision. We now want to investigate how well it could perform on even more distant (i.e. fainter) sources, considering that significantly redshifting the spectrum would bring the absorption/emission features closer to the peak of the effective area of the X-IFU (see Figure \ref{fig_aeff}), thus partly compensating for the reduction in flux.
The model we consider is a simplification of the model above, in which the first two absorbers are merged into one. The XSPEC model becomes $\rm TBabs \times (\rm zxipcf_1 \times (zxipcf_2 \otimes gsmooth) \times relxilllp + zxipcf_1\times xillver)$. The covering factor of the two absorbers is set to an intermediate value of 0.75. We thus first consider an AGN with a flux of $2 \times 10^{-13}$ ergs/cm$^2$/s\ (i.e. 0.01 mCrab) \citep{georgakakis_2013,martocchia_2017_aa,dadina_2018,baronchelli_2018_mnras}. The normalization of the cold reflection component relative to the relativistic one remains one fifth. The reflection component is computed in a fixed lamp post configuration assuming a black hole spin of 0.5. Note that no meaningful constraints can be derived on the spin at this flux level. We further assume a height of the irradiating source of 4 R$_{\rm g}$\ with A$_{\rm Fe}$=1 and $\log \xi$=2. The redshift of the first absorber is set to the redshift of the source, while a blueshift (between -0.3 and -0.05) is added to the second absorber. A Gaussian velocity broadening of 1000 km/s normalized at 6 keV (rest frame; the index of the Gaussian smoothing function in XSPEC is assumed to be 1) is assumed for the high density absorber. A simulated spectrum corresponding to an exposure time of 100 ks is shown in Figure \ref{fig_distant_agn}, to highlight the imprints of the two absorbers on the spectrum, leaving a forest of absorption lines that will be crucial to measure the redshifts.
To be more quantitative, we simulate 10 spectra with the redshift of the source allowed to vary between 2.4 and 2.6, and a large velocity broadening of 3000 km/s for the high density absorber. The fit is performed between 0.3 and 3.5 keV. Both the source redshift and the redshift of the absorber are then left free in the fit. In the framework of this simplistic model, unsurprisingly, the redshift of the source would be determined with a high accuracy due to the prominent redshifted iron line produced by the distant reflector (statistical error less than $\sim 0.001$). Despite the larger velocity broadening, the blueshift of the high density absorber would be measured with a statistical error much smaller than $\sim 0.01$. The best fit X-ray redshifts for the source and the UFO are plotted in Figure \ref{fig_redshifts}. More detailed simulations, with added complexity to the model, are warranted, but as discussed by \cite{martocchia_2017_aa}, who pushed the flux limit down by yet another order of magnitude, snapshot X-IFU observations may reveal the presence of outflows, imprinting absorption lines around the peak of the effective area of the X-IFU. Such observations may as well provide the X-ray redshift of the source and would probe the occurrence rate of outflows, their temporal variability and their link with the kpc-scale outflows running through the interstellar medium, right at the golden epoch of AGN-galaxy evolution at redshift above 2 \citep{martocchia_2017_aa}.
\section{Accounting for calibration uncertainties}
\label{calib}
It can be anticipated that a high resolution spectrometer such as the X-IFU will be challenging to calibrate. In this section, we investigate how uncertainties in the instrument effective area may affect the present results, which so far have been provided with statistical errors only, not accounting for any systematic errors. The first test we performed was to allocate a 5\% systematic error to the spectra of configuration 3 (A$_{\rm Fe}$=2, R$_{\rm f}$=2). The mean errors on the best fit parameters are reported in Table \ref{table:2} for the so-called configuration 3b. It is encouraging to see that the impact on the accuracy of the best-fit parameters is very small. Next we move on to a more detailed analysis accounting for the X-IFU calibration requirements.
Per its performance requirements, the X-IFU is required to have a knowledge of the shape of the effective area curve better than 3\% ($1\sigma$) across the 0.2 to 10 keV range. In total, with the mirror assembly, the requirement is not to exceed 5\% ($1\sigma$, on axis). In addition, for the X-IFU the normalisation of the effective area should be known with an absolute error lower than 4\% at 1 keV (still at $1\sigma$), with a contribution from the mirror of less than 6\%. As a first step, we restrict our exercise to the X-IFU, ignoring the additional uncertainties arising from the mirror.
At low energies, the X-IFU quantum efficiency is determined by the transmission of the optical/thermal blocking filters and their supporting meshes \citep{barbera_2018_spie, barret_2018_spie}. Those filters are made of polyimide, aluminum and aluminum oxide. On the other hand, at high energies the quantum efficiency derives from the transition edge sensor absorber thickness, currently 1.7 $\rm{\mu m}$ of gold and 4.2 $\rm\mu m$ of bismuth, e.g. \cite{peille_2018_spie}. Here we follow the Monte Carlo simulation approach introduced by \cite{drake_2006_spie} and followed recently by \cite{cucchetti_2018_spie}. We first generate a large number of auxiliary response files (1000) whose shape remains within the envelope of the $\pm 3$\% maximum allowed shape deviation. We do this by varying the thicknesses of the filters and the absorbers, e.g. the thickness of the gold absorber is drawn from a conservative unbounded normal distribution of $\sigma=0.13$ $\rm \mu m$ centered around 1.7 $\rm \mu m$ (\cite{drake_2006_spie} assumed the distribution to be truncated at $\pm 1 \sigma$). The energies at which the maximum deviation is computed are 0.5 keV and 10 keV instead of 0.2 and 12 keV. We ignore uncertainties around the edges of the response. Once the overall shape of the effective area curve is determined, its normalization is drawn from an unbounded normal distribution of mean 1 and $\sigma=0.04$ (the same normalization applies not only at 1 keV but throughout the whole energy band, see Figure \ref{fig_aeff}).
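A schematic sketch of one such Monte Carlo draw is given below. The energy-dependent absorption coefficients are assumed to be available from tabulated optical constants on the response energy grid, and the dispersions adopted for the bismuth and filter thicknesses are illustrative placeholders rather than actual calibration values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def perturbed_area(area_nom, mu_au, mu_bi, tau_filt):
    # mu_au, mu_bi : absorption coefficients of gold and bismuth (1/um)
    # tau_filt     : nominal optical depth of the filter stack
    d_au = rng.normal(1.7, 0.13)   # gold thickness (um)
    d_bi = rng.normal(4.2, 0.30)   # bismuth thickness (um, assumed sigma)
    f    = rng.normal(1.0, 0.03)   # filter thickness scale (assumed sigma)
    qe = (1 - np.exp(-mu_au*d_au - mu_bi*d_bi)) \
       / (1 - np.exp(-mu_au*1.7  - mu_bi*4.2))  # absorber stopping power
    tr = np.exp(-tau_filt*(f - 1.0))            # filter transmission ratio
    norm = rng.normal(1.0, 0.04)                # overall normalization
    return area_nom*qe*tr*norm
\end{verbatim}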
There are two possible approaches to estimate the errors linked to the uncertainties in the instrument response. \citet{drake_2006_spie} proposed to start from a single simulated spectrum generated from the nominal response file, and to fit it with the newly generated response files. With this approach, the statistics being the same, the perturbations induced by the calibration uncertainties are better highlighted. On the other hand, the statistics of the one single spectrum to fit may play an important role in the results, as we deal with spectra with millions of counts (i.e. the fit may converge to the same best fit if the changes in the response shape are not significant enough). In contrast, \citet{cucchetti_2018_spie} faked one spectrum per newly generated response file and fitted each of them with a single, nominal response file. In both cases, we wish to compare the distribution of the best fit parameters with the distribution expected from pure Poisson statistics. We estimate the latter by faking the same number of spectra with the nominal response file, fitting them and recording the best fit parameters (i.e. the usual way of estimating best fit errors from Monte Carlo simulations).
Since, as in real life, the end user of the X-IFU will likely be provided with one single response file to fit data affected by calibration uncertainties, we prefer here the approach of \cite{cucchetti_2018_spie}. For the sake of this exercise, we simulate spectra for the so-called configuration 3 (1 mCrab, 100 ks, A$_{\rm Fe}$=2, R$_{\rm f}$=2), and an intermediate spin value of 0.5. It is important to note that before all fits, the simulated spectra are binned optimally, accounting for the response file used \citep{kaastra_2016}. All fits are performed between 0.3 and 11.5 keV, and as we are interested in assessing only the distribution of best fit parameters, they all start with the model input values.
The distributions of the best fit values for the main reflection parameters (the spin, the height of the irradiating source, the reflection fraction and the power law index) are compared in Figure \ref{fig_res_cal_uncertainties}. If systematic calibration errors were important, the two distributions should differ significantly, with the distribution from Poisson statistics being narrower than the one accounting for both Poisson statistics and calibration uncertainties. As can be seen from Figure \ref{fig_res_cal_uncertainties}, for that particular case, the calibration errors considered here (at the X-IFU level only) are small, in particular for the spin, which is of prime interest here (the mean error on the spin increases from $\sim 0.02$ to $\sim 0.03$). It is also interesting to note that the calibration uncertainties, as modeled here, do not introduce any bias in the distributions.
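As a rough quantitative complement to this visual comparison, the two best fit distributions can also be compared, e.g., with a two-sample Kolmogorov-Smirnov test. The sketch below uses Gaussian stand-ins whose widths match the spin errors quoted above; with real simulation outputs, the two arrays would hold the best fit spins of the two runs.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
spins_stat = rng.normal(0.5, 0.02, 1000)  # Poisson statistics only
spins_cal  = rng.normal(0.5, 0.03, 1000)  # Poisson + calibration scatter
D, p = stats.ks_2samp(spins_stat, spins_cal)
print("KS statistic = %.3f, p-value = %.2e" % (D, p))
\end{verbatim}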
To consider the case in which all calibration errors at the mirror assembly plus instrument level would arise from X-IFU alone (i.e. changing 3\% to 5\% and 4\% to 10\%), we have repeated the simulations above, regenerating another set of 1000 response files. The systematic errors increase as expected but remain small for the spin parameter (increasing from $\sim 0.02$ to $\sim 0.04$). The error on the height goes at the same time from $\sim 0.2$ to $\sim 0.4$ R$_{\rm g}$.
It should be anticipated that the calibration errors may affect different parameters differently, depending on whether they are sensitive to the low energy part of the response or to the high energy part, or whether they relate to a continuum component or to a component with discrete features, such as absorption/emission lines. A more detailed analysis of the calibration requirements for X-IFU, and of their impact on Athena driving science cases (such as the detection of the missing baryons in the warm hot intergalactic medium), will be presented in a follow-up paper, in which we will also compare our method of assessing the systematics with the method of \citet{drake_2006_spie}.
\section{Discussion}
\label{discussion}
We have demonstrated for the first time in a quantitative way the power of high resolution spectroscopy to decipher complex multi-component AGN X-ray spectra. Next, we briefly discuss the advances permitted by the X-IFU in probing black hole spins, accretion-ejection physics, and the issue of iron over-abundances found in AGN X-ray spectra, and conclude with a comparison with similar feasibility studies using different instrumental settings.
\subsection{New insights on the black hole spin distribution}
Measuring the distribution of spins of a large (greater than $\sim$ 50) sample of super massive black holes may tell us about their growth channels, including the relative contributions of mergers versus prolonged accretion \citep{berti_volonteri_2008,dovciak_2013}. Models predict that mergers would lead to a flat spin distribution while prolonged disk-mode accretion would end up with black holes spinning rapidly. Of the few tens of AGN with known spin values, the distribution is peaked towards high spins \citep{vasudevan_2016_mnras,reynolds_2019_nat}. This has been argued to be a consequence of the fact that, in a flux limited sample, black holes with higher spins, accreting at the same rate, are likely to be over-represented because of their higher radiation efficiency (from $\eta = 0.057$ for a non-spinning black hole to $0.32$ for maximal spin $a = 0.998$). Another bias that may be present in the current data comes from the fact that higher black hole spins lead to larger R$_{\rm f}$\ \citep{dauser_2014_mnras}.
{In the near future, the eROSITA All-Sky Survey, reaching a 2-10 keV sensitivity limit about two orders of magnitude lower than the previous HEAO-1 All-Sky Survey \citep{piccinotti_1982_apj}, will increase the number of objects at z larger than 1 and brighter than 0.1 mCrab from the handful known today to several hundreds \citep{comparat_2019_mnras}. Interestingly enough, recent very deep (4 Ms) Chandra exposures on the Chandra Deep Field South indicate that high z and nearby objects may share similar spectral properties, in particular the presence of a broad iron line \citep{baronchelli_2018_mnras}. Thus, with the accuracy reached on the spin measurement at the 0.1 mCrab flux level (0.17 in the configuration 4 above), and thanks to its unbiased sensitivity to low spins (even for relatively high source heights of 10 R$_{\rm g}$), the results presented in this paper demonstrate that the X-IFU carries the potential to provide unprecedented constraints on the intrinsic black hole spin distribution, up to z$\sim$1-2. Such a result would have wide-ranging implications, e.g. for constraining black hole growth models \citep{berti_volonteri_2008}, but also for correcting luminosity functions or constraining black hole population synthesis models as discussed by \cite{vasudevan_2016_mnras}.}
\subsection{New studies of accretion and ejection flows}
The X-IFU will open a spectroscopic window to address strong gravity accretion physics and probe outflows over a range of physical parameters for the corona/reflection component of accretion disks and of AGN-driven winds, down to unprecedented short time scales and faint source fluxes. This will enable new, currently mostly unpredictable, types of studies of accretion and ejection phenomena. We address qualitatively a couple of such cases, noting that more extensive simulations would be required.
X-IFU will be able to probe the height of the X-ray compact source with respect to the accretion disk down to less than a fraction of R$_{\rm g}$\ and on time-scales comparable to the X-ray source variability time scales (see Figure \ref{conf5}). In analogy with what is currently done for coronal mass ejections from the Sun, e.g. \citet{Gou_2019}, it is possible that the X-ray coronae in AGN are also formed by magnetic reconnection events on top of an accretion disk. This would lead to strong flaring, massive coronal loops and particle acceleration. The spectral information provided by X-IFU, combined with reverberation lags between the direct and reflected emissions, will probe the geometry and corona-disk structure down to the innermost regions of the accretion disk, where most of the energy is released \citep{dovciak_2013,wilkins_2016_simlags, zoghbi_2019_wpusds}.
Measuring with great precision the different parameters of AGN and QSO-driven winds, such as their ionization parameter, column density and velocity, is key to understanding whether such winds have a sufficiently high mechanical power (typically 0.5 per cent of the bolometric luminosity) to provide a significant contribution to AGN feedback \citep{hopkins_2010_mnras,fabian_2012_nature,cappi_2013_sp}. The wind kinetic energy being proportional to v$^3$ (so that the relative error on the kinetic power is about three times that on the velocity), precise measurements such as the ones shown in Figure \ref{conf6}, i.e. yielding typical errors of less than a few \%, are mandatory. But besides this classic argument, another new opportunity, introduced in \cite{cappi_2013_sp}, is the possibility offered by X-IFU to measure not only their line shifts (i.e. velocity) but their line profiles with unprecedented precision, again down to short time scales and faint sources, i.e. in nearby Seyferts and more distant QSOs. Such information would be key to constrain the launching site and mechanisms of the winds (see \citet{dorodnitsyn_2009_mnras} for detailed simulations of such profiles, and \citet{chartas_2016_apj} for a tentative application to Chandra data).
\cite{done_2007} and \cite{nardini_2015} have shown that the broad FeK emission lines combined with the strong absorption features at higher energies seen in some bright nearby AGN may well be interpreted as P-Cygni line profiles produced by a spherically symmetric wind or shell. Similar P-Cygni profiles should be seen for all absorption lines, but with different shapes and different time variations for the different absorbers. In addition, a realistic flow will be rather radially extended, with a distribution of kinematical, ionization, and dynamical properties along the line of sight, leading to even more complex absorption profiles \citep{proga_and_kurosawa_2010, giustini_and_proga_2012}. Simulating such complex spectra with X-IFU goes beyond the scope of this paper, but clearly X-IFU holds the potential to provide key insights into the wind properties.
\subsection{Iron over-abundance, disk inclination and ionization}
The iron over-abundances inferred from X-ray reflection spectroscopy are one of the most intriguing results, casting some doubt on the reported spin values given the tight relation between reflection parameters and iron abundances. This has motivated the revision of reflection models towards densities above the currently used values, such densities being expected in the vicinity of black holes \citep{garcia_2016_mnras,garcia_2019_wpusds}. Application of high density models to a few selected objects has already shown that the recovered iron abundance was significantly lower than the one obtained with lower-density disk reflection models, e.g. \cite{tomsick_2018_apj,jiang_2019_mnras}. Those models, which are currently under development, have clear signatures at energies below 1 keV. In particular, the enhancement of free-free heating in the atmosphere of the disk, which increases with increasing density, leads to a soft excess. These high density models will be easily testable with X-IFU, which will measure the iron abundance down to solar, together with the reflection component, with high accuracy (see Tables \ref{table:2} \& \ref{table:3}).
We have also shown (\S\ref{otherparameters}) that the disk inclination and ionization will be well constrained. This is very important because inclination measurements could allow comparison of inner disk inclinations to those of the host galaxy stellar disk, thereby putting constraints on the way AGN are fueled. Material propagating inward through the galactic disk or via minor mergers is expected to leave imprints on their average respective alignment \citep{middleton_2016}.
Understanding how much the reflection component should be ionized is also an open and debated issue. The FeK line profile is, in principle, carrying sensitive information on the disk ionization state, but in practice it is often degenerate with the other free parameters of the line profile. As a result, the soft energy band is key to constraining the amount of ionization of the reflection component, as shown in the lower left panel of Figure \ref{eeufspec}, where the disk soft emission quickly becomes very significant at intermediate up to high ionization levels. As a note of caution, it is worth stating that in our lamp-post model the ionization is assumed to be radially constant. Ideally it should be calculated self-consistently with the radial density to account for the centrally peaked illumination expected in a relativistic accretion disk model, see \cite{martocchia_2002, svoboda_2012} and in particular \cite{kammoun_2019} for a consideration of this effect, including also X-IFU simulations.
\subsection{Comparison with other instruments and simulations}
Feasibility studies have so far been carried out considering X-ray spectra with limited spectral resolution, $\sim 100$ eV at 6-7 keV, e.g. as provided by the XMM-Newton EPIC instruments \citep{strueder_2001_aa}, combined with hard X-ray data enabling one to sample the smooth Compton reflection bump above 10 keV, e.g. as provided by NuSTAR \citep{harrison_2013_apj}. We briefly discuss here how these studies compare to those presented in this paper.
\cite{kammoun_2018} have conducted a similar analysis to ours, simulating XMM-Newton EPIC-PN and NuSTAR spectra in the range of 1 to 3 mCrab fluxes. Besides relativistic reflection, they considered a warm absorber and two layers of partially covering neutral absorbers, cold reflection and thermal emission from the galaxy, thus introducing complexity in their spectral model similar to ours. The success rate of measuring their spin is about 50\% (over 60 fits). This rate increases to 100\% for spins larger than 0.8 and a lamp-post height lower than five gravitational radii (because this configuration imprints stronger, easier to detect, relativistic distortions to the spectrum, see also \cite{choudhury_2017}). On the other hand, the success rate goes to zero if the height of the irradiating source is at a distance larger than 5 R$_{\rm g}$. As demonstrated above, X-IFU can measure spins all across the range investigated, and this even for small reflection fractions, an iron abundance of 1, and source heights up to 10 R$_{\rm g}$. Interestingly, \cite{kammoun_2018}, considering two of their failed simulations, with the height of the irradiating source at 11 and 18 R$_{\rm g}$, noticed that Athena WFI simulations would not be more successful (despite the much improved statistics), concluding that this was likely due to the improper sampling of the reflection hump above 10 keV. Because the Compton hump is not properly sampled by X-IFU either, it may be more likely due to a too low reflection fraction (due to the large source heights considered). We have repeated the simulations of Configuration 1 (A$_{\rm Fe}$=1 and R$_{\rm f}$=1) using the WFI response files\footnote{The response files were downloaded from \url{http://www.mpe.mpg.de/ATHENA-WFI/response_matrices.html} and the date of the version used is November 2017. See \cite{meidinger_2018_spie} for a recent description of the WFI instrument.} and found that the accuracy with which WFI recovers the reflection parameters is a factor of $\sim 2$ worse than X-IFU, despite the higher effective area of the WFI at high energy ($\sim 25$\% around 6 keV). At the same time, taking advantage of its better spectral resolution, the X-IFU recovers the parameters of the absorbers with error bars that are a factor of 3 to 4 smaller (see Conf. 1b in Table \ref{table:2}).
\cite{bonson_2016} have considered a model based on {\it relxill}\ only (i.e. without absorbers and cold reflection, see \cite{choudhury_2017} for a discussion of their fitting scheme). They simulated spectra with XMM-Newton EPIC-PN and NuSTAR for the brightest Seyferts, and found that the spin parameter could only be well measured for the most rapidly rotating super-massive black holes (i.e. $a > 0.8$, to about $\pm 0.10$). The error on the spin would reach $\sim 0.30$ at $a=0$ for R$_{\rm f}$=5, a value not considered here. Interestingly enough, in their simulations they found that the addition of NuSTAR hard X-ray data did not improve the spin determination, see Figure 7 of \citet{bonson_2016}. At first sight, the simulations performed here do not seem heavily impacted by the lack of hard X-ray data (above 10 keV), possibly because there is sufficient information across the X-IFU band pass, in particular in the soft X-ray band where the ionized reflection component contributes significantly.
Following up on this, it is worth noting that the power law index is extremely well constrained within our simulations, despite the model complexity. The assumption of a straight power law in the X-IFU band pass is correct for any plausible high-energy cutoff (above tens of keV). \cite{garcia_2015_apj} showed that high-energy cutoffs up to even 1 MeV can be constrained using X-ray data below 100 keV by the sole modeling of the reflection component. This is due to the fact that the reflection spectrum, imprinted by fluorescent lines and other atomic features, depends sensitively on the shape of the emission spectrum of the irradiating source. We have repeated the simulations in configuration 3 (1 mCrab, 100 ks, R$_{\rm f}$=2, A$_{\rm Fe}$=2), leaving the energy cutoff as a free parameter, allowing it to vary between 50 and 200 keV (drawing the initial 50 values from a uniform distribution). Such a cutoff range is consistent with the latest Swift/XRT-NuSTAR observations of type 1 AGN \citep{molina_2019_mnras}, see also \citet{ricci_2017_apjs} and references therein. The mean 90\% confidence level error on the energy cutoff derived from the X-IFU simulations is $\sim 15$ keV over the 50 to 200 keV range, with a tendency for the errors to increase at the upper end of the range. This indeed suggests that meaningful constraints can be obtained on the high energy cutoff from the X-ray data alone. This also means that combining X-IFU data with comparably sensitive hard X-ray data, e.g. from the High-Energy X-ray Probe (HEX-P), proposed as a complementary mission to Athena \citep{madsen_2018_spie}, would set very tight constraints on the reflection parameters, by measuring precisely both the energy cutoff and the Compton hump. It is also worth noting that the shape of the Compton hump, being independent of parameters such as the iron abundance or the disk ionization, would help in removing model degeneracies, in case data are more complex than the ones simulated here (as they will likely be).
Finally, \cite{choudhury_2017} have tested the {\it relxill}\ model with simulated NuSTAR data, and assumptions more extreme than ours, e.g. R$_{\rm f}$\ values up to 10 and iron over-abundances up to 10 as well. They considered NuSTAR spectra accumulated over 100 ks and delivering between 1 and 10 million counts. For the model considered here and a source of 1 mCrab, the rate expected in one NuSTAR module is $\sim 0.5$ counts/s between 3 and 70 keV\footnote{Response files were downloaded from \url{https://www.nustar.caltech.edu/page/response_files}}, meaning that the fluxes they considered would correspond to 20 to 200 mCrab for X-IFU: a flux regime not explored in this paper (and in which there are just a couple of AGN). They found that better constraints are obtained for smaller heights of the irradiating source and larger reflection fractions; yet, the errors that they obtained in the most favorable conditions exceed by at least one order of magnitude those we obtain in our more realistic and complex setting. To take an example, for a spin parameter input of 0 with R$_{\rm f}$=1 and $h=3$R$_{\rm g}$, the 90\% dispersion among their simulations goes from $\sim -0.5$ to $\sim 0.25$, while for sources 200 times fainter the X-IFU would reach an error of $\le 0.1$.
\subsection{Comparison with {\it XRISM}-Resolve}
{The X-ray Imaging and Spectroscopy Mission (XRISM), a JAXA/NASA collaborative mission with ESA participation, is expected to launch around 2021 \citep{tashiro_2018_spie}. It will carry Resolve, a soft X-ray spectrometer, which pairs a lightweight soft X-ray telescope with an X-ray calorimeter spectrometer to provide non-dispersive 5-7 eV energy resolution in the 0.3-12 keV bandpass. As Resolve opens the way to broad band high-resolution X-ray spectroscopy, which we have seen to be critical for the science of interest in this paper, it is interesting to compare how it will perform with respect to X-IFU, despite its lower effective area (about a factor of $\sim 45$ at 1 keV and a factor of $\sim 5$ at 6 keV), at least for the brightest objects. For the sake of this simple comparison, and focusing on the spin measurements, we have simulated a 5 mCrab source in the so-called conservative configuration 1 above (R$_{\rm f}$=1, A$_{\rm Fe}$=1). We have generated 50 spectra with a constant spin spacing between 0 and 0.995 and with an integration time of 100 ks. Setting a favorable case, we have ignored the background and initiated the fit at the model input parameters. The error on the spin parameter has then been computed. In about $\sim 15$\% of the simulations, the fitted spin pegged at the hard limit. The mean error on the spin is $\sim 0.3$. With the same settings, the mean error on the spin from X-IFU observations would be $\sim 0.04$.}
{To summarize, the comparison with the three feasibility studies similar to the one presented here, as well as the comparison with XRISM-Resolve above, clearly demonstrates the advances the X-IFU will permit over existing and future instrumentation.}
\section{Conclusions}
The Athena X-IFU, as currently designed, is predicted to be transformational in many fields of astrophysics, as will be Athena overall, through the complementarity of its science payload \citep{nandra_2013_sp,barret_2013_sf2a,barcons_2017_an,guainazzi_2018_arxiv}. Here we have demonstrated the rather unique and outstanding capabilities of X-IFU for probing AGN spins, AGN surroundings, accretion disk physics, winds and outflows from local to more distant AGN, {using a state of the art reflection model in a lamp post geometrical configuration}. The leap in sensitivity provided by X-IFU derives from its excellent spectral resolution, high throughput and broad band coverage. More feasibility studies of this type, possibly combining spectral-timing analysis, extending the range of models to be tested, {the range of reflection geometries} and the range of objects to be considered (e.g. X-ray binaries), should be performed to further assess and quantify its unique capabilities. The methodology presented here may also serve this purpose.
\section{Appendix A: Biases in $\chi^2$ fitting}
\label{appendix_a} $\chi^2$ statistics is often used as the fitting metric, although its limitations are known, especially in the low count regime. As discussed by \citet{humphrey_2009_apj}, even in the high count rate regime (when the counts per bin get typically larger than $\sim 20$), $\chi^2$ fitting will lead to biased parameter estimates, unless the number of data bins is far smaller than the square root of the number of counts in the spectrum (which is not the case for most simulations presented here). The bias may be comparable to, or even exceed, the statistical error. We have repeated the configuration 1 simulation, replacing the optimal binning scheme of \citet{kaastra_2016} by a standard grouping scheme ensuring that each spectral bin has at least 20 counts. We have used $\chi^2$ statistics. Of all the 16 free parameters of the fit, the photon index of the power law has a very small statistical error (0.003 in Table \ref{table:2}). In Figure \ref{fig_bias_pl_index}, the best fit power law index is reported against the input power law index. As can be seen, a bias is present towards recovering steeper indexes, and the bias exceeds the statistical error. The bias is still present when the data are grouped further to have a minimum of 50 counts per bin. A similar bias was present in the simulations reported by \cite{choudhury_2017}. No such bias is present in our fits based on cstat, as shown in Figure \ref{conf3}. To conclude, for X-IFU data, it is recommended to always use cstat\ in fitting spectra, see also \cite{kaastra_2017_aa} on how cstat\ can be used for statistical tests, such as assessing the goodness of fit of a spectral model, as used here.
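The origin of the bias can be illustrated with a toy problem: when fitting a constant level to binned Poisson data, the minimum of the data-weighted (Neyman) $\chi^2$ is the harmonic mean of the counts, which is biased low, whereas the cstat\ minimum is the unbiased arithmetic mean. A minimal sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
mu, nbins, nsim = 25.0, 2000, 500  # >20 counts/bin, but the number of
                                   # bins exceeds sqrt(total counts)
chi2_hat, cstat_hat = [], []
for _ in range(nsim):
    n = rng.poisson(mu, nbins)     # n = 0 is vanishingly unlikely here
    chi2_hat.append(nbins / np.sum(1.0 / n))  # chi^2 min: harmonic mean
    cstat_hat.append(n.mean())                # cstat min: arithmetic mean

print(np.mean(chi2_hat) - mu)   # bias ~ -1 count, ~10x the ~0.1 stat. error
print(np.mean(cstat_hat) - mu)  # consistent with zero
\end{verbatim}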
\begin{acknowledgements}
The authors wish to thank the anonymous referee for useful comments.
DB acknowledges useful discussions with Thomas Dauser, Edoardo Cucchetti, Chris Done, Jeremy Drake and Etienne Pointecouteau. DB and MC thank Laura Brenneman, Francisco Carrera, Mauro Dadina, Javier Garcia, Matteo Guainazzi, Jelle Kaastra, Elias Kammoun, Giovanni Miniutti, Jon Miller, and Jiri Svoboda for their useful suggestions and comments on an earlier version of the paper. Special thanks to Edoardo Cucchetti and the computer support team at IRAP (Elodie Bourrec and C\'edric Hillembrand) for providing DB with the cluster resources required to carry out these time consuming simulations. MC acknowledges financial support from the Italian Space Agency under agreement ASI-INAF n.2017-14-H.O.and 2018-11-HH.0. DB acknowledges support from the French Space Agency (CNES). DB wishes to dedicate this paper to his beloved mother who passed away along the preparation of this work.
\end{acknowledgements}
\section{Introduction}
The Einstein field equations in general relativity predict the
existence of black hole solutions, such as the Schwarzschild,
Reissner-Nordstr\"{o}m and Kerr metrics. In recent several years,
a number of detections of gravitational-wave signals emitted from
binary black hole mergers [1,2] and the Event Horizon Telescope
(EHT) shadow image of M87$^{*}$ central supermassive black hole
[3] have frequently confirmed the prediction.
Despite the great success of General Relativity, the development of
alternative theories of gravity remains necessary, driven by the
rapid progress in observational cosmology [4]. In fact, several
notable examples like Eddington's theory of connections, Weyl's scale
independent theory and the higher dimensional theories of Kaluza and
Klein were proposed during the very early days after Einstein's
theory of General Relativity. Eddington's theory shows the
consistency between the magnitude of a varying Newton's constant and
the ratio of the mass and scale of the Universe. The Parameterised
Post-Newtonian (PPN) formalism by Kenneth Nordtvedt, Kip Thorne and
Clifford Will allows for precision tests of fundamental physics on
the scale of the observable Universe. The limits of General
Relativity appear with the emergence of the dark universe scenario.
Dark energy beyond Einstein's theory can explain the apparent
accelerating expansion of the Universe; this suggests that General
Relativity may not be suitable for describing the Universe on the
largest scales. The attempt to construct a quantum field theory of
gravity underlies the rise of super-gravity and super-string
theories. The black hole singularity problem of the general
relativistic black hole solutions should be avoided in some theories
of gravity such as quantum field theory [5-9]. In short, many
experimental tests and theoretical studies of strong-field gravity
often require that such a strong gravitational field should not be
described by standard general relativity but should depart from it.
The review on ``Modified gravity and cosmology"
[4] provides a useful reference tool for researchers and students
in cosmology and gravitational physics. Many modified gravity
theories that add extra fields to Einstein's theory of general
relativity were introduced in [4]. Some examples are
quantum-corrected gravity theories [10-15] including Kaluza-Klein
gravity theories [16,17], scalar-tensor theories [18-23],
Einstein-$\AE$ther theories [24], Bimetric theories [25], $f(R)$
theories [26,27], $f(T)$ gravity [28], and scalar-tensor-vector
gravity [29,30]. The quantum-corrected gravity theories relate to
higher dimensional gravity theories including extra spatial
dimensions and extra temporal dimensions. The Kaluza-Klein theory
is devoted to unifying gravity and electrodynamics, and its basic
idea is based on General Relativity built on a 4+1 dimensional
manifold with one small and compact spatial dimension. The
scalar-tensor theories of gravity are established through a
Lagrangian density in which the metric tensor couples to scalar and
matter fields. They allow possible variations in Newton's constant,
$G_N$. The $f(R)$ theories of gravity are derived from a
generalisation of the Einstein-Hilbert density. They are useful for
explaining the observed accelerating expansion of the Universe. The
scalar-tensor-vector gravity theory contains a vector field, three
scalar fields and a metric tensor. It explains well the solar
observations, the rotation curves of galaxies [31] and the
dynamics of galactic clusters [32]. Based on this theory of
gravity, a static spherically symmetric modified gravity
Schwarzschild black hole metric was first given in [33]. The
metric describes the final stage of the collapse of a body by
introducing $\alpha$ as a coupling parameter of modified gravity,
which enhances the gravitational constant and provides a charge
yielding a gravitational repulsive force. In fact, the modified
gravity Schwarzschild black hole \emph{resembles} a
Reissner-Nordstr\"{o}m black hole, with the modified gravity coupling
parameter adjusting the gravitational constant and acting as the
black hole charge.
The authors of [34] studied circular orbits of charged particles
around the modified gravity Schwarzschild black hole immersed in
an asymptotically uniform magnetic field. They found that no
stable circular orbits exist when the magnetic coupling parameter
is not smaller than 1. The range of stable circular orbits
increases as the modified gravity coupling parameter and the
magnetic coupling parameter increase. The center-of-mass energy of
charged particle collisions increases with increasing modified
gravity coupling parameter. The authors of [35] also
showed that the innermost stable circular orbits and marginally
bound orbits in the modified gravity Schwarzschild metric are
larger than those in the pure Schwarzschild spacetime. The
positions of the innermost stable circular orbits for charged
particles are smaller than those for neutral particles. In addition,
the shadow cast by the spherically symmetric black hole in the
modified gravity was investigated. When the modified gravity
coupling parameter increases, the sizes of photon spheres and
shadows of the black hole are enlarged and can be observed through
the EHT [36].
The authors of [34,35] mainly surveyed the effect of the modified
gravity coupling parameter on the circular motions of charged
particles in the equatorial plane. Unlike them, we shall consider
how the modified gravity coupling parameter affects the regular
and chaotic orbital dynamics of charged particles in the global
phase space. For this purpose, a dynamical model for
the description of charged particles moving near the modified
gravity Schwarzschild black hole with an external magnetic field
is introduced in Section 2. Then, explicit symplectic methods are
designed for this dynamical problem and the orbital dynamics of
charged particles is explored in Section 3. Finally, our main
results are concluded in Section 4.
\section{Modified Gravity Nonrotating Black Hole Immersed in an External Magnetic Field}
In terms of the scalar-tensor-vector modified gravitational
theory, a static spherically symmetric nonrotating black hole
[33] is written in Boyer-Lindquist coordinates
$x^{\mu}=(t,r,\theta ,\varphi)$ as
\begin{eqnarray}
ds^2 &=& g_{\mu \nu}dx^\mu dx^\nu \nonumber \\
&=& -fc^2dt^2+\frac{1}{f}dr^2 \nonumber \\ && +r^{2}(d\theta
^2+\sin^2\theta d\varphi ^2),
\end{eqnarray}
where function $f$ has the following form
\begin{eqnarray}
f = 1-\frac{2(1+\alpha)G_NM}{rc^2}+\frac{\alpha(1+\alpha)G^2_NM^2}{r^2c^4}.
\end{eqnarray}
Several notations are specified here. $c$ is the speed of light
and $G_N$ represents the Newton's gravitational constant. $M$
stands for the black hole mass. $\alpha$ is a
dimensionless modified gravity coupling parameter, which is
responsible for adjusting the gravitational constant $G_N$ as
$G=G_N(1+\alpha)$ and providing the black hole charge $Q=\pm
M\sqrt{\alpha G_N}$. For $\alpha>0$, the adjusted gravitational
constant $G$ is larger than the Newton's gravitational
constant $G_N$; this implies that $\alpha$ in the second term of
Equation (2) can enhance the gravitational effects. However,
$\alpha$ in the third term of Equation (2) gives the black hole
charge with a gravitational repulsive force. Thus, the modified
gravity parameter plays roles in inducing an enhanced
gravitational effect and a gravitational repulsive force
contribution. In other words, Eq. (1) for the description of the
modified gravity Schwarzschild metric \emph{looks like} the
Reissner-Nordstr\"{o}m black hole metric when $G_N(1+\alpha)$ and
$\pm M\sqrt{\alpha G_N}$ in Eq. (2) are respectively replaced by
the adjusted gravitational constant $G$ and the charge $Q$,
$G_N(1+\alpha)\rightarrow G$ and $\pm M\sqrt{\alpha
G_N}\rightarrow Q$. There are two horizons
$r_{\pm}=G_NM(1+\alpha\pm\sqrt{1+\alpha})/c^2$ for $\alpha>0$.
$\alpha=0$ corresponds to the Schwarzschild event horizon
$r_{+}=r_S=2G_NM/c^2$.
Assume that the black hole is immersed in an asymptotically
uniform external magnetic field, whose four-vector potential
satisfying the Maxwell equation in the curved spacetime background
has a nonzero component [34,35]
\begin{eqnarray}
A_\varphi =\frac{1}{2}B[r^2-\alpha(1+\alpha)M^2]\sin^2\theta.
\end{eqnarray}
Parameter $B$ is the magnetic field strength.
Consider that a particle with mass $m$ and charge $q$ moves around
the modified gravity Schwarzschild black hole surrounded by the
external magnetic field. The particle motion is described in the
following Lagrangian
\begin{eqnarray}
\mathcal{L} =\frac{m}{2}g_{\mu \nu}\dot{x}^\mu
\dot{x}^\nu+qA_\mu\dot{x}^\mu,
\end{eqnarray}
where $\dot{x}^\mu$ is the 4-velocity, i.e., the derivative of
coordinate $x^{\mu}$ with respect to the proper time $\tau$. The
covariant generalized 4-momentum is defined by
\begin{eqnarray}
p_\mu =\frac{\partial \mathcal{L} }{\partial \dot{x}^\mu }=mg_{\mu \nu}
\dot{x}^\nu+qA_\mu.
\end{eqnarray}
Based on the Euler-Lagrangian equations, two components of the
4-momentum are conserved, that is,
\begin{eqnarray}
p_t &=& -mf\dot{t}=-E, \\
p_\varphi &=& mr^2\sin^2 \theta \dot{\varphi }+q A_\varphi=L.
\end{eqnarray}
$E$ is the energy of the particle, and $L$ denotes the angular
momentum of the particle. This Lagrangian is equivalent to the
Hamiltonian
\begin{eqnarray}
H=\frac{1}{2m}g^{\mu \nu}(p_\mu -qA_\mu)(p_\nu-qA_\nu).
\end{eqnarray}
For simplicity, $c$ and $G_N$ are taken as geometric units:
$c=G_N=1$. In addition, dimensionless operations are implemented
through scale transformations: $r \to rM$, $t \to Mt$, $\tau \to
M\tau$, $B \to B/M$, $E \to mE$, $L \to mML$, $p_r \to mp_r$,
$p_\theta\to mMp_\theta$, $q \to mq$, $H \to mH$. Thus, $m$ and
$M$ in the above-mentioned expressions are also used as geometric
units $m = M = 1$. Now, the Hamiltonian has a simple expression
\begin{eqnarray}
H &=& \frac{p_r^2}{2}[1-\frac{2(1+\alpha)}{r}
+\frac{\alpha(1+\alpha)}{r^2}]+\frac{1}{2}\frac{p_\theta^2}{r^2} \nonumber \\
&& + \frac{1}{8r^2}[\frac{2L}{\sin\theta}+\beta (\alpha^2+\alpha-r^2)\sin\theta]^2 \nonumber \\
&& -\frac{E^2r^2}{2[\alpha+\alpha^2-2r(1+\alpha)+r^2]},
\end{eqnarray}
where $\beta =Bq$. This system has two degrees of freedom in a
four-dimensional phase space made of $(r,\theta,p_r,p_\theta)$.
For time-like motion in the spacetime (1), the Hamiltonian is always
identical to a given constant
\begin{eqnarray}
H=-\frac{1}{2}.
\end{eqnarray}
The system (9) is not separable in its variables. In this case, there
are no constants of motion other than the three in Eqs. (6), (7)
and (10). Numerical integration methods are therefore convenient
for solving such a nonintegrable system.
\section{Numerical Investigations}
Several explicit symplectic integrators are designed for the
Hamiltonian (9). Then, one of the algorithms is used to provide
some insight into the regular and chaotic dynamics of charged
particle orbits in the system (9).
\subsection{Construction of Explicit Symplectic Methods}
It is clear that the Hamiltonian (9) cannot be split into two parts
whose analytical solutions are explicit functions of proper time, and
therefore it does not directly allow for the application of standard
explicit symplectic algorithms. However, explicit symplectic methods are still
available when the Hamiltonian describing the motion of a charged
particle around the Reissner-Nordstr\"{o}m black hole immersed in
an external magnetic field is separated into five parts having
explicitly analytical solutions, as was claimed in [37]. The idea
on the construction of explicit symplectic integrators is also
applicable to the Hamiltonian (9) for the description of the
motions of charged particles around the modified gravity
Schwarzschild black hole. The related details of the algorithmic
construction are given below.
Following the work [37], we split the Hamiltonian (9) into five
parts
\begin{eqnarray}
H=H_1+H_2+H_3+H_4+H_5,
\end{eqnarray}
where all sub-Hamiltonians are expressed as
\begin{eqnarray}
H_1 &=& \frac{1}{8r^2}[\frac{2L}{\sin\theta}+\beta (\alpha^2+\alpha-r^2)\sin\theta]^2 \nonumber \\
&& -\frac{E^2r^2}{2[\alpha+\alpha^2-2r(1+\alpha)+r^2]},\\
H_2 &=& \frac{1}{2}p_r^2,\\
H_3 &=& -\frac{(1+\alpha)}{r}p_r^2,\\
H_4 &=& \frac{p_\theta ^2}{2r^2},\\
H_5 &=& \frac{1}{2}\frac{\alpha(1+\alpha)}{r^2}p_r^2.
\end{eqnarray}
The sub-Hamiltonians $H_2$ and $H_4$ are consistent with those in
[37], but the others are somewhat different. The five splitting
parts are solved analytically and their analytical solutions are
explicit functions of proper time $\tau$. Operators for
analytically solving these sub-Hamiltonians are $\mathcal{H}_1$,
$\mathcal{H}_2$, $\mathcal{H}_3$, $\mathcal{H}_4$ and
$\mathcal{H}_5$. The splitting method is based on
the case of $\alpha>0$. If $\alpha=0$, then $H_5=0$ and the
Hamiltonian (9) has four explicitly integrable splitting parts.
This case is the same as the Reissner-Nordstr\"{o}m black hole
with a vanishing charge in [37].
Setting $h$ as a proper time step, we define two first-order
operators
\begin{eqnarray}
\aleph (h)&=& \mathcal{H}_1(h)\times\mathcal{H}_2(h)\times\mathcal{H}_3(h)\nonumber\\
&&\times\mathcal{H}_4(h)\times\mathcal{H}_5(h), \\
\aleph^{\ast} (h) &=&
\mathcal{H}_5(h)\times\mathcal{H}_4(h)\times\mathcal{H}_3(h)\nonumber\\
&&\times\mathcal{H}_2(h)\times\mathcal{H}_1(h).
\end{eqnarray}
The product of $\aleph^{\ast}$ and $\aleph$ is a symmetric
composition as an explicit symplectic algorithm to a second-order
accuracy
\begin{eqnarray}
S_2(h)=\aleph^{\ast}(\frac{h}{2})\times\aleph(\frac{h}{2}).
\end{eqnarray}
This method can be raised to fourth-order accuracy [38]
\begin{eqnarray}
S_4(h)=S_2(\gamma h)\times S_2(\delta h)\times S_2(\gamma h),
\end{eqnarray}
where $ \gamma =1/(2-\sqrt[3]{2}) $ and $ \delta =1-2\gamma$; these
coefficients satisfy the fourth-order conditions $2\gamma+\delta=1$
and $2\gamma^3+\delta^3=0$.
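For illustration, the compositions (17)-(20) can be sketched in a few
lines of Python, assuming the exact sub-flows of $H_1,\ldots,H_5$ are
supplied as a list of maps acting on the state
$(r,\theta,p_r,p_\theta)$; the application order within $\aleph$
follows the left-to-right listing of Eq. (17), and since $S_2$
composes $\aleph$ with its adjoint, the scheme is time-symmetric and
second order either way:
\begin{verbatim}
def aleph(state, h, flows):          # Eq. (17): H_1, H_2, ..., H_5
    for f in flows:
        state = f(state, h)
    return state

def aleph_star(state, h, flows):     # Eq. (18): adjoint, reversed order
    for f in reversed(flows):
        state = f(state, h)
    return state

def S2(state, h, flows):             # Eq. (19): symmetric, second order
    return aleph_star(aleph(state, h / 2.0, flows), h / 2.0, flows)

def S4(state, h, flows):             # Eq. (20): Yoshida triple jump
    gamma = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    delta = 1.0 - 2.0 * gamma
    return S2(S2(S2(state, gamma * h, flows), delta * h, flows),
              gamma * h, flows)
\end{verbatim}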
There is an optimized fourth-order partitioned Runge-Kutta (PRK)
explicit symplectic integrator [39]
\begin{eqnarray}\nonumber
PRK_64(h) &=& \aleph^{\ast} (\alpha _{12} h)\times \aleph(\alpha
_{11}
h)\times \cdots \\
&& \times \aleph^{\ast} (\alpha _2 h) \times \aleph(\alpha _1 h),
\end{eqnarray}
where time-step coefficients are listed in [40] by
\begin{eqnarray}
\nonumber
&&\alpha _1=\alpha _{12}= 0.0792036964311597, \\ \nonumber
&&\alpha _2=\alpha _{11}= 0.1303114101821663, \\ \nonumber
&&\alpha _3=\alpha _{10}= 0.2228614958676077, \\ \nonumber
&&\alpha _4=\alpha _9=-0.3667132690474257, \\ \nonumber
&&\alpha _5=\alpha _8= 0.3246484886897602, \\ \nonumber
&&\alpha _6=\alpha _7= 0.1096884778767498. \nonumber
\end{eqnarray}
Now, we take $h=1$ in our numerical tests. The parameters are given by
$E=0.995$, $L=4.6$, $\alpha =0.12$ and $\beta =5.8\times 10^{-4}$. The
initial conditions are $p_r =0$ and $\theta = \pi /2$. Given the
initial separation $r$, the initial value of $p_\theta$ $(>0)$ is
determined in terms of Equations (9) and (10). For Orbit 1, $r=15$;
for Orbit 2, $r=110$. The two orbits are integrated by the
second-order method $S_2$ and are plotted on the Poincar\'{e} section
at the plane $\theta = \pi /2$ with $p_\theta >0$ in Figure 1a.
Clearly, the two orbits have distinct phase space structures on the
Poincar\'{e} section. The structure of Orbit 1, exhibiting a closed
curve, indicates the regularity of Orbit 1, whereas the structure of
Orbit 2, consisting of many points randomly distributed in a certain
region, shows the chaoticity of Orbit 2. The orbital phase space
structures described by the fourth-order methods $S_4$ and $PRK_64$
are almost consistent with those obtained from $S_2$. However, the
three integrators yield different orders of magnitude in the
Hamiltonian errors $\Delta H=H+1/2$. For the regular Orbit 1 in
Figure 1b, the error of $S_4$ is about three orders of magnitude
smaller than that of $S_2$, but about three orders of magnitude
larger than that of $PRK_64$; none of the three methods shows a
secular drift in the error. On the other hand, for the chaotic Orbit
2 in Figure 1c, $S_2$ still shows no secular growth in the error,
whereas $S_4$ and $PRK_64$ do. Such secular drifts are caused by the
rapid accumulation of roundoff errors; because of this, the errors of
$S_4$ and $PRK_64$ approach that of $S_2$ over a long enough
integration time. $S_2$ is also greatly superior to $S_4$ and
$PRK_64$ in computational efficiency. Considering both computational
accuracy and efficiency, we select $S_2$ as the numerical tool in our
later discussions.
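To make the setup concrete, the following sketch (with \texttt{step} a
placeholder for one $S_2$ update of the state) shows how the initial
$p_\theta$ follows from Equations (9) and (10), and how the section
crossings are recorded by linear interpolation:
\begin{verbatim}
import numpy as np

E, L, alpha, beta = 0.995, 4.6, 0.12, 5.8e-4

def p_theta_init(r, theta=np.pi / 2, p_r=0.0):
    """Solve H = -1/2, Eqs. (9)-(10), for p_theta > 0."""
    f = 1.0 - 2.0 * (1.0 + alpha) / r + alpha * (1.0 + alpha) / r**2
    mag = (2.0 * L / np.sin(theta)
           + beta * (alpha**2 + alpha - r**2) * np.sin(theta))
    rhs = -0.5 - 0.5 * f * p_r**2 - mag**2 / (8.0 * r**2) \
          + E**2 / (2.0 * f)
    return np.sqrt(2.0 * r**2 * rhs)  # real only where motion is allowed

def poincare_section(state, h, n_steps, step):
    """(r, p_r) at upward crossings of theta = pi/2 (p_theta > 0)."""
    pts = []
    for _ in range(n_steps):
        prev, state = state, step(state, h)   # one S2 update
        if ((prev[1] - np.pi / 2) * (state[1] - np.pi / 2) < 0
                and state[3] > 0):
            w = (np.pi / 2 - prev[1]) / (state[1] - prev[1])
            pts.append(((1 - w) * prev[0] + w * state[0],
                        (1 - w) * prev[2] + w * state[2]))
    return pts
\end{verbatim}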
\subsection{Orbital Dynamical Behavior}
Figure 2 plots the Poincar\'{e} sections for several different values
of the modified gravity parameter $\alpha$. Orbit 2 with the initial
separation $r=110$, which is chaotic in Figure 1a, remains chaotic for
$\alpha=0.05$ in Figure 2a, $\alpha=0.1$ in Figure 2b and
$\alpha=0.18$ in Figure 2c. We also show a path for Orbit 3 with the
initial separation $r=60$ in Figure 1a going from order to chaos as
$\alpha$ increases. This orbit evolves from one single torus for
$\alpha=0.05$ in Figure 2a to six islands for $\alpha=0.1$ in Figure
2b, and to chaos for $\alpha=0.12$ in Figure 1a and $\alpha=0.18$ in
Figure 2c. Seen from the global phase space structures in Figures 1a
and 2, an increase of $\alpha$ leads to more orbits with stronger
chaoticity. This does not mean, however, that a given orbit always
becomes more strongly chaotic in this case.
The regularity of Orbit 1 and the chaoticity of Orbit 2 in Fig.
1(a) can also be identified in terms of the technique of fast
Lyapunov indicator (FLI) in Fig. 3(a). The FLI with two nearby
orbits is defined in [41,42] as a spacetime coordinate independent
indicator
\begin{eqnarray}
\textrm{FLI}=\log_{10}\frac{d(\tau)}{d(0)},
\end{eqnarray}
where $d(0)$ is the initial proper distance between two nearby
orbits and $d(\tau)$ is a proper distance at the proper time
$\tau$. The FLI of Orbit 1, increasing as a power law with time
$\log_{10}\tau$, shows the regularity of bounded Orbit 1. The FLI of
Orbit 2, increasing exponentially with time, indicates the chaoticity
of bounded Orbit 2. It is found that, for an integration time
$\tau=10^6$, ordered orbits correspond to FLIs no larger than 4.5 and
chaotic orbits to FLIs larger than 4.5. With the aid of the FLIs, the
values of $\alpha$ can be classified into regular and chaotic cases.
In Figure 3b, the values of $\alpha<0.178$ correspond to order, and
the values of $0.225<\alpha<0.306$ or $\alpha>0.32$ correspond to
chaos.
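A minimal sketch of the FLI computation, using the Euclidean
phase-space norm as a simple stand-in for the proper distance of
Eq. (22) and the same placeholder \texttt{step} function, reads:
\begin{verbatim}
import numpy as np

def fli(state0, h, tau_max, step, d0=1e-9):
    """FLI = log10(d(tau)/d(0)) from two nearby orbits; in practice
    the separation is renormalized periodically to avoid overflow
    for strongly chaotic orbits."""
    a = np.array(state0, float)
    b = a.copy()
    b[0] += d0                      # shift the neighbouring orbit in r
    for _ in range(int(tau_max / h)):
        a, b = step(a, h), step(b, h)
    return np.log10(np.linalg.norm(a - b) / d0)

# With tau_max = 1e6 (and h = 1): FLI <= 4.5 flags order, > 4.5 chaos.
\end{verbatim}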
The method of FLIs is used in Figure 3 to detect chaos by scanning a
one-parameter space. The operation remains useful when scanning
two-parameter spaces. Figure 4a plots the regions of
$(\alpha,\beta)$ corresponding to order and to chaos. It is shown that
the chaoticity is strengthened when $\alpha$ and $\beta$ increase.
This result is also supported by the method of Poincar\'{e}
sections in Figures 5a-c. Similarly, chaos becomes stronger
with an increase of $E$, as shown in Figures 4b and 5d-f.
However, chaos is weakened when $L$ increases, as shown in Figures 4c
and 5g-i.
In short, chaos is strengthened from the global phase space
structures when each of the modified gravity parameter $\alpha$,
the magnetic field parameter $\beta$ and the particle energy $E$
increases. However, it is weakened as the particle angular
momentum $L$ increases. Here we give an explanation for these
results. The analysis is based on Eq. (12) with several main terms:
\begin{eqnarray}
H_1 &\approx& -\frac{E^2+L\beta}{2}-\frac{E^2}{r}(1+\alpha)
+\frac{E^2}{2r^2}\alpha(1+\alpha) \nonumber \\
&& +\frac{\beta^2}{8}r^2\sin^2\theta
+\frac{L^2}{2r^2\sin^2\theta}+\cdots.
\end{eqnarray}
The expression is considered when $2(1+\alpha)/r\ll 1$ and
$\alpha(1+\alpha)/r^2\ll 1$. The second term in
Equation (23) corresponding to the second term of Equation (2)
acts as the black hole gravity to the particle. Here, the modified
gravity parameter $\alpha$ also plays a role in enhancing the
gravitational effect. However, $\alpha$ in the third term of
Equation (23) corresponding to the third term of Equation (2)
gives a gravitational repulsive force contribution to the
particle. The gravitational force from $\alpha$ in the second
term of Equation (23) is more important than the gravitational
repulsive force from $\alpha$ in the third term of Equation (23).
The fourth term in Equation (23) corresponds to a magnetic field
force as a gravitational effect. The fifth term corresponds to an
inertial centrifugal force from the particle angular momentum.
When any one of the three parameters $\alpha$, $\beta$ and $E$
increases, the gravitational effects are strengthened and the
particle motions undergo more dramatic changes. As a result, chaos
would become stronger from the global phase space structures when
chaos can occur. On the contrary, an increase of the angular
momentum enlarges the repulsive force effects;
equivalently, it weakens the gravitational effects and decreases
the strength of chaos.
\section{Conclusions}
With the aid of the scalar-tensor-vector modified
gravitational theory, the modified gravity Schwarzschild black
hole was obtained in the literature through a modified gravity
parameter. This parameter plays important roles in enhancing the
gravitational constant and providing the black hole charge with a
gravitational repulsive force contribution. The modified
Schwarzschild black hole is still a static spherically symmetric
black hole solution of the field equation. When the black hole is
immersed in an external asymptotic uniform magnetic field, the
dynamics of charged particles moving in the background field is
not integrable.
Although the Hamiltonian describing the charged particle dynamics is
not separable in its variables, it still admits explicit symplectic
integrators because the
Hamiltonian has five splitting parts with analytical solutions as
explicit functions of proper time. Numerical tests show that the
explicit symplectic integrators exhibit good performance in the
long-term conservation of energy integral when appropriate time
steps are chosen.
One of the explicit symplectic integrators combined with the
techniques of Poincar\'{e} sections and fast Lyapunov indicators
is mainly used to survey the effect of the modified gravity
parameter on the regular and chaotic dynamical features of
charged particle orbits. It is shown that chaos is strengthened
from the global phase space structures under some circumstances as
the modified gravity parameter increases. Such a similar result is
also suitable for the case of the magnetic field parameter and the
particle energy increasing. However, chaos is somewhat weakened
with an increase of the particle angular momentum.
\textbf{Author Contributions}: Software and Writing-original
draft, D.Y.; Software, W.C.; Formal Analysis, N.Z.;
Investigation, H.Z.; Resources, W.L.; Supervision,
Conceptualization, Methodology, Writing - Review $\&$ Editing and
Funding Acquisition, X. W. All authors have read and agreed to the
published version of the manuscript.
\textbf{Funding}: This research has been supported by the National
Natural Science Foundation of China (Grant Nos. 11973020 and
11533004), and the Natural Science Foundation of Guangxi (Grant
No. 2019JJD110006).
\textbf{Data Availability Statement}: Our paper is a theoretical
work. All of the data are calculated and given in the paper.
\textbf{Institutional Review Board Statement}: Not applicable.
\textbf{Informed Consent Statement}: Not applicable.
\textbf{Acknowledgments}: The authors are very grateful to the
referees for useful suggestions.
\textbf{Conflicts of Interest}: The authors declare no conflict of
interest.
\section{Mathematical Foundations of the Problem}
In this Section we start with the Lagrangian for the system and set up
the equilibrium equations and their solutions as well as the evolution
equations with which we evolve the equilibrium solutions.
\subsection {\bf Einstein-scalar-field equations}
The Lagrange density of a complex self-gravitating scalar
field reads
\begin{equation}
{\cal L} = \frac {1}{2} \sqrt{\mid g \mid} \left [
\frac {1}{\kappa } R + g^{\mu \nu } (\partial_\mu \Phi^\ast)
(\partial_\nu \Phi) - U(|\Phi|^2) \right ] \; , \label{lagr}
\end{equation}
where $R$ is the curvature scalar, $\kappa = 8\pi G$, $G$ the
gravitation constant ($\hbar=c=1$),
$g$ the determinant of the metric $g_{\mu \nu }$,
$\Phi $ the complex scalar field, and $U$ the potential depending on
$|\Phi|^2$ so that a global $U(1)$ symmetry is conserved.
Then we find the coupled system
\begin{eqnarray}
R_{\mu \nu } - \frac{1}{2} g_{\mu \nu } R & = &
- \kappa T_{\mu \nu } (\Phi ) \; , \\
\Box \Phi + \frac{\partial U}{\partial \Phi^\ast} & = & 0 \; ,
\end{eqnarray}
where
\begin{equation}
T_{\mu \nu } = (\partial_\mu \Phi^\ast ) (\partial_\nu \Phi )
- \frac{1}{2} g_{\mu \nu }
[ g^{\sigma \kappa } (\partial_\sigma \Phi^\ast )
(\partial_\kappa \Phi ) - U(|\Phi|^2) ]
\end{equation}
is the energy-momentum tensor and
\begin{equation}
\Box = \partial_\mu
\Bigl [ \sqrt{\mid g \mid } g^{\mu \nu } \partial_\nu \Bigr ]/
\sqrt{\mid g \mid }
\end{equation}
the generally covariant d'Alembertian.
For spherically symmetric solutions we use the following line element
\begin{equation}
ds^2 = N^2({\bf r},{\bf t}) d{\bf t}^2 - g^2({\bf r},{\bf t}) d{\bf r}^2
- {\bf r^2} ( d\vartheta^2 + \sin^2\vartheta \, d\varphi^2) \label{metric}
\end{equation}
and for the scalar field the ansatz
\begin{equation}
\Phi ({\bf r},{\bf t}) = P({\bf r},{\bf t}) e^{-i \omega {\bf t}} \; , \\
\end{equation}
where $\omega $ is the frequency.
\subsection{Equilibrium Equations}
The non-vanishing components of the energy-momentum tensor are
\begin{eqnarray}
T_0{}^0 = \rho & = & \frac{1}{2} \left [ \frac{\omega^2 P^2}{N^2}
+ \frac{P'^2}{g^2} + U \right ] \; , \\
- T_1{}^1 = p_{\bf r} & = &
\frac{1}{2} \left [ \frac{\omega^2 P^2}{N^2}
+ \frac{P'^2}{g^2} - U \right ] \; , \\
- T_2{}^2 = - T_3{}^3 = p_\bot & = &
\frac{1}{2} \left [ \frac{\omega^2 P^2} {N^2}
- \frac{P'^2}{g^2} - U \right ] \; ,
\end{eqnarray}
where $'=d/d{\bf r}$.
The equilibrium configurations (those for which the metrics are static)
for a system of massless scalar fields are derived from the following
equations
\begin{equation}
\sigma' = \chi \; , \label{chii}
\end{equation}
\begin{equation}
\chi'= -\left[\frac{1}{r}+\frac{g^2}{r}\right]\chi -
\frac{\sigma g^2}{N^2} \; ,
\end{equation}
\begin{equation}
g'=\frac{1}{2} \left[\frac{g}{r}-\frac{g^3}{r}+\frac{\sigma^2 r g^3}{N^2}
+ r g \chi^2 \right] \; ,
\end{equation}
\begin{equation}
N' =\frac{1}{2}\left[-\frac{N}{r}+\frac{N g^2}{r}+\frac{r g^2 \sigma^2}{N}
+ r N \chi^2 \right] \; . \label{chif}
\end{equation}
Here we use the dimensionless variables $r=\omega {\bf r}$,
$t=\omega {\bf t}$ and $\sigma = \sqrt{4\pi G\, } \phi$.
Regularity at the center implies $g(r=0)=1$.
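For reference, the system (\ref{chii})-(\ref{chif}) is readily
integrated with a standard ODE solver; a minimal sketch using SciPy,
with arbitrary example values for the two free parameters, reads:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(r, y):
    """Equilibrium equations for y = (sigma, chi, g, N)."""
    s, c, g, N = y
    return [c,
            -(1.0 + g**2) / r * c - s * g**2 / N**2,
            0.5 * (g / r - g**3 / r
                   + s**2 * r * g**3 / N**2 + r * g * c**2),
            0.5 * (-N / r + N * g**2 / r
                   + r * g**2 * s**2 / N + r * N * c**2)]

# Regularity: chi(0) = 0, g(0) = 1; sigma(0) and N(0) are free.
# Start slightly off r = 0 to avoid the 1/r coordinate singularity.
sigma0, N0 = 1e-3, 1.0            # an arbitrary Newtonian example
sol = solve_ivp(rhs, (1e-6, 100.0), [sigma0, 0.0, 1.0, N0],
                rtol=1e-10, atol=1e-12)
g = sol.y[2]
M = 0.5 * sol.t * (1.0 - 1.0 / g**2)   # Schwarzschild mass function
\end{verbatim}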
The equilibrium configurations are characterized by saddle points in
the density $\rho$, which might have some significance in stability
issues. These systems form a two-parameter family of solutions:
firstly, we can vary $\sigma(0)$, the central density, and secondly,
for each $\sigma(0)$ we can vary $N(0)$. Although we do not have
asymptotic flatness (the mass $M$ rises with $r$), the mass profile
for a given value of $r$ has a very
interesting feature.
Figure $1a$ shows the mass as a function of central density for a given
value of $r=R$ and
$N(0)$. The mass throughout the paper is calculated from the Schwarzschild
metric
\begin{equation}
M(r)=\frac{r}{2} \left ( 1 - \frac{1}{g^2} \right ) \; ,
\end{equation}
where $g^2$ is the radial metric.
The mass profile is very similar to that of the massive scalar field
case, although there the mass is not subject to these variations with
$r$. It is also reminiscent of neutron star profiles: the increase in
mass with increasing central density is followed by a decrease in
mass as the central density increases further.
For the same value of $N(0)$ and larger values of $r$ the mass
increases but the profile remains the same.
Figure $1b$ shows mass versus radius for different central
densities and a given value of $N(0)$. The profile is independent of
the value of the radius for large values of $r$.
The Noether theorem associates with each symmetry a locally conserved
``charge''. The Lagrangian density (\ref{lagr}) is invariant under a
global phase transformation
$\Phi \rightarrow \Phi e^{-i\vartheta }$. The
{\em local conservation law} of the associated Noether current
density reads
\begin{equation}
\partial_\mu j^\mu =0\; , \qquad
j^\mu = \frac {i}{2} \sqrt{\mid g\mid }\; g^{\mu \nu }
[\Phi^\ast \partial_\nu \Phi -\Phi \partial_\nu \Phi^\ast ] \; . \label{Noet}
\end{equation}
If one
integrates the time component $j^0$ over the whole space we find the
{\em particle number}
\begin{equation}
N_p = 4\pi \omega \int\limits_0^\infty \; \frac{g}{N} r^2 P^2 \, dr
\label{particle} \; .
\end{equation}
\subsection {Approximate solution}
The Newtonian and the general relativistic solutions of system
(\ref{chii})-(\ref{chif}) for $U=0$ were recently discussed
\cite{sch1,sch2,sch3,sch4}.
The Newtonian ones (initial value $\sigma (0)$ smaller than about $10^{-2}$)
can be used to describe dark matter halos of spiral and
dwarf galaxies. The general relativistic ones
(initial value $\sigma (0)>10^{-2}$) reveal very high
redshift values. As was shown in \cite{sch1,sch2,sch3,sch4},
for the Newtonian solutions there exists an approximate analytic
formula for the scalar field ($N=g=1$)
\begin{equation}
\sigma (r) = A \frac{\sin (r)}{r} \; ,
\label{sigma}
\end{equation}
where $A$ is a constant. The energy density reads
\begin{equation}
\rho (r) = \frac{A^2}{r^2} \left [ 1 - \frac{\sin (2 r)}{r}
+ \frac{\sin^2 (r)}{r^2} \right ] \label{rho}
\end{equation}
and the mass defined as usual as
$M(r) = \int_0^r \rho (\zeta ) \zeta^2 d\zeta $ yields
\begin{equation}
M(r) = A^2 \left [ r + \frac{\cos (2 r) - 1}{2 r} \right ]
\label{mass} \; .
\end{equation}
The energy density shows a decreasing behavior with saddle points in between
(cf.~Fig.~3a).
The Newtonian solution for the particle number is
\begin{equation}
N_p(r) = \frac{A^2}{4} \biggl [ 2 r - \sin (2 r) \biggr ]
\label{part2} \; ,
\end{equation}
the quantitative behavior of which is also revealed by numerical calculation.
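These closed forms are straightforward to check numerically; the
following sketch evaluates Eqs. (\ref{sigma})-(\ref{part2}) and
verifies that the quadrature of $\rho\,\zeta^2$ reproduces the mass
(\ref{mass}):
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

A = 1.0e-3
r = np.linspace(1e-4, 50.0, 200001)

sigma = A * np.sin(r) / r
rho = A**2 / r**2 * (1.0 - np.sin(2.0 * r) / r + np.sin(r)**2 / r**2)
M = A**2 * (r + (np.cos(2.0 * r) - 1.0) / (2.0 * r))
Np = A**2 / 4.0 * (2.0 * r - np.sin(2.0 * r))

M_num = cumulative_trapezoid(rho * r**2, r, initial=0.0)
print(np.max(np.abs(M_num - M)))   # limited only by quadrature error
\end{verbatim}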
\subsection{Evolution Equations}
For accurately numerical evolution the following set of variables are chosen
\cite{sei}
\begin{equation}
\psi_1 \equiv r\sigma_1 \; ,\quad
\psi_2 \equiv r\sigma_2 \; ,\quad
\pi_1 \equiv \frac{1}{\alpha}\frac{\partial\psi_1}{\partial t} \; ,\quad
\pi_2 \equiv \frac{1}{\alpha}\frac{\partial\psi_2}{\partial t} \; ,
\end{equation}
where
\begin{equation}
\alpha \equiv \frac{N}{g} \; ,
\end{equation}
and the subscripts on $\psi_i$ denote the real and imaginary parts of the
scalar field multiplied by $r$.
In terms of these variables and the dimensionless ones in the previous
Section the evolution equations are as follows: The radial metric function
$g$ evolves according to
\begin{equation}
{\dot g}=N(\pi_1\sigma_1'+\pi_2\sigma_2') \; .
\end{equation}
The polar slicing equation, which is integrated on each time slice,
is given by
\begin{equation}
N'=\frac{N}{2}\left[\frac{g^2-1}{r}+r\left[(\sigma_1')^2+(\sigma_2')^2
\right]+\frac{\pi_1^2+\pi_2^2}{r}
\right] \; .
\end{equation}
The Klein-Gordon equation for the scalar field can be written as
\begin{equation}
{\dot\pi_i}=\alpha'\psi_i'+\alpha\psi_i''-
\psi_i\left[gN+\frac{\alpha'}{r}\right]
,\quad i=1,2 \; ,
\end{equation}
\begin{equation}
{\dot\psi}_i=\alpha\pi_i,\quad i=1,2 \; .
\end{equation}
The Hamiltonian constraint equation is given by
\begin{equation}
\frac{2g'}{rg^3}+\frac{g^2-1}{r^2g^2}-\frac{\pi_1^2+\pi_2^2}{r^2g^2}-
\frac{\sigma_1'^2+\sigma_2'^2}{g^2}
=0 \; .
\end{equation}
After we introduce a perturbation in the field, or when we start from
an arbitrary initial field configuration, we reintegrate on the
initial time slice to obtain new metric components \cite{sei}.
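Purely for orientation, one update of this scheme may be sketched as
follows (first order in time, with simple finite differences; the
production scheme of Ref.~\cite{sei} uses a more careful
discretization and boundary treatment):
\begin{verbatim}
import numpy as np

def slice_lapse(r, g, s1p, s2p, pi1, pi2):
    """Integrate the polar slicing equation for N outward on a slice."""
    N = np.ones(len(r))
    for i in range(len(r) - 1):
        dN = 0.5 * N[i] * ((g[i]**2 - 1.0) / r[i]
                           + r[i] * (s1p[i]**2 + s2p[i]**2)
                           + (pi1[i]**2 + pi2[i]**2) / r[i])
        N[i + 1] = N[i] + (r[i + 1] - r[i]) * dN   # forward Euler in r
    return N

def evolve_step(r, g, psi1, psi2, pi1, pi2, dt):
    s1p = np.gradient(psi1 / r, r)                 # sigma_1'
    s2p = np.gradient(psi2 / r, r)                 # sigma_2'
    N = slice_lapse(r, g, s1p, s2p, pi1, pi2)
    alpha, ap = N / g, np.gradient(N / g, r)       # alpha and alpha'
    for psi, pi in ((psi1, pi1), (psi2, pi2)):     # Klein-Gordon update
        pi += dt * (ap * np.gradient(psi, r)
                    + alpha * np.gradient(np.gradient(psi, r), r)
                    - psi * (g * N + ap / r))
        psi += dt * alpha * pi
    g += dt * N * (pi1 * s1p + pi2 * s2p)          # radial metric update
    return g, psi1, psi2, pi1, pi2
\end{verbatim}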
\section{Formation of Scalar Objects}
In this Section we discuss the possibility of forming massless scalar
field configurations from non-equilibrium data. Earlier studies, both
numerical and analytical, have confirmed that self-gravitating
objects with massless scalar fields cannot be compact
\cite{sei2,chr}. Thus, taking a Gaussian-type localized distribution
of scalar matter and evolving the system using the evolution
equations described above results in dissipation of the scalar matter
without the formation of any self-gravitating object. Even sinusoidal
functions damped by exponential decays met the same fate.
These were functions like $\exp(-r)\sin(r)/r$ and $\exp(-r)\cos(r)$.
In Figure 2, a plot of the
density versus the radius for different times is shown for
$\sigma = 0.001 \cos (r) \exp(-r)$.
The star dissipates very quickly.
The initial configurations of the scalar field that yield stable
configurations were those characterized by a sinusoidal dependence
times $1/r$ at large $r$, as well as a saddle point structure of the
energy density $\rho$. These were typically functions like
\begin{equation}
\frac{\cos(r)}{1+r} \; , \quad
\cos(r) \left [ 1-\exp \left (-\frac{1}{r} \right ) \right ] \; ,
\quad \mbox{and}
\end{equation}
\begin{equation}
\sin(r) \left ( 1+ \frac{1}{r} \right ) \log \left [1+ \frac{1}{1+r} \right ]
\; .
\end{equation}
In Figure 3 we show the density and radial metric evolutions
for a field of the form $0.003\cos(r) (1-\exp(-1/r))$. This settles
into a self-gravitating
object after some time. Figure $3a$ shows the density as a function of
$r$. The central density $\rho(0)$ increases during the evolution
from its initial value at $t=0$. Figure $3b$ reveals the radial metric as it
evolves in time as a
function of radius. This too displays the settling to a stable configuration.
In Figure $3c$ we show the mass loss for the system as it settles into
a configuration. The mass at fixed radial values for these times is
shown in Table 1.
The amount of radiation for this system is relatively small and decreases in
time.
On the other hand, functions like $1/(r+1)$, $\cos (r)/{(1+r)}^2$ and
$\sin (r^{1.01})/r^{1.01}$ failed
to settle to a bound state and just dissipated away. These functions did
not have saddle points in $\rho$.
Functions like $\sigma=\sin (r)/\log (1+r)$, for which $\rho$
has saddle points (it also has a maximum near the origin), did not
dissipate but failed to settle in a
long numerical evolution. This might mean they would form the halo
structures only over very long times,
for which it would not be computationally feasible to evolve the system.
The same was observed for $\sin (r)/(r \log (1+r))$ or a numerical
six node boson star configuration ($\omega <m$).
All the above configurations were fairly Newtonian for which the exact
solution was close to $\sin (r)/r$.
\section{Analytical stability proof}
We investigate here the stability of the Newtonian solutions against
small radial perturbations; for stability investigations concerned with
the boson star model and using a perturbative method,
see Refs.~\cite{LP89,J89,GW89}. For these Newtonian solutions, we can neglect the
perturbations of the spacetime and make the following ansatz for the
linear scalar field perturbation $\delta P$:
\begin{equation}
\Phi (r,t):=P(r) e^{-i\omega t} + \delta P(r) e^{i k_n t} \; ,
\end{equation}
where $k_n$ are the frequencies of the normal modes. The pulsation equation,
an eigenvalue equation, is then:
\begin{equation}
\delta P''+2 \delta P'/x+k_n^2 \delta P = 0 \; . \label{deltap}
\end{equation}
Together with the boundary conditions
\begin{equation}
\delta P'(0)=\delta P (R)=0
\end{equation}
for regularity of $\delta P$ at the origin and at the radius $R$ of the
boson halo, this is a Sturm-Liouville eigenvalue problem.
As it is well-known from mathematical theory, we have a series of
real eigenvalues with a minimal one:
\begin{equation}
k_1^2 < k_2^2 < \ldots \; .
\end{equation}
Eigenfunctions of this particular differential equation (\ref{deltap})
exist only if the eigenvalues $k_n^2$ are positive.
This means that all modes are stable. The eigenfunctions are
\begin{equation}
\delta P = \frac{\sin(k_n x)}{x} \; ,
\end{equation}
where the eigenvalues are
\begin{equation}
k_n = n \frac{\pi }{R} \; ;
\end{equation}
$n$ is a natural number and $R$ the radius of the solution.
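This spectrum can be verified directly: the substitution $u(x) := x\,\delta P(x)$
reduces Eq.~(\ref{deltap}) to the harmonic equation
\begin{equation}
u'' + k_n^2\, u = 0 \; ,
\end{equation}
whose solution regular at the origin is $u = \sin (k_n x)$, i.e.\
$\delta P = \sin (k_n x)/x$ with $\delta P'(0)=0$; the outer boundary
condition $\delta P(R)=0$ then requires $\sin (k_n R)=0$, giving
$k_n = n\pi/R$ and hence $k_n^2 > 0$ for every mode, as stated above.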
\section{Numerical Stability Issues}
In order to study the stability of these systems numerically
we perturbed exact configurations and observed whether
the system settled to new configurations. In some sense, taking a function like
$A \sin(r)/r$, which is quite close to the exact solution for a Newtonian system,
or $A \cos(r)/(1+r)$, can be regarded as a perturbation of the exact
solution, and the eventual
settling to a new configuration can be regarded as indicative of stability.
The settling of $0.001\cos(r)/(r+1)$ to a stable configuration is shown
in Figure 4.
The central density increases in this case from its initial value.
The radial metric evolution is shown in Figure 4b.
On the other hand when $A=1$ the system is non-Newtonian
and in a long evolution failed to settle down although it did not disperse.
This is not surprising since the $\sin (r)/r$ form is close to the solution
only in the Newtonian case.
In Figure 5 we show a perturbed Newtonian configuration as it settles to
a new Newtonian configuration.
The perturbation that is used in this case mimics an annihilation of
particles. A Gaussian bump
of field is removed from a part of the star near the origin.
Figure 5a is a plot of the unperturbed versus the perturbed density at
$t=0$. The perturbed configuration
is evolved and settles to a new configuration. The scalar radiation moves
out as shown in Figure 5b.
In Figure 5c the density profile is shown after the system settles down.
The system is very
clearly in a new stable configuration. In Figure 5d the mass is plotted
as a function of radius
for various times. Again one can see that the mass loss is decreasing by
the end of the run showing
that the system is settling to a new configuration. The mass as a
function of time is presented in Table 2 for different radii.
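As an illustration, a perturbation of this kind can be constructed
schematically as follows (a Python fragment; the amplitude, centre, and
width of the bump are hypothetical values, not those used for the run
shown in Figure 5):
\begin{verbatim}
import numpy as np
r = np.linspace(1e-3, 50.0, 4096)         # radial grid (off the origin)
sigma = 0.001*np.cos(r)/(1.0 + r)         # an unperturbed Newtonian profile
A, r0, w = 0.2, 1.0, 0.5                  # hypothetical bump parameters
sigma_pert = sigma*(1.0 - A*np.exp(-((r - r0)/w)**2))
# the constraints are then reintegrated on the initial slice
\end{verbatim}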
We have so far been successful only in our Newtonian evolutions.
The reason for this is that denser configurations
need much better resolution for evolution. This is coupled with
the difficulty that we still
need the boundary to be very far away so that the density has significantly
fallen off. We are working on improving our boundary conditions.
So far, we are using either an outgoing wave condition
or exact boundary conditions where the latter one simulates a vacuum
energy. Further investigation is needed before we
can decide whether the
non-Newtonian configurations are inherently unstable.
\section {Boson star oscillators}
A boson star consists of scalar massive particles, hence in the simplest
model one has a potential $U=m^2 |\Phi|^2$. Exponentially decreasing
solutions exist for special eigenvalues $\omega < m$,
so that the star has a finite mass.
In the case of $\omega > m$, oscillating scalar field solutions can
be found for all values of $\omega $. The energy density reveals
minima and maxima as opposed to the saddle point structure seen for
the massless case.
Figure 6a shows a comparison of the density profile for the two cases.
In Figure 6b we show the mass profile and the particle number as functions
of central density. The binding energy is always positive making the
system unstable against a collective transformation in which it disperses
into free particles but the system is still stable against any one-at-a-time
removal of a particle \cite{harr}. The complete dispersion is a numerical
confirmation of the result by Zeldovich, cf.~\cite{zel}.
\section {\bf Discussion}
We have shown that massless complex scalar fields are able to settle down
to a stable configuration. It seems that the formation process needs
a special form for the energy density. The appearance of saddle
points within the decreasing density supports the `birth' of the boson halos.
In contrast, extremal values for the density cannot be compensated
and lead to the destruction of the initial configuration. This result
of our numerical investigation reveals that the formation of
boson halos could be difficult. But as our experiment of cutting off a part of a
Newtonian solution has shown, if there exists such a structure, then a
new configuration forms and settles down to a stable dark matter halo.
One can understand this as Bose-Einstein condensation where a small
initial configuration forms and particles in the surroundings fall into
the most favorable state given by the condensation. In this way, the
boson halo forms and grows up to the point where it is stopped by
the outer boundary, the vacuum energy density.
Our choice of the numerical boundary
was necessary so that it simulates a hydrostatic equilibrium with
this vacuum energy as it was assumed in \cite{sch2,sch3,sch4}.
Because the boundary can be placed at any radius, the size of these
dark matter halos depends on the value of the cosmological constant.
During the formation process of the boson halo, massless scalar particles
change into the Bose-Einstein condensate and lose energy through
this process; only particles with at least this energy can participate.
We infer that the condensed state is the
energetically more preferable state in comparison with the free state.
The calculation of the binding energy, i.e.\ the comparison of the mass of a
bound particle with that of a free particle at infinity, is not advisable due to
the arbitrariness of the energy of a massless particle at infinity.
Investigations with real scalar fields \cite{chr} or with different
boundary conditions \cite{sei} cannot lead to singularity
free solutions;
cf.~Liddle and Madsen in \cite{rep}.
The second class of solutions presented here is the oscillatory one for
a massive complex scalar field. Our result of a positive binding energy
confirms the expectations for systems where the energy of the field
is higher than the rest mass energy. Our numerical evolution also
reveals the instability by dispersion of the
unperturbed exact solution.
\acknowledgments
We would like to thank John D.~Barrow,
Andrew R.~Liddle, Eckehard W.~Mielke, Ed Seidel, Wai-Mo Suen,
and Pedro T.P.~Viana for helpful discussions and comments.
Research support of FES was provided by an European Union Marie Curie
TMR fellowship.
FES wishes to thank Peter Anninos and Wai-Mo Suen for their hospitality
during a stay at the University of Champaign-Urbana and at the Washington
University of St.~Louis.
Research of JB is supported in part by the McDonnell
Center for Space Sciences, the Institute of Mathematical Science of
the Chinese University of Hong Kong, the NSF Supercomputing
Meta Center (MCA935025) and National Science Foundation (Phy 96-00507).
\section{Introduction}
\label{S:Intro}
The Bullet cluster has provided particularly direct evidence for the existence
of dark matter by displaying a large offset between
the gas component, marked by X-ray emission,
and dark matter traced by gravitationally lensed images
\citep{MarkevitchET02}.
This merging cluster also demonstrates that dark matter is collisionless to high precision,
because, although the X-ray shocks unambiguously reveal that two clusters
have clearly just collided (Markevitch 2005), the two dark matter centroids are still
centered on their respective galaxy members \citep{CloweET2006}.
When two clusters suffer gravitational encounters, their respective member galaxies,
being relatively small, are unlikely to collide with each other.
The cluster member galaxies are therefore effectively
collisionless; hence, if the dark matter has an associated collisional cross section
it may be revealed by a relative displacement between the two dark matter lensing
centroids and their respective member galaxy distributions.
The amplitude of this displacement is proportional to the self-interaction
cross section of dark matter particles.
Using this property, the Bullet cluster was also the first to be used to put upper limits
on the self-interaction cross section of dark matter particles
providing quantitative evidence that the dark matter is
collisionless \citep{MarkevitchET2004ApJ606}.
Since the discovery of the Bullet cluster, more merging clusters have been found
with offset between the centroids of the intra-cluster gas and the dark matter
(MACS J1149.5+2223: \citealt{GolovichET2016ApJ831};
CL0152-1357: \citealt{MolnarET2012ApJ748};
DLSCL J0916.2+2951: \citealt{DawsonET2012};
ACT-CL J0102-4915: \citealt{MenanteauET2012};
MACS J0717.5+3745: \citealt{MroczkowskiET2012};
A1758N: \citealt{RagozzineET2011};
A2744: \citealt{MertenET2011};
A2163: \citealt{OkabeET2011};
ZwCL0008.8+5215: \citealt{WeerenET2011AA528};
CL0152-1357: \citealt{MassardiET2010};
A1240: \citealt{BarrenaET2009};
MACS J0025.4-1222: \citealt{BradacET2008};
A520: \citealt{MahdaviET2007};
Bullet Cluster: \citealt{CloweET2004}).
The collisions of massive clusters of galaxies provide a unique possibility
to study the nature of dark matter as these collisions are the most energetic
of all phenomena (after the Big Bang)
with the power to separate the collisional gas from the confining dark matter
gravitational potential, including the galaxies,
which provide an effectively collisionless tracer population.
Any significant inconsistencies in accounting for the relative dynamics of
the components would therefore be of fundamental importance.
To examine this possibility to the fullest requires, of course, appropriate simulations
that incorporate dark matter and gas. The first test to make in this respect is to
compare the best data with collisionless dark matter,
which is well described by $N$-body simulations together with
an appropriate hydrodynamical numerical scheme for the associated hot gas.
The gas is much harder to model, but several high quality codes have been constructed
and tested for this purpose with different levels of approximation and numerical schemes
-- ranging from adaptive mesh refinement (AMR) to smooth particle hydrodynamics (SPH) --
and applied to the Bullet cluster \citep{LageFarrar2014ApJ787,MastBurk08MNRAS389p967,SpringelFarrar2007MNRAS380};
the Sausage cluster (CIZA J2242.8+5301\xspace; \citealt{MolnarBroadhurst2017,Donnert2017MNRAS471});
El Gordo (ACT-CL J0102-4915; \citealt{ZhangET2015ApJ813,MolnarBroadhurst2015});
A1758N \citep{MachadoET2015};
A1750 \citep{Molnar2013ApJ779}; and
CL0152-1357 \citep{MolnarET2012ApJ748};
for a review see \citealt{Molnar2015Frontiers}.
Several examples of extreme collisions have now been analyzed in detail for which the speed of
the infalling cluster exceeds the sound speed of the gas, thus merging shocks are generated.
These shock fronts, in some systems, are traced by radio ``relics'', large-scale diffuse
synchrotron emission (\citealt{Ensslin1999}; for a recent review see \citealt{FerettiET2012}).
There are also very bright gas features corresponding to compressed gas that, although subsonic,
is relatively dense, so that the X-ray emission is enhanced, showing a large-scale
cometary morphology, such as El Gordo \citep{MenanteauET2012}.
This morphology has been readily explained as an off axis binary collision in
self-consistent $N$-body/\-hydro\-dynamical numerical simulations assuming
zero self-interacting cross section for the dark matter
\citep{ZhangET2015ApJ813,MolnarBroadhurst2015}.
Radio relics in El Gordo delineate a bow shock and a back shock.
We define bow shock as the shock front in the gas of the main cluster
generated by the infalling cluster, which resembles a bow shock of a bullet
flying through air.
The back shock is the shock front propagating in the
gas of the infalling cluster in the direction opposite to the cluster's motion.
The back shock was detected by X-ray and SZ observations
\citep{BotteonET2016,BasuET2016AA591}.
Another such example is the Sausage cluster (CIZA J2242.8+5301\xspace), in which, in addition
to tidally compressed gas between the two merging clusters highlighted by X-ray
emission, there are impressive radio ``relics'' marking the location of the bow shock front
in particular with evidence of a back shock in the radio observations
\citep{MolnarBroadhurst2017,WeerenET2010Sci}.
These simulations are demanding in time and analysis despite the inherent simplicity
of binary encounters, because the gas interactions must be resolved well spatially and
temporally, and there is a wide range of masses, impact parameters, and
projection angles to explore.
With some intuition in exploring the output of these models and the improvements in data quality,
we can become more efficient by narrowing the ranges of the relevant initial conditions.
This numerical work has, in the above cases, found consistency with the collisionless dark matter
hypothesis by finding compelling agreement among the independent observables
including the lensing, radio relic, X-ray and SZ data that are increasingly being obtained for
examining the binary collision clusters.
A discrepancy that has been advanced for the ``Pandora'' cluster (A2744) in terms of dark matter
offsets is not so readily modeled, as this system is multi-modal in complexity
\citep{LamET2014ApJ797,Zitrin2014ApJ793,MertenET2011},
so such statements are qualitative at present.
Further clarification would also benefit from a deeper weak lensing analysis \citep{MedezinskiET2016ApJ817}.
Even for binary collisions it is very clear that approximate or misleading eyeball statements regarding
the basic collision parameters are often easily cleared up uniquely,
thanks to comparison with the numerical calculations.
In particular, statements regarding the relative impact velocity and the time after first core passage
at which the system is being viewed are often highly
uncertain without the guidance of a full self-consistent simulation
\citep[e.g.][]{GolovichET2016ApJ831,NgET2015}.
The issue of the relative velocity is particularly interesting as it
provides a definitive test of the {{\sc $\Lambda$CDM}}\xspace scenario.
The distribution of dark matter halo relative velocities has been calculated in detail
by several $N$-body simulations based on {{\sc $\Lambda$CDM}}\xspace
(e.g., \citealt{BouillotET2015,ThompsonNagamine2012}),
as well as including approximate hydro interactions \citep{CaiET2009}.
The first such consistency test was inspired by the Bullet cluster.
The infall velocities in the Bullet cluster, $\sim 3000\,${$\rm km\;s^{-1}$}\xspace,
derived using detailed $N$-body/hy\-dro\-dy\-nam\-ical simulations based on
multifrequency observations seemed to be too high for {{\sc $\Lambda$CDM}}\xspace models
\citep{MastBurk08MNRAS389p967,SpringelFarrar2007MNRAS380}.
Analyzing large cosmological numerical simulations,
\cite{LeeKomatsu2010ApJ718} and \cite{ThompsonNagamine2012}
found that the Bullet cluster is incompatible with {{\sc $\Lambda$CDM}}\xspace\ models.
\cite{BouillotET2015} arrived at the same conclusion adopting a new halo
finder algorithm, ROCKSTAR, and using extreme value statistics.
In contrast, \cite{WatsonET2014} and \cite{LageFarrar2015JCAP}
using different cosmological simulations concluded that the Bullet cluster is
not excluded by the {{\sc $\Lambda$CDM}}\xspace.
\cite{KraljicSarkar2015},
using the ``Dark Sky Simulations'', the ROCKSTAR algorithm,
and extreme value statistics, found that the expected number of Bullet-cluster-like
massive halos is $\sim\,$0.1, i.e., the Bullet cluster is compatible with the {{\sc $\Lambda$CDM}}\xspace models.
However, more high-infall velocity merging clusters have been identified recently
(Abell 2744: \citealt{OwersET2011};
CL J0152-1347: \citealt{MolnarET2012ApJ748};
MACS J0717.5+3745: \citealt{MaET2009} and \citealt{SayersET2013};
El Gordo: \citealt{MolnarBroadhurst2015}).
The probability to find all of these massive systems simultaneously
based on cosmological simulations of {{\sc $\Lambda$CDM}}\xspace models has not been assessed yet.
It is an open question today whether their infall velocities are compatible with
the predictions of our standard {{\sc $\Lambda$CDM}}\xspace models.
Clearly, a statistical sample of such clusters is of great interest to clarify this question further.
A confirmation of a sample of merging clusters with high infall velocities
could be a serious challenge to the standard {{\sc $\Lambda$CDM}}\xspace models
\cite[e.g.,][]{Molnar2015,KraljicSarkar2015}.
Here we apply our well-tested \emph{FLASH}\xspace-based 3-dimensional (3D) hydro/$N$-body code to the
recently recognized Bullet-cluster-like binary collision
\citep{GolovichET2016ApJ831},
to which self-consistent simulations have not yet been applied.
This system has the advantage of showing a clear cut pair of radio relics,
a distinct bullet like X-ray morphology,
weak gravitation lensing based mass centroids for the two interacting components, and
line of sight redshift information for cluster member galaxies \citep{GolovichET2016ApJ831}.
We follow the time evolution of the merging shocks until they run out of the intracluster gas,
and test the assumption of collisionless dark matter inherent to the standard {{\sc $\Lambda$CDM}}\xspace cosmology.
We make use of the AMR code \emph{FLASH}\xspace, allowing us to follow
the shocks in the low-density intracluster gas, which is not well resolved in fixed-grid
Eulerian schemes or in codes based on SPH
(for a comparison between AMR and SPH simulations see,
e.g., \citealt{MitchellET2009,AgertzET2007}).
The structure of this paper is as follows.
In Section~\ref{S:ZWCL0008} we summarize results
from previous analyses of ZwCl008\xspace based on multifrequency
observations and numerical simulations.
We describe our simulation setup for modeling of ZwCl008\xspace
as a binary merger in Section~\ref{S:SIMULATIONS}.
Section~\ref{S:RESULTS} presents our results, a dynamical model for
ZwCl008\xspace, a discussion on the dynamics of merging shocks
in clusters similar to ZwCl008\xspace, and a comparison with the Bullet cluster.
Section~\ref{S:CONCLUSIONS} contains our conclusions.
We adopt a spatially flat {{\sc $\Lambda$CDM}}\xspace cosmology with h = 0.7,
$\Omega_m = 0.3$, thus $\Omega_\Lambda = 0.7$.
Unless stated otherwise, the quoted errors represent 68\% Confidence levels (CLs).
\section{ZwCl008\xspace: The Newly Discovered Bullet-like Cluster}
\label{S:ZWCL0008}
The merging cluster ZwCl008\xspace, at a redshift of 0.1032,
was observed by \cite{WeerenET2011AA528} using
the Giant Metrewave Radio Telescope (GMRT)
at 241 and 640 MHz and the Westerbork Synthesis Radio Telescope
(WSRT) at 1.3-1.7 GHz.
They found two radio relics to the east and west from
the X-ray peak emission, with the eastern relic much more elongated
(first panel in Figure~\ref{F:OBSMOD}).
The spectral indices of both relics are steepening towards
the cluster center, suggesting that they are moving outward,
away from the center of the merging system.
The spectral indices at the front of the east and west relics
were reported to be $-1.2\pm0.2$ and $-1.0\pm0.15$.
Adopting these as the spectral indices of the injection distribution,
they derive Mach numbers ${\cal M}\xspace = 2.2_{-0.1}^{+0.2}$ and ${\cal M}\xspace = 2.4_{-0.2}^{+0.4}$
for the east and west relics; the polarizations were constrained to 5\%-25\% and 5\%-10\%.
More recently \cite{KierdorfET2017AA} carried out
high-frequency radio observations of ZwCl008\xspace
at 4.85 and 8.35 GHz with the Effelsberg telescope.
They found a polarization fraction of the eastern relic
between 20\% and 30\%,
and derived a Mach number of ${\cal M}\xspace = 2.35\pm0.1$,
in agreement with previous radio measurements of
\cite{WeerenET2011AA528}.
Most recently, \cite{GolovichET2017} carried out a dynamical
analysis of ZwCl008\xspace based on detailed radio (JVLA),
optical (HST, Subaru/SuprimeCam, Keck/DEIMOS)
and X-ray (Chandra/ACIS-I) observations and weak lensing
observations (Subaru/HST) to estimate masses of
M$_{200,1} = 5.73_{-1.81}^{+2.75}\,$ {$\times 10^{\,14}\,$\ensuremath{\mbox{\rm M}_{\odot}}}\xspace
and M$_{200,2} = 1.21_{-0.63}^{+1.43}\,${$\times 10^{\,14}\,$\ensuremath{\mbox{\rm M}_{\odot}}}\xspace
for the main and infalling cluster respectively,
which is a mass ratio of $\simeq 5$.
\cite{GolovichET2017} used this information as input in their
dynamical model, which is based on fixed NFW
\citep{NFW1997ApJ490p493} gravitational potentials
for the two components and zero impact parameter, ignoring
the effects of gas, gravitational tidal effects, mass loss,
and integrating the equations of motion numerically.
Golovich et al. estimated the merger velocity at pericenter,
$V_p$, and obtained $V_p = 1800^{+400}_{-300}$ {$\rm km\;s^{-1}$}\xspace.
The inclination angle relative to the plane of the sky, $\theta$,
was constrained to $6.6\degree \simless\; \theta \simless\; 31\degree$,
which is consistent with the direct constraint derived from radio polarization
measurements: $\theta\; \simless\; 40\degree$.
Golovich et al. concluded that the gas of the two merging subclusters
is still moving outward, and derived the phase of the system as
either 0.76$_{-0.27}^{+0.24}$ Gyr or 1.3$_{-0.35}^{+0.90}$ Gyr after the
first core passage for the outgoing phase and infalling phase
(after the turnover) respectively.
They could not distinguish between the outgoing and returning phases
because their model is time symmetric
and includes only the dark matter and not the gas; hence the X-ray emission,
which provides information on the gas, could not be interpreted.
\section{Modeling ZwCl008\xspace using hydrodynamical simulations}
\label{S:SIMULATIONS}
Our main goals were to obtain a reasonable physical model for
the newly discovered Bullet-cluster-like merging cluster, ZwCl008\xspace,
using $N$-body/\-hydro\-dynamical simulations, and thus estimate the infall velocity
and constrain the phase of the collision with high precision.
We have not carried out a systematic search for all the initial parameters
and determined their errors with statistical measures,
which would require many more simulations.
The errors we quote for the results from our simulations are conservatively estimated.
\subsection{Details of the simulations}
\label{SS:ICOND}
We modeled ZwCl008\xspace in 3D using an Eulerian $N$-body/hydrodynamic
code \emph{FLASH}\xspace (developed at the Center for Astrophysical Thermonuclear Flashes
at the University of Chicago; \citealt{Fryxell2000ApJS131p273,Ricker2008ApJS176}).
\emph{FLASH}\xspace is a publicly available AMR code, which can be run in parallel computer architectures.
We assumed a binary merger for ZwCl008\xspace and
included dark matter and gas self-consistently taking their gravity into account dynamically.
We used our well-established method to carry out merging
cluster simulations
\citep{MolnarBroadhurst2017,MolnarBroadhurst2015,Molnar2013ApJ779,Molnar2013ApJ774,MolnarET2012ApJ748}.
For our simulations, we adopted a large box size (13.3 Mpc on a side)
to capture the outgoing merger shocks and avoid losing mass
during the time we ran our simulations.
Our highest resolution, 12.7 kpc, was reached at the cluster centers, merger shocks,
and in the turbulent regions behind the shocks.
We chose a 3D Cartesian coordinate system, $x,y,z$, with the $x,y$ plane containing the
centers of the clusters and the initial (relative) velocity vector of the infalling cluster
in the positive $x$ direction.
We included shock heating, the most important non-adiabatic process in merging clusters,
and ignored other heating and cooling processes.
\begin{deluxetable}{lcccccc}[t]
\tablecolumns{11}
\tablecaption{ \label{T:TABLE1}
IDs and input parameters for different models used in our hydrodynamical simulations.\\
}
\tablewidth{0pt}
\tablehead{
\multicolumn{1}{l} {ID\,\tablenotemark{a}} &
\multicolumn{1}{c} {M$_{vir1}$\,\tablenotemark{b}} &
\multicolumn{1}{c} {c$_{vir1}$\tablenotemark{b}} &
\multicolumn{1}{c} {M$_{vir2}$\,\tablenotemark{c}} &
\multicolumn{1}{c} {c$_{vir2}$\tablenotemark{c}} &
\multicolumn{1}{c} {P\,\tablenotemark{d}} &
\multicolumn{1}{c} {V$_{in}$\,\tablenotemark{e}}
}
\startdata
P400V18B & 7.0 & 6 & 5.0 & 8 & 400 & 1800 \\ \hline
P100V15G & 6.0 & 6 & 1.5 & 8 & 100 & 1500 \\ \hline
P300V18 & 7.0 & 6 & 5.0 & 8 & 300 & 1800 \\ \hline
P500V18 & 7.0 & 6 & 5.0 & 8 & 500 & 1800 \\ \hline
P400V18M1 & 7.0 & 6 & 5.5 & 8 & 400 & 1800 \\ \hline
P400V18M2 & 7.0 & 6 & 4.5 & 8 & 400 & 1800 \\ \hline
P400V18M3 & 7.5 & 6 & 5.5 & 8 & 400 & 1800 \\ \hline
P400V18M4 & 7.5 & 6 & 5.0 & 8 & 400 & 1800 \\ \hline
P400V18M5 & 7.5 & 6 & 4.5 & 8 & 400 & 1800 \\ \hline
P400V18M6 & 6.5 & 6 & 5.5 & 8 & 400 & 1800 \\ \hline
P400V18M7 & 6.5 & 6 & 5.0 & 8 & 400 & 1800 \\ \hline
P400V18M8 & 6.5 & 6 & 4.5 & 8 & 400 & 1800 \\ \hline
P400V15 & 7.0 & 6 & 5.0 & 8 & 400 & 1500 \\ \hline
P400V20 & 7.0 & 6 & 5.0 & 8 & 400 & 2000
\enddata
\tablenotetext{a}{IDs of the runs indicate the impact parameters in kpc and the infall velocities
in units of 100 {$\rm km\;s^{-1}$}\xspace.}
\tablenotetext{b}{Virial mass in {$10^{\,14}\,$\ensuremath{\mbox{\rm M}_{\odot}}}\xspace and concentration parameter for the main cluster (1).}
\tablenotetext{c}{Virial mass in {$10^{\,14}\,$\ensuremath{\mbox{\rm M}_{\odot}}}\xspace and concentration parameter for the infalling cluster (2).}
\tablenotetext{d}{Impact parameter in units of kpc.}
\tablenotetext{e}{Infall velocity of cluster 2 in {$\rm km\;s^{-1}$}\xspace.
\vspace{.4 cm}}
\end{deluxetable}
\begin{figure*}[t]
\includegraphics[width=.33\textwidth]{fig1a.pdf}
\includegraphics[width=.32\textwidth]{fig1b.pdf}
\includegraphics[width=.33\textwidth]{fig1c.pdf}
\caption{
1st panel: \emph{Chandra}\xspace X-ray color image of ZwCl008\xspace with the
radio contours (cyan) based on WSRT observations
overlaid (from \citealt{GolovichET2017,WeerenET2011AA528}).
The white dashed circle and the horizontal bar represent R$_{500}$ and a
length of 1 Mpc. The white annular sector marks the proposed shock
region (see \citealt{GolovichET2017}).
The two BCGs are shown by red crosses.
2nd panel: On the same scale,
simulated X-ray image based on our best model (run P400V18B)
at t = 428 Myr after the 1st core passage
with the contours of the dark matter distribution overlaid (yellow).
The green contours represent the outgoing shocks.
The viewing angle was chosen to match the observations (see text for details).
The bow shock on the right is ahead of the infalling cluster moving to the right;
the back shock on the left is moving to the left, in the direction opposite
to the motion of the infalling cluster.
3rd panel: On the same scale, our
run P100V15G with initial conditions suggested by \cite{GolovichET2017}.
There is only one X-ray peak, which is associated with the gas of the main cluster.
The gas of the infalling cluster has been stripped off as a result of the relatively large
velocity of the infalling cluster and the small impact parameter.
\vspace{0.3 cm}
\label{F:OBSMOD}
}
\end{figure*}
\vspace{1.3 cm}
The initial models of the clusters were assumed to have spherical geometry
with cut offs of the dark matter and gas density at the virial radius, $R_{\rm vir}$.
We assumed an NFW model \citep{NFW1997ApJ490p493} for the dark
matter distribution,
\begin{equation} \label{E:NFW}
\rho_{DM} (r) = { \rho_s \over x(r) (1 + x(r))^2}; \,\,r \le R_{\rm vir}
,
\end{equation}
{\noindent}
where $x(r) = r/r_s$ ($r_s = R_{vir}/c_{vir}$) and $\rho_s$ are scaling parameters
for the radius and the amplitude of the density, and $c_{vir}$ is the concentration parameter.
We adopted a non-isothermal $\beta$ model for the gas density,
\begin{equation} \label{E:BETAMODEL}
\rho_{gas}(r) = { \rho_0 \over (1 + y^2)^{3 \beta /2} }; \,\,r \le R_{\rm vir}
,
\end{equation}
{\noindent}
where $y = r/r_{core}$ is the scaled radius
(in units of the core radius, $r_{core}$), and
$\rho_0$ is the density at the center of the cluster.
The exponent, $\beta$ determines the fall off of the density
distribution at large radii. We adopted $\beta=1$, suggested by
cosmological numerical simulations for the large scale distribution
of the intracluster gas in equilibrium (excluding the filaments;
see \citealt{Molnet10ApJ723p1272}).
We derived the gas temperature as a function of the radius, $T(r)$,
assuming hydrostatic equilibrium and adopting $\gamma = 5/3$ for the ideal gas equation of state.
We used a gas fraction $f_{gas} = 0.14$, and represented baryons in galaxies
together with the collisionless dark matter particles,
since, for our purposes, galaxies can also be considered collisionless.
It is less straightforward to model a stable dark matter density distribution
than a gas distribution. The hydrostatic equilibrium assumption provides
a stable distribution for the gas, but the dark matter, modeled as particles,
has no pressure, interacting only gravitationally with itself and the gas,
which means that they move on orbits in the potential of the cluster.
We use the local Maxwellian approximation for the amplitude
of the velocities of the dark matter particles: we randomly sample
a Maxwellian distribution with a velocity dispersion as
a function of radius, $r$, $\sigma_v (r)$ derived from
the Jeans equation assuming that the distribution of $\sigma_v (r)$ is
isotropic \citep{LokasMamon2001MNRAS321}.
We assumed an isotropic distribution for the direction of the velocity vectors
(for more details of the set up for our simulations see \citealt{MolnarET2012ApJ748}).
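To make the setup concrete, the following Python sketch tabulates one such
initial cluster model (the normalizations below are placeholders, not the
fitted values for ZwCl008\xspace): the profiles of
Equations~\ref{E:NFW} and \ref{E:BETAMODEL}, the enclosed mass, and the
hydrostatic temperature profile.
\begin{verbatim}
import numpy as np

G, KB, MP, MU = 6.674e-8, 1.381e-16, 1.673e-24, 0.6  # cgs constants
MPC = 3.086e24
Rvir, cvir = 2.5*MPC, 6.0         # placeholder virial radius, concentration
rho_s, rho0, rcore = 1.0e-25, 5.0e-26, 0.25*MPC      # placeholder densities
rs = Rvir/cvir

r = np.linspace(1e-3*Rvir, Rvir, 4000)
x = r/rs
rho_dm  = rho_s/(x*(1.0 + x)**2)                     # NFW profile
rho_gas = rho0/(1.0 + (r/rcore)**2)**1.5             # beta model, beta = 1

# enclosed mass: analytic for NFW, cumulative trapezoid for the gas
m_dm  = 4.0*np.pi*rho_s*rs**3*(np.log(1.0 + x) - x/(1.0 + x))
seg_g = 2.0*np.pi*(rho_gas[1:]*r[1:]**2
                   + rho_gas[:-1]*r[:-1]**2)*np.diff(r)
m_tot = m_dm + np.concatenate(([0.0], np.cumsum(seg_g)))

# hydrostatic equilibrium: P(r) = int_r^Rvir rho_gas G m(<r')/r'^2 dr'
f   = rho_gas*G*m_tot/r**2
seg = 0.5*(f[1:] + f[:-1])*np.diff(r)
P   = np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))
T   = MU*MP*P/(KB*rho_gas)   # T(r), vanishing at the cut-off radius
\end{verbatim}
The dark matter particle speeds are then drawn from the local Maxwellian
with dispersion $\sigma_v(r)$ from the Jeans equation, as described above.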
\subsection{\emph{FLASH}\xspace\ Runs}
\label{SS:RUNS}
We have run a series of \emph{FLASH}\xspace simulations varying the
initial masses, concentration parameters, impact parameter,
and infall velocity of our models.
Our aim was to find a physical model for ZwCl008\xspace with
a reasonable agreement with observations.
Our simulations were constrained by the masses and positions
of the dark matter centers derived from weak gravitational lensing,
X-ray morphology \citep{GolovichET2017},
and the positions of the outgoing merging shocks inferred from radio
observations \citep{WeerenET2011AA528}.
The long bright radio relic to the east most likely marks the location of the
back shock, as in the CIZA J2242.8+5301\xspace cluster (as demonstrated by \citealt{MolnarBroadhurst2017}),
due to the limb brightening of the spherical surface of the relic viewed in projection.
The position of the bow shock associated with the infalling
cluster is much less certain. This forward shock is not detected by the
X-ray observations \citep{GolovichET2017}.
We estimate from our models that it should lie significantly beyond the small radio
relic that we nevertheless do associate with this shock.
The reason for this displacement can be simply geometric in origin,
because such shock surfaces are convex in shape; thus, in projection, radio
relics on their surface may often appear to lie behind the front when viewed from the side
despite being generated by the shock (e.g., planetary nebulae appear like rings,
but they are spherical shells). In other words, because radio relics cannot be expected
to trace the full shock surface uniformly but have a patchy covering, it is
likely to see radio relics appearing ``inside'' the projected shock front, and only occasionally
marking the projected shock itself when a relic happens to cover some of the projected area
of the shock front. When that happens, the radio relic can be of high surface brightness
as the projected radio emission (which is optically thin) adds up in projection.
Indeed, notable examples of anomalously bright relics tracing large shock fronts are known,
in particular the bow shock of CIZA J2242.8+5301\xspace \citep{WeerenET2011MNRAS418},
which modeling has shown to coincide with the observed projected shock front
\citep{MolnarBroadhurst2017}.
We have carried out a suite of simulations to provide a rough
estimate on the errors on the infall velocity, impact parameter, and masses
of the merging system before the collision.
A systematic parameter search is currently beyond reach of
conventional high speed computing resources based on CPUs.
Table~\ref{T:TABLE1} contains a list of initial parameters
of those simulations we discuss in this paper.
This is a narrow subset of parameter space that contains our best solutions
with enough spread to illustrate the observable effects of moving away
from the best-fit solution.
The first column contains the IDs of our runs as
$PijkVmn$, where $Pijk$ is the impact parameter in kpc,
and $Vmn$ is the infall velocity in units of 100 {$\rm km\;s^{-1}$}\xspace.
In columns 2 to 5 we show the virial masses (in units of {$10^{\,14}\,$\ensuremath{\mbox{\rm M}_{\odot}}}\xspace)
and concentration parameters ($c_{vir}$) of the two subclusters.
Columns 6 and 7 contain the impact parameters, $P$,
and infall velocities ($V_{in}$) in {$\rm km\;s^{-1}$}\xspace.
\begin{figure*}[t]
\includegraphics[width=.33\textwidth]{fig2a.pdf}
\includegraphics[width=.325\textwidth]{fig2b.pdf}
\includegraphics[width=.33\textwidth]{fig2c.pdf}
\includegraphics[width=.327\textwidth]{fig2d.pdf}
\includegraphics[width=.335\textwidth]{fig2e.pdf}
\includegraphics[width=.333\textwidth]{fig2f.pdf}
\caption{
Samples of simulations which do not fully resemble the X-ray morphology of ZwCl008\xspace.
The color code is the same as in the second panel in Figure~\ref{F:OBSMOD}.
Left to right:
First row:
1st and 2nd panels show runs with P = 300 and 500 kpc (P300V18 and P500V18).
Second row:
1st and 2nd panels show V$_{in}$ = 1500 and 2000 {$\rm km\;s^{-1}$}\xspace (P400V15 and P400V20).
The 3rd panel in the first and second row displays the best model,
but before and after the best-fit epoch
(412 Myr and 444 Myr after the first core passage; run P400V18B).
The best-fit epoch is t$_{obs}$ = 428 Myr after the first core passage.
See Table~\ref{T:TABLE1} for a list of initial parameters for the runs.
\vspace{0.3 cm}
\label{F:SIMNOMATCH}
}
\end{figure*}
\section{Results and discussion}
\label{S:RESULTS}
\subsection{Dynamical Model for ZwCl008\xspace}
\label{SS:DEPROJECT}
The second panel in Figure~\ref{F:OBSMOD} shows a simulated X-ray color image
of our best-fit model at the epoch of t$_{b} = 428$ Myr after the first core passage
for ZwCl008\xspace, run P400V18B with infall velocity, $V_{in} = 1800$ {$\rm km\;s^{-1}$}\xspace,
impact parameter, $P = 400$ kpc, and masses
$M_{vir1} = 7$ and $M_{vir2} = 5 \times${$10^{\,14}\,$\ensuremath{\mbox{\rm M}_{\odot}}}\xspace (main and infalling cluster).
The yellow contours represent the projected dark matter distribution from our simulation.
The white contours are based on radio observations,
the white dashed circle and the horizontal bar represent R$_{500}$
and a physical length of 1 Mpc (from \citealt{GolovichET2017}; as in the first panel).
The viewing (rotation) angles were chosen the following way.
First we rotated the system with an angle, $\varphi = 30${$^{\circ}$} (``roll angle''),
around the axis connecting the two dark matter centers (rotation around the $y$ axis)
then we rotated this axis with an angle, $\theta = 31${$^{\circ}$}, out of the $x-y$ plane.
$\theta$ was chosen to provide a projected distance between the
two dark matter centers, $D = 940$ kpc, to match the positions of the
observed centers based on weak lensing mass reconstruction,
the roll angle was chosen to find the best match with the observed X-ray morphology
\citep{GolovichET2017}.
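(For orientation: with the axis connecting the mass centers inclined by an
angle $\theta$ out of the plane of the sky, the projected separation $D$ and
the three-dimensional separation $d$ are related by $D = d\cos\theta$, so
$\theta = 31${$^{\circ}$} and $D = 940$ kpc correspond to
$d = D/\cos\theta \approx 1.1$ Mpc.)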
We chose the output (epoch) that could be rotated in such a way that the position of the
back shock is near the eastern edge of the observed long radio relic in the east
of the X-ray peak and the bow shock in the west is not inside the radio relic associated with it,
as radio observations suggest \citep{WeerenET2011AA528}.
The shocks in our simulated images were located based on projected pressure gradients.
A detailed description of our method to generate mock X-ray and mass surface density
images can be found in \cite{MolnarBroadhurst2015}.
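As a rough illustration of the shock-finding step (a sketch only; the grid
size and the contrast threshold below are assumptions, not the values used
in our pipeline), shock candidates can be flagged from the gradient of the
projected pressure:
\begin{verbatim}
import numpy as np
nx = ny = nz = 128
P3d = np.ones((nx, ny, nz))           # pressure from the simulation output
Pproj = P3d.sum(axis=2)               # project along the LOS (z axis)
gx, gy = np.gradient(np.log(Pproj))   # log-gradient of projected pressure
gmag = np.hypot(gx, gy)
shock_mask = gmag > 5.0*np.median(gmag)  # assumed contrast threshold
\end{verbatim}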
\begin{figure*}[t]
\includegraphics[width=.247\textwidth]{fig3a.pdf}
\includegraphics[width=.247\textwidth]{fig3b.pdf}
\includegraphics[width=.247\textwidth]{fig3c.pdf}
\includegraphics[width=.247\textwidth]{fig3d.pdf}
\includegraphics[width=.247\textwidth]{fig3e.pdf}
\includegraphics[width=.247\textwidth]{fig3f.pdf}
\includegraphics[width=.247\textwidth]{fig3g.pdf}
\includegraphics[width=.247\textwidth]{fig3h.pdf}
\caption{
Merging shocks and the distribution of the mass
surface density of dark matter (green and yellow contours)
overlaid on the X-ray emission (color image) as a function of time
before and after the first core passage based on our best model for ZwCl008\xspace
(run P400V18B, see Table~\ref{T:TABLE1}).
We assumed that the collision occurs in the plane of the sky
(projections in the LOS). The panels are 3 Mpc on a side.
Left to right, first and second rows, the epochs
(relative to the first core passage; t$_0 = 0$)
are: $t$ = -475, -317, 0, 238 Myr and $t$ = 396, 459, 555, 713 Myr.
The infalling cluster passes the main cluster from below moving east to west.
Panels in the first row:
The first two panels in the first row show epochs before the first core passage.
The 3rd panel represents the epoch of the first core passage.
The two dark matter peaks overlap at the center.
The 4th panel shows the phase right after the first core passage.
Panels in the second row show phases of the collision when there are
two X-ray peaks.
The first two panels show two shocks ahead
of the cluster centers moving outwards.
The 3rd panel shows the phase when the back shock already ran out of
the gas of the infalling cluster. The bow shock is still moving outward
(towards west).
The 4th panel shows a later epoch, when both shocks ran out of the
gas of the merging system, and the gas is falling back to the cores of the
dark matters of the two components.
\vspace{0.5 cm}
\label{F:OUTSHOCKS}
}
\end{figure*}
\vspace{1.0 cm}
In Figure~\ref{F:SIMNOMATCH}, we show images of models for
ZwCl008\xspace, which do not satisfy our requirements for a good
match with the data.
In this figure the color images represent the simulated X-ray
surface brightness, the color code for the contours is the same as
in the second panel in Figure~\ref{F:OBSMOD}.
The procedure was the following. First, we aligned the position of the X-ray
peak associated with the infalling cluster with the observed one; then
we chose the projection angles so that
the distances and position angles between the dark matter centers match
those derived from weak lensing observations.
In the third step, where possible, we selected cases where the position
of the back shock is near the eastern edge of the long radio relic east of the X-ray peak.
The first and second panels in the first row show runs with only
the impact parameters changed to P = 300 and 500 kpc (runs P300V18 and P500V18)
from the best model (run P400V18B with P = 400 kpc).
The X-ray morphology of P300V18 (first panel) is too elongated
along the line connecting the two dark matter centers.
Run P500V18 displays two X-ray peaks with large separation and
very small offset from the dark matter centers.
The first and second panel in the second row
show simulations changing only the relative (infall) velocity of the infalling cluster
to V$_{in}$ = 1500 and 2000 {$\rm km\;s^{-1}$}\xspace (runs P400V15 and P400V20)
to bracket our best model with V$_{in}$ = 1800 {$\rm km\;s^{-1}$}\xspace (run P400V18B).
In the first panel, the very bright X-ray peak is associated with the main cluster,
not with the infalling cluster as observed; in the second panel the back shock
is farther out than observed.
The third panels in the first and second rows display the best model
(run P400V18B) but at $t$ = 412 Myr and 444 Myr after the first core passage,
before and after the best-fit epoch.
The best-fit epoch is t$_{obs}$ = 428 Myr after the first core passage.
See Table~\ref{T:TABLE1} for a list of initial parameters for the runs.
The third panel in the first row shows two bright X-ray peaks with enhanced
emission from the tidal bridge between them, which differs from the data where the
X-ray peaks have a large brightness ratio and a less enhanced bridge between them.
The third panel in the second row shows two X-ray peaks with
a large separation and a small offset for both X-ray peaks,
again differing significantly with respect to the observed morphology.
We conclude from our suite of $N$-body/\-hydro\-dynamical simulations
that ZwCl008\xspace is viewed at about 428 Myr after first core passage,
and that the infalling cluster has a mass M$_{vir;2} = 5\pm 0.5\,${$\times 10^{\,14}\,$\ensuremath{\mbox{\rm M}_{\odot}}}\xspace
moving to the west, disrupting the gas of the main cluster with mass
M$_{vir;1} = 7\pm 0.5\,${$\times 10^{\,14}\,$\ensuremath{\mbox{\rm M}_{\odot}}}\xspace, so that it lies to the west of
the main cluster at the observed epoch. Our model also clearly confirms that
the disrupted gas of the main cluster is offset from its dark matter center
as the data seem to indicate.
Our simulations clearly demonstrate that the merging cluster, ZwCl008\xspace,
is in the outgoing phase just after the first core passage, before the first
turnover. The gas and dark matter associated with the two components
are moving outward (the infalling cluster moving to the west, the
main cluster to the east; see first and second panels in Figure~\ref{F:OBSMOD}).
These results are in broad qualitative agreement with the results of
\cite{GolovichET2017}, but clearly prefer a recent post collision epoch and
exclude their later, post collision epoch option of 1300 Myr, as by then the shock
fronts predicted by our model will have long left the system.
Also, our best model has significantly larger mass, $1.2 \times 10^{15} \,M_\odot$,
and a smaller mass ratio, 1.4, than those suggested by Golovich et al.
($6.9 \times 10^{14} \,M_\odot$ and 4.75).
We performed a simulation with initial conditions suggested by \cite{GolovichET2017}
(run P100V15G). We show the result in the third panel in Figure~\ref{F:OBSMOD}.
There is only one X-ray peak, which is associated with the gas of the main cluster.
We chose a projection which provides a rough match to the positions of the two
merger shocks with those observed. However, there is no X-ray peak associated
with the infalling cluster, contrary to the observations.
The gas of the infalling cluster has been stripped off as a result of the relatively large
velocity of the infalling cluster (1500 {$\rm km\;s^{-1}$}\xspace) and the small impact parameter (100 kpc).
Note that \cite{GolovichET2017} assumed zero impact parameter, which would
make it even easier to strip all the gas from the infalling cluster. We chose a finite,
but small impact parameter since the observed X-ray morphology is not symmetric.
\subsection{Outgoing Merging Shocks in ZwCl008\xspace}
\label{SS:SHOCKPROPERTIES}
The properties of merging shocks were studied in detail by making use of
$N$-body/\-hydro\-dynamical simulations
\citep[e.g.,][]{MolnarBroadhurst2017,HaET2017arXiv170605509}.
In this section we study the evolution of merging shocks as a function of time
around the first core passage before the first turnover using our best
solution for ZwCl008\xspace as an example.
The evolution of merging shocks is illustrated in Figure~\ref{F:OUTSHOCKS}.
The panels in this figure show a color image of the X-ray emission
with contours of the dark matter distribution (yellow) and the
merging shocks (green) superimposed.
We used a projection along the $z$ axis, i.e., we assumed that $z$ coincides with
the line of sight (LOS), and the collision takes place in the plane of the sky ($x,y$ plane).
The infalling cluster passes the main cluster center from below moving east to west
(second panel in the first row; run P400V18B).
The time is measured in Myr relative to the first core passage (t$_0 = 0$ Myr;
left to right, first row: $t$ = -475, -317, 0, 238 Myr, and
second row: $t$ = 396, 459, 555, 713 Myr).
The first two panels in the first row show epochs before the first
core passage, when only the outer regions of the gas in the two clusters collide.
The shocks move ahead of the two clusters generating them
in the same direction as their respective dark matter
(the shock on the east moves to west, the shock on the west moves to east).
The third panel represents the epoch of the first core passage.
The two dark matter peaks overlap at the center; they are slightly
ahead of the X-ray peaks. The structure of the merging shocks changes:
multiple shocks are generated due to the collision of the denser
gas in the merging clusters.
The fourth panel shows a phase right after the first core
passage, when there is only one X-ray peak and the bow shock on the
west in the gas of the main cluster is moving to the west ahead of the infalling
cluster. The back shock in the east propagates to the east in the gas of the
infalling cluster.
Panels in the second row show phases after the first core passage,
when there are two X-ray peaks.
The first two panels show two shocks ahead
of the cluster centers moving outward.
The third panel shows a phase when the back shock already ran out of
the gas of the infalling cluster. The bow shock is still moving outward
(towards west).
The fourth panel shows a later epoch, when both shocks ran out of the
gas of the merging system, and the gas is falling back to the cores of the
dark matter of the two components.
We study these outgoing merging shocks in detail because of
their importance in determining the dynamical state of the
merging cluster, and the phase of the collision.
The phase of the collision is important for particle acceleration models,
because the physical properties of the shock depend on it, and these properties
have a large impact on particle acceleration
(e.g., \citealt{FujitaET2016,KangRyu2016,StroeET2014MNRAS445}),
as well as for testing cosmological models \citep[e.g.][]{Molnar2015}.
The reason why the outgoing shocks constrain the phase of the
collision well right after the first core passage
is that they propagate with a high speed through the low-density gas
at the outskirts of the system and soon run out of the gas.
Because of these relatively high outer shock speeds, these shocks
can (re)accelerate particles and generate luminous radio relics for a limited time.
As a consequence shocks and bright relics can only be expected to
be detected in a pair of merging clusters relatively soon after the first core passage,
before the first turnaround, after which the relics become fainter without a shock
to provide additional energy.
In general, as we see here, if the observed shocks were to lie in between the two
X-ray peaks associated with relaxed cluster centers,
then we would conclude that the merging system is being viewed before
the first core passage, as shown in the first panel in Figure~\ref{F:OUTSHOCKS},
such as the case of, e.g., Abell 1750 \citep{Molnar2013ApJ779}.
Disturbed X-ray morphology displaying with one or two X-ray peaks within one or two
shocks instead suggests a system caught after first core passage
(e.g., \citealt{MolnarBroadhurst2015,MastBurk08MNRAS389p967}).
Figure~\ref{F:OUTSHOCKS} also illustrates how fast the outgoing
shocks propagate in a merging cluster similar to ZwCl008\xspace.
After $\sim$500 Myr, both the bow shock and the back shock
propagating in the gas of the main and the infalling cluster
ran out of the system (panel 4 in Figure~\ref{F:OUTSHOCKS}).
The velocity relative to the ambient gas of the bow shock
(the shock on the west moving to west) is 4500 {$\rm km\;s^{-1}$}\xspace,
the back shock propagates faster (to the east) with 4900 {$\rm km\;s^{-1}$}\xspace.
At these speeds, the merging shocks (bow and back shocks)
would run out of the gas in 470 and 380 Myr after the first core passage,
but our numerical simulations suggest that they run out in 618 and 522 Myr.
The turnover occurs 1.5 Gyr after the first core passage,
thus, well before the turn over, both merger shocks leave the system.
This is a general feature of merging shocks in colliding clusters
with moderate mass ratios and infall velocities $\simgreat 1000$ {$\rm km\;s^{-1}$}\xspace.
We derive the Mach numbers for the merging shocks based on our best model
using the temperature jump at the shocks, as is done with temperatures
derived from X-ray observations.
The Rankine-Hugoniot jump conditions provide the connection between
the temperature jump, $T_2/T_1$, and the Mach number,
\begin{equation}
\frac{T_2}{T_1} = \frac{5 {\cal M}\xspace^4 + 14 {\cal M}\xspace^2 - 3}{16 {\cal M}\xspace^2}
,
\label{E:MACHT2T1}
\end{equation}
where $T_1$ and $T_2$ are the pre- and post-shock temperatures
(e.g., \citealt{MolnarBroadhurst2017,AkamatsuET2015};
for a review see \citealt{MarkevitchVikhlinin2007}).
Using the temperature jump from our simulations at the shocks
in Equation~\ref{E:MACHT2T1},
we obtain Mach numbers for the merging shocks
directly from the physical (not the observed) temperature ratios
(the bow shock and back shock to the west and east):
${\cal M}\xspace_{w,simu} = 5.5$ and ${\cal M}\xspace_{e,simu} = 6.5$.
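Inverting Equation~\ref{E:MACHT2T1} amounts to solving a quadratic equation
in ${\cal M}\xspace^2$; a small Python helper illustrating this (a convenience
sketch, not part of our analysis pipeline) is:
\begin{verbatim}
import numpy as np

def mach_from_temperature_jump(tau):
    # solve (5 M^4 + 14 M^2 - 3)/(16 M^2) = tau for M (gamma = 5/3)
    b = 16.0*tau - 14.0
    return np.sqrt((b + np.sqrt(b*b + 60.0))/10.0)

# sanity check: tau = 1 (no temperature jump) gives M = 1
print(mach_from_temperature_jump(1.0))
\end{verbatim}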
\subsection{Comparing ZwCl008\xspace to the Bullet Cluster}
\label{SS:COMPARISON}
It has been suggested that ZwCl008\xspace is an older, less massive
version of the Bullet Cluster \citep{GolovichET2016ApJ831},
where the less massive infalling cluster has pushed the gas of the main
cluster out of equilibrium and, as a result, the X-ray morphology,
which traces the gas of the main cluster,
is irregular (X-ray feature in the east in the first panel in
Figure~\ref{F:OBSMOD}).
However, our hydrodynamical simulations suggest that the
cometary-like X-ray peak in ZwCl008\xspace near the center of mass
of the smaller western cluster (see first panel in Figure~\ref{F:OBSMOD})
marks the gas density maximum of the infalling cluster,
similar to El Gordo \citep{MolnarBroadhurst2015},
unlike the Bullet cluster, in which the wedge-shaped bright X-ray feature
marks a contact discontinuity.
The shock in the Bullet cluster is a faint X-ray feature
ahead of this contact discontinuity to the west
(e.g., \citealt{MarkevitchVikhlinin2007}).
Based on our best model, ZwCl008\xspace is observed only 428 Myr after the first
core passage, in a much earlier phase than the turnover at 1.5 Gyr,
as is the Bullet cluster.
Our results suggest instead that, because of a larger impact parameter
for the collision in ZwCl008\xspace and the lower infall velocity relative to the
Bullet cluster, the ram pressure cannot fully hold back
the gas relative to the dark matter in the infalling cluster,
as opposed to the case of the Bullet cluster.
Instead, the infalling cluster in ZwCl008\xspace passes relatively unhindered
by the gas of the main cluster since it does not penetrate through the
central dense gas of the main cluster.
\section{Conclusions}
\label{S:CONCLUSIONS}
We have performed a set of self-consistent $N$-body/\-hydro\-dynamical simulations
based on \emph{FLASH}\xspace, a publicly available AMR code, augmenting our existing library
of binary cluster encounters generated in our previous work, in order to study the
particular dynamics during the collision of the merging cluster ZwCl008\xspace.
We have modeled ZwCl008\xspace as a binary merger, constraining the initial
parameters using gravitational lensing, X-ray morphology and observations
of radio relics.
Unfortunately the merger shock positions cannot be determined
from the \emph{Chandra}\xspace observations without longer X-ray exposures.
Therefore, we rely on the positions of the pair of opposing, distinctively
polarized radio relics, which we can assume to have resulted from the
two predicted outgoing shocks, and on a simple geometrical argument
(see Section~\ref{SS:DEPROJECT}), to limit the current shock locations.
The detailed X-ray morphology and the locations
of the two lensing centroids were used to constrain the impact parameter and
the infall velocity of the collision, as well as the viewing angle.
We have demonstrated that the outgoing shocks travel fast
(4000-5000 {$\rm km\;s^{-1}$}\xspace) in the low density outer gas of the two subclusters,
and, therefore, their positions relative to those of the mass peaks
can be used to accurately derive the phase of the collision.
Thus, merging shocks can (re)accelerate particles and generate relics
only for a limited time $\simless 5\times 10^8$ yr, so that shocks and bright relics can be
detected in a merging system soon after the first core passage, before the first turnaround.
After that time the relics become fainter, since no shocks feed them and the
electrons lose energy.
This point has been unappreciated in earlier work \citep{GolovichET2017,NgET2015}
where later stage merging has been entertained without the guidance of
hydrodynamical simulations like those presented in this paper.
Based on our $N$-body/\-hydro\-dynamical simulations,
we derive an impact parameter of $400 \pm 100$ kpc,
and an infall velocity of $1800 \pm 300\,${$\rm km\;s^{-1}$}\xspace,
with virial masses of M$_{vir;1} = 7\pm 0.5\,${$\times 10^{\,14}\,$\ensuremath{\mbox{\rm M}_{\odot}}}\xspace and
M$_{vir;2} = 5\pm 0.5\,${$\times 10^{\,14}\,$\ensuremath{\mbox{\rm M}_{\odot}}}\xspace for the main and infalling cluster respectively.
We find that ZwCl008\xspace is observed about 430 Myr after the first core passage.
Our simulations clearly demonstrate that ZwCl008\xspace
is currently in the outgoing phase, well before the first turnover,
otherwise the forward and reverse shock fronts would have long run out
of the system to the east and west.
Our numerical simulations represent the first attempt to model the
newly discovered Bullet-cluster-like merging system ZwCl008\xspace
using self-consistent $N$-body/\-hydro\-dynamical simulations.
Previously, ZwCl008\xspace was modeled by
\cite{GolovichET2017} using a method based on
fixed NFW profiles for the dark matter distribution of the merging subclusters,
assuming zero impact parameter and ignoring the gas components \citep{Dawson2013ApJ}.
However, their model cannot distinguish between the outgoing phase and
the infalling phase after the first turnover.
Our full self-consistent simulation containing dark matter and gas can constrain
the impact parameter, the phase of the collision,
and the viewing angle using the location of the merging shocks,
providing both a check on the apparent interpretation of this system
as a binary encounter and reliable estimates of the
basic masses, velocities, age, and orientation of the system.
The degree of agreement we find between all the reliable observables of the binary
merging cluster ZwCl008\xspace and those based on our best model derived from
$N$-body/\-hydro\-dynamical simulations along with self-consistent simulations of
other merging clusters provides further strong evidence that dark matter is effectively
collisionless on large scales.
These results support the remarkable insight into this question initially gained by the Bullet cluster.
This self-consistency calls into question other claims and theories that advocate
modified gravity, where the aim is to ``emulate'' dark matter, simply
because the lensing contours indicating the location of the gravitational
potential do not follow the dominant baryonic material, which is composed of gas.
Instead the detailed gas distribution relative to the lensing data indicates the contrary,
that dark matter dominates and it is collisionless to within the precision of the data.
\acknowledgements
The code \emph{FLASH}\xspace\ used in this work was in part developed by the
DOE-supported ASC/Alliance Center for Astrophysical Thermonuclear
Flashes at the University of Chicago.
We thank the Theoretical Institute for Advanced Research in Astrophysics,
Academia Sinica, for allowing us to use its high performance computer facility
to carry out our simulations.
\bibliographystyle{apj}
\section{INTRODUCTION}
Deep inelastic electroproduction of heavy flavours is given by
\begin{equation}
e^-(\ell_1)+P(p)\rightarrow e^-(\ell_2)+Q(p_1)(\ \overline{Q}(p_2)\ )
+'X' \,.
\end{equation}
When the virtuality $-q^2 = Q^2 >0 \ (\ q=\ell_1-\ell_2\ )$
of the exchanged vector bosons is not too large $(\ Q^2\ll M_Z^2\ )$ the
above reaction only gets a contribution
from the virtual photon and we can neglect any weak effects
caused by the exchange of the Z-boson. If the process is inclusive
with respect to the hadronic state $'X'$ as well as the heavy flavours
$Q(\bar Q)$, the unpolarized cross section is given by
\begin{eqnarray}
\frac{d^2\sigma}{dx dy}&=&\frac{2\pi\alpha^2}{(Q^2)^2}S\Big[\{1+(1-y)^2\}
F_2(x,Q^2,m^2)
\nonumber \\ &&
-y^2 F_L(x,Q^2,m^2)\Big] \,,
\end{eqnarray}
where $S$ denotes the square of the c.m. energy of the electron proton
system and the variables $x$ and $y$ are defined as
\begin{equation}
x=\frac{Q^2}{2p\cdot q}\quad (0<x\le 1)\,,\qquad y=\frac{p\cdot q}{p\cdot \ell_1}
\quad (0<y<1) \,,
\end{equation}
with
\begin{equation}
-q^2=Q^2=xyS ~.
\end{equation}
In the QCD improved parton model the heavy flavour contribution
to the hadronic structure functions, denoted by $F_i(x,Q^2,m^2)$
$(i=2,L)$, where $m$ stands for the heavy quark mass,
can be expressed as integrals over the partonic scaling
variable. This yields the following results
\begin{eqnarray}
&F_i(x,Q^2,m^2)&=x\int^{z_{max}}_x \frac{dz}{z}\Big[\frac{1}{n_f}
\sum^{n_f}_{k=1}e_k^2
\nonumber \\ &&
\times\Big\{\Sigma\Big(\frac{x}{z},\mu^2\Big)
L^{\rm S}_{i,q}\Big(z,\frac{Q^2}{m^2},\frac{m^2}{\mu^2}\Big)
\nonumber \\ &&
+G\Big(\frac{x}{z},\mu^2\Big) L_{i,g}\Big(z,\frac{Q^2}{m^2},
\frac{m^2}{\mu^2}\Big)\Big\}
\nonumber \\ &&
+\Delta\Big(\frac{x}{z},\mu^2\Big)
L^{\rm NS}_{i,q}\Big(z,\frac{Q^2}{m^2},\frac{m^2}{\mu^2}\Big)\Big]
\nonumber \\ &&
+x~e_H^2\int^{z_{max}}_{x}\frac{dz}{z}
\nonumber \\ &&
\times\Big\{\Sigma\Big(\frac{x}{z},\mu^2\Big)
H_{i,q}\Big(z,\frac{Q^2}{m^2},\frac{m^2}{\mu^2}\Big)
\nonumber \\ &&
+G\Big(\frac{x}{z},\mu^2\Big)H_{i,g}\Big(z,\frac{Q^2}{m^2},
\frac{m^2}{\mu^2}\Big)\Big\} \,,
\nonumber \\ &&
\end{eqnarray}
where $z= Q^2/(s+Q^2)$ and $s$ is the square of the
photon-parton centre-of-mass energy $(s \ge 4m^2)$.
Here the upper boundary of the integration is given by
$z_{max}={Q^2}/(4 m^2 + Q^2)$.
The function $G(z,\mu^2)$ stands for the gluon
density whereas the flavour singlet and flavour non-singlet
combinations of the quark densities are given by $\Sigma(z,\mu^2)$ and
$\Delta(z,\mu^2)$ respectively.
In the above expressions the charges of the light quark and the heavy quark
are denoted by $e_i$ and $e_H$ respectively. Furthermore, $n_f$ stands
for the number of light quarks and $\mu$ denotes the mass factorization
scale, which we choose to be equal to the renormalization scale.
The latter shows up in the running coupling constant defined by
$\alpha_s(\mu^2)$. The heavy quark coefficient
functions are denoted by $L_{i,j}$ and $H_{i,j}$ $(i=2,L$; $j=q,g)$.
The distinction between them can be traced back to the different photon-parton
production processes from which they originate. The functions
$L_{i,j}$ are attributed to the reactions where the virtual photon
couples to the light quark, whereas the $H_{i,j}$ originate from the reactions
where the virtual photon couples to the heavy quark.
Hence $L_{i,j}$ and $H_{i,j}$ in (5) are multiplied by $e_i^2$ and $e_H^2$
respectively. The superscripts NS and S on the heavy quark coefficient
functions refer to flavour non-singlet and flavour singlet respectively.
Furthermore the singlet quark coefficient functions $L^{\rm S}_{i,q}$ and
$H^{\rm S}_{i,q}$ can be split into non-singlet and purely singlet (PS)
parts, i.e.,
\begin{eqnarray}
L^{\rm S}_{i,q} = L^{\rm NS}_{i,q} + L^{\rm PS}_{i,q} \,,
\end{eqnarray}
\begin{eqnarray}
H^{\rm S}_{i,q} = H^{\rm NS}_{i,q} + H^{\rm PS}_{i,q} \,,
\end{eqnarray}
with $H^{\rm NS}_{i,q}=0$ in all orders of perturbation theory.
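As an illustration of how (5) is evaluated in practice, the following schematic
Python sketch computes the gluonic $H$-term, which dominates for charm; the
parton density \texttt{G} and the coefficient function \texttt{H2g} are
placeholder callables (to be supplied, e.g., from a PDF set and from the
tabulated results of the programs discussed below), and the simple trapezoidal
quadrature is ours:
\begin{verbatim}
import numpy as np

def F2_heavy_gluon_term(x, Q2, m2, mu2, eH2, G, H2g):
    # gluonic H-term of eq. (5):
    #   x e_H^2 int_x^{zmax} dz/z G(x/z, mu^2) H_{2,g}(z, Q^2/m^2, m^2/mu^2)
    zmax = Q2 / (4.0 * m2 + Q2)            # threshold s >= 4 m^2
    if x >= zmax:
        return 0.0
    z = np.linspace(x, zmax, 2001)[1:-1]   # stay off the endpoints
    integrand = G(x / z, mu2) * H2g(z, Q2 / m2, m2 / mu2) / z
    return x * eH2 * np.trapz(integrand, z)
\end{verbatim}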
In \cite{lrsn1} the heavy quark coefficient functions $L_{i,j}$ and $H_{i,j}$
have been exactly calculated up to $\alpha_s^2$.
Expanding them in a power series in $(\alpha_s/4\pi)^k$ they receive
contributions from the following parton subprocesses
\begin{equation}
\gamma^*(q) + g(k_1) \rightarrow Q(p_1) + \overline{Q}(p_2) \,,
\end{equation}
\begin{equation}
\gamma^*(q) + g(k_1) \rightarrow g(k_2) + Q(p_1) + \overline{Q}(p_2) \,,
\end{equation}
and
\begin{equation}
\gamma^*(q) +q(\overline{q})(k_1)\rightarrow q(\overline{q})(k_2)
+ Q(p_1) + \overline{Q}(p_2) ~ .
\end{equation}
For reaction (9) one has to include the virtual gluon corrections
to the Born process (8). The contributions from (8) and (9)
to the heavy quark coefficient functions are denoted by $H^{(1)}_{i,g}$
and $H^{(2)}_{i,g}$ respectively. The parton subprocess (10) has two
different production mechanisms. The first one is given by the
Bethe-Heitler process (see figs. 5a,5b in \cite{lrsn1}) leading to
$H^{{\rm PS},(2)}_{i,q}$ and the second one can be attributed to the
Compton reaction (see figs. 5c,5d in \cite{lrsn1}).
Notice that $L^{{\rm PS}}_{i,q}$ and $ L^{{\rm S}}_{i,g}$ are zero
through order $\alpha_s^2$. Then, from (6), one infers that
$L^{{\rm NS},(2)}_{i,q} = L^{{\rm S},(2)}_{i,q}$.
Finally we want to make the remark that there are no interference
terms between the Bethe-Heitler and the Compton reactions in (10)
if one integrates over all final state momenta.
The complexity of the second order heavy quark coefficient functions prohibits
publishing them in an analytic form, except for $L_{i,q}^{{\rm NS},(2)}$,
which is given in Appendix A of \cite{bmsmn}, so that they are only
available in large computer programs \cite{lrsn1}, involving two-dimensional
integrations.
To shorten the long running time needed for the computation of
the structure functions in (5)
we have tabulated the coefficient functions in the form of a two-dimensional
array in the variables $\eta$ and $\xi$ in
a different computer program \cite{rsn1}. These variables are
defined by
\begin{equation}
\eta=\frac{(1-z)}{4z}\xi -1\qquad,\qquad \xi=\frac{Q^2}{m^2} ~.
\end{equation}
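A minimal sketch of how such a table can be used, assuming a rectangular grid
in $(\eta,\xi)$ and plain bilinear interpolation (the actual program of
\cite{rsn1} may employ a different grid and interpolation scheme):
\begin{verbatim}
import numpy as np

def interp_coeff(table, etas, xis, eta, xi):
    # bilinear interpolation of a tabulated coefficient function;
    # table[i, j] holds its value at (etas[i], xis[j]), and the point
    # (eta, xi) is assumed to lie strictly inside the grid
    i = np.searchsorted(etas, eta) - 1
    j = np.searchsorted(xis, xi) - 1
    te = (eta - etas[i]) / (etas[i + 1] - etas[i])
    tx = (xi - xis[j]) / (xis[j + 1] - xis[j])
    return ((1 - te) * (1 - tx) * table[i, j]
            + te * (1 - tx) * table[i + 1, j]
            + (1 - te) * tx * table[i, j + 1]
            + te * tx * table[i + 1, j + 1])
\end{verbatim}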
However when $\xi \gg 1$ ($Q^2 \gg m^2$) numerical instabilities
appear so that it is desirable to have analytic expressions
for the heavy quark coefficient functions in that region.
Moreover it turns out that for $\xi > 10$ the asymptotic expressions
for $H^{(2)}_{2,g}$ (9) and $H^{{\rm PS},(2)}_{2,q}$ (10)
approach the exact ones, so that the former can be used for charm production
at the HERA collider provided $Q^2 > 22.5~({\rm GeV}/c)^2$
($m_c = 1.5~{\rm GeV}/c^2$).
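Explicitly, since $\xi=Q^2/m^2$, the threshold $\xi>10$ translates into
\[
Q^2 \;>\; 10\,m_c^2 \;=\; 10\times(1.5~{\rm GeV})^2\;=\;22.5~{\rm GeV}^2
\]
in natural units, which is the value quoted above.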
Furthermore one can use these asymptotic formulae in the context of
the variable flavour number scheme as has been explained in
\cite{aot}.
\section{METHOD}
We will now explain how the asymptotic form $(Q^2 \gg m^2)$
of the heavy quark coefficient functions $H_{i,j}$ , $L_{i,j}$
(5) can be derived using the renormalization group and the operator product
expansion (OPE) techniques. Using these techniques one can avoid
the computation of the cumbersome Feynman integrals and phase
space integrals which arise in the calculation of the
processes in (8) - (10).
In the limit $Q^2 \gg m^2$ the heavy quark coefficient functions
behave logarithmically as
\begin{eqnarray}
H^{(k)}_{i,j} \Big( z, \frac{Q^2}{m^2}, \frac{m^2}{\mu^2}\Big) &=&
\sum_{l = 0}^k a^{(k),(l)}_{i,j}\Big(z,\frac{m^2}{\mu^2}\Big)
\nonumber \\ &&
\times\ln^l\frac{Q^2}{m^2} ~,
\end{eqnarray}
with a similar expression for L$_{i,j}^{(k)} $.
The above large logarithms originate from collinear divergences which
arise when $Q^2$ is kept fixed and $m^2 \rightarrow 0$. As has been shown
in \cite{bmsmn} each fixed order term in expression (12) can be written as
\begin{equation}
H_{i,j}\Big( \frac{Q^2}{m^2}, \frac{m^2}{\mu^2}\Big)
= A_{kj}\Big( \frac{m^2}{\mu^2}\Big) \otimes C_{i,k}
\Big(\frac{Q^2}{\mu^2}\Big)\,,
\end{equation}
where the power of $\alpha_s$ has to match on the left and right hand
sides. There is a similar expression for $L_{i,j}$ ($i=2,L$; $k,j = q,g$).
Notice that we have suppressed the dependence on the scaling
variable $z$ in (12) and the convolution symbol $\otimes$ is defined by
\begin{eqnarray}
\Big(f\otimes g\Big)(z)&=&\int_0^1 dz_1\int_0^1 dz_2 ~
\delta(z-z_1z_2)
\nonumber \\ &&
\times f(z_1)g(z_2) ~.
\end{eqnarray}
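Performing the trivial $z_2$ integration against the delta function, (14)
reduces to the single integral
\[
\Big(f\otimes g\Big)(z)=\int_z^1 \frac{dz_1}{z_1}\, f(z_1)\, g\Big(\frac{z}{z_1}\Big)\,,
\]
which is the form in which the convolutions in (13) are evaluated in practice.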
The light quark and gluon coefficient functions $C_{i,k}$ have been
calculated up to order $\alpha_s^2$ in \cite{zn}. The operator
matrix elements (OME's) $A_{kj}$ are now also known up to the same order in
perturbation theory (see \cite{bmsmn}). They are defined by
\begin{equation}
A_{kj}\Big(\frac{m^2}{\mu^2}\Big) = \langle j|O_k|j\rangle \,,
\end{equation}
where $O_k$ represent the local operators which show up in the operator
product expansion of the two electromagnetic currents which appear
in the calculation of the process (1).
Notice that the OME's in (15) are finite which means that all
renormalizations and mass factorizations have already been carried out.
The last operation is needed because of the collinear divergences
which appear in the OME's when the external on-mass-shell massless
quarks and gluons are coupled to internal massless quanta.
The ultraviolet and collinear divergences are regulated by using the
method of $n$-dimensional regularization. The removal
of these divergences has been done in the
$\overline{\rm MS}$-scheme. For the computation of the heavy quark
coefficient functions $H^{(1)}_{i,g}$ (8) and $H^{(2)}_{i,g}$ (9)
we need the OME's $A^{(1)}_{Qg}$ and $A^{(2)}_{Qg}$ respectively,
which are given by the Feynman graphs in figs.1,2 of \cite{bmsmn}.
The Bethe-Heitler coefficient function
$H^{{\rm PS},(2)}_{i,q}$ (10)
requires the calculation of $A^{{\rm PS},(2)}_{Qq}$,
whereas for the Compton coefficient function $L^{{\rm NS},(2)}_{i,q}$ (10)
one has to compute $A^{{\rm NS},(2)}_{qq}$.
The results for these OME's can be found in appendix C of \cite{bmsmn}.
Substitution of $A_{kj}$ and $C_{i,k}$ in (13) leads
to the asymptotic heavy quark coefficient functions which are presented in
Appendix D of \cite{bmsmn}.
\section{RESULTS}
We are now interested in finding out at which values of $\xi$ (11) or
$Q^2$ the asymptotic coefficient functions approach the exact ones
computed in \cite{lrsn1} and \cite{rsn1}.
For that purpose we define the ratio $R^{(\ell)}_{i,j}$
which is given by
\begin{equation}
R^{(\ell)}_{i,j}\Big(z, \xi,\frac{m^2}{\mu^2}\Big)
= {
{H^{{\rm exact},(\ell)}_{i,j} (z, \xi,m^2/\mu^2)}
\over
{H^{{\rm asymp},(\ell)}_{i,j} (z, \xi,m^2/\mu^2)} }\,,
\end{equation}
where $H^{\rm exact}_{i,j}$ and $H^{\rm asymp}_{i,j}$ stand for the exact
\cite{lrsn1}, \cite{rsn1} and asymptotic \cite{bmsmn} heavy
quark coefficient functions respectively.
Choosing $\mu^2 = m^2$ and the range $5 < \xi < 10^5$, we have plotted as
an example $R^{(2)}_{L,g}$ in fig.1 and $R^{(2)}_{2,g}$ in fig.2
for $z = 10^{-2}$ and $z=10^{-4}$. The reason that we have chosen these two
ratios is that the coefficient functions $H^{(2)}_{L,g}$ and
$H^{(2)}_{2,g}$ (9) constitute the
\begin{figure}[tp]
\vspace{7cm}
\begin{picture}(7,7)
\special{psfile=fig5.ps voffset=-80 hoffset = -10 hscale=40
vscale =40 angle=0}
\end{picture}
\caption{$R^{(2)}_{L,g}$ plotted as a function of $\xi$ for fixed
$z = 10^{-2}$ (solid line) and for $z = 10^{-4}$ (dashed line).}
\end{figure}
bulk of the radiative
corrections to the Born reaction (8). From fig.1 we infer that
$H^{{\rm exact},(2)}_{L,g}$ and
$H^{{\rm asymp},(2)}_{L,g}$ coincide when
$\xi \ge \xi_{\rm min} = 10^3$ and there is essentially no difference between
the ratios for $z=10^{-2}$ and $z = 10^{-4}$. In the case of
$H^{{\rm exact},(2)}_{2,g}$ and
$H^{{\rm asymp},(2)}_{2,g}$ (see fig.2) the above $\xi$-value is much
smaller and both coefficient functions coincide
when $\xi \ge \xi_{\rm min}=10$,
which is quite insensitive to the values chosen for $z$.
The reason why the convergence of
$ R^{(2)}_{L,g}$ to one is so slow in comparison to
$ R^{(2)}_{2,g}$ is unclear. Apparently the logarithmic terms in
$H^{{\rm exact},(2)}_{L,g}$ start to dominate the coefficient
functions at much larger values of $\xi$
than is the case for $H^{{\rm exact},(2)}_{2,g}$.
A similar observation has been made for
$H^{{\rm PS},(2)}_{i,q}$. The small value found for $\xi_{\rm min}$ in
the case of $H_{2,g}$ is very interesting for charm production
where $F_2(x,Q^2,m_c^2)$ can be
measured with much higher accuracy than $F_L(x,Q^2,m^2)$.
Since $H^{(2)}_{2,g}$ dominates the radiative corrections to
$F_2(x,Q^2,m_c^2)$ one can state that for $Q^2 > 22.5~({\rm GeV}/c)^2$
($m_c = 1.5~{\rm GeV}/c^2$) the exact coefficient functions can be replaced
by their asymptotic ones. However, before one can draw definite conclusions
about the dominance of the terms $\ln^{l}(Q^2/m^2)$ at the level of the
structure functions, one first has to convolute the heavy quark coefficient
functions with the parton densities (see (5)). This will be done in future
work. If it turns out that the above logarithms also
dominate $F_k(x,Q^2,m^2)$, in particular for $k=2$, then these terms
have to be resummed using the renormalization group
equations. This is done using the variable flavour
number scheme approach \cite{aot}. One of the features of this method is that
one has to define a charm density in the proton which is a convolution of the
OME's $A_{k,j}$ (15) and the light parton densities $\Sigma$ and $G$ in (5).
Hence for $Q^2\gg m_c^2$ the charm quark behaves like a light parton provided
the large logarithmic terms dominate the proton structure functions in (5).
\begin{figure}[tp]
\vspace{7cm}
\begin{picture}(7,7)
\special{psfile=fig6.ps voffset=-80 hoffset = -10 hscale=40
vscale =40 angle=0}
\end{picture}
\caption{$R^{(2)}_{2,g}$ plotted as a function of $\xi$ for fixed
$z = 10^{-2}$ (solid line) and for $z = 10^{-4}$ (dashed line).}
\end{figure}
\section*{Introduction}
Ever since 1994 when Seiberg and Witten
derived exact low-energy Wilsonian effective action of (pure) $SU(2)$ ${\cal N}=2$ SYM \cite{Seiberg:1994rs},
the interest in this kind of theories has been remaining extremely high. The reason is their remarkably rich physical and
mathematical content. In fact, these theories provide a framework to address in a precise manner such
problems as strong coupling, non-perturbative effects and confinement in non-Abelian
gauge theory (so relevant, for instance, in the Standard Model). The impact of Seiberg-Witten theory
in pure mathematics is also very substantial. Likely, the most famous applications are in algebraic geometry and topology of four-dimensional
differentiable manifolds where e.g. the notions of Seiberg-Witten and Gromov-Witten invariants are
of primary importance.
The effective action of ${\cal N}=2$ SYM is given in terms of prepotential: a holomorphic
function of the vacuum expectation values (VEV) of the vector multiplet scalar field.
Large VEV expansion of the prepotential reveals its structure as sum of classical, one-loop
and instanton contributions. Many researchers tried to derive the instanton contributions
directly from the microscopic theory, but they succeeded only in the case of the first few instantons \cite{Dorey:2002ik}. Actual progress has been achieved with the idea of using equivariant
localization techniques in the moduli space of instantons \cite{Flume:2001nc},
\cite{Flume:2001kb},
especially in combination
with the introduction of the so-called $\Omega$ background (see \cite{Nekrasov:2002qd} and
further developments \cite{Flume:2002az}, \cite{Nekrasov:2003rj}, \cite{Bruzzo:2002xf}). Considering the theory in the $\Omega$-background effectively embeds the system
in a finite volume $\sim \frac{1}{\epsilon_1\epsilon_2}$, where the parameters
$\epsilon_1$, $\epsilon_2$ are sort of
angular velocities on orthogonal planes of (Euclidean) 4d space-time. This makes
the partition function a finite, well-defined quantity (commonly referred to as the Nekrasov
partition function). Then the corresponding free energy coincides with the (generalized)
prepotential. The usual SW prepotential is recovered simply by sending the parameters
$\epsilon_{1,2}\rightarrow 0$. Another, crucial consequence of introducing this background
is the fact that instanton moduli integrals are localized at finitely many points.
This property eventually leads to an elegant combinatorial formula for instanton
contributions \cite{Flume:2002az}.
Later developments are even more surprising. It appears that the introduction of the $\Omega$
background is not merely a regularization trick. Thus, keeping $\epsilon_{1,2}$ finite,
a deep relation between conformal blocks of 2d CFT and the Nekrasov partition function
\cite{Alday:2009aq}
emerges, so that the Virasoro central charge is related to these parameters, the masses of
hypermultiplets specify inserted primary fields, while VEVs identify
the states of the intermediate channel.
The special case $\epsilon_{1}=-\epsilon_{2}$ bridges the theory with topological string,
$\epsilon$-expansion of Nekrasov partition function coinciding with topological (string) genus expansion.
Another special case of great interest is the Nekrasov-Shatashvili (NS) limit \cite{Nekrasov:2009rc}
when one of
parameters, say $\epsilon_1$ is kept finite while $\epsilon_2=0$. From AGT point of
view this case corresponds to semiclassical CFT when the central charge
$c\rightarrow \infty$. Besides this, another interesting link to quantum integrable
system emerges, now the remaining nonzero parameter $\epsilon_1$ being related to
the Planck constant. In the NS limit many quantities familiar from the original Seiberg-Witten
theory become deformed or quantised in a rather simple manner.
In particular the algebraic equation defining
the Seiberg-Witten curve becomes a finite difference equation \cite{Poghossian:2010pn},
which in terms of
related integrable system is nothing but Baxter's $TQ$ equation (for a later development see also
\cite{Bourgine:2017aqk}).
Through discrete Fourier
transform one gets a linear differential equation \cite{Fucito:2011pn}, which from 2d CFT
perspective is
the null vector decoupling equation \cite{Belavin:1984vu} in the semiclassical limit.
This relation was an object of intensive investigations in the last decade (see e.g.
\cite{Alday:2009fs,Mironov:2009uv,Maruyoshi:2010iu,Marshakov:2010fx,Nekrasov:2013xda,
Piatek:2011tp,Ashok:2015gfa,Poghossian:2016rzb,Poghosyan:2016mkh,
Nekrasov:2017gzb,Jeong:2017mfh}).
More recently, moving from Gaiotto's idea of looking at these
equations as quantum versions of the (suitable power of the) SW differential \cite{Gaiotto:2009we}\footnote{In other words, the SW differential gives way to the oper upon quantisation.}, it has been proposed to investigate their monodromies (quantum periods over cycles) through the connection (Stokes) multipliers appearing in the ODE/IM correspondence \cite{Dorey:1998pt,Bazhanov:1998wj}: \cite{Fioravanti:2019vxi} describes the general idea by exemplifying it in the simple case of pure $SU(2)$ gauge theory\footnote{Very few details are given for the cases with matter in the fundamental.} and in particular the link between the $a$-period and the Baxter transfer matrix $T$ function. In this perspective, Thermodynamic Bethe Ansatz (TBA)-like considerations about pure $SU(2)$ gauge theory were initiated in \cite{Gaiotto:2014bza} at zero modulus (of the Coulomb branch) for the dual period $a_D$, and then more recently pursued by \cite{Grassi:2019coc}.
In fact, in this paper we show how to compute the gauge A-periods of the pure $SU(3)$ theory (without any matter hypermultiplet: {\it cf.} \cite{Klemm:1994qs} and \cite{Argyres:1994xh} for what concerns the generalisation of SW theory to higher rank gauge groups) as Floquet monodromy coefficients of the aforementioned differential equation (in the complex domain). Then, we propose a connection between them and the integrable Baxter's $T$ function which extends non-trivially what happens in the $SU(2)$ case and shows that the latter is not an accident. In more detail, we obtain a third order linear differential equation with some similarities (and differences\footnote{The main relevant difference is, as in the simpler $SU(2)$ case \cite{Zamolodchikov:2000unpb} and \cite{Fioravanti:2019vxi}, the exit of the oper parameter ($M>1/2$ in \cite{Dorey:1998pt}) from the range of validity, with the appearance of an extra irregular singularity at zero (besides that at $\infty$).}) with the third order oper of the ODE/IM correspondence in
\cite{Dorey:1999pv}, \cite{Bazhanov:2001xm}. As the latter correspond to some 'minimal' case $M>1/2$, we may conjecture, along the lines of \cite{Dorey:1999uk} and \cite{Zamolodchikov:2000unpb}, that we are describing $A_2$-Toda CFT with central charge $c=98$.
In a very interesting unfinished paper \cite{Zamolodchikov:2000unpb} Alexei Zamolodchikov has proposed ODE/IM for the Liouville CFT TBA. Special attention has been paid to the self-dual case $c=25$, when the related ODE becomes the {\it modified} Mathieu equation and an elegant
relationship between Floquet exponent and Baxter's $T$ function has been suggested. As written, the implication of this conjecture for the period on the $A$-cycle of (effective) $SU(2)$ gauge theory has been highlighted and used by \cite{Fioravanti:2019vxi}. But it was not clear from there if and how it is possible to generalise this beautiful connection between transfer matrix and periods for higher rank groups.
In the body of this paper we derive the $QQ$ and $TQ$ functional relations (see eqs. (\ref{Qc_QQ}), (\ref{T_Q})) and extend Zamolodchikov's
conjecture for the case of gauge group $SU(3)$ (see eq. (\ref{conj_su3})). We show that numerical integration of the differential equation leads to
evaluation of the 'quantum' $A$-cycle periods $a_{1,2,3}$ at any point of the moduli
space of vacua parametrised by the vector multiplet scalar VEV's
$u_2=\langle \textbf{tr}\,\phi^2\rangle$ and $u_3=\langle \textbf{tr}\,\phi^3\rangle$
even for large values of $q$ at which the instanton series diverges.
We have checked that the numerical results at small $q$ are in excellent agreement with instanton
calculation. Thus the main message of this paper is that the differential equation provides
an excellent tool for investigation of deformed SW theory in its entire range
from weak to strong coupling.
The paper is organized as follows:
Section \ref{NPF} is a short review on instanton calculus for $SU(N)$ SW theory without
hypers in $\Omega$-background.
Here one can find explicit expressions as a sum over (multiple) Young diagrams for
Nekrasov partition function and VEV's $\langle \textbf{tr}\,\phi^J\rangle$.
Section \ref{BDE} is a brief introduction to deformed SW theory.
We present the main results of \cite{Poghossian:2010pn} in a form convenient
for our present purposes.
Starting from section \ref{DE} we consider the case of $SU(3)$ theory. The main
tool of our investigation, a third order linear ODE is derived and its asymptotic
solutions are found.
In section \ref{FR} we identify a unique solution $\chi(x)$ which rapidly vanishes for
large negative values of the argument $x\rightarrow -\infty $. The three quantities
$Q_{1,2,3}$ are defined as the coefficients of the expansion of $\chi(x)$ in terms of
the three independent solutions $U_{1,2,3}(x)$ defined in the asymptotic region $x\gg 0$.
Investigating symmetries of the differential equation we find a system of difference
equations for $Q_k$ and their analogs $\bar{Q}_k$ obtained by flipping the sign of
parameter $u_3\rightarrow -u_3$. Based on this $QQ$ system we introduce Baxter's
$T$ function and write down corresponding $TQ$ relations.
In section \ref{a_cycles} we show how numerical integration of the
differential equation along imaginary direction with standard boundary
conditions allows one to find the monodromy matrix and corresponding
Floquet exponents, which in the context of gauge theory, coincide with the $A$-cycle
periods $a_{1,2,3}$. We have convincingly demonstrated the correctness of these identifications
through comparison with the instanton computation. But the main value of this method is
that it makes accessible also the region of large coupling constants, which is beyond
the reach of instanton calculus. Eventually, we close this section by suggesting a simple relation between Baxter's $T$-function and
$A$-cycle periods $a_{1,2,3}$ of $SU(3)$ theory, which can be thought of as a natural extension of Alexei Zamolodchikov's
conjecture relating Floquet exponent of Mathieu equation to Baxter's $T$
function in $c=25$ Liouville CFT.
Finally appendix \ref{TQ_proof} contains few technical details
for derivation of the $TQ$ relation.
\section{Nekrasov partition function and the VEVs $\langle \textbf{tr}\,\phi^J\rangle $}
\label{NPF}
Consider pure $SU(N)$ theory without hypers in $\Omega$-background.
The instanton part of partition function is given by \cite{Nekrasov:2002qd}
\begin{eqnarray}
Z_{inst}(\mathbf{a},\epsilon_1,\epsilon_2,q)=\sum_{\vec{Y}}Z_{\vec{Y}} \left((-)^Nq\right)^{|\vec{Y}|}
\,,
\label{z_inst}
\end{eqnarray}
where the sum runs over all $N$-tuples of Young diagrams $\vec{Y}=(Y_1,\cdots,Y_N)$,
$|\vec{Y}|$ is the total number of boxes,
$\mathbf{a}=(a_1,a_2,\cdots,a_N)$ are VEV's of adjoint scalar from
${\cal N}=2$ vector multiplet, $\epsilon_1$, $\epsilon_2$, as already mentioned,
parametrize the $\Omega$-background and the instanton counting parameter $q=\exp 2\pi i
\tau $, with $\tau=\frac{i}{g^2}+\frac{\theta}{2\pi}$
being the (complexified) coupling constant. The coefficients $Z_{\vec{Y}}$ are factorized
as
\begin{eqnarray}
\label{ZY_factorization}
Z_{\vec{Y}}=\prod_{u,v=1}^N\frac{1}{P(Y_u,a_u|Y_v,a_v)}\,\,,
\end{eqnarray}
where the factors $P(\lambda,a|\mu,b)$ for
arbitrary pair of Young diagrams $\lambda,\mu$ and associated VEV parameters $a$, $b$,
are given explicitly by the formula \cite{Flume:2002az}
\begin{eqnarray}
\label{P_factor}
&&P(\lambda ,a|\mu ,b)=\\
&&\quad \prod_{s\in \lambda}(a-b+\epsilon_1(1+L_\mu(s))-\epsilon_2 A_\lambda(s))
\prod_{s\in \mu}(a-b-\epsilon_1L_\lambda(s)+\epsilon_2(1+ A_\mu(s)))\nonumber
\end{eqnarray}
If one specifies the location of a box $s$ by its horizontal and vertical
coordinates $(i,j)$, so that $(1,1)$ corresponds to the corner box, its
leg length $L_\lambda(s)$ and arm length $A_\lambda(s)$ with respect to the
diagram $\lambda$ ($s$ does not necessarily belong to $\lambda$) are defined as
\begin{eqnarray}
A_\lambda(s)=\lambda_i-j; \qquad\qquad L_\lambda(s)=\lambda_j'-i\,,
\end{eqnarray}
where $\lambda_i$ ($\lambda_j'$) is the $i$-th column ($j$-th row) of the diagram $\lambda$,
with the convention that when $i$ exceeds the number of columns
($j$ exceeds the number of rows) of $\lambda$, one simply sets $\lambda_i=0$
($\lambda_j'=0$).
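To make the combinatorics in (\ref{ZY_factorization}), (\ref{P_factor}) concrete,
here is a minimal Python sketch; a diagram is encoded as its list of column
lengths $(\lambda_1,\lambda_2,\dots)$, and the encoding and function names are ours:
\begin{verbatim}
def arm(lam, i, j):
    # A_lambda(s) = lambda_i - j for the box s = (i, j), 1-based
    col = lam[i - 1] if i <= len(lam) else 0
    return col - j

def leg(lam, i, j):
    # L_lambda(s) = lambda'_j - i, lambda'_j being the j-th row length
    row = sum(1 for c in lam if c >= j)
    return row - i

def boxes(lam):
    return [(i, j) for i in range(1, len(lam) + 1)
                   for j in range(1, lam[i - 1] + 1)]

def P(lam, a, mu, b, e1, e2):
    # the factor P(lambda, a | mu, b) defined above
    res = 1.0
    for (i, j) in boxes(lam):
        res *= a - b + e1 * (1 + leg(mu, i, j)) - e2 * arm(lam, i, j)
    for (i, j) in boxes(mu):
        res *= a - b - e1 * leg(lam, i, j) + e2 * (1 + arm(mu, i, j))
    return res
\end{verbatim}
With these ingredients, $Z_{\vec Y}$ of (\ref{ZY_factorization}) is just the
product of the inverse factors $1/P(Y_u,a_u|Y_v,a_v)$ over all pairs $(u,v)$.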
The instanton part of (deformed) prepotential is given by \cite{Nekrasov:2002qd}
\begin{eqnarray}
F_{inst}({\bf a},q)=-\epsilon_1\epsilon_2\log Z_{inst}\,.
\end{eqnarray}
Instanton calculus allows one to obtain also the VEV's $\langle \textbf{tr}\,\phi^J\rangle$,
$\phi$ being the adjoint scalar of vector multiplet:
\begin{eqnarray}
\langle \textbf{tr}\,\phi^J\rangle=\sum_{u=1}^Na_u^J+
Z_{inst}^{-1}\sum_{\vec{Y}}Z_{\vec{Y}}{\mathcal O}_{\vec{Y}}^J q^{|\vec{Y}|}
\,,
\label{tr_phiJ}
\end{eqnarray}
where $Z_{\vec{Y}}$ is already defined by (\ref{ZY_factorization}), (\ref{P_factor}),
and \cite{Losev:2003py,Flume:2004rp}
\begin{eqnarray}
{\mathcal O}_{\vec{Y}}^J=\sum_{u=1}^N\,\sum_{(i,j)\in Y_u}
\left(\left(a_u+\epsilon_1 i+\epsilon_2(j-1)\right)^J+
\left(a_u+\epsilon_1 (i-1)+\epsilon_2j\right)^J\right.\nonumber\\
-\left.\left(a_u+\epsilon_1 (i-1)+\epsilon_2(j-1)\right)^J
-\left(a_u+\epsilon_1 i+\epsilon_2 j\right)^J\right).
\end{eqnarray}
\section{A Baxter difference equation}
\label{BDE}
\subsection{Bethe ansatz equation for NS limit}
It was shown in \cite{Poghossian:2010pn} that in the NS limit $\epsilon_2\rightarrow 0$, the sum
(\ref{z_inst}) is dominated by a single term corresponding to a unique
array of Young diagrams $\vec{Y}^{(cr)}$ specified by properties
(the $i$-th column length of a diagram $Y_u$ will be denoted as $Y_{u,i}$):
\begin{itemize}
\item{Though the total number of boxes $\rightarrow \infty $ in the
$\epsilon_2\rightarrow 0$ limit, the rescaled column lengths
$\epsilon_2 Y^{(cr)}_{u,i}$ converge to finite values
\[
\xi_{u,i}=\lim_{\epsilon_2\rightarrow 0}\epsilon_2 Y^{(cr)}_{u,i}\,.
\]
}
\item{
The rescaled column lengths at small $q$ behave as $\xi_{u,i}\sim O(q^i)$.
This means in particular, that in order to achieve accuracy up to $q^L$,
it is consistent to consider restricted Young diagrams with number of columns $\le L$.
}
\item{
Up to arbitrary order $q^L$ the quantities
\[
x_{u,i}=a_u+\epsilon_1(i-1)+\xi_{u,i}
\]
satisfy the Bethe-ansatz equations (for each $u=1,2,\cdots N$)
\begin{eqnarray}
-q \prod_{v,j}^{N,L} \frac{(x_{u,i}-x_{v,j}-\epsilon_1)(x_{u,i}-x_{v,j}^0+
\epsilon_1)}{(x_{u,i}-x_{v,j}+\epsilon_1)(x_{u,i}-x_{v,j}^0-\epsilon_1)}
=\prod_{v=1}^N(x_{u,i}-a_v+\epsilon_1)(a_v-x_{u,i})\,,\quad
\label{BA}
\end{eqnarray}
where, by definition
\[
x_{u,i}^0=a_u+\epsilon_1(i-1)\,.
\]
}
\end{itemize}
The system of equations (\ref{BA}) together with the property $\xi_{u,i}\sim O(q^i)$
uniquely fixes the quantities $x_{u,i}$ up to order $q^L$. Of course, calculations become
more cumbersome if one increases $L$. Examples of explicit computations for first few
values of $L$ can be found in \cite{Poghossian:2010pn}.
\subsection{Baxter's difference equation and deformed Seiberg-Witten 'curve'}
The BA equations can be transformed into a difference equation \cite{Poghossian:2010pn}
\begin{eqnarray}
Y(z+\epsilon_1)+\frac{q}{\epsilon_1^{2N}}Y(z-\epsilon_1)=
\epsilon_1^{-N} P_N(z+\epsilon_1)\, Y(z)\,,
\label{difference_eq}
\end{eqnarray}
where $Y(z)$ is an entire function with zeros located at $z=x_{u,i}$:
\begin{eqnarray}
Y(z)=\prod_{u=1}^N e^{\frac{z}{\epsilon_1} \psi(\frac{a_u}{\epsilon_1})}\prod_{i=1}^\infty
\left(1-\frac{z}{x_{u,i}}\right)e^{z/x_{u,i}^0}\,,
\label{Y}
\end{eqnarray}
and
\[
\psi(x)=\frac{d}{dx}\log \Gamma(x)
\]
is the logarithmic derivative of Gauss' gamma-function. Finally $P_N(z)$ is
an $N$-th order polynomial which parametrises the Coulomb branch of the theory.
Explicit expressions of coefficients of this polynomial in terms of VEVs
\begin{eqnarray}
u_J\equiv \langle {\textbf tr} \phi^J \rangle
\label{u_J_def}
\end{eqnarray}
will be presented later for the case of our current interest $N=3$. For more
general cases one can refer to \cite{Poghossian:2010pn}. Now, let us briefly recall how the
difference equation (\ref{difference_eq}) is related to the Seiberg-Witten curve.
Introducing the function
\[
y(z)=\epsilon_1^{N}\,\,\,
\frac{Y(z)}{Y(z-\epsilon_1)}
\]
one can rewrite (\ref{difference_eq}) as
\begin{eqnarray}
y(z)+\frac{q}{y(z-\epsilon_1)}=P_N(z)\,.
\label{DSW}
\end{eqnarray}
At large $z$ the function $y(z)$ behaves as
\[
y(z)=z^N(1+O(1/z))\,.
\]
Notice that setting $\epsilon_1=0$ in (\ref{DSW}) one obtains
an equation of hyperelliptic curve, which is just the Seiberg-Witten curve.
When $\epsilon_1\neq 0$, everything goes surprisingly similarly to the original Seiberg-Witten theory. For example the r\^ole of the Seiberg-Witten
differential is played anew by the quantity
\[
\lambda_{SW}=z \frac{d}{dz}\log y(z) \, ,
\]
and, as in the undeformed theory, the expectation values (\ref{u_J_def}) are given by the contour integrals
\begin{eqnarray}
\langle \textbf{tr}\,\phi^J\rangle =\oint_{\cal C} \frac{dz}{2\pi i} z^J\partial_z \log y(z)\,,
\label{u_J_int}
\end{eqnarray}
where $\cal C$ is a large contour, enclosing all zeros and poles of $y(z)$.
\subsection{Details on $SU(3)$ theory}
Without any essential loss of generality, from now on we will assume that
\begin{eqnarray}
u_1 \equiv \langle\textbf{tr}\,\phi \rangle=a_1+a_2+a_3=0\,.
\end{eqnarray}
Representing $y(z)$ as a power series in $1/z$
\begin{eqnarray}
y(z)=z^3(1+c_1z^{-1}+c_2z^{-2}+c_3z^{-3}+\cdots )
\label{y_expansion}
\end{eqnarray}
and inserting in eq. (\ref{u_J_int}) one easily finds the relations
\begin{eqnarray}
c_1=0; \qquad c_2=-\frac{u_2}{2}; \qquad c_3=-\frac{u_3}{3}\,.
\label{c_u}
\end{eqnarray}
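To see how (\ref{c_u}) arises, note that (\ref{y_expansion}) gives
\[
\log y(z)=3\log z+c_1 z^{-1}+\Big(c_2-\tfrac{1}{2}c_1^2\Big)z^{-2}
+\Big(c_3-c_1c_2+\tfrac{1}{3}c_1^3\Big)z^{-3}+\ldots\,,
\]
so that $u_1=\oint_{\cal C}\frac{dz}{2\pi i}\,z\,\partial_z\log y(z)=-c_1$, which
vanishes by assumption. With $c_1=0$ one then has
$z^J\partial_z\log y(z)=3z^{J-1}-2c_2\,z^{J-3}-3c_3\,z^{J-4}+\ldots$, whose residues
for $J=2,3$ give $u_2=-2c_2$ and $u_3=-3c_3$.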
Now, consistency of (\ref{y_expansion}), (\ref{c_u}) and (\ref{DSW})
immediately specifies the polynomial $P_3(z)$ (we omit the
subscript $3$, since only the case $N=3$ will be considered later on)
\begin{eqnarray}
P(z)=z^3-\frac{u_2}{2}\,z-\frac{u_3}{3}\,\,.
\end{eqnarray}
\section{The differential equation and its asymptotic solutions}
\label{DE}
\subsection{Derivation of the differential equation}
To keep expressions simple, from now on we will set $\epsilon_1=1$.
In fact, at any stage the $\epsilon_1$ dependence can be easily restored on dimensional grounds. Taking the results of the previous subsection, the difference equation
for the $N=3$ case (\ref{difference_eq}) can be rewritten as
\begin{eqnarray}
Y(z)-\left(z^3-\frac{u_2}{2}z-\frac{u_3}{3}\right) Y(z-1)+q\,Y(z-2)=0
\,.
\label{difference_eq_3}
\end{eqnarray}
By means of inverse Fourier transform, following \cite{Fucito:2011pn,
Nekrasov:2013xda,Poghossian:2016rzb}, from (\ref{difference_eq_3}) we can derive a third order linear differential equation for the function
\begin{eqnarray}
f(x)=\sum_{z\in\mathbb{Z}+a}e^{x (z+1)}Y(z)\,.
\label{f_series}
\end{eqnarray}
At least when $|q|$ is sufficiently small, it is expected that the series
is convergent for finite $x$,
provided $a$ takes one of the three possible values $a_1$, $a_2$ or $a_3$.
Taking into account the difference relation (\ref{difference_eq_3}), one
can easily check that the function (\ref{f_series}) solves the
differential equation
\begin{eqnarray}
-f^{\,'''}(x)+\frac{u_2}{2}\,f^{\,'}(x)+\left(e^{-x}+q\,e^x+\frac{u_3}{3}\right)
f(x)=0\,.
\label{diff_eq1}
\end{eqnarray}
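Indeed, under (\ref{f_series}) each derivative in $x$ brings down a factor
$(z+1)$, while $e^{\mp x}f(x)=\sum_{z}e^{x(z+1)}Y(z\pm 1)$ after shifting the
summation variable. Collecting the coefficient of $e^{x(z+1)}$ in
(\ref{diff_eq1}) then yields
\[
Y(z+1)-\Big((z+1)^3-\frac{u_2}{2}\,(z+1)-\frac{u_3}{3}\Big)Y(z)+q\,Y(z-1)=0\,,
\]
which is just (\ref{difference_eq_3}) with $z$ shifted by one unit.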
Denoting
\[
q=\Lambda^6
\]
and shifting the variable
\[
x\rightarrow x-\log \Lambda^3
\]
the differential equation (\ref{diff_eq1}) may be cast into a more symmetric
form
\begin{eqnarray}
-f^{\,'''}(x)+\frac{u_2}{2}\,f^{\,'}(x)+\left(\Lambda^3(e^x+e^{-x})+\frac{u_3}{3}\right)
f(x)=0\,.
\label{diff_eq2}
\end{eqnarray}
\subsection{Solutions at $x\rightarrow \pm \infty $}
Physics leads us to introduce parameters $p_1$, $p_2$, $p_3$
satisfying $p_1+p_2+p_3=0$ such that
\begin{eqnarray}
u_2=p_1^2+p_2^2+p_3^2=2(p_1^2+p_2^2+p_1p_2); \quad
u_3=p_1^3+p_2^3+p_3^3=-3p_1 p_2 (p_1+p_2)\,, \qquad
\label{u_2_3}
\end{eqnarray}
as in the weak coupling limit $\Lambda \rightarrow 0$ the parameters $p_i$ and $a_i$, respectively, coincide.
At large positive values $x\gg 3\ln \Lambda$ the term $\Lambda^3 e^{-x}$ in (\ref{diff_eq2}) can be neglected. In this region the differential equation
can be solved in terms of hypergeometric function $_0F_2(a,b;z)$
defined by the power series
\begin{eqnarray}
_0F_2(a,b;z)=\sum_{k=0}^{\infty}\frac{z^k}{k!(a)_k(b)_k}\,,
\end{eqnarray}
where
\begin{eqnarray}
(x)_k=x(x+1)\cdots(x+k-1)
\end{eqnarray}
is the Pochhammer symbol. Three linearly independent solutions can be chosen as
\begin{eqnarray}
U_i(x)\approx e^{(x+3\theta )p_i}\,\,
_0F_2(1+p_i-p_j,1+p_i-p_k;e^{x+3\theta })\,\,,
\label{U_solutions}
\end{eqnarray}
where by definition
\begin{eqnarray}
\Lambda \equiv \exp \theta
\end{eqnarray}
and the indices $(i,j,k)$
are cyclic permutations of $(1,2,3)$. We used the symbol $\approx$
in (\ref{U_solutions}) to mean that the approximations of the solutions hold, strictly speaking, only for $x\gg 3\theta $ (at leading order). In the end, we must verify that the Wronskian of the three solutions (\ref{U_solutions}) (below and later on, for brevity, we use the notation $p_{ij}\equiv p_i-p_j$)
\begin{eqnarray}
Wr[U_1(x),U_2(x),U_3(x)]\equiv \det \left(
\begin{array}{ccc}
U_1(x)&U_2(x)&U_3(x)\\
U_1^{'}(x)&U_2^{'}(x)&U_3^{'}(x)\\
U_1^{''}(x)&U_2^{''}(x)&U_3^{''}(x)
\end{array}
\right)
=p_{12}p_{23}p_{31}\quad
\label{U_wr}
\end{eqnarray}
is not zero provided the parameters $p_i$ are pairwise different. Thus, (\ref{U_wr}) confirms that generically the $U_i(x)$ are linearly independent
and constitute a basis in the space of all solutions.
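Since $_0F_2$ is an entire function of its argument, the defining series can be
used directly for numerical work at moderate values of $e^{x+3\theta}$; a
minimal Python sketch (the truncation order is ours):
\begin{verbatim}
def f02(a, b, z, kmax=80):
    # truncated series for 0F2(a, b; z), built from the term recursion
    #   t_{k+1} = t_k * z / ((k + 1) * (a + k) * (b + k)),  t_0 = 1
    term, total = 1.0, 1.0
    for k in range(kmax):
        term *= z / ((k + 1) * (a + k) * (b + k))
        total += term
    return total
\end{verbatim}
In this notation $U_i(x)$ of (\ref{U_solutions}) is simply
$e^{(x+3\theta)p_i}\,$\texttt{f02}$(1+p_{ij},1+p_{ik},e^{x+3\theta})$; the same
routine works for complex arguments.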
Similarly, in the region $x\ll -3\theta $, the term $\Lambda^3 e^{x}$
of (\ref{diff_eq2}) becomes negligible and one can write down three linearly independent solutions
\begin{eqnarray}
V_i(x)\approx e^{(x-3\theta )p_i}\,\,
_0F_2(1-p_i+p_j,1-p_i+p_k;-e^{-x+3\theta })\,\,.
\label{V_solutions}
\end{eqnarray}
In fact, we obtain the same result for the Wronskian \begin{eqnarray}
Wr[V_1(x),V_2(x),V_3(x)] =p_{12}p_{23}p_{31} \, .
\label{V_wr}
\end{eqnarray}
\section{The functional relations}
\label{FR}
\subsection{The $QQ$ relations}
All three solutions $V_i(x)$ grow very fast at $x\rightarrow -\infty$, but
there is a special linear combination (unique, up to a common constant factor) which vanishes in this limit. Being (as we expect) the most rapidly decaying solution, it is usually referred to as subdominant. Using formulae for the asymptotics of $_0F_2$, which
can be found e.g. in \cite{NIST:DLMF}, we are able to establish that the correct combination is
\begin{eqnarray}
\label{chi}
&\chi(x)=
\frac{ \Gamma (p_{12}) \Gamma ( p_{13}) }{4 \pi ^2}\, V_1(x)+
\frac{ \Gamma (p_{23}) \Gamma ( p_{21}) }{4 \pi ^2}\, V_2(x)+
\frac{ \Gamma (p_{31}) \Gamma ( p_{32}) }{4 \pi ^2}\, V_3(x)\,.
\end{eqnarray}
Its asymptotic expansion at $x\rightarrow -\infty$ is given by
\begin{eqnarray}
\chi (x)=
\frac{v^{-\frac{1}{3}} e^{-3 v^{1/3}} }{2 \pi \sqrt{3}}
\left(
1-\left(\frac{1}{9}-\frac{u_2}{2}\right) v^{-\frac{1}{3}}+
\left(\frac{u_2^2}{8}-\frac{5 u_2}{36}+\frac{u_3}{6}+\frac{2}{81}\right)v^{-\frac{2 }{3}} \right.\nonumber\\-
\left. \left(-\frac{u_2^3}{48}+\frac{u_2^2}{18}-\frac{u_3 u_2}{12}-\frac{13 u_2}{324}+\frac{7 u_3}{54}+\frac{14}{2187}\right)v^{-1} +O\left(v^{-\frac{4}{3}}\right)\right)\,,\quad
\end{eqnarray}
where we denoted
\[
v=\exp (3\theta-x)
\]
and $u_2$, $u_3$ are defined in terms of $p_i$ in (\ref{u_2_3}).
Since $U_i(x)$ constitute a complete set of solutions one can represent $\chi(x)$ as
a linear combination
\begin{eqnarray}
\label{chiU}
\chi (x,\theta)=\sum_{n=1}^{3}Q_n(\theta)
\Gamma(p_{nj})\Gamma(p_{nk}) e^{-3p_n\theta}U_n(x,\theta),
\end{eqnarray}
where the important quantities $Q_n(\theta)$, based on the general theory of linear
differential equations, are expected to be entire functions of $\theta$ (and also of
the parameters ${\bf p}$, the dependence on which will be displayed explicitly only when necessary).
The following easy-to-check property plays an essential role in the further discussion.
Namely the Wronskian of any two solutions $f(x)$, $g(x)$
of the differential equation (\ref{diff_eq2})
\[
W[f(x),g(x)]\equiv f(x)g'(x)-g(x)f'(x)
\]
satisfies the {\it adjoint} equation, i.e. the one obtained by reversing the signs $\bf{p}\rightarrow -\bf{p}$
and $\Lambda^3\rightarrow -\Lambda^3$. Taking inspiration from this property, it is then possible to show exactly that
\begin{eqnarray}
\label{Wr_chi_chi}
Wr\left[\chi (x,\theta+\frac{i \pi}{3}),\chi (x,\theta-\frac{i \pi}{3})\right]=
-\frac{i}{2 \pi }\bar{\chi}(x,\theta)\,,
\end{eqnarray}
where $\bar{\chi}(\theta)=\chi(\theta,-\mathbf{p})$. In fact, the property entails that the l.h.s. of (\ref{Wr_chi_chi}) satisfies the differential equation
(\ref{diff_eq2}) with substitution $\bf{p}\rightarrow -\bf{p}$. Besides, by using the identity\footnote{It can be proven, for instance, by expanding both sides in powers of $e^{-x}$.}
\begin{eqnarray}
&&Wr\left[e^{\frac{2-a-b}{3}\,x}\,_0F_2(a,b,-e^{-x})\,,e^{\frac{2a-b-1}{3}\,x}\,
_0F_2(2-a,1-a+b,-e^{-x})\right]\nonumber\\
&&=(a-1)e^{\frac{1+a-2b}{3}\,x}\,_0F_2(b,1-a+b,e^{-x})\,,
\label{Wr_F_F}
\end{eqnarray}
it is not difficult to show the match of the $x\rightarrow-\infty$ asymptotics of both sides. Of course, the combination of these two statements implies the equality (\ref{Wr_chi_chi}) everywhere.
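For completeness, the Wronskian property used above can be checked directly:
writing (\ref{diff_eq2}) as $f'''=\frac{u_2}{2}f'+V(x)f$ with
$V(x)=\Lambda^3(e^x+e^{-x})+\frac{u_3}{3}$, repeated differentiation of
$W=fg'-gf'$ gives
\[
W'''=f g''''-g f''''+2\left(f'g'''-g'f'''\right)=\frac{u_2}{2}\,W'-V(x)\,W\,,
\]
i.e. $W$ satisfies (\ref{diff_eq2}) with $u_3\rightarrow -u_3$ and
$\Lambda^3\rightarrow-\Lambda^3$ (recall that $u_2$ is even in $\mathbf{p}$).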
Let us investigate the $x\rightarrow \infty$ limit of (\ref{Wr_chi_chi}). Taking into account
(\ref{chiU}) and using the identity (\ref{Wr_F_F}) (with $x$ substituted by $-x$),
we obtain the functional relations
\begin{eqnarray}
\label{Qc_QQ}
\frac{\sin(\pi p_{jk})}{2i \pi^2}\, \bar{Q}_n(\theta)=
Q_j\left(\theta+\frac{i \pi}{3}\right)Q_k\left(\theta-\frac{i \pi}{3}\right)-
Q_j\left(\theta-\frac{i \pi}{3}\right)Q_k\left(\theta+\frac{i \pi}{3}\right),\qquad
\end{eqnarray}
where again, the bar on $Q_n$ indicates the sign change $\bf{p}\rightarrow -\bf{p}$
\[
\bar{Q}_n(\theta,{\bf p})\equiv Q_n(\theta,-\bf{p})
\]
and $(n,j,k)$ is a permutation of $(1,2,3)$.
At the end of this section let us establish the $\theta\rightarrow-\infty $ asymptotics
of $Q_k(\theta)$ and $\bar{Q}_k(\theta)$. Obviously, in this case both (\ref{U_solutions})
and (\ref{V_solutions}) are approximate solutions of (\ref{diff_eq2}) at $x\sim 0$.
Thus, comparison of (\ref{chi}) with (\ref{chiU}) ensures that for $\theta\ll 0$
\begin{eqnarray}
\label{Q_asymp}
Q_k(\theta)\sim \frac{\exp (-3\theta p_k)}{4 \pi^2}\,;\qquad
\bar{Q}_k(\theta)\sim \frac{\exp (3\theta p_k)}{4 \pi^2}\,.
\end{eqnarray}
It is easy to see that the above asymptotic behavior is fully consistent
with the functional relations (\ref{Qc_QQ}).
\subsection{$SU(3)$ version of Baxter's $TQ$ relation}
The functional relations (\ref{Qc_QQ}) suggest the following $SU(3)$ analog
of Baxter's $TQ$ equations:
\begin{eqnarray}
\label{T_Q}
&T(\theta)Q_j\left(\theta-\frac{\pi i}{6}\right)\bar{Q}_k\left(\theta+\frac{\pi i}{6}\right)
=\hspace{9.6cm}\\
&Q_j(\theta-\frac{5\pi i}{6})\bar{Q}_k\left(\theta+\frac{\pi i}{6}\right)
+Q_j\left(\theta+\frac{\pi i}{2}\right)\bar{Q}_k\left(\theta-\frac{\pi i}{2}\right)+
Q_j\left(\theta-\frac{\pi i}{6}\right)\bar{Q}_k\left(\theta+\frac{5\pi i}{6}\right)\nonumber
\end{eqnarray}
for $j,k\in \{1,2,3\}$ with $j\neq k$. To uncover the essence of this construction, notice
that for a fixed pair of indices $(j,k)$, (\ref{T_Q}) can be thought of as a definition
of the function $T(\theta)$ in terms of the $Q$'s. Then the nontrivial question is ``do
other choices of $(j,k)$ lead to the same $T$?" Fortunately, elementary algebraic manipulations
with the help of (\ref{Qc_QQ}) ensure that the answer is positive. As mentioned
earlier, $Q_i(\theta)$ are entire functions. A thorough analysis shows that due to
(\ref{Qc_QQ}) all potential poles of $T(\theta)$ have zero residue. Thus $T(\theta)$
is an entire function too. Details of the proofs of the above two statements can be
found in appendix \ref{TQ_proof}.
The Bethe ansatz equations can be represented as (see equality (\ref{T_entire}))
\begin{eqnarray}
\frac{Q_j(\theta_\ell-\frac{2\pi i}{3})\bar{Q}_k\left(\theta_\ell+\frac{\pi i}{3}\right)}{
Q_j\left(\theta_\ell+\frac{2\pi i}{3}\right)\bar{Q}_k\left(\theta_\ell-\frac{\pi i}{3}\right)}=-1\,,
\end{eqnarray}
where $\theta_\ell$ are the zeroes of $Q_j(\theta)$.
Functional relations similar to (\ref{Qc_QQ}) and (\ref{T_Q}) emerge also in the context of ODE/IM for 'minimal' 2d CFT with extra spin $3$ current ($W_3$ symmetry) \cite{Dorey:1999pv}, \cite{Bazhanov:2001xm}. From there we can extrapolate that our case might correspond to the special choice
of Virasoro central charge $c=98$ for Toda CFT. In fact, this value of the central charge lies outside the region discussed in above references. Nevertheless, it should be possible to derive the corresponding TBA equations: we leave this task for future publication.
\section{Quantum periods and prepotential from Floquet monodromies and extension of Zamolodchikov's conjecture}
\label{a_cycles}
\subsection{The Floquet-Bloch monodromy matrix}
\label{sect_MM}
Consider the basis of solutions $f_1(x)$, $f_2(x)$, $f_3(x)$ of (\ref{diff_eq2}) with standard
initial conditions
($n,k\in\{1,2,3\}$)
\begin{eqnarray}
\label{BC}
\left.f^{(k-1)}_n(x)\right|_{x=0} =\delta_{k,n} \, .
\end{eqnarray}
Since the functions $f_n(x+2\pi i)$ are solutions too, we can
define the monodromy matrix $M_{k,n}$ as
\begin{eqnarray}
f_n(x+2\pi i)=\sum_{k=1}^3 f_k(x)M_{k,n}
\end{eqnarray}
Clearly
\begin{eqnarray}
\label{M_matrix}
M_{k,n}=f_n^{(k-1)}(2\pi i)\nonumber \, .
\end{eqnarray}
The solutions (\ref{f_series}) with $a\in \{a_1,a_2,a_3\}$ have diagonal monodromies
and can be represented as certain linear combinations of $f_n(x)$.
In other words the eigenvalues of the monodromy matrix $M_{k,n}$ must be identified with $\exp (2\pi ia_k)$, with $k=1,2,3$:
\begin{eqnarray}
\label{spec_M}
Spec(M_{k,n})=\{\exp (2\pi ia_1),\exp (2\pi ia_2),\exp (2\pi ia_3)\} \, .
\end{eqnarray}
For any fixed values of parameters $\Lambda$, ${\bf p}$, it is easy to integrate
numerically the differential equation (\ref{diff_eq2}) with boundary conditions
(\ref{BC}), find the matrix $M_{k,n}$ and then its eigenvalues $\exp (2\pi ia_n)$. Taking into account the Matone relation \cite{Matone:1995rx}, valid also in the presence of the $\Omega$-background \cite{Flume:2004rp},
\begin{eqnarray}
u_2\equiv \langle \textbf{tr}\,\phi^2\rangle=\sum_{n=1}^3a_n^2+2q\partial_qF_{inst}(q,{\bf a}) \, ,
\end{eqnarray}
we can access the deformed prepotential for any value of the coupling constant.
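A minimal numerical sketch of this procedure is given below; the parameter
values, tolerances and choice of integrator are ours, and the logarithm fixes
each $a_n$ only modulo an integer (SciPy's explicit Runge-Kutta integrators
accept complex initial data):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

p1, p2 = 0.12, 0.17
p3 = -p1 - p2
u2 = p1**2 + p2**2 + p3**2
u3 = p1**3 + p2**3 + p3**3
Lam = 0.06                            # an illustrative value of Lambda

def rhs(t, y):
    # integrate along the imaginary direction x = i t, t in [0, 2 pi]
    x = 1j * t
    f, f1, f2 = y
    f3 = 0.5 * u2 * f1 + (Lam**3 * (np.exp(x) + np.exp(-x)) + u3 / 3.0) * f
    return 1j * np.array([f1, f2, f3])

M = np.zeros((3, 3), dtype=complex)   # Floquet-Bloch monodromy matrix
for n in range(3):
    y0 = np.zeros(3, dtype=complex)
    y0[n] = 1.0                       # standard initial conditions
    sol = solve_ivp(rhs, (0.0, 2.0 * np.pi), y0, method='DOP853',
                    rtol=1e-12, atol=1e-12)
    M[:, n] = sol.y[:, -1]            # (f_n, f_n', f_n'') at x = 2 pi i

a = np.log(np.linalg.eigvals(M)) / (2j * np.pi)   # a_1, a_2, a_3 (mod 1)
print(a)
\end{verbatim}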
\subsection{Comparison of the instanton counting against numerical results}
Using the formulae of section \ref{NPF} it is straightforward to calculate
$\langle \textbf{tr}\,\phi^2\rangle$ or $\langle \textbf{tr}\,\phi^3\rangle$ as a
power series in $q$.
Here are the 3-instanton results (it is assumed that $a_1+a_2+a_3=0 $ and
by definition $a_{jk}\equiv a_j-a_k$)
\begin{eqnarray}
\label{tr_phi2}
\langle \textbf{tr}\,\phi^2\rangle &=&\sum_{k=1}^3a_k^2
-\frac{12(1-h_2)q}{\prod_{j<k}(a_{jk}^2-1)}
+\frac{P_{2,2}q^2}{\prod_{j<k}(a_{jk}^2-1)^3(a_{jk}^2-4)}+O(q^4)\qquad \\
\label{tr_phi3}
\langle \textbf{tr}\,\phi^3\rangle &=&\sum_{k=1}^3a_k^3
+\frac{54 h_3q}{\prod_{j<k}(a_{jk}^2-1)}
+\frac{P_{3,2}q^2}{\prod_{j<k}(a_{jk}^2-1)^3(a_{jk}^2-4)}\nonumber\\
&-&\frac{P_{3,3}q^3}{\prod_{j<k}(a_{jk}^2-1)^5(a_{jk}^2-4)(a_{jk}^2-9)}
+O(q^4)\,,
\end{eqnarray}
where
\begin{eqnarray}
h_2=\frac{a_1^2+a_2^2+a_3^2}{2}\,;\qquad h_3=-a_1a_2a_3\,,
\end{eqnarray}
and
\begin{eqnarray}
P_{2,2}=36 (220 - 1027 h_2 + 1659 h_2^2 - 698 h_2^3 - 958 h_2^4 + 1257 h_2^5 -
521 h_2^6\hspace{3cm}\\
+ 68 h_2^7 - 13959 h_3^2 + 33804 h_2 h_3^2 -
25434 h_2^2 h_3^2 + 5292 h_2^3 h_3^2\hspace{2.8cm}\nonumber\\
+ 297 h_2^4 h_3^2 + 13851 h_3^4 -
5103 h_2 h_3^4)\hspace{3.5cm}\nonumber\\
P_{3,2}=-162 h_3 (455 - 2487 h_2 + 4602 h_2^2 - 3286 h_2^3 + 291 h_2^4 +
573 h_2^5 - 148 h_2^6\hspace{2cm}\\
- 8073 h_3^2 + 14985 h_2 h_3^2 - 7695 h_2^2 h_3^2 +
783 h_2^3 h_3^2 + 1458 h_3^4)\hspace{1.8cm}\nonumber\\
P_{3,3}=-108 h_3 (12078563 - 109310145 h_2 + 400164948 h_2^2 - 722480972 h_2^3
\hspace{2.5cm}\\ +
538752402 h_2^4 + 275687658 h_2^5 - 946955868 h_2^6 +
865056708 h_2^7\hspace{3.1cm}\nonumber\\ - 391259133 h_2^8 + 81882223 h_2^9 - 2063856 h_2^{10} -
1715472 h_2^{11} + 162944 h_2^{12}\hspace{1.55cm} \nonumber\\- 984855213 h_3^2 +
6130798389 h_2 h_3^2 - 14569978437 h_2^2 h_3^2+
16850898261 h_2^3 h_3^2\hspace{0.85cm} \nonumber\\ - 9439886367 h_2^4 h_3^2 +
1593033399 h_2^5 h_3^2 + 730653777 h_2^6 h_3^2- 352792017 h_2^7 h_3^2 \hspace{1.1cm}\nonumber\\ +
42690240 h_2^8 h_3^2 - 562032 h_2^9 h_3^2 + 7812512937 h_3^4 -
22941081063 h_2 h_3^4 \hspace{2.1cm}\nonumber\\ + 24720233994 h_2^2 h_3^4 -
11808597150 h_2^3 h_3^4 + 2295385533 h_2^4 h_3^4-
64422459 h_2^5 h_3^4 \hspace{0.65cm}\nonumber\\- 14031792 h_2^6 h_3^4 - 3311723799 h_3^6 +
3321565299 h_2 h_3^6 - 982634409 h_2^2 h_3^6 \hspace{1.7cm}\nonumber\\ + 65800269 h_2^3 h_3^6 +
29760696 h_3^8) \hspace{2.6cm}\nonumber
\end{eqnarray}
We have calculated also $4$ and $5$ instanton corrections, but the formulae are too lengthy
to be presented here.
By means of numerical integration of the differential equation (\ref{diff_eq2}) along the
line indicated in section \ref{sect_MM} we have computed the eigenvalues of monodromy matrix
(\ref{M_matrix}) for several values of the instanton parameter $q=\Lambda^6$, namely for
the values
\begin{eqnarray}
\label{Lambda_range}
\Lambda=\exp \left(\frac{k-1}{20}-5\right) \,,\qquad k=1,2,\cdots,120\,,
\end{eqnarray}
and fixed values of parameters
\[
p_1=0.12\,; \qquad p_2=0.17\,; \qquad p_3=-0.29\,.
\]
Due to the identification (\ref{spec_M}) this allows one to find the corresponding
$A$-cycle periods $a_1,a_2,a_3$. In table \ref{tab:table1} we present a characteristic
excerpt from the resulting data.
\begin{table}[h!]
\begin{center}
\scalebox{0.9}{
\begin{tabular}{l|l|l}
\textbf{$\Lambda$} & $a_1$ & $a_2$\\
\hline
0.00822974704902 & 0.1200000000131 & 0.169999999982\\
0.0223707718562 & 0.1200000053049 & 0.169999992932 \\
0.0608100626252 & 0.1200021402877 & 0.169997148430\\
0.165298888222 & 0.1208841761521 & 0.168828966405\\
0.246596963942 & 0.1349151981823 & 0.151933010167 \\
0.272531793034 & 0.142136769453 - 0.019455438633 i & 0.142136769453 + 0.019455438633 i\\
0.449328964117 & 0.092117229441 - 0.135924390553 i & 0.092117229441 + 0.135924390553 i\\
0.740818220682 & 0.003727137475 - 0.568756791077 i & 0.003727137475 + 0.568756791077 i\\
1.22140275816 & 0.000899023180 - 1.071594057757 i & 0.000899023180 + 1.071594057757 i\\
2.01375270747 & 0.00036203460 - 1.78605985179 i & 0.00036203460 + 1.78605985179 i\\
3.32011692274 & 0.00013130957 - 2.96965962318 i & 0.00013132399 + 2.96965962932 i
\end{tabular}}
\end{center}
\caption{The values $a_1$, $a_2$ obtained through numerical integration
of the differential equation (\ref{diff_eq2}) with initial conditions (\ref{BC})
for $p_1=0.12$, $p_2=0.17$. }
\label{tab:table1}
\end{table}
Inserting the values of $a_k$, $\Lambda$ in (\ref{tr_phi2}), (\ref{tr_phi3})
supplemented by $q^4$ and $q^5$ corrections we have calculated $\langle \textbf{tr}\,\phi^2\rangle$ and
$\langle \textbf{tr}\,\phi^3\rangle$. Consistency requires that
at small values of $q$, at which the instanton expansion is valid, one should always obtain the same expectation values $\langle \textbf{tr}\,\phi^2\rangle
=p_1^2+p_2^2+p_3^2=0.1274$ and $\langle \textbf{tr}\,\phi^3\rangle
=p_1^3+p_2^3+p_3^3=-0.017748$. Table \ref{tab:table2} displays the results of actual
computations.
\begin{table}[h!]
\begin{center}
\begin{tabular}{l|l|l}
\textbf{$\Lambda$} & $\langle \textbf{tr}\,\phi^2\rangle$
& $\langle \textbf{tr}\,\phi^3\rangle$\\
\hline
0.00822974704902 & 0.1274000000000 & -0.0177480000000 \\
0.0223707718562 & 0.1274000000000 & -0.0177480000000 \\
0.0608100626252 & 0.1274000000000 & -0.0177480000000 \\
0.165298888222 & 0.1274000000000 & -0.0177480000000 \\
0.246596963942 & 0.1273999999998 & -0.0177480000000 \\
0.272531793034 & 0.1273999999922 & -0.0177479999994 \\
0.449328964117 & 0.1273774046391 & -0.0177462190257 \\
0.740818220682 & 0.1313057536866 & -0.0178774876030
\end{tabular}
\end{center}
\caption{The values $\langle \textbf{tr}\,\phi^2\rangle$,
$\langle \textbf{tr}\,\phi^3\rangle$ obtained by inserting
the values of $a_1$, $a_2$ from Table \ref{tab:table1} into (\ref{tr_phi2}),
(\ref{tr_phi3}) supplemented by $q^4$ and $q^5$ corrections.
To be compared with (by definition) $\langle \textbf{tr}\,\phi^2\rangle
=p_1^2+p_2^2+p_3^2=0.1274$ and $\langle \textbf{tr}\,\phi^3\rangle
=p_1^3+p_2^3+p_3^3=-0.017748$.}
\label{tab:table2}
\end{table}
One expects a significant deviation from the instanton series starting from
the value of $\Lambda$ at which the polynomial
\[
\left(z^3-\frac{u_2}{2}\,z-\frac{u_3}{3}\right)^2-4 \Lambda^6
\]
acquires coinciding zeros, i.e. at the point where its discriminant
\[
1024 \Lambda ^{18} \left(216 \Lambda ^6-72 \Lambda ^3 u_3-u_2^3+6 u_3^2\right)
\left(216 \Lambda ^6+72 \Lambda ^3 u_3-u_2^3+6 u_3^2\right)
\]
vanishes. Such points correspond to massless dyons or monopoles. It is easy to
check that within the range of $\Lambda$
(\ref{Lambda_range}) the only zero is at $\Lambda=0.1822359934629\cdots $ for which
the last factor of the discriminant vanishes. And in fact, inspecting table \ref{tab:table2}
one sees that for larger values of $\Lambda$ the mismatch becomes significant,
while for smaller values the agreement is quite impressive.
Notice also from table \ref{tab:table1} that for $\Lambda>0.24659696394 $
we encounter complex values of $a_{1}$ and $a_{2}$.
\subsection{Extension of Zamolodchikov's conjecture to $SU(3)$}
The simpler case of the gauge group $SU(2)$ has been analyzed recently in
\cite{Fioravanti:2019vxi}. In this case one has to deal with the Mathieu equation. The corresponding $TQ$ relation was investigated in \cite{Zamolodchikov:2000unpb}, where Al. Zamolodchikov conjectured (and demonstrated numerically) an elegant relationship
between the $T$-function and the Floquet exponent $\nu$ of the Mathieu equation:
\begin{eqnarray}
T=\cosh (2\pi \nu) \, .
\end{eqnarray}
Here we suggest a natural extension of Zamolodchikov's conjecture for $SU(3)$ case:
\begin{eqnarray}
\label{conj_su3}
T(\theta)=\sum_{n=1}^3e^{2 \pi i a_n} \, .
\end{eqnarray}
Notice that at $\theta\ll 0$ the asymptotics (\ref{Q_asymp}) lead to
\begin{eqnarray}
\label{T_asym}
T(\theta)\sim\sum_{n=1}^3e^{2 \pi i p_n} \,\, ,
\end{eqnarray}
which is consistent with (\ref{conj_su3}), since for $\theta\ll 0$ instanton
corrections disappear and $a_k$ coincides with $p_k$.
\section{Few perspectives}
It would be very interesting to have a TBA for our case and check our
conjecture (\ref{conj_su3}) as it was done by Al. Zamolodchikov in
\cite{Zamolodchikov:2000unpb}. Actually, even more relevant would be a gauge TBA that may shed light on the dual $B$-cycle periods
$\bf{a_D}$ along the route presented in \cite{Fioravanti:2019vxi} for the $SU(2)$ case (see also the presentation of \cite{Gaiotto:2014bza} and \cite{Grassi:2019coc}).
It is well known that pure ${\cal N}=2$ $SU(N)$ theories with $N>2$ are endowed with special points
in their moduli spaces of vacua at which mutually nonlocal dyons become massless (Argyres-Douglas points) \cite{ Argyres:1995jj}.
It would be interesting to investigate this phenomenon within our approach (for the NS regime of the $\Omega$ background).
Furthermore, for generic groups of gauge theories (starting with $SU(2)$ and general Liouville ODE/IM correspondence) it is very intriguing to investigate the form of 'potentials' of the ODE describing excited states of the IM ({\it cf.} \cite{BLZ-excited-ode-im, DF-excited-ode-im} for what we know about 'ordinary' ODE/IM). In fact, the latter should be obtainable also via analytic continuation in the parameters/moduli, and thus, these non-trivial monodromies would be of great interest in gauge theories.
Of course, it is very plausible that the imaginable generalizations of our results, and in particular of (\ref{conj_su3}), might hold for arbitrary $SU(N)$ gauge groups. In fact, for those higher order differential equations we shall have also the enlightening treatment of a mathematically similar problem with only one irregular singularity (at $\infty$), the case of gluon scattering amplitudes/Wilson loops at strong coupling in planar ${\cal N}= 4$ SYM \cite{TBA-amp1,TBA-amp3}. Because of its different physical nature, this problem allows for a beautiful and all-coupling exact Operator Product Expansion \cite{TBA-amp2,BSV1}, whose strong coupling limit interestingly reproduces the integrable TBA \cite{FPR} of \cite{TBA-amp1,TBA-amp3}: the similar mathematical structures and ideas of these two different fields should bear fruit, in the future, for a deeper understanding.
\section*{Acknowledgments}
HP and RP are grateful to G. Sarkissian and R. Mkrtchyan for many useful discussions.
DF would like to thank D. Gregori, M. Rossi, R. Tateo for stimulating discussions. This work has been partially supported by the grants: research project 18T-1C340 from Armenian State Committee of Science, GAST (INFN), the MPNS-COST Action MP1210, the EC Network Gatis and the MIUR-PRIN contract 2017CC72MK$\textunderscore$003. D.F. thanks the GGI for Theoretical Physics for invitation to the workshop 'Supersymmetric Quantum Field Theories in the Non-perturbative Regime'.
\begin{appendix}
\section{Proving the $TQ$ relations}
\label{TQ_proof}
In this appendix we prove that different choices of the indices $j\ne k $ in the
$TQ$ relations (\ref{T_Q}) are consistent with the $QQ$ relations (\ref{Qc_QQ}).
For example, let us choose $j=1$, $k=2$:
\begin{eqnarray}
\label{T_Q_12}
&T(\theta)Q_1\left(\theta-\frac{\pi i}{6}\right)\bar{Q}_2\left(\theta+\frac{\pi i}{6}\right)
=\\
&Q_1(\theta-\frac{5\pi i}{6})\bar{Q}_2\left(\theta+\frac{\pi i}{6}\right)
+Q_1\left(\theta+\frac{\pi i}{2}\right)\bar{Q}_2\left(\theta-\frac{\pi i}{2}\right)+
Q_1\left(\theta-\frac{\pi i}{6}\right)\bar{Q}_2\left(\theta+\frac{5\pi i}{6}\right),\nonumber
\end{eqnarray}
and $j=1$, $k=3$:
\begin{eqnarray}
\label{T_Q_13}
&T(\theta)Q_1\left(\theta-\frac{\pi i}{6}\right)\bar{Q}_3\left(\theta+\frac{\pi i}{6}\right)
=\\
&Q_1(\theta-\frac{5\pi i}{6})\bar{Q}_3\left(\theta+\frac{\pi i}{6}\right)
+Q_1\left(\theta+\frac{\pi i}{2}\right)\bar{Q}_3\left(\theta-\frac{\pi i}{2}\right)+
Q_1\left(\theta-\frac{\pi i}{6}\right)\bar{Q}_3\left(\theta+\frac{5\pi i}{6}\right).\nonumber
\end{eqnarray}
Multiplying (\ref{T_Q_12}) by $\bar{Q}_3\left(\theta+\frac{\pi i}{6}\right) $,
(\ref{T_Q_13}) by $\bar{Q}_2\left(\theta+\frac{\pi i}{6}\right)$
and taking the difference, the right-hand side becomes
\begin{eqnarray}
&\bar{Q}_3\left(\theta+\frac{\pi i}{6}\right)\left(Q_1\left(\theta+\frac{\pi i}{2}\right)\bar{Q}_2\left(\theta-\frac{\pi i}{2}\right)+
Q_1\left(\theta-\frac{\pi i}{6}\right)\bar{Q}_2\left(\theta+\frac{5\pi i}{6}\right)\right)-
\nonumber\\
&\bar{Q}_2\left(\theta+\frac{\pi i}{6}\right)\left(
Q_1\left(\theta+\frac{\pi i}{2}\right)\bar{Q}_3\left(\theta-\frac{\pi i}{2}\right)+
Q_1\left(\theta-\frac{\pi i}{6}\right)\bar{Q}_3\left(\theta+\frac{5\pi i}{6}\right)
\right)=\\
&Q_1\left(\theta+\frac{\pi i}{2}\right)
\left(
\bar{Q}_3\left(\theta+\frac{\pi i}{6}\right)\bar{Q}_2\left(\theta-\frac{\pi i}{2}\right)-
\bar{Q}_2\left(\theta+\frac{\pi i}{6}\right)\bar{Q}_3\left(\theta-\frac{\pi i}{2}\right)\right)+
\nonumber\\
&Q_1\left(\theta-\frac{\pi i}{6}\right)\left(
\bar{Q}_3\left(\theta+\frac{\pi i}{6}\right)\bar{Q}_2\left(\theta+\frac{5\pi i}{6}\right)-
\bar{Q}_2\left(\theta+\frac{\pi i}{6}\right)\bar{Q}_3\left(\theta+\frac{5\pi i}{6}\right)
\right)\,.
\nonumber
\end{eqnarray}
Obviously, the last expression vanishes due to (\ref{Qc_QQ}), as consistency requires.
Now let us show that $T(\theta)$ does not have any pole.
It follows from (\ref{T_Q_12}) that a potential pole of $T(\theta)$ can be found
either among the zeros of $Q_1\left(\theta-\frac{\pi i}{6}\right)$ or
those of $\bar{Q}_2\left(\theta+\frac{\pi i}{6}\right)$. For definiteness, let us assume that
it belongs to the zero set of $Q_1\left(\theta-\frac{\pi i}{6}\right)$ (the other option
can be treated in a completely analogous manner). Let $Q_1(\theta_\ell)=0$.
Then for $\theta=\theta_\ell+\frac{i\pi}{6}$ the r.h.s. of (\ref{T_Q_12}) is equal to
\begin{eqnarray}
\label{T_entire}
&Q_1(\theta_\ell-\frac{2\pi i}{3})\bar{Q}_2\left(\theta_\ell+\frac{\pi i}{3}\right)
+Q_1\left(\theta_\ell+\frac{2\pi i}{3}\right)\bar{Q}_2\left(\theta_\ell-\frac{\pi i}{3}\right)=\\
&\frac{2i \pi^2}{\sin(\pi p_{31})}\left(Q_1(\theta_\ell-\frac{2\pi i}{3})Q_1\left(\theta_\ell+\frac{2\pi i}{3}\right)Q_3\left(\theta_\ell\right)-
Q_1\left(\theta_\ell+\frac{2\pi i}{3}\right)Q_1(\theta_\ell-\frac{2\pi i}{3})Q_3\left(\theta_\ell\right)\right)=0,
\nonumber
\end{eqnarray}
where the relations (\ref{Qc_QQ}) for both
$\bar{Q}_2\left(\theta_\ell\pm\frac{\pi i}{3}\right)$ have been used. So $T\left(\theta_\ell+\frac{\pi i}{6}\right)$ is finite,
thus proving that $T(\theta)$ is entire.
\end{appendix}
\bibliographystyle{JHEP}
\providecommand{\href}[2]{#2}
\begingroup\raggedright
|
2,869,038,156,913 | arxiv | \section{Introduction}
The investigation of the low-energy physics of the Heisenberg antiferromagnet
(HAFM)
\begin{equation}
H = \sum_{<i,j>}{\bf s}_i\cdot{\bf s}_j
\label{H}
\end{equation}
on the kagome lattice is one of the most
challenging problems in the field of frustrated quantum magnetism.
The sum over $\langle i,j \rangle$ runs over all nearest-neighbor pairs of sites on the lattice, counting each bond once only, and ${\bf s}_{i} \equiv (s^{x}_{i},s^{y}_{i},s^{z}_{i})$ is the spin operator on site $i$. Although there has been an intensive discussion of the problem over many years
applying various theoretical methods (see,
e.g.,~Refs.~\onlinecite{marston91,Harris1992,Chalker1992,huse1992,sachdev1992,singh1992,chub92,Leung1993,suzuki1994,Waldtmann1998,Zeng1995,
henley1995,Mambrini2000,farnell2001,bern2002,Nikolic2003,schmal2004,Budnik2004,Capponi2004,Sindzingre2009,Singh2007,Singh2008,
Jiang2008,henley2009,Poilblanc2010,Bishop2010,Evenbly2010,Yan2011,lauchli2011,lee2011,nakano2011,tay2011,cepas2011,iqbal2011}),
no conclusive answer on the nature of
the ground state (GS) and the existence of a spin gap has been found.
While for many years a spin-liquid GS was
favored,\cite{Chalker1992,Zeng1995,Waldtmann1998} recently
arguments have been given for a valence-bond crystal GS with a large unit cell
of 36 sites that breaks
the symmetry of the underlying kagome lattice.\cite{Nikolic2003,Singh2007,Singh2008}
However, very recently this valence-bond picture has been rechecked by
large-scale numerics\cite{Yan2011,lauchli2011,nakano2011} and once again the
spin-liquid GS is favored.
Although large-scale density-matrix renormalization group (DMRG) and exact
diagonalization (ED) calculations seem to be the most effective tools to study the
low-energy physics of the kagome HAFM,
complementary methods (see, e.g., Refs.~\onlinecite{Evenbly2010,lee2011,tay2011,iqbal2011})
are highly desirable to shed further light on this challenging problem.
A method which has been successfully applied to strongly frustrated quantum
magnets is the coupled cluster method (CCM) (and see, e.g.,
Refs.~\onlinecite{farnell2001,Bishop2010,rachid05,Schm:2006,
rachid08,bishop08,farnell09,richter2010,farnell11,farnell11a}).
In the present paper we apply the CCM in high orders of approximation to the kagome HAFM.
\begin{figure}[ht]
\begin{center}
\epsfig{file=fig1.eps,scale=0.45,angle=0.0}
\end{center}
\caption{
Illustration of the $\sqrt{3}\times\sqrt{3}$ (left) and the
$q=0$ (right) classical GS of the kagome HAFM.
}
\label{fig1}
\end{figure}
\section{Coupled cluster method}
For the sake of brevity we illustrate here only some relevant features of
the coupled cluster method (CCM).
For more general information on the methodology of the CCM, see, e.g.,
Refs.~\onlinecite{zeng98,bishop98a,bishop99,bishop00,farnell02,bishop04}.
We first mention that the CCM approach yields results directly in
the thermodynamic limit $N\to\infty$,
where $N$ is the number of lattice sites (and hence spins).
The starting point for a CCM
calculation is the choice of a normalized reference state
$|\Phi\rangle$ that is typically a classical GS of the model.
For the kagome HAFM
we choose the $\sqrt{3}\times\sqrt{3}$ and the
$q=0$ states illustrated in Fig.~\ref{fig1} (see also,
e.g., Refs.~\onlinecite{Harris1992,chub92,tay2011} for further details).
Then we perform a rotation of the local axes of each of
the spins such that all spins in the reference state align along the
negative $z$ axis.
In this new set of local spin coordinates
a complete set of
mutually commuting multispin
creation operators $C_I^+ \equiv (C^{-}_{I})^{\dagger}$ related to this reference state is defined by
\begin{equation}
\label{set1} |{\Phi}\rangle = |\downarrow\downarrow\downarrow\cdots\rangle ; \mbox{ }
C_I^+
= { s}_{n}^+ \, , \, { s}_{n}^+{ s}_{m}^+ \, , \, { s}_{n}^+{ s}_{m}^+{
s}_{k}^+ \, , \, \ldots \; ,
\end{equation}
where $s^{+}_{n} \equiv s^{x}_{n} + is^{y}_{n}$, the indices $n,m,k,\ldots$ denote arbitrary lattice sites, and the
components of the spin operators are defined
in the local rotated coordinate frames.
Note that for spins of quantum number $s$, each site index in each configuration index $I$ in
Eq.~(\ref{set1}) can be repeated up to a maximum of $2s$ times.
With the set $\{|\Phi\rangle, C_I^+\}$ thus defined, the CCM parametrizations of
the ket
and bra GS eigenvectors
$|\Psi\rangle$
and $\langle\tilde{\Psi}|$
of the spin system
are given by
\begin{eqnarray}
\label{eq5}
|\Psi\rangle=e^S|\Phi\rangle \; , \mbox{ } S=\sum_{I\neq 0}a_IC_I^+ \; \\
\label{eq5b}
\langle \tilde{ \Psi}|=\langle \Phi |\tilde{S}e^{-S} \; , \mbox{ } \tilde{S}=1+
\sum_{I\neq 0}\tilde{a}_IC_I^{-} \; .
\end{eqnarray}
We have defined $C_{0}^{+} \equiv 1$, and the normalization of the states is clearly such that $\langle \tilde{ \Psi}| \Psi \rangle =
\langle \Phi| \Psi \rangle = \langle \Phi| \Phi \rangle \equiv 1$.
The CCM correlation operators, $S$ and $\tilde{S}$, contain the correlation coefficients,
$a_I$ and $\tilde{a}_I$,
which can be determined by the CCM ket-state
and bra-state
equations
\begin{eqnarray}
\label{eq6}
\langle\Phi|C_I^-e^{-S}He^S|\Phi\rangle = 0 \;\; ; \; \forall I\neq 0 \\
\label{eq6a}\langle\Phi|{\tilde S}e^{-S}[H, C_I^+]e^S|\Phi\rangle = 0 \; \; ; \; \forall
I\neq 0.
\end{eqnarray}
Equations (\ref{eq6}) and (\ref{eq6a}) are fully equivalent to the GS Schr\"{o}dinger equations for the ket and bra states. They follow readily from the requirement that the GS energy functional $\langle\tilde{\Psi}|H|\Psi\rangle$ be stationary with respect to variations in all of the correlation coefficients $\tilde{a}_{I}$ and $a_{I}$ respectively ($\forall I \neq 0)$.
Each ket-state
or bra-state
equation belongs to a certain configuration index $I$,
i.e., it corresponds to a certain set (configuration) of lattice sites
$n,m,k,\dots\;$, as in Eq.~(\ref{set1}).
Using the Schr\"odinger equation, $H|\Psi\rangle=E|\Psi\rangle$, we can now write
the GS energy as $E=\langle\Phi|e^{-S}He^S|\Phi\rangle$.
The magnetic order parameter (sublattice magnetization) is given
by $ M = -\frac{1}{N} \sum_{i=1}^N \langle\tilde\Psi|{ s}_i^z|\Psi\rangle$, where
${s}_i^z$
is expressed in the transformed coordinate system, and $N(\rightarrow \infty)$ is the number of lattice sites.
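For completeness we recall the standard identity that makes these expressions computationally tractable, namely the nested-commutator expansion of the similarity-transformed Hamiltonian,
\begin{equation}
e^{-S}He^{S}=H+[H,S]+\frac{1}{2!}\bigl[[H,S],S\bigr]+\frac{1}{3!}\Bigl[\bigl[[H,S],S\bigr],S\Bigr]+\cdots,
\end{equation}
which, for the spin Hamiltonians considered here, terminates after a finite number of terms, since every term of $H$ contains only a finite number of spin operators while $S$ is built entirely of mutually commuting creation operators.\cite{bishop98a}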
If we were able to consider all creation and annihilation operators
$C_I^+$ and $C_I^-$,
i.e., all sets (configurations) of lattice sites, in the
CCM correlation operators $S$ and $\tilde{S}$, we would obtain, in principle, the exact
eigenstate.\cite{bishop98a}
However, for the many-body quantum system under consideration
it is necessary to use approximation
schemes in order to truncate the expansions of $S$
and $\tilde S$
in Eqs.~(\ref{eq5}) and (\ref{eq5b})
in a practical calculation.
Then the approximate results for the GS energy $E$ and the order
parameter $M$ will depend certainly on the choice of the reference state.
We use for spin quantum number $s=1/2$ the so-called LSUB$n$
approximation scheme to truncate the expansions
of $S$ and $\tilde S$
in Eqs.~(\ref{eq5}) and (\ref{eq5b}), where we
include only $n$ or fewer correlated spins in all configurations (or lattice animals in the language of graph theory) which
span a range of no more than $n$ contiguous
lattice sites, where a set of sites is defined to be contiguous if every site has at least one other in the set as a nearest neighbor (and for more details see Refs.~\onlinecite{zeng98,bishop00,farnell02,bishop04}).
Using an efficient
parallelized CCM code \cite{cccm} we are able to solve the CCM equations up
to LSUB10 for $s=1/2$ (where, e.g., for the $q=0$
reference state a set of 238010 coupled ket-state equations has to be
solved), which goes significantly beyond earlier CCM
calculations for the
kagome HAFM.\cite{farnell2001,Bishop2010}
Moreover, we also use the CCM to
consider spin quantum numbers $s>1/2$.
Since the LSUB$n$ approximation becomes exact for $n \to \infty$ (as also so does the alternative SUB$n$-$n$ scheme that we introduce and use below for values of the spin quantum number $s>1/2$ in
Sec.~\ref{larger_S}), it is useful
to extrapolate the `raw' LSUB$n$ (or SUB$n$-$n$)
data to the limit $n \to \infty$.
There is ample experience regarding how one should
extrapolate the GS energy per site $e_0(n) \equiv E(n)/N$ and the magnetic order parameter
$M(n)$. For the GS energy per spin $e_0(n) = a_0 + a_1(1/n)^2 + a_2(1/n)^4$ is a very
well-tested extrapolation ansatz.\cite{bishop00,bishop04,rachid05,
Schm:2006,bishop08,rachid08,richter2010} An appropriate extrapolation rule
for the magnetic order parameter of highly frustrated systems
is\cite{bishop08,rachid08,richter2010}
$M(n)=b_0+b_1(1/n)^{1/2}+b_2(1/n)^{3/2}$.
Moreover, we know from Refs.~\onlinecite{bishop08,rachid08,richter2010} that low
levels
of approximation conform poorly to these
rules. Hence, we
exclude the $n=2$ and $n=3$ data from the extrapolations.
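As a concrete illustration, note that the energy ansatz is simply a polynomial in the variable $x=1/n^{2}$, while the order-parameter ansatz is linear in the basis $\{1,(1/n)^{1/2},(1/n)^{3/2}\}$, so both extrapolations amount to ordinary least-squares fits. The following minimal sketch (in Python) applies them to the $q=0$ LSUB$n$ data for $s=1/2$ listed in Table~\ref{table1} below; the printed intercepts should approximately reproduce the extrapolated values quoted there.
\begin{verbatim}
import numpy as np

# q=0 LSUBn data for s=1/2 from Table I (n = 4,...,10)
n  = np.arange(4, 11)
e0 = np.array([-0.408066, -0.414418, -0.420078, -0.423126,
               -0.426054, -0.427952, -0.429413])
M  = np.array([ 0.322860,  0.286462,  0.248078,  0.225356,
                0.202074,  0.186435,  0.172742])

# e0(n) = a0 + a1*(1/n)^2 + a2*(1/n)^4 is quadratic in x = 1/n^2
a2, a1, a0 = np.polyfit(1.0 / n**2, e0, 2)
print("extrapolated e0:", a0)     # close to -0.4357

# M(n) = b0 + b1*(1/n)^(1/2) + b2*(1/n)^(3/2): linear least squares
A = np.vstack([np.ones(len(n)), (1.0 / n)**0.5, (1.0 / n)**1.5]).T
b0, b1, b2 = np.linalg.lstsq(A, M, rcond=None)[0]
print("extrapolated M:", b0)      # negative, i.e., no magnetic order
\end{verbatim}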
For the solution of the CCM equations we rewrite the Hamiltonian (\ref{H})
in the rotated coordinate frame of the local quantization axes
\begin{equation}\begin{aligned}
H_\lambda&=\sum_{<i\rightarrow j>}\biggl[-\frac{1}{2}\left(\lambda s^{x}_{i}s^{x}_{j}
+ s^{z}_{i}s^{z}_{j}\right) + \lambda s^{y}_{i}s^{y}_{j}\\
&+ \frac{\sqrt{3}}{2}\lambda\left(-s^{x}_{i}s^{z}_{j} +
s^{z}_{i}s^{x}_{j}\right)\biggr],
\end{aligned}\label{Hlambda}
\end{equation}
where we have further introduced an anisotropy parameter $\lambda$ that now multiplies the
non-Ising terms, similar to what was done in Refs.~\onlinecite{singh1992,bishop99}.
Note that the symbol $<i\rightarrow j>$ on the sum in Eq. (\ref{Hlambda}) now indicates a directionality for the
nearest-neighbor bonds (see, e.g., Refs.~\onlinecite{singh1992,bishop99,farnell09}),
which is different for the $\sqrt{3}\times\sqrt{3}$ and the
$q=0$ reference states.
Starting at $\lambda=0$, where the corresponding reference states are
eigenstates of $H_\lambda$, we can slowly increase $\lambda$ and hence trace the CCM solutions out to the true
kagome point at $\lambda=1$.
Moreover, $\lambda$ can be understood as a parameter that tunes the strength of the quantum
fluctuations.
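A quick numerical sanity check of Eq.~(\ref{Hlambda}) can be performed on a single bond. The following minimal sketch (in Python; it is not part of the CCM code itself) builds the $4\times4$ bond Hamiltonian for $s=1/2$ and verifies that at $\lambda=0$ the local reference state $|\!\downarrow\downarrow\rangle$ is an eigenstate with the classical bond energy $-s^{2}/2=-1/8$, while at $\lambda=1$ the spectrum coincides with that of an isotropic Heisenberg bond, $\{-3/4,1/4\}$, as it must, since the local rotations are unitary.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2.0          # spin-1/2 operators
sy = np.array([[0, -1j], [1j, 0]]) / 2.0
sz = np.array([[1, 0], [0, -1]]) / 2.0
kron = np.kron

def h_bond(lam):
    # one directed bond of the rotated Hamiltonian H_lambda
    return (-0.5 * (lam * kron(sx, sx) + kron(sz, sz))
            + lam * kron(sy, sy)
            + (np.sqrt(3.0) / 2.0) * lam * (kron(sz, sx) - kron(sx, sz)))

down = np.array([0.0, 1.0])
ref = kron(down, down)                          # |down,down> reference
print(np.allclose(h_bond(0.0) @ ref, -0.125 * ref))    # True
print(np.round(np.linalg.eigvalsh(h_bond(1.0)), 6))    # [-0.75 0.25 0.25 0.25]
\end{verbatim}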
\begin{figure}[ht]
\begin{center}
\epsfig{file=fig2.eps,scale=0.7,angle=0.0}
\end{center}
\caption{Difference of GS energies per site, $\Delta e \equiv
e_0^{\sqrt{3}\times\sqrt{3}} - e_0^{q=0}$, between the
$\sqrt{3}\times\sqrt{3}$ and $q=0$ states of the spin-$1/2$ kagome HAFM, for various CCM LSUB$n$
approximations and spin quantum number $s=1/2$.
}
\label{fig2}
\end{figure}
\begin{figure}[ht]
\begin{center}
\epsfig{file=fig3.eps,scale=0.55,angle=0.0}
\end{center}
\caption{CCM-LSUB$n$ data for the magnetic order parameter $M$ versus $\lambda$ for the spin-$1/2$ kagome HAFM,
for (a) the $\sqrt{3}\times\sqrt{3}$ reference state and (b) the $q=0$ reference state.
For the extrapolations to $n \to \infty$ according to
$M(n)=b_0+b_1(1/n)^{1/2}+b_2(1/n)^{3/2}$
we have used LSUB$n$ data for $n=4,5,\ldots,10$ as well as for
$n=6,7,\ldots,10$.
}
\label{fig3}
\end{figure}
\section{Results}
\subsection{The extreme quantum case: $s=1/2$}
\begin{table}
\caption{CCM results for the spin-$1/2$ HAFM on the kagome lattice (i.e., at
$\lambda =
1$). The quantity $e_0 \equiv E/N$ is the GS energy per spin
and $M$ is the magnetic order
parameter (sublattice magnetization).
The LSUB$n$ results are extrapolated to $n\to \infty$ according
to
$e_0(n) = a_0 + a_1(1/n)^2 + a_2(1/n)^4$
and
$M(n)=b_0+b_1(1/n)^{1/2}+b_2(1/n)^{3/2}$
using LSUB$n$ data for $n=4,5,\ldots,10$ as well as for
$n=6,7,\ldots,10$.
}
\begin{tabular}{@{}|c|c|c|@{}}\hline
{\bf $\sqrt{3}\times\sqrt{3}$} &$e_0$ &$M$ \\\hline
LSUB4 &-0.408728 &0.320702 \\\hline
LSUB5 &-0.414235 &0.291917 \\\hline
LSUB6 &-0.418052 &0.272109 \\\hline
LSUB7 &-0.420677 &0.248989 \\\hline
LSUB8 &-0.423554 &0.219994 \\\hline
LSUB9 &-0.424962 &0.204661 \\\hline
LSUB10 &-0.426485 &0.187634 \\\hline
{\bf Extrapolated (4-10)} &{\bf -0.4318 } &$<0$ \\\hline
{\bf Extrapolated (6-10)} &{\bf -0.4336 } &$<0$ \\\hline
\hline
{\bf $q=0$} &$e_0$ &$M$ \\\hline
LSUB4 &-0.408066 &0.322860 \\\hline
LSUB5 &-0.414418 &0.286462 \\\hline
LSUB6 &-0.420078 &0.248078 \\\hline
LSUB7 &-0.423126 &0.225356 \\\hline
LSUB8 &-0.426054 &0.202074 \\\hline
LSUB9 &-0.427952 &0.186435 \\\hline
LSUB10 &-0.429413 &0.172742 \\\hline
{\bf Extrapolated (4-10)} &{\bf -0.4357 } &$<0$ \\\hline
{\bf Extrapolated (6-10)} &{\bf -0.4372 } &$<0$ \\\hline
\hline
\multicolumn{3}{|c|}{\bf other recent results}\\\hline
Ref.~\onlinecite{Singh2007} &-0.433 & - \\\hline
Ref.~\onlinecite{Evenbly2010} &-0.4322 & - \\\hline
Ref.~\onlinecite{lauchli2011}, $N=42$ (type a) &-0.437999 & - \\\hline
Ref.~\onlinecite{lauchli2011}, $N=42$ (type b) &-0.438143 & - \\\hline
Ref.~\onlinecite{Yan2011} &-0.4379 & - \\\hline
\hline
\end{tabular}
\label{table1}
\end{table}
We start with the CCM investigation of the kagome HAFM for spin quantum number
$s=1/2$.
At a given finite level of the CCM LSUB$n$ scheme the treatment
of quantum
effects is performed in an approximate manner.
Certainly, the treatment of
quantum effects becomes better as the level of approximation $n$ is increased.
In previous studies of the GS selection based on an expansion around the
classical limit\cite{chub92,sachdev1992,henley1995} the $\sqrt{3}\times\sqrt{3}$ state
was found to be selected by quantum fluctuations.
We present our results for the GS energy per site
in Fig.~\ref{fig2}, where the dependence on the anisotropy
parameter $\lambda$ of the
difference in the energies per site between the two states considered, $\Delta e \equiv e_0^{\sqrt{3}\times\sqrt{3}}
- e_0^{q=0}$,
is shown for various LSUB$n$ approximations.
Interestingly, the GS selection depends on the LSUB$n$ truncation index $n$.
Just as in linear spin-wave theory,\cite{Harris1992,henley1995}
there is also no GS selection (i.e., $\Delta e=0$)
at the CCM-LSUB$2$ level, thereby indicating a poor consideration of quantum effects at the lowest LSUB$n$ order.
As the level of approximation $n$ is increased
we first find that $\Delta e<0$ for $n=3$ and $n=4$ (i.e.,
the $\sqrt{3}\times\sqrt{3}$ state is selected in accordance with
previous findings\cite{chub92,henley1995}), but as $n$ is further increased we
then find that $\Delta e>0$ for $n>4$
(i.e., the $q=0$ state is selected).
Bearing in mind that quantum effects are better taken into account at higher LSUB$n$ levels we might argue that strong quantum fluctuations indeed favor the $q=0$ state.
Note that this line of argument is also supported by our CCM results below for spin quantum
numbers $s> 1/2$ (i.e., generally speaking, where quantum fluctuations are weaker),
where in all levels of approximations the $\sqrt{3}\times\sqrt{3}$ state is
selected (and see our discussion below in Sec.~\ref{larger_S}).
In Fig.~\ref{fig3} we show the magnetic order parameter as a function of the anisotropy parameter $\lambda$. At $\lambda=0$ we have
$M=s=1/2$, since the corresponding reference state is the exact GS of
$H_{\lambda=0}$. As $\lambda$ is increased the order parameter
decreases
monotonically. At a certain value of $\lambda$, near the true kagome point
$\lambda=1$, the extrapolated order parameter vanishes, thus indicating that the GS
is magnetically disordered. The difference in the two variants of the
extrapolation (including or excluding
LSUB4 and LSUB5) may be considered as an error bar for the extrapolated
order parameter.
Next we analyze the model for $\lambda=1$ in more detail.
The CCM-LSUB$n$ data, as well as the extrapolated data, are listed in
Table~\ref{table1}. Moreover, we present results for the GS energy
obtained by other methods for comparison. While the values $e_0=-0.4322$ obtained in
Ref.~\onlinecite{Evenbly2010} and $e_0=-0.4332$ obtained in
Ref.~\onlinecite{Yan2011} can be considered as rigorous upper bounds of the GS
energy, the large-scale DMRG
result $e_0=-0.4379$ obtained in
Ref.~\onlinecite{Yan2011} seems to be the most accurate estimate presently available.
The lowest extrapolated CCM energy is $e_0=-0.4372$ obtained for the $q=0$ reference state
using CCM-LSUB$n$ results for $n=6,7,8,9,10$ for the extrapolation.
This CCM estimate is very close to the DMRG result of
Ref.~\onlinecite{Yan2011}.
\begin{figure}[ht]
\begin{center}
\epsfig{file=fig4.eps,scale=0.67,angle=0.0}
\end{center}
\caption{
Difference of the GS energies per site, $\Delta e \equiv
e_0^{\sqrt{3}\times\sqrt{3}} - e_0^{q=0}$,
between the $\sqrt{3}\times\sqrt{3}$ and $q=0$ states
of the kagome HAFM, calculated for the CCM SUB$8$-$8$
approximation and for spin quantum numbers $s=1/2,1,3/2,2,5/2,3$.
}
\label{fig4}
\end{figure}
\begin{figure}[ht]
\begin{center}
\epsfig{file=fig5.eps,scale=0.55,angle=0.0}
\end{center}
\caption{
Extrapolated magnetic order parameter, $M/s$, versus $\lambda$ for (a) the $\sqrt{3}\times\sqrt{3}$ reference state and (b) the $q=0$ reference state of the kagome HAFM, for various values of the
spin quantum number $s$.
For the extrapolations to $n \to \infty$ according to
$M(n)=b_0+b_1(1/n)^{1/2}+b_2(1/n)^{3/2}$
we have used SUB$n$-$n$ data for $n=4,5,\ldots,8$. [Note that even for
the $s=1/2$ case we have excluded the (available) LSUB9 and LSUB10 data
here to be consistent with the $s>1/2$ cases.]
}
\label{fig5}
\end{figure}
\subsection{Higher spin quantum numbers: $s>1/2$}
\label{larger_S}
Although several magnetic kagome compounds carry spins with $s>1/2$, such as
the $s=3/2$ magnet KCr$_3$(OH)$_6$(SO$_4$)$_2$,\cite{comp1} or the $s=5/2$
compound (H$_3$O)Fe$_3$(OH)$_6$(SO$_4$)$_2$,\cite{comp2}
far fewer theoretical results are available for those higher spin quantum numbers.
In the classical limit, $s \to \infty$, thermal fluctuations may lead to
$\sqrt{3}\times\sqrt{3}$ long-range order as $T \to
0$.\cite{huse1992,henley2009}
In most papers dealing with large-spin quantum models it has been found that quantum fluctuations select the
$\sqrt{3}\times\sqrt{3}$
state.\cite{chub92,sachdev1992,henley1995,cepas2011}
Moreover, magnetic long-range order might be possible for
higher spin values.\cite{sachdev1992,cepas2011}
For our CCM approach for $s>1/2$ we use (instead of the LSUB$n$ scheme) the alternative SUB$n$-$m$
approximation scheme to truncate the expansions of
$S$ and $\tilde S$
in Eqs.~(\ref{eq5}) and (\ref{eq5b}). This is because as $s$ increases the number of fundamental configurations $I$
retained at a given LSUB$n$ level also increases,
since each spin at any site $i$ may be raised up to $2s$ times by its raising operator $s^{+}_{i}$,
and hence each site index $i$ may be repeated up to $2s$
times in the operators $C^{+}_{I}$ of Eq.~(\ref{set1}). In the SUB$n$-$m$ scheme we include
no more than $n$ spin flips spanning a range of no more than
$m$ contiguous lattice sites.\cite{farnell02,bishop04}
In what follows we consider the case $n=m$, i.e., SUB$n$-$n$, which for $s=1/2$ is identical to the LSUB$n$ scheme.
Since the number of coupled ket-state equations for a certain level of
SUB$n$-$n$ approximation increases with increasing spin quantum number $s$,
the highest level of approximation we can consider is SUB8-8 for
$s=1,3/2,2,5/2,3$.
The maximum number of ket-state equations we have to take into account
is 416126 for $s=3$.
For the extrapolation to $n \to \infty$ we use the same extrapolation
formulas as for $s=1/2$, and consider the SUB$n$-$n$ data for $n=4,5,6,7,8$.
First we discuss the GS selection. In Fig.~\ref{fig4}
we present the dependence on the anisotropy
parameter $\lambda$ of the energy
difference per site, $\Delta e \equiv e_0^{\sqrt{3}\times\sqrt{3}} - e_0^{q=0}$,
for the highest level of approximation that we have performed
(viz., SUB$8$-$8$), and for values of the spin quantum number
$s=1/2,1,\ldots,3$.
We find that the $\sqrt{3}\times\sqrt{3}$ state
is selected for all values $s>1/2$.
This is in agreement with previous studies based on
an expansion around the
classical limit,\cite{chub92,sachdev1992,henley1995} but it is in contrast to our
findings for the extreme quantum case $s=1/2$. Hence, interestingly, our results suggest
that for the frustrated quantum spin system under consideration the $s=1/2$
case and the cases $s>1/2$ may exhibit different behavior. It is
interesting to note that a similar effect has also been observed for a frustrated quantum spin chain.\cite{krivnov07}
We see clearly that the energy
difference $\Delta e$ scaled by $s^2$ decreases monotonically with increasing
$s$, thereby demonstrating that for $s \to \infty$ the $\sqrt{3}\times\sqrt{3}$ and the
$q=0$ states become degenerate.
\begin{table}[htb]
\centering
\caption{CCM results for the HAFM on the kagome lattice (i.e., at
$\lambda =
1$) for spin quantum numbers $s=1,3/2,2,5/2,3$.
The quantity $e_0 \equiv E/N$ is the GS energy per spin and $M$ is the magnetic order
parameter (sublattice magnetization).
The SUB$n$-$n$ results are extrapolated to $n\to \infty$ according
to
$e_0(n) = a_0 + a_1(1/n)^2 + a_2(1/n)^4$
and
$M(n)=b_0+b_1(1/n)^{1/2}+b_2(1/n)^{3/2}$
using SUB$n$-$n$ data for $n=4,5,6,7,8$.
}
\begin{tabular}{|c|rr|rr|}\hline\hline
\parbox[0pt][1.5em][c]{0cm}{} & \multicolumn{2}{c|}{$\sqrt{3}\times\sqrt{3}$} & \multicolumn{2}{c|}{$q=0$} \\\cline{2-5}\hline\hline
$s=1$ & $e_0/s^2$ & $M/s$& $e_0/s^2$ & $M/s$\\\hline
SUB8-8 & -1.383644 & 0.580079 & -1.379680 & 0.607293 \\
extr4-8 & -1.4031 & $< 0$ & -1.3965 & $< 0$ \\\hline\hline
$s=3/2$ & $e_0/s^2$ & $M/s$& $e_0/s^2$ & $M/s$\\\hline
SUB8-8 & -1.257354 & 0.690229 & -1.254588 & 0.709167 \\\hline
extr4-8 & -1.2680 & 0.0744 & -1.2643 & 0.2438 \\\hline\hline
$s=2$ & $e_0/s^2$ & $M/s$& $e_0/s^2$ & $M/s$\\\hline
SUB8-8 & -1.195442 & 0.735642 & -1.193145 & 0.754580 \\\hline
extr4-8 & -1.2026 & 0.2029 & -1.2000 & 0.3645 \\\hline\hline
$s=5/2$ & $e_0/s^2$& $M/s$& $e_0/s^2$ & $M/s$\\\hline
SUB8-8 & -1.157697 & 0.766290 & -1.155703 & 0.785822 \\\hline
extr4-8 & -1.1627 & 0.2942 & -1.1607 & 0.4586 \\\hline\hline
$s=3$ & $e_0/s^2$ & $M/s$& $e_0/s^2$ & $M/s$\\\hline
SUB8-8 &-1.132263 & 0.788722 &-1.130497 & 0.808862 \\\hline
extr4-8 &-1.1360 & 0.3583 &-1.1344 & 0.5256 \\\hline\hline
$s \to \infty $ & $e_0/s^2$ & $M/s$& $e_0/s^2$ & $M/s$\\\hline
exact &-1 & 1 &-1 & 1 \\\hline\hline
\end{tabular}
\label{table2}
\end{table}
Next we discuss the magnetic order parameter $M$.
To compare results for various values of $s$ it is useful to
consider $M/s$.
Since for $s \to \infty $
the chosen reference state is an eigenstate we would get $M/s=1$ within our CCM approach in this classical limit.
Hence, we may expect that by
increasing the spin quantum number the quantity $M/s$ becomes nonzero for a certain value $s > s_0$ in the whole range $0 \le \lambda \le 1$ (and see also, e.g., Ref.~\onlinecite{sachdev1992}).
In Fig.~\ref{fig5}
we show the extrapolated magnetic order parameter for both reference states as a function of the
anisotropy parameter $\lambda$, for values $s=1/2,1,\cdots,5/2,3$.
For large values of $s$ the scaled order parameter becomes almost constant, $M/s \sim 1$, over a wide range of $\lambda$ values. However, as the true kagome
point at $\lambda=1$ is approached we find a steep decay of $M/s$. Nevertheless, only for $s=1/2$ (as discussed already above)
and for $s=1$ does the extrapolated order parameter vanish at $\lambda=1$, whereas $M/s$
remains nonzero for $s>1$. Hence, our data suggest that for higher values of
$s$ a $\sqrt{3}\times\sqrt{3}$ magnetic order might be possible.
To provide more detailed information on the GS properties of the kagome HAFM
we present in Table~\ref{table2}
CCM-SUB$8$-$8$ data as well as extrapolated data at $\lambda=1$, for
values of the spin quantum number $s=1,3/2,2,5/2,3$.
We see clearly that the scaled GS energy per spin approaches the classical value, $e_0/s^2=-1$, quite rapidly as $s$ is increased.
On the other hand, even for the largest spin considered here (viz., $s=3$)
the extrapolated order parameter remains relatively small, particularly for the
$\sqrt{3}\times\sqrt{3}$ state.
\begin{figure}[ht]
\begin{center}
\epsfig{file=fig6a.eps,scale=0.65,angle=0.0}
\epsfig{file=fig6b.eps,scale=0.65,angle=0.0}
\end{center}
\caption{Dependence on the spin quantum number $s$ of (a) the extrapolated scaled GS energy per spin, $e_0/s^2$, and
(b) the extrapolated scaled magnetic order parameter, $M/s$, of the kagome HAFM. For $M/s$ the exponent $\alpha = 1/2$ for the $\sqrt{3}\times\sqrt{3}$ state and $\alpha=2/3$ for the $q=0$
state.
(Symbols: CCM data points; lines: fits to the data points.)
}
\label{fig6}
\end{figure}
Our CCM data for $e_0$ and $M$, available up to $s=3$, together with the known
results in the classical limit, $\lim_{s \to \infty} e_0/s^2 = -1$ and $\lim_{s \to
\infty}
M/s = 1$, also allow us to discuss the $s$-dependence of $e_0/s^2$ and $M/s$ in
the large-$s$ limit.
In spin-wave theories one typically obtains expansions for $e_{0}$ and $M$ in powers of $1/s$.\cite{oitmaa1992,chub94}
For the GS energy of the kagome HAFM the standard linear spin-wave theory yields for both the $\sqrt{3}\times\sqrt{3}$ and the $q=0$ states, $e_0/s^2=-1-0.4412/s$ (and see Refs.~\onlinecite{Harris1992,suzuki1994}). On the other hand,
due to the presence of the flat zero mode in the kagome HAFM, the integral for
the order parameter diverges.\cite{suzuki1994,star2004}
Using an effective spin-wave theory, in which short-wavelength fluctuations are neglected, Asakawa and
Suzuki\cite{suzuki1994} obtained
$M/s= 1 - 0.336/s$, whereas Chubukov\cite{chub92} found fluctuation corrections
proportional to $s^{-2/3}$ (in contrast to conventional spin-waves) by using
a self-consistent spin-wave approach.
Here we use our extrapolated CCM data for $s=3/2,2,5/2$ and $3$ to find the leading
corrections to the classical values. By fitting the extrapolated GS
energy $e_0/s^2$ with a fitting function $f(s)=-1+a_1 s^{-a_2}$ we obtain a value for the exponent $a_2$ very close to one for
both reference states.
Hence, finally we have fitted the extrapolated CCM data for $e_0/s^2$
using $f(s)=-1+x_1
s^{-1} + x_2s^{-2}$. The fits yield the corresponding values $x_1=-0.414$ ($-0.410$) and $x_2=0.018$
($0.020$) for the $\sqrt{3}\times\sqrt{3}$ ($q=0$) reference states, as shown in Fig.~\ref{fig6}(a).
Obviously, the $\frac{1}{s}$ prefactor is close to that of the linear
spin-wave theory, and the contribution of the next order is small.
Note, however, that if the fitting function $f(s)$, with the above given values for $x_1$
and $x_2$, is applied to the $s=1/2$ case, it does not reproduce the GS selection of the $q=0$
state in this extreme quantum case [cf.~Fig.~\ref{fig6}(a) at
the value $1/s = 2$].
Next we use $g(s)=1+b_1 s^{-b_2}$ as a fitting function to
fit the extrapolated CCM order parameters $M/s$, again using data
for $s=3/2,2,5/2$ and $3$.
For the exponent $b_2$ we get the values
$b_2=0.529$ for the $\sqrt{3}\times\sqrt{3}$ reference state
and $b_2=0.666$ for the $q=0$ reference state.
Clearly, the leading correction is not proportional to
$s^{-1}$; rather Chubukov's result\cite{chub92} of a
leading correction proportional to $s^{-2/3}$
is confirmed for the $q=0$ state.
By contrast, for the
$\sqrt{3}\times\sqrt{3}$ reference state our results are in favor of
a leading correction for the order parameter
proportional to $s^{-1/2}$.
Hence, finally we have fitted the extrapolated CCM data for $M/s$ using
the fitting functions
$g(s)=1+y_1
s^{-1/2} + y_2s^{-1}$ [$g(s)=1+y_1
s^{-2/3} + y_2s^{-4/3}$] for the $\sqrt{3}\times\sqrt{3}$ [$q=0$] reference states. The fits yield the corresponding values $y_1=-1.058$ [$-1.000$] and $y_2=-0.094$ [$0.006$], as shown in
Fig.~\ref{fig6}(b).
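The exponent fits just described are straightforward to reproduce from the extrapolated values in Table~\ref{table2}; a minimal sketch (in Python, with \texttt{scipy}) is given below.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

s = np.array([1.5, 2.0, 2.5, 3.0])
# extrapolated M/s for s = 3/2,...,3 from Table II
Ms = {"sqrt3 x sqrt3": np.array([0.0744, 0.2029, 0.2942, 0.3583]),
      "q=0":           np.array([0.2438, 0.3645, 0.4586, 0.5256])}

def g(s, b1, b2):                   # leading-correction ansatz
    return 1.0 + b1 * s**(-b2)

for state, m in Ms.items():
    (b1, b2), _ = curve_fit(g, s, m, p0=(-1.0, 0.6))
    print(state, ": b1 = %.3f, b2 = %.3f" % (b1, b2))
# expected exponents: b2 close to 1/2 (sqrt3) and to 2/3 (q=0)
\end{verbatim}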
\section{Summary}
In the present investigation we present data for the GS energy per spin, $e_0$, and the order
parameter (sublattice magnetization), $M$,
of the kagome HAFM for spin quantum numbers $s=1/2,1,3/2,2,5/2,3$,
using high-order CCM-SUB$n$-$n$ calculations based on the $\sqrt{3}\times\sqrt{3}$ and
the
$q=0$ reference states.
Our best estimate of the GS energy for the $s=1/2$ case, viz., $e_0=-0.4372$, is clearly below rigorous upper bounds reported
recently,\cite{Evenbly2010,Yan2011} and it also agrees well with recent accurate
DMRG\cite{Yan2011} and
ED\cite{lauchli2011} results.
We find that the GS selection by quantum fluctuations depends on the
spin quantum number $s$. While for $s=1/2$ the $q=0$ state is selected, for all values $s>1/2$
the $\sqrt{3}\times\sqrt{3}$ state has lower energy.
The order parameter $M/s$ obtained by
an appropriate extrapolation of the CCM-SUB$n$-$n$ data
vanishes for $s=1/2$ and $s=1$, but we get small (but nonzero) values $M/s>0$ for all values of the spin quantum number $s>1$.
Using CCM data for $s=3/2,2,5/2$ and $3$ we determine also the leading quantum corrections to the classical values of the GS energy and the order parameter. \\
\section*{Acknowledgments}
O.G. and J.R. thank R.~Zinke and J.~Schulenburg for fruitful discussions.
|
2,869,038,156,914 | arxiv | \section{Introduction}
Given the popularity of social media, it has become much easier to collect
a large number of videos from the Internet for human action recognition.
An effective video representation is required for recognizing human actions
and understanding video content in such rapidly growing unstructured
data.
By far, the commonly used video representation for action recognition
has been the bag-of-words (BoW) model \cite{Peng2014a}. The basic
idea is summarizing/encoding local spatial-temporal features in a
video as a simple vector. Among local features, dense trajectory (DT)
\cite{Wang2011} and its improved variant (iDT) \cite{Wang2013} provide
state-of-the-art results on most action datasets \cite{Wang2013}.
The main idea is to construct trajectories by tracking densely sampled
feature points in frames, and compute multiple descriptors along the
trajectories.
Despite their success, DT and iDT can produce a huge number of local
features; e.g., for a low-resolution video of $320\times204$ pixels with
$175$ frames, they can generate $\sim52$ Mb of features \cite{Sapienza2014}.
It is difficult to store and manipulate such dense features for large
datasets with thousands of high-resolution videos, especially for
real-time applications.
Existing works focus on reducing the total number of trajectory features
through uniformly random sampling at the cost of a minor reduction in
recognition accuracy. \cite{Shi2013} proposed a part model by which
features can be randomly sampled at lower image scales in
they are able to randomly sample features at lower image scales in
an efficient way. \cite{Kantorov2014} interpolated trajectories using
uniformly distributed nearby feature points. \cite{Sapienza2014}
investigated the influence of random sampling on recognition accuracy
in several large scale datasets. However, intuitively, features extracted
around informative regions, such as human arms in hands waving, should
be more useful in action classification than features extracted on
the background. \cite{Mathe2012,Vig2012} proposed selective sampling
strategies on dense trajectory features based on saliency maps, produced
by modeling human eye movement when viewing videos. They are able
to achieve better recognition results with selectively sampled features.
However, it is impractical to obtain eye movement data for large datasets.
In this work, we investigate several feature sampling strategies for
action recognition, as illustrated in Fig.\,\ref{fig:intro}, and
propose two data-driven selective feature sampling methods. Inspired
by the success of applying object proposal techniques in efficient
saliency detection \cite{Li2013}, we construct saliency maps using
one recent object proposal method, EdgeBox \cite{Zitnick2014,Hosang2014},
and selectively sample dense trajectory features for action recognition.
We further extend EdgeBox to produce proposals and construct saliency
maps for objects with motions of interest. More effective features
can then be sampled for action classification. We evaluated several
feature sampling methods on a publicly available dataset, and show
that the proposed motion object proposal based selective sampling method
is able to achieve better accuracy using $25\%$ fewer features than
using the full feature set.
The remainder of this paper is organized as follows: first we give
a brief introduction to the DT/iDT features and the other components
of our action classification framework; then three different feature
sampling methods are described. Finally, we discuss experimental results
on a large video dataset.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{fig/intro}
\par\end{centering}
\caption{\label{fig:intro}Different feature sampling methods for action recognition.}
\end{figure}
\begin{figure*}
\begin{centering}
\includegraphics[width=1\textwidth]{fig/box}
\par\end{centering}
\caption{\label{fig:box}Illustration of selective sampling methods via object
proposal algorithms. From left to right, the original video frame,
dense optical flow field, estimated object boundaries, top $5$ scoring
boxes generated by EdgeBox, saliency map constructed using EdgeBox
proposals, estimated motion boundaries, top $5$ scoring boxes generated
by FusionEdgeBox, saliency map constructed using FusionEdgeBox.}
\end{figure*}
\section{Dense Trajectory Features}
The DT algorithm \cite{Wang2011} represents video data by dense
trajectories, together with appearance and motion features extracted
around the trajectories. On each video frame, feature points are densely
sampled using a grid with a spacing of $5$ pixels for $8$ spatial
scales spaced by a factor of $1/\sqrt{2}$, as illustrated in the
second column of Fig.\,\ref{fig:intro}. Then trajectories are constructed
by tracking feature points in the video based on dense optical flows
\cite{Farneback2003}. The default length of a trajectory is $15$,
i.e., tracking feature points in $15$ consecutive frames. The iDT
algorithm \cite{Wang2013} further enhances the trajectory construction
by eliminating background motions caused by the camera movement.
For each trajectory, $5$ types of descriptors are extracted: 1) the
shape of the trajectory encodes local motion patterns, which is described
by a sequence of displacement vectors in both the x- and y-directions;
2) HOG, histogram of oriented gradients \cite{Dalal2005}, captures
appearance information, which is computed in a $32\times32\times15$
spatio-temporal volume surrounding the trajectory; 3) HOF, histogram
of optical flow \cite{Dalal2006}, focuses on local motion information,
which is computed in the same spatio-temporal volume as in HOG; 4+5)
MBHx and MBHy, motion boundary histograms \cite{Dalal2006}, are computed
separately for the horizontal and vertical gradients of the optical
flow. HOG, HOF and MBH are all normalized appropriately.
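As a concrete illustration of the first descriptor, the trajectory shape of \cite{Wang2011} is simply the sequence of displacement vectors normalized by the sum of their magnitudes; a minimal sketch (in Python) is given below.
\begin{verbatim}
import numpy as np

def trajectory_shape(points):
    # points: (L+1, 2) array of tracked (x, y) positions;
    # returns the length-2L shape descriptor
    disp = np.diff(points, axis=0)                # L displacement vectors
    total = np.linalg.norm(disp, axis=1).sum()
    return (disp / max(total, 1e-12)).ravel()

pts = np.cumsum(np.random.randn(16, 2), axis=0)   # default length L = 15
print(trajectory_shape(pts).shape)                # (30,)
\end{verbatim}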
To encode descriptors/features, we use Fisher vector \cite{Perronnin2010}
as in \cite{Wang2013}. For each feature, we first reduce its dimensionality
by a factor of two using Principal Component Analysis (PCA). Then
a codebook of size $256$ is formed by the Gaussian Mixture Model
(GMM) algorithm on a random selection of $256,000$ features from
the training set. To combine different types of features, we simply
concatenate their $l_{2}$ normalized Fisher vectors.
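For concreteness, the following sketch (in Python, with \texttt{scikit-learn}) outlines this encoding pipeline on toy data; the numbers of components, dimensions and the data itself are placeholders (recall that we use $256$ Gaussians and $256,000$ training features in practice), and the gradient formulas follow the improved Fisher vector of \cite{Perronnin2010} with power and $l_{2}$ normalization.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def fisher_vector(X, gmm):
    # improved Fisher vector of descriptors X (T, D) under a
    # diagonal-covariance GMM: gradients w.r.t. means and variances
    T = X.shape[0]
    gamma = gmm.predict_proba(X)                    # (T, K) posteriors
    w, mu, var = gmm.weights_, gmm.means_, gmm.covariances_
    d = (X[:, None, :] - mu[None]) / np.sqrt(var)[None]
    g_mu  = np.einsum('tk,tkd->kd', gamma, d) / (T * np.sqrt(w)[:, None])
    g_var = np.einsum('tk,tkd->kd', gamma, d**2 - 1.0) \
            / (T * np.sqrt(2.0 * w)[:, None])
    fv = np.hstack([g_mu.ravel(), g_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))          # power normalization
    return fv / max(np.linalg.norm(fv), 1e-12)      # l2 normalization

train = np.random.randn(5000, 96)                   # toy descriptors
pca = PCA(n_components=48).fit(train)               # halve dimensionality
gmm = GaussianMixture(n_components=8, covariance_type='diag').fit(
    pca.transform(train))
fv = fisher_vector(pca.transform(np.random.randn(300, 96)), gmm)
print(fv.shape)                                     # 2*8*48 = (768,)
\end{verbatim}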
For classification, we apply a linear SVM provided by LIBSVM \cite{Chang2011},
and a one-versus-rest approach is used for multi-class classification.
In all experiments, we fix $C=100$ in SVM as suggested in \cite{Wang2013}.
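In \texttt{scikit-learn}, an equivalent of this LIBSVM setup (shown here with random placeholder data in place of the encoded videos) would read:
\begin{verbatim}
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

clf = OneVsRestClassifier(LinearSVC(C=100))   # linear SVM, C fixed to 100
X = np.random.randn(40, 768)                  # placeholder Fisher vectors
y = np.repeat(np.arange(4), 10)               # placeholder action labels
clf.fit(X, y)
print(clf.predict(np.random.randn(3, 768)))
\end{verbatim}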
\section{Feature Sampling Strategies}
In the following, we describe three feature sampling methods, which
differ from using all trajectories and related features computed
on dense grids as in the DT/iDT algorithms. All three methods
assign each trajectory feature a sampling probability, denoted by $\sigma$,
which determines whether it will be sampled or not. For example,
$\sigma=0.8$ means we sample trajectory features with probability
greater than or equal to $0.8$ for action recognition.
\subsection{Uniformly Random Sampling}
Following previous work \cite{Shi2013,Sapienza2014}, we simply sample
dense trajectory features in a random and uniform way. The sampling
probability $\sigma$ is the same for each trajectory. In experiments,
we randomly sample $80\%$, $60\%$, $40\%$ and $30\%$ of trajectory
features, and report their action recognition accuracies respectively.
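This baseline is a one-line operation; e.g., keeping a $60\%$ random subset of trajectories (sketched in Python with a placeholder count):
\begin{verbatim}
import numpy as np

num_trajectories = 10000                     # placeholder feature count
keep = np.flatnonzero(np.random.rand(num_trajectories) < 0.6)
print(len(keep))                             # roughly 60% of trajectories
\end{verbatim}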
\subsection{Selective Sampling via Object Proposal}
EdgeBox \cite{Zitnick2014} is one of the most efficient object proposal algorithms
published recently \cite{Hosang2014}. We utilize it to construct
saliency map on each video frame, and sample trajectory features with
respect to computed saliency values.
In EdgeBox, given a video frame, object boundaries are estimated via
structured decision forests \cite{Dollar2013}, and object contours
are formed by grouping detected boundaries with similar orientations.
In order to determine how likely a bounding box is to contain objects of
interest, a simple but effective objectness score $s_{\textrm{obj}}$
was proposed, based on the number of contours that are wholly enclosed
by the box. We allow at most $10,000$ boxes of different sizes and
aspect ratios to be examined per frame. Fig.\,\ref{fig:box} illustrates
estimated object boundaries and top $5$ scoring boxes generated by
EdgeBox in the third and forth columns respectively.
Given thousands of object proposal boxes, on a video frame, we construct
a saliency map through a pixel voting procedure. Each object proposal
box is considered as a vote for all pixels located inside it. We normalize
all pixel votes into $\left[0,1\right]$ to form a saliency probability
distribution. Saliency map examples are illustrated in the fifth column
of Fig.\,\ref{fig:box}. Warmer colors indicate higher saliency probabilities.
Based on the constructed saliency maps of a video, we are able to selectively
sample trajectories and related features. If the saliency probability
of the starting pixel of a trajectory is higher than a predefined
sampling probability $\sigma$, the trajectory and related features
will be sampled. In experiments, we report recognition accuracies
for $\sigma$ with $0.2$, $0.4$ and $0.6$ respectively.
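The voting and sampling steps themselves are straightforward to implement; a minimal sketch (in Python, using max-normalization as one simple way of mapping votes into $[0,1]$) is given below.
\begin{verbatim}
import numpy as np

def saliency_map(boxes, height, width):
    # stack proposal boxes (x0, y0, x1, y1) into per-pixel votes
    votes = np.zeros((height, width))
    for x0, y0, x1, y1 in boxes:
        votes[y0:y1, x0:x1] += 1.0
    return votes / max(votes.max(), 1e-12)    # normalize into [0, 1]

def select_trajectories(starts, sal, sigma):
    # keep trajectories whose starting pixel has saliency >= sigma;
    # starts: (N, 2) integer array of (x, y) starting points
    return np.flatnonzero(sal[starts[:, 1], starts[:, 0]] >= sigma)

boxes = [(10, 10, 60, 80), (30, 20, 70, 90), (15, 40, 55, 70)]
sal = saliency_map(boxes, 120, 160)
starts = np.random.randint(0, (160, 120), size=(100, 2))
print(len(select_trajectories(starts, sal, sigma=0.4)))
\end{verbatim}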
\subsection{Selective Sampling via Motion Object Proposal}
Although stacking boxes generated via EdgeBox is able to highlight
regions in a frame with salient objects, the constructed saliency map
may not be suitable for sampling features for action recognition.
For example, in the last row of Fig.\,\ref{fig:box}, the optical
flow field (second column) clearly indicates that the region with motion
of interest for action recognition is located around the actor's head
and arms, while the top scoring boxes and the saliency map constructed via
EdgeBox incorrectly focus on the actor's legs. Thus, in order to incorporate
motion information, we propose a motion object proposal method,
named FusionEdgeBox, in which a fused objectness score is measured
on both object boundaries and motion boundaries.
The fusion score function is defined as
\begin{equation}
s_{\textrm{fusion}}=\alpha s_{\textrm{obj}}+\beta s_{\textrm{motion}}
\end{equation}
where $s_{\textrm{obj}}$ is the original EdgeBox score, $s_{\textrm{motion}}$
is the proposed motion objectness score, and $\alpha$ and $\beta$ are
balance parameters. We empirically fix $\alpha=\beta=1$ for all
experiments. $s_{\textrm{motion}}$ is defined similarly to $s_{\textrm{obj}}$,
i.e., based on the number of wholly enclosed contours in a box. However,
$s_{\textrm{motion}}$ utilizes contours that are grouped from motion
boundaries, which are estimated as image gradients of the optical
flow field. Motion boundary examples are shown in the sixth column
of Fig.\,\ref{fig:box}.
By plugging the fusion score into the EdgeBox framework, we are able
to generate a set of proposal boxes, and construct the saliency map
for feature sampling as well. Examples of the top $5$ scoring fusion
boxes and the constructed saliency maps are illustrated in the last two columns
of Fig.\,\ref{fig:box}, respectively. Comparing with the examples generated
by the original EdgeBox (shown in columns 3-5), we can see that FusionEdgeBox
is able to better explore regions with motions of interest, which
is useful for action feature sampling (verified by later experiments).
Similarly, we report recognition accuracies using sampled trajectory
features for $\sigma$ with $0.2$, $0.4$ and $0.6$ respectively.
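A sketch of the motion-boundary computation and of the fused score (in Python) is given below; the full $s_{\textrm{motion}}$ additionally requires grouping these boundaries into contours and counting the ones wholly enclosed by each box, exactly as EdgeBox does for intensity edges.
\begin{verbatim}
import numpy as np

def motion_boundary_magnitude(flow):
    # motion boundaries as image gradients of the optical-flow field;
    # flow: (H, W, 2) array with horizontal/vertical components
    gy_u, gx_u = np.gradient(flow[..., 0])
    gy_v, gx_v = np.gradient(flow[..., 1])
    return np.sqrt(gx_u**2 + gy_u**2 + gx_v**2 + gy_v**2)

def fusion_score(s_obj, s_motion, alpha=1.0, beta=1.0):
    # fused box score of Eq. (1); alpha = beta = 1 in all experiments
    return alpha * s_obj + beta * s_motion

flow = np.random.randn(120, 160, 2)             # placeholder flow field
print(motion_boundary_magnitude(flow).shape)    # (120, 160)
\end{verbatim}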
\section{Experiments}
We have conducted experiments on one publicly available video dataset,
namely J-HMDB \cite{Jhuang2013}, which consists of $920$ videos
of $21$ different actions. These videos are selected from the larger
HMDB dataset \cite{Kuehne2011}. J-HMDB also provides annotated bounding
boxes for actors on each frame. We report the average classification
accuracy over the three training/testing splits provided by J-HMDB.
In the following, we evaluate action recognition on J-HMDB using sampled
trajectory features through different methods, and discuss their performance.
We also compare the obtained accuracies with several state-of-the-art action
recognition algorithms.
\subsection{Influence of Sampling Strategies}
In addition to the three feature sampling methods introduced above, to better
understand trajectory features, we investigate a fourth sampling
method using the annotated bounding boxes of actors. We sample trajectory
features if the starting point of a trajectory is located inside an
annotation box. A similar strategy was proposed in \cite{Jhuang2013},
and we name it GT.
Figures \ref{fig:dt} and \ref{fig:idt} plot the average classification
accuracies over all classes for all sampling methods under different
sampling rates, using the DT feature and iDT feature respectively.
In general, through feature sampling, we are able to achieve higher
performance than by directly using all features, since noisy background
features have been discarded.
Specifically, for the DT feature, we can see that: 1) trajectory features
sampled inside annotated bounding boxes achieve higher accuracy
than using all features. A similar phenomenon has been observed in \cite{Jhuang2013}
as well, which indicates that DT features located around the human body are
more important than features extracted in other regions. 2) Selective
sampling methods achieve higher accuracies than random sampling given
a similar number of sampled features. This shows that sampling DT features
from certain regions is important for action recognition, and object
proposal based strategies are able to detect these regions. 3) The proposed
selective sampling via motion object proposal outperforms the other sampling
methods, and even outperforms the one based on annotated bounding boxes.
It verifies that the proposed FusionEdgeBox method is useful for exploring
regions of interest for action recognition.
For the iDT feature, however, different sampling methods result in
similar accuracies. Random sampling outperforms the others slightly, especially
when the number of sampled features is small. The reason may be that,
by eliminating background motion caused by the camera movement, the
iDT feature is more compact and meaningful than the DT feature; e.g.,
the average number of iDT features per video is much lower than that
of the DT feature. Random sampling is able to better preserve the original
iDT feature distribution than selective sampling, which has a quite
large sampling bias.
\begin{figure}
\begin{centering}
\includegraphics[width=0.9\columnwidth]{fig/class-jhmdb-dt}
\par\end{centering}
\caption{\label{fig:dt}Average accuracies using the DT feature.}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[width=0.9\columnwidth]{fig/class-jhmdb-idt}
\par\end{centering}
\caption{\label{fig:idt}Average accuracies using the iDT feature.}
\end{figure}
\subsection{Comparisons to state-of-the-arts}
\begin{table}
\begin{centering}
{\small{}}%
\begin{tabular}{|>{\raggedright}p{0.07\columnwidth}|c|c|c|}
\hline
\multicolumn{2}{|c|}{{\small{}Method}} & {\small{}J-HMDB} & {\small{}Memory (GB)}\tabularnewline
\hline
\hline
\multicolumn{2}{|c|}{{\small{}Dense Trajectory \cite{Wang2011}}} & {\small{}$62.88\%$} & {\small{}$5.4$}\tabularnewline
\hline
\multicolumn{2}{|c|}{{\small{}Improved Dense Trajectory \cite{Wang2013}}} & {\small{}$64.52\%$} & {\small{}$4.2$}\tabularnewline
\hline
\multicolumn{2}{|c|}{{\small{}Peng }\emph{\small{}et al.}{\small{} \cite{Peng2014} w/
iDT}} & {\small{}$69.03\%${*}} & {\small{}$4.2$}\tabularnewline
\hline
\multicolumn{2}{|c|}{{\small{}Gkioxari }\emph{\small{}et al.}{\small{} \cite{Gkioxari2014}}} & {\small{}$62.5\%$} & {\small{}-}\tabularnewline
\hline
\hline
\multicolumn{4}{|c|}{{\small{}Discard $20\%\sim25\%$ features}}\tabularnewline
\hline
\multirow{3}{0.07\columnwidth}{{\small{}DT}} & {\small{}Random} & {\small{}$62.33\%$} & {\small{}$4.3$}\tabularnewline
\cline{2-4}
& {\small{}EdgeBox} & {\small{}$65.33\%$} & {\small{}$4.5$}\tabularnewline
\cline{2-4}
& {\small{}FusionEdgeBox} & \textbf{\small{}$\mathbf{65.91\%}$} & {\small{}$4.0$}\tabularnewline
\hline
\multirow{3}{0.07\columnwidth}{{\small{}iDT}} & {\small{}Random} & {\small{}$\mathbf{65.49\%}$} & {\small{}$3.4$}\tabularnewline
\cline{2-4}
& {\small{}EdgeBox} & {\small{}$65.32\%$} & {\small{}$3.6$}\tabularnewline
\cline{2-4}
& {\small{}FusionEdgeBox} & {\small{}$65.11\%$} & {\small{}$3.5$}\tabularnewline
\hline
\hline
\multicolumn{4}{|c|}{{\small{}Discard $70\%\sim80\%$ features}}\tabularnewline
\hline
\multirow{3}{0.07\columnwidth}{{\small{}DT}} & {\small{}Random} & {\small{}$59.90\%$} & {\small{}$1.1$}\tabularnewline
\cline{2-4}
& {\small{}EdgeBox} & {\small{}$58.51\%$} & {\small{}$1.4$}\tabularnewline
\cline{2-4}
& {\small{}FusionEdgeBox} & {\small{}$\mathbf{60.71\%}$} & {\small{}$1.4$}\tabularnewline
\hline
\multirow{3}{0.07\columnwidth}{{\small{}iDT}} & {\small{}Random} & {\small{}$\mathbf{62.34\%}$} & {\small{}$1.3$}\tabularnewline
\cline{2-4}
& {\small{}EdgeBox} & {\small{}$58.85\%$} & {\small{}$1.2$}\tabularnewline
\cline{2-4}
& {\small{}FusionEdgeBox} & {\small{}$60.87\%$} & {\small{}$1.3$}\tabularnewline
\hline
\end{tabular}
\par\end{centering}{\small \par}
\caption{\label{tab:cmp}Comparison to the state of the art in terms of average
accuracy and feature size. {*} It leverages an advanced feature encoding
technique, the stacked Fisher vector.}
\end{table}
Table \ref{tab:cmp} shows comparisons of the feature sampling methods
at different sampling rates with the state of the art. The sampling methods
achieve better average accuracies than several state-of-the-art methods using the
same classification pipeline, with $\sim20\%$ fewer features. It is
interesting to observe that, even when discarding more than $70\%$ of the features,
random sampling and the proposed selective sampling are still able to
maintain comparable performance.
\section{Conclusions}
In this work, we focus on feature sampling strategies for action recognition
in videos. Dense trajectory features are utilized to represent videos.
Two types of sampling strategies are investigated, namely uniformly
random sampling and selective sampling. We propose to use object proposal
techniques to construct saliency maps for video frames, and use them
to guide the selective feature sampling process. We also propose a
motion object proposal method that incorporates object motion information
into the object proposal framework. Experiments conducted on a large video
dataset indicate that sampling-based methods are able to achieve better
recognition accuracy using $25\%$ fewer features through one of the proposed
selective feature sampling methods, and even maintain comparable accuracy
while discarding $70\%$ of the features.
\bibliographystyle{icip}
|
2,869,038,156,915 | arxiv | \section{Introduction}
An automated freight handling system (AFHS) is a type of automated material handling system (AMHS), which is widely adopted in facilities with massive material handling requests, such as freight terminals, distribution centers and production plants, to enhance system efficacy by minimizing operating cost and the risk of human errors. As the operating cost of an AMHS can represent up to 70\% of the cost of a product \citep{liu2002design,giordano2008integrated}, it is critical to smartly design and operate AMHS to improve the overall economic and environmental performance. There has been a considerable growth of interest in studying such problems in both industrial and academic contexts.
This work considers improving an AFHS installed in a freight terminal, which employs rail-guided vehicles (RGVs) to transport a tremendous amount of inbound and outbound cargo from their origins to destinations distributed along a linear track. The workload, which is especially high at its peak hour around midnight, leads to a conflict-prone environment that poses great challenges to terminal operations. The present AFHS has been developed to improve the terminal's throughput while eliminating potential human errors. However, the design is not optimal, especially in terms of energy efficiency. As energy consumption constitutes one of the major sources of operating cost and has gained increasing attention for enabling a greener and sustainable earth, improving the energy efficiency of AFHS and, in particular, developing an energy-efficient RGV routing strategy is of great interest to the industry. So far only heuristic methods have been employed to route RGVs in the current system, which motivates us to develop a rigorous mathematical programming method to improve the system performance. The new routing strategy also meets various service requirements such as delivery time windows (TWs) and avoidance of unloading deadlocks, etc., by incorporating them into the mathematical program from which the strategy results. This is in contrast to existing methods which achieve that by abruptly compromising the system's performance or by means of a posteriori sophisticated supervisory control \citep{giordano2008integrated}.
\subsection{Problem description}
A typical work area of the AFHS under consideration is depicted in Figure \ref{fig: toy example - scenario}. An RGV is operated over a linear track to transport containers between work stations located along both sides of the track. Containers are queued at work stations, and will be picked up and delivered to their destination stations on either side of the track by the RGV via so-called \emph{two-sided loading/unloading} (TSLU) operations. The RGV and work stations are equipped with roller decks to support the TSLU operations. When the RGV is docked to a station, the roller decks rotate forward or backward accordingly to load or unload containers from or to either side of the track. The RGV can carry multiple containers subject to a certain capacity limit.
Each transportation of a container is initiated with a pickup and delivery (PD) request to the central control system. Midway drop-off is not allowed in the current practice because of the substantial overhead caused by frequent acceleration and deceleration of the RGV. Once a container is picked up, it remains on the RGV until being delivered to its destination. As containers enter and depart from the AFHS dynamically, the control system accumulates unfinished PD requests and aims at routing the RGV to pick up and deliver the associated containers in an optimal sequence so as to minimize energy consumption required for completing all transport requests subject to service quality and conflict-avoidance constraints for smooth operations.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.9]{Notations_illustration}
\par
\end{centering}
\caption{Typical work area of an RGV in AFHS with exemplary PD requests.\label{fig: toy example - scenario}}
\end{figure}
\subsection{Related literature}
Sequencing PD tasks to be handled by a single vehicle is referred to as vehicle routing or job sequencing \citep{vis2006survey,roodbergen2009survey,carlo2014storage}, and can be treated as a pickup and delivery problem (PDP) \citep{berbeglia2007static,parragh2008survey}. The problem is NP-hard in general due to its complex combinatorial nature, and it includes our problem under consideration as a sophisticated case. Our problem is further complicated by dynamic arrivals of PD requests. So far there have been a variety of research investigations on static/dynamic vehicle routing problems (VRPs) for different applications \citep{psaraftis1988dynamic,berbeglia2010dynamic,pillac2013review}. The literature confined to a single vehicle is briefly reviewed as follows.
Static VRPs assume that all transport requests are known a priori. Atallah and Kosaraju \citep{atallah1987efficient} proved that the problem is polynomial-time solvable if the vehicle has unit capacity while confined to a linear track, when no precedence and TW constraints are imposed on the transport requests. The same problem turns out to be NP-hard if the track topology changes to a tree (or a general graph) \citep{frederickson1993nonpreemptive,frederickson1976approximation}. If the vehicle has multiple capacity, the problem is NP-hard even if the track is a simple path \citep{guan1998routing}. Another closely related problem, the PDP (for goods transportation) or the dial-a-ride problem (DARP, for passenger transportation), which includes TW constraints on PD requests, has been well studied. Algorithms based on branch-and-cut or column generation are available for solving problems of small to medium size \citep{parragh2008survey,cordeau2008recent}. Other related literature is on request/job sequencing in automated storage and retrieval systems (AS/RS); see \citep{roodbergen2009survey} for a recent review.
The aforementioned literature all considered PDPs without loading constraints. In many applications, however, constraints also arise from loading \citep{iori2010routing}. In the \emph{traveling salesman problem with pickup and delivery and LIFO loading} (TSPPDL), a single vehicle must serve paired PD requests while both pickup and delivery must be performed in LIFO (last in, first out) order. Heuristic algorithms for solving this problem were introduced in \citep{ladany1984optimal,carrabs2007variable}, and exact formulations and solutions relying on tailored branch-and-cut algorithms were reported in \citep{carrabs2007additive,cordeau2010branch}. Another related problem is the \emph{traveling salesman problem with pickup and delivery and FIFO loading} (TSPPDF), in which both pickup and delivery must be performed in FIFO (first in, first out) order. Heuristic algorithms for solving this problem were introduced in \citep{erdougan2009pickup}, and exact solutions were explored using additive branch-and-bound \citep{carrabs2007additive} and tailored branch-and-cut \citep{cordeau2010branchFIFO}, respectively.
In practice, VRPs are dynamic in nature as PD requests appear stochastically in time. Research on dynamic VRPs assumes that the requests arrive according to either an unknown or a known stochastic process \citep{berbeglia2010dynamic,pillac2013review}. With an unknown arrival process, deterministic algorithms have been proposed to treat dynamic VRPs, and their performance is evaluated by competitive analysis (which compares the worst-case cost achieved by the algorithm for a dynamic problem with the optimal cost of its static counterpart in which all requests are known a priori) \citep{ascheuer2000online,feuerstein2001line,berbeglia2010dynamic}. However, these algorithms and their analyses all rely on the assumption that the dynamic VRPs have no side constraints such as loading, precedence and TW constraints. On the other hand, if the arrival process of PD requests is known, strategies based on sampling or Monte Carlo simulation are often used to handle dynamic VRPs \citep{berbeglia2010dynamic}. Rolling-horizon based algorithms can also be employed if deterministic estimates of future requests are available \citep{psaraftis1988dynamic}. Alternatively, heuristic algorithms may be developed by exploiting structural properties of a problem; see \citep{berbeglia2010dynamic} for a relevant review.
In particular, among the existing literature investigating operational aspects of AFHS, {\emph{e.g.}}, \citep{lau2006joint,ou2010scheduling,tang2010improving,Hu2012IEEM} (investigations of higher-layer issues can be found in \citep{derigs2013air} and the references therein), \citep{Hu2012IEEM} studied the most relevant but simpler problem, in which the RGV is assumed to have unit capacity and the goal is to minimize the RGV's travel distance for completing all transport requests. That problem is a dynamic PDP with TW, FIFO queuing, and PD precedence constraints, and the investigation forms a pilot study towards a model and solution for the more complex routing problem considered in this work, of which partial results were reported in \citep{hu2013energy}.
This work differs from the existing literature in several aspects. \emph{Firstly}, the problem under investigation is a capacitated PDP under unique conflict-free service constraints. We completely characterize these service constraints under TSLU operations, which include the well-known LIFO and FIFO service constraints as two special cases. To the best of our knowledge, this is the first time such a characterization has become available for transport service under TSLU operations. \emph{Secondly}, the RGV routing problem aims to minimize the total energy consumed by the RGV in completing all PD requests, which aligns well with the practical interest in saving energy and reducing carbon emissions. This differs from the existing literature, where travel distance or makespan ({\emph{i.e.}}, task completion time) is taken as the objective \citep{berbeglia2007static,parragh2008survey,xin2014energy}. \emph{Thirdly}, structural properties of the new problem are exploited to reduce the problem domain and derive useful valid inequalities for improving computational efficiency. \emph{Fourthly}, a rolling-horizon approach is developed for treating the dynamic RGV routing problem, in which a way of handling nonzero initial conditions ({\emph{i.e.}}, the RGV starting with a nonempty load) is introduced and explained in detail. The new issues revealed and resolved reflect the challenges of enabling energy-efficient routing of an RGV in a real AFHS.
The rest of the work is organized as follows. Section \ref{sec: problem formulation} formulates the static RGV routing problem as a sophisticated PDP, characterizes its conflict-free service requirements, and develops a full optimization model for it. Section \ref{sec: solution approach} reformulates the initial model into a mixed-integer linear program (MILP), reduces its domain by removing infeasible solutions, and derives valid inequalities from the problem structure for expediting the solution process. Section \ref{sec: rolling-horizon-appro} presents a rolling-horizon approach to treat the dynamic problem, followed by comprehensive computational studies performed on random instances in Section \ref{sec: computational studies}. Finally, conclusions are drawn in Section \ref{sec: conclusions}. Supporting materials that help in understanding the main results are collected in the Appendices.
\section{Static Routing Problem Formulation} \label{sec: problem formulation}
A multi-capacity RGV is routed to serve $n$ PD requests with TW constraints on the deliveries. As midway drop-off is not allowed in current practice, fulfilling a PD request is equivalent to completing two tasks: a pickup task ({\emph{i.e.}}, loading a container at its origin) and a delivery task ({\emph{i.e.}}, unloading the container at its destination). Therefore, serving $n$ PD requests is equivalent to completing $2n$ tasks, and the optimal routing solution is a processing order of the $2n$ tasks under which the RGV consumes the least energy subject to service quality and feasibility constraints.
To facilitate the problem formulation, we assign a unique integer ID to each task. A pickup task is associated with an integer $i$, where $1\le i \le n$, and its corresponding delivery task with $i+n$. The $i$th PD request thus denotes a pair of directed tasks $i\dashrightarrow i+n$, where $\dashrightarrow$ denotes a simple path that may contain multiple arcs. We denote the sets of pickup and delivery tasks as $P$ and $D$, respectively, {\emph{i.e.}}, $P = \{1, 2, \dots, n\}$ and $D = \{n+1, n+2, \dots, 2n\}$. In the example shown in Figure \ref{fig: toy example - scenario}, there are seven PD requests indicated by dashed arrows, in which $P=\{1, 2, \dots,7\}$, $D=\{8, 9, \dots, 14\}$ and the PD requests correspond to seven pairs of PD tasks $\{1\dashrightarrow 8, 2\dashrightarrow 9, \dots, 7\dashrightarrow 14\}$. For convenience, we refer to the PD requests by their pickup tasks, as $P$.
The static RGV routing problem is then modeled by
a directed graph $G=(V, A)$, where a vertex $i$ in $V$ represents
a pickup or delivery task with the ID of $i$ and an arc $(i,j)$ in $A$ represents
a plausible processing order between the two tasks $i$ and $j$,
{\emph{i.e.}}, whether the task $j$ could be processed right after the task $i$. Two virtual tasks, $0$ and $2n+1$, are added to represent the start and end positions of the RGV, respectively. Define $V' = P \cup D = \{1,2,\dots,2n\}$ and $V = \{0\} \cup V' \cup \{2n+1\}$. The arcs $(i,j)$ for all $i,j\in V'$ and $i,j\in V$ give rise to arc sets $A'$ and $A$, respectively (both of which can be reduced by removing infeasible arcs in Section \ref{subsec: arc_set_reduction}).
A feasible routing solution is then a path starting from vertex $0$, going through all vertices in $V'$ exactly once, and ending at vertex $2n+1$. While a path can generally be formulated by a set of binary decision variables $x_{ij}$ that indicate the arcs $(i,j)$ used in the path, another group of binary decision variables $y_{ij}^k$, which indicate the arcs traversed during the service of each request $k$, is also required for enforcing the conflict-free service constraints revealed next. The new problem and its unique model differentiate our work from the existing literature.
\subsection{Conflict-free service constraints}
In practice there are physical constraints, such as the fact that containers on an RGV cannot swap positions, that restrict the TSLU operations. The multi-capacity RGV must therefore be routed to load and unload containers in a valid order so as to avoid infeasible operations and deadlock. Here, an \emph{infeasible operation} means an operation that attempts to unload a container before another in an order that is physically impossible to realize; and \emph{deadlock} means that a container cannot be unloaded without temporarily dropping off another container, and this remains so if the order of unloading the two containers is reversed. These requirements are equivalent to imposing conflict-free service constraints on the TSLU operations. Before describing these constraints, we classify all PD requests to be handled by the TSLU operations into four types:
\begin{align}
P^{1} & \triangleq \{i\in P:\,a_{i}=a_{i+n}=0\},\,P^{2}\triangleq\{i\in P:\,a_{i}=a_{i+n}=1\},\nonumber \\
P^{3} & \triangleq \{i\in P:\,a_{i}=0,\,a_{i+n}=1\},\,P^{4}\triangleq\{i\in P:\,a_{i}=1,\,a_{i+n}=0\},\label{eq: PD types}
\end{align}
where $a_i$ is an indicator variable showing on which side task $i$ occurs; it indicates the north side if $a_i = 0$ and the south side otherwise. The four types of PD requests are illustrated in Figure \ref{fig:Four-types-of PD requests}. For the example shown in Figure \ref{fig: toy example - scenario}, we have $P^1 = \{1\}$, $P^2 = \{3,5\}$, $P^{3} = \{2,6\}$ and $P^4 = \{4, 7\}$.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.6]{PD_types}
\par\end{centering}
\caption{Four types of PD requests.\label{fig:Four-types-of PD requests}}
\end{figure}
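To make the classification in (\ref{eq: PD types}) concrete, the following minimal Python sketch (illustrative only; the data layout is an assumption) partitions the pickup tasks by their side indicators:
\begin{verbatim}
def classify_requests(a, n):
    """Partition pickup tasks 1..n into the four types P^1..P^4.
    a[i] is the side indicator of task i (0 = north, 1 = south);
    the delivery task paired with pickup i is i + n."""
    P = {1: set(), 2: set(), 3: set(), 4: set()}
    for i in range(1, n + 1):
        if a[i] == 0 and a[i + n] == 0:
            P[1].add(i)      # pickup and delivery both on the north side
        elif a[i] == 1 and a[i + n] == 1:
            P[2].add(i)      # both on the south side
        elif a[i] == 0 and a[i + n] == 1:
            P[3].add(i)      # crossing request: north -> south
        else:
            P[4].add(i)      # crossing request: south -> north
    return P
\end{verbatim}
With side indicators matching Figure \ref{fig: toy example - scenario}, this reproduces $P^1 = \{1\}$, $P^2 = \{3,5\}$, $P^{3} = \{2,6\}$ and $P^4 = \{4, 7\}$.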
Various conflicts may arise if tasks belonging to different types of PD requests are processed sequentially. The conflict types studied in the literature ({\emph{e.g.}}, \citep{cordeau2010branch,cordeau2010branchFIFO,erdougan2009pickup}) arise in loading/unloading systems restricted to a single type of the PD requests defined above. Thus, their conflict-free service constraints can be satisfied by sticking to a fixed task service order: if the PD requests are all of Type-1 (or Type-2), the LIFO service order suffices; and if they are all of Type-3 (or Type-4), the FIFO service order does.
Under TSLU operations, however, a single LIFO or FIFO service order cannot guarantee conflict-free service. As there are four mixed types of PD requests, we face 10 groups of scenarios for serving different combinations of PD requests. Among them, one group is always conflict-free, in which tasks of different request types can be served in any order, while the remaining nine groups have to be constrained to avoid conflicts. Specifically, the conflict-free scenarios concern the service of PD request pairs $P^1 \sim P^2$, and the remaining nine groups concern the service of PD request pairs $P^1 \sim P^1/P^3/P^4$, $P^2 \sim P^2/P^3/P^4$, $P^3 \sim P^3/P^4$, and $P^4 \sim P^4$, where the symbol / means ``or''. Different constraints may be imposed in the nine groups of scenarios to ensure conflict-free services.
The ten groups of scenarios and their associated service constraints can be classified into the six cases below. The first two cases correspond to conventional scenarios with LIFO or FIFO loading restrictions, the next three cases are unique to the routing problem under consideration, and the last case describes conflict-free scenarios in which no special service constraint is required.
\subsubsection{Case 1. $P^1\sim P^1$ and $P^2\sim P^2$: LIFO service}
As illustrated in Figure \ref{fig:LIFO}, LIFO service order should be maintained between any pair of PD requests in $P^1$ (or $P^2$ alike) for conflict avoidance. Given requests $j,k \in P^1$ which share the RGV for a certain period of time, if the pickup task $j$ is processed after the pickup task $k$, then the delivery task $j+n$ must be processed before the delivery task $k+n$.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.6]{LIFO}
\par\end{centering}
\caption{LIFO service: $P^1\sim P^1$ and $P^2\sim P^2$. \label{fig:LIFO}}
\end{figure}
To characterize this mathematically, we introduce binary decision variables $y_{ij}^k$ for each request $k\in P$ and each arc $(i,j)$ in the feasible set. We have $y_{ij}^k = 1$ if arc $(i,j)$ is traversed on the path from vertex $k$ to vertex $k+n$ ({\emph{i.e.}}, during the service for request $k$), and $y_{ij}^k = 0$ otherwise. The LIFO service order is enforced by the following constraints:
\begin{equation}
\sum_{i: (i,j)\in A'}y_{ij}^k=\sum_{i: (i,j+n)\in A'}y_{i,j+n}^k, \quad
\begin{cases}
\forall j\in P^{1}\backslash\{k\}, k\in P^{1},\\
\forall j\in P^{2}\backslash\{k\}, k\in P^{2}.
\end{cases}\label{eq:LIFO}
\end{equation}
Constraint \eqref{eq:LIFO} means that if task $j$ is processed between $k$ and $k+n$, then task $j+n$ must also be processed between $k$ and $k+n$, namely, the LIFO service order is implemented.
\subsubsection{Case 2. $P^3\sim P^3$ and $P^4\sim P^4$: FIFO service}
Similarly, FIFO service order should be maintained between any pair of PD requests in $P^3$ (or $P^4$ alike) for conflict avoidance, as illustrated in Figure \ref{fig:FIFO}. This requirement can be met by enforcing constraint (\ref{eq:FIFO}) below, which means that if a pickup task of request $k$ in $P^3$ (or $P^4$) is processed before the service of request $j$ of the same type, then the delivery task of request $k$ must be processed before the completion of request $j$, {\emph{i.e.}}, the FIFO service order is enforced.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.6]{FIFO}
\par\end{centering}
\caption{FIFO service: $P^3\sim P^3$ and $P^4\sim P^4$. \label{fig:FIFO}}
\end{figure}
\begin{equation}
\sum_{i: (i,j)\in A'}y_{ij}^k+\sum_{i: (i,j+n)\in A'}y_{i,j+n}^k\le1,\quad \begin{cases}
\forall j\in P^{3}\backslash\{k\}, k\in P^{3},\\
\forall j\in P^{4}\backslash\{k\}, k\in P^{4}.
\end{cases}\label{eq:FIFO}
\end{equation}
\subsubsection{Case 3. $P^1\sim P^3$ and $P^2\sim P^4$: crossing first in (CFI) service}
This case is illustrated in Figure \ref{fig:Crossing-First-In}. Assume $j \in P^3, k \in P^1$ and they are simultaneously served by an RGV in a certain time interval. The PD request $j$ is a crossing request whose pickup and delivery locations are on opposite sides of the track. If the pickup task $j$ is processed after the pickup task $k$ but before the delivery task $k+n$, then the delivery task $j+n$ cannot be performed because the container associated with the request $k$ will block the way out. Thus, in this case, the crossing request must be handled first for ensuring conflict-free service, leading to the so-called CFI service order. This applies to the case for $j\in P^{4}, k\in P^{2}$ alike.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.6]{Crossing-First-In}
\par\end{centering}
\caption{CFI service: $P^1\sim P^3$ and $P^2\sim P^4$. \label{fig:Crossing-First-In}}
\end{figure}
The CFI service order can be enforced by the following constraints:
\begin{equation}
y_{ij}^k = 0, \forall (i,j)\in A',\quad \begin{cases}
\forall j\in P^{3}, k\in P^{1},\\
\forall j\in P^{4}, k\in P^{2}.
\end{cases}\label{eq:CFI}
\end{equation}
Also note that, for $j\in P^{1}, k\in P^{3}$ (or $j\in P^{2}, k\in P^{4}$), we can obtain another set of conflict-free service constraints, $\sum_{i: (i,j+n)\in A'}y_{i,j+n}^k \le \sum_{i: (i,j)\in A'}y_{ij}^k$, meaning that if task $j+n$ is processed between tasks $k$ and $k+n$, then task $j$ must also be processed in between. As shown in Appendix \ref{apdix: CFI-alternative}, this set of constraints is equivalent to the constraints in (\ref{eq:CFI}) and hence can be ignored.
\subsubsection{Case 4. $P^1\sim P^4$ and $P^2\sim P^3$: crossing last out (CLO) service}
Similarly, CLO service order should be maintained in Case 4 as illustrated in Figure \ref{fig:Crossing-Last-Out}. Assume that $j \in P^4, k \in P^1$ and they are simultaneously served by the RGV in a certain time interval. Request $j$ is still the crossing request. This time, the pickup tasks $j$ and $k$ can be processed in a flexible order, but the delivery task $j+n$ must be processed after the delivery task $k+n$ because the container associated with request $j$ is queued behind the one associated with request $k$ on the RGV.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.6]{Crossing-Last-Out}
\par\end{centering}
\caption{CLO service: $P^1\sim P^4$ and $P^2\sim P^3$. \label{fig:Crossing-Last-Out}}
\end{figure}
The CLO service order can be enforced by the following constraints:
\begin{align}
y_{i,j+n}^k=0, \forall (i,j+n)\in A', &\quad \begin{cases}
\forall j\in P^{4}, k\in P^{1},\\
\forall j\in P^{3}, k\in P^{2}.
\end{cases} \label{eq:CLO}
\end{align}
As in the CFI case, for all $j\in P^{1}, k\in P^{4}$ (or $j\in P^{2}, k\in P^{3}$), we can obtain another set of conflict-free service constraints, $\sum_{i: (i,j)\in A'}y_{ij}^k \le \sum_{i: (i,j+n)\in A'}y_{i,j+n}^k$, which is equivalent to the constraints in (\ref{eq:CLO}) and hence can be ignored. A proof of this fact is again given in Appendix \ref{apdix: CFI-alternative}.
\subsubsection{Case 5. $P^3\sim P^4$: Deadlock}
This case corresponds to the deadlock situation for any pair of PD requests $P^3\sim P^4$, as illustrated in Figure \ref{fig:Deadlock}. Such a pair of PD requests should never be served simultaneously by the RGV because the corresponding containers would block each other's way out.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.6]{Deadlock}
\par\end{centering}
\caption{Deadlock: $P^3\sim P^4$. \label{fig:Deadlock}}
\end{figure}
The deadlock can be avoided by enforcing the following constraint, which completely avoids overlap between services of the aforementioned two types of PD requests:
\begin{align}
y_{ij}^k = 0, \forall (i,j)\in A', \quad \begin{cases}
\forall j\in P^{4}, k\in P^{3},\\
\forall j\in P^{3}, k\in P^{4}.
\end{cases} \label{eq:Deadlock-a}
\end{align}
It can be shown that the above condition implies that $y_{i,j+n}^k = 0$, for all $(i,j+n)\in A'$ with $j, k$ in the domains given in \eqref{eq:Deadlock-a}.
\subsubsection{Case 6. $P^1\sim P^2$: Free case}
This case corresponds to the free case illustrated in Figure \ref{fig:Free}, in which the paired PD requests do not interfere with each other. Such requests can thus be freely served by the RGV with no dedicated service constraints.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.6]{Free}
\par\end{centering}
\caption{Free service: $P^1\sim P^2$. \label{fig:Free}}
\end{figure}
\begin{rem}
The LIFO and FIFO service constraints formulated in (\ref{eq:LIFO}) and (\ref{eq:FIFO}) have more concise forms than those developed in \citep{erdougan2009pickup}, where two additional groups of binary variables with the same dimensionality as $y_{ij}^k$ are required. The current formulation avoids those binary variables and the associated constraints, and hence is computationally more efficient in general.
\end{rem}
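To summarize the six cases operationally, the following minimal Python sketch (illustrative only; the route encoding is an assumption) checks whether a candidate task sequence respects the conflict-free service rules of Cases 1--5, with Case 6 requiring no check:
\begin{verbatim}
def conflict_free(route, typ, n):
    """route: a permutation of tasks 1..2n; typ[k] in {1,2,3,4} is the
    type of pickup task k as defined in Eq. (1)."""
    pos = {task: idx for idx, task in enumerate(route)}
    # task t is served within request k's interval if it lies strictly
    # between pickup k and delivery k+n on the route
    inside = lambda t, k: pos[k] < pos[t] < pos[k + n]
    for k in typ:
        for j in typ:
            if j == k:
                continue
            tk, tj = typ[k], typ[j]
            if tk == tj and tk in (1, 2):        # Case 1: LIFO
                if inside(j, k) != inside(j + n, k):
                    return False
            if tk == tj and tk in (3, 4):        # Case 2: FIFO
                if inside(j, k) and inside(j + n, k):
                    return False
            if (tk, tj) in ((1, 3), (2, 4)):     # Case 3: CFI
                if inside(j, k):
                    return False
            if (tk, tj) in ((1, 4), (2, 3)):     # Case 4: CLO
                if inside(j + n, k):
                    return False
            if {tk, tj} == {3, 4}:               # Case 5: deadlock
                if inside(j, k):
                    return False
    return True
\end{verbatim}
Such a checker can be used, for instance, to validate heuristic routes before committing them to the control system.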
\subsection{Problem formulation}
This subsection develops a mathematical model of the energy-efficient RGV routing problem. The major parameters and notations used are listed below for ease of reference:
{\footnotesize
\begin{tabular}[l]{c p{350pt}}
\\
$n$ & The number of PD requests; $|P| = |D| = n$;\\
$P$ & The set of pickup tasks, $P = \{1, 2, \dots, n\};$\\
$D$ & The set of delivery tasks, $D = \{n+1, n+2, \dots, 2n\};$\\
$V'$ & The union of pickup and delivery tasks, $V'=P\cup D$;\\
$A'$ & The set of arcs $(i,j)$ with $i,j\in V'$;\\
$V$ & The full tasks including virtual start and end tasks, $V=\{0\}\cup V' \cup \{2n+1\}$;\\
$A$ & The set of arcs $(i,j)$ with $i,j\in V$;\\
$Q$ & The RGV's capacity in units of load (and every unit has the same weight \footnote{Generalization can be made to associate each unit of load with a different weight if such information is available.});\\
$q_i$ & The units of load associated with task $i$, satisfying $q_i>0$, $q_{i+n} <0$ and $q_i = -q_{i+n}$ for all $i\in P$. There are several types of containers with different sizes. For example, the container in Figure \ref{fig: container_a} admits one unit of load, and so $q_i = 1, q_{i+n}=-1$; and the container in Figure \ref{fig: container_b} admits two units of load, and so $q_i=2, q_{i+n}=-2$;\\
$s_{i}$ & The operating duration of task $i\in V \backslash \{2n+1\}$, with $s_0 \triangleq 0$;\\
$[e_{i},l_{i}]$ & The delivery TW associated with each PD request $i \in P$;\\
$r_{ij}$ & The RGV travelling distance between tasks $i$ and $j$;\\
$t_{ij}$ & The RGV travelling time between tasks $i$ and $j$;\\
\\
\end{tabular}}
\begin{figure}
\centering
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{Container_a}
\caption{\footnotesize Container with unit load.}
\label{fig: container_a}
\end{subfigure}
\begin{subfigure}[b]{0.407\textwidth}
\includegraphics[width=\textwidth]{Container_b}
\caption{\footnotesize Containers with two units of load.}
\label{fig: container_b}
\end{subfigure}
\caption{Containers with different capacities.}
\label{fig: container}
\end{figure}
There are four groups of decision variables:
{\footnotesize
\begin{tabular}[l]{c p{350pt}}
\\
$x_{ij}$ & The binary variable to indicate whether task $i$ is processed right before task $j$;\\
$y_{ij}^k$ & The binary variable to indicate whether the arc $(i,j)$ is traversed on the path from vertices $k$ to $k+n$, where $k \in P$;\\
$b_{i}$ & The time starting to handle task $i\in V$, with $b_{0} \triangleq 0$;\\
$w_{i}$ & The units of load on the RGV upon completion of task $i\in V$, with $w_{0} \triangleq 0$.\\
&
\end{tabular}}
The static optimal RGV routing problem can be formulated as the mixed integer program defined in (\ref{eq: PDP-TSLU})-(\ref{eq:c - b and w}). For convenience of presentation, we term it the \emph{pickup and delivery problem with TSLU operations}, or \emph{PDP-TSLU} for short.
\begin{spacing}{1.0}
{\small
\begin{align}
\textbf{PDP-TSLU}: & \quad \min \sum_{i,j: (i,j)\in A}c_{ij}(w_{i})x_{ij}\label{eq: PDP-TSLU}\\
\text{subject to,} \quad \sum_{j:\,(i,j)\in A}x_{ij}=1 & \quad \forall i \in V\backslash \{2n+1\};\label{eq:c - outgoing degree}\\
\sum_{j:\,(j,i)\in A}x_{ji}=1 & \quad \forall i \in V\backslash \{0\};\label{eq:c - incoming degree}\\
b_{i+n}\ge b_{i}+s_{i}+t_{i,i+n} & \quad \forall i \in P;\label{eq:c - completion time}\\
b_{i \oplus 1}\ge b_{i}+s_{i} & \quad \forall b_{i \oplus 1}\ne \varnothing, i \in P;\label{eq:c - precedence in a pickup queue}\\
x_{ij}=1\Rightarrow\Biggl\{\begin{array}{l}
b_{j}\ge b_{i}+s_{i}+t_{ij}\\
w_{j}\ge w_{i}+q_{j}
\end{array} & \quad \begin{array}{l}
\forall (i,j)\in A,\,j \in V' \cup \{2n+1\},\\
\forall (i,j)\in A,\,j \in V';
\end{array}\label{eq:c - time consistency}\\
e_{i}\le b_{i+n}+s_{i+n}\le l_{i} & \quad \forall i \in P;\label{eq:c - time window}\\
\max\{0, q_i\} \le w_{i} \le \min\{Q, Q+q_i\} & \quad \forall i \in V';\label{eq:c - load bound}\\
\sum_{j:\,(i,j)\in A'}y_{ij}^k-\sum_{j:\,(j,i)\in A'}y_{ji}^k=\begin{cases}
1 & \text{if }i=k\\
-1 & \text{if }i=k+n\\
0 & \text{otherwise}
\end{cases} & \quad \forall i \in V',\,k \in P;\label{eq:c - pickups to deliveries}\\
\eqref{eq:LIFO}-\eqref{eq:Deadlock-a}; \label{eq:c-TSLU} \\
y_{ij}^k\le x_{ij} & \quad \forall (i,j)\in A',\,k \in P; \label{eq:c - decision consistency}\\
y_{ij}^k\in\{0,\,1\} & \quad \text{\ensuremath{\forall }}(i,j)\in A',\,k \in P; \label{eq:c - y}\\
x_{ij}\in\{0,\,1\} & \quad \text{\ensuremath{\forall }}(i,j)\in A; \label{eq:c - x}\\
b_{i}\ge 0,w_{j}\ge 0 & \quad \forall i \in V'\cup \{2n+1\},\,j \in V'. \label{eq:c - b and w}
\end{align}
}{\small \par}
\end{spacing}
The objective function in (\ref{eq: PDP-TSLU}) measures the total energy consumption and is a \emph{bilinear} function of $w_{i}$ and $x_{ij}$, whose specific form is derived as follows. Each arc $(i,j)\in A$ is associated with an energy cost $c_{ij}(w_{i})$ (which explicitly depends on the load weight) and a travel
time $t_{ij}$. With the operating profile of the RGV shown in
Figure \ref{fig:RGV profile}, the arc travel time can be expressed as an explicit function of
the arc distance $r_{ij}$ as
\begin{equation}
t_{ij}=\begin{cases}
2\sqrt{\frac{r_{ij}}{a}} & \text{if }0\le r_{ij}\le2r_{1},\\
2t_{1}+\frac{r_{ij}-2r_{1}}{v_{c}} & \text{if }r_{ij}>2r_{1},
\end{cases}\label{eq: t_ij}
\end{equation}
where the parameters $a$, $v_{c}$, $t_{1}$ and $r_{1}$ are defined in Figure \ref{fig:RGV profile}. Let $w_{RGV}$ be the net weight of the empty RGV and $w_i$ denote the load weight upon leaving vertex $i$. The energy cost $c_{ij}(w_i)$ is calculated as the work done by the RGV for traversing the arc $(i,j)$. By Newton's law we have,
\begin{eqnarray}
c_{ij}(w_{i})= & \begin{cases}
\mu gr_{ij}(w_{RGV}+w_{i}) & \text{if}\,a\le\mbox{\ensuremath{\mu}}g,\\
ar_{ij}(w_{RGV}+w_{i}) & \text{if}\,a>\mu g\,\,\text{and }0\le r_{ij}\le2r_{1},\\
\left(2(a-\mu g)r_{1}+\mu gr_{ij}\right)(w_{RGV}+w_{i}) & \text{if}\,a>\mu g\,\,\text{and }r_{ij}>2r_{1},
\end{cases}\label{eq: c_ij}
\end{eqnarray}
where $\mu$ is the rolling friction coefficient while the RGV moves
on the track and $g$ is the gravitational acceleration, both of which
are given constants. Although $\mu$, $g$ and $r_{ij}$ are fixed, the energy cost $c_{ij}(w_i)$ is a variable controllable by adjusting the task service order, since the load $w_{i}$ depends on that order. This cost is generally increasing in $w_{i}$, in contrast to the conventional routing cost, which is determined merely by the arc distance $r_{ij}$. Meanwhile, we set $c_{i,\,2n+1}=0$ for all $i\in D$, to account for the practice that the RGV stops at the last delivery.
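To make the cost model concrete, the following minimal Python sketch (illustrative only) evaluates the arc travel time and energy cost above; the parameter values are assumptions, and $r_{1}=v_{c}^{2}/(2a)$, $t_{1}=v_{c}/a$ follow from the trapezoidal speed profile in Figure \ref{fig:RGV profile}:
\begin{verbatim}
import math

a, v_c = 1.0, 2.0        # acceleration (m/s^2), cruise speed (m/s)
mu, g = 0.05, 9.81       # rolling friction coefficient, gravity
w_RGV = 500.0            # net weight of the empty RGV
r1, t1 = v_c**2 / (2 * a), v_c / a   # ramp-up distance and time

def travel_time(r_ij):
    if r_ij <= 2 * r1:                       # accelerate, then brake
        return 2 * math.sqrt(r_ij / a)
    return 2 * t1 + (r_ij - 2 * r1) / v_c    # cruise segment in between

def energy_cost(r_ij, w_i):
    m = w_RGV + w_i                          # total weight moved
    if a <= mu * g:                          # friction alone brakes the RGV
        return mu * g * r_ij * m
    if r_ij <= 2 * r1:
        return a * r_ij * m
    return (2 * (a - mu * g) * r1 + mu * g * r_ij) * m
\end{verbatim}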
\begin{figure}
\begin{centering}
\includegraphics[scale=0.7]{Operating_profile_of_RGV}
\par\end{centering}
\caption{Operating profile of an RGV. \label{fig:RGV profile}}
\end{figure}
We briefly explain the constraints of the PDP-TSLU. Constraints (\ref{eq:c - outgoing degree}) and (\ref{eq:c - incoming degree}) ensure that each task is served exactly once, with the service starting and ending at the virtual tasks 0 and $2n+1$, respectively. Constraint (\ref{eq:c - completion time}) states that each pickup task must be processed before its paired delivery task, and constraint (\ref{eq:c - precedence in a pickup queue}) means that containers at the same work station are picked up in FIFO order. In (\ref{eq:c - precedence in a pickup queue}), the term $i\oplus 1$ represents the pickup task queued right behind task $i$ at the same station. For the example shown in Figure \ref{fig: toy example - scenario}, we have $1\oplus 1 = 2$ and $4\oplus 1 = 5$.
The indicator constraint (\ref{eq:c - time consistency}) ensures the consistency of the service time and load variables as well as the route connectivity, and can be linearized using the big-$M$ method (refer to Appendix \ref{apdix: big-M formulation} for details). In (\ref{eq:c - time consistency}), the load inequality constraints are valid alternatives to the corresponding equality constraints for the optimization considered, whereas this holds for the time constraints only if the latest (but not the earliest) service constraints are active. Constraint (\ref{eq:c - time window}) imposes a TW constraint on the completion time of each request. Constraint (\ref{eq:c - load bound}) enforces the RGV's capacity limit, and is often tighter than the obvious bound $0\le w_{i} \le Q$ for all $i\in V'$.
Constraint (\ref{eq:c - pickups to deliveries}) ensures, for each PD request $k$, a path from vertex $k$ to vertex $k+n$ that does not pass through the virtual start or end vertex. Constraint (\ref{eq:c-TSLU}) consists of the unique conflict-free service constraints resulting from the TSLU operations as revealed in the previous subsection, and constraint (\ref{eq:c - decision consistency}) binds the overall routing decisions with the individual routing decisions of every PD request. Finally, constraints (\ref{eq:c - y})-(\ref{eq:c - b and w}) specify the domains of the decision variables.
\begin{rem}
If the objective function is replaced with total travel distance or time, the PDP-TSLU reduces to TSPPDL confined on a linear track if there are only PD requests of Type-1 (or -2), and to TSPPDF confined on a linear track if there are only PD requests of Type-3 (or -4). In the two reduced cases, the conflict-free service constraints (\ref{eq:LIFO})-(\ref{eq:Deadlock-a}) collapse to mean the LIFO and the FIFO service order, respectively.
\end{rem}
\section{Solution approach} \label{sec: solution approach}
This section reformulates the bilinear PDP-TSLU into an MILP, then reduces the arc set (and so the variables and constraints associated with it) based on structural insights into the PDP-TSLU, and finally introduces a couple of valid inequalities that exploit structural properties of the problem for solving it more efficiently.
\subsection{Reformulating PDP-TSLU as an MILP}
Let $\alpha_{ij}\triangleq ar_{ij}$ and $\beta_{ij}\triangleq 2(a-\mu g)r_{1}+\mu gr_{ij}$. The PDP-TSLU is bilinear in its present form because the objective function
has the following explicit form:
\begin{equation}
\sum_{(i,j)\in A}c_{ij}(w_{i})x_{ij}=\sum_{(i,j)\in A,r_{ij}\le2r_{1}}\alpha_{ij}\left(w_{RGV}+w_{i}\right)x_{ij}+\sum_{(i,j)\in A,r_{ij}>2r_{1}}\beta_{ij}\left(w_{RGV}+w_{i}\right)x_{ij},\label{eq: obj}
\end{equation}
which contains bilinear terms of $w_{i}x_{ij}$. This makes the problem
difficult to optimize directly. Fortunately, the objective function can be linearized by introducing
new decision variables and additional linear constraints.
Without loss of generality, let us assume $a>\mu g$ (which means
that braking is required for stopping a moving RGV at a deceleration
of $a$). In this case, the objective function of the PDP-TSLU is equivalent
to
\begin{eqnarray*}
\sum_{(i,j)\in A}c_{ij}(w_{i})x_{ij} & = & \sum_{i\in V\backslash\{2 n +1\}}z_{i},
\end{eqnarray*}
subject to, for all feasible index $j$,
\[
x_{ij}=1\Rightarrow z_{i}=\begin{cases}
\alpha_{ij}(w_{RGV}+w_{i}) & \forall (i,j)\in A,\,r_{ij}\le2r_{1},\\
\beta_{ij}(w_{RGV}+w_{i}) & \forall (i,j)\in A,\,r_{ij}>2r_{1}.
\end{cases}
\]
Note that it is unnecessary to introduce a variable $z_{ij}$ for each
$x_{ij}$, because by constraint (\ref{eq:c - outgoing degree}) there always exists exactly one immediate successor $j$ of $i$ such that $x_{ij} = 1$.
The above indicator constraints can be handled directly by solvers embedded
in software like CPLEX \citep{studio2011v124}. Alternatively,
they can be reformulated into linear forms as
\begin{eqnarray*}
z_{i}\ge\alpha_{ij}(w_{RGV}+w_{i})-\gamma_{ij}(1-x_{ij}), & & \forall (i,j)\in A,\,r_{ij}\le2r_{1},\\
z_{i}\ge\beta_{ij}(w_{RGV}+w_{i})-\gamma_{ij}(1-x_{ij}), & & \forall (i,j)\in A,\,r_{ij}>2r_{1},
\end{eqnarray*}
where $\gamma_{ij}$ are upper bounds of $\alpha_{ij}(w_{RGV}+w_{i})$
if $r_{ij}\le2r_{1}$ and of $\beta_{ij}(w_{RGV}+w_{i})$ if $r_{ij}>2r_{1}$. It is feasible to set $\gamma_{ij}=\alpha_{ij}(w_{RGV}+\min\{Q,Q+q_{i}\})$
and $\gamma_{ij}=\beta_{ij}(w_{RGV}+\min\{Q,Q+q_{i}\})$ for the
two cases, respectively. Note that the inequality relaxations of the
indicator constraints do not change the optimal solution because the
objective function minimizes the sum of $z_{i}$ with unit
coefficients. Consequently, the PDP-TSLU becomes an MILP solvable
by standard solvers. The solution process can further be enhanced by exploiting structural properties underlying the problem, as pursued in the next two subsections.
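For illustration, the following minimal Python sketch builds the linearized objective with the open-source PuLP modeling layer (the data are hypothetical, only the energy-related part of the model is shown, and any MILP front end would work similarly):
\begin{verbatim}
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

arcs = [(0, 1), (0, 2), (1, 2), (2, 1)]        # hypothetical arc set
coef = {arc: 3.0 for arc in arcs}              # alpha_ij or beta_ij
Q, w_RGV = 4, 500.0                            # capacity, RGV net weight
q = {0: 0, 1: 2, 2: -2}                        # load change at each vertex

prob = LpProblem("PDP_TSLU_sketch", LpMinimize)
x = {(i, j): LpVariable(f"x_{i}_{j}", cat="Binary") for (i, j) in arcs}
w = {i: LpVariable(f"w_{i}", lowBound=0, upBound=Q) for i in q}
z = {i: LpVariable(f"z_{i}", lowBound=0) for i in q}

prob += lpSum(z.values())                      # linearized objective
for (i, j) in arcs:
    # gamma_ij: upper bound on coef * (w_RGV + w_i), as in the text
    gamma = coef[i, j] * (w_RGV + min(Q, Q + q[i]))
    prob += z[i] >= coef[i, j] * (w_RGV + w[i]) - gamma * (1 - x[i, j])
# ... routing, TW and conflict-free constraints would be added here ...
\end{verbatim}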
\subsection{Arc set reduction} \label{subsec: arc_set_reduction}
For practical reasons, not all arcs of the complete graph are feasible. By exploiting structural properties of the problem, the arc set $A$ in PDP-TSLU is reducible to the one defined in (\ref{eq:arc set}),
where the relation $i \lhd j $ (or $i \rhd j$) means that pickup task $i$ is queued in front of (or behind) pickup task $j$ at the same station and the term $i\oplus 1$ was defined when explaining \eqref{eq:c - precedence in a pickup queue}.
\begin{align}
A&=\Bigl\{(0,j): j\in P \; \mbox{such that}\; \{k: k\lhd j\} = \varnothing \Bigr\}\nonumber\\
& \quad \cup\Bigl\{(i,\,2n+1): i\in D\; \mbox{such that}\; \{k: k \rhd i-n\} = \varnothing \Bigr\}\nonumber \\
& \quad \cup\Bigl\{ (i,j):\,i\in P,\,j\in P\cup D\backslash \cup_{l=1}^{4}S_{P,l}^{i} \backslash\cup_{l=1}^{3}S_{D,l}^{i}\Bigr\}\nonumber\\
& \quad \cup\Bigl\{ (i,j):\,i\in D,\,j\in P\cup D \backslash \cup_{l=5}^{9}S_{P,l}^{i} \backslash \cup_{l=4}^{8}S_{D,l}^{i} \Bigr\}\nonumber\\
& \quad \big\backslash\Bigl\{(i,\,j): i,j\in P\cup D\; \mbox{such that}\; i=j\;\mbox{or}\;i = j+n\Bigr\} ,\label{eq:arc set}
\end{align}
where
\begin{align*}
S_{P,\,1}^{i} & \triangleq \left\{ j\in P:j\lhd i\right\} ,\\
S_{P,\,2}^{i} & \triangleq \left\{ j\in P:j\rhd i\oplus1\right\} ,\\
S_{P,\,3}^{i} & \triangleq \left\{ j\in P:j\in P^{3}, \mbox{if}\;i\in P^{1}\cup P^{4}\right\}, \\
S_{P,\,4}^{i} & \triangleq \left\{ j \in P:\,j\in P^{4},\mbox{if}\;i\in P^{2}\cup P^{3}\right\} ,\\
S_{P,\,5}^{i} & \triangleq \left\{ j\in P:j \lhd i-n \right\} ,\\
S_{P,\,6}^{i} & \triangleq \bigl\{ j\in P: \{ k\in P^3: \,i-n\lhd k \lhd j \} \ne \varnothing, \,\mbox{if}\; i-n\in P^1 \bigr\},\\
S_{P,\,7}^{i} & \triangleq \bigl\{ j\in P: \{k\in P^4: \,i-n\lhd k \lhd j\}\ne \varnothing,\; \mbox{if}\;\,i-n\in P^2 \bigr\},\\
S_{P,\,8}^{i} & \triangleq \bigl\{ j\in P: \big|\,\{k\in P^3: i-n\lhd k\lhd j\}\,\big| \ge Q,\; \mbox{if}\;i-n \in P^3 \bigr\},\\
S_{P,\,9}^{i} & \triangleq \bigl\{ j\in P: \big|\,\{k\in P^4: i-n\lhd k\lhd j\}\,\big| \ge Q,\; \mbox{if}\;i-n \in P^4\bigr\},\\
S_{D,\,1}^{i} & \triangleq \left\{ j\in D:\,j - n \rhd i\right\} ,\\
S_{D,\,2}^{i} & \triangleq \left\{ j\in D:\,j - n\in P^{1}\cup P^{4}\backslash\{i\},\mbox{if}\;i\in P^{1}\cup P^{3}\right\},\\
S_{D,\,3}^{i} & \triangleq \left\{ j \in D:j - n \in P^{2}\cup P^{3}\backslash\{i\},\mbox{if}\;i\in P^{2}\cup P^{4}\right\} ,\\
S_{D,\,4}^{i} & \triangleq \left\{ j\in D:\,j - n \rhd i-n,\,\mbox{if}\;i-n\in P^{1}\cup P^{2}\right\} ,\\
S_{D,\,5}^{i} & \triangleq \left\{ j\in D:\,j - n \lhd i-n,\,\mbox{if}\;i-n\in P^{3}\cup P^{4}\right\} ,\\
S_{D,\,6}^{i} & \triangleq \left\{ j\in D:\,j - n \in S_{P,\,8}^{i} \cup S_{P,\,9}^{i}\right\} ,\\
S_{D,\,7}^{i} & \triangleq \left\{ j\in D:\,j - n\in P^{2}\cup P^{4}, \,\mbox{if}\;i-n\in P^{3}\right\} ,\\
S_{D,\,8}^{i} & \triangleq \left\{ j\in D:\,j - n\in P^{1}\cup P^{3}, \,\mbox{if}\;i-n\in P^{4}\right\} .
\end{align*}
Of the reduced arc set $A$, the first subset consists of arcs connecting the start vertex with the head vertex of each pickup queue. The second subset consists of arcs connecting each delivery vertex whose paired pickup vertex is the tail of a pickup queue with the end vertex. The third subset consists of arcs connecting a pickup vertex with all other pickup vertices that are neither predecessors of the current vertex ($S_{P,\,1}^{i}$) nor successors other than the nearest one in the same pickup queue ($S_{P,\,2}^{i}$), and whose services avoid unloading deadlock ({\emph{i.e.}}, excluding $S_{P,\,3}^{i} \cup S_{P,\,4}^{i}$); it also contains arcs connecting a pickup vertex with delivery vertices whose paired pickup vertices are not successors of the current vertex ($S_{D,\,1}^{i}$) and whose services avoid unloading deadlock ({\emph{i.e.}}, excluding $S_{D,\,2}^{i}\cup S_{D,\,3}^{i}$). The fourth subset of $A$ is the most involved. It contains arcs connecting a delivery vertex with pickup vertices that are not predecessors of its paired pickup vertex ($S_{P,\,5}^{i}$), that do not queue behind the first successor whose service would lead to unloading deadlock ($S_{P,\,6}^{i} \cup S_{P,\,7}^{i}$), and that do not have $Q$ or more requests of the same Type-3 or Type-4 as the current request queued between them and the paired pickup vertex ($S_{P,\,8}^{i}\cup S_{P,\,9}^{i}$). It also contains arcs connecting a delivery vertex with delivery vertices whose paired pickup vertices are not successors of the paired pickup vertex of the current request when that request is of Type-1 or Type-2 ($S_{D,\,4}^{i}$), are neither predecessors of it nor separated from it by $Q$ or more same-type requests when the current request is of Type-3 or Type-4 ($S_{D,\,5}^{i}\cup S_{D,\,6}^{i}$), and whose paired pickups lie on the same side as the pickup of the current request when the current request is of Type-3 or Type-4 ({\emph{i.e.}}, excluding $S_{D,\,7}^{i}\cup S_{D,\,8}^{i}$).
Overall, the above reductions of the arc set owe to the precedence, loading and capacity constraints inherent in the transport service. To get a sense of the power of these reductions, consider the toy example shown in Figure \ref{fig: toy example - scenario}. The arc set is reduced from an original size of 225 ($15^{2}$, for the complete graph) to a size of 114 (a much sparser graph), cutting out almost half of the arcs. A direct consequence of such reductions is that the decision variables and constraints associated with the identified infeasible arcs are eliminated before formulating and solving the problem, contributing to a more concise model and hence higher computational efficiency in general. It should be noted, however, that the elimination of infeasible arcs is incomplete, which is why the precedence, loading and capacity constraints must be retained in the PDP-TSLU.
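As a partial illustration of how such filtering can be implemented, the following Python sketch enumerates the admissible pickup successors of a pickup vertex $i$ using the exclusions $S_{P,\,1}^{i}$--$S_{P,\,4}^{i}$; the helper functions are hypothetical abstractions of the queue data:
\begin{verbatim}
def pickup_successors(i, pickups, typ, before, succ):
    """Pickup tasks j that may directly follow pickup i.
    before(a, b): a is queued in front of b at the same station;
    succ(i): the pickup queued right behind i, or None."""
    out = set()
    for j in pickups:
        if j == i or before(j, i):              # S_P1: j precedes i
            continue
        nxt = succ(i)
        if nxt is not None and before(nxt, j):  # S_P2: j skips i's
            continue                            #       queue successor
        if typ[i] in (1, 4) and typ[j] == 3:    # S_P3: CFI/deadlock
            continue
        if typ[i] in (2, 3) and typ[j] == 4:    # S_P4: CFI/deadlock
            continue
        out.add(j)
    return out
\end{verbatim}
The delivery-related exclusions $S_{D,\,l}^{i}$ and the remaining subsets of (\ref{eq:arc set}) can be filtered analogously.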
\subsection{Valid inequalities for PDP-TSLU\label{sec: Valid-constraints}}
For a branch-and-cut algorithm to solve the PDP-TSLU, valid inequalities
exploiting structural properties of the problem can be useful to tighten
the lower bounds and avoid unnecessary branching which will consequently
expedite the process of searching for an optimal solution.
The PDP-TSLU is essentially a generic PDP subject to new loading/unloading constraints.
Therefore some valid inequalities known for a generic PDP apply to the PDP-TSLU.
New valid inequalities can also be derived from the unique conflict-free service constraints
in a way similar to those from LIFO or FIFO loading constraints as
presented in \citep{cordeau2010branch,cordeau2010branchFIFO}. Although
there are dozens of such valid inequalities, not all of them achieve
good trade-offs between performance benefit and implementation cost: According to the numerical
studies reported in \citep{cordeau2010branch,cordeau2010branchFIFO},
some kinds of valid inequalities require complicated implementations
(with respect to the demand for computer memory and computational
burden) but only slightly accelerate the solution process. Similar
kinds of valid inequalities are therefore ignored for the PDP-TSLU without
further validation.
\subsubsection{\textit{Valid inequalities inherited from generic PDP} }
Let $S$ be a subset of $P\cup D$. Define $x(S)=\sum_{i,j\in S}x_{ij}$, and let $\pi(S)=\{i\in P|i+n\in S\}$
and $\sigma(S)=\{i+n\in D|i\in S\cap P\}$ which denote the sets of predecessors
and successors of certain vertices in $S$, respectively, where $i+n$ is the delivery task paired with the pickup task $i \in P$. Define $\bar{S}=V\backslash S$. Then the following subtour elimination
constraints are well-known for the PDP (as a generic property of TSP
problems) \citep{cordeau2010branch,cordeau2010branchFIFO}:
\begin{equation}
x(S)\le|S|-1,\,\,\forall \left|S\right|\ge2.\label{vc - subtour elimination}
\end{equation}
By taking account of precedence restrictions in PDP, these constraints
can further be strengthened as \citep{balas1995precedence,cordeau2010branch,cordeau2010branchFIFO}:
\begin{align}
x(S)+\sum_{i\in S}\sum_{j\in\bar{S}\cap\pi(S)}x_{ij}+\sum_{i\in S\cap\pi(S)}\sum_{j\in\bar{S}\backslash\pi(S)}x_{ij} & \le |S|-1,\label{vc - lifted subtour elimination1}\\
x(S)+\sum_{i\in\bar{S}\cap\sigma(S)}\sum_{j\in S}x_{ij}+\sum_{i\in\bar{S}\backslash\sigma(S)}\sum_{j\in S\cap\sigma(S)}x_{ij} & \le |S|-1.\label{vc - lifted subtour elimination2}
\end{align}
Another set of valid inequalities for the PDP are known as lifted $D_{k}^{+}$
and $D_{k}^{-}$ inequalities. The $D_{k}^{+}$ and $D_{k}^{-}$ inequalities
were first introduced by Gr\"{o}tschel and Padberg \citep{grotschel1985polyhedral}
and later strengthened by Cordeau \citep{cordeau2006branch}. The
lifted inequalities apply to any ordered set $S=\{i_{1},i_{2},\,\dots,i_{k}\}\subseteq V$
for $k\ge3$, and take the form of
\begin{align}
\sum_{h=1}^{k}x_{i_{h}i_{h+1}}+2\sum_{h=2}^{k-1}x_{i_{h}i_{1}}+\sum_{h=3}^{k-1}\sum_{l=2}^{h-1}x_{i_{h}i_{l}}+\sum_{j+n\in\bar{S}\cap\sigma(S)}x_{j+n,i_{1}} & \le k-1,\label{eq:vc - Dka}\\
\sum_{h=1}^{k}x_{i_{h}i_{h+1}}+2\sum_{h=3}^{k-1}x_{i_{1}i_{h}}+\sum_{h=4}^{k}\sum_{l=3}^{h-1}x_{i_{h}i_{l}}+\sum_{j\in\bar{S}\cap\pi(S)}x_{i_{1}j} & \le k-1,\label{eq:vc - Dkb}
\end{align}
where $i_{k+1}\triangleq i_{1}$.
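For a concrete instance (our own illustration), setting $k=3$ and $S=\{i_{1},i_{2},i_{3}\}$ in (\ref{eq:vc - Dka}) yields
\[
x_{i_{1}i_{2}}+x_{i_{2}i_{3}}+x_{i_{3}i_{1}}+2x_{i_{2}i_{1}}+\sum_{j+n\in\bar{S}\cap\sigma(S)}x_{j+n,i_{1}}\le2,
\]
which, among other things, cuts off the $2$-cycle $i_{1}\rightarrow i_{2}\rightarrow i_{1}$ together with any arc entering $i_{1}$ from a delivery vertex outside $S$ whose paired pickup lies in $S$.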
\subsubsection{\textit{Valid inequalities derived from LIFO and CLO service restrictions}}
We derive new valid inequalities from the LIFO and CLO service constraints, which are unique to the PDP-TSLU. Given $i\in P^{1},\,j\in P^{1}\backslash\{i\}$, if $x_{ij}=1$ in a feasible integer solution, then the pickup and delivery sequence must satisfy $0\prec i\prec j\prec j+n\prec i+n\prec 2n+1$, where the operator $\prec$ represents the precedence relationship. To enforce the LIFO service order under TSLU operations, the predecessor of $i+n$ can only be a pickup task in $P^{2}\cup P^{4}$ or a delivery task of a request in $P\backslash(P^{4}\cup\{i\})$ (including $j$). Meanwhile, a valid successor of $j+n$ can only be a pickup task in $P\backslash (P^{3} \cup \{i, j\})$ or a delivery task of a request in $P^{2}\cup P^{3}\cup\{i\}$. Similar reasoning can be applied to determine the possible predecessors of $i+n$ and successors of $j+n$ for $i$ belonging to $P^{4}$ and for following the CLO service order under TSLU operations. In summary, if arc $(i,j)$ is included in the routing path, then the sets of possible predecessors of $i+n$ are given as
\begin{align*}
\mathcal{P}_{i+n}(i,j)=P^{2}\cup P^{4}\cup\{k+n:\,k\in P\backslash(P^{4}\cup\{i\})\}, & \quad \forall i\in P^{1}, j\in P^{1}\backslash\{i\},\\
\mathcal{P}_{i+n}(i,j)=\left(P^{2}\cup P^{4}\backslash\{i\}\right)\cup\{k+n:\,k\in P\backslash(P^{3}\cup\{i\})\}, & \quad \forall i\in P^{4}, j\in P^{1},\\
\mathcal{P}_{i+n}(i,j)=P^{1}\cup P^{3}\cup\{k+n:\,k\in P\backslash(P^{3}\cup\{i\})\}, & \quad \forall i\in P^{2}, j\in P^{2}\backslash\{i\},\\
\mathcal{P}_{i+n}(i,j)=\left(P^{1}\cup P^{3}\backslash\{i\}\right)\cup\{k+n:\,k\in P\backslash(P^{4}\cup\{i\})\}, & \quad \forall i\in P^{3}, j\in P^{2},
\end{align*}
and meanwhile the sets of possible successors of $j+n$ are
\begin{align*}
\mathcal{S}_{j+n}(i,j)=\left(P\backslash(P^{3}\cup\{i,j\})\right)\cup\{k+n:\,k\in P^{2}\cup P^{3}\cup\{i\}\}, & \quad \forall i\in P^{1}, j\in P^{1}\backslash\{i\},\\
\mathcal{S}_{j+n}(i,j)=\left(P\backslash(P^{3}\cup\{i,j\})\right)\cup\{k+n:\,k\in P\backslash(P^{3}\cup\{j\})\}, & \quad\forall i\in P^{4}, j\in P^{1},\\
\mathcal{S}_{j+n}(i,j)=\left(P\backslash(P^{4}\cup\{i,j\})\right)\cup\{k+n:\,k\in P^{1}\cup P^{4}\cup\{i\}\}, & \quad \forall i\in P^{2}, j\in P^{2}\backslash\{i\},\\
\mathcal{S}_{j+n}(i,j)=\left(P\backslash(P^{4}\cup\{i,j\})\right)\cup\{k+n:\,k\in P\backslash(P^{4}\cup\{j\})\}, & \quad \forall i\in P^{3}, j\in P^{2}.
\end{align*}
On the other hand, given $i\in P^{1},\,j\in P^{1}\backslash\{i\}$, if $x_{j+n, i+n}=1$ in a feasible solution, then the pickup and delivery sequence again must follow the aforementioned precedence order. To enforce the LIFO service order under TSLU operations, the predecessor of $j$ can only be a pickup task in $\{i\}\cup P^{2}\cup P^{4}$ or a delivery task of a request in $P\backslash(P^{4}\cup\{i,j\})$. Meanwhile, a valid successor of $i$ can only be a pickup task in $P\backslash(P^{3}\cup\{i\})$ or a delivery task of a request in $P^{2}\cup P^{3}$. Similar reasoning can be applied to determine the possible predecessors of $j$ and successors of $i$ for $i$ belonging to $P^{4}$ and for satisfying the CLO service order under TSLU operations. In summary, if arc $(j+n,i+n)$ is included in the routing path, then the sets of possible predecessors of $j$ are given as
\begin{align*}
\mathcal{P}{}_{j}(j+n,i+n)=\{i\}\cup P^{2}\cup P^{4}\cup\{k+n:\,k\in P\backslash(P^{4}\cup\{i,j\})\}, & \quad \forall i\in P^{1}, j\in P^{1}\backslash\{i\},\\
\mathcal{P}{}_{j}(j+n,i+n)=\{0\}\cup P^{2}\cup P^{4}\cup\{k+n:\,k\in P\backslash(P^{3}\cup\{i,j\})\}, & \quad \forall i\in P^{4}, j\in P^{1},\\
\mathcal{P}{}_{j}(j+n,i+n)=\{i\}\cup P^{1}\cup P^{3}\cup\{k+n:\,k\in P\backslash(P^{3}\cup\{i,j\})\}, & \quad \forall i\in P^{2}, j\in P^{2}\backslash\{i\},\\
\mathcal{P}{}_{j}(j+n,i+n)=\{0\}\cup P^{1}\cup P^{3}\cup\{k+n:\,k\in P\backslash(P^{4}\cup\{i,j\})\}, & \quad \forall i\in P^{3}, j\in P^{2},
\end{align*}
and meanwhile the sets of possible successors of $i$ are
\begin{align*}
\mathcal{S}{}_{i}(j+n,i+n)=\left(P\backslash(P^{3}\cup\{i\})\right)\cup\{k+n:\,k\in P^{2}\cup P^{3}\}, & \quad \forall i\in P^{1}, j\in P^{1}\backslash\{i\},\\
\mathcal{S}{}_{i}(j+n,i+n)=\left(P\backslash(P^{3}\cup\{i\})\right)\cup\{k+n:\,k\in P^{1}\cup(P^{4}\backslash\{i\})\}, & \quad \forall i\in P^{4}, j\in P^{1},\\
\mathcal{S}{}_{i}(j+n,i+n)=\left(P\backslash(P^{4}\cup\{i\})\right)\cup\{k+n:\,k\in P^{1}\cup P^{4}\}, & \quad \forall i\in P^{2}, j\in P^{2}\backslash\{i\},\\
\mathcal{S}{}_{i}(j+n,i+n)=\left(P\backslash(P^{4}\cup\{i\})\right)\cup\{k+n:\,k\in P^{2}\cup(P^{3}\backslash\{i\})\}, & \quad \forall i\in P^{3}, j\in P^{2}.
\end{align*}
In the second and fourth cases of the predecessor sets of $j$, the vertex $0$ instead of $i$ is included in the set because $j+n\prec i+n$ does not imply $i\prec j$ in those two cases.
With the above insights, four groups of valid inequalities can be obtained for the PDP-TSLU, each of which identifies a group of arcs that are incompatible with the arc $(i, j)$ or $(j+n, i+n)$ being included in a valid routing path.
\begin{prop}
For each of the vertex pairs $i\in P^{1}\cup P^{4},\,j\in P^{1}\backslash\{i\}$,
or $i\in P^{2}\cup P^{3},\,j\in P^{2}\backslash\{i\}$, the following
inequalities hold for the PDP-TSLU:
\begin{align}
x_{ij}+\sum_{l\notin\mathcal{P}_{i+n}(i,j)}x_{l,i+n} & \le 1,\label{eq: vc - LIFO1a}\\
x_{ij}+\sum_{l\notin\mathcal{S}_{j+n}(i,j)}x_{j+n,l} & \le 1,\label{eq: vc - LIFO1b}\\
x_{j+n,i+n}+\sum_{l\notin\mathcal{P}_{j}(j+n,i+n)}x_{lj} & \le 1,\label{eq: vc - LIFO2a}\\
x_{j+n,i+n}+\sum_{l\notin\mathcal{S}_{i}(j+n,i+n)}x_{il} & \le 1,\label{eq: vc - LIFO2b}
\end{align}
where the variables are zero if their corresponding arcs are not defined
in $A$.
\end{prop}
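In a branch-and-cut implementation, these inequalities can be generated on demand. The following minimal Python sketch (illustrative only; the solver interface is abstracted away) assembles the left-hand side of (\ref{eq: vc - LIFO1a}) for a given pair $(i,j)$:
\begin{verbatim}
def lifo1a_cut(i, j, n, arcs, pred_set):
    """Arcs whose x-variables appear in the cut
    x_ij + sum over l not in pred_set of x_{l, i+n} <= 1.
    pred_set: the admissible predecessor set P_{i+n}(i, j);
    arcs: the reduced arc set A."""
    lhs = [(i, j)] if (i, j) in arcs else []
    lhs += [(l, i + n) for l in range(2 * n + 2)
            if l not in pred_set and (l, i + n) in arcs]
    return lhs   # pass to the solver as sum(x[a] for a in lhs) <= 1
\end{verbatim}
The remaining inequalities (\ref{eq: vc - LIFO1b})--(\ref{eq: vc - LIFO2b}) can be assembled in the same fashion.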
The next inequality comes from the fact that whenever an arc $(i,i+n)$
for any $i\in P^{3}\cup P^{4}$ is used, it follows from the CLO loading
restriction that the vehicle must arrive empty at $i$ and leave
empty from $i+n$.
\begin{prop}
For each of the vertex pairs $i\in P^{4},\,j\in P^{1}$, or $i\in P^{3},\,j\in P^{2}$,
the following inequality holds for the PDP-TSLU:
\begin{equation}
x_{ij}+x_{ji}+x_{i,i+n}+x_{i+n,j+n}\le1,\label{eq:vc - LIFO3a}
\end{equation}
where the variables are zero if their corresponding arcs are not defined
in $A$.
\end{prop}
The following inequality merges the two valid inequalities $x_{ij}+x_{i+n,j+n}+x_{j+n,i}\le1$
and $x_{ij}+x_{i+n,j+n}+x_{i+n,j}\le1$, for each pair of $i$
and $j$ in proper sets.
\begin{prop}
For each of the vertex pairs $i\in P^{1}\cup P^{4},\,j\in P^{1}\backslash\{i\}$,
or $i\in P^{2}\cup P^{3},\,j\in P^{2}\backslash\{i\}$, the following
inequality holds for the PDP-TSLU:
\begin{equation}
x_{ij}+x_{i+n,j+n}+x_{j+n,i}+x_{i+n,j}\le1,\label{eq:vc - LIFO3b}
\end{equation}
where the variables are zero if their corresponding arcs are not defined
in $A$.
\end{prop}
\subsubsection{\textit{Valid inequalities derived from FIFO and CLO service restrictions}}
This subsection derives new valid inequalities pertinent to the FIFO and CLO service constraints of the PDP-TSLU. Given $i\in P^{4},\,j\in P^{4}\backslash \{i\}$, if $x_{ij}=1$ in a feasible solution, then the pickup and delivery sequence must satisfy $0 \prec i \prec j \prec i+n\prec j+n\prec 2n+1$. To enforce the FIFO service order under TSLU operations, the successor of $i+n$ can only be a pickup task in $P\backslash(P^{3}\cup\{i,j\})$ or a delivery task of a request in $P^{2}\cup\{j\}$. Meanwhile, a valid predecessor of $j+n$ can only be a pickup task in $P^{2}\cup(P^{4}\backslash\{i,j\})$ or a delivery task of a request in $P^{1}\cup P^{2}\cup\{i\}$. Similar reasoning can be applied to determine the possible successors of $i+n$ and predecessors of $j+n$ for $i$ belonging to $P^{1}$ and for satisfying the CLO service order under TSLU operations. In summary, if arc $(i,j)$ is included in the routing path, then the sets of possible successors of $i+n$ are obtained as
\begin{align*}
\mathcal{S}_{i+n}(i,j)=\left(P\backslash(P^{3}\cup\{i,j\})\right)\cup\left\{ k+n:\,k\in P\backslash(P^{3}\cup\{i\})\right\} , & \quad \forall i\in P^{1}, j\in P^{4},\\
\mathcal{S}_{i+n}(i,j)=\left(P\backslash(P^{3}\cup\{i,j\})\right)\cup\left\{ k+n:\,k\in P^{2}\cup\{j\}\right\} , & \quad \forall i\in P^{4}, j\in P^{4}\backslash\{i\},\\
\mathcal{S}_{i+n}(i,j)=\left(P\backslash(P^{4}\cup\{i,j\})\right)\cup\left\{ k+n:\,k\in P\backslash(P^{4}\cup\{i\})\right\} , & \quad \forall i\in P^{2}, j\in P^{3},\\
\mathcal{S}_{i+n}(i,j)=\left(P\backslash(P^{4}\cup\{i,j\})\right)\cup\left\{ k+n:\,k\in P^{1}\cup\{j\}\right\} , & \quad \forall i\in P^{3}, j\in P^{3}\backslash\{i\},
\end{align*}
and meanwhile the sets of possible predecessors of $j+n$ are
\begin{align*}
\mathcal{P}_{j+n}(i,j)=P^{2}\cup(P^{4}\backslash\{j\})\cup\left\{ k+n:\,k\in P\backslash(P^{3}\cup\{j\})\right\} , & \quad \forall i\in P^{1}, j\in P^{4},\\
\mathcal{P}_{j+n}(i,j)=P^{2}\cup(P^{4}\backslash\{i,j\})\cup\left\{ k+n:\,k\in P^{1}\cup P^{2}\cup\{i\}\right\} , & \quad \forall i\in P^{4}, j\in P^{4}\backslash\{i\},\\
\mathcal{P}_{j+n}(i,j)=P^{1}\cup(P^{3}\backslash\{j\})\cup\left\{ k+n:\,k\in P\backslash(P^{4}\cup\{j\})\right\} , & \quad \forall i\in P^{2}, j\in P^{3},\\
\mathcal{P}_{j+n}(i,j)=P^{1}\cup(P^{3}\backslash\{i,j\})\cup\left\{ k+n:\,k\in P^{1}\cup P^{2}\cup\{i\}\right\} , & \quad \forall i\in P^{3}, j\in P^{3}\backslash\{i\}.
\end{align*}
Given $i\in P^{4},\,j\in P^{4}\backslash\{i\}$, if $x_{i+n, j+n}=1$ in a feasible solution, then to comply with the FIFO service order under TSLU operations, the successor of $i$ can only be a pickup vertex in $\{j\}\cup P^{1}\cup P^{2}$ or a delivery vertex of a request in $P^{1}\cup(P^{4}\backslash\{j\})$. Meanwhile, a valid predecessor of $j$ can only be a pickup vertex in $\{i\}\cup P^{1}$ or a delivery vertex of a request in $P\backslash(P^{3}\cup\{i,j\})$. Similar reasoning can be applied to determine the possible successors of $i$ and predecessors of $j$ for $i$ belonging to $P^1$ and for satisfying the CLO service order under TSLU operations. In summary, if arc $(i+n, j+n)$ is included in the routing path, then the sets of possible successors of $i$ are obtained as
\begin{align*}
\mathcal{S}_{i}(i+n,j+n)=\{j\}\cup(P^{1}\backslash\{i\})\cup P^{2}\cup\left\{ k+n:\,k\in P^{2}\cup P^{3}\cup\{i\}\right\} , & \quad \forall i\in P^{1}, j\in P^{4},\\
\mathcal{S}_{i}(i+n,j+n)=\{j\}\cup P^{1}\cup P^{2}\cup\left\{ k+n:\,k\in P^{1}\cup(P^{4}\backslash\{j\})\right\} , & \quad \forall i\in P^{4}, j\in P^{4}\backslash\{i\},\\
\mathcal{S}_{i}(i+n,j+n)=\{j\}\cup P^{1}\cup(P^{2}\backslash\{i\})\cup\left\{ k+n:\,k\in P^{1}\cup P^{4}\cup\{i\}\right\} , & \quad \forall i\in P^{2}, j\in P^{3},\\
\mathcal{S}_{i}(i+n,j+n)=\{j\}\cup P^{1}\cup P^{2}\cup\left\{ k+n:\,k\in P^{2}\cup(P^{3}\backslash\{j\})\right\} , & \quad \forall i\in P^{3}, j\in P^{3}\backslash\{i\},
\end{align*}
and meanwhile the sets of possible predecessors of $j$ are
\begin{align*}
\mathcal{P}_{j}(i+n,j+n)=\{0\}\cup P^{1}\cup(P^{4}\backslash\{j\})\cup\left\{ k+n:\,k\in P\backslash\{i,j\}\right\} , & \quad \forall i\in P^{1}, j\in P^{4},\\
\mathcal{P}_{j}(i+n,j+n)=\{i\}\cup P^{1}\cup\left\{ k+n:\,k\in P\backslash(P^{3}\cup\{i,j\})\right\} , & \quad \forall i\in P^{4}, j\in P^{4}\backslash\{i\},\\
\mathcal{P}_{j}(i+n,j+n)=\{0\}\cup P^{2}\cup(P^{3}\backslash\{j\})\cup\left\{ k+n:\,k\in P\backslash\{i,j\}\right\} , & \quad \forall i\in P^{2}, j\in P^{3},\\
\mathcal{P}_{j}(i+n,j+n)=\{i\}\cup P^{2}\cup\left\{ k+n:\,k\in P\backslash(P^{4}\cup\{i,j\})\right\} , & \quad \forall i\in P^{3}, j\in P^{3}\backslash\{i\}.
\end{align*}
With the above insights, four groups of valid inequalities, analogous to those in
(\ref{eq: vc - LIFO1a})-(\ref{eq: vc - LIFO2b}), can be obtained to invalidate infeasible predecessors and successors of a given task:
\begin{prop}
For each of the vertex pairs $i\in P^{1}\cup P^{4},\,j\in P^{4}\backslash\{i\}$,
or $i\in P^{2}\cup P^{3},\,j\in P^{3}\backslash\{i\}$, the following
inequalities hold for the PDP-TSLU:
\begin{align}
x_{ij}+\sum_{l\notin\mathcal{S}_{i+n}(i,j)}x_{i+n,l} & \le 1,\label{vc - FIFO1a}\\
x_{ij}+\sum_{l\notin\mathcal{P}_{j+n}(i,j)}x_{l,j+n} & \le 1,\label{vc - FIFO1b}\\
x_{i+n,j+n}+\sum_{l\notin\mathcal{S}_{i}(i+n,j+n)}x_{il} & \le 1,\label{vc - FIFO2a}\\
x_{i+n,j+n}+\sum_{l\notin\mathcal{P}_{j}(i+n,j+n)}x_{lj} & \le 1.\label{vc - FIFO2b}
\end{align}
\end{prop}
The next inequality is adapted from the one proposed for the TSPPDF in \citep{cordeau2010branchFIFO}.
It comes from the fact that whenever an arc $(i,i+n)$ for any
$i\in P^{3}\cup P^{4}$ is traversed, it follows from the FIFO service
restriction that the vehicle must arrive empty at $i$ and leave
empty from $i+n$.
\begin{prop}
For each of the vertex pairs $i\in P^{4},\,j\in P^{4}\backslash\{i\}$,
or $i\in P^{3},\,j\in P^{3}\backslash\{i\}$, the following inequality
holds for the PDP-TSLU:
\begin{equation}
x_{ji}+x_{i,i+n}+x_{i+n,j+n}\le1,\label{eq:vc - FIFO3a}
\end{equation}
where the variables are zero if their corresponding arcs are not defined
in $A$.
\end{prop}
The following inequality is again adapted from \citep{cordeau2010branchFIFO};
it merges the two inequalities $x_{ij}+x_{j+n,i+n}+x_{j+n,i}\le1$
and $x_{ij}+x_{j+n,i+n}+x_{i+n,j}\le1$, for each pair $i$ and
$j$ in the proper request sets.
\begin{prop}
For each of the vertex pairs $i\in P^{1}\cup P^{4},\,j\in P^{4}\backslash\{i\}$,
or $i\in P^{2}\cup P^{3},\,j\in P^{3}\backslash\{i\}$, the following
inequality holds for the PDP-TSLU:
\begin{equation}
x_{ij}+x_{j+n,i+n}+x_{j+n,i}+x_{i+n,j}\le1,\label{eq:vc - FIFO3b}
\end{equation}
where the variables are zero if the corresponding arcs are not defined
in $A$.
\end{prop}
The valid inequalities (\ref{vc - lifted subtour elimination1})-(\ref{eq:vc - FIFO3b})
will be used to enhance the branch-and-cut algorithm embedded in
CPLEX for solving the PDP-TSLU, and their usefulness will be evaluated
via numerical experiments.
\begin{rem}
The above valid inequalities are not exclusive to the PDP-TSLU and other derivations are possible. For example, valid inequalities can also be derived from the CFI service restrictions in a similar way. Our experiments have shown that they do not contribute much to the computational efficiency, and hence are ignored in our studies. For another example, valid inequalities can be derived from the geometric
restriction of a linear track, known as the ``edge degree balance''
property \citep{atallah1987efficient}. The property says that, for
an RGV (which starts and ends at the same position) traversing any
track segment, the number of times it moves towards the right must
be equal to that towards the left. The property remains true for the
PDP-TSLU if a virtual arc is introduced to connect the end vertex
with the start vertex. Simulations have shown
that such valid inequalities hardly accelerate the solution process of PDP-TSLU and hence are ignored also.
\end{rem}
\section{Rolling-horizon approach for handling dynamic PDP-TSLU} \label{sec: rolling-horizon-appro}
In practice, PD requests arrive stochastically and the PDP-TSLU becomes dynamic. Ideally, the dynamic PDP-TSLU would be solved to optimality at each decision point as a static problem that sequences all PD tasks known up to that point. In reality, however, a computer has limited computational capacity, and a suboptimal solution that considers a limited number of PD requests has to be explored at each decision point. This motivates us to adopt a rolling-horizon approach for handling the dynamic PDP-TSLU. To that end, we first need to treat the PDP-TSLU with a nonzero initial load.
\subsection{Handling nonzero initial conditions}
At the time of recomputing a routing solution, the RGV may contain
containers which have not been delivered, {\emph{i.e.}}, $w_{0}\ge1$. In this case,
the re-optimization is subject to a nonzero initial condition. The
initial load on the RGV may consist of containers that are destined to
stations on the same or different sides of the track, as depicted in
Figure \ref{fig:Initial requests}, where the impossible types
of initial requests are avoided by the previous routing solution.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.7]{Types_of_initial_request}
\par\end{centering}
\caption{Initial delivery requests and their completion as virtual PD requests.
\label{fig:Initial requests}}
\end{figure}
To recompute a routing decision, extra constraints are required for handling new requests without causing conflict with the existing load. Characterizing these constraints
and incorporating them into the present routing model are thus necessary.
The renewed problem can be reformulated as a standard one with zero initial load and partially determined arcs, as explained below.
Let there be $n_{0}$ delivery requests at the time when a
renewal of routing is triggered, of which $n_{0,1}$ are destined
to north stations and $n_{0,2}$ (equal to $n_{0}-n_{0,1}$)
to south stations. In the graph model, it is feasible
to represent the $n_{0,1}$ transport requests by $n_{0,1}$ virtual PD requests of Type-1, each associated with a travel distance equal to the current distance of the RGV to the
destined station (because visiting a virtual pickup vertex does not introduce a routing cost).
Similarly, the remaining $n_{0,2}$ transport requests can be represented
by $n_{0,2}$ virtual PD requests of
Type-2. See Figure \ref{fig:Initial requests} for a depiction of
the virtual PD requests, as indicated by arrows comprised of dashed
and solid lines. In the meantime, the current position of the RGV is represented
by a virtual start vertex, which is connected to one of the virtual
pickup vertices by an arc of zero routing cost. The order of visiting the virtual
start and pickup vertices is specified in such a way that it reproduces the initial load setting.
The routing model is then augmented to include the initial delivery
vertices, the virtual pickup and start vertices, and the
arcs associated. Denote the set of virtual pickup vertices of
Type-1 as $P_{0}^{1} \triangleq \{1, 2, \dots, n_{0,1}\}$, and those of Type-2 as $P_{0}^{2} \triangleq \{n_{0,1}+1,n_{0,1}+2, \dots, n_0\}$, and let $P_{0}\triangleq P_{0}^{1} \cup P_{0}^{2}$, as a subset of the augmented pickup vertex set $P$. Without loss of generality, we let the RGV start at the virtual start vertex 0 and visit the virtual pickup vertices in the order of $1, 2, \dots, n_0$,
resulting in the initial load setting. The arcs
associated with these virtual vertices ($\{0\}\cup P_{0}$)
are thus known a priori and the related arc set reduces to
\begin{align*}
A_{0} & = \left\{ (k, k+1):\,k=0, 1, \dots, n_0-1 \right\} \\
& \quad \cup\left\{ (n_0, j):\, j \in P^{1} \cup P^{2} \cup \left\{ n_{0,1}+h, n_{0,2}+h \right\}\right\}\\
& \quad \cup\left\{ (n_0, j): j \in P^{3}, \text{ if } n_{0,1}=0 \right\}\\
& \quad \cup\left\{ (n_0, j): j \in P^{4}, \text{ if } n_{0,2}=0 \right\} ,
\end{align*}
where $h$ is a given size of the rolling horizon, satisfying $h \ge n_0$. All arcs defined in $A$ directing to $\{0\}\cup P_{0}$
become null since all these vertices have already been visited. For each arc $(i,j)$
in the first subset of $A_{0}$ above, the arc distance $r_{ij}$ is equal to zero; and for each arc $(i,j)$
in the remaining subsets of $A_{0}$ above, $r_{ij}$ is equal to the distance
of the current position of the RGV to vertex $j$.
Consequently, the RGV routing problem with nonzero
initial load is transformed into a PDP-TSLU with partially determined arcs, in which an empty RGV starts at a virtual
vertex 0, visits the virtual pickup vertices in a predefined order,
and then visits the remaining pickup and delivery vertices within the rolling
horizon in an order determined by solving the
PDP-TSLU.
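A minimal sketch of this transformation (Python; \texttt{dist} is a placeholder for the distance from the RGV's current position to a vertex, and the special successor vertices $n_{0,1}+h$ and $n_{0,2}+h$ in the definition of $A_0$ are omitted for brevity) builds the predetermined arc prefix and its distances:
\begin{verbatim}
def fixed_prefix_arcs(n0, n01, P1, P2, P3, P4, dist):
    """Arcs fixed a priori when the RGV starts with n0 onboard loads.

    Vertices 1..n01 are Type-1 virtual pickups, n01+1..n0 Type-2;
    vertex 0 is the virtual start. Returns a dict {arc: distance}.
    """
    n02 = n0 - n01
    arcs = {}
    for k in range(n0):            # chain 0 -> 1 -> ... -> n0
        arcs[(k, k + 1)] = 0.0     # virtual pickups cost nothing
    succ = set(P1) | set(P2)       # admissible successors of n0
    if n01 == 0:
        succ |= set(P3)
    if n02 == 0:
        succ |= set(P4)
    for j in succ:                 # arcs leaving the virtual chain
        arcs[(n0, j)] = dist(j)    # current RGV position to vertex j
    return arcs
\end{verbatim}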
\subsection{The rolling-horizon approach}
As PD requests arrive stochastically in a complex manner, we adopt a rolling-horizon approach
to handling the dynamic PDP-TSLU. To make the approach work in real time,
we need to address two issues: One is to select an appropriate
rolling-horizon size and the other is about online implementation
of the routing decisions. In principle, the rolling horizon should be as long
as possible in order to avoid myopic decisions, but in the meantime it must be short enough to afford real-time decisions. The size of the horizon is thus limited by the computational time required
for solving a static PDP-TSLU, and the specific size needs to be determined
via simulations. Regarding the implementation of rolling decisions,
two situations have to be handled appropriately: One is when re-optimization
is demanded while the previous optimization is still in progress; and
the other is when a new decision becomes available while the RGV is in
the middle of executing the previous decision. With these situations
in mind, the rolling-horizon approach is designed to work as follows.
\emph{Rolling-horizon approach}: Whenever a new PD request is issued or there are un-sequenced PD
tasks in the system, the approach recomputes a sequencing solution for the
PD requests by solving the (augmented) PDP-TSLU over a horizon of a given size,
say, $h$ (or smaller, if fewer requests are present). If there are more than $h$ PD requests
to sequence, $h$ of them are selected based on the ``Earliest Due
First'' rule while respecting the FIFO service order at each station.
In this way, the RGV keeps serving PD tasks as sequenced by
the previous decision until a new decision comes out. In addition,
executing the first task of each decision is made compulsory, and a new
decision is adopted only after the RGV completes the current task of the
previous decision.
Following this approach, an RGV is able to handle PD requests continuously
and without obvious disruption, even though the decision is renewed over
time. This holds as long as the computation of a new decision
finishes before the RGV completes all requests in the previous horizon.
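The control loop of the approach can be summarized by the following sketch (Python), where \texttt{solve\_pdp\_tslu}, \texttt{select\_by\_edf} and the event stream are placeholders for the MILP solver, the ``Earliest Due First'' filter and the plant interface, respectively; it is a schematic outline, not the code used in our experiments.
\begin{verbatim}
def rolling_horizon_controller(h, events, solve_pdp_tslu, select_by_edf):
    """Re-sequence PD tasks whenever a request arrives or
    un-sequenced tasks remain; h is the horizon size."""
    pending, plan = [], []
    for new_requests, rgv_state in events:
        pending.extend(new_requests)
        if not pending:
            continue
        # at most h requests, Earliest Due First, keeping the FIFO
        # order within each station queue
        window = select_by_edf(pending, h)
        # rgv_state carries the onboard load, turned into virtual
        # start/pickup vertices as described above
        new_plan = solve_pdp_tslu(window, rgv_state)
        if new_plan is not None:   # adopt once the current task ends
            plan = new_plan
        # the RGV keeps serving `plan` until the next decision
    return plan
\end{verbatim}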
\section{Computational studies} \label{sec: computational studies}
A toy example is first given to illustrate the modeling and solution
of a static PDP-TSLU. Then computational results of static PDP-TSLUs with
randomly generated requests of various sizes are presented to evaluate
the usefulness of the valid inequalities for solving the PDP-TSLUs. Based
on the results, an appropriate horizon size is selected for
implementing the rolling-horizon approach. Then the rolling-horizon
approach is applied to handle dynamic PDP-TSLUs of random instances, and the results are compared with those obtained with a typical rule-based method.
The basic scenario settings of the static and dynamic PDP-TSLUs are as follows. In each instance, the PD requests are randomly generated within
a work area. Specifically, for each static instance a queue
of PD requests at every station on both sides of the track
is generated with a size following a uniform distribution within
$[0,a]$, where $a$ is a given integer which differs across
instances of different sizes. For each dynamic instance, the arrivals
of new PD requests follow a Poisson process and each request's location
follows a uniform distribution within the range of stations. The distance
between two neighboring stations is treated as a unit distance, and a
unit service time is assumed for each PD request, divided equally
between the pickup and the delivery operations. In our studies, an
RGV is assumed to have a constant mass equal to two units of load.
The travel distance of an RGV is defined as the total distance it
travels before the last delivery, and the energy cost of
an RGV is defined as per (\ref{eq: obj}). The friction
coefficient of the track is set to 0.05 and the gravitational acceleration to 9.8
$\mathrm{m/s^{2}}$. The problems are coded in C++ and solved via
IBM ILOG CPLEX 12.4 \citep{studio2011v124} which runs on a PC with
Intel Core2 Duo CPU @ 2.66 GHz and 2 GB of RAM.
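As a small illustration of how the weight-dependent cost is evaluated along a route, the following sketch assumes (hypothetically, since (\ref{eq: obj}) is not reproduced in this section) that the energy of a route is the friction work $\mu g \times \text{gross weight} \times \text{distance}$ summed over its segments:
\begin{verbatim}
MU, G = 0.05, 9.8   # friction coefficient and gravitational accel.

def energy_cost(segments, rgv_mass=2.0):
    """Energy of a route under an assumed friction-work model.

    segments: list of (distance, onboard_load) pairs along the route.
    """
    return sum(MU * G * (rgv_mass + load) * d for d, load in segments)

# An empty 3-unit leg followed by a 2-unit leg carrying one unit:
print(energy_cost([(3, 0), (2, 1)]))  # 0.49*(2*3 + 3*2) ~= 5.88
\end{verbatim}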
\subsection{A toy example of static PDP-TSLU}
Consider seven PD requests distributed in nine pairs of face-to-face
stations as shown in Figure \ref{fig: toy example - scenario}, whose graph representation is shown in Figure \ref{fig: toy_example}.
In the representation, pickup vertices sit in FIFO queues, while
delivery vertices do not follow any precedence order and are depicted
side by side. The RGV starts from location 3.
\begin{figure}
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{Toy_example_a}
\caption{\footnotesize RGV starting with empty load.}
\label{fig: toy_example_a}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{Toy_example_b}
\caption{\footnotesize RGV starting with two units of load.}
\label{fig: toy_example_b}
\end{subfigure}
\caption{Graph representations of scenarios with empty and nonempty initial
load.}
\label{fig: toy_example}
\end{figure}
Two scenarios are investigated: (a) the RGV starts with empty load, and
(b) the RGV starts with two units of load on board. The RGV is simulated with different capacities. The numerical results of scenario (a) are summarized in part
(a) of Table \ref{tab:Toy example}. We observe that
the energy cost drops considerably once the capacity increases from
1 to 2, but it ceases dropping as the RGV's capacity becomes larger. This is because the service restrictions and the weight-dependent
energy consumption tend to prevent full loading and hence limit the benefit
of having a larger capacity. The results also indicate that there can be
multiple solutions achieving the same energy cost. In
comparison, minimizing travel distance results in higher energy costs for all RGV capacities tested, namely 44.59, 52.43, 52.43, 48.51 and 52.43 for the capacity equal to $2, 3, \dots, 6$, respectively (while achieving travel distances of 30, 29, 29, 29 and 29, correspondingly).
In scenario (b), the RGV starts with two units of load
on board, one to be delivered to the north station at location 6
and the other to the south station at location 9. The other PD requests
remain the same as in scenario (a). By introducing a virtual start vertex
(0) and two virtual pickup vertices (1 and 4), the graph model is
augmented as in Figure \ref{fig: toy_example_b}. Solving the
augmented PDP-TSLU with zero initial load and partially determined
arcs under five different RGV capacities yields the computational results summarized in part (b) of Table \ref{tab:Toy example}.
The routing solutions turn out to be the same for the five choices
of capacity, resulting in a total energy cost of 55.86. In comparison, minimizing travel distance gives total
energy costs of 56.84, 62.72, 66.64, 66.64 and 64.68 for the five capacities, respectively
(all achieving a travel distance of 36), which are higher
than the aforementioned minimum counterparts obtained by solving the PDP-TSLUs.
\begin{table*}
{\small \caption{{\footnotesize Computational results of the toy example.\label{tab:Toy example}}}
}{\small \par}
\centering{}%
{\scriptsize
\begin{tabular}{c>{\centering}p{1.5cm}>{\centering}p{1cm}>{\centering}p{1.2cm}>{\centering}p{1.4cm}>{\centering}p{5.cm}}
\hline
\noalign{\vskip3pt}
{Case} & { RGV's capacity} & { Energy\linebreak cost} & { Travel distance} & { Completion time} & { Vertex service order}\tabularnewline[3pt]
\hline
\noalign{\vskip3pt}
\multirow{3}{*}{{ (a)}} & { 1} & { 50.47} & { 38} & { 63.78} & { 4$\rightarrow$11$\rightarrow$7$\rightarrow$14$\rightarrow$3$\rightarrow$10$\rightarrow$5$\rightarrow$12$\rightarrow$1\linebreak$\rightarrow$8$\rightarrow$6$\rightarrow$13$\rightarrow$2$\rightarrow$9}\tabularnewline[3pt]
\noalign{\vskip3pt}
& { 2/3/4/6} & { 42.63} & { 30} & { 55.64} & { 3$\rightarrow$10$\rightarrow$4$\rightarrow$11$\rightarrow$7$\rightarrow$5$\rightarrow$14$\rightarrow$12$\rightarrow$1\linebreak$\rightarrow$8$\rightarrow$6$\rightarrow$2$\rightarrow$13$\rightarrow$9}\tabularnewline[3pt]
\noalign{\vskip3pt}
& { 5} & { 42.63} & { 30} & { 55.64} & { 1$\rightarrow$8$\rightarrow$6$\rightarrow$2$\rightarrow$13$\rightarrow$9$\rightarrow$3$\rightarrow$10$\rightarrow$4\linebreak$\rightarrow$11$\rightarrow$7$\rightarrow$5$\rightarrow$14$\rightarrow$12}\tabularnewline[3pt]
\noalign{\vskip3pt}
\multirow{1}{*}{{ (b)}} & { 2/3/4/5/6} & { 55.86} & { 36} & { 65.58} & { 1$\rightarrow$4$\rightarrow$10$\rightarrow$2$\rightarrow$13$\rightarrow$11$\rightarrow$8$\rightarrow$3$\rightarrow$17\linebreak$\rightarrow$12$\rightarrow$5$\rightarrow$14$\rightarrow$6$\rightarrow$15$\rightarrow$9$\rightarrow$7$\rightarrow$18$\rightarrow$16}\tabularnewline[3pt]
\hline
\end{tabular}}
\end{table*}
\subsection{Evaluating the valid inequalities and selecting a size for the rolling
horizon}
This subsection evaluates the usefulness of the valid inequalities
introduced in Section \ref{sec: Valid-constraints} for solving the
PDP-TSLU and meanwhile determines an appropriate size for the rolling
horizon to enable online decisions in a dynamic environment.
The valid inequalities (\ref{vc - subtour elimination})-(\ref{eq:vc - FIFO3b})
are added as redundant constraints to the PDP-TSLU. More precisely,
the valid inequalities (\ref{vc - subtour elimination}) with a subtour
size of 2 are added, together with the following special cases of constraints (\ref{vc - lifted subtour elimination1})-(\ref{eq:vc - Dkb}):
\begin{itemize}
\item the constraints (\ref{vc - lifted subtour elimination1}) with $S$
equal to $\{i,j\}$, $\{i,j+n\}$, and $\{i,i+n,j\}$,
and the constraints (\ref{vc - lifted subtour elimination2}) with
$S$ equal to $\{i+n,j+n\}$, $\{i,j+n\}$, and $\{i,i+n,j+n\}$;
\item the constraints (\ref{eq:vc - Dka}) and (\ref{eq:vc - Dkb}) with
$k=3$: $x_{i+n,j}+x_{ji}+x_{i,i+n}+x_{j+n,i+n}+2x_{j,i+n}\le2$
and $x_{i,i+n}+x_{i+n,j+n}+x_{j+n,i}+x_{ij}+2x_{i,j+n}\le2$.
\end{itemize}
These simple constraints were once used in solving TSPPDL and TSPPDF
\citep{cordeau2010branch,cordeau2010branchFIFO}. In the meantime, the valid inequalities (\ref{eq: vc - LIFO1a})-(\ref{eq:vc - FIFO3b}), which are unique to the PDP-TSLU,
each have a polynomial cardinality and are added as redundant constraints
to the PDP-TSLU directly.
For convenience of investigation, we classify the valid inequalities (\ref{vc - subtour elimination})-(\ref{eq:vc - Dkb}) inherited from a generic PDP as \textit{Group 1},
(\ref{eq: vc - LIFO1a})-(\ref{eq:vc - LIFO3b}) derived from the LIFO and CLO service restrictions as \textit{Group 2}, and (\ref{vc - FIFO1a})-(\ref{eq:vc - FIFO3b}) derived from the FIFO and CLO service restrictions as \textit{Group
3}. The usefulness of including various combinations of these three groups
of valid inequalities for solving PDP-TSLU with/without TW constraints
is evaluated via 20 random instances. In each instance, the work area
consists of 10 pairs of face-to-face stations and the TW constraints (if present)
are generated as delivery deadlines. The deadline for request $i$
is a random variable following a uniform distribution within $[T_{i}-15,\,T_{i}]$,
where $T_{i}$ is the completion time of request $i$ for the same
instance in the absence of TW constraints.
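For reproducibility, the deadline generation just described amounts to the following one-liner (Python; \texttt{T} is the vector of completion times obtained from the corresponding no-TW run):
\begin{verbatim}
import random

def generate_deadlines(T, width=15.0, seed=None):
    """Deadline of request i drawn uniformly from [T_i - 15, T_i]."""
    rng = random.Random(seed)
    return [rng.uniform(t - width, t) for t in T]
\end{verbatim}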
The average CPU times to solve the instances are summarized in Table \ref{tab:test valid inequalities}. We observe that, in general, the use of any of the three groups of valid inequalities is able to reduce the computational time, although this is least likely when only the Group 1 inequalities are applied. This implies that the inequalities inherited from a generic PDP are not as useful as those derived from the specific PDP-TSLU, and hence including Group 1 alone is not recommended. Instead, a combination of Groups 1-3 or 2-3 of the valid inequalities reduces the computational time in both simulation cases, with or without TW constraints.
\begin{table*}
{\footnotesize \caption{{ CPU solution time (in second) when different groups (grps.)
of valid inequalities were used.\label{tab:test valid inequalities}}}
}{\footnotesize \par}
\centering{}%
{\scriptsize
\begin{tabular}{>{\raggedright}m{1.4cm}|>{\raggedright}m{1.4cm}ccccccccccccc}
\hline
\noalign{\vskip3pt}
& {$\left|P\right|$} & \multicolumn{2}{c}{{7}} & & \multicolumn{2}{c}{{8}} & & \multicolumn{2}{c}{{9}} & & \multicolumn{2}{c}{{10}} & & \multirow{2}{*}{\textit{Average}}\tabularnewline[3pt]
\cline{3-4} \cline{6-7} \cline{9-10} \cline{12-13}
\noalign{\vskip3pt}
& {$Q$} & {2} & {4} & & {2} & {4} & & {2} & {4} & & {2} & {4} & & \tabularnewline[3pt]
\hline
\noalign{\vskip3pt}
& {None} & {9.5} & {17.7} & & {42.9} & {198.6} & & {95.1} & {438.8} & & {316.2} & {1120.2} & & \textit{279.9}\tabularnewline[3pt]
\noalign{\vskip3pt}
\multirow{1}{1.4cm}{} & \multirow{1}{1.4cm}{{Grp. 1}} & {9.2} & {16.9} & & {38.8} & {151.5} & & {111.1} & {658.3} & & {208.6} & {2061.7} & & \textit{407.0}\tabularnewline[3pt]
\noalign{\vskip3pt}
\multirow{1}{1.4cm}{{Without TW cons.}} & \multirow{1}{1.4cm}{{Grp. 2}} & {9.0} & {18.0} & & {26.6} & {131.1} & & {110.0} & {352.1} & & {226.0} & {1179.6} & & \textit{256.6}\tabularnewline[3pt]
\noalign{\vskip3pt}
\multirow{1}{1.4cm}{} & \multirow{1}{1.4cm}{{Grp. 3}} & {6.0} & {18.4} & & {26.2} & {134.3} & & {87.6} & {295.6} & & {227.0} & {1000.9} & & \textit{224.5}\tabularnewline[3pt]
\noalign{\vskip3pt}
\multirow{1}{1.4cm}{} & \multirow{1}{1.4cm}{{Grps. 2-3}} & {6.6} & {17.0} & & {31.3} & {132.8} & & {82.4} & {237.9} & & {272.2} & {645.7} & & \textit{178.2}\tabularnewline[3pt]
\noalign{\vskip3pt}
& {Grps. 1-3} & {7.5} & {16.1} & & {35.7} & {144.2} & & {95.1} & {414.1} & & {283.9} & {972.1} & & \textit{246.1}\tabularnewline[3pt]
\hline
\noalign{\vskip3pt}
& {None} & {9.5} & {19.3} & & {42.3} & {185.5} & & {149.2} & {837.6} & & {387.9} & {2691.6} & & \textit{540.4}\tabularnewline[3pt]
\noalign{\vskip3pt}
\multirow{1}{1.4cm}{} & \multirow{1}{1.4cm}{{Grp. 1}} & {10.8} & {21.6} & & {44.7} & {200.1} & & {207.8} & {533.0} & & {530.1} & {2301.6} & & \textit{481.2}\tabularnewline[3pt]
\noalign{\vskip3pt}
\multirow{1}{1.4cm}{} & \multirow{1}{1.4cm}{{Grp. 2}} & {8.8} & {18.9} & & {39.3} & {201.4} & & {135.9} & {596.8} & & {400.8} & {1801.6} & & \textit{400.4}\tabularnewline[3pt]
\noalign{\vskip3pt}
\multirow{1}{1.4cm}{{With TW cons.}} & \multirow{1}{1.4cm}{{Grp. 3}} & {6.5} & {19.4} & & {31.4} & {207.5} & & {110.7} & {600.0} & & {317.7} & {1079.5} & & \textit{296.6}\tabularnewline[3pt]
\noalign{\vskip3pt}
\multirow{1}{1.4cm}{} & \multirow{1}{1.4cm}{{Grps. 2-3}} & {7.3} & {20.0} & & {34.6} & {165.3} & & {167.8} & {790.4} & & {286.1} & {1776.2} & & \textit{406.0}\tabularnewline[3pt]
\noalign{\vskip3pt}
\multirow{1}{1.4cm}{} & \multirow{1}{1.4cm}{{Grps. 1-3}} & {7.1} & {18.5} & & {37.4} & {173.9} & & {146.6} & {668.1} & & {266.9} & {1086.9} & & \textit{300.7}\tabularnewline[3pt]
\hline
\end{tabular}}
\end{table*}
The simulation results also indicate that using a rolling-horizon size of
eight would allow the control system to recompute a routing solution within
one minute when the RGV has a capacity of 2 and about three minutes
when the RGV has a capacity of 4. This suggests that a rolling-horizon
size of eight is reasonable for rendering
online routing solutions in a dynamic environment.
\subsection{Evaluating the proposed approach for handling dynamic PDP-TSLU}
Random instances with and without TW constraints are generated to evaluate
the rolling-horizon approach for treating dynamic PDP-TSLUs. In each instance, 50 PD requests are randomly generated in a range of 20 pairs of face-to-face stations. With each request is associated an
arrival time that follows a Poisson distribution with a mean of 0.5,
simulating stochastic arrivals in practice.
In the case with TW constraints, each PD request is associated with
a delivery deadline, tight or loose.
Tight deadlines follow a uniform distribution in the range of [50,
80], and loose deadlines follow a uniform distribution
in the range of [150, 200]. In each
pickup queue the request joining first is assigned an earlier deadline. Thus the deadlines in each pickup queue are sorted in increasing
order from head to rear. To keep the instances schedulable, one out
of three of the PD requests is imposed with a tight deadline and the
others with loose ones.
In each instance, the total energy cost is computed by applying each of the two routing approaches: the rolling-horizon
approach and a rule-based approach. The rolling-horizon approach
was introduced in Section \ref{sec: rolling-horizon-appro}, and is implemented with a rolling-horizon size of eight and by including valid inequalities
of Groups 1-3 into the problem model. In contrast,
the rule-based approach is heuristic and it works as follows.
\textit{Rule-based approach}: In the absence of TW constraints, all PD requests are queued at each station by their arrival times. The pickup service follows the priority rule of ``Earliest
Arrival First'', in which the RGV first loads
the container with the earliest arrival time and then the next-arriving
container that does not cause an unloading conflict with previously loaded
containers, until the RGV is full. The RGV then delivers all loaded
containers before picking up any new ones. If delivery requests are assigned
TW constraints, the priority rule is shifted from arrival
time to deadline to ensure compliance with the TW constraints. The popular heuristic
of ``Earliest Due First'' is thus adopted,
in which the RGV keeps loading the containers with the closest deadlines, as long as no service conflict arises, until the RGV is full. Similarly, the RGV does
not pick up new containers until all loaded ones are delivered.
The complete priority rule mimics the method used in the current AFHS under consideration, and similar
rules have been reported in the literature \cite{roodbergen2009survey,lee1999dispatching}.
This rule-based approach updates the routing decision whenever a new PD request is issued,
and the simple algorithm is able to recompute a routing decision
in real time for a problem having up to fifty requests.
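A sketch of the rule (Python) is given below; \texttt{conflicts} is a placeholder predicate that reports whether loading a request would create an unloading conflict with the containers already on board, and the exact tie-breaking of the deployed system is not reproduced.
\begin{verbatim}
def rule_based_plan(requests, Q, conflicts, use_deadlines=False):
    """Greedy batching: fill the RGV in priority order, then deliver
    every loaded container before any new pickup."""
    key = (lambda r: r.deadline) if use_deadlines else (lambda r: r.arrival)
    todo = sorted(requests, key=key)
    batches = []
    while todo:
        loaded, rest = [], []
        for r in todo:                  # scan in priority order
            if len(loaded) < Q and not conflicts(loaded, r):
                loaded.append(r)        # pick up r
            else:
                rest.append(r)          # postpone r
        if not loaded and rest:         # safety: always make progress
            loaded.append(rest.pop(0))
        batches.append(loaded)          # delivered before new pickups
        todo = rest
    return batches
\end{verbatim}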
The two approaches are applied to solving random instances of dynamic
PDP-TSLUs without and with TW constraints. The
average simulation results are summarized in Table \ref{tab:dynamic-result-Energy-costs-and}.
We observe that, compared to the rule-based approach, the rolling-horizon approach is able to reduce the energy cost by up to 15\%. However, the percentage saving decreases as the RGV's capacity increases or when TW constraints are imposed on the PD requests. We also note that the rolling-horizon approach
has the extra advantage of being able to render feasible decisions
in the presence of TW constraints, which the rule-based approach
often fails to do.
Another interesting observation is that, when the RGV's capacity
is increased from 2 to 3, both approaches achieve lower energy consumption, but the marginal energy savings are
much less than the naive estimate of $\left|\frac{\nicefrac{n}{3}-\nicefrac{n}{2}}{\nicefrac{n}{2}}\right|\times100\%\thickapprox33.33\%$.
This occurs because the conflict-free service constraints and the weight-dependent energy consumption prevent the RGV from exploiting its full capacity. This insight
can be useful to warehouse designers, helping them determine appropriate
capacities for RGVs working in a similar environment by evaluating
the trade-off between the extra construction and operational cost a larger capacity would
take and the actual benefit it would bring.
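To unpack the naive estimate: if serving $n$ requests took roughly $n/Q$ fully loaded trips, then raising the capacity from $Q=2$ to $Q=3$ would cut the number of trips, and hence (at fixed gross weight) the energy, by
\[
\left|\frac{\nicefrac{n}{3}-\nicefrac{n}{2}}{\nicefrac{n}{2}}\right| = \frac{\nicefrac{n}{6}}{\nicefrac{n}{2}} = \frac{1}{3}\thickapprox 33.33\%,
\]
whereas the average savings observed in Table \ref{tab:dynamic-result-Energy-costs-and} stay below 10\%.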
\begin{table*}
{ \caption{{ \label{tab:dynamic-result-Energy-costs-and}Total energy costs
for operating an RGV.}}
}{ \par}
\centering{}%
{\scriptsize
\begin{tabular}{lccccccccccc}
\hline
\noalign{\vskip3pt}
\multirow{2}{*}{} & \multicolumn{5}{c}{{ Energy cost (without TW constraints)}} & & \multicolumn{5}{c}{{ Energy cost (with TW constraints)}}\tabularnewline[3pt]
\cline{2-6} \cline{8-12}
\noalign{\vskip3pt}
& \multicolumn{2}{>{\centering}p{1.6cm}}{{ Rule-based\linebreak approach}} & & \multicolumn{2}{>{\centering}p{1.9cm}}{{ Rolling-horizon\linebreak approach}} & & \multicolumn{2}{>{\centering}p{1.6cm}}{{ Rule-based\linebreak approach}} & & \multicolumn{2}{>{\centering}p{1.9cm}}{{ Rolling-horizon\linebreak approach}}\tabularnewline[3pt]
\cline{2-3} \cline{5-6} \cline{8-9} \cline{11-12}
\noalign{\vskip3pt}
{ Instance} & { $Q=2$} & { $Q=3$} & & { $Q=2$} & { $Q=3$} & & { $Q=2$} & { $Q=3$} & & { $Q=2$} & { $Q=3$}\tabularnewline[3pt]
\hline
\noalign{\vskip3pt}
{ a} & { 1335.7} & { 1222.0} & & { 1067.2} & { 910.4} & & { 1386.8} & { 1352.4} & & { 1173.6} & { 1076.0}\tabularnewline[3pt]
\noalign{\vskip3pt}
{ b} & { 1324.0} & { 1169.7} & & { 1112.9} & { 1128.6} & & { 1504.3} & { 1239.7} & & { 1135.8} & { 1214.3}\tabularnewline[3pt]
\noalign{\vskip3pt}
{ c} & { 571.3} & { 583.7} & & { 552.7} & { 477.3} & & { 592.9} & { 595.9} & & { 569.4} & { 505.7}\tabularnewline[3pt]
\noalign{\vskip3pt}
{ d} & { 618.4} & { 555.7} & & { 516.5} & { 464.8} & & { 651.7} & { 607.5} & & { 559.6} & { 471.5}\tabularnewline[3pt]
\noalign{\vskip3pt}
{ e} & { 641.9} & { 563.5} & & { 567.4} & { 540.0} & & { 645.9} & { 571.3} & & { 639.9} & { 561.3}\tabularnewline[3pt]
\noalign{\vskip3pt}
\multicolumn{1}{l}{\textit{ Average}} & \textit{ 898.3} & \textit{ 818.9} & & \textit{ 763.3} & \textit{ 704.2} & & \textit{ 956.3} & \textit{ 873.4} & & \textit{ 815.7} & \textit{ 765.8}\tabularnewline[3pt]
\noalign{\vskip3pt}
\multicolumn{2}{l}{\textit{ Percentage saving}} & & & \textit{ 15.03\%} & \textit{ 14.01\%} & & & & & \textit{ 14.7\%} & \textit{ 12.32\%}\tabularnewline[3pt]
\hline
\end{tabular}}
\end{table*}
\section{Conclusions} \label{sec: conclusions}
This work studied an RGV routing problem in an automated freight handling system. The problem was formulated as an MILP that aims to minimize the energy consumption for an RGV to complete pickup and delivery tasks under conflict-avoidance and time window constraints. The energy consumption accounts for the routing-dependent gross weight as well as the dynamics of the RGV, and the conflict-avoidance constraints guarantee conflict-free service under two-sided loading/unloading operations. Arc reduction and valid inequalities were derived from the problem structure to enhance the MILP solution process. The static problem model and solution approach were integrated with a rolling-horizon approach to handle
the dynamic routing problem where air cargo enters and departs from the system dynamically in time. Numerical experiments suggest that the proposed routing strategy is able to route an RGV to transport cargo with an energy cost up to 15\% lower than a heuristic method implemented in current practice.
The current routing problem assumes a single RGV serving the system. In practice, however, multiple RGVs may be employed which work on a single track or on different tracks handling coupled transport tasks, and meanwhile a container may be relayed to its destination via multiple RGVs. The routing problem thus becomes more complex, and collective routing of the RGVs and containers will be required for achieving optimal energy efficiency under service quality and feasibility constraints. This constitutes interesting and challenging work for future research.
\section*{Acknowledgment}
This work was supported in part by ATMRI under Grant M4061216.057, by MOE AcRF under Tier 1 grant RG 33/10 M4010492.050 and by NTU under startup grant M4080181.050.
\singlespacing
\section*{Appendices}
{\footnotesize
\renewcommand{\thesubsection}{\Alph{subsection}}
\subsection{Equivalent forms of the CFI and CLO service constraints} \label{apdix: CFI-alternative}
\begin{lem}
The following CFI service constraints are equivalent:
\begin{align*}
\text{(See (\ref{eq:CFI}))} \sum_{i:(i,j)\in A'}y_{ij}^{k}=0,\,\,&\forall j\in P^{3},\, k\in P^{1}\\
\Longleftrightarrow\sum_{i:(i,j+n)\in A'}y_{i,j+n}^{k}\le\sum_{i:(i,j)\in A'}y_{ij}^{k},\,\,&\forall j\in P^{1},\, k\in P^{3},
\end{align*}
and the following CLO service constraints are equivalent:
\begin{align*}
(\text{See (\ref{eq:CLO})}) \sum_{i:\,(i,\, j+n)\in A'}y_{i,\, j+n}^{k}=0, \quad &\forall j\in P^{4}, k\in P^{1} \\
\Longleftrightarrow \sum_{i:\,(i,\, j)\in A'}y_{ij}^{k}\le\sum_{i:\,(i,\, j+n)\in A'}y_{i,\, j+n}^{k}, \quad &\forall j\in P^{1}, k\in P^{4}.
\end{align*}
\end{lem}
\begin{proof}
The proofs of the two equivalences follow similar reasoning. For brevity, we only detail the proof for equivalence between the two CFI service constraints. We prove it by contradiction.
``$\Rightarrow$'':
It suffices to show that, for an arbitrary pair $j\in P^{3},\, k\in P^{1}$,
we must have $\sum_{i:(i,k+n)\in A'}y_{i,k+n}^{j}\le\sum_{i:(i,k)\in A'}y_{ik}^{j}$.
Suppose this is not true, {\emph{i.e.}}, there exists a pair $j\in P^{3},\, k\in P^{1}$ such that $\sum_{i:(i,k+n)\in A'}y_{i,k+n}^{j}>\sum_{i:(i,k)\in A'}y_{ik}^{j}$.
This implies that $\sum_{i:(i,k+n)\in A'}y_{i,k+n}^{j}=1$ and $\sum_{i:(i,k)\in A'}y_{ik}^{j}=0$,
which means that request $k$ is picked up before service of request
$j$ and delivered during service of request $j$. Equivalently, this
means that request $j$ is picked up during the service of request $k$, ${\emph{i.e.}},$ $\sum_{i:\,(i,\, j)\in A'}y_{ij}^{k}=1$. This
contradicts the given condition and hence proves the sufficiency.
``$\Leftarrow$'': The argument is similar. Given an arbitrary pair
$j\in P^{1},\, k\in P^{3}$, suppose $\sum_{i:(i,k)\in A'}y_{ik}^{j}=1$.
Thus, $\sum_{i:(i,k)\in A'}y_{ik}^{j}\ge\sum_{i:(i,k+n)\in A'}y_{i,k+n}^{j}$,
which means that request $k$ can either
be picked up and delivered, or be picked up but not delivered, during
the service of request $j$. This implies that in certain circumstances request $j$ can be delivered but not picked up during the service of request $k$, {\emph{i.e.}}, it is possible to have $\sum_{i:(i,j+n)\in A'}y_{i,j+n}^{k}=1>0=\sum_{i:(i,j)\in A'}y_{i,j}^{k}$. This
contradicts the given condition and hence proves the necessity.\end{proof}
\subsection{Big-$M$ formulation of (\ref{eq:c - time consistency})} \label{apdix: big-M formulation}
The indicator constraint (\ref{eq:c - time consistency}) can
be linearized by using a standard big-$M$ formulation as
\begin{equation}
\begin{aligned}
b_{j}\ge b_{i}+s_{i}+t_{ij}-\eta_{ij}(1-x_{ij}), \quad & \forall (i,j)\in A,\\
w_{j}\ge w_{i}+q_{j}-\rho_{ij}(1-x_{ij}), \quad & \forall (i,j)\in A,
\end{aligned}
\end{equation}
where $\eta_{ij}$ and $\rho_{ij}$ are large enough constants such
that the right hand sides of the inequalities are lower bounds of
$b_{j}$ and $w_{j}$, respectively. Specific feasible constants can be derived by using the constraints (\ref{eq:c - completion time}), (\ref{eq:c - time window}), (\ref{eq:c - load bound}) and (\ref{eq:c - b and w}), resulting in feasible $\eta_{ij}$ and $\rho_{ij}$
as summarized in Table \ref{tab:Specifications-of-eta-rho}, where
$l_{0}=q_{0}=q_{2n+1}\triangleq 0$. The detailed derivation is given below.
\begin{table}[H]
\caption{\footnotesize Feasible constant pairs $(\eta_{ij},\,\rho_{ij})$ ($\forall (i,j)\in A$)} \label{tab:Specifications-of-eta-rho}
\centering{}%
\footnotesize
\begin{tabular}{ccc}
\hline
\noalign{\vskip3pt}
& $i\in P\cup\{0\}$ & $i\in D$\tabularnewline[3pt]
\hline
\noalign{\vskip3pt}
$j\in P$ & $(l_{i}-s_{i+n}-t_{i,i+n}+t_{ij},Q)$ & $(l_{i-n}+t_{ij},Q+q_{i})$\tabularnewline[3pt]
\noalign{\vskip3pt}
$j\in D\cup\{2n+1\}$ & $(l_{i}-s_{i+n}-t_{i,i+n}+t_{ij}-e_{j-n},Q+q_{j})$ & $(l_{i-n}+t_{ij}-e_{j-n},Q+q_{i}+q_{j})$\tabularnewline[3pt]
\hline
\end{tabular}
\end{table}
Valid reformulations of the time and loading consistency constraints
in (\ref{eq:c - time consistency}) require that $\eta_{ij}\ge b_{i}+s_{i}+t_{ij}-b_{j}$
and $\rho_{ij}\ge w_{i}+q_{j}-w_{j}$ for any $(i,j)\in A$. It
is sufficient to set $\eta_{ij}$ and $\rho_{ij}$ as respective upper
bounds of $\underline{\eta}_{ij}\triangleq b_{i}+s_{i}+t_{ij}-b_{j}$
and $\underline{\rho}_{ij}\triangleq w_{i}+q_{j}-w_{j}$ for a given
$(i,j)$. Based on (\ref{eq:c - completion time}), (\ref{eq:c - time window}),
(\ref{eq:c - load bound}) and (\ref{eq:c - b and w}), such upper
bounds are derived for four cases of arcs as follows.
\emph{Case a}: $i\in P\cup\{0\}$, $j\in P$. It follows that $\underline{\eta}_{ij}\le b_{i+n}-t_{i,i+n}+t_{ij}-b_{j}\le l_{i}-s_{i+n}-t_{i,i+n}+t_{ij}$,
and that $\underline{\rho}_{ij}\le w_{i}+q_{j}-q_{j}\le Q$.
\emph{Case b}: $i\in P\cup\{0\}$, $j\in D\cup\{2n+1\}$. It
follows that $\underline{\eta}_{ij}\le b_{i+n}-t_{i,i+n}+t_{ij}-b_{j}\le l_{i}-s_{i+n}-t_{i,i+n}+t_{ij}-e_{j-n}$,
and that $\underline{\rho}_{ij}\le w_{i}+q_{j}\le Q+q_{j}$.
\emph{Case c}: $i\in D$, $j\in P$. It follows that $\underline{\eta}_{ij}\le l_{i-n}+t_{ij}-b_{j}\le l_{i-n}+t_{ij}$,
and that $\underline{\rho}_{ij}\le w_{i}+q_{j}-q_{j}\le Q+q_{i}$.
\emph{Case d}: $i\in D$, $j\in D\cup\{2n+1\}$. It follows
that $\underline{\eta}_{ij}\le l_{i-n}+t_{ij}-b_{j}\le l_{i-n}+t_{ij}-e_{j-n}$,
and that $\underline{\rho}_{ij}\le w_{i}+q_{j}\le Q+q_{i}+q_{j}$.
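The case analysis above translates directly into code; the sketch below (Python, with the parameters passed as dictionaries and the boundary values $l_0=q_0=q_{2n+1}=0$ supplied by the caller) returns a feasible pair $(\eta_{ij},\rho_{ij})$ for a given arc:
\begin{verbatim}
def big_m_pair(i, j, P, D, n, Q, l, s, t, q, e):
    """Feasible (eta_ij, rho_ij) per the four cases above.

    P, D: pickup and delivery vertex sets; l, s, t, q, e: deadline,
    service-time, travel-time, load and earliest-time parameters.
    """
    if i in P or i == 0:                 # cases a and b
        eta = l[i] - s[i + n] - t[(i, i + n)] + t[(i, j)]
        rho = Q
    else:                                # i in D: cases c and d
        eta = l[i - n] + t[(i, j)]
        rho = Q + q[i]
    if j in D or j == 2 * n + 1:         # cases b and d
        eta -= e[j - n]
        rho += q[j]
    return eta, rho
\end{verbatim}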
\bibliographystyle{elsarticle-num}
\section{Introduction}
We put forth a model of \emph{stateless} distributed computation and initiate the systematic exploration of the power of such computation. We first discuss the motivation for this model and then present an exposition of the model and our results.
\subsection{Motivation}
One of our main motivating examples is routing with the Border Gateway Protocol (BGP). BGP establishes routes between the independently administered networks that make up the Internet, and can be regarded as the glue that holds the Internet together. A BGP-speaking router continuously (1) receives update messages from each of its neighbors, announcing routes to different destinations (IP address prefixes) on the Internet, (2) selects the best available route to each destination (according to its local preferences), and (3) propagates its newly-selected routes to neighboring BGP routers. An important and extensively-studied desideratum is for this decentralized route-selection process to converge to a stable routing configuration from \emph{any} initial configuration of the routing system~\cite{griffin2002stable}.
While BGP route selection involves maintaining state, in terms of internal memory, this is for the sole purpose of recording the last route-advertisement (per IP prefix) received from each neighbor. Indeed, BGP's route-selection can be modeled as a function that maps the most recent messages received from neighbors to routing choices (see~\cite{griffin2002stable}). BGP is a prominent example for environments in which all nodes repeatedly ``best respond'' to each other's most recent choices of actions. Other examples of ``best-response dynamics''~\cite{hart2005adaptive} include additional network protocols~\cite{karp2000gpsr}, diffusion of technologies in social networks~\cite{morris2000contagion}, circuits with feedback loops, and more, as discussed in~\cite{JSW11,JLSW15, engelberg2013best}. Such environments, and others, fall within our framework of stateless computation, in which computational nodes are modeled as not having an internal state (i.e., memory), but rather interact by repeatedly mapping incoming messages (``labels'') to outgoing messages and output values. Since such applications are inherently ongoing and susceptible to transient faults, we focus on \emph{self-stabilizing} protocols.
\subsection{Modeling Stateless Computation}
Consider a distributed network of size $n$, in which every node (processor) receives an external input, $x_i$. The processors compute a global function, $f(x_1, \ldots, x_n)$, by repeatedly exchanging messages. We consider computation in which nodes have no internal \emph{state}. Instead, each node is equipped with a reaction function that maps incoming messages from neighbors to (1) outgoing messages to neighbors and (2) an output value, based on the node's input. We abstract the communication model and refer to a message between two nodes as a \emph{label} on the edge connecting them. Edge labels are assumed to come from a finite set $\Sigma$, the \emph{label space}. A protocol is a specification of the label space and the reaction functions.
Self-stabilizing distributed systems enjoy an important robustness property: the ability to recover from any transient fault, provided that the processor code and input remain intact. The notion of self-stabilization was introduced in a seminal paper by Dijkstra~\cite{dijkstra1982self}. A system is self-stabilizing if, regardless of its initialization, it is guaranteed to arrive at a global ``legitimate state'' in a finite number of steps. We consider two definitions of a legitimate state. From an algorithmic perspective, a legitimate state is a state in which the output value of every node has converged to $f(x_1, \ldots, x_n)$. A stronger stabilization condition requires not only a correct convergence of the outputs, but also the convergence of the communication between the nodes, i.e., that at convergence all reaction functions be at a fixed point. We call the first condition \emph{output stabilization} and the latter \emph{label stabilization}.
In practice, label stabilization can be translated to a reduction in communication overhead and bandwidth. Hence, such a property is clearly appealing in the context of algorithm design for distributed networks.
\subsection{Our Results}
We embark on a systematic exploration of stateless computation. Two sets of questions arise naturally from a distributed computing viewpoint: (1) Under what conditions is self-stabilization of stateless computation guaranteed? and (2) What is the computational power of our model (and its limitations)?
We model the asynchronous nature of distributed environments as a (possibly adversarial) \emph{schedule} that determines which nodes are \emph{activated} in every time step. Upon activation, a node updates its outgoing labels according to its current incoming label profile, as prescribed by its reaction function. We consider the notion of an $r$-fair schedule, which captures the requirement that each node be activated at least once in every $r$ consecutive time steps, and we call protocols that converge under every $r$-fair schedule $r$-stabilizing. Our main general impossibility result (Section~\ref{sec:asynch}) states that the mere existence of two stable labelings implies that the protocol cannot be label $(n-1)$-stabilizing. This result imposes an upper bound on the asynchronous robustness of such protocols. We then show that this bound is tight by exhibiting a protocol that converges for every $r$-fair schedule with $r < n-1$. As best-response dynamics (with unique best-responses) is a specific realization of stateless computation, our impossibility result implies new nonconvergence results for the broad range of environments that fall within this category, including BGP routing, congestion control, asynchronous circuits, and diffusion of technologies, discussed and formalized in~\cite{JSW11,JLSW15}.
We present in Section~\ref{ssec:hardness} two complementary hardness results, establishing that determining whether a protocol is $r$-stabilizing is both computationally and communicationally hard. We prove that determining whether a protocol is $r$-stabilizing is PSPACE-complete and requires an exponential number of bits of communication. As these results are proven with respect to \emph{all} possible values of $r$, our proofs are applicable even when assuming synchronized communication (i.e., for $r=1$).
We next turn our attention (in~\cref{sec:os,sec:ls}) to the question of the computational power of stateless computation. We focus on synchronous computation and consider two complexity measures: the \emph{round complexity}, defined as the maximum number of rounds (time units) it takes the protocol to converge, and the \emph{label complexity}, defined as the length of the labels in binary encoding, $\log(|\Sigma|)$. We provide straightforward general upper bounds on the label complexity and round complexity of any function $f$, showing that a linear number of rounds and linear label length are sufficient to compute any function. We show that there exist hard functions that require labels of linear length, matching the general upper bound. We thus investigate what functions can (and cannot) be computed when the label complexity is restricted.
We first examine output-stabilizing protocols in Section~\ref{sec:os}. Our investigation reveals that even in the seemingly simplest network topologies, such as the unidirectional and bidirectional rings, stateless computation is quite powerful. We show that protocols on the unidirectional ring have the same computational power as branching programs of polynomial size,
$\textup{L/poly}$. On the bidirectional ring, we show that protocols with polynomial round complexity essentially have the same computational power as Boolean circuits of polynomial size, i.e., they can decide the languages in $\textup{P/poly}$. Our results imply that proving super-logarithmic lower bounds in the output-stabilizing scheme is linked to resolving fundamental open
questions in complexity theory.
In Section~\ref{sec:ls}, we examine the computational power of label-stabilizing protocols. We first present a general method for proving lower bounds on the label complexity of label-stabilizing protocols on arbitrary graphs, and we utilize this method to prove a linear lower bound and a logarithmic lower bound on the label complexity of protocols on ring topologies for specific functions (equality and majority, respectively).
We conclude and discuss directions for future research in Section~\ref{sec:conc}.
\subsection{Related Work}
Our notion of \emph{output stabilization} is the central objective of research on self-stabilization in distributed computing. The key difference from prior research on this topic is the statelessness restriction. Our notion of \emph{label stabilization}, in contrast, corresponds to the more specific notion of \emph{silent} self-stabilizing algorithms~\cite{dolev1999memory}. There is a large body of literature on the design of silent self-stabilizing algorithms for various kinds of tasks (e.g.,~\cite{huang1992self,kosowski2005self,afek1990memory,cournier2009new}).
Our hardness results therefore translate to results for the self-stabilization of any silent algorithms and our impossibility result translates to impossibility of self-stabilization of \emph{stateless} silent algorithms.
A widely studied measure in the silent self-stabilization literature is the memory requirements of the nodes' public registers, known as the \emph{compactness} criterion. This measure is analogous to label complexity, so our lower-bounding method from Section~\ref{sec:ls} can be applied to silent self-stabilizing protocols in stateless settings.
Systems in which strategic nodes repeatedly \emph{best respond} to each others' actions are often stateless. Such systems, which include interdomain routing on the Internet, asynchronous circuits, and congestion control, were studied by Jaggard et al.~\cite{JSW11,JLSW15}. The immediate precursor of our work is~\cite{JSW11}, which analyzes convergence of best-response dynamics in asynchronous environments. Our results for non-self-stabilization strengthen and generalize the main results in~\cite{engelberg2013best,JSW11} by extending them to stateless computation in general and to all $r$-fair schedules, closing some of the open questions posed in~\cite{JSW11}. Also, while~\cite{JSW11,JLSW15} focus on asynchronous environments, our investigation also encompasses the computational power of stateless computation.
\section{Model and Observations}\label{sec:model}
We consider a model of distributed computation on a strongly connected directed graph $G=([n],E)$, where each node represents a single processor. Each node $i\in[n]$ has a private input $x_i$ from an input space $X$. Informally, the nodes (processors) are \emph{stateless}, in the sense that a processor cannot store information. A node $i$ can communicate with another node $j$ if and only if the edge $(i,j)$ is in $E$, and this communication is captured by $i$ assigning a \emph{label} to that edge. We formalize the interaction between nodes below.
\subsection{Schedules and Stateless Protocols}
Let $\Sigma$ be a nonempty finite set that shall henceforth be called the \emph{label space}. Each node $i$ is equipped with a \emph{reaction function} mapping the labels of $i$'s incoming edges and $i$'s input value $x_i$ to an assignment of labels to $i$'s outgoing edges and an output value in $\{0,1\}$. Formally, the reaction function of node $i$ is a deterministic mapping
\[\delta_i: \Sigma^{-i} \times \{0,1\} \to\Sigma^{+i} \times \{0,1\}\,,\]
where $-i$ and $+i$ are $i$'s sets of incoming and outgoing edges, respectively.
A schedule is a function $\sigma:\mathbb{N}^+ \to 2^{[n]}$ that maps each time unit $t = 1,2,\ldots$ to a nonempty subset of nodes $\sigma(t)\subseteq [n]$, specifying which nodes are \emph{activated} at time step $t$. Upon activation, a node applies its reaction function to its incoming labels and input and updates its outgoing labels and output accordingly.
A schedule is \emph{fair} if for every $j\in [n]$ there exists an infinite sequence of time steps in which $j\in \sigma(t)$. For any positive integer $r$, we say that a schedule is $r$-fair if every node is activated at least once in every sequence of $r$ consecutive time steps.
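As a concrete illustration, $r$-fairness of a finite schedule prefix can be checked directly from the definition (a Python sketch):
\begin{verbatim}
def is_r_fair(schedule, n, r):
    """Every node appears in every window of r consecutive steps.

    schedule: list of activation sets (a finite prefix)."""
    return all(
        any(i in schedule[t] for t in range(start, start + r))
        for start in range(len(schedule) - r + 1)
        for i in range(n)
    )

# e.g., the periodic schedule {0,1},{1,2},{2,0},... is 2-fair for n=3:
# is_r_fair([{0, 1}, {1, 2}, {2, 0}] * 4, 3, 2)  ->  True
\end{verbatim}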
A \emph{stateless protocol} $A=(\Sigma,\delta)$ specifies the label space and the vector $\delta=(\delta_1,\ldots,\delta_n)$ of individual reaction functions. Because the reaction functions are deterministic, a global input $(x_1,\ldots,x_n)\in \{0, 1\}^n$, an \emph{initial labeling} $\ell^0$ and a schedule $\sigma$ completely determine all subsequent labelings and outputs. Formally, at every time step $t\in\mathbb{N}$, and every $i\in \sigma(t)$, $i$'s outgoing labeling and output are given by:
\[
(\ell_{+i}^{t}, y_i^{t}) = \delta_i(\ell_{-i}^{t-1}, x_i)\,,
\]
while for every $i\notin\sigma(t)$, the labels and output persist: $(\ell_{+i}^{t}, y_i^{t}) = (\ell_{+i}^{t-1}, y_i^{t-1})$.
In aggregate, the functions $\delta_i$ together with the schedule $\sigma$ define a global transition function
\[
\delta: \Sigma^E\times \{0, 1\}^n\times 2^{[n]}\to \Sigma^E\times \{0, 1\}^n
\]
satisfying
\[
(\ell^{t}, y^{t}) = \delta(\ell^{t-1}, x, \sigma(t))\,.
\]
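The dynamics can be simulated directly from this definition; in the following sketch (Python), \texttt{delta[i]} maps a node's incoming labels and input to its outgoing labels and output, and the labels of non-activated nodes persist:
\begin{verbatim}
def run_protocol(delta, labels, x, schedule, T):
    """Simulate T steps: at step t, every node in schedule(t)
    recomputes its outgoing labels and output from the labeling of
    step t-1; all other labels are unchanged."""
    n = len(delta)
    y = [0] * n
    for t in range(1, T + 1):
        new = dict(labels)
        for i in schedule(t):
            incoming = {e: labels[e] for e in labels if e[1] == i}
            outgoing, y[i] = delta[i](incoming, x[i])
            new.update(outgoing)   # only i's outgoing edges change
        labels = new
    return labels, y
\end{verbatim}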
\subsection{Self-Stabilization and Computation}
We seek \emph{self-stabilizing} protocols, which always converge to a global ``legitimate state'' regardless of initial conditions. We consider two possible formalizations of this concept: \emph{output} $r$-\emph{stabilization} and \emph{label} $r$-\emph{stabilization}. A stateless protocol is output $r$-stabilizing if, for every possible input $(x_1,\ldots,x_n)$, every initial labeling $\ell^0$, and every $r$-fair schedule $\sigma$, the output sequence $y_i^1,y_i^2,\ldots$ of every node $i$ converges. For a protocol to be label $r$-stabilizing, we additionally require that the sequence of labelings $\ell^0,\ell^1,\ell^2,\ldots$ always converges, i.e., that all reaction functions $\delta_i$ reach a fixed point. We assume that the reaction functions and inputs remain intact throughout the computation and are not subjected to transient faults.
Consider the objective of computing some function $f:X^n\to Y$ on the global input $(x_1,\ldots, x_{n})$ in a stateless manner. We say that an output $r$-stabilizing protocol \emph{computes} a function $f:X^n\to Y$ if the output sequence $y_i^1,y_i^2,\ldots$ of each node $i\in[n]$ converges to $f(x_1,\ldots,x_n)$. We say that a family of protocols $A_1,A_2,\ldots$ $r$-\emph{decides} a language $\mathcal{L}\subseteq \{0,1\}^*$ if for every $n\in\mathbb{N}$, the protocol $A_{n}$ computes the \emph{indicator function} for $\mathcal{L}$ on $\{0,1\}^n$, i.e., the function $f_n:\{0,1\}^n\rightarrow \{0, 1\}$ defined by $f_n(x) = 1$ if and only if $x\in \mathcal{L}$.
\subsection{Round Complexity and Label Complexity}\label{sec:cm}
Let $G=([n],E)$ be a directed graph and $A_n$ a self-stabilizing protocol on $G$. We define the \emph{round complexity}, $R_n$, of $A_n$ as the maximum, over all inputs $(x_1, ... , x_n) \in \{0,1\}^n$ and all initial edge labelings $\ell\in\Sigma^E$, of the number of time steps it takes for the protocol to converge. We define the \emph{label complexity}, $L_n$, as $\log(|\Sigma|)$, i.e., the length of a label in binary encoding. Our results regarding the computational power of stateless computation (as opposed to self-stabilization) focus on the scenario of synchronous interaction between nodes, i.e., that the schedule is $1$-fair and so, in every time step, all nodes are activated.
We proceed by presenting some observations about the relationship between round complexity and label complexity and a general upper bound on the label complexity of Boolean functions. The omitted proofs appear in Appendix \ref{apdx:bounds}.
\begin{proposition}
\label{prop:radius}
Let $f:\{0, 1\}^n\rightarrow \{0, 1\}$ be a non constant Boolean function on a graph $G=([n],E)$, and let $r$ be the graph's radius. Then, for every output-stabilizing protocol, $r \leq R_n$.
\end{proposition}
\begin{proposition}\label{prop:bi}
Let $G=([n], E)$ be a directed graph and $A_n=(\Sigma, \delta)$ a stateless protocol. Then,
\[
R_n \leq |\Sigma|^{|E|} = (2^{L_n})^{|E|}
\]
\end{proposition}
\begin{proposition}
\label{prop:genupperbd}
Let $G=([n], E)$ be a strongly connected directed graph and $f:\{0, 1\}^n\rightarrow \{0, 1\}$ a Boolean function. There exists a label-stabilizing protocol, $A_n$, that computes $f$, with $L_n=n+1$ and $R_n=2n$.
\end{proposition}
\section*{Part I: Self-Stabilization of Stateless Protocols}
\section{Impossibility Result for Self-Stabilization}\label{sec:asynch}
We explore under what conditions a stateless protocol can be self-stabilizing and exhibit a necessary condition for self-stabilization. Before presenting this condition, we introduce the required terminology. A \emph{stable labeling} for a protocol $(\Sigma, \delta)$ on a graph $G$ is a labeling $\ell \in\Sigma^{E}$ that is a fixed point of every reaction function $\delta_i$, meaning that $\delta_i(\ell_{-i}, x_i) = (\ell_{+i}, y_i)$ for every node $i$ (for some output value $y_i$).
\begin{theorem}\label{thm_impossible}
No stateless protocol with at least two distinct stable labelings is label $(n-1)$-stabilizing.
\end{theorem}
We show that this result is tight in the sense that a system can be label $(n-2)$-stabilizing even if multiple stable labelings exist. Our proof of Theorem~\ref{thm_impossible} utilizes the classical valency-argument technique from distributed computing theory~\cite{fischer1985impossibility}. Applying a valency argument in our context, however, involves new challenges and the introduction of new ideas. Specifically, the proof involves defining a global state-transition space in a manner that captures all system dynamics under $(n-1)$-fair schedules. A delicate valency argument is then applied to a carefully chosen \emph{subset} of this space so as to obtain the impossibility result. Our result shows that not only do multiple ``stable states'' induce instability, but this is true even under reasonably fair schedules (linear fairness). This both generalizes and strengthens the nonconvergence result in~\cite{JSW11} and, consequently, has immediate implications for game theory, routing and congestion control on the Internet, diffusion of technologies in social networks, and asynchronous circuits.
\begin{proof}
Let $G=([n], E)$ be a directed graph and $A_n=(\Sigma, \delta)$ a stateless protocol. Suppose $\hat{\ell}_1, \hat{\ell}_2\in \Sigma^{E}$ are two distinct stable labelings, and assume for the sake of contradiction that $A_n$ is label $(n-1)$-stabilizing. For simplicity we ignore the private input and output bits of the nodes, as our only concern is whether or not the labeling sequence stabilizes.
We build a directed \emph{states-graph} $G'=(V', E')$ as follows. Let $m=|E|$ and $r=n-1$. The vertex set is $V' = \Sigma^{m}\times [r]^{n}$. Each node $(\ell, x)\in V'$ consists of a labeling component, $\ell\in \Sigma^{E}$, and a countdown component, $x\in [r]^{n}$, which records, for every node in $G$, the number of time steps it may still remain inactive. Denote by $V_0' = \{ (\ell, r^{n}) : \ell\in \Sigma^{m} \}$ the set of initialization vertices. To define the edge set we utilize the following countdown mapping, $c: [r]^n \times 2^{[n]}\rightarrow [r]^n$, which satisfies, for every $i\in [n]$:
\[
c(x, T)_i =
\begin{cases}
r & \text{If } i \in T\\
x_i - 1 & \text{Otherwise}
\end{cases}
\]
For every node $(\ell, x)$, and every $T \in \{ S : \{i: x_i=1\} \subseteq S \}$, there is a directed edge from $(\ell, x)$ to $(\delta(\ell, T), c(x, T))$. Observe that every run of the protocol under an $r$-fair schedule is represented in $G'$ as a path starting from an initialization vertex, and that a run which converges to $\hat{\ell}_1$ or $\hat{\ell}_2$ eventually remains within
\[
\{ (\ell, x): \ell\in \{\hat{\ell}_1, \hat{\ell}_2\} \text{ , } x\in [r]^n\}\,.
\]
We restrict our attention to the subgraph $H = G'(U)$, where:
\[
U = \{ (\ell, x): \ell\in\Sigma^{m} \text{ , } s(x) \geq (1, 2, \ldots, r-1, r, r) \},
\]
and $s(x)$ denotes the vector $x$ sorted in increasing order.
We define the \emph{attractor region} of the stable labeling $\hat{\ell}_1$ as the set of all vertices from which every path reaches a vertex in $\{(\hat{\ell}_1, x ): x\in [r]^n\}$. Namely, from any vertex in the attractor region of $\hat{\ell}_1$ the system is guaranteed to converge to $\hat{\ell}_1$. We define the attractor region of $\hat{\ell}_2$ as the set of all vertices from which every path reaches a vertex in $\{(\hat{\ell}_i, x ): x\in [r]^n\}$ for some stable labeling $\hat{\ell}_i \neq \hat{\ell}_1$.
To prove that the protocol is not label $r$-stabilizing, we show the existence of a cycle in $H$, reachable from some vertex in $V_0'$ (note that $V_0'\subseteq U$), along which no vertex belongs to any attractor region. Assume, towards a contradiction, that $H$ contains no such cycle.
\begin{lemma}
\label{lem:VAv0exists}
There exists a node $i\in V_0'$ that does not belong to any attractor region.
\end{lemma}
\begin{proof}
Suppose every node $i\in V_0'$ is in some attractor region. As there are two attractor regions, at least one node is in the attractor region of $\hat{\ell}_1$ and at least one node is in the attractor region of $\hat{\ell}_2$. Since any labeling can be transformed into any other by changing one edge label at a time, there must exist two nodes $(\ell, r^n),(\ell',r^n)\in V_0'$ that are in different attractor regions and whose labeling components differ in a single coordinate, $(i, j)$.
Observe that $(\delta(\ell, \{i\}), c(r^n, \{i\}))=(\delta(\ell', \{i\}) , c(r^n, \{i\}))$, since $\ell_{-i}=\ell'_{-i}$. Thus, both nodes reach a common successor, which would have to lie in both attractor regions, a contradiction.
\end{proof}
\begin{lemma}
\label{lem:VAseq}
Let $(\ell,x), (\ell',y)\in V(H)$ be two vertices satisfying the following:
\begin{enumerate}
\item $\{e: \ell_e\neq \ell'_e\}\subseteq +j$ for some $j\in [n]$.
\item $x \leq y$.
\end{enumerate}
Then there exists an attractor region that is reachable from both.
\end{lemma}
\begin{proof}
Let $(\ell,x)$ and $(\ell',y)$ be such vertices. If $\ell=\ell'$ then observe that $(\delta(\ell,[n]) , c(x, [n]))=(\delta(\ell', [n]) , c(y, [n]))$. Otherwise, let $j$ be the node some of whose outgoing labels differ between $\ell$ and $\ell'$. Let $j'\in \text{argmin}_{\bar{j}\neq j} x_{\bar{j}}$. Consider the vertices $i = (\delta(\ell, \{j,j'\}), c(x, \{j, j'\}))$ and $i' = (\delta(\ell', \{j,j'\}), c(y, \{j, j'\}))$. Notice that:
\begin{enumerate}
\item For any subset $T$, $c(x, T)\leq c(y, T)$.
\item $(1, 2, \ldots, r-1, r, r) \leq c(x, \{j, j'\})$, thus both vertices are in $H$.
\item Observe that $\ell_{-j} = \ell'_{-j}$ therefore, $\delta(\ell, \{j,j'\})_{+j} = \delta(\ell', \{j,j'\})_{+j}$.
\item The resulting labelings may differ only if $(j,j')\in E$; in this case,
$\{e: \delta(\ell, \{j,j'\})_e\neq \delta(\ell', \{j,j'\})_e\}\subseteq +j'$.
\end{enumerate}
Hence, we can apply the same activation to $i$ and $i'$, and repeat this infinitely many times to create two infinite sequences in $H$ along which the two conditions hold. By our assumption, both sequences eventually reach a stable labeling. As the two labelings differ only in the outgoing labels of a single node, and activating that node leads to the same labeling, both sequences must converge to the same stable labeling.
\end{proof}
\begin{lemma}
\label{lem:VAimp3}
There exists a node $i\in U$, such that $i$ is not in any attractor region, and for every $(i,i')\in E(H)$, $i'$ is in some attractor region.
\end{lemma}
\begin{proof}
Assume that there is no such node. Then every node that is not in any attractor region has a neighbor that is also not in any attractor region. Thus, starting from an arbitrary such node (which exists by Lemma~\ref{lem:VAv0exists}), we can construct a path that contains only such nodes. Since the graph is finite, this path must eventually close a cycle, contradicting our assumption.
\end{proof}
Let $(\ell,x)\in V(H)$ be as in Lemma~\ref{lem:VAimp3}. For every $T$ that corresponds to an outgoing edge from $(\ell,x)$, the following conditions follow directly from the definition of $H$: (1) $|T|\geq 2$, and (2) if there exists $j$ s.t. $x_j=1$ (there is at most one such $j$), then $j\in T$.
Thus, its edge set is either $2^{[n]}\setminus\big(\{\{i\}: i\in[n]\}\cup\{\emptyset\}\big)$ or $\{T\cup\{j\}: T\in 2^{[n] \setminus\{j\}}\setminus\{\emptyset\}\}$. Since $(\ell,x)$ is not in any attractor region but each of its neighbors is in the attractor region of $\hat{\ell}_1$ or of $\hat{\ell}_2$, and since, by the above, its activation sets essentially range over the power set of $[n]$ or of $[n]\setminus \{j\}$, there must exist $i\in [n]$ and $S\subseteq [n]\setminus\{i\}$ such that $(\delta(\ell, S), c(x, S))$ and $( \delta(\ell, S\cup\{i\}), c(x, S\cup\{i\}))$ belong to two different attractor regions.
Observe that $\delta(\ell, S)$ and $ \delta(\ell, S\cup\{i\})$ might differ only in $+i$, and that $c(x, S)\leq c(x, S\cup\{i\})$ so the conditions of Lemma~\ref{lem:VAseq} hold.
Applying the lemma, we get that there is an attractor region that is reachable from both, a contradiction.
\end{proof}
\noindent{\bf Tightness of Theorem~\ref{thm_impossible}.} We next show, via an example, that the impossibility result of this section is tight.
\begin{example}
We show a protocol over the clique $K_n$ with label space $\Sigma = \{0, 1\}$. For each node $i\in[n]$, the reaction function is
\[
\delta_i(\ell) =
\begin{cases}
0^{n-1} & \text{if every incoming edge is labeled }0 \\
1^{n-1} & \text{otherwise}\,.
\end{cases}
\]
Observe that $0^{n(n-1)}$ and $1^{n(n-1)}$ are both stable labelings. Also observe that if at some time step there is more than one node $i$ whose outgoing edges are all labeled $1$, then the labeling sequence is guaranteed to converge to $1^{n(n-1)}$. Hence, an oscillation occurs only if at every time step exactly one node's outgoing edges are all labeled $1$. This implies that for an oscillation to occur, (1) two nodes must be activated at each time step, and (2) if $i$'s outgoing edges are labeled $1$ at time $t$, then $i$ must be activated at time $t+1$. No $r$-fair schedule for $r<n-1$ can satisfy these two conditions, so the labeling sequence must converge.
\end{example}
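The convergence behavior described in this example is easy to check empirically. The following Python sketch is entirely ours and purely illustrative; in particular, the window-based schedule generator is merely one simple way to enforce $r$-fairness (every node is activated at least once in every window of $r$ consecutive steps). It simulates the protocol on $K_n$ from a random initial labeling and reports the time of convergence to a stable labeling:
\begin{verbatim}
import random

def step(state, active):
    # state[i] = the common bit on all of node i's outgoing edges
    new = list(state)
    for i in active:
        incoming = [state[j] for j in range(len(state)) if j != i]
        new[i] = 0 if all(b == 0 for b in incoming) else 1
    return new

def run(n=5, r=3, max_steps=500, seed=1):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    pending = set(range(n))          # not yet activated in this window
    for t in range(max_steps):
        active = set(rng.sample(range(n), rng.randint(1, n)))
        if t % r == r - 1:
            active |= pending        # force r-fairness at window's end
        pending -= active
        if t % r == r - 1:
            pending = set(range(n))  # open a fresh window
        state = step(state, active)
        if state in ([0] * n, [1] * n):
            return t + 1             # reached a stable labeling
    return None

print(run())
\end{verbatim}
With $n=5$ and $r=3<n-1$, the labeling sequence converges quickly, in line with the analysis above.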
\noindent{\bf Implications for games, routing, circuits, and more.} We point out that best-response dynamics can be formalized in our model as the scenario that both the output set of each node and the labels of each of its outgoing edges are the same set and represent that node's (player's) possible strategies. Thus, a corollary of Theorem~\ref{thm_impossible} is a generalization of the main result in~\cite{JSW11} (Theorem 4.1) for this setting, showing that multiple equilibria imply instability even under linearly-fair schedules. Consequently, Theorem~\ref{thm_impossible} immediately implies strong nonconvergence results for the spectrum of environments formalized in~\cite{JSW11} (Section 5): routing and congestion control on the Internet, asynchronous circuits, and diffusion of technologies in social networks. We refer the reader to~\cite{JSW11,JLSW15} for more details.
\section{Complexity of Verifying Self-Stabilization} \label{ssec:hardness}
We now turn our attention to the complexity of deciding whether a protocol is $r$-stabilizing. We present two complementary results for the communication complexity and the computational complexity models. Our first result establishes that the communication complexity of verifying $r$-stabilization is exponential in the number of nodes $n$.
\begin{theorem}\label{thm:cc}
Let $A_n = (\Sigma,(\delta_1, \ldots, \delta_n))$ be a stateless protocol. Consider the following 2-party communication problem. Alice receives as input a description of $\delta_1$ and Bob receives $\delta_2$. They both have access to $\delta_3, \ldots, \delta_n $, and they need to decide whether $A_n$ is label $r$-stabilizing. For any value of $r$, deciding whether $A_n$ is label $r$-stabilizing requires $2^{\Omega(n)}$ bits of communication.
\end{theorem}
Our proof utilizes combinatorial ``snake-in-the-box'' constructions, as in~\cite{JSW11}. To prove the theorem for every possible value of $r$, two separate reductions, corresponding to two regimes of $r$ (``higher'' and ``lower'' values), are presented. The proof appears in Appendix~\ref{apdx:hardness}.
The above communication complexity hardness result requires the representation of the reaction
functions to (potentially) be exponentially long. What if the reaction functions can be succinctly
described? We complement the above result by presenting a strong computational complexity hardness result for the case that each reaction function is given explicitly in the form of a Boolean circuit.
\begin{theorem}\label{thm:pspace}
For every $r$, deciding whether a stateless protocol is label $r$-stabilizing is PSPACE-complete.
\end{theorem}
Our proof of Theorem~\ref{thm:pspace} relies on a ``self-stabilization-preserving reduction''~\cite{engelberg2013best}. We first show that the \textsc{String-oscillation} problem can be reduced to the problem of deciding whether a \emph{stateful protocol}, in which reaction functions may depend on both their incoming and their \emph{outgoing} labels, is label $r$-stabilizing. Next, we show how to construct a stateless protocol from a stateful protocol without altering its stabilization properties. The proof appears in Appendix~\ref{apdx:hardness}.
\section*{Part II: Computational Power of Stateless Protocols}
\section{Output Stabilization, Branching Programs, and Circuits}\label{sec:os}
We exhibit output-stabilizing protocols and inspect their computational power in comparison to label-stabilizing algorithms. Consider the clique topology, $K_n$. Note that on the clique every Boolean function can be computed using a 1-bit label and within one round. A similar argument can be made for the \emph{star} topology. We therefore examine poorly connected topologies to gain a better understanding of the computational power and limitations of our model. As a first step in this direction, we consider the \emph{unidirectional} and the \emph{bidirectional} rings.
The ring topology is one of the simplest network topologies, yet even this simple topology exhibits interesting and highly nontrivial phenomena. For this reason,
computation on a ring has been widely studied in a broad variety of contexts, including asynchronous communication, indistinguishable processors, lack of global knowledge, etc.~\cite{attiya1988computing, abrahamson1986probabilistic, burns1980formal, awerbuch1990trade, goldreich1986effects, pachl1982technique, moran1986gap, mansour1986bit}.
Our main positive results establish that even when considering a system with limited expressive power, in terms of relatively succinct label size (logarithmic in $n$), all problems that can be solved efficiently via centralized computation can also be solved in our restricted model.
We obtain a separation between protocols on the unidirectional and bidirectional rings through an exact characterization of the computational power of each of these environments.
We show that protocols on the unidirectional ring have the same computational power as branching programs of polynomial size, i.e., they can compute the languages in $\textup{L/poly}$. This is also the class of all languages that can be decided by a logspace Turing machine that receives, along with each input in $\{0,1\}^n$, an auxiliary \emph{advice} string that depends only on $n$ and is of length polynomial in $n$. This advice is given on a secondary read-only tape, separate from the input tape and work tape.
Our second theorem shows that protocols on the bidirectional ring are stronger, and have the same computational power as polynomial-sized Boolean circuits, the class $\textup{P/poly}$. This is also the class of all languages that can be decided by polynomial time Turing machines with a polynomial sized advice.
Our characterization results reveal that the expressiveness for bidirectional rings is dramatically richer. This, in some sense, reflects the sequential nature of computation on a unidirectional ring, as opposed to the parallelism afforded by the bidirectional ring. We explain below that these results imply that proving super-logarithmic lower bounds for the unidirectional or bidirectional rings would imply major breakthroughs in complexity theory.
\begin{definition}
Given any function $h:\mathbb{N}\to\mathbb{N}$, a language $\mathcal{L}\subseteq\{0,1\}^*$ belongs to the class $\textup{OS}^u_{h}$ if $\mathcal{L}$ is decided by some family $A_1,A_2\ldots$ of stateless protocols such that each $A_n$ has label complexity $O(h(n))$ and runs on the unidirectional $n$-ring.
\end{definition}
\begin{theorem}\label{thm:lpoly}
$\textup{OS}^u_{\log}\equiv \textup{L/poly}$.
\end{theorem}
While we do not have a complete characterization for the class $\textup{OS}^b_{\log}$, we can characterize a simple variant of this class. If we allow polynomially many ``helper'' nodes whose inputs do not affect the function value, then bidirectional protocols with logarithmic label complexity and polynomial round complexity have the same computational power as circuits of polynomial size.
\begin{definition}
Given any function $h:\mathbb{N}\to\mathbb{N}$, a language $\mathcal{L}\subseteq\{0,1\}^*$ belongs to the class $\widetilde{\textup{OS}}^b_{h}$ if $\mathcal{L}$ is decided by some family $A_1,A_2\ldots$ of stateless protocols such that each $A_n$ has polynomial round complexity, label complexity $O(h(n))$, and runs on the bidirectional $p(n)$-ring, where $p(n)$ is polynomial in $n$.
\end{definition}
\begin{theorem}\label{thm:ppoly}
$\widetilde{\textup{OS}}^b_{\log} \equiv \textup{P/poly}$.
\end{theorem}
The complete proof of Theorem~\ref{thm:ppoly} is given in Appendix~\ref{apdx:os}. The central idea in simulating a Boolean circuit on the bidirectional ring is to enable the nodes to count simultaneously using a synchronous $D$-counter. The way we think of counting is that after a ``sufficiently long'' time has passed, all nodes simultaneously see the same sequence of incoming labels with the numbers $0, 1, 2, \ldots$, repeated indefinitely. We now show a stateless $D$-counter protocol with a label complexity of $L_n = O(\log(D))$. We point out that this protocol does not compute any function; it only reaches this desired global sequence of states. Thus, our reaction functions capture a mapping from incoming labels to outgoing labels only. The label complexity of simulating a global counter depends \emph{only} on the counting value, $D$. Our protocol utilizes a $2$-counter protocol as a building block. We first present the $2$-counter protocol, followed by a $D$-counter protocol.
\begin{claim}
\label{lem:flipping}
For every odd-sized bidirectional $n$-ring there exists a stateless $2$-counter protocol.
\end{claim}
\begin{proof}
We begin with a proof for $n=3$.
The label space is $\Sigma=\{0, 1\}^2$.
All reaction functions send the same
label in both directions, so we refer to their outgoing labels as two
bits. We use the notation $\ell_{(i, j)}[k]$ to denote the $k^{th}$ bit of the label $\ell_{(i, j)}\in \Sigma$.
The reaction function of node $1$,
\[
\delta_1(\ell_{(2, 1)}, \ell_{(3, 1)})[1] = \text{NOT}(\ell_{(2, 1)}[1])
\]
\[
\delta_1(\ell_{(2, 1)}, \ell_{(3, 1)})[2] = \ell_{(3, 1)}[1]
\]
The reaction function of node $2$,
\[
\delta_2(\ell_{(1, 2)}, \ell_{(3, 2)})[1] = \ell_{(1, 2)}[1]
\]
\[
\delta_2(\ell_{(1, 2)}, \ell_{(3, 2)})[2] = \text{NOT}(\ell_{(1, 2)}[2])
\]
The reaction function of node $3$ is a XOR over the first bit of its incoming labels:
\[
\delta_3(\ell_{(1, 3)}, \ell_{(2, 3)})[1] =
\text{XOR}(\ell_{(1, 3)}[1],\ell_{(2, 3)}[1])
\]
\[
\delta_3(\ell_{(1, 3)}, \ell_{(2, 3)})[2] = \ell_{(2, 3)}[2]
\]
Assume that $(\ell_{(2,1)}[1], \ell_{(1,2)}[1]) = (0,0)$. Observe that at the next time step this pair changes to $(0, 1)$, then to $(1, 1)$, then $(1, 0)$, and back to $(0,0)$. Note that the two bits agree/disagree every other time step;
therefore, node $3$ must send alternating bits in its first label bit, and the system converges after at most two time steps.
To generalize to any odd $n$, we assign the nodes' reaction functions $\delta'_1,\delta'_2, \ldots , \delta'_n$ as follows. $\delta'_n = \delta_3$,
$\delta'_{1} = \delta_1$.
For every other node $1<j<n$, if $j$ is even then
\[
\delta'_j(\ell_{(j-1, j)}, \ell_{(j+1, j)})[1] = \ell_{(j-1, j)}[1]
\]
\[
\delta'_j(\ell_{(j-1, j)}, \ell_{(j+1, j)})[2] = \text{NOT}(\ell_{(j-1, j)}[2])
\]
and if $j$ is odd then,
\[
\delta'_j(\ell_{(j-1, j)}, \ell_{(j+1, j)})[1] = \ell_{(j-1, j)}[1]
\]
\[
\delta'_j(\ell_{(j-1, j)}, \ell_{(j+1, j)})[2] = \ell_{(j-1, j)}[2]
\]
Notice that the outgoing first-bit label sequence of node $1$ is $(0, 0, 1, 1, 0, 0, \ldots)$. Since node $2$ repeats it, its first-bit label sequence is the same, but with a delay of one time step. As the number of nodes is odd, the delay at node $n-1$ must be odd too; thus nodes $1$ and $n-1$ agree/disagree every other time step, and node $n$ outputs alternating bits in its first-bit label sequence. Since node $1$ updates its outgoing second bit according to node $n$'s first bit, it will output alternating bits in its second-bit sequence. Node $2$ negates the value it sees in the second bit; thus they both observe the same bit at every time step. We complete the proof by induction over $n$, using the fact that $n$ is odd.
\end{proof}
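Since the $n=3$ base case is finite, it can also be verified by brute force over all $2^6$ initial labelings. The following Python sketch is ours and assumes the fully synchronous schedule, in which all three nodes are activated at every time step (the natural reading of the step-by-step analysis above); it checks that node $3$'s first label bit alternates after a transient of at most two steps:
\begin{verbatim}
import itertools

def step(l1, l2, l3):
    # one synchronous activation of the three reaction functions above
    a1, a2 = l1; b1, b2 = l2; c1, c2 = l3
    return ((1 - b1, c1),        # node 1: NOT of node 2's bit; node 3's bit
            (a1, 1 - a2),        # node 2: copy first bit; negate second bit
            (a1 ^ b1, b2))       # node 3: XOR of first bits; copy second bit

for bits in itertools.product((0, 1), repeat=6):
    l1, l2, l3 = bits[0:2], bits[2:4], bits[4:6]
    trace = []
    for _ in range(12):
        l1, l2, l3 = step(l1, l2, l3)
        trace.append(l3[0])
    # after at most two steps, node 3's first bit alternates forever
    assert all(trace[t] != trace[t + 1] for t in range(2, 11))
\end{verbatim}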
\begin{claim}\label{clm:counter}
For every odd-sized bidirectional $n$-ring there exists a stateless $D$-counter protocol.
\end{claim}
\begin{proof}
Suppose we have a ring of size $n=2$, the label space $\Sigma =
\{0, 1, \ldots , D-1\}$, and nodes $1$ and $2$ have the same reaction
function, which increments the value on their incoming label, modulo $D$.
Let $(\ell_{(1, 2)}, \ell_{(2, 1)}) = (x,y)$ be the initial labeling.
After one time step, it will turn to $(y+1, x+1)$, after two time steps, $(x+2, y+2)$, and in general the sequence of incoming labels of node $1$ is
\[(y, x+1, y+2, x+3, ...)\]
and of node $2$ is
\[(x, y+1, x+2, y+3, ...)\]
Now, assume
they both knew $x-y$; then by adding it to the $y$ subsequence, they
would be able to count together. Clearly, they also need to be able to distinguish between the ``$x$'' and ``$y$'' subsequences. Based on this observation, we describe the counter algorithm below.
\begin{enumerate}
\item The label space is $\Sigma = \{ (b_1, b_2, z, g, c) \} $, where:
\begin{enumerate}
\item $b_1$ and $b_2$ are the two bits that implement the $2$-counter protocol from Claim~\ref{lem:flipping}.
\item $z, g, c$ are integers from $[D]$.
\end{enumerate}
\item The reaction function of node $1$ assigns $z=x+1$ for $\ell_{(2, 1)} = (-, -, x, -, -)$.
\item Every other node $j$ assigns $z=y+1$ for $\ell_{(j-1, j)} = (-, -, y, -,
-)$.
\item Observe that all odd/even nodes have the same outgoing sequence; thus, node $1$ can see the two subsequences in the $z$ field at the same time and calculate the gap $g=x-y$. Then, it simply propagates it to all nodes in a clockwise direction in the $g$ field.
\item Finally, the counter value $c$ is set for every odd node as
\[
c =
\begin{cases}
z+1+g & \text{If } b_2=0 \\
z+1 & \text{Otherwise}
\end{cases}
\]
and for even nodes as
\[
c =
\begin{cases}
z+1 & \text{If } b_2=0 \\
z+1+g & \text{Otherwise}
\end{cases}
\]
\end{enumerate}
The round complexity of the algorithm is $R_n = 4n$, as $n$ time steps are needed for each of the fields $b_1, b_2, z$, and $g$ to stabilize. The label complexity is $L_n = 2+3\log(D)$.
\end{proof}
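The two-node warm-up at the beginning of the proof, together with the gap-correction idea, can be reproduced in a few lines. In the following Python sketch (ours, and purely illustrative), node $1$ adds the gap $g=x-y$ to every other entry of its incoming sequence and thereby recovers a global counter:
\begin{verbatim}
D, x, y = 7, 3, 5                     # modulus and initial labels
l12, l21 = x, y                       # label node 1 -> 2, label node 2 -> 1
seen1 = []                            # node 1's incoming labels over time
for _ in range(8):
    seen1.append(l21)
    l12, l21 = (l21 + 1) % D, (l12 + 1) % D
print(seen1)                          # [y, x+1, y+2, x+3, ...] mod D
g = (x - y) % D                       # the gap, computed once by node 1
count = [(v + g) % D if t % 2 == 0 else v for t, v in enumerate(seen1)]
print(count)                          # [x, x+1, x+2, ...] mod D: a counter
\end{verbatim}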
\begin{corollary}
\label{coro:1}
$P\subseteq \widetilde{\textup{OS}}^b_{\log}$ and $L\subseteq \textup{OS}^u_{\log}$.
\end{corollary}
\begin{corollary}
\label{coro:2}
If $\mathcal{L} \in P$ but $\mathcal{L} \notin \textup{OS}^u_{\log}$, then $P \neq LSPACE$.
\end{corollary}
\begin{corollary}
\label{coro:3}
If $\mathcal{L}\in \text{NP}$ but $\mathcal{L}\notin \widetilde{\textup{OS}}^b_{\log}$, then $P\neq NP$.
\end{corollary}
It follows from the above corollaries that: (1) obtaining super-logarithmic lower bounds on the label complexity for either unidirectional or bidirectional rings would imply major breakthroughs in complexity theory and is thus challenging, and (2) every efficiently-decidable language can be computed in a decentralized, stateless manner with low label complexity.
Nevertheless, a simple counting argument reveals that some functions cannot be computed by protocols with sublinear label complexity. In fact, this is true for any network topology in which the maximum degree is constant, and even if we do not require label stabilization. This implies that there are problems for which we cannot hope to achieve a better complexity than the trivial upper bound of Proposition \ref{prop:genupperbd}.
\begin{theorem}\label{thm:hardFunc}
Let $\{G_n\}_{n=1}^{\infty}$ be a graph family in which the maximal degree of $G_n$ is constant, i.e., there is some $k\in \mathbb{N}$ so that for every $n$, $\Delta(G_n) = k$. Then for every $n>8$, there exists a function $f:\{0,
1\}^n\rightarrow \{0, 1\}$ that cannot be computed by a stateless protocol on $G_n$ with $L_n < n/(4k)$.
\end{theorem}
\begin{proof}
The number of possible protocols over the label space $\Sigma$ is at most $(2|\Sigma|^k)^{2n|\Sigma|^k}$.
A single protocol cannot compute two different Boolean functions: the two functions disagree on some input, on which the protocol's behavior must be incorrect for one of them. It follows that if every Boolean function were computable, the number of protocols would be at least the number of Boolean functions, $2^{2^n}$:
\[
(2|\Sigma|^k)^{2n|\Sigma|^k} \geq 2^{2^{n}}
\]
\[
2n|\Sigma|^k \cdot \log(2|\Sigma|^k) \geq 2^n
\]
\[
\log(n) +\log(2|\Sigma|^k)+\log( \log(2|\Sigma|^k) )\geq n
\]
\[
\log(n)+2k\log(|\Sigma|) \geq n
\]
\[
2k\log(|\Sigma|) \geq n-\log(n)
\]
\[
\log(|\Sigma|) \geq \frac{1}{2k} (n-\log(n)) \geq \frac{1}{2k} (n-\frac{n}{2})
\]
\[
L_n \geq \frac{n}{4k}\;.
\]
\end{proof}
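To make the counting argument concrete, the following Python sketch (ours; purely illustrative) evaluates the two sides of the first inequality for sample parameters, confirming that when $L_n$ is below the theorem's threshold there are far too few protocols to realize all Boolean functions:
\begin{verbatim}
from math import log2

def log2_num_protocols(n, k, L):
    # log2 of the upper bound (2|Sigma|^k)^(2n|Sigma|^k), with |Sigma| = 2^L
    s_k = (2 ** L) ** k
    return 2 * n * s_k * log2(2 * s_k)

n, k, L = 20, 2, 2                     # here L < n/(4k) = 2.5
print(log2_num_protocols(n, k, L))     # ~3.2e3 bits' worth of protocols
print(2 ** n)                          # 2^20 bits' worth of Boolean functions
assert log2_num_protocols(n, k, L) < 2 ** n   # too few protocols
\end{verbatim}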
\section{A Method for Label-stabilizing Lower Bounds}\label{sec:ls}
In this section, we present a method for deriving lower bounds on label-stabilizing protocols on an arbitrary topology. We then conclude linear and logarithmic lower bounds for concrete functions (\emph{equality} and \emph{majority}) on the ring topology.
We now show that a variation on the \emph{fooling-set method}~\cite{arora2009computational} can be used to prove lower bounds on label complexity. We present the formal definition, followed by our lower bound theorem, which is proven in Appendix~\ref{apdx:lsbound}.
\begin{definition}
A \emph{fooling set} for a function $f:\{0,1\}^{n} \rightarrow \{0,1\}$ is a set $S\subseteq \{0,1\}^{m}\times \{0,1\}^{n-m}$, for some $m\leq n$, such that (1) for every $(x,y)\in S$, $f(x,y) = b$, and (2) for every distinct $(x,y),(x',y')\in S$, either $f(x ,y')\neq b$ or $f(x', y)\neq b$. \end{definition}
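As a standard illustration (recorded here only for concreteness), take $n=2m$, $b=1$ and $f=\textsc{Eq}_n$: the set $S=\{(x,x): x\in\{0,1\}^{m}\}$ is a fooling set, since $f(x,x)=1$ for every $x$, whereas for distinct $(x,x),(x',x')\in S$ we have $f(x,x')\neq 1$ because $x\neq x'$.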
\begin{theorem}
\label{thm:lowerbound}
Let $m\in\mathbb{N}$, let $f:\{0, 1\}^{n} \rightarrow \{0, 1\}$ be a function, and let $G=([n],E)$ be a directed graph. Define $C=\{(i,j)\in E:i\leq m<j\}$ and $D=\{(i,j)\in E:j\leq m<i\}$,
the sets of edges out of and into the node subset $[m]$. Suppose $f$ has a fooling set $S\subseteq\{0,1\}^{m}\times\{0,1\}^{n-m}$ such that, for every $(x,y),(x^\prime,y^\prime)\in S$,
\begin{itemize}
\item if $(i,j)\in C$, then $x_{i} = x^\prime_{i}$, and
\item if $(i,j)\in D$, then $y_i = y^\prime_i$.
\end{itemize}
Then every label-stabilizing protocol on $G$ that computes $f$ has label complexity at least $\frac{\log(|S|)}{|C|+|D|}$.
\end{theorem}
We apply this method to give lower bounds on the label complexity of label-stabilizing protocols for two important functions: the \emph{equality} function $\textsc{Eq}_n:\{0, 1\}^{n}\to\{0, 1\}$ defined by $\textsc{Eq}_n(x)=1$ if and only if $n$ is even and $(x_1,\ldots,x_{n/2})=(x_{n/2+1},\ldots,x_{n})$, and the \emph{majority} function $\textsc{Maj}_n:\{0, 1\}^n\to\{0,1\}$ defined by $\textsc{Maj}_{n}(x)=1$ if and only if $\Sigma_{i=1}^{n}x_i \geq n/2$. The proofs appear in Appendix \ref{apdx:lsbound}.
\begin{corollary}\label{cor:eq}
Every label-stabilizing protocol that computes $\textsc{Eq}_n$ on the bidirectional $n$-ring has label complexity at least $\frac{n-2}{8}$.
\end{corollary}
\begin{corollary}\label{cor:maj}
Every label-stabilizing protocol that computes $\textsc{Maj}_{n}$ on the bidirectional $n$-ring has label complexity at least $\log(\lfloor n/2\rfloor)/4$.
\end{corollary}
\section{Summary and Future Work} \label{sec:conc}
We formalized and explored the notion of stateless computation. Our investigation of stateless computation focused on two important aspects: (1) guaranteeing and verifying self-stabilization of stateless computation systems and (2) analyzing the computational power of such systems.
We presented a general necessary condition for self-stabilization, establishing that multiple stable labelings induce potential instability even under relatively fair schedules of node activation. We also investigated the communication and computational complexity of verifying that a system is self-stabilizing and exhibited hardness results for both models of computation. Our results on the computational power of stateless systems show that stateless computation is sufficiently powerful to solve a nontrivial range of computational problems in a manner that is both robust to transient faults and frugal in terms of message length. We provided exact characterizations of the languages that can be computed by output-stabilizing
protocols with logarithmic label size, on unidirectional and bidirectional rings.
We believe that we have but scratched the surface in our exploration of stateless computation and leave the reader with many open questions: (1) Identifying necessary and sufficient conditions on the reaction functions for self-stabilization. (2) Exploring ``almost stateless'' computation, e.g., computation with a constant number of internal memory bits. (3) Extending our research to specific network topologies such as the hypercube, torus, trees, etc. (4) Understanding the implications of randomized reaction functions for self-stabilization and computation.
\newpage
\section*{Acknowledgments}
We are grateful to the stimulating discussions with Wolfram Weise.
This work is partly supported by the RIKEN iTHES and iTHEMS Programs.
T.H. is grateful to the Aspen Center for Physics, supported in part by NSF Grants PHY1066292 and PHY1607611.
\section{Introduction}\label{Intro}
Let us begin with the preliminary definitions, concepts and relevant notation on trees. A \emph{graph} $G$ is a pair $(V,E)$ consisting of a countably-infinite set of \emph{vertices} $V$ and a set of \emph{edges} $E\subset \big\{(u, v): u, v\in V, u\neq v\big\}$ (which is a subset of $V\times V$). If $(u, v)\in E$, then we say that $u$ and $v$ are \emph{adjacent} (or \emph{neighbors}) and we denote this by $u\thicksim v$. For each vertex $v$, the \emph{degree} of $v$ is the number of vertices adjacent to $v$, which is denoted by $\mathrm{deg}(v)$. We say that $G$ is \emph{locally finite} if the degree of every vertex is finite. We call that a vertex $v$ is a \emph{terminal vertex} if $v$ has a unique neighbor. A \emph{path of length $n$} joining two vertices $u$ and $v$ is a finite sequence of $(n+1)$ distinct vertices $$u=u_0\thicksim u_1\thicksim u_2 \thicksim\cdots \thicksim u_n=v.$$
By a \emph{tree} $T$ we mean a locally finite, connected and simply-connected graph, i.e., for each pair of vertices there is one and only one path between them. We shall identify $T$ with the collection of its vertices $V$. In the present paper, the tree we consider has a distinguished vertex, which we call the root of $T$ and we denote it by $o$. The length of a path joining the root $o$ and a vertex $v$ is called the \emph{length} of $v$ and is denoted by $|v|$.
Given a tree $T$ rooted at $o$ and a vertex $v\in T$, a vertex $w$ is called a \emph{descendant} of $v$ if $v$ lies in the unique path from $o$ to $w$. In this case, the vertex $v$ is called an \emph{ancestor} of $w$. We denote by $v^{-}$ the neighbor of a vertex $v$ which is an ancestor of $v$, and the vertex $v$ is called a \emph{child} of $v^{-}$. For each $w\in T$, we denote the set of all children of $w$ by $\mathrm{Chi}(w)$. Given a vertex $v$, the \emph{sector determined by $v$} is the set $S_v$ consisting of $v$ and all its descendants. In particular, $S_o=T$.
Let $1\leqslant p<\infty$. The function space $\mathcal L^p(T)$ of $T$ is defined as the set of functions $f: T\rightarrow \mathbb C$ such that
$$\sum_{v\in T}|f(v)|^p<\infty.$$
It is easy to show that every $\mathcal{L}^p(T)$ is a Banach space when endowed with the norm
$$\|f\|_p:=\bigg(\sum_{v\in T}|f(v)|^p\bigg)^{\frac{1}{p}},$$
since it is isomorphic to the $p$-summable sequence space $\ell^p(V)$.
Moreover, the dual space of $\mathcal{L}^p(T)$ with $1<p<\infty$ can be identified with $\mathcal{L}^q(T)$ under the sesquilinear dual pairing
$$\langle f, g\rangle:=\sum_{v\in T}f(v)\overline{g(v)},\ \ \ f\in \mathcal{L}^p(T),\ \ g\in \mathcal{L}^q(T),$$
where $q=\frac{p}{p-1}$ is the conjugate exponent of $p$. In particular, $\mathcal L^2(T)$ is a Hilbert space with the obvious inner product.
When $p=\infty$, the space $\mathcal{L}^\infty(T)$ is the collection of bounded functions $f$ on $T$ equipped with the supremum norm $\|f\|_\infty=\sup\big\{|f(v)|: v\in T\big\}$. In addition, we say that $q=\infty$ is the conjugate exponent of $p=1$ and that $q=1$ is the conjugate exponent of $p=\infty$.
The interest of the study of operators on infinite trees is motivated mainly by the research in harmonic
analysis dealing with the spectrum of the Laplace operator on discrete structures, one can consult \cite{Ca1} and \cite{Ca2} for more information. Linear operators on discrete structures other than Laplacian have been studied by many authors. For instance:
\begin{itemize}
\item \emph{Toeplitz operators.} Pavone studied the properties of Toeplitz operators on a discrete group and discussed the extent to which they parallel the properties of Toeplitz operators on the Hardy space of the unit circle, see \cite{Pa, Pa2}; In \cite{CM}, Colonna and Mart\'{\i}nez-Avenda\~{n}o defined the Toeplitz operator on $\mathcal L^p$-spaces of a tree, where $1\leqslant p\leqslant \infty$, and then characterized the boundedness and the eigenvalues of such Toeplitz operators; Motivated by the paper \cite{CM}, Zhang and Zhao \cite{ZZ} established several sufficient conditions for Toeplitz operators to be compact on $\mathcal L^p$-spaces of a tree.
\item \emph{Composition operators.} Pavone \cite{Pa1} characterized the class of hypercyclic composition operators on the $\mathcal L^p$-space of the Poisson boundary of a homogeneous tree, and showed that such operators are actually chaotic. Allen, Colonna and Easley \cite{AC} investigated the boundedness, compactness and spectra of composition operators on the Lipschitz space of a tree; Later, Allen and Pons (\cite{AP, AP1}) obtained characterizations of the (weighted) composition operators that have closed range, or are bounded, compact, bounded below, invertible or Fredholm on weighted Banach spaces of a tree; In addition, the bounded, compact, invertible and isometric composition operators on Hardy spaces of homogeneous rooted trees were investigated in the recent paper \cite{Mu}.
\item \emph{Shift operators.} The operator-norm estimate, spectral properties, hyponormality, subnormality and hypercyclicity of (weighted) shift operators on certain function spaces of a tree were well-studied in \cite{J, Ma1, Ma}; Moreover, the norm estimate, kernels and eigenvalues of the shift operator (usually called the adjacency operator) on the $\mathcal L^p$-space of a graph were systematically investigated in the paper \cite{AB}.
\item {\emph{Multiplication operators.}} The boundedness, compactness, invertibility, hypercyclicity and essential norm of multiplication operators on Lipschitz-type spaces of a tree were discussed in a series of papers of Allen, Colonna, Easley and Prudhom \cite{Al2, Al3, Al1, Al4, Co1, Co2}.
\end{itemize}
However, little is known about the spectral properties, self-adjointness and positivity of classical operators on discrete structures. The purpose of this paper is to study some fundamental properties of Toeplitz operators on $\mathcal L^p$-spaces of a tree, such as the spectrum, self-adjointness, positivity and finite rank. Before stating the definition of the Toeplitz operator on $\mathcal L^p$-spaces of a tree, we need to introduce the following two important linear operators on such function spaces. For any function $f: T\rightarrow \mathbb C$, the operator $\nabla$ is defined by
\begin{align*}
(\nabla f)(u):=f(u)-\sum_{v^{-}=u}f(v), \ \ \ \ u\in T.
\end{align*}
Recall that the operator $\Delta$ is the transformation with domain $\mathcal L^1(T)$ defined by
$$(\Delta f)(u):=\sum_{v\in S_u} f(v), \ \ \ \ f\in \mathcal L^1(T), \ \ \ u\in T,$$
where $S_u$ is the sector determined by $u$. For more details concerning the above two operators, one can consult \cite{CM}.
Let $1\leqslant p \leqslant \infty$ and $\varphi$ be a complex-valued function on $T$. The \emph{Toeplitz operator} on $\mathcal L^p(T)$ with symbol $\varphi$ is defined by
$$T_\varphi f:=\Delta(\varphi \nabla f), \ \ \ \ f\in \mathcal L^p(T).$$
Some useful properties and important conclusions about $\nabla$, $\Delta$ and Toeplitz operators on $\mathcal L^p(T)$ will be included in the next section.
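Although $T$ is infinite, the action of $\nabla$, $\Delta$ and $T_\varphi$ on finitely supported functions can be computed exactly on a finite portion of the tree. The following Python sketch (entirely ours and purely illustrative) encodes a small rooted tree by a parent map, implements the three operators, and checks the inversion identity $\Delta(\nabla f)=f$ recorded in Lemma~\ref{DN} below:
\begin{verbatim}
# Minimal sketch on a finite rooted tree (vertices 0..4, 0 = root).
# f and phi vanish off this finite vertex set, so the finite sums
# below agree with the corresponding infinite-tree sums.
parent = {1: 0, 2: 0, 3: 1, 4: 1}          # edges child -> parent
children = {v: [c for c, p in parent.items() if p == v] for v in range(5)}

def nabla(f):
    return {u: f[u] - sum(f[v] for v in children[u]) for u in f}

def sector(u):                              # S_u: u and all its descendants
    out = [u]
    for c in children[u]:
        out += sector(c)
    return out

def delta(f):
    return {u: sum(f[v] for v in sector(u)) for u in f}

def toeplitz(phi, f):                       # T_phi f = Delta(phi * nabla f)
    g = nabla(f)
    return delta({u: phi[u] * g[u] for u in f})

f   = {0: 1.0, 1: 2.0, 2: -1.0, 3: 0.5, 4: 0.0}
phi = {0: 0.0, 1: 1.0, 2: 0.0, 3: 0.0, 4: 0.0}
assert delta(nabla(f)) == f                 # Lemma: Delta(nabla f) = f
print(toeplitz(phi, f))
\end{verbatim}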
The main part of this paper is organized as follows. In Section \ref{spectra}, we will study the spectra of some classes of bounded Toeplitz operators on the Banach spaces $\mathcal L^p(T)$ with $1\leqslant p\leqslant \infty$. More precisely, we will show in Theorems \ref{spectrum} and \ref{spectrum2} that the spectrum of the Toeplitz operator is equal to the closure of the range of its symbol under certain mild assumptions. This answers an open question posed by Colonna and Mart\'{\i}nez-Avenda\~{n}o \cite{CM}. In Section \ref{positivity}, we investigate the self-adjointness and positivity of Toeplitz operators on the Hilbert space $\mathcal L^2(T)$. Surprisingly, we find that there is no nontrivial self-adjoint Toeplitz operator on $\mathcal L^2(T)$, see Theorem \ref{SP} for the detailed discussion. Section \ref{finite rank} is devoted to the characterization of finite rank Toeplitz operators on $\mathcal L^p(T)$ with $1\leqslant p\leqslant \infty$. In fact, we will prove that the Toeplitz operator on $\mathcal L^p(T)$ is of finite rank if and only if the support of its symbol is a finite set, see Theorem \ref{FR} for the proof.
\section{Preliminary}\label{S2}
In this section, we recall some important definitions and known results which will be required in the next three sections. We shall make use of the following derivative of a function on a tree repeatedly.
\begin{defi}\label{f'}
Given a complex-valued function $f$ on a tree $T$. We define the function $f_{-}$ on $T$ by
\begin{align*}
f_{-}(v)=
\begin{cases}
f(v^{-}), & \mathrm{if} \ \ \ v\neq o, \vspace{2mm}\\
0, & \mathrm{if} \ \ \ v=o.
\end{cases}
\end{align*}
In addition, the function $f'$ on $T$ is defined by
$$f'(v)=f(v)-f_{-}(v), \ \ \ v\in T.$$
\end{defi}
Recall that the operators $\nabla$ and $\Delta$ were introduced in Section \ref{Intro}. The relationship between these two operators on $\mathcal L^1(T)$ is given by the following lemma, see \cite[Proposition 5.7]{CM} if necessary.
\begin{lem}\label{DN}
If $f\in \mathcal L^{1}(T)$, then $\Delta(\nabla f)=f$ and $\nabla(\Delta f)=f.$
\end{lem}
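The first identity can be seen by a telescoping argument: for $f\in\mathcal L^1(T)$, absolute convergence permits rearranging the double sum, and every vertex $w\in S_u\setminus\{u\}$ appears exactly once as a child term, so that
$$\Delta(\nabla f)(u)=\sum_{v\in S_u}\Big(f(v)-\sum_{w^{-}=v}f(w)\Big)=f(u), \ \ \ \ u\in T.$$
(This is only a sketch; the complete proof is given in \cite{CM}.)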
The following expression for the Toeplitz operator on $\mathcal L^p(T)$ is quite useful in the study of the boundedness, compactness, positivity and spectra of Toeplitz operators, which was proved in \cite[Lemma 6.2]{CM}.
\begin{lem}\label{T}
Let $1<p\leqslant \infty$ and let $q$ be the conjugate exponent of $p$.
If $\varphi$ and $\varphi'$ are both in $\mathcal L^q(T)$ and $f\in \mathcal L^p(T)$, then we have that $\varphi \nabla f\in \mathcal L^1(T)$ and
$$T_\varphi f=f\varphi_{-}+\Delta(f\varphi').$$
\end{lem}
The next two lemmas were established in \cite[Theorems 6.4 and 6.6]{CM}, which give nice sufficient conditions for the boundedness of Toeplitz operators on $\mathcal L^p(T)$ with $1\leqslant p \leqslant \infty$.
\begin{lem}\label{TB}
Assume that $1\leqslant p<\infty$ and $q$ is the conjugate exponent of $p$. If the functions $\varphi$ and $\widehat{\varphi}$ are both in $\mathcal L^q(T)$, then the Toeplitz operator $T_\varphi$ is bounded on $\mathcal L^p(T)$, where $\widehat{\varphi}$ is defined by $$\widehat{\varphi}(v):=(|v|+1)\varphi'(v), \ \ \ \ v\in T.$$
\end{lem}
\begin{lem}\label{TB2}
If $\varphi$ and $\varphi'$ are both in $\mathcal L^1(T)$, then the Toeplitz operator $T_\varphi$ is bounded on $\mathcal L^\infty(T)$.
\end{lem}
Recall that \cite[Theorem 6.10]{CM} obtained a characterization on the point spectrum (the set of eigenvalues) of the Toeplitz operator on $\mathcal L^p(T)$ in terms of the range of its symbol. For the sake of convenience, we quote this theorem in the following.
\begin{lem}\label{PS}
Let $1\leqslant p\leqslant \infty$ and let $q$ be the conjugate exponent of $p$.
Suppose that $\varphi, \varphi'\in \mathcal L^q(T)$ and the Toeplitz operator $T_\varphi$ is bounded on $\mathcal L^p(T)$. Then:\\
$(a)$ $\sigma_p(T_\varphi)=\{\varphi(w): w\in T\}$ if there is no nonzero function $g\in \mathcal L^p(T)$ with $\nabla g=0$;\\
$(b)$ $\sigma_p(T_\varphi)=\{0\}\cup\{\varphi(w): w\in T\}$ if there is a nonzero function $g\in \mathcal L^p(T)$ with $\nabla g=0$.
\end{lem}
\section{The spectra of Toeplitz operators}\label{spectra}
The study of the spectral properties of operators on infinite trees is relatively new. In this section, we investigate the spectra of Toeplitz operators on $\mathcal L^p(T)$ with $1\leqslant p\leqslant \infty$. Let $q$ be the conjugate exponent of $p$ and $\varphi$ be a function in $\mathcal L^q(T)$. Based on the characterization of eigenvalues of Toeplitz operators (Lemma \ref{PS}), we show that the spectrum of the Toeplitz operator $T_\varphi: \mathcal L^p(T)\rightarrow \mathcal L^p(T)$ with $1\leqslant p<\infty$ equals the closure of $\varphi(T)$ if $\widehat{\varphi}$ is also in $\mathcal L^q(T)$. Recall that the function $\widehat{\varphi}$ was introduced in Lemma \ref{TB}. On the other hand, we also prove that the spectrum of the Toeplitz operator $T_\varphi: \mathcal L^\infty(T)\rightarrow \mathcal L^\infty(T)$ coincides with the closure of $\varphi(T)$ when $\varphi'\in \mathcal L^1(T)$.
Let us first consider the case $1\leqslant p<\infty$. As the spectrum of the Toeplitz operator on $\mathcal L^1(T)$ was obtained in \cite{CM} via the multiplication operator with the same symbol, it is sufficient for us to discuss the case $1<p<\infty$.
\begin{thm}\label{spectrum}
Let $1\leqslant p<\infty$ and $q$ be the conjugate exponent of $p$. Suppose that $\varphi$ and $\widehat{\varphi}$ are both in $\mathcal L^{q}(T)$, where $$\widehat{\varphi}(v)=(|v|+1)\varphi'(v), \ \ \ \ v\in T.$$ Then $T_\varphi$ is bounded on $\mathcal L^p(T)$ and in this case we have $$\sigma(T_{\varphi})=\mathrm{clos}\{\varphi(v):v\in T\}.$$
\end{thm}
\begin{proof}
As we mentioned above, the conclusion for the case of $p=1$ was obtained in Section 7 of \cite{CM}. So we need only to consider the case for $p>1$. By Lemma \ref{TB}, we see that the Toeplitz operator $T_\varphi$ is bounded on $\mathcal L^p(T)$. Clearly, $\widehat{\varphi}\in \mathcal L^q(T)$ implies that $\varphi'\in \mathcal L^q(T)$. Then using Lemma \ref{PS}, we have
$$\mathrm{clos}\{\varphi(w):w\in T\}=\mathrm{clos}[\sigma_p(T_\varphi)]\subset \sigma(T_\varphi).$$
To prove the reverse inclusion, it is sufficient to show that $T_\varphi-\lambda I$ is invertible on $\mathcal L^p(T)$ if $\lambda$ is not in the closure of the range of $\varphi$, where $I$ is the identity operator on $\mathcal L^p(T)$. Observe that $\lambda \notin \mathrm{clos}\{\varphi(v):v\in T\}$ implies that $\lambda$ is not an eigenvalue of $T_\varphi$. This means that $T_\varphi-\lambda I$ is injective from $\mathcal L^p(T)$ to $\mathcal L^p(T)$. According to the Banach inverse operator theorem, we need only show that $T_\varphi-\lambda I$ is surjective on $\mathcal L^p(T)$ if $\lambda\notin \mathrm{clos}\{\varphi(v):v\in T\}$.
Suppose that $$\inf_{v\in T}|\varphi(v)-\lambda|\geqslant \delta $$
for some positive constant $\delta$. Let $\delta_0=\min\{\delta, |\lambda|\}$. Then we have that $\delta_0>0$ and
$$|\varphi(v)-\lambda|\geqslant \delta_0$$
and $$ |\varphi_-(v)-\lambda|\geqslant \delta_0$$
for all $v\in T$. Letting $g$ be a function in $\mathcal L^p(T)$, we define
\begin{align}\label{f}
f:=\frac{1}{\lambda}\bigg[\Delta\Big(\frac{\varphi}{\varphi-\lambda}\nabla g\Big)-g\bigg].
\end{align}
In the following, we will show that $f\in \mathcal L^p(T)$ and $f$ is the preimage of $g$ under $T_\varphi-\lambda I$.
To show $f\in \mathcal L^p(T)$, we need only to prove that
$$\Delta\Big(\frac{\varphi}{\varphi-\lambda}\nabla g\Big)\in \mathcal L^p(T).$$
Let us show $\widehat{\big(\frac{\varphi}{\varphi-\lambda}\big)}\in \mathcal L^{q}(T)$ first. Noting that
$$\widehat{\Big(\frac{\varphi}{\varphi-\lambda}\Big)}(v)=(|v|+1)\Big(\frac{\varphi}{\varphi-\lambda}\Big)'(v)=
(|v|+1)\Big[\frac{\varphi(v)}{\varphi(v)-\lambda}-\frac{\varphi_-(v)}{\varphi_-(v)-\lambda}\Big]$$
for $v\in T$,
we have
\begin{align*}
\bigg\|\widehat{\Big(\frac{\varphi}{\varphi-\lambda}\Big)}\bigg\|_{q}&=\bigg[\sum\limits_{v\in T}\Big|(|v|+1)\Big(\frac{\varphi(v)}{\varphi(v)-\lambda}-\frac{\varphi_-(v)}{\varphi_-(v)-\lambda}\Big)\Big|^{q}\bigg]^{\frac{1}{q}}\\
&=\bigg[\sum\limits_{v\in T}\Big|(|v|+1)\Big(\frac{\varphi(v)}{\varphi(v)-\lambda}-\frac{\varphi_-(v)}{\varphi(v)-\lambda}+
\frac{\varphi_-(v)}{\varphi(v)-\lambda}-\frac{\varphi_-(v)}{\varphi_-(v)-\lambda}\Big)\Big|^{q}\bigg]^{\frac{1}{q}}.
\end{align*}
Then triangle inequality gives that
\begin{align*}
\bigg\|\widehat{\Big(\frac{\varphi}{\varphi-\lambda}\Big)}\bigg\|_{q}&\leqslant \bigg[\sum\limits_{v\in T}\Big|(|v|+1)\frac{\varphi(v)-\varphi_-(v)}{\varphi(v)-\lambda}\Big|^{q}\bigg]^{\frac{1}{q}}+
\bigg[\sum\limits_{v\in T}\Big|(|v|+1)\frac{\varphi_-(v)[\varphi(v)-\varphi_-(v)]}{[\varphi_-(v)-\lambda][\varphi(v)-\lambda]}\Big|^{q}\bigg]^{\frac{1}{q}}\\
&=\bigg[\sum\limits_{v\in T}\Big|(|v|+1)\frac{\varphi'(v)}{\varphi(v)-\lambda}\Big|^{q}\bigg]^{\frac{1}{q}}+
\bigg[\sum\limits_{v\in T}\Big|(|v|+1)\frac{\varphi'(v)\varphi_-(v)}{[\varphi_-(v)-\lambda][\varphi(v)-\lambda]}
\Big|^{q}\bigg]^{\frac{1}{q}}\\
&\leqslant \frac{1}{\delta_0}\Big(\sum\limits_{v\in T}\big|(|v|+1)\varphi'(v)\big|^{q}\Big)^{\frac{1}{q}}+
\frac{1}{\delta_0^{2}}\Big(\sum\limits_{v\in T}\big|(|v|+1)\varphi_-(v)\varphi'(v)\big|^{q}\Big)^{\frac{1}{q}}\\
&\leqslant \frac{1}{\delta_0}\Big(\sum\limits_{v\in T}\big|(|v|+1)\varphi'(v)\big|^{q}\Big)^{\frac{1}{q}}+
\frac{\|\varphi\|_\infty}{\delta_0^{2}}\Big(\sum\limits_{v\in T}\big|(|v|+1)\varphi'(v)\big|^{q}\Big)^{\frac{1}{q}}\\
&=\Big(\frac{1}{\delta_0}+\frac{\|\varphi\|_\infty}{\delta_0^{2}}\Big)\|\widehat{\varphi}\|_{q}.
\end{align*}
This shows that $\widehat{\big(\frac{\varphi}{\varphi-\lambda}\big)}\in \mathcal L^{q}(T)$, since $\widehat{\varphi}\in\mathcal L^{q}(T)$.
Next, we are going to show that $\Delta\big(\frac{\varphi}{\varphi-\lambda}\nabla g\big)\in \mathcal L^p(T)$. Noting that $\frac{\varphi}{\varphi-\lambda}$ and $\big(\frac{\varphi}{\varphi-\lambda}\big)'$ are both in $\mathcal L^q(T)$, we obtain by Lemma \ref{T} that
$$\Delta\Big(\frac{\varphi}{\varphi-\lambda}\nabla g\Big)=T_{\frac{\varphi}{\varphi-\lambda}}g=\Big(\frac{\varphi}{\varphi-\lambda}\Big)_{-}g+\Delta\Big[\Big(\frac{\varphi}{\varphi-\lambda}\Big)'g\Big].$$
Furthermore, we have
\begin{align*}
\Big\|\Delta\Big[\Big(\frac{\varphi}{\varphi-\lambda}\Big)'g\Big]\Big\|_p & \leqslant \Big\|\Delta\Big[\Big(\frac{\varphi}{\varphi-\lambda}\Big)'g\Big]\Big\|_1=\sum_{v\in T} \bigg| \Delta\Big[\Big(\frac{\varphi}{\varphi-\lambda}\Big)'g\Big](v)\bigg|\\
&\leqslant \sum_{v\in T}\sum_{u\in S_v}\bigg| \Big(\frac{\varphi}{\varphi-\lambda}\Big)'(u)g(u)\bigg|\\
&=\sum_{v\in T}(|v|+1)\bigg| \Big(\frac{\varphi}{\varphi-\lambda}\Big)'(v)g(v)\bigg|\\
&=\sum_{v\in T}\bigg| \widehat{\Big(\frac{\varphi}{\varphi-\lambda}\Big)}(v)\bigg|~|g(v)|\\
&\leqslant \|g\|_p\bigg\|\widehat{\Big(\frac{\varphi}{\varphi-\lambda}\Big)}\bigg\|_{q},
\end{align*}
which shows that $\Delta\big[\big(\frac{\varphi}{\varphi-\lambda}\big)'g\big]\in \mathcal L^p(T)$. Since $\frac{\varphi}{\varphi-\lambda}\in \mathcal L^{q}(T)$ implies that $\frac{\varphi}{\varphi-\lambda}\in \mathcal L^\infty(T)$, we obtain that $\big(\frac{\varphi}{\varphi-\lambda}\big)_{-}$ is also bounded. It follows that $\big(\frac{\varphi}{\varphi-\lambda}\big)_{-}g$ is in $\mathcal L^p(T)$, which gives that $\Delta\big(\frac{\varphi}{\varphi-\lambda}\nabla g\big)\in \mathcal L^p(T)$. Therefore, we obtain that the function defined in (\ref{f}) is in $\mathcal L^p(T)$.
To finish the proof, it remains to verify that
$$(T_{\varphi}-\lambda I)f=g.$$
Indeed, we have by Lemma \ref{T} that $$\frac{\varphi}{\varphi-\lambda}\nabla g\in \mathcal L^{1}(T),$$since $g\in \mathcal L^p(T)$ and $\frac{\varphi}{\varphi-\lambda}, \big(\frac{\varphi}{\varphi-\lambda}\big)' \in \mathcal L^q(T)$. It follows from (\ref{f}) and Lemma \ref{DN} that
\begin{align*}
\nabla f&=\nabla\Bigg[\frac{1}{\lambda}\bigg(\Delta\Big(\frac{\varphi}{\varphi-\lambda}\nabla g\Big)-g\bigg)\Bigg]=\frac{1}{\lambda}\frac{\varphi}{\varphi-\lambda}\nabla g-\frac{\nabla g}{\lambda}=\frac{\nabla g}{\varphi-\lambda}.
\end{align*}
This yields that
\begin{align*}
(T_{\varphi}-\lambda I)f&=T_{\varphi}f-\lambda f=\Delta(\varphi\nabla f)-\lambda f\\
&=\Delta\Big(\frac{\varphi}{\varphi-\lambda}\nabla g\Big)-\Delta\Big(\frac{\varphi}{\varphi-\lambda}\nabla g\Big)+g\\
&=g,
\end{align*}
as desired. This completes the proof of Theorem \ref{spectrum}.
\end{proof}
Now we give a description of the spectrum of $T_\varphi: \mathcal L^\infty(T)\rightarrow \mathcal L^\infty(T)$ with $\varphi, \varphi'\in \mathcal L^1(T)$ in the following theorem.
\begin{thm}\label{spectrum2}
If $\varphi$ and $\varphi'$ are both in $\mathcal L^1(T)$, then $T_\varphi$ is bounded on $\mathcal L^\infty(T)$ and in this case we have $$\sigma(T_{\varphi})=\mathrm{clos}\{\varphi(v):v\in T\}.$$
\end{thm}
\begin{proof}
Lemma \ref{TB2} tells us that $T_\varphi: \mathcal L^\infty(T)\rightarrow \mathcal L^\infty(T)$ is bounded if $\varphi\in \mathcal L^1(T)$ and $\varphi'\in \mathcal L^1(T)$.
According to the proof of Theorem \ref{spectrum}, we need only to show that
$$\Big(\frac{\varphi}{\varphi-\lambda}\Big)_{-}g\ \ \ \ \mathrm{and} \ \ \ \ \Delta\Big[\Big(\frac{\varphi}{\varphi-\lambda}\Big)'g\Big]$$
are both in $\mathcal L^\infty(T)$
if $\lambda\notin \mathrm{clos}\{\varphi(v): v\in T\}$ and $g\in \mathcal L^1(T)$. In fact,
$$\bigg\|\Big(\frac{\varphi}{\varphi-\lambda}\Big)_{-}g\bigg\|_\infty=\sup_{v\in T}\bigg|\Big(\frac{\varphi}{\varphi-\lambda}\Big)_{-}(v)g(v)\bigg|\leqslant \frac{\|g\|_\infty}{\delta_0}\|\varphi\|_\infty,$$
where $\delta_0$ is the constant defined in the proof of Theorem \ref{spectrum}. Moreover, since
\begin{align*}
\bigg\|\Delta\Big[\Big(\frac{\varphi}{\varphi-\lambda}\Big)'g\Big]\bigg\|_\infty&=\sup_{v\in T}\bigg|\sum_{u\in S_v}g(u)\Big(\frac{\varphi}{\varphi-\lambda}\Big)'(u)\bigg|\\
&\leqslant \sup_{v\in T}\sum_{u\in S_v}|g(u)|\bigg|\frac{\varphi(u)}{\varphi(u)-\lambda}-\frac{\varphi_-(u)}{\varphi_-(u)-\lambda}\bigg|\\
&\leqslant \frac{|\lambda|~\|g\|_\infty}{\delta_0^2}\sup_{v\in T}\Big[\sum_{u\in S_v}|\varphi(u)-\varphi_{-}(u)|\Big]\\
&= \frac{|\lambda|~\|g\|_\infty}{\delta_0^2}\|\varphi'\|_1,
\end{align*}
we obtain that $\big(\frac{\varphi}{\varphi-\lambda}\big)_{-}g\in \mathcal L^\infty(T)$ and $\Delta\big[\big(\frac{\varphi}{\varphi-\lambda}\big)'g\big]\in \mathcal L^\infty(T)$.
The rest of the proof of this theorem is similar to the previous one, so we omit the details. This finishes the proof of Theorem \ref{spectrum2}.
\end{proof}
\begin{rem}
The conclusions in Theorems \ref{spectrum} and \ref{spectrum2} are comparable to the corresponding results for analytic Toeplitz operators on the Hardy and Bergman spaces of the open unit disk, see the books \cite{Dou} and \cite{Zhu} for more information.
\end{rem}
\section{The positivity of Toeplitz operators}\label{positivity}
We say that a bounded linear operator $T$ is self-adjoint (positive) on a complex Hilbert space $\mathscr H$ if
$\langle Tx, x\rangle_{\mathscr H}$
is real (nonnegative) for every $x$ in $\mathscr H$. In this section, we study when a bounded Toeplitz operator is self-adjoint (positive) on the Hilbert space $\mathcal L^2(T)$.
Observe that the adjoint of the Toeplitz operator $T_\varphi$ does not equal $T_{\overline{\varphi}}$ in general, which leads to certain difficulties in the study of the self-adjointness and positivity of Toeplitz operators on $\mathcal L^2(T)$. However, we are able to obtain a complete characterization of the self-adjoint Toeplitz operators on $\mathcal L^2(T)$ by using their point spectra.
\begin{thm}\label{SP}
Suppose that $\varphi$ is a function in $\mathcal L^2(T)$ such that $\widehat{\varphi}\in \mathcal L^2(T)$. Then the following three conditions are equivalent:\\
$(1)$ $T_\varphi$ is positive on $\mathcal L^2(T)$;\\
$(2)$ $T_\varphi$ is self-adjoint on $\mathcal L^2(T)$;\\
$(3)$ $\varphi(v)=0$ for all $v\in T$.
\end{thm}
\begin{proof}
Clearly, $(1)\Rightarrow (2)$ is trivial and $(3)\Rightarrow (1)$ follows immediately from Lemma \ref{T}. To complete the proof of this theorem, it remains to show that $(2)\Rightarrow (3)$.
Now we suppose that $T_\varphi$ is self-adjoint on $\mathcal L^2(T)$. Then we have by Lemma \ref{PS} that $\varphi$ must be real-valued, since
$$\{\varphi(v): v\in T\}=\sigma_p(T_\varphi)\subset \sigma(T_\varphi)\subset \mathbb R$$
or
$$\{0\}\cup \{\varphi(v): v\in T\}=\sigma_p(T_\varphi)\subset \sigma(T_\varphi)\subset \mathbb R.$$
Let $f$ be any function in $\mathcal L^2(T)$. Since $\widehat{\varphi}\in \mathcal L^2(T)$, we see that the series $\sum\limits_{v\in T}[T_{\varphi}f(v)]\overline{f(v)}$ is absolutely convergent. Now using Lemma \ref{T} again gives
\begin{align}\label{T_phi}
\begin{split}
\langle T_{\varphi}f, f\rangle&=\sum\limits_{v\in T}[T_{\varphi}f(v)]\overline{f(v)}=\sum\limits_{v\in T}\big[f(v)\varphi_{-}(v)\overline{f(v)}+\Delta(f\varphi')(v)\overline{f(v)}\big]\\
&=\sum\limits_{v\in T}|f(v)|^{2}\varphi_-(v)+\sum\limits_{v\in T}\overline{f(v)}\Delta(f\varphi')(v).
\end{split}
\end{align}
Furthermore, we have by the definitions of $\varphi'$ and the operator $\Delta$ that
\begin{align}\label{sum}
\begin{split}
&\sum\limits_{v\in T}\overline{f(v)}\Delta(f\varphi')(v)\\
&\ =\sum\limits_{v\in T}\Big[\overline{f(v)}\sum\limits_{u\in S_{v}}(f\varphi')(u)\Big]\\
&\ =\sum\limits_{v\in T}\Big\{\overline{f(v)}\sum\limits_{u\in S_{v}}f(u)\big[\varphi(u)-\varphi_-(u)\big]\Big\}\\
&\ =\sum\limits_{v\in T}|f(v)|^{2}\varphi(v)-\sum\limits_{v\in T}|f(v)|^{2}\varphi_-(v)+\sum\limits_{v\in T}\Big\{\overline{f(v)}\sum\limits_{u\in S_{v}\backslash\{v\}}\big[f(u)\varphi(u)-f(u)\varphi_-(u)\big]\Big\}.
\end{split}
\end{align}
We first show that $\varphi$ is a constant function. Otherwise, we can choose two vertices $\xi$ and $\eta$ in $T$ such that $\varphi(\xi)\neq \varphi(\eta)$. Since the path joining $\xi$ and $\eta$ is a finite sequence of distinct vertices, we may assume that $\xi\thicksim \eta$. In other words, we obtain that $\xi\in \mathrm{Chi}(\eta)$ or $\eta\in \mathrm{Chi}(\xi)$. Without loss of generality, we may assume that $\eta\in \mathrm{Chi}(\xi)$. Let us consider the function $g: T\rightarrow \mathbb C$ defined by
\begin{align*}
g(v)=
\begin{cases}
1,& \mathrm{if} \ \ v=\xi,\vspace{2mm}\\
\mathrm{i},& \mathrm{if} \ \ v=\eta,\vspace{2mm}\\
0,& \text{otherwise}.\vspace{2mm}\\
\end{cases}
\end{align*}
It follows from (\ref{T_phi}) and (\ref{sum}) that
\begin{align*}
\langle T_{\varphi}g, g\rangle
&=|g(\xi)|^{2}\varphi(\xi)+|g(\eta)|^{2}\varphi(\eta)+\overline{g(\xi)}\big[g(\eta)\varphi(\eta)-g(\eta)\varphi(\eta^{-})\big]\\
&=|g(\xi)|^{2}\varphi(\xi)+|g(\eta)|^{2}\varphi(\eta)+\overline{g(\xi)}g(\eta)\big[\varphi(\eta)-\varphi(\eta^{-})\big]\\
&=\big[\varphi(\xi)+\varphi(\eta)\big]+\big[\varphi(\eta)-\varphi(\xi)\big]\mathrm{i}.
\end{align*}
This means that $\langle T_{\varphi}g, g\rangle\notin \mathbb R$, which contradicts that $T_\varphi$ is self-adjoint. Thus $\varphi$ is a constant function. However, the condition $\varphi\in \mathcal L^2(T)$ yields that $\varphi$ must be zero. This completes the proof.
\end{proof}
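The key computation in the proof is easy to double-check numerically. In the following Python sketch (the setup and all names are ours), we place $\xi$ at the root, take $\eta\in\mathrm{Chi}(\xi)$, and let $\varphi$ be supported on $\{\xi,\eta\}$ only, so that every infinite sum reduces to the two displayed terms:
\begin{verbatim}
# Numerical check of the proof's key computation (setup ours):
# xi = root, eta a child of xi, phi supported on {xi, eta}.
a, b = 1.0, 2.0                       # phi(xi) != phi(eta)
g0, g1 = 1.0, 1j                      # g(xi) = 1, g(eta) = i
# nabla g: eta's children all carry g = 0
ng0, ng1 = g0 - g1, g1
t0 = a * ng0 + b * ng1                # Delta(phi * nabla g) at xi
t1 = b * ng1                          # ... and at eta
inner = t0 * g0.conjugate() + t1 * g1.conjugate()
print(inner)                          # (a+b) + (b-a)i, not real
assert abs(inner.imag) > 0
\end{verbatim}
The printed value is $(\varphi(\xi)+\varphi(\eta))+(\varphi(\eta)-\varphi(\xi))\mathrm{i}$, which is not real whenever $\varphi(\xi)\neq\varphi(\eta)$.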
\section{Finite rank Toeplitz operators}\label{finite rank}
The final section is devoted to establishing an equivalent characterization of finite rank Toeplitz operators on $\mathcal L^p(T)$, where $1\leqslant p \leqslant \infty$. More concretely, we will show in the following theorem that the Toeplitz operator $T_\varphi$ has finite rank if and only if $\{v\in T: \varphi(v)\neq 0\}$ is a finite subset of $T$.
\begin{thm}\label{FR}
Let $1\leqslant p \leqslant \infty$ and $\varphi$ be a function in $\mathcal L^q(T)$ such that $\varphi'\in \mathcal L^q(T)$ and the Toeplitz operator $T_\varphi$ is bounded on $\mathcal L^p(T)$, where $\frac{1}{p}+\frac{1}{q}=1$. Then $T_\varphi$ has finite rank on $\mathcal L^p(T)$ if and only if $\varphi(v)=0$ except for finitely many points $v\in T$.
\end{thm}
\begin{proof}
We show the sufficiency first. Suppose that $\varphi(v)=0$ for all $v\in T$ except $v_1, v_2, \cdots, v_n$. For any $f\in \mathcal L^p(T)$, we have
\begin{align*}
T_\varphi f(v)=\Delta (\varphi \nabla f)(v)=\sum_{u\in S_v}\varphi(u)\nabla f(u), \ \ \ \ v\in T.
\end{align*}
It follows that $T_\varphi f(v)=0$ for all $v\in T$ with $|v|>\max\{|v_k|: k=1, 2, \cdots, n\}$. Let
$$W:=\Big\{w\in T: |w|\leqslant \max_{1\leqslant k \leqslant n} |v_k|\Big\}.$$
Then $W$ is a finite set and its elements can be enumerated by
$$W=\{w_1, w_2, \cdots, w_N\}.$$
Fix a vertex $\xi\in T$ and define
\begin{align}\label{basis}
e_\xi(v)=\begin{cases}
1,& \mathrm{if} \ \ v=\xi,\vspace{2mm}\\
0,& \text{otherwise}.
\end{cases}
\end{align}
Since $T_\varphi f$ is a vector in $\mathcal L^p(T)$, it can be expressed as
$$T_\varphi f=c_1e_{w_1}+c_2e_{w_2}+\cdots+c_Ne_{w_N}$$
with constants $c_1, c_2, \cdots, c_N$.
This implies that
$$\mathrm{range}(T_\varphi)\subset \mathrm{span}\big\{e_{w_1}, e_{w_2},\cdots, e_{w_N}\big\},$$
which shows that $T_\varphi$ is of finite rank on $\mathcal L^p(T)$.
To show the necessity, we suppose that $T_\varphi$ is a finite rank operator on $\mathcal L^p(T)$ and $$\mathrm{range}(T_\varphi)\subset \mathrm{span}\big\{e_{v_1}, e_{v_2},\cdots, e_{v_n}\big\},$$
where $v_k$ is a vertex of $T$ and $e_{v_k}$ is defined in (\ref{basis}). According to Lemma \ref{T} and the computations in (\ref{sum}), we obtain that
\begin{align}\label{ew}
\begin{split}
T_{\varphi}e_{w}(v)&=e_{w}(v)\varphi(v^-)+\sum_{u\in S_{v}}e_{w}(u)[\varphi(u)-\varphi(u^-)]\\
&=e_{w}(v)\varphi(v^-)+e_{w}(v)[\varphi(v)-\varphi(v^-)]+\sum_{u\in S_{v}\backslash\{v\}}e_{w}(u)[\varphi(u)-\varphi(u^-)]\\
&=e_{w}(v)\varphi(v)+\sum_{u\in S_{v}\backslash \{v\}}e_{w}(u)[\varphi(u)-\varphi(u^-)]
\end{split}
\end{align}
for every $v\neq o$ and $w\in T$.
If $\varphi$ is nonzero at infinitely many vertices of $T$, then we can choose a vertex $w\in T$ such that
$|w|>\max\big\{|v_{1}|, |v_2|, \cdots, |v_n|\big\}$ and $\varphi(w)\neq 0$. It follows from (\ref{ew}) that
$$T_{\varphi}e_w(w)=\varphi(w).$$
We claim that $T_{\varphi} e_w\notin \mathrm{span}\big\{e_{v_1}, e_{v_2},\cdots, e_{v_n}\big\}$. Indeed, if
$$T_{\varphi} e_w=c_1e_{v_1}+c_2e_{v_2}+\cdots+c_ne_{v_n}$$
for some scalars $c_1, c_2, \cdots, c_n$, then we would have
$$0=c_1e_{v_1}(w)+c_2e_{v_2}(w)+\cdots+c_ne_{v_n}(w)=T_{\varphi} e_w(w)=\varphi(w)\neq 0,$$
which is a contradiction. Hence $T_{\varphi} e_w\notin \mathrm{span}\big\{e_{v_1}, e_{v_2},\cdots, e_{v_n}\big\}$,
which contradicts the assumption that $\mathrm{range}(T_\varphi)\subset \mathrm{span}\big\{e_{v_1}, e_{v_2},\cdots, e_{v_n}\big\}$. Therefore, the proof of Theorem \ref{FR} is finished.
\end{proof}
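The conclusion can also be illustrated numerically. In the following Python sketch (ours), we build the matrix of $T_\varphi$ on a binary tree truncated at depth $3$, with $\varphi$ supported on the vertices of length at most $1$; since $\varphi$ vanishes near the truncation boundary, the restriction is exact, and the computed rank is bounded by the number of vertices of length at most $1$, as the sufficiency argument predicts:
\begin{verbatim}
import numpy as np

parent = {}
nodes = [()]                               # vertices encoded as bit tuples
for d in range(3):                         # grow a binary tree to depth 3
    for v in [u for u in nodes if len(u) == d]:
        for b in (0, 1):
            c = v + (b,)
            parent[c] = v
            nodes.append(c)
children = {v: [c for c in nodes if parent.get(c) == v] for v in nodes}
idx = {v: i for i, v in enumerate(nodes)}

def sector(u):                             # S_u within the truncation
    out = [u]
    for c in children[u]:
        out += sector(c)
    return out

phi = {v: (1.0 if len(v) <= 1 else 0.0) for v in nodes}  # finite support

N = len(nodes)
M = np.zeros((N, N))
for j, w in enumerate(nodes):              # column j: image of e_w
    e = {v: 0.0 for v in nodes}; e[w] = 1.0
    g = {u: e[u] - sum(e[c] for c in children[u]) for u in nodes}
    for v in nodes:
        M[idx[v], j] = sum(phi[u] * g[u] for u in sector(v))
rank = np.linalg.matrix_rank(M)
print(rank, "<=", sum(len(v) <= 1 for v in nodes))
assert rank <= sum(len(v) <= 1 for v in nodes)
\end{verbatim}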
\begin{rem}
It is worth noting that Theorem \ref{FR} is analogous to the corresponding results for finite rank Toeplitz operators on the Bergman and Fock spaces, see \cite{Luc} and \cite[Theorem 6.42]{Zhu2} respectively.
\end{rem}
\vspace{2mm}
\subsection*{Acknowledgment}
This work was partially supported by NSFC (grant number: 11701052) and Chongqing Natural Science Foundation (cstc2019jcyj-msxmX0337). The third author was partially supported by the Fundamental Research Funds for the Central Universities (grant numbers: 2020CDJQY-A039, 2020CDJ-LHSS-003).\vspace{5mm}\\
\section{Contributions}
All authors designed and planned research collaboratively and performed experiments. W.G. developed the theoretical model, and performed the numerical simulations and PIV analysis. All authors wrote the paper.
\section{Data Availability}
The data that support the plots within this paper and other findings of this study are available from the corresponding author upon request.
\section{Acknowledgments}
We thank C. Lowe, D. N. Clarke, K. Uhlinger, and Stanford BIOS 236; also R. Konte, L.-M. Joubert, M. S. Bull, D. Krishnamurthy, and the Prakash Lab. This work was supported by an ARO MURI Grant W911NF-15-1-0358 and an NSF CAREER Award (to M.P.) and an NSF GRFP DGE-114747 (to W.G.)
\section{Introduction}
In 2015, the Advanced Laser Interferometer Gravitational-Wave Observatory (Advanced LIGO) detected gravitational waves passing through Earth for the first time, inaugurating the era of gravitational-wave astronomy~\cite{Abbott:2016blz,Abbott:2016nmj,TheLIGOScientific:2016pea}. During 2017, Advanced LIGO began its second observation run; during this run, LIGO observed another gravitational wave from merging black holes~\cite{scientific2017gw170104}. Other second-generation gravitational-wave observatories, such as Advanced Virgo~\cite{TheVirgo:2014hva}, the Kamioka Gravitational-wave detector (KAGRA)~\cite{Somiya:2011np,aso2013interferometer}, and LIGO India~\cite{LIGOIndia}, will soon join Advanced LIGO, helping to better constrain the sky location and the properties
of observed gravitational waves' sources. Third-generation detector designs, such as the
Einstein Telescope~\cite{EinsteinTelescope} and Cosmic Explorer~\cite{abbott2017exploring}, aim to gain a factor of
$\approx 10$ in sensitivity over second-generation detectors, which corresponds to a factor of $1000$ in detection rate.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=4.5in]{Figures/GWINC_Plot.pdf}
\end{center}
\caption{
Principal noise terms for Advanced LIGO as seen in Fig.~2 of Ref.~\cite{aLIGO2}, updated as of November 16, 2016, using the Gravitational Wave Interferometer Noise Calculator~\cite{gwinc}. The component noises add as a root-square-sum since they are statistically independent.\label{fig:gwinc}}
\end{figure*}
Thermal noise is expected to be one of the noise sources that limits the sensitivity of second-generation detectors (once Advanced LIGO commissioning is complete) and third-generation ground-based gravitational-wave detectors. The total thermal noise budget includes contributions from i) Brownian thermal noise, caused by mechanical losses (i.e., by small, imaginary, dissipative terms of the material's elastic moduli), and ii) thermoelastic and thermorefractive coating noise, caused by temperature fluctuations in the materials. Figure~\ref{fig:gwinc} shows an Advanced LIGO noise curve, computed from the constituent noises in the instrument, current as of November 2016. The figure shows that Brownian coating thermal noise and quantum noise are the most important noises in Advanced LIGO's most sensitive frequency band ($\sim 100\mbox{ Hz}$).
Brownian coating thermal noise
limits the sensitivity of Advanced LIGO (e.g.~Fig. 2 of Ref.~\cite{aLIGO2})
and third generation detectors (e.g.~Fig.~20 of
Ref.~\cite{adhikari2014gravitational}). Therefore, a substantial research effort is studying Brownian coating thermal noise
theoretically and experimentally~\cite{harry2002thermal,
penn2003mechanical,rowan2005thermal,harry2006thermal,harry2007titania,
flaminio2010study,kondratiev2011thermal,evans2012reduced,hong2013brownian,
bassiri2013correlations,murray2017cryogenic,gras2017audio,kroker2017brownian}.
Thermoelastic and
thermorefractive noise (caused by temperature fluctuations) in the
coating~\cite{braginsky2003thermodynamical,fejer2004thermoelastic,
evans2008thermo} can also significantly contribute to the total noise in future gravitational-wave detectors, depending on the optics' materials and temperatures, as can Brownian and thermoelastic noise in the
substrate~\cite{braginsky2000thermo,
liu2000thermoelastic,o2004implications,vinet2005mirror}
and suspensions~\cite{saulson1990thermal,gillespie1993thermal,
braginsky1999reduce,gonzalez2000suspensions,kumar2008finite}.
Multi-layered coatings can be designed so that
thermoelastic and thermorefractive noise largely
cancel~\cite{evans2008thermo,Harry:2011book},
since these noises add coherently. Photothermal noise (temperature
fluctuations in the coating caused by absorption of incident laser power)
can similarly be coherently canceled~\cite{chalermsongsak2015coherent}.
Even though our primary motivation for modeling thermal noise is the application to gravitational-wave detection, we note that thermal noise is also a limiting noise in a number of other applications. For instance, thermal noise is a limiting noise for atomic clocks
(e.g.~Ref.~\cite{jiang2011making}). See, e.g., Ref.~\cite{harry2012optical} for a broader introduction to thermal noise.
Theoretical predictions are essential tools for understanding and minimizing thermal noise. Thermal-noise models typically rely on
the fluctuation-dissipation
theorem~\cite{callen1951irreversibility,bernard1959fdt,kubo1966fluctuation},
which relates the thermal
noise to the solution of an auxiliary elastic
problem~\cite{gonzalez1994brownian,levin1998internal}.
When a sinusoidally varying pressure with the same spatial distribution
as the laser beam
shape (i.e.~intensity profile) is applied, power is dissipated as the mirror elastically
deforms. The fluctuation-dissipation theorem relates the dissipated power in the auxiliary elastic problem to the spectral density of the mirror's thermal fluctuations.
In ground-based gravitational-wave detectors, a sound wave crosses
the mirror in much less time
than a gravitational-wave period;
therefore, thermal noise models often make
a quasistatic approximation~\cite{levin1998internal},
with the dissipated energy per cycle
becoming a product
of the potential energy of the elastostatic deformation and a
mechanical loss angle.
Existing models of thermal noise almost always treat the elastic properties
of the materials isotropically when applying the fluctuation-dissipation theorem; this allows the elastic problem
to be approached analytically. This is perfectly sensible for amorphous
materials (as used in Advanced LIGO~\cite{aLIGO2}), but
most plans for future ground-based detectors use crystalline
mirror substrates~\cite{hirose2014update,punturo2010einstein,
adhikari2014gravitational}.
KAGRA~\cite{Somiya:2011np},
which aims to reduce thermal noise by lowering the detector
temperature to 20 K, will use crystalline sapphire
substrates, motivated by crystalline sapphire's high thermal
conductivity~\cite{Somiya:2011np}
and fused silica's increased mechanical loss at low
temperatures~\cite{schroeter2007mechanical}.
Also, GaAs:Al$_{0.92}$Ga$_{0.08}$As (AlGaAs)
crystalline coatings experimentally show
great promise for reducing Brownian coating thermal
noise~\cite{cole2013tenfold,cole2016highperformance}.
Developing a theoretical model of Brownian
coating mirror thermal noise, one flexible enough to incorporate both
amorphous and crystalline materials, will help to understand and reduce Brownian coating noise and
thus to extend the reach of future ground-based
gravitational-wave detectors. To correctly understand thermal noise in crystalline materials, these models must account for their anisotropic elastic moduli. However, crystalline
materials' elastic properties are anisotropic, making analytic calculation of
elastic deformation in crystalline materials highly nontrivial
(though recent work~\cite{pang2009effect}
has succeeded in yielding a semi-analytic
solution for the static elastic deformation of a semi-infinite cubic
crystal). The formidable challenges toward an analytic model of
crystal coating thermal noise motivate numerical thermal-noise modeling. A recent study has numerically modeled
thermal noise in crystalline substrates~\cite{heinert2014fluctuation}.
In this paper, we numerically calculate Brownian coating and substrate noise. We present a new, open-source tool, based on existing open-source frameworks, that i) solves the auxiliary elastic problem for a cylindrical mirror with a thin coating and ii) uses the solution and the fluctuation-dissipation theorem to compute the power spectral density of the thermal noise. We adopt a finite-element method, a typical approach in elasticity computations, and we use adaptive mesh refinement and parallel processing to resolve the thin coating.
Specifically, we compute the thermal noise on a cylindrical mirror with a single-layer, thin reflective coating. For concreteness, we focus on the same case (i.e., same mirror dimensions, coating thickness, laser beam width, temperature, and elastic properties) considered by Cole and collaborators in Ref.~\cite{cole2013tenfold}, to facilitate comparison with their previous calculations (which assumed isotropic materials).
In Sec.~\ref{sec:performance}, we show how our code's run time scales with the number of compute cores. We find that running on 50-100 cores greatly improved performance, but further increases do not significantly improve performance (perhaps because of increasing communication costs). We also find that running on hundreds of cores enabled us to reach higher resolutions, by increasing the total memory available for the calculation.
In Sec.~\ref{sec:results}, we first test our code's numerical convergence. We find that the elastic internal energy (obtained from our solution of the elastic displacement vector) converges with increasing resolution, and we estimate our numerical uncertainty in the coating energy is 0.1\%. Then, we compare our code's results for amorphous materials to approximate, analytic solutions for the amorphous case. We find that edge effects and coating thickness effects scale as we expect.
Then, we compute the Brownian coating thermal noise for i) a mirror with an amorphous, fused-silica substrate and a crystalline, AlGaAs coating, and ii) a mirror with the same substrate but with the crystalline coating replaced with an ``effective isotropic'' coating, i.e., with an amorphous coating with elastic properties meant to mimic the AlGaAs's crystalline elastic properties. The thermal noise, treating the coating as a cubic crystal, is approximately 3\% larger than the thermal noise when treating the coating as an effective isotropic material.
We also compare our numerical calculations with an approximate, analytic result for a semi-infinite, amorphous mirror with a thin, amorphous coating. The effective isotropic calculation predicts approximately 7\% smaller thermal noise, because of finite-size effects. The cubic-crystal, numerical, coating thermal noise differs from the approximate solution by approximately 4\%. From these results, we conclude that, for our calculation, neglecting crystal effects introduces an error of the same order as neglecting edge effects.
The rest of this paper is organized as follows. In Sec.~\ref{sec:techniques}, we introduce our notation and detail our numerical methods for computing the Brownian substrate and coating thermal noise for a mirror with a single-layer, thin, reflective, possibly crystalline coating. In Sec.~\ref{sec:results}, we present our physical results after presenting results that verify our tool's performance and scaling. Finally, we briefly conclude in Sec.~\ref{sec:conclusion}.
\section{Techniques}
\label{sec:techniques}
\subsection{Mirror geometry and laser beam intensity profile}
We begin by considering a cylindrical mirror of radius $R$ and height $H$, where the mirror dimensions are comparable: $R\sim H$. A thin, reflective coating of thickness $d$ satisfying $d \ll R$ and $d \ll H$ covers the front face of the mirror. (In practice, LIGO mirror coatings consist of even thinner alternating layers of different materials; here, for simplicity we only consider single-layer coatings, leaving multi-layer coatings for future work.) We typically will choose our coordinates so that the $z$ axis is the axis of symmetry of the cylinder, where the coating-substrate interface lies in the plane $z=0$.
LIGO measures the position of the mirror by shining a laser beam on the mirror's surface; the beam measures $q(t)$, the surface position weighted by the beam's intensity:
\begin{eqnarray}
q(t) \equiv \int_0^{2\pi} d\varphi \int_0^R dr r p(r,\varphi) Z(r,\varphi,t).
\end{eqnarray} Here, $Z(r,\varphi,t)$ is the displacement at time $t$ of a point on the mirror's surface at cylindrical coordinates $(r,\varphi)$, in the direction parallel to the incident laser beam (which we take to travel along the $z$ axis).
Typically, we will center the beam profile $p(r,\varphi)$ on the mirror, so that the beam's intensity profile is
\begin{eqnarray}\label{eq:pressureGauss}
p(r) & = & \frac{1}{\pi r_0^2\left(1-e^{-R^2/r_0^2}\right)} e^{-r^2/r_0^2},
\end{eqnarray} normalized so that
\begin{eqnarray}
\oint dA p(r) = 2 \pi \int_0^R dr r p(r) = 1.
\end{eqnarray} In LIGO, to minimize diffraction losses, the beam size $r_0$ is kept significantly smaller than the mirror radius: $r_0 \ll R$. Therefore, in practice we neglect the exponential truncation term in the denominator of Eq.~(\ref{eq:pressureGauss}) when normalizing $p(r)$. Unless stated otherwise, in the rest of this paper, we choose an intensity profile
\begin{eqnarray}
p(r) & = & \frac{1}{\pi r_0^2} e^{-r^2/r_0^2}.
\end{eqnarray} Note that some references, such as Ref.~\cite{harry2002thermal} and Ref.~\cite{cole2013tenfold}, use a beam width $w=\sqrt{2} r_0$. In terms of $w$, the intensity profile becomes
\begin{eqnarray}
p(r) & = & \frac{2}{\pi w^2} e^{-2 r^2/w^2}.
\end{eqnarray}
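For illustration, the following Python sketch (not part of our C++ tool; the variable names are our own) evaluates the truncated profile of Eq.~(\ref{eq:pressureGauss}) and numerically verifies that it integrates to unity over the mirror face; for the parameters of Table~\ref{tab:params}, the truncation term $e^{-R^2/r_0^2}$ is utterly negligible, justifying the simpler profiles above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

r0 = 177e-6   # beam width [m]
R = 12500e-6  # mirror radius [m]

def p_truncated(r):
    # Gaussian intensity profile normalized on the finite mirror face.
    norm = np.pi * r0**2 * (1.0 - np.exp(-R**2 / r0**2))
    return np.exp(-r**2 / r0**2) / norm

# Check the normalization: 2*pi * int_0^R r p(r) dr = 1.
total, _ = quad(lambda r: 2.0 * np.pi * r * p_truncated(r),
                0.0, R, points=[r0, 5.0 * r0])
print(total)  # -> 1.0 to within the quadrature tolerance
\end{verbatim}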
\subsection{Fluctuation-dissipation theorem}
Because the mirror has a temperature $T$, internal thermal noise in the mirror causes fluctuations in $Z(r,\varphi,t)$. We compute the single-sided power\footnote{Note that occasionally we will refer to the amplitude spectral density, which is the square root of the power spectral density.} spectral density $S_q(f)$ of the thermal noise associated with the measurement $q$ at frequency $f$ using the fluctuation-dissipation theorem~\cite{callen1951irreversibility,bernard1959fdt,kubo1966fluctuation,gonzalez1994brownian,levin1998internal}. The theorem relates $S_q(f)$ to the solution of an auxiliary elastic problem:
\begin{enumerate}
\item Imagine applying a pressure to the mirror face (i.e., the top of the mirror coating) by pushing on it with a force $F p(r,\varphi) \sin 2\pi f t$, where $F$ is a force amplitude and $p(r,\varphi)$ is the same as the intensity profile of the laser beam in the actual measurement.
\item The mirror deforms, dissipating energy at a rate\footnote{Note that in Ref.~\cite{hong2013brownian}, $W_{\rm diss}$ refers not to the dissipated power but to the energy dissipated in each cycle. In this paper, $W_{\rm diss}$ refers to dissipated power averaged over a cycle, and $E_{\rm diss}$ refers to the energy dissipated in one cycle.}
(averaged over a cycle) $W_{\rm diss}$.
\item The fluctuation-dissipation theorem gives the thermal noise $S_q(f)$ associated with the actual measurement $q$ in terms of $W_{\rm diss}$ calculated in the auxiliary elastic problem:
\begin{eqnarray}
S_q(f) & = & \frac{2 k_B T}{\pi^2 f^2} \frac{W_{\rm diss}}{F^2}.
\end{eqnarray} Here $k_B$ is Boltzmann's constant. Note that $W_{\rm diss}\propto F^2$, so the thermal noise does not depend on $F$.
\end{enumerate}
\subsection{Elasticity}
For applications to LIGO, the thermal noise is relevant at frequencies $f\sim 100\mbox{ Hz}$ where LIGO is most sensitive. Because these frequencies are far below the resonant frequencies $f\sim 10^4\mbox{ Hz}$ of the mirror materials, the applied force can be treated quasistatically. In the quasistatic approximation, the oscillating applied force is replaced with a static force.
Applying this static force to the face of the mirror deforms the mirror. A small element in the mirror at position $x_i$ is displaced by $u_i$:
\begin{eqnarray}
x_i \rightarrow x_i + u_i.
\end{eqnarray}
This leads to a strain
\begin{eqnarray}\label{eq:strain}
S_{ij} & = & \nabla_{(i}u_{j)} \equiv \frac{1}{2}\left(\nabla_i u_j + \nabla_j u_i \right).
\end{eqnarray} Here and throughout this paper, indices $i,j,k,\ldots$ are spatial indices running over Cartesian coordinates $x,y,z$. By choosing $F$ to be sufficiently small, the deformation can be kept within the material's elastic limit. That is, the applied stress is proportional to the strain (``Hooke's law''):
\begin{eqnarray}\label{eq:Hooke}
T_{ij} = Y_{ijkl} S_{kl},
\end{eqnarray} where here and throughout this paper we adopt the Einstein summation convention, summing over repeated indices. The Young's tensor $Y_{ijkl}$ is symmetric on each pair of indices and on interchanging the pairs of indices,
\begin{eqnarray}
Y_{ijkl} & = & Y_{jikl} = Y_{ijlk} = Y_{klij},
\end{eqnarray} leaving 21 independent components of $Y_{ijkl}$. In this paper, although our tool and methods are implemented for any $Y_{ijkl}$, we primarily confine our attention to two cases:
\begin{enumerate}
\item amorphous materials, where symmetry leaves only 2 independent components in $Y_{ijkl}$, corresponding to the Young's modulus $Y$ and Poisson ratio $\sigma$, and
\item cubic crystalline materials, which have 3 independent components in $Y_{ijkl}$.
\end{enumerate} In both cases, the nonzero Young's tensor components can be written in terms of three elastic moduli
$c_{11}$, $c_{12}$, and $c_{44}$ as follows:
\begin{eqnarray}
c_{11} & = & Y_{xxxx} = Y_{yyyy} = Y_{zzzz},\\
c_{12} & = & Y_{xxyy} = Y_{xxzz} = Y_{yyxx} = Y_{yyzz} = Y_{zzxx} = Y_{zzyy},\\
c_{44} & = & Y_{xyxy} = Y_{xyyx} = Y_{yxxy} = Y_{yxyx} = Y_{xzxz} = Y_{xzzx}\nonumber\\ & = & Y_{zxxz} = Y_{zxzx} = Y_{yzyz} = Y_{zyyz} = Y_{zyzy} = Y_{yzzy}.
\end{eqnarray} For cubic crystals, $c_{11}$, $c_{12}$, and $c_{44}$ are independent, while for amorphous materials,
\begin{eqnarray}
c_{11} & = & \frac{Y(1-\sigma)}{(1+\sigma)(1-2\sigma)},\\
c_{12} & = & \frac{Y \sigma}{(1+\sigma)(1-2\sigma)},\\
c_{44} & = & \frac{Y}{2(1+\sigma)}.
\end{eqnarray}
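A short Python sketch (ours, for illustration only) assembles the full Young's tensor from $c_{11}$, $c_{12}$, and $c_{44}$ and checks the symmetries above; the same routine covers the amorphous case via the relations just given.
\begin{verbatim}
import numpy as np

def young_tensor(c11, c12, c44):
    # Assemble Y_ijkl for a cubic crystal; the isotropic case is the
    # special case c11 - c12 = 2*c44.
    Y = np.zeros((3, 3, 3, 3))
    for i in range(3):
        for j in range(3):
            if i == j:
                Y[i, i, i, i] = c11
            else:
                Y[i, i, j, j] = c12
                Y[i, j, i, j] = c44
                Y[i, j, j, i] = c44
    return Y

def isotropic_moduli(E, sigma):
    # c11, c12, c44 of an amorphous material from the Young's modulus
    # and Poisson ratio, per the relations above.
    c11 = E * (1 - sigma) / ((1 + sigma) * (1 - 2 * sigma))
    c12 = E * sigma / ((1 + sigma) * (1 - 2 * sigma))
    c44 = E / (2 * (1 + sigma))
    return c11, c12, c44

# AlGaAs as a cubic crystal (moduli in GPa; cf. Table 1):
Y = young_tensor(119.94, 55.38, 59.15)

# Verify the symmetries Y_ijkl = Y_jikl = Y_ijlk = Y_klij:
assert np.allclose(Y, Y.transpose(1, 0, 2, 3))
assert np.allclose(Y, Y.transpose(0, 1, 3, 2))
assert np.allclose(Y, Y.transpose(2, 3, 0, 1))
\end{verbatim}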
The stress and strain combine to form the stored potential energy density, such that the total stored potential energy in a volume $\mathcal{V}$ is
\begin{eqnarray}\label{eq:U}
U=-\frac{1}{2}\int_{\mathcal{V}} dV S_{ij} T_{ij}.
\end{eqnarray}
The dissipated power is then (e.g., inserting Eq.~(\ref{eq:U}) into Eq.~(12) of Ref.~\cite{levin1998internal})
\begin{eqnarray}
W_{\rm diss} = 2 \pi f \phi(f) U = - \pi f \phi(f) \int dV S_{ij} T_{ij},
\end{eqnarray} where $\phi(f)$ is a (potentially frequency-dependent) loss angle determined by the imaginary, dissipative elastic moduli. Then the thermal noise becomes (Eq.~(46) of Ref.~\cite{hong2013brownian})
\begin{eqnarray}
S_q(f) & = & \frac{4 k_B T}{\pi f} \frac{U \phi(f)}{F^2}.
\end{eqnarray}
For a mirror consisting of a thin reflective coating on top of a substrate, the stored energy is the sum of the energy stored in the substrate and in the coating: $U=U_{\rm sub}+U_{\rm coat}$. When the coating and substrate are different materials, the substrate and coating have different loss angles, $\phi_{\rm sub}$ and $\phi_{\rm coat}$. Then, the thermal noise becomes
\begin{eqnarray}\label{eq:FDT}
S_q(f) & = & \frac{4 k_B T}{\pi f} \frac{U_{\rm sub} \phi_{\rm sub} + U_{\rm coat} \phi_{\rm coat}}{F^2},
\end{eqnarray} where the stored energies $U_{\rm sub}$ and $U_{\rm coat}$ are volume integrals over the substrate and coating, respectively:
\begin{eqnarray}
U_{\rm sub} & = & -\frac{1}{2}\int_{\rm sub} dV S_{ij} T_{ij},\label{eq:Usub}\\
U_{\rm coat} & = & -\frac{1}{2}\int_{\rm coat} dV S_{ij} T_{ij}.\label{eq:Ucoat}
\end{eqnarray}
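As a simple illustration of how the stored energies enter the noise estimate, the following sketch evaluates Eq.~(\ref{eq:FDT}); the energy values in the example call are placeholders, not outputs of our tool.
\begin{verbatim}
import numpy as np

k_B = 1.380649e-23  # Boltzmann's constant [J/K]

def thermal_noise_psd(f, T, U_sub, U_coat, phi_sub, phi_coat, F):
    # Single-sided power spectral density S_q(f) [m^2/Hz].
    return (4.0 * k_B * T / (np.pi * f)
            * (U_sub * phi_sub + U_coat * phi_coat) / F**2)

# Example call; the energies here are placeholders (in J, for a
# 1 N force amplitude), not results of our calculations.
S_q = thermal_noise_psd(f=100.0, T=300.0,
                        U_sub=1.0e-10, U_coat=5.0e-12,
                        phi_sub=1.0e-6, phi_coat=2.5e-5, F=1.0)
print(np.sqrt(S_q))  # amplitude spectral density [m/sqrt(Hz)]
\end{verbatim}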
\label{sec:lossangles}
In fact, as in Ref.~\cite{hong2013brownian}, an amorphous coating has two loss angles, $\phi_B$ and $\phi_S$, corresponding to the imaginary parts of the coating's bulk and shear moduli. If $U_{B}$ and $U_{S}$ are the bulk and shear portions of the elastic energy, then Eq.~(\ref{eq:FDT}) becomes (cf. Eq.~(57) of Ref.~\cite{hong2013brownian})
\begin{eqnarray}\label{eq:multilosssub}
S_q(f) & = & \frac{4 k_B T}{\pi f} \frac{U_{\rm sub} \phi_{\rm sub} + U_{\rm B} \phi_{\rm B} + U_{\rm S} \phi_{\rm S}}{F^2}.
\end{eqnarray}
A crystal coating has, in principle, different loss angles for each independent component of the Young's tensor $Y_{ijkl}$. For instance, for a cubic or zincblende crystal, one might write
\begin{eqnarray}\label{eq:multilosscoat}
S_q(f) & = & \frac{4 k_B T}{\pi f} \frac{U_{\rm sub} \phi_{\rm sub} + U_{\rm 11} \phi_{\rm 11} + U_{\rm 12} \phi_{\rm 12} + U_{\rm 44} \phi_{\rm 44}}{F^2},
\end{eqnarray} where $\phi_{\rm 11}$, $\phi_{\rm 12}$, and $\phi_{\rm 44}$ are small, imaginary parts of $c_{11}$, $c_{12}$, and $c_{44}$, respectively, while $U_{\rm 11}$, $U_{\rm 12}$, and $U_{44}$ are the portions of the elastic energy corresponding to each elastic modulus.
Abernathy and collaborators have recently made the first measurements of separate bulk and coating loss angles for a material~\cite{abernathyMultipleLossAngles}. While perhaps a generalization of their method will be able to successfully measure the three (or more) loss angles in a crystalline material, this has not yet been done. Therefore, in this paper, we characterize the coating by a single effective loss angle $\phi_{\rm coat}$, determined from experiment, though we look forward to generalizing our results along the lines of Eq.~(\ref{eq:multilosscoat}) once experimental measurements of 3 independent loss angles for crystalline materials are available.
Thus to compute the thermal noise, we first numerically compute the displacement $u_i$ of the mirror given an applied pressure exerted at the top of the coating; this amounts to numerically solving Newton's second law for elastostatics,
\begin{eqnarray}
0 & = & -\nabla_i T_{ij} - f_j.\label{eq:secondLaw}
\end{eqnarray} Because we are seeking a solution where the mirror is in elastostatic equilibrium, we set $f_j=0$ except on the mirror boundary with the applied pressure.
After obtaining the numerical solution for $u_i$, we i) numerically compute its gradient to obtain the strain $S_{ij}$, ii) use Eq.~(\ref{eq:Hooke}) to compute the stress, iii) insert the stress and strain into Eqs.~(\ref{eq:Usub}) and (\ref{eq:Ucoat}) and integrate to compute $U_{\rm sub}$ and $U_{\rm coat}$, and finally iv) insert $U_{\rm sub}$ and $U_{\rm coat}$ into Eq.~(\ref{eq:FDT}) to compute the Brownian thermal noise $S_{q}(f)$.
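Steps i)--iii) amount to a gradient, a contraction with $Y_{ijkl}$, and a volume integral. The following finite-difference sketch (our illustration, not our finite-element post-processing; it reuses the \texttt{young\_tensor} helper sketched earlier) shows the sequence on displacement data sampled on a uniform grid; for brevity it quotes the positive-definite energy density $\frac{1}{2}S_{ij}T_{ij}$, i.e., the magnitude of the integrand above.
\begin{verbatim}
import numpy as np

def stored_energy(u, dx, Y):
    # u[j] is the j-th Cartesian component of the displacement,
    # sampled on a uniform grid with spacing dx; Y is Y_ijkl.
    # Step i: strain S_ij = (grad_i u_j + grad_j u_i) / 2.
    grad = np.stack([np.stack(np.gradient(u[j], dx), axis=0)
                     for j in range(3)], axis=1)  # grad[i, j] = d_i u_j
    S = 0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))
    # Step ii: Hooke's law, T_ij = Y_ijkl S_kl.
    T = np.einsum('ijkl,klxyz->ijxyz', Y, S)
    # Step iii: integrate the energy density over the sampled volume.
    return 0.5 * np.einsum('ijxyz,ijxyz->', S, T) * dx**3
\end{verbatim}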
In the rest of this section, we derive the equations that determine $u_i$ and cast them in a form suitable for numerical solution using finite elements.
\subsection{Weak form of the elastostatic equations for finite-element numerical solutions}
For completeness, we present the weak form of the three-dimensional elasticity equations that we implemented and used in this paper. The following derivation is not new; it generally follows the derivation given in Sec.~2.4.3 of Ref.~\cite{ibrahimbegovic2009nonlinear} except that we prefer different notation.
Consider an applied force deforming a mirror occupying a volume $V$
enclosed by a surface boundary $\Gamma$.
Note that while eventually we will choose $f_i = 0$ except on the part of $\Gamma$ where the pressure is applied, in this section we postpone making this
choice for generality.
On some parts of the boundary, $\Gamma_{u} \subset \Gamma$,
the displacement is fixed by a
Dirichlet boundary condition
\begin{eqnarray}
u_i = \bar{u}_i,
\end{eqnarray}
while on other parts of the boundary $\Gamma_T \subset \Gamma$,
the traction is fixed by a Neumann boundary condition
\begin{eqnarray}
n_i T_{ij} = T_{nj} = \bar{T}_{nj},
\end{eqnarray} where $n_i$ is normal to the boundary.
Note that only one condition is applied at a given point on the boundary:
$\Gamma = \Gamma_u \cup \Gamma_T$ and $\Gamma_u \cap \Gamma_T = \emptyset$.
To find the weak form of Eq.~(\ref{eq:secondLaw}), begin by introducing a
virtual displacement vector $w_i$, with the property that $w_i=0$ on
$\Gamma_u$ (i.e., the virtual displacement vanishes on the Dirichlet
boundary). Otherwise, the $w_i$ are arbitrary functions of position. Contracting both sides of Eq.~(\ref{eq:secondLaw}) by $w_j$ and
integrating over the volume gives
\begin{eqnarray}
0 & = & -\int_V dV w_j \nabla_i T_{ij} - \int_V dV w_j f_j.
\end{eqnarray} Integrating by parts gives
\begin{eqnarray}
0 & = & \int_V dV \nabla_i\left(w_j\right) T_{ij}\nonumber\\
& & - \int_V dV \nabla_i\left(w_j T_{ij}\right) - \int_V dV w_j f_j,
\end{eqnarray} which after applying Gauss's theorem becomes
\begin{eqnarray}
0 & = & \int_V dV \nabla_i\left(w_j\right) T_{ij}\nonumber\\
& & - \oint_\Gamma dA n_i w_j T_{ij} - \int_V dV w_j f_j.
\end{eqnarray} Because the virtual displacement $w_j$ vanishes on the Dirichlet
boundaries, the boundary term becomes
\begin{eqnarray}
\oint_\Gamma dA n_i w_j T_{ij} & = & \int_{\Gamma_{T}} dA n_i w_j T_{ij},
\end{eqnarray} so
\begin{eqnarray}
0 & = & \int_V dV \nabla_i\left(w_j\right) T_{ij}\nonumber\\
& & - \int_{\Gamma_{T}} dA w_j n_i T_{ij} - \int_V dV w_j f_j.
\end{eqnarray} Applying the Neumann boundary condition and inserting
Eqs.~(\ref{eq:Hooke}) and (\ref{eq:strain}) yields
\begin{eqnarray}\label{eq:weak}
0 & = & \int_V dV \nabla_i\left(w_j\right) Y_{ijkl} \nabla_k u_l
\nonumber\\
& & - \int_V dV w_j f_j - \int_{\Gamma_{T}} dA w_j \bar{T}_{nj}.
\end{eqnarray} This is the weak form of the elastostatic equations that we will use in the next subsection.
\subsection{Discretizing the weak form of the elastostatic equations}
We discretize Eq.~(\ref{eq:weak}) in a standard way. A similar derivation for the two-dimensional, amorphous, elastic equations is given in the discussion of deal.ii's step-8 tutorial~\cite{step8}, which solves the elasticity equations in two dimensions for amorphous materials. For a detailed discussion of finite-element methods applied to the elasticity equations, see, e.g., Ref.~\cite{ibrahimbegovic2009nonlinear}.
To discretize Eq.~(\ref{eq:weak}), we choose $N$ scalar shape functions
$\phi_a\left(x_i\right)$, where $a=1,2,\ldots N$,
which are arbitrary functions of
position $x_i$. Then,
construct $3N$ three-dimensional vector shape functions, by
multiplying each scalar shape function by each of the $3$ orthonormal
basis vectors. E.g., one can define $\Phi_1 =
\left(\phi_1,0,0\right), \Phi_2 = \left(0,\phi_1,0\right), \Phi_3 =
\left(0,0,\phi_1\right),
\Phi_4 = \left(\phi_2,0,0\right), \Phi_5 = \left(0,\phi_2,0\right), \Phi_6 = \left(0,0,\phi_2\right),\ldots$.
Expand the vectors $u_l$ and $w_j$ in terms of these vector-valued shape
functions
\begin{eqnarray}
u_l & = & U_A \Phi_{Al}\left(x_i\right),\\
w_j & = & W_A \Phi_{Aj}\left(x_i\right).
\end{eqnarray} Here $\Phi_{Aj}$ is the $j^{th}$ vector component of the
$A^{th}$ vector shape function, and $U_A$ and $W_A$ are independent of
position $x_i$. Here and in the rest of this paper, the positional dependence
of the $\phi_A$ and $\Phi_{Ai}$ will be suppressed for clarity.
Inserting these expansions into
Eq.~(\ref{eq:weak}) gives
\begin{eqnarray}
0 & = & \int_V dV \nabla_i\left(W_A \Phi_{Aj} \right) Y_{ijkl} \nabla_k \left(U_B \Phi_{Bl}\right)
\nonumber\\
& & - \int_V dV W_A \Phi_{Aj} f_j - \int_{\Gamma_{T}} dA W_A\Phi_{Aj} \bar{T}_{nj}.
\end{eqnarray} Since $W_A$ and $U_A$ are independent of position, this becomes
\begin{eqnarray}
0 & = & W_A U_B \int_V dV Y_{ijkl} \nabla_i \left(\Phi_{Aj}\right) \nabla_k \left(\Phi_{Bl}\right)
\nonumber\\
& & - W_A \int_V dV \Phi_{Aj} f_j - W_A \int_{\Gamma_{T}} dA \Phi_{Aj} \bar{T}_{nj}.
\end{eqnarray} We choose shape functions that vanish on the Dirichlet
boundary (so that $w_j$ does as well); because $w_j$ is otherwise arbitrary, each coefficient $W_A$ can be varied independently, and setting $W_A=1$ for each $A$ in turn gives a discretized weak
form of the elastostatic equations suitable for solving via finite-element
methods:
\begin{eqnarray}
0 & = & U_B \int_V dV Y_{ijkl} \nabla_i \left(\Phi_{Aj}\right) \nabla_k \left(\Phi_{Bl}\right)
\nonumber\\
& & - \int_V dV \Phi_{Aj} f_j - \int_{\Gamma_{T}} dA \Phi_{Aj} \bar{T}_{nj}.
\end{eqnarray} Defining
\begin{eqnarray}
M_{AB} \equiv \int_V dV Y_{ijkl} \nabla_i \left(\Phi_{Aj}\right) \nabla_k \left(\Phi_{Bl}\right),\\
F_A \equiv \int_V dV \Phi_{Aj} f_j + \int_{\Gamma_{T}} dA \Phi_{Aj} \bar{T}_{nj},
\end{eqnarray} the equations can be written in matrix form as
\begin{eqnarray}
M_{AB} U_B = F_A.\label{eq:matrix}
\end{eqnarray}
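Since $M_{AB}$ is symmetric positive-definite, Eq.~(\ref{eq:matrix}) is well suited to a conjugate-gradient solve. The following toy sketch (with a band matrix standing in for the assembled stiffness matrix) illustrates the structure of this step; our actual solver choices are described below.
\begin{verbatim}
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Toy stand-in for the assembled stiffness matrix M_AB: a symmetric
# positive-definite band matrix. The real M_AB comes from the FEM
# assembly of the integrals above.
n = 1000
M = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
          shape=(n, n), format='csr')
F = np.ones(n)  # stand-in for the load vector F_A

U, info = cg(M, F)                # conjugate-gradient solve of M U = F
print(info)                       # 0 signals convergence
print(np.linalg.norm(M @ U - F))  # residual norm
\end{verbatim}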
In practice, the shape functions $\Phi_{Aj}$ are low-order
polynomials, with each function having support only within one
finite-element cell. We use second order polynomials for the shape functions except when we integrate to separately calculate the elastic energy in the substrate and coating, in which case we sub-divide each cell into 1000 smaller cells and interpolate with cubic-polynomial shape functions.
We solve Eq.~(\ref{eq:matrix}) for
$U_B$ using the deal.ii finite-element library~\cite{dealII82,dealII85}\footnote{We used version 8.2.1 of the deal.ii library to obtain the results in this paper, as it was current when we began developing our code. At the time of writing, deal.ii version 8.5 is available.}, the PETSc~\cite{petsc-web-page,petsc-user-ref,petsc_efficient} conjugate gradient linear solver, and the ParaSAILS
preconditioner in the Hypre package~\cite{falgout2002hypre}. Our specific implementation begins with the deal.ii step-8 tutorial~\cite{step8}, which solves the elastic equations in two dimensions for amorphous materials, and then generalizes to three dimensions and arbitrary Young's tensors (though in this paper, we only use isotropic and cubic-crystal Young's tensors).
\subsection{Mesh}
For simplicity, here we confine our attention to simple mirror geometries, generating our initial computational meshes using deal.ii's built-in primitives. We model the mirror as a simple cylinder.
We begin with an initial, coarse mesh. We create this coarse mesh by refining a deal.ii primitive mesh (cylinder or rectangular prism) in two stages: first, we apply two refinements to every element, and then we apply up to two additional refinements on elements within one beam width $r_0$ of the mirror's front, center point. Then, after achieving a solution on the coarse mesh, we refine using adaptive mesh refinement. We estimate the numerical error in each cell using the Kelly error estimator~\cite{kelly1983posteriori,gago1983posteriori}, and then we rank them by this error estimate. We then refine the top 14\% and coarsen the bottom 2\%, approximately doubling the number of cells with each refinement\footnote{When running on multiple processors (i.e., multiple cores), we divide the mesh among them. We then refine the top 14\% and coarsen the bottom 2\% of cells on each processor.}.
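The flagging logic itself is simple; a minimal sketch (ours, mirroring the fixed-fraction strategy described above only in spirit) is:
\begin{verbatim}
import numpy as np

def flag_cells(errors, refine_frac=0.14, coarsen_frac=0.02):
    # Rank cells by their per-cell error estimate (e.g., the Kelly
    # estimator) and mark the top 14% to refine, bottom 2% to coarsen.
    order = np.argsort(errors)  # ascending error
    n = len(errors)
    refine = order[int(np.ceil((1.0 - refine_frac) * n)):]
    coarsen = order[:int(np.floor(coarsen_frac * n))]
    return refine, coarsen

errors = np.random.default_rng(0).random(100)
refine, coarsen = flag_cells(errors)
print(len(refine), len(coarsen))  # -> 14 2
\end{verbatim}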
\begin{figure*}[ht]
\includegraphics[width=6.5in]{Figures/mesh.pdf}
\caption{
A cross-sectional view of a sample meshed cylindrical domain (fused silica mirror with a cubic crystalline coating) that has been refined, from left to right, 1, 5, and 9 times using adaptive mesh refinement. Below each upper panel, a lower panel zooms in. The dark ring in each image has radius $r_0$ to represent the Gaussian profile pressure applied to the cylinder. The magnitude of the resulting displacement is shown by the coloring. \label{fig:mesh}}
\end{figure*}
\subsection{Performance}\label{sec:performance}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=4.0in]{Figures/allcycles.pdf}
\caption{
The run time of our thermal-noise calculations as a function of the number of compute cores used in the calculation. Colors indicate resolution $N$, where each resolution $N$ has approximately twice the number of finite elements as resolution $N-1$. Performance improves with increasing numbers of cores until it levels off; we suspect this is caused by increasing communication costs. Running on more cores (i.e., on more compute nodes) increases the available memory. Given the memory limits of the cluster where we ran these calculations, the highest resolutions achieved ($N \ge 9$) require running on more cores (and thus on more compute nodes and more total memory). \label{fig:allcycles}}
\end{center}
\end{figure*}
We have tested how the performance of our code varies with increasing resolution and increasing numbers of compute cores. We label each resolution by an integer $N$, where increasing $N$ by one corresponds to approximately doubling the number of finite-element cells. Figure~\ref{fig:allcycles} shows the wall-clock time elapsed for each resolution of a typical thermal-noise calculation. Initially, increasing the number of processors decreases the time required to complete a given resolution; however, the performance then hits a plateau, as communication costs increase.
For the highest resolutions (e.g., $N=12$ in Fig.~\ref{fig:allcycles}), we often run on more cores than Fig.~\ref{fig:allcycles} would suggest are necessary, because these high resolutions (with hundreds of millions of finite-element degrees of freedom) require the memory from the additional compute nodes. Also, note that occasionally, when performing these timing tests, we observed spuriously inconsistent timing for some simulations; we suspect this occurred when our cluster's network became saturated.
\subsection{Analytic approximate solutions}
The amplitude spectral density of the substrate thermal noise $\sqrt{S_q}$, assuming a semi-infinite amorphous mirror with Young's modulus $Y_{\rm sub}$, mechanical loss angle $\phi_{\rm sub}$, and Gaussian beam radius $r_0$, is given by the square root of its power spectral density (e.g., Eq.~(59) of Ref.~\cite{liu2000thermoelastic}):
\begin{eqnarray}
\sqrt{S_q} & = & \sqrt{\frac{\sqrt{2}k_BT}{\pi^{3/2}f}\frac{1-\sigma_{\rm sub}^2}{Y_{\rm sub} r_0}\phi_{\rm sub}}.\label{eq:analyticSubstrate}
\end{eqnarray}
For an amorphous substrate with a thin, reflective, amorphous coating, the coating thermal noise is given by (e.g., Eq.~(21) of Ref.~\cite{harry2002thermal} with $\phi_{\|}=\phi_\bot=\phi_{\rm coat}$)
\begin{eqnarray}
\sqrt{S_q} & = & \sqrt{\frac{k_BT}{\pi^2f}\frac{1-\sigma_{\rm sub}^2}{r_0 Y_{\rm sub}}\frac{d}{r_0}\frac{\phi_{\rm coat}}{Y_{\rm sub}Y_{\rm coat}(1-\sigma_{\rm coat}^2)(1-\sigma_{\rm sub}^2)}}\nonumber\\
& \times & \sqrt{Y_{\rm coat}^2(1+\sigma_{\rm sub})^2(1-2\sigma_{\rm sub})^2+Y_{\rm sub}^2(1+\sigma_{\rm coat})^2(1-2\sigma_{\rm coat})}.\label{eq:analyticCoat}
\end{eqnarray}
Here, to facilitate comparison with our numerical calculations, we treat the coating as having a single mechanical loss angle $\phi$. More realistically, an amorphous coating should have one loss angle for each of its two independent elastic moduli~\cite{hong2013brownian}, and a cubic crystalline coating should have one loss angle for each of the three independent components of its Young's tensor $Y_{ijkl}$ (cf. the discussion in Sec.~\ref{sec:lossangles}).
Note that because we are considering noise in a single mirror, rather than two (as in Ref.~\cite{cole2013tenfold}), our analytic formula differs from Eq.~(1) of Ref.~\cite{cole2013tenfold}.
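For reference, a direct transcription of Eqs.~(\ref{eq:analyticSubstrate}) and (\ref{eq:analyticCoat}) into Python, evaluated with the parameters of Table~\ref{tab:params}, reads:
\begin{verbatim}
import numpy as np

k_B = 1.380649e-23  # J/K

# Parameters from Table 1 (SI units).
f, T = 100.0, 300.0
r0, d = 177e-6, 6.83e-6
Y_sub, sig_sub, phi_sub = 72e9, 0.17, 1e-6
Y_c, sig_c, phi_c = 100e9, 0.32, 2.5e-5  # effective isotropic AlGaAs

# Substrate noise for a semi-infinite amorphous mirror.
asd_sub = np.sqrt(np.sqrt(2.0) * k_B * T / (np.pi**1.5 * f)
                  * (1.0 - sig_sub**2) / (Y_sub * r0) * phi_sub)

# Thin amorphous coating on an amorphous substrate.
bracket = (Y_c**2 * (1 + sig_sub)**2 * (1 - 2*sig_sub)**2
           + Y_sub**2 * (1 + sig_c)**2 * (1 - 2*sig_c))
asd_coat = np.sqrt(k_B * T / (np.pi**2 * f)
                   * (1 - sig_sub**2) / (r0 * Y_sub) * (d / r0)
                   * phi_c * bracket
                   / (Y_sub * Y_c * (1 - sig_c**2) * (1 - sig_sub**2)))

print(asd_sub, asd_coat)  # amplitude spectral densities [m/sqrt(Hz)]
\end{verbatim}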
\section{Results}\label{sec:results}
Unless stated otherwise (e.g., when adjusting a parameter to observe how the numerical solution scales with the adjustments), the numerical parameters in our calculations are taken from Table~\ref{tab:params}. To take a concrete example, we choose the parameters in the top two sections of Table~\ref{tab:params} to agree with those in the supplementary information for Ref.~\cite{cole2013tenfold}.
When we treat the reflective coating as a crystal, we use the elastic moduli $c_{11}$, $c_{12}$, and $c_{44}$ given in the middle section of Table~\ref{tab:params}.
Also following Ref.~\cite{cole2013tenfold}, we consider the specific case of ${\rm Al}_x{\rm Ga}_{1-x}{\rm As}$ with $x=0.92$, and we take the values of the elastic moduli from Sec.~S2 of the supplementary information of Ref.~\cite{gehrsitz1999compositional}. Note that in our numerical calculations, we choose a cubic crystal lattice orientation so that the cube faces are parallel to the mirror faces. We use the same, single loss angle as in the effective isotropic case.
The bottom section of Table~\ref{tab:params} shows our choice for the mirror radius, a ``typical'' value that gives a mirror diameter of approximately 1 inch. We choose the total height of the mirror (including the coating) to be the sum of the mirror radius and the coating thickness.
The amplitude of the thermal noise, $\sqrt{S_q}(f)$, is proportional to $f^{-1/2}$ [Eq.~(\ref{eq:FDT})]. For concreteness, we evaluate the thermal noise at a frequency $f=100\mbox{ Hz}$, chosen as a representative frequency of where Advanced LIGO is most sensitive (cf. Fig.~\ref{fig:gwinc}).
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Parameter & Description & Value\\
\hline
$r_0$ & beam width & 177 $\mathrm{\mu m}$\\
$d$ & coating thickness & 6.83 $\mathrm{\mu m}$\\
$T$ & temperature & 300 K\\
$\sigma_{\rm SiO_2}$ & fused silica Poisson ratio & 0.17\\
$\sigma_{\rm iso}$ & effective isotropic AlGaAs Poisson ratio & 0.32\\
$Y_{\rm SiO_2}$ & fused silica Young's modulus & 72 GPa\\
$Y_{\rm iso}$ & effective isotropic AlGaAs Young's modulus & 100 GPa\\
$\phi_{\rm SiO_2}$ & fused silica loss angle & 1$\times 10^{-6}$\\
$\phi_{\rm iso}$ & effective isotropic AlGaAs loss angle & 2.5$\times 10^{-5}$\\
\hline
$c_{11}$ & ${\rm Al}_{x}{\rm Ga}_{1-x}{\rm As}$ elastic modulus & 119.94 GPa\\
$c_{12}$ & ${\rm Al}_{x}{\rm Ga}_{1-x}{\rm As}$ elastic modulus & 55.38 GPa\\
$c_{44}$ & ${\rm Al}_{x}{\rm Ga}_{1-x}{\rm As}$ elastic modulus & 59.15 GPa\\
$x$ & ${\rm Al}_{x}{\rm Ga}_{1-x}{\rm As}$ aluminum fraction & 0.92\\
$\phi_{\rm crystal}$ & crystalline AlGaAs loss angle & 2.5$\times 10^{-5}$\\
\hline
$R$ & mirror radius & 12500 $\mathrm{\mu m}$\\
$H$ & mirror height & $d+R$ $\mathrm{\mu m}$\\
$f$ & frequency & 100 Hz\\
\hline
\end{tabular}
\caption{Numerical values of parameters used in our numerical thermal noise computations, unless otherwise stated (e.g., when adjusting a parameter to check the expected scaling). \label{tab:params}}
\end{center}
\end{table}
Figure~\ref{fig:convergence} assesses our code's numerical convergence by showing fractional differences in the total stored energy and in the coating stored energy as a function of resolution $N$ (cf. Sec.~\ref{sec:performance}). In the top panel, we consider a mirror made entirely of fused silica, treating a thin slice of thickness $d$ on the top face of the cylinder as the coating. The bottom panels of Fig.~\ref{fig:convergence} use an AlGaAs coating, with the left panel treating the coating as an effective isotropic material (with effective bulk and shear elastic moduli) and the right panel treating the coating as a cubic crystal with three independent elastic modulus components. We do not expect perfectly monotonic behavior, because adaptive mesh refinement does not uniformly increase resolution. Nevertheless, we are satisfied that the differences generally decrease until they reach a fractional error of 0.1\%, which is sufficient for our purposes (e.g., sufficient to compare with experimental results).
\begin{figure*}[ht]
\includegraphics[width=3.15in]{Figures/1BeamGlassTotalEnergyCon.pdf}
\\
\includegraphics[width=3.15in]{Figures/1IsoTotalEnergyCon.pdf}
\includegraphics[width=3.15in]{Figures/CrystalTotalEnergyCon.pdf}
\caption{
Numerical convergence for a fused silica mirror with various coatings: \emph{Top:} fused silica, \emph{Bottom Left:} AlGaAs (effective isotropic), \emph{Bottom Right:} AlGaAs (crystalline) \label{fig:convergence}}
\end{figure*}
\begin{figure*}[ht]
\includegraphics[width=3.15in]{Figures/VaryBeamNoise.pdf}
\includegraphics[width=3.15in]{Figures/VaryBeamNoiseDiff.pdf}
\includegraphics[width=3.15in]{Figures/VaryBeamNoiseTot.pdf}
\includegraphics[width=3.15in]{Figures/VaryBeamNoiseTotDiff.pdf}
\includegraphics[width=3.15in]{Figures/CoatingThicknessEffects.pdf}
\includegraphics[width=3.15in]{Figures/CoatingThicknessEffectsDiff.pdf}
\caption{
Numerical computations of thermal noise for different mirrors. The left column shows the numerical thermal noise and the approximate analytic solutions given in Eqs.~(\ref{eq:analyticSubstrate})--(\ref{eq:analyticCoat}). The right column shows the fractional differences between the numerical and approximate analytic solutions. The top two rows show thermal noise for a mirror where both the substrate and coating are made of fused silica, while the bottom row shows thermal noise for a fused-silica substrate with a AlGaAs coating, treated as an effective isotropic material (so that the analytic solutions can be used).\label{fig:edge}}
\end{figure*}
Figure~\ref{fig:edge} compares our code's results to known, approximate, analytic solutions, for isotropic, amorphous coatings. We find that our numerical solutions approach the expected analytic solutions in the appropriate limits. In Fig.~\ref{fig:edge}, the left panels show thermal noise as a function of different dimensionless ratios that each characterize a different approximation in the analytic solutions. The right panels show differences between the numerical and approximate analytic results. Each numerical point represents the highest simulated resolution for the given physical dimensions.
In the top two rows of Fig.~\ref{fig:edge}, we show the thermal noise for different beam sizes $r_0$, holding all other quantities fixed. The mirror is entirely fused silica, with the topmost layer of thickness $d$ treated as the coating. In the top row, we show the coating thermal noise as a function of the dimensionless quantity $d/r_0$, which is small in the thin-coating approximation used in the analytic solution. The numerical solution approaches the analytic solution as $d/r_0$ approaches zero, as expected. In the middle row, we show the total thermal noise for different beam widths $r_0$ as a function of the dimensionless quantity $r_0/R$, which characterizes the importance of edge effects; these are neglected in the analytic solution (i.e., the analytic solution does not depend on $R$). The mirror is again entirely fused silica. Again, the numerical solution approaches the analytic solution as $r_0/R$ approaches zero, as expected.
In the bottom row of Fig.~\ref{fig:edge}, we show the coating thermal noise for different coating thicknesses $d$ as a function of the dimensionless quantity $d/R$, which characterizes the thin-coating approximation. The mirror in this case is made of two materials, a fused silica substrate of thickness $R$ and an effective isotropic AlGaAs coating of thickness $d$. The numerical solution approaches the analytic solution as $d/R$ approaches zero, as expected. Finally, note that the value of each point in the difference plots lies within the numerical error of that particular point's simulation. Larger numerical error (caused, e.g., by greater difficulty in resolving different length scales) explains the anomalous behavior of the leftmost points.
Finally, we compare the numerical thermal noise for AlGaAs coatings on fused silica substrates, comparing the results when the coating is treated as a cubic crystal in the elastic problem to results treating the coating as an effective isotropic material. Figure~\ref{fig:noise} shows the coating thermal noise as a function of resolution for two mirrors with the fused silica substrate: one with an AlGaAs effective isotropic coating and one with an AlGaAs crystalline coating. We compare both numerical results to an approximate analytic solution for an amorphous, semi-infinite mirror with a thin, amorphous coating. We resolve the effect of treating the crystalline coating as a cubic crystal.
Unlike the analytic solution, neither numerical solution neglects edge effects or coating thickness effects in the elasticity calculation. As a result of these finite-size effects, the effective isotropic and analytic solutions differ by approximately 7\%. Additionally including crystalline effects (i.e., treating the coating as a cubic crystal, rather than as an amorphous material) causes the thermal noise to differ from the approximate, analytic solution by about 4\%, since the crystalline numerical result is about 3\% larger than the effective-isotropic numerical result. For the particular case we consider (mirror dimension, beam size, temperature, etc.), then, we conclude that including finite-size effects causes a deviation from the approximate, analytic solution comparable in magnitude to that caused by treating the coating as a cubic crystal.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=3.15in]{Figures/IsoAndCrystalNoise.pdf}
\caption{
Coating Brownian noise at
$f=100\mbox{ Hz}$ for analytic and numerical effective isotropic and numerical anisotropic crystalline elastic moduli. \label{fig:noise}}
\end{center}
\end{figure*}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have numerically computed the Brownian substrate and coating thermal noise for a cylindrical mirror with a thin, reflective, possibly crystalline coating. To do this, we have developed a new tool, built on open-source libraries, that computes thermal noise by solving an elastostatic problem and inserting the solution into the fluctuation-dissipation theorem. Using a parallel finite-element method with adaptive mesh refinement, we have demonstrated numerical convergence, resolutions up to approximately $2 \times 10^{8}$ degrees of freedom, and the capability to run on up to hundreds of processors. Because of limited memory on the cluster where we performed our calculations, the highest resolutions were only achievable with the increased memory available when running on a larger number of processors than would otherwise be necessary.
Using this new tool, we have computed the Brownian thermal noise for a cylindrical mirror with a thin, reflective coating. When the coating is amorphous, we agree well with approximate, analytic solutions that neglect edge effects and anisotropic effects, and our numerical results scale as expected with beam radius, mirror size, and coating thickness. When the coating is a cubic crystal (specifically, AlGaAs), our numerical results show a small but significant difference between the noise computed accounting for the crystal's anisotropy and the noise computed while treating the crystal as an effective isotropic material. The C++ source code for our tool is available at \cite{thermalNoiseRepo}.
Because our code is open, additional physics can be incorporated in future work, supporting the long-term goal of understanding and reducing thermal noise in future gravitational-wave detectors. For instance, as discussed in Sec.~\ref{sec:lossangles}, rather than using a single mechanical loss angle, one could extend our tool to treat the mechanical loss in the coating more realistically, by introducing one loss angle for each independent component of the Young's tensor, generalizing the ``bulk'' and ``shear'' loss angles introduced in Ref.~\cite{hong2013brownian}. Other directions for future work include extending our code to compute thermoelastic and thermorefractive noise and incorporating more realistic mirror shapes with physically correct edge effects, e.g., including ``ears'' that are held fixed by suspension fibers~\cite{heptonstall2014} and adjusting our outer boundary condition accordingly. Another potential direction for future study is varying the laser beam intensity profile. Flatter intensity profiles better average thermal fluctuations than Gaussian profiles do; one can use our tool to explore numerically how thermal noise varies with beam shape, generalizing results for semi-infinite, amorphous optics~\cite{o2004implications,
braginsky2003thermodynamical,fejer2004thermoelastic,vinet2005mirror,
o2006note,Lovelace:2006ne} to crystalline optics of finite size. Finally, realistic LIGO optics use multi-layer coatings; improved, high-accuracy meshes could potentially enable our tool to explore the effects of multiple layers on the coating thermal noise.
\\
\ack{We are pleased to acknowledge Rana X.~Adhikari, Garrett Cole, and Joshua R.~Smith for helpful discussions. This work was supported in part by National
Science Foundation grants PHY-1307489, PHY-1606522, PHY-1654359, and AST-1559694. Computations in this paper were performed on the Orange County Relativity cluster for Astronomy (ORCA), which is supported in part by National Science Foundation grant PHY-1429873, by the Research Corporation for Science
Advancement, and by California State University, Fullerton. Some of the
formulas in this paper were checked using Mathematica.}
\section*{References}
\bibliographystyle{iopart_num}
|
2,869,038,156,921 | arxiv | \section{Introduction}
A newborn protostar generates a fast and well collimated jet, possibly
surrounded by a wider angle wind. In turn, the ejected material drives
(bow-)shocks travelling through the surrounding high-density medium
and traced by H$_2$ ro-vibrational lines at
excitation temperatures of around 2000 K.
As a consequence, slower and cold (10--20
K) molecular outflows are formed by swept-up material, usually traced by
CO. Shocks heat the gas and trigger several processes such as
endothermic chemical reactions and ice grain mantle sublimation
or sputtering. Several molecular species undergo significant
enhancements in their abundances (see e.g., van Dishoeck \& Blake
\cite{vanblake}), as shown by observations at millimeter
wavelengths towards a number of outflows (Garay et al. \cite{garay};
Bachiller \& P\'erez Guti\'errez 1997, BP97 hereafter; J\o{}rgensen et
al. \cite{jorge}). The link between the gas components at $\sim$ 10 K
and the hot 2000 K shocked component is crucial to understanding how the
protostellar wind transfers momentum and energy back to
the ambient medium. In this context, the understanding of the chemical
composition of a typical molecular bow-shock is essential because it
represents a very powerful diagnostic tool for probing its physical
conditions.
The L1157 outflow, located at a distance estimated to be between 250 pc
(Looney et al. \cite{looney}) and 440 pc (Viotti \cite{viotti}), may be
regarded as the ideal laboratory for observing the effects of shocks on
the gas chemistry, being the archetype of the so-called chemically
rich outflows (Bachiller et al. \cite{bach01}). The low-luminosity
(4--11 $L_{\rm \sun}$) Class 0 protostar IRAS20386+6751 drives a
precessing powerful molecular outflow associated with several bow
shocks seen in CO (Gueth et al. \cite{gueth96}) and in IR H$_2$
images (Davis \& Eisl\"offel \cite{davis}; Neufeld et
al. \cite{neufeld09}). In particular, the brightest blue-shifted
bow-shock, called B1 (Fig. \ref{maps}), has been extensively mapped with
the PdB and VLA interferometers at mm and cm wavelengths, revealing a
rich and clumpy structure, the clumps being located at the wall of the
cavity with an arch-shape (Tafalla \& Bachiller
\cite{tafabachi}; Gueth et al. \cite{fred98}; Benedettini et
al. \cite{milena07}, hereafter BVC07; Codella et al. \cite{code09}). L1157-B1 is well
traced by molecules thought to be released by dust mantles such as
H$_2$CO, CH$_3$OH, and NH$_3$ as well as typical tracers of
high-speed shocks such as SiO (e.g., Gusdorf et al. \cite{gus08b}).
Temperatures $\simeq$ 60--200 K (from NH$_3$, CH$_3$CN, and SiO)
as well as around 1000 K (from H$_2$) have been derived (Tafalla \& Bachiller
\cite{tafabachi}; Codella et al. \cite{code09}; Nisini et al. \cite{nisini07}, in prep.).
However, a detailed study of the excitation conditions of the B1 structure
has yet to be completed because of
the limited range of excitation covered by the
observations performed so far at cm- and mm-wavelengths. Observations
of sub-mm lines with high excitation ($\ge$ 50--100 K above the ground
state) are thus required.
As part of the {\it Herschel}
Key Program CHESS\footnote{http://www-laog.obs.ujf-grenoble.fr/heberges/chess/}
(Chemical {\it Herschel} Surveys of Star forming regions),
L1157-B1 is currently being investigated with an unbiased spectral survey using
the HIFI instrument (de Graauw et al. \cite{hifi}). In this Letter, we
report the first results based on HIFI observations in the
555--636 GHz spectral window, confirming the chemical richness and
revealing different molecular components at different
excitation conditions coexisting in the B1 bow structure.
\section{Observations}
\begin{figure}
\centering
\includegraphics[angle=-90,width=5cm]{codella_chessfg1.ps}
\caption{The B1 clump. PdBI emission of
CH$_3$OH(2$_{\rm 1}$--1$_{\rm 1}$)A$^{-}$ (grey) on the CS(2--1)
one (contours), from BVC07. The maps are
centred on the coordinates used for the present HIFI observations
$\alpha_{\rm J2000}$ = 20$^{\rm h}$ 39$^{\rm m}$ 10$\fs$2,
$\delta_{\rm J2000}$ = +68$\degr$ 01$\arcmin$ 10$\farcs$5, i.e. at
$\Delta\alpha$ = +25$\farcs$6 and $\Delta\delta$ = --63$\farcs$5
from the driving protostar. The labels
indicate the main B1 clumps detected in different tracers.
Circles are for the HPBWs of the HIFI data presented here
(39$\arcsec$) and of Band 7 (11$\arcsec$), i.e., at the
highest frequencies of the CHESS surveys.}
\label{maps}
\end{figure}
The observations were performed on 2009 August 1, during the
Performance Verification phase of the HIFI heterodyne instrument (de
Graauw et al. \cite{hifi}) on board the {\it Herschel} Space Observatory
(Pilbratt et al. \cite{herschel}). The band called 1b (555.4--636.2 GHz)
was covered in double-sideband (DSB) with a total
integration time of 140 minutes.
The Wide Band Spectrometer was used with a
frequency resolution of 1 MHz. The typical HPBW is
39$\arcsec$. The data were processed with the
ESA-supported package HIPE\footnote{HIPE is a joint development by the {\it Herschel} Science
Ground Segment Consortium, consisting of ESA, the NASA {\it Herschel} Science Center,
and the HIFI, PACS and
SPIRE consortia.} ({\it Herschel} Interactive Processing
Environment) for baseline subtraction and sideband
deconvolution and then analysed with
the GILDAS\footnote{http://www.iram.fr/IRAMFR/GILDAS} software. All the
spectra (here in units of antenna $T_{\rm a}$) were smoothed
to a velocity resolution of 1 km s$^{-1}$, except those showing the
weakest emission, which were smoothed to lower spectral resolutions
(up to 4 km s$^{-1}$). At a velocity resolution of 1 km s$^{-1}$, the rms noise is 6--13 mK
($T_{\rm a}$ scale), depending on the line frequency. The main-beam
efficiency ($\eta_{mb}$) has not yet been reliably determined.
When needed, we adopted an average $\eta_{mb}$ of 0.72.
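For illustration, the scale conversion and smoothing just described amount to the following simple operations (a Python sketch with synthetic data, not our actual pipeline):
\begin{verbatim}
import numpy as np

eta_mb = 0.72  # adopted average main-beam efficiency

def ta_to_tmb(T_a):
    # Convert antenna temperature to the main-beam scale.
    return T_a / eta_mb

def rebin(spectrum, n):
    # Boxcar-average n native channels per output channel; two 1 MHz
    # channels give roughly 1 km/s at these frequencies.
    m = (len(spectrum) // n) * n
    return spectrum[:m].reshape(-1, n).mean(axis=1)

# Synthetic-noise example: the rms drops by ~sqrt(n) after smoothing.
rng = np.random.default_rng(1)
raw = rng.normal(0.0, 0.010, 4096)     # 10 mK rms, T_a scale
print(rebin(ta_to_tmb(raw), 2).std())  # ~ 0.010/0.72/sqrt(2) K
\end{verbatim}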
\begin{table*}
\caption{List of molecular species and transitions observed with HIFI (Band 1b):
CO and H$_2$O emission is discussed in Lefloch et al. (\cite{letter2}). Peak velocity and intensity
(in $T_{\rm a}$ scale),
integrated intensity ($F_{\rm int}$), as well as the terminal velocities of the line emission
($V_{\rm min}$ and $V_{\rm max}$) are reported.}
\label{lines}
\centering
\begin{tabular}{lrrrrcccc}
\hline
\multicolumn{1}{c}{Transition} &
\multicolumn{1}{c}{$\nu_{\rm 0}$$^a$} &
\multicolumn{1}{c}{$E_{\rm u}$$^a$} &
\multicolumn{1}{c}{$T_{\rm peak}$} &
\multicolumn{1}{c}{rms} &
\multicolumn{1}{c}{$V_{\rm peak}$} &
\multicolumn{1}{c}{$V_{\rm min}$} &
\multicolumn{1}{c}{$V_{\rm max}$} &
\multicolumn{1}{c}{$F_{\rm int}$} \\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{(MHz)} &
\multicolumn{1}{c}{(K)} &
\multicolumn{1}{c}{(mK)} &
\multicolumn{1}{c}{(mK)} &
\multicolumn{1}{c}{(km s$^{-1}$)} &
\multicolumn{1}{c}{(km s$^{-1}$)} &
\multicolumn{1}{c}{(km s$^{-1}$)} &
\multicolumn{1}{c}{(K km s$^{-1}$)} \\
\hline
o-H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$) & 556936.002 & 27 & 910(17) & 17 & --0.37(1.00) & --25.4 & +7.6 &
11.68(0.10) \\
CH$_3$OH E ($11_{2,9}-10_{1,9}$) & 558344.500 & 168 & 47(10) & 10 & +0.60(1.00) & --2.9 & +2.7 &
0.16(0.02) \\
o-H$_2$CO(8$_{\rm 18}$--7$_{\rm 17}$) & 561899.318 & 118 & 92(4)$^b$ & 8 & +0.38(0.14)$^b$ & --4.5 & +3.6 &
0.49(0.03)$^b$ \\
CH$_3$OH E ($3_{2,2}-2_{1,2}$) & 568566.054 & 32 & 42(8) & 8 & +0.60(1.00) & --2.6 & +2.7 &
0.31(0.02) \\
o-NH$_3$(1$_0$--0$_0$) & 572498.068 & 28 & 122(7) & 7 & +1.03(0.80) & --6.9 & +5.0 & 0.89(0.03) \\
CO(5-4) & 576267.931 & 83 & 883(10) & 10 & +1.60(1.00) & --36.5 & +6.0 & 49.30(0.07) \\
p-H$_2$CO(8$_{\rm 08}$--7$_{\rm 07}$) & 576708.315 & 125 & 41(7) & 7 & +1.80(1.00) & --5.4 & +2.7 &
0.22(0.02) \\
CH$_3$OH A$^{-}$ ($2_{2,1}-1_{1,0}$) & 579084.700 & 45 & 53(8) & 8 & +0.60(1.00) & --6.0 & +3.7 &
0.29(0.03) \\
CH$_3$OH E ($12_{1,12}-11_{1,11}$) & 579151.003 & 178 & 44(8) & 8 & --0.30(1.00) & --4.0 & +4.0 &
0.25(0.02) \\
CH$_3$OH A$^{+}$ ($12_{0,12}-11_{0,11}$) & 579459.639 & 181 & 48(7) & 7 & +0.90(1.00) & --3.9 & +4.9 &
0.27(0.02) \\
CH$_3$OH A$^{+}$ ($2_{2,0}-1_{1,1}$) & 579921.342 & 45 & 42(7) & 7 & --0.60(1.00) & --3.3 & +2.6 &
0.21(0.02) \\
CH$_3$OH E ($12_{2,10}-11_{2,9}$) & 580902.721 & 195 & 11(4) & 4 & --3.00(3.00) & --3.9 & +2.5 &
0.09(0.02) \\
CH$_3$OH A$^{+}$ ($6_{1,6}-5_{0,5}$) & 584449.896 & 63 & 88(9) & 9 & --0.30(1.00) & --6.0 & +2.9 &
0.58(0.03) \\
CS(12--11) & 587616.240 & 183 & 23(2)$^b$ & 5 & --0.61(0.57)$^b$ & --7.3 & +6.5 &
0.19(0.03)$^b$ \\
CH$_3$OH A$^{+}$ ($7_{3,5}-6_{2,4}$) & 590277.688 & 115 & 42(9) & 9 & +0.60(1.00) & --1.0 & +3.0 &
0.16(0.02) \\
CH$_3$OH A$^{-}$ ($7_{3,4}-6_{2,5}$) & 590440.291 & 115 & 40(9) & 9 & --0.60(1.00) & --2.6 & +0.8 &
0.10(0.02) \\
CH$_3$OH E ($9_{0,9}-8_{1,8}$) & 590790.957 & 110 & 40(10) & 10 & +0.60(1.00) & --4.7 & +4.3 &
0.28(0.03) \\
o-H$_2$CO(8$_{\rm 17}$--7$_{\rm 16}$) & 600374.604 & 126 & 29(7)$^b$ & 9 & --0.14(0.57)$^b$ & --3.0 & +1.8
& 0.19(0.04)$^b$ \\
CH$_3$OH E ($4_{2,3}-3_{1,3}$) & 616979.984 & 41 & 47(6) & 6 & +0.90(1.00) & --4.5 & +2.6 &
0.26(0.02) \\
HCN(7--6) & 620304.095 & 119 & 94(7) & 7 & --0.60(1.00) & --7.6 & +3.2 & 0.68(0.03) \\
HCO$^+$(7--6) & 624208.180 & 119 & 30(3)$^b$ & 8 & +0.53(0.47)$^b$ & --3.6 & +4.5 &
0.11(0.03)$^b$ \\
CH$_3$OH A$^{-}$ ($3_{2,2}-2_{1,1}$) & 626626.302 & 52 & 18(5) & 5 & --1.20(4.00) & --2.5 & +6.3 &
0.18(0.07) \\
CH$_3$OH E ($13_{1,13}-12_{1,12}$) & 627170.503 & 209 & 33(7) & 7 & --0.30(1.00) & --3.2 & +2.5 &
0.15(0.02) \\
CH$_3$OH A$^{+}$ ($13_{0,13}-12_{0,12}$) & 627558.440 & 211 & 41(9) & 9 & +0.60(1.00) & --3.6 & +2.5
& 0.19(0.02) \\
CH$_3$OH A$^{+}$ ($3_{1,2}-2_{1,2}$) & 629140.493 & 52 & 52(9) & 9 & +0.60(1.00) & --3.5 & +3.7 &
0.24(0.03) \\
CH$_3$OH A$^{+}$ ($7_{1,7}-6_{0,6}$) & 629921.337 & 79 & 70(13) & 13 & +1.50(1.00) & --3.9 & +3.7 &
0.37(0.04) \\
o-H$_2$CO(9$_{\rm 19}$--8$_{\rm 18}$) & 631702.813 & 149 & 66(4)$^b$ & 9 & +0.45(0.17)$^b$ & --1.5 & +2.6
& 0.22(0.02)$^b$ \\
\hline
\end{tabular}
\begin{center}
$^a$ Frequencies and spectroscopic parameters have been extracted from the Jet
Propulsion Laboratory molecular database (Pickett et al. \cite{pickett}) for all
the transitions except those of CH$_3$OH, which have been extracted from the
Cologne Database for Molecular Spectroscopy (M\"uller et al. \cite{muller}). Upper
level energies refer to the ground state of each symmetry. $^b$ Gaussian fit. \\
\end{center}
\end{table*}
\section{Different tracers at different velocities}
\begin{figure*}
\centering
\includegraphics[angle=-90,width=11cm]{codella_chessfg2.ps}
\caption{Molecular line profiles observed towards L1157-B1: species
and transitions are reported in the panels. The vertical solid line
indicates the ambient LSR velocity (+2.6 km s$^{-1}$ from C$^{18}$O emission; BP97), while
the dashed one is for the secondary peak at --4.0 km s$^{-1}$.}
\label{spectra}
\end{figure*}
\begin{figure}
\centering
\includegraphics[angle=-90,width=7cm]{codella_chessfg3.ps}
\caption{{\it Top and Middle panel}: Comparison between the profiles of
NH$_3$(1$_0$--0$_0$), multiplied by a factor 7.4, H$_2$CO(8$_{\rm
17}$--7$_{\rm 16}$), multiplied by a factor 22.5, HCN(7--6), multiplied
by a factor 9.0, and
H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$), the latter from Lefloch et
al. (\cite{letter2}). The vertical solid line indicates the
ambient LSR velocity (+2.6 km s$^{-1}$). The velocity ranges
arbitrarily defined as HV (--20,--6 km s$^{-1}$; traced by H$_2$O), MV (--6,
--1.5 km s$^{-1}$; outlined by the HCN and H$_2$CO secondary peak), and LV
(--1.5,+2.6 km s$^{-1}$; the rest of the blue wing) are drawn (see
text).
{\it Bottom panel}: Intensity NH$_3$/H$_2$O line ratio as a
function of velocity.}
\label{profiles}
\end{figure}
A total of 27 emission lines were detected, with
a wide range of
upper level energies, from a few tens to a few hundreds of Kelvin. Table 1 lists
the spectroscopic and observational parameters of all the transitions.
For the first time, high excitation (up to
$\simeq$ 200 K) emission lines related to species whose abundance is
largely enhanced in shocked regions were detected.
The CO(5--4) and H$_2$O(1$_{\rm
10}$--1$_{\rm 01}$) lines are analysed in Lefloch et
al. (\cite{letter2}).
Figure \ref{spectra} presents representative examples of line profiles
observed towards L1157-B1. All the spectra show blue-shifted wings
peaking near 0 km s$^{-1}$, with terminal velocities
of $\sim$ --8 to --6 km s$^{-1}$. Previous PdBI observations
showed that L1157-B1 is associated with very high velocities
(HVs), down to $\simeq$ --20 km s$^{-1}$ ($v_{\rm LSR}$ = +2.6 km
s$^{-1}$, BP97). We cannot exclude that the lack of detected emission in the HV
regime in the present HIFI spectra is caused by their
relatively low signal-to-noise (S/N) ratio.
that the brightness of the emission lines in the HV regime
is indeed weaker than the emission at
low velocities by a factor of 5--10. The spectra in
Fig. \ref{spectra} clearly show that this weak emission would lie
below the noise. On the other hand, the HV gas is detected
in the very bright lines of CO
and H$_2$O (Lefloch et al. \cite{letter2}).
We note that the HV emission is mostly confined within the eastern B1a
clump (Fig. \ref{maps}), within an emitting region of size
$\le$ 10$\arcsec$ (Gueth et al. \cite{fred98}; BVC07),
whereas low velocity lines originate in both
the bow-structure and the walls of the outflow
cavity (e.g., the B0e and B0d in Fig. \ref{maps}),
of typical size 15$\arcsec$--18$\arcsec$. Therefore,
the forthcoming HIFI-CHESS observations at higher frequencies and
higher spatial resolution (see the dashed circle in Fig. \ref{maps})
should allow us to study the HV wings in species other than CO and
H$_2$O.
The uniqueness of HIFI lies in its high spectral resolution
for many high-excitation transitions of a large number of molecular species.
The analysis of the present HIFI spectra reveals a secondary
peak occurring between --3.0 and --4.0 km s$^{-1}$ (here defined as medium velocity, MV)
and well outlined by, e.g.,
HCN(7--6). The MV peak is also visible in NH$_3$(1$_0$--0$_0$) and in
some lines of CH$_3$OH and H$_2$CO (see Fig. \ref{profiles}), but its occurrence does not show
any clear trend with the choice of tracer or with line excitation.
This spectral feature had not been detected in previous single-dish spectra
(BP97; Bachiller et al. \cite{bach01}).
An inspection of the spectra
observed at PdBI shows that the MV secondary peak is observed in a
couple of lines of the CH$_3$OH(2$_{\rm K}$--1$_{\rm K}$) series (see
Fig. 3 of BVC07) and only towards the
western B1b clump (size $\sim$ 5$\arcsec$). This finding implies that
there is a velocity component originating mainly in the western side of
B1, while the HV gas is emitted from the eastern one (see above).
Figure \ref{profiles} compares the profiles of
the NH$_3$(1$_0$--0$_0$) and H$_2$CO(8$_{\rm 17}$--7$_{\rm 16}$) lines
with the H$_2$O(1$_{\rm 10}$--1$_{\rm 01}$) profile, where the S/N
allows such an analysis (MV and LV ranges).
Assuming that the emission in the MV range is
optically thin (including the H$_2$O line) and originates in
the same region, the comparison of their profiles provides a
straightforward estimate of the relative abundance ratios of the gas
at different velocities.
As a notable example, the NH$_3$/H$_2$O intensity ratio decreases
by a factor $\sim$ 5 moving towards higher velocities
(Fig. \ref{profiles}), implying that a similar decrease in the
abundance ratios occurs.
This may reflect different pre-shock ice compositions in the
gas responsible for the MV emission.
Alternatively, this behavior is consistent with
NH$_3$ being released by grain mantles, but water both being released by
grain mantles and, in addition, copiously forming in the warm shocked gas from
endothermic reactions, which convert all gaseous atomic oxygen into
water (Kaufman \& Neufeld 1997; Jim\'enez-Serra et
al. \cite{jimenez}, and references therein). The water
abundance may be enhanced with respect to ammonia in the fast and warm
($\geq 220$ K) gas, which might explain why the H$_2$O wings are larger
than those of NH$_3$, CH$_3$OH, and H$_2$CO, all species being directly
evaporated from dust grain mantles.
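As an illustrative sketch of this profile comparison (not the actual reduction pipeline), the velocity-resolved NH$_3$/H$_2$O ratio of Fig. \ref{profiles} can be computed channel by channel once the two spectra are resampled onto a common velocity grid; the array names below are hypothetical (Python):
\begin{verbatim}
import numpy as np

def line_ratio(T_nh3, T_h2o, rms_h2o, snr_min=3.0):
    """Channel-by-channel NH3/H2O intensity ratio on a common
    velocity grid, masked where H2O falls below snr_min * rms."""
    ratio = np.full_like(T_h2o, np.nan)
    ok = T_h2o > snr_min * rms_h2o
    ratio[ok] = T_nh3[ok] / T_h2o[ok]
    return ratio
\end{verbatim}
Restricting the ratio to the MV and LV ranges, under the optically thin and co-spatial assumptions stated above, gives the abundance-ratio proxy plotted in the bottom panel.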
\section{Physical properties along the B1 bow shock}
We detected several lines from
CH$_3$OH (17 lines with upper level energies up to 211 K).
We can derive a first estimate of the emitting gas
temperature by means of the standard
analysis of the rotational diagram.
We show the case of methanol (A- and E-forms) in
Fig. \ref{rotameta}.
The derived rotational temperature ($T_{\rm rot}$) is 106 K (with
an uncertainty of $\sim$ 20 K), which represents a
lower limit to the kinetic temperature ($T_{\rm kin}$). In the same figure, we
report the methanol lines (2$_{\rm K}$--1$_{\rm K}$) observed with
PdBI and whose intensity is integrated in the HIFI 39$\arcsec$ beam.
The $T_{\rm rot}$ derived from the ground-based data (based
only on lines with $E_{\rm u}$ $\le$ 50 K; BVC07) is definitely lower,
$\sim$12 K, in perfect agreement with that found with the 30-m
spectra in the same excitation range by Bachiller et
al. (\cite{bach95}). As discussed by Goldsmith \& Langer
(\cite{gold}), this behavior may be caused by either two components
at different temperatures or both non-LTE effects and line opacity.
These two possibilities cannot be distinguished based only on the
rotational diagram.
However, given that a range of $T_{\rm kin}$ and $n_{\rm H_2}$ is naturally expected in a shock,
if we were to assume that two gas components provide an
explanation, they would not only have different temperatures but
also different column densities. Taking the filling factor ff =
0.13, derived from the CH$_3$OH maps obtained at the PdBI, the low
temperature component column density is 8 $\times$ 10$^{14}$
cm$^{-2}$ (in agreement with Bachiller et
al. \cite{bach95}), whereas the high temperature component has
a column density of around 10$^{14}$ cm$^{-2}$.
We note that the rotation diagrams obtained
for the MV and LV CH$_3$OH emission separately do not allow us
to infer any clear difference.
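For reference, the rotational-diagram analysis reduces to a straight-line fit of $\ln(N_{\rm u}/g_{\rm u})$ against $E_{\rm u}$; a minimal sketch (Python), assuming optically thin LTE emission and that the upper-level column densities $N_{\rm u}$ have already been derived from the integrated line intensities:
\begin{verbatim}
import numpy as np

def rotation_diagram(E_u, N_u, g_u):
    """Fit ln(N_u/g_u) = ln(N_tot/Q) - E_u/T_rot, with E_u in K.

    Returns T_rot = -1/slope and the intercept ln(N_tot/Q(T_rot))."""
    slope, intercept = np.polyfit(E_u, np.log(N_u / g_u), 1)
    return -1.0 / slope, intercept
\end{verbatim}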
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{codella_chessfg4.ps}
\caption{Rotation diagrams for the CH$_3$OH transitions measured with
HIFI (triangles) and from ground (PdBI; squares).
Black and blue points are for A- and E-form, respectively.
The parameters $N_{\rm u}$, $g_{\rm u}$, and $E_{\rm u}$ are,
respectively, the column density, the degeneracy and the energy
(with respect to the ground state of each symmetry) of the upper level.
The derived values of
the rotational temperature are reported: (i) 106 K, for the HIFI
lines covering the $E_{\rm u}$ = 32--211 K excitation range and (ii)
12 K (as Bachiller et al. \cite{bach95}), for the
PdBI lines, at lower excitation.}
\label{rotameta}
\end{figure}
It is possible to more tightly constrain the emitting gas temperature and
$n_{\rm H_2}$ density for
the species whose collisional rate coefficients are known,
by performing a non-LTE
analysis. To this end, we used the non-LTE excitation code RADEX
with an escape probability formalism for the radiative transfer
(Van der Tak et al. \cite{radex}) coupled with the LAMDA database
(Sch\"oier et al. \cite{lamda}).
Methanol is the species detected in the largest number of lines.
The full non-LTE study will be reported in a forthcoming paper.
Here we analysed only the E-form, for which the
collisional rate coefficients are available (Pottage et al. \cite{pottage}).
The major result of
this analysis is that for a range of densities of 10$^{3}$--10$^{7}$ cm$^{-3}$,
the gas temperature exceeds 200 K.
A similar result is obtained by considering H$_2$CO emission.
Finally, by combining the HIFI CS(12--11)
line with CS(2--1) and (3--2) lines observed with ground-based telescopes,
we also derive a kinetic temperature that is definitely above 300 K for the outflowing
gas.
In this case, caution should be taken since we are able to trace different
gas components, as suggested by CH$_3$OH, with the gas at higher excitation
being traced by CS(12--11).
If we analyse only the (2--1)/(3--2) intensity ratio, the non-LTE approach does
not allow us to constrain
the temperature, but we are able to infer an $n_{\rm H_2}$ of around 4 $\times$ 10$^4$ cm$^{-3}$.
Interestingly, when we check for a possible dependence of $n_{\rm H_2}$ on velocity,
the LV range is found to be indicative of a denser medium ($\sim$ 10$^5$ cm$^{-3}$) by
an order of magnitude with respect to the MV gas.
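Schematically, the density constraint from the (2--1)/(3--2) ratio amounts to a grid search over ($T_{\rm kin}$, $n_{\rm H_2}$). In the sketch below (Python), the function \texttt{model\_ratio} is a hypothetical wrapper around a RADEX-like escape-probability calculation, and the grids are illustrative:
\begin{verbatim}
import numpy as np

def constrain_density(obs_ratio, obs_err, model_ratio):
    """Return the (T_kin, n_H2) grid points whose predicted
    CS(2-1)/(3-2) ratio matches the observed one within the error."""
    T_grid = np.linspace(50.0, 500.0, 10)       # K
    n_grid = np.logspace(3.0, 7.0, 9)           # cm^-3
    return [(T, n) for T in T_grid for n in n_grid
            if abs(model_ratio(T, n) - obs_ratio) < obs_err]
\end{verbatim}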
\section{Conclusions}
We have presented the HIFI unbiased spectral survey in the 555--636 GHz
band towards the bright bow-shock B1 of the L1157 protostellar outflow.
For the first time, we have detected high-excitation (up to
$\simeq$ 200 K) emission lines of species whose abundance is
largely enhanced in shocked regions (e.g., H$_2$O, NH$_3$, H$_2$CO,
CH$_3$OH). These species have allowed us to trace
the existence of a high-excitation component with $T_{\rm kin}$
$\ge$ 200--300 K.
Temperature components from $\sim$ 300 K to $\sim$ 1400 K have been inferred
from the analysis of the H$_2$ pure rotational lines (Nisini et al., in prep.).
Therefore the present observations provide a link
between the
gas at $T_{\rm kin}$ 60--200 K previously observed from the ground and
the warmer gas probed by the H$_2$ lines.
We plan to perform additional HIFI observations in the THz region
towards L1157-B1
to observe more species and transitions, and thus to derive reliable abundances and
study the different gas components associated with the bow structure.
\begin{acknowledgements}
HIFI has been designed and built by a consortium of institutes and
university departments from across
Europe, Canada and the United States under the leadership of SRON Netherlands Institute
for Space Research, Groningen, The Netherlands and with major contributions from Germany, France
and the US. Consortium members are: Canada: CSA, U.Waterloo; France: CESR, LAB, LERMA, IRAM;
Germany: KOSMA, MPIfR, MPS; Ireland, NUI Maynooth; Italy: ASI, IFSI-INAF, Osservatorio
Astrofisico di Arcetri-INAF; Netherlands: SRON, TUD; Poland: CAMK, CBK; Spain:
Observatorio Astron\'omico Nacional (IGN),
Centro de Astrobiolog\'{\i}a (CSIC-INTA). Sweden: Chalmers University of Technology -
MC2, RSS \& GARD;
Onsala Space Observatory; Swedish National Space Board, Stockholm University -
Stockholm Observatory;
Switzerland: ETH Zurich, FHNW; USA: Caltech, JPL, NHSC.
We thank many funding agencies for financial support.
\end{acknowledgements}
\section{Introduction}
A crucial component of Bayesian inference is approximating the posterior distribution, which represents the current state of knowledge about the latent variables $\bd{x}$ after data $\bd{y}$ have been observed. When intractable integrals are involved, variational inference methods find an approximation $q(\bd{x})$ to the posterior distribution $p(\bd{x}|\bd{y})$ by minimizing the Kullback-Leibler (KL) divergence
$\mathrm{KL}\{q(\bd{x})||p(\bd{x}|\bd{y})\}=\int q(\bd{x}){\rm log}\left[{{q(\bd{x})}/{p(\bd{x}|\bd{y})}}\right]d\bd{x}$, providing a lower bound for the marginal likelihood.
To make inference tractable, mean-field variational Bayes (MFVB) methods \citep{jordan1999introduction, wainwright2008graphical} assume $q(\bd{x})$ is factorized over a certain partition of the latent variables $\bd{x}\equiv[\bd{x}_{1}, \hdots, \bd{x}_{J}]$, $q_{\mathrm{VB}}(\bd{x})=\prod_{j}q_{\mathrm{VB}}(\bd{x}_{j})$, with marginal densities $q_{\mathrm{VB}}(\bd{x}_{j})$ in free-form and correlations between partitions neglected. The structured mean-field approaches \citep{saul1996exploiting, hoffman2014structured} preserve partial correlations but apply only to models with readily identified substructures. The variational Gaussian (VG) approximation \citep{barber1998ensemble, opper2009variational} allows incorporation of correlations by postulating a multivariate Gaussian parametric form $q_{\mathrm{VG}}(\bd{x})=\cN(\bd{\mu}, \bd{\Sigma})$. The VG approximation, with continuous margins of real variables, is not suitable for variables that are inherently positive or constrained, skewed, or heavy-tailed. For multi-modal posteriors, a mixture of MFVB \citep{VB-1998-MixtureMeanField} or a mixture of uniformly-weighted Gaussians \citep{GershmanBlei} may be employed, which usually requires a further lower bound on the average over the logarithm of the mixture distribution.
To address the limitations of current variational methods, which fail to simultaneously characterize the posterior dependencies among latent variables while allowing skewness, multimodality, and other non-Gaussian characteristics, we propose a new variational copula framework. Our approach decouples the overall inference task into two subtasks: (\textit{i}) inference of the copula function, which captures the multivariate posterior dependencies; (\textit{ii}) inference of a set of univariate margins, which are allowed to take essentially any form. Motivated by the work on automated (black-box) variational inference \citep{ranganath2013black,mnih2014neural, titsias2014doubly, nguyen2014automated, Dkingma2014}, we present a stochastic optimization algorithm for \emph{generic} hierarchical Bayesian models with continuous variables, which (\rmnum{1}) requires minimal model-specific derivations, (\rmnum{2}) reproduces peculiarities of the true marginal posteriors, and (\rmnum{3}) identifies interpretable dependency structure among latent variables.
Using copulas to improve approximate Bayesian inference is a natural idea that has also been explored recently in other contexts \citep{li2015extending,ferkingstad2015improving}. Independently from our work, \cite{Dtran2015} presented a copula augmented variational method with fixed-form marginals, and utilizes regular vines to decompose the multivariate dependency structure into bivariate copulas and a nest of trees. Our method provides complementary perspectives on nonparametric treatment of univariate marginals.
\section{Variational Copula Inference Framework}
Sklar's theorem \citep{1959fonctions} ensures that any multivariate joint distribution $Q$ can be written in terms of univariate marginal distributions $F_{j}(x)=P(X_{j}\leq x)$, $j=1, \hdots, p$ and a copula which describes the dependence structures between variables, such that
\begin{align}\label{eq1}
Q(x_{1}, \hdots, x_{p})=C[F_{1}(x_{1}),\hdots, F_{p}(x_{p})].
\end{align}
Conversely, if $C$ is a copula and $\{F_{j}\}_{j=1:p}$ are distribution functions, then the function $Q$ defined by \eqref{eq1} is a $p\mhyphen$dimensional joint distribution function with marginal distributions $F_{1}, F_{2}, \hdots, F_{p}$, owing to the marginally closed property \citep{xue2000multivariate}. Assuming $Q(x_{1}, ..., x_{p})$ has $p$-order partial derivatives, the joint probability density function (PDF) is
$q(x_{1}, \hdots, x_{p}) = c_{\bd{\Theta}}[F_{1}(x_{1}),\hdots, F_{p}(x_{p}) ]\prod_{j=1}^{p}f_{j}(x_{j})$, where $f_{j}(x_{j})$ is the PDF of the $j$th variable and it is related to the corresponding cumulative distribution function (CDF) by $F_{j}(x_{j})=\int_{-\infty}^{x}f_{j}(t) d t$, $c_{\bd{\Theta}}$ is the copula density with parameter $\bd{\Theta}$.
Sklar's theorem allows separation of the marginal distributions $F_{j}(x_{j})$ from the dependence structure, which is appropriately expressed in the copula function $C$. As a modeling tool, the specified copula function and margins can be directly fitted to the observed data $\bd{y}$ \citep{liu2009nonparanormal, wauthier2010heavy, lopez2013gaussian}
with their parameters optimized via Bayesian or maximum likelihood estimators (see \cite{smith2013bayesian} and the references therein).
In contrast, our goal is to use a copula as an \emph{inference engine} for full posterior approximation.
All the unknowns (variables/parameters) in the user-specified hierarchical model are encapsulated into a vector $\bd{x}$, and the optimal variational approximation $q_{\mathrm{VC}}(\bd{x})$ to the true posterior $p(\bd{x}|\bd{y})$ is found under the Sklar's representation. This approach provides users with full modeling freedom and does not require conditional conjugacy between latent variables; thus the approach is applicable to general models.
Within some tractable copula family $C\in \mathcal{C}$, and assuming $F(\cdot)$ and $C(\cdot)$ to be differentiable, we construct the variational proposal as
$q_{\mathrm{C}}(\bd{x})= c(\bd{u}) \prod_{j=1}^{p}f_{j}(x_{j})$, where $\bd{u}=F(\bd{x})=[F_{1}(x_{1}), \hdots, F_{p}(x_{p})]$, such that the approximation satisfies
\begin{align}
q_{\mathrm{C}}^{\star}(\bd{x})&=\operatornamewithlimits{arg\,min}_{q_{\mathrm{C}}(\bd{x})} \mathrm{KL}\{q_{\mathrm{C}}(\bd{x})||p(\bd{x}|\bd{y})\}\cr
&=\operatornamewithlimits{arg\,min}_{q_{\mathrm{C}}(\bd{x})} \mathrm{KL}\{q_{\mathrm{C}}(\bd{x})||p(\bd{x})\}-\mathbb{E}_{q_{\mathrm{C}}(\bd{x})}[\ln {p(\bd{y}|\bd{x})}],\nonumber
\end{align}
where $p(\bd{y}|\bd{x})$ is the likelihood and $p(\bd{x})$ is the prior. Letting the true posterior $p(\bd{x}|\bd{y})$ in Sklar's representation be $p(\bd{x}|\bd{y}) =c^{\star}(\bd{v})\prod_{j}f_{j}^{\star}(x_{j})$, where $\bd{v}=[F_{1}^{\star}(x_{1}), \hdots, F_{p}^{\star}(x_{p})]$, $c^{\star}(\bd{v})$ and $\{f_{j}^{\star}(x_{j})\}_{j=1:p}$ are the true underlying copula density and marginal posterior densities, respectively, the KL divergence decomposes into additive terms (derivations are provided in Supplementary Material),
\begin{align}\label{eq2}
\mathrm{KL}\{q_{\mathrm{C}}(\bd{x})||p(\bd{x}|\bd{y})\} &=\mathrm{KL}\{c[F(\bd{x})]||c^{\star}[F^{\star}(\bd{x})]\} \nonumber\\ & +\sum\nolimits_{j}\mathrm{KL}\{f_{j}(x_{j})||f_{j}^{\star}(x_{j})\}.
\end{align}
Classical methods, such as MFVB and the VG approximation are special cases of the proposed VC inference framework. We next compare their KL divergence under Sklar's representation and offer a reinterpretation of them under the proposed framework.
\subsection{Special Case 1: Mean-field VB}
The mean-field proposal corresponds to the independence copula $C_{\Pi}(\bd{u})=\prod_{j=1}^{J}u_{j}$ with free-form marginal densities $f_{j}(\bd{x}_{j})$. Given $c_{\Pi}(\bd{u})= 1$ we have $q_{\Pi}(\bd{x})=c_{\Pi}(\bd{u})\prod_{j}f_{j}(\bd{x}_{j})=\prod_{j}f_{j}(\bd{x}_{j})=q_{\mathrm{VB}}(\bd{x})$. If MFVB is not fully factorized, i.e. $J<p$, the independence copula is the only copula satisfying the marginal closed property, according to the impossibility theorem \citep{nelsen2007introduction}.
MFVB assumes an independence copula and only optimizes the free-form margins,
\begin{align}\label{eq3}
\mathrm{KL}\{q_{\mathrm{VB}}(\bd{x})||p(\bd{x}|\bd{y})\} &=\mathrm{KL}\{c_{\Pi}[F(\bd{x})]||c^{\star}[F^{\star}(\bd{x})]\} \nonumber\\ &+\sum\nolimits_{j}{\mathrm{KL}\{f_{j}(x_{j})||f_{j}^{\star}(x_{j})\}}.
\end{align}
The lowest achievable KL divergence in MFVB is $\mathrm{KL}\{q_{\mathrm{VB}}(\bd{x})||p(\bd{x}|\bd{y})\}=\mathrm{KL}\{c_{\mathrm{\Pi}}[F(\bd{x})]||c^{\star}(F(\bd{x}))\}$, which is achieved when the true posterior marginals are found, i.e. $F_{j}\equiv F^{\star}_{j}, \forall j$, in which case the overall KL divergence is reduced to the KL divergence between the independence copula and the true copula. As is shown in \eqref{eq3}, the objective function contains two terms, both involving marginal CDFs $\{F_{j}\}_{j=1:p}$. Since in general $c^{\star}\neq{}c_{\Pi}$, the optimal $F$ minimizing the first term will not be equal to $F^\star$. Therefore, minimizing \eqref{eq3} will not lead to the correct marginals, and this partially explains why MFVB usually cannot find the true marginal posteriors in practice (e.g., variances can be severely underestimated \citep{neville2014mean}), even though it allows for free-form margins.
\subsection{Special Case 2: VG Approximation}
In fixed-form variational Bayes \citep{honkela2010approximate}, such as VG approximation, the multivariate Gaussian proposal $q_{\mathrm{VG}}(\bd{x})=\cN(\bd{x}; \bd{\mu}, \bd{\Sigma})$ can be written as $q_{\mathrm{VG}}(\bd{x})=c_{\mathrm{G}}(\bd{u}|\bd{\Upsilon})\prod_{j=1}^{p}\phi_{j}(x_{j}; \mu_{j}, \sigma_{j}^{2})$. VG not only assumes the true copula function is a Gaussian copula \citep{xue2000multivariate} with parameter $\bd{\Upsilon}=\bd{D}^{-{1}/{2}}\bd{\Sigma}\bd{D}^{-{1}/{2}}$, $\bd{D}={\rm diag}(\bd{\Sigma})$,
but is also restricted to univariate Gaussian marginal densities $\{\phi_{j}(x_{j}; \mu_{j}, \sigma_{j}^{2})\}_{j=1:p}$,
\begin{align}\label{eq4}
\mathrm{KL}\{q_{\mathrm{VG}}(\bd{x})||p(\bd{x}|\bd{y})\}& =\mathrm{KL}\{c_{\mathrm{G}}[\Phi(\bd{x})]||c^{\star}[F^{\star}(\bd{x})]\}\nonumber \\ &+\sum\nolimits_{j}{\mathrm{KL}\{\phi_{j}(x_{j})||f_{j}^{\star}(x_{j})\}}.
\end{align}
We can see in \eqref{eq4} that if the margins are misspecified, even if the true underlying copula is a Gaussian copula, $c_{\mathrm{G}}\equiv c^{\star}$,
there could still be a discrepancy $\sum_{j}{\mathrm{KL}\{\phi_{j}(x_{j})||f_{j}^{\star}(x_{j})\}}$ between margins, and $\mathrm{KL}\{c_{\mathrm{G}}[\Phi(\bd{x})]||c^{\star}[F^{\star}(\bd{x})]\}$ is not zero.
Concerning analytical tractability and simplicity, in the sequel we concentrate on variational Gaussian copula (VGC) proposals constructed via Gaussian copula with continuous margins, i.e. $q_{\mathrm{VGC}}(\bd{x}) = c_{\mathrm{G}}(\bd{u}|\bd{\Upsilon})\prod_{j=1}^{p}f_{j}(x_{j})$,
where $\bd{u}=[F_{1}(x_{1}), \hdots, F_{p}(x_{p})]$.
Our VGC method extends MFVB and VG, and improves upon both by allowing simultaneous updates of the Gaussian copula parameter $\bd{\Upsilon}$ and the adaptation of marginal densities $\{f_{j}(x_{j})\}_{j=1:p}$. First, the univariate margins in VGC are not restricted to be Gaussian. Second, the Gaussian copula in VGC is more resistant to local optima than the independence copula assumed in MFVB and alleviates its variance underestimation pitfall, as is demonstrated in Section \ref{secNIGG}.
\section{Variational Gaussian Copula Approximation}
A Gaussian copula function with $p\times p$ correlation matrix $\bd{\Upsilon}$ is defined as
$C_{\mathrm{G}}(u_{1}, \hdots, u_{p}|\bd{\Upsilon})=\Phi_{p}(\Phi^{-1}(u_{1}), \hdots,\Phi^{-1}(u_{p})|\bd{\Upsilon}): [0,1]^{p}\rightarrow [0,1]$ where $\Phi(\cdot)$ is a shorthand notation of the CDF of $\cN(0, 1)$, and $\Phi_{p}(\cdot|\bd{\Upsilon})$ is the CDF of $N_{p}(\bd{0}, \bd{\Upsilon})$. The Gaussian copula density is
\begin{align}
c_{\mathrm{G}}(u_{1}, \hdots, u_{p}|\bd{\Upsilon})& = \frac{1}{\sqrt{|\bd{\Upsilon}|}}{\exp{\left\{-\frac{\bd{z}^{T}( \bd{\Upsilon}^{-1}-\bd{I}_{p})\bd{z}}{2}\right\}}}, \nonumber
\end{align}
where $\bd{z} =[\Phi^{-1}(u_{1}), \hdots, \Phi^{-1}(u_{p})]^{T}$.
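For concreteness, this density can be evaluated directly from the expression above; a minimal sketch (Python/SciPy):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def log_gaussian_copula(u, Upsilon):
    """log c_G(u | Upsilon) for u in (0,1)^p and a p x p
    correlation matrix Upsilon."""
    z = norm.ppf(u)                          # z_j = Phi^{-1}(u_j)
    _, logdet = np.linalg.slogdet(Upsilon)
    A = np.linalg.inv(Upsilon) - np.eye(len(u))
    return -0.5 * logdet - 0.5 * z @ A @ z
\end{verbatim}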
In the proposed VGC approximation, the variational proposal $q_{\mathrm{VGC}}(\bd{x})$ is constructed as a product of Gaussian copula density and continuous marginal densities.
The evidence lower bound (ELBO) of VGC approximation is
\begin{align}\label{eq5}
\mathcal{L}_{\mathrm{C}}[q_{\mathrm{VGC}}(\bd{x})]&=\int \bigg[c_{\mathrm{G}}[F(\bd{x})]\times \prod_{j=1}^{p}f_{j}(x_{j})\bigg]\ln{{p(\bd{y},\bd{x})}}d\bd{x} \nonumber\\ & + H[c_{\mathrm{G}}(\bd{u})]+\sum_{j=1}^{p} H[{f_{j}}(x_{j})],
\end{align}
where ${u}_{j}=F_{j}(x_{j})$, $H[f(x)]=-\int f(x)\ln{f(x)} d x$.
However, directly optimizing the ELBO in \eqref{eq5} w.r.t. the Gaussian copula parameter $\bd{\Upsilon}$ and the univariate marginals $\{f_{j}(x_{j})\}_{j=1:p}$ often leads to a non-trivial variational calculus problem. For computational convenience, we present several equivalent proposal constructions based on Jacobian transformation and reparameterization.
\begin{table*}[bpht]
\centering
\caption{Equivalent Representations of Variational Gaussian Copula (VGC) Proposals}
\label{my-label}
\small{
\begin{tabular}{|l|l|c|c|}
\hline
& Posterior Formulation & \multicolumn{2}{c|}{Optimization Space}\\ \hline
R0 &Original & \multicolumn{2}{c|}{Multivariate (non-Gaussian) density $q(\bd{x})$} \\ \hline
R1 & Sklar's Representation & Copula density $c_{\mathrm{G}}(\bd{u}|\bd{\Upsilon})$ & Univariate marginals $\{f_{j}(x_{j})\}_{j=1:p}$ \\ \hline
R2 & Jacobian Transform & Gaussian density $q(\wt{\bd{z}})=\cN(\bd{0}, \bd{\Upsilon})$ & Monotone functions $\{g_{j}(z_{j})\}_{j=1:p}$ \\ \hline
R3 & Parameter Expansion & Gaussian density $q(\wt{\bd{z}})=\cN(\bd{\mu}, \bd{C}\bd{C}^{T})$ & Monotone functions $\{h_{j}(\wt{z_{j}})\}_{j=1:p}$ \\ \hline
\end{tabular}}
\end{table*}
\vspace{-0.2cm}
\subsection{Equivalent Variational Proposals }
We incorporate auxiliary variables $\bd{z}$ by exploiting the latent variable representation of the Gaussian copula: $x_{j} =F_{j}^{-1}(u_{j})$, $u_{j}=\Phi({z}_{j})$, $\bd{z}\sim N_{p}(\bd{0}, \bd{\Upsilon})$.
Letting $g_{j}(\cdot)=F_{j}^{-1}(\Phi(\cdot))$ be bijective monotonic non-decreasing functions, $x_{j}=g_{j}(z_{j})$, $\forall j$, the Jacobian transformation gives
\vspace{-0.0cm}
\begin{align}
q_{\mathrm{VGC}}(\bd{x}) &=\int \bigg[\prod_{j=1}^{p}\delta(x_{j}-g_{j}(z_{j}))\bigg]q_{\mathrm{G}}(\bd{z}; \bd{0}, \bd{\Upsilon}) d\bd{z}\cr &= q_{\mathrm{G}}(g^{-1}(\bd{x}); \bd{0}, \bd{\Upsilon})\bigg[\prod_{j=1}^{p}\frac{d}{dx_{j}}g_{j}^{-1}(x_{j})\bigg],\nonumber
\end{align}
where $\delta(\cdot)$ is the Dirac delta function.
It is inconvenient to directly optimize the correlation matrix $\bd{\Upsilon}$ of interest, since $\bd{\Upsilon}$ is a positive semi-definite matrix with ones on the diagonal and off-diagonal elements between $[-1,1]$. We adopt the parameter expansion (PX) technique \citep{liu1998parameter, liu1999parameter}, which has been applied in accelerating variational Bayes \citep{jaakkola2006parameter} and in sampling correlation matrices \citep{talhouk2012efficient}. Further considering
$\wt{z_{j}}=t_{j}^{-1}(z_{j})=\mu_{j}+\sigma_{jj}z_{j}$, $\wt{\bd{z}}\sim N_{p}(\bd{\mu}, \bd{\Sigma})$, $\bd{\Sigma}=\bd{D}\bd{\Upsilon}\bd{D}^{T}$, $\bd{D}=[{\rm diag}(\sigma_{jj})]_{j=1:p}$, thus $x_{j}=g(z_{j})=g(t(\wt{z_{j}})):=h(\wt{z_{j}})$,
where $h_{j}(\cdot)=g_{j}\circ t_{j}(\cdot)$ are also bijective monotonic non-decreasing functions, the variational proposal is further written as
\begin{align}
q_{\mathrm{VGC}}(\bd{x})&=\int \bigg[\prod_{j=1}^{p}\delta(x_{j}-h_{j}(\wt{z_{j}}))\bigg]q_{\mathrm{G}}(\wt{\bd{z}}; \bd{\mu}, \bd{\Sigma}) d\wt{\bd{z}}\cr &= q_{\mathrm{G}}(h^{-1}(\bd{x}); \bd{\mu}, \bd{\Sigma})\bigg[\prod_{j=1}^{p}\frac{d}{dx_{j}}h_{j}^{-1}(x_{j})\bigg].\nonumber
\end{align}
Given the transformations $\{h_{j}\}_{j=1:p}$, $q_{\mathrm{G}}(\wt{\bd{z}}; \bd{\mu}, \bd{\Sigma})$ can be further reparameterized by the Cholesky decomposition $\bd{\Sigma}=\bd{C}\bd{C}^{T}$ \citep{challis2013gaussian, titsias2014doubly}, where $\bd{C}$ is a square lower triangular matrix. Table \ref{my-label} summarizes four equivalent representations of variational proposals.
\subsection{VGC with Fixed-form Margins}
The ELBO under Sklar's representation \eqref{eq5} is therefore translated into the Jacobian representation
\vspace{-0.3cm}
\begin{align}\label{eqelbols}
& \mathcal{L}_{\mathrm{C}}[q_{\mathrm{VGC}}(\bd{x})]=\mathbb{E}_{\cN(\wt{\bd{z}}; \bd{\mu}, \bd{\Sigma})}[ \ell_{s}(\wt{\bd{z}})-\ln{q_{\mathrm{G}}(\wt{\bd{z}})}
],\cr & \ell_{s}(\wt{\bd{z}}, h) = \ln{p(\bd{y},h(\wt{\bd{z}}))}+\sum_{j=1}^{p} \ln h_{j}'(\wt{z_{j}}).\end{align}
The monotonic transformations $h_{j}(\cdot)=F_{j}^{-1}[\Phi(t(\cdot))]$ can be specified according to the desired parametric form of marginal posterior, if the inverse CDF $F_{j}^{-1}$ is tractable. For example, the multivariate log-normal posterior can be constructed via a Gaussian copula with log-normal (LN) margins,
\begin{align}
q_{\textrm{VGC-LN}}(\bd{x})=C_{\mathrm{G}}(\bd{u}|\bd{\Upsilon})\prod_{j=1}^{p}\mathrm{LN}(x_{j}; \mu_{j},\sigma_{j}^{2}).
\end{align}
This also corresponds to imposing exponential transform on Gaussian variables, $\bd{x}=h(\wt{\bd{z}})=\exp(\wt{\bd{z}})$, $\wt{\bd{z}}\sim \cN(\bd{\mu}, \bd{\Sigma})$. In this case, $\{\mu_{j},\sigma_{j}^{2}\}_{j=1:p}$ controls the location and dispersion of the marginal density; $h(\cdot)$ does not have any additional parameters to control the shape and $\ln h'(\wt{z_{j}})=\wt{z_{j}}$ takes a simple form. VGC-LN is further discussed in Section \ref{sec2dLN} and Section \ref{secNIGG}.
Given the copula function $C$, we only need to find $p$ one-dimensional margins. However, without knowing characteristics of the latent variables, specifying an appropriate parametric form for the margins is a difficult task in general. First, the marginals might exhibit multi-modality, high skewness or kurtosis, which are troublesome for particular parametric marginals to capture. Second, a tractable inverse CDF with optimizable arguments/parameters, as required here, is available only in a handful of cases. Instead of using some arbitrary parametric form, we construct bijective transform functions via kernel mixtures, which lead to highly flexible (ideally free-form) marginal proposals.
\section{Bernstein Polynomials Based Monotone Transformations}
The marginal densities in VGC can be recovered through Jacobian transformation,
\begin{align}
f_{j}(x_{j})&=q_{\mathrm{G}}(h_{j}^{-1}({x}_{j}); \mu_{j},\sigma_{j}^{2})\frac{d}{dx_{j}}h_{j}^{-1}(x_{j})\cr &=q_{\mathrm{G}}(h_{j}^{-1}({x}_{j}); \mu_{j},\sigma_{j}^{2})\frac{1}{h_{j}'(h_{j}^{-1}(x_{j}))},
\end{align}
where the ${[h_{j}'(h_{j}^{-1}(x_{j}))]^{-1}}$ term is interpreted as a marginal-correction term. To guarantee analytical tractability, we require $h(\cdot)$ to be (\rmnum{1}) bijective; (\rmnum{2}) monotonic non-decreasing; (\rmnum{3}) having the appropriate (unbounded or constrained) range; (\rmnum{4}) differentiable with respect to both its argument and parameters; and (\rmnum{5}) sufficiently flexible. We propose a class of continuous and smooth transformations $h(\cdot)$ constructed via kernel mixtures that automatically have these desirable properties.
\subsection{Continuous Margins Constructed via Bernstein Polynomials}
The Bernstein polynomials (BPs) have a uniform convergence property for continuous functions on the unit interval $[0,1]$ and have been used for nonparametric density estimation \citep{petrone1999bayesian}. It seems more natural to use kernel mixtures directly as the variational proposal. However, the difficulty lies in tackling the term $f(F^{-1}(\cdot))$ involving the inverse CDF of mixtures (not analytical) and the need for a further lower bound on the entropy of mixtures. In this paper, we overcome this issue by using a sandwich-type construction of the transform $h(\wt{z})$\footnote{The index $j$ on $\wt{z}$ is temporarily omitted for simplicity, and is added back when necessary.}, which maps from $(-\infty, \infty)$ to some target range by building upon BPs,
\begin{align}\label{eqBPs}
h(\wt{z})&=\Psi^{-1}[B(\Phi(\wt{z}); k, \bd{\omega})], \cr B(u; k, \bd{\omega})&=\sum_{r=1}^{k}\omega_{r,k}I_{u}(r, k-r+1),
\end{align}
where $I_{u}(r, k-r+1)$ is the regularized incomplete beta function. $\Phi(\cdot)$ is the standard normal CDF mapping from $(-\infty, \infty)$ to $[0,1]$, and $\Psi^{-1}(\cdot)$ is some predefined tractable inverse CDF with fixed parameters; for example, the inverse CDF of the exponential distribution helps map from $[0,1]$ to $(0,\infty)$ for positive variables. $B(u; k, \bd{\omega})$ relocates the probability mass on the unit interval $[0,1]$. The degree $k$ is an unknown smoothing parameter, and $\bd{\omega}$ denotes the unknown mixture weights on the probability simplex $\Delta_{k}=\{(\omega_{1}, \hdots, \omega_{k}): \omega_{i}\geq 0, \sum_{i}\omega_{i}=1\}$. The proposed sandwich-type transformation avoids the difficulty of specifying any particular type of marginal, while still leading to the tractable derivations presented in Section \ref{sec:5}.
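A minimal sketch of this sandwich construction, taking $\Psi$ to be the $\mathrm{Exp}(1)$ CDF as an example for a positive variable (Python/SciPy; \texttt{scipy.special.betainc} implements the regularized incomplete beta $I_{u}(a,b)$):
\begin{verbatim}
import numpy as np
from scipy.special import betainc
from scipy.stats import norm, expon

def h(z, w):
    """h(z) = Psi^{-1}( B(Phi(z); k, w) ), weights w on the simplex."""
    k = len(w)
    u = norm.cdf(z)
    B = sum(w[r - 1] * betainc(r, k - r + 1, u)
            for r in range(1, k + 1))
    return expon.ppf(B)          # Psi^{-1}: maps [0,1] to (0, inf)
\end{verbatim}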
\subsection{Variational Inverse Transform}\label{secVIT}
Consider a $\textrm{1-}$d variational approximation problem ($x$ is a scalar, and the true posterior $f(x)$ is known up to the normalizing constant). Fixing $q(\wt{z})=\cN(0,1)$, so that $u=\Phi(\wt{z})\sim\mathcal{U}[0,1]$, we can learn the monotonic transformation $\xi(\cdot)=Q^{-1}(\cdot)$ on the base uniform distribution $q_{0}(u)$ by solving a variational problem,
\begin{align}
\xi^{\star}(\cdot) = \operatornamewithlimits{arg\,min}_{\xi} \mathrm{KL}\{q(x)||f(x)\}, \quad x=\xi(u)=Q^{-1}(u), \nonumber
\end{align}
i.e., if we generate $u \sim \mathcal{U}[0,1]$, then $x=\xi^{\star}(u)\sim Q^{\star}$, where $Q^{\star}$ is closest to the true distribution $F$ in the sense of minimum KL divergence. This can be interpreted as the variational counterpart of inverse transform sampling \citep{Devroye86}, termed the variational inverse transform (VIT). Our BP-based construction $\xi(\cdot)=Q^{-1}(\cdot)=\Psi^{-1}(B(u; k, \bd{\omega}))$ is one appropriate parameterization scheme for the inverse probability transformation $Q^{-1}(\cdot)$. VIT-BP offers two clear advantages. First, as opposed to fixed-form variational Bayes, it does not require any specification of a parametric form for $q(x)$. Second, the difficult task of calculating the general inverse CDF $Q^{-1}(\cdot)$ is lessened to the much easier task of calculating the predefined tractable inverse CDF $\Psi^{-1}(\cdot)$. Some choices of $\Psi(\cdot)$ include the CDF of $\cN(0,1)$ for variables in $(-\infty, \infty)$ and the CDF of $\mathrm{Beta}(2,2)$ for truncated variables in $(0,1)$.
To be consistent with VIT, we shall set $\Phi(\cdot)$ in \eqref{eqBPs} to be $\Phi(\cdot|\mu, \sigma^{2})$, instead of $\Phi(\cdot|0,1)$, such that $u$ is always uniformly distributed. Ideally, BP itself suffices to represent an arbitrary continuous distribution function on the unit interval. However, it might require a higher order $k$. As is demonstrated in Section \ref{sectionbp}, this requirement can be alleviated by incorporating the auxiliary parameters $\{\mu, \sigma^{2}\}$ in VGC-BP, which potentially help in changing the location and dispersion of the probability mass.
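A minimal Monte Carlo sketch of the resulting 1-d objective (Python; \texttt{h} and its derivative \texttt{h\_prime} are assumed to be implemented as in the previous sketch and via \eqref{eq8}, and the $\bd{\omega}$-independent Gaussian entropy term is dropped):
\begin{verbatim}
import numpy as np

def elbo_1d(log_f, h, h_prime, w, n=500, seed=0):
    """Monte Carlo ELBO (up to a constant) for x = h(z; w),
    z ~ N(0,1):  E_z[ log f(h(z)) + log h'(z) ]."""
    z = np.random.default_rng(seed).standard_normal(n)
    return np.mean(log_f(h(z, w)) + np.log(h_prime(z, w)))
\end{verbatim}
Maximizing this estimate over $\bd{\omega}$ on the probability simplex (e.g., by projected stochastic gradient, as in Algorithm \ref{alg:example}) yields the VIT-BP margin.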
\section{Stochastic VGC}\label{sec:5}
\begin{algorithm*}[t]
\caption{(VGC-BP) Stochastic Variational Gaussian Copula Inference with Bernstein Polynomials}
\label{alg:example}
\begin{algorithmic}
\STATE {\bfseries Input:} observed data $\bd{y}$, user specified model $\ln p(\bd{y},\bd{x})$ and first-order derivatives $\nabla_{\bd{x}}\ln p(\bd{y}, \bd{x})$, Bernstein polynomials degree $k$, predefined $\Psi(\cdot)$ and $\Phi(\cdot)$ \STATE \textbf{Initialize} variational parameter $\Theta_{0}=\left(\bd{\mu}_{0}, \bd{C}_{0}, \{\bd{\omega}^{(j)}_{0}\}_{j=1:p}\right)$, $t=0$.
\REPEAT
\STATE $t=t+1$,
\STATE Sample $\wt{\bd{\epsilon}}\sim q_{\mathrm{G}}\left(\bd{\wt{\epsilon}}, \bd{0}, \bd{I}_{p}\right)$, and set $\wt{\bd{z}}=\bd{\mu}_{t-1}+\bd{C}_{t-1}\bd{\epsilon}$,
\STATE $\bd{\mu}_{t} = \bd{\mu}_{t-1}+\lambda_{t}[\nabla_{\wt{{\bd{z}}}}\ell_{s}(\wt{\bd{z}}, {h})-\nabla_{\wt{\bd{z}}}\ln{q_{\mathrm{G}}(\wt{\bd{z}})}]$, \hfill \% Update $\bd{\mu}_{t-1}$ with stepsize $\lambda_{t}$
\STATE $\bd{C}_{t} = \bd{C}_{t-1}+\eta_{t}[\nabla_{\wt{{\bd{z}}}}\ell_{s}(\wt{\bd{z}}, {h})-\nabla_{\wt{\bd{z}}}\ln{q_{\mathrm{G}}(\wt{\bd{z}})}]\bd{\epsilon}^{T}
$, \hfill \% Update $\bd{C}_{t-1}$ with stepsize $\eta_{t}$
\FOR{$j=1$ {\bfseries to} $p$}
\STATE $\bd{\omega}^{(j)}_{t} = \mathcal{P}(\bd{\omega}^{(j)}_{t-1} +\xi_{t}^{(j)}\nabla_{\bd{\omega}^{(j)}}\ell_{s}(\wt{\bd{z}}, h))$, \hfill \% Update $\bd{\omega}^{(j)}_{t-1}$ with stepsize $\xi_{t}^{(j)}$ and gradient projection $\mathcal{P}$
\ENDFOR
\UNTIL{convergence criterion is satisfied}
\STATE {\bfseries Output:} marginal parameters $\left(\{\bd{\omega}^{(j)}\}_{j=1:p},\bd{\mu},\bd{\sigma^{2}}\right)$ and copula parameters $\bd{\Upsilon}$
\end{algorithmic}
\end{algorithm*}
The derivations of deterministic VGC updates are highly model-dependent. First, due to the cross terms often involved in the log likelihood/prior, the corresponding Gaussian expectations and their derivatives may not be analytically tractable. Second, owing to the non-convex nature of many problems, only locally optimal solutions can be guaranteed. In contrast, stochastic implementation of VGC only requires the evaluation of the log-likelihood and log-prior along with their derivatives, eliminating most model-specific derivations, and it provides a chance of escaping local optima by introducing randomness in gradients.
\subsection{Coordinate transformations}\label{secmuC}
Applying the coordinate transformations\footnote{If necessary, the Gaussian copula can be replaced with other appropriate parametric forms. The coordinate transformation supports many other distributions as well, for example, those described in Appendix C.2. of Rezende et al. (2014).
} of stochastic updates, $\wt{\bd{z}}=\bd{\mu}+\bd{C}\bd{\epsilon}$, $\bd{\epsilon}\sim \cN(\bd{0}, \bd{I})$, introduced in \citep{rezende2014stochastic, titsias2014doubly}, the gradient of the ELBO w.r.t. variational parameter $(\bd{\mu}, \bd{C})$ can be written as
\begin{align}\label{ourscheme}
\hspace{-0.65cm}\nabla_{\bd{\mu}}\mathcal{L}_{\mathrm{C}}&=\mathbb{E}_{q_{G} (\wt{\bd{z}})}\left[\nabla_{\wt{\bd{z}}}\ell_{s}(\wt{\bd{z}}, h)-\nabla_{\wt{\bd{z}}}\ln{q_{\mathrm{G}}(\wt{\bd{z}})}\right],\cr \hspace{-0.6cm}
\nabla_{\bd{C}}\mathcal{L}_{\mathrm{C}}&=\mathbb{E}_{q_{G}(\wt{\bd{z}})}\left[\nabla_{\wt{\bd{z}}}(\ell_{s}(\wt{\bd{z}}, h)-\nabla_{\wt{\bd{z}}}\ln{q_{\mathrm{G}}(\wt{\bd{z}})})\bd{\epsilon}^{T}\right],
\end{align}
where the stochastic gradient terms
\begin{align}
\nabla_{\wt{{z}_{j}}}\ell_{s}(\wt{\bd{z}})&=\nabla_{\wt{{z}_{j}}} \ln p(\bd{y},h(\wt{\bd{z}}))+\nabla_{\wt{{z}_{j}}}\ln h_{j}'(\wt{z}_{j})\cr &={\color{black}\frac{\partial \ln p(\bd{y},\bd{x})}{\partial x_{j}}}h_{j}'({\wt{{z}_{j}}})+\nabla_{\wt{{z}_{j}}}\ln h_{j}'(\wt{z}_{j}). \nonumber
\end{align}
According to the chain rule, the first derivative of $h(\cdot)$ w.r.t $\wt{z}$ is,
\begin{align}\label{eq8}
h'(\wt{z})&=\frac{d \Psi^{-1}[B(\Phi(\wt{z}); k, \bd{\omega})]}{d B(\Phi(\wt{z}); k, \bd{\omega})}\frac{d B(\Phi(\wt{z}); k, \bd{\omega})}{d \Phi(\wt{z})}\frac{d \Phi(\wt{z})}{d \wt{z}}\nonumber \\& = \frac{b(\Phi(\wt{z}); k, \bd{\omega})\phi(\wt{z})}{\psi(h(\wt{z}))},
\end{align}
where $b(u; k, \bd{\omega})=\sum_{r=1}^{k}\omega_{r,k}\beta(u; r, k-r+1)$, $\beta(x; a, b)$ is the beta density $\beta(x; a, b)={\Gamma(a+b)}/{(\Gamma(a)\Gamma(b))}x^{a-1}(1-x)^{b-1}$. Therefore, $\ln h'(\wt{z})= \ln b(\Phi(\wt{z}); k, \bd{\omega})+\ln \phi(\wt{z}) -\ln\psi(h(\wt{z}))$ and $\nabla_{\wt{{z}_{j}}}\ln h_{j}'(\wt{z}_{j})={h_{j}''(\wt{z}_{j})}/{h_{j}'(\wt{z}_{j})} $ all take analytical expressions, where
\begin{align}
h_{j}''(\wt{z}_{j})&=[\rho_{1}'(\wt{z}_{j})\rho_{2}(\wt{z}_{j})\rho_{3}(\wt{z}_{j})+\rho_{1}(\wt{z}_{j})\rho_{2}'(\wt{z}_{j})\rho_{3}(\wt{z}_{j})\cr &-\rho_{1}(\wt{z}_{j})\rho_{2}(\wt{z}_{j})\rho_{3}'(\wt{z}_{j})]/{[\rho_{3}(\wt{z}_{j})]^{2}},\nonumber
\end{align}
where $\rho_{1}(\wt{z}_{j})=b(u_{j}; k, \bd{\omega}^{(j)})$, $\rho_{2}(\wt{z}_{j})=\phi(\wt{z_{j}})$, $\rho_{3}(\wt{z}_{j})=\psi(h_{j}(\wt{z}_{j}))$, $\rho_{1}'(\wt{z}_{j})=\phi(\wt{z_{j}})\sum_{r=1}^{k}\omega_{r,k}^{(j)}\beta'(u_{j}; r, k-r+1)$, $\rho_{2}'(\wt{z}_{j})=-\wt{z_{j}}\phi(\wt{z_{j}})$, $\rho_{3}'(\wt{z}_{j})=\psi'(h_{j}(\wt{z}_{j}))h_{j}'(\wt{z}_{j})$, $u_{j}=\Phi(\wt{z_{j}})$, $\phi(\cdot)$ is the PDF of $\cN(0,1)$, $\psi(\cdot)$ and $\psi'(\cdot)$ are the predefined PDF and its derivative respectively.
Defining $\beta(x; a, 0)=\beta(x; 0, b)=0$, the derivative is written as a combination of two polynomials of lower degree
$\beta'(x; a, b)=(a+b-1)[\beta(x; a-1, b)-\beta(x; a, b-1)]$.
In stochastic optimization, the gradients expressed in terms of expectations are approximated using Monte Carlo integration with finite samples. The gradients
contain expectations on additive terms. Note that \cite{rezende2014stochastic} and \cite{titsias2014doubly} ignore the stochasticity in the entropy term $\mathbb{E}_{q_{G} (\wt{\bd{z}})}[-\ln{q_{\mathrm{G}}(\wt{\bd{z}})]}$ and assume $\nabla_{\bd{\mu}}\mathbb{E}_{q_{G} (\wt{\bd{z}})}[-\ln{q_{\mathrm{G}}(\wt{\bd{z}})]}=0$ and $\nabla_{\bd{C}}\mathbb{E}_{q_{G} (\wt{\bd{z}})}[-\ln{q_{\mathrm{G}}(\wt{\bd{z}})]}={\rm diag}[1/C_{jj}]_{j=1:p}$. This creates an inconsistency as we only take finite samples in approximating $\mathbb{E}_{q_{G} (\wt{\bd{z}})}[\nabla_{\wt{\bd{z}}}\ell_{s}(\wt{\bd{z}})]$, and perhaps surprisingly, this also results in an increase of the gradient variance and the sensitivity to the learning rates. Our method is inherently more stable, as the difference between the gradients, $\nabla_{\wt{\bd{z}}}[\ell_{s}(h(\wt{\bd{z}}))-\ln q_{\mathrm{G}}(\wt{\bd{z}})]$, $\forall \wt{\bd{z}}$, tends to zero as the convergence point is approached. In contrast, the gradients in the previous methods diffuse with a constant variance even around the global maximum. This phenomenon is illustrated in Section \ref{sec2dLN}.
The alternative log-derivative approach is also applicable to VGC inference and to other types of copulas; see \cite{PaisleyBlei, mnih2014neural, rezende2014stochastic} for references. We leave this exploration open for future investigation.
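To make the update concrete, a single stochastic step of $(\bd{\mu}, \bd{C})$ under \eqref{ourscheme} can be sketched as follows (Python; \texttt{grad\_ls} is assumed to return $\nabla_{\wt{\bd{z}}}\ell_{s}(\wt{\bd{z}}, h)$, and the score of $q_{\mathrm{G}}=\cN(\bd{\mu},\bd{C}\bd{C}^{T})$ is computed in closed form):
\begin{verbatim}
import numpy as np

def vgc_step(mu, C, grad_ls, lam, eta, rng):
    """One draw of eps and one update of (mu, C); inner loop of
    Algorithm 1 (marginal-weight updates omitted)."""
    eps = rng.standard_normal(len(mu))
    z = mu + C @ eps
    score = -np.linalg.solve(C @ C.T, z - mu)   # grad_z log q_G(z)
    g = grad_ls(z) - score                      # grad of ell_s - log q_G
    return mu + lam * g, np.tril(C + eta * np.outer(g, eps))
\end{verbatim}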
\subsection{Update the BP Weights}
Under a given computational budget, we prefer a higher degree $k$, as there is no over-fitting issue in this variational density approximation task.
Given $k$, the basis functions are completely known, depending only on index $r$. The only parameters left to be optimized in the Bernstein polynomials are the mixture weights. Therefore, this construction is relatively simpler than Gaussian mixture proposals \citep{GershmanBlei, nguyen2014automated}. Assuming that the
interchange of integration and differentiation is permissible, we have $\nabla_{\bd{\omega}^{(j)}}\mathcal{L}_{\mathrm{C}}=\mathbb{E}_{q_{G}(\wt{\bd{z}})}\left[\nabla_{\bd{\omega}^{(j)}}\ell_{s}(\wt{\bd{z}}, h, \bd{y})\right]$, with the stochastic gradients
\begin{align}
& \nabla_{\bd{\omega}^{(j)}}\ell_{s}(\wt{\bd{z}}, h, \bd{y})= \nabla_{\bd{\omega}^{(j)}}\ln p(\bd{y},h(\wt{\bd{z}}))+\nabla_{\bd{\omega}^{(j)}} \ln h_{j}'(\wt{z_{j}})
\cr &=\frac{\partial \ln p(\bd{y},\bd{x})}{\partial x_{j}}\bigg[\frac{\partial h_{j}(\wt{z}_{j})}{\partial \omega_{r,k}^{(j)}}\bigg]_{r=1:k} + \bigg[\frac{\partial \ln h_{j}'(\wt{z_{j}})}{\partial\omega_{r,k}^{(j)}}\bigg]_{r=1:k}, \nonumber
\end{align}
where
\begin{align}
\frac{\partial h_{j}(\wt{z}_{j})}{\partial \omega_{r,k}^{(j)}}&=\frac{\partial \Psi^{-1}[B(u_{j}; k, \bd{\omega}^{(j)})]}{\partial \omega_{r,k}^{(j)}}
=\frac{I_{u_{j}}(r, k-r+1)}{\psi( h_{j}(\wt{z}_{j}))},\nonumber
\end{align}
\begin{align}
{\partial \ln h_{j}'(\wt{z_{j}})}/{\partial\omega_{r,k}^{(j)}}&={\beta(u_{j}; r, k-r+1)}/{b(u_{j}; k, \bd{\omega}^{(j)})}\cr &-\frac{\psi'(h_{j}(\wt{z}_{j}))}{\{\psi(h_{j}(\wt{z}_{j}))\}^{2}}I_{u_{j}}(r, k-r+1). \nonumber
\end{align}
The gradients w.r.t $\bd{\omega}^{(j)}$ turn into expectations straightforwardly, enabling stochastic optimization of the ELBO. To satisfy the constraints of $\bd{\omega}^{(j)}$ on the probability simplex, we apply the gradient projection operation $\mathcal{P}$ introduced in \cite{duchi2008efficient} with complexity $\mathcal{O}(k{\rm log}{k})$. The above derivations related to BPs together with those in Section \ref{secmuC} are all analytic and model-independent. The only two model-specific terms are ${\color{black}\ln p(\bd{y},\bd{x})}$ and ${\color{black}{\partial \ln p(\bd{y},\bd{x})}/{\partial \bd{x}}}$. The stochastic optimization algorithm is summarized in Algorithm \ref{alg:example}, with little computational overhead added relative to stochastic VG. The stability and efficiency of the stochastic optimization algorithm can be further improved by embedding adaptive subroutines \citep{duchi2011adaptive} and considering second-order optimization methods \citep{fankai2015}.
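For completeness, a standard $\mathcal{O}(k\log k)$ implementation of the simplex projection of \cite{duchi2008efficient} is sketched below (Python):
\begin{verbatim}
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {w : w_i >= 0, sum_i w_i = 1}."""
    u = np.sort(v)[::-1]                    # sort descending
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / j > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
\end{verbatim}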
\section{Experiments}
We use Gaussian copulas with fixed/free-form margins as automated \emph{inference engines} for posterior approximation in generic hierarchical Bayesian models. We evaluate the peculiarities reproduced in the univariate margins and the posterior dependence captured broadly across latent variables. This is done by comparing VGC methods to the ground truth and other baseline methods such as MCMC, MFVB, and VG (see Supplementary Material for detailed derivations). Matlab code for VGC is available from the GitHub repository: \url{https://github.com/shaobohan/VariationalGaussianCopula}
\vspace{-0.08cm}
\subsection{Flexible Margins}\label{sectionbp}
We first assess the marginal approximation accuracy of our BP-based constructions in Section \ref{secVIT}, i.e., $h(\cdot)=\Psi^{-1}(B(\Phi(\wt{z}); k, \bd{\omega}))$, via $1\mhyphen$d variational optimization, where $\wt{z}\sim\cN(0,1)$ in VIT-BP, and $\wt{z}\sim\cN(\mu,\sigma^{2})$ in VGC-BP. For fixed BP order $k$, the shape of $q(x)$ is adjusted solely by updating $\bd{\omega}$, according to the variational rule. In VGC-BP, the additional marginal parameters $\{\mu, \sigma^{2}\}$ also contribute to changing the location and dispersion of $q(x)$. Examining Figure \ref{fig1d}, VGC-BP produces more accurate densities than VIT-BP under the same order $k$. Hereafter, the predefined $\Psi(\cdot)$ for real variables, positive real variables, and truncated [0,1] variables are chosen to be the CDF of $\cN(0,1)$, $\mathrm{Exp}(1)$ and $\mathrm{Beta}(2,2)$, respectively.
\begin{figure}[bpht]
\vskip -0.05in
\begin{center}
\tiny{
$\begin{array}{cc}
\hspace{-0.5cm}\includegraphics[height=0.16\textwidth, width=0.2\textwidth]{fig_sn} &
\hspace{-0.4cm}\includegraphics[height=0.16\textwidth, width=0.2\textwidth]{fig_student}\\
(a) \textrm{ Skew-Normal} (\alpha = 5)& (b)\textrm{ Student's t } (\nu = 1)\\
\hspace{-0.5cm}\includegraphics[height=0.16\textwidth, width=0.2\textwidth]{fig_gamma} &
\hspace{-0.4cm}\includegraphics[height=0.16\textwidth, width=0.2\textwidth]{fig_beta}\\
(c) \textrm{ Gamma}(5,2)& (d) \textrm{ Beta}(0.5, 0.5)
\end{array}$}
\end{center}
\vskip -0.15in
\caption{Marginal Adaptation: VIT-BP vs. VGC-BP}
\label{fig1d}
\end{figure}
\subsection{Bivariate Log-Normal}\label{sec2dLN}
The bivariate log-normal PDF $p(x_{1}, x_{2})$ \citep{aitchison1957lognormal} is given by
\begin{align}
p(x_{1}, x_{2})&={\exp{(-{\zeta}/{2})}}/[{2\pi x_{1} x_{2} \sigma_{{1}}\sigma_{{2}}\sqrt{1-\rho^2}}],\cr
\zeta &= \frac{1}{1-\rho^2}\bigg[\alpha_{1}^{2}(x_{1})-2\rho\alpha_{1}(x_{1})\alpha_{2}(x_{2})+\alpha_{2}^{2}(x_{2})\bigg],\nonumber
\end{align}
where $\alpha_{i}(x_{i})={(\ln{x_{i}}-\mu_{i})}/{\sigma_{i}}$, $ i = 1, 2$, $-1<\rho<1$.
We construct a bivariate Gaussian copula with (\rmnum{1}) Log-normal margins {(VGC-LN)} and (\rmnum{2}) BP-based margins {(VGC-BP)}. We set $\mu_{1}=\mu_{2}=0.1$ and $\sigma_{1}=\sigma_{2}=0.5$, $\rho = 0.4$ or $-0.4$ (first and second row in Figure \ref{figtwodLN}). Both VGC-LN and VGC-BP methods presume
the correct form of the underlying copula (bivariate Gaussian) and learn the copula parameter $\rho$. VGC-LN further assumes exactly the true form of the univariate margins (log-normal), while VGC-BP makes no particular assumption on the parametric form of the margins. Figure \ref{figtwodLN} shows that VGC-BP finds joint posteriors as accurate as those of VGC-LN, even though the former assumes less knowledge about the true margins.
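Sampling from the VGC-LN proposal is immediate, since a Gaussian copula with log-normal margins is exactly the exponential map of a multivariate Gaussian; a minimal sketch (Python):
\begin{verbatim}
import numpy as np

def sample_vgc_ln(mu, Sigma, n, seed=0):
    """Draw n samples x = exp(z), z ~ N(mu, Sigma)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    z = mu + rng.standard_normal((n, len(mu))) @ L.T
    return np.exp(z)
\end{verbatim}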
\begin{figure}[bpht]
\vskip -0.07in
\begin{center}
\tiny{
$\begin{array}{ccccc}
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.1\textwidth]{figb1} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.1\textwidth]{figb2} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.1\textwidth]{figb3} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.1\textwidth]{figb4} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.1\textwidth]{figb5}\\[-0.2em]
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.1\textwidth]{figa1} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.1\textwidth]{figa2} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.1\textwidth]{figa3} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.1\textwidth]{figa4} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.1\textwidth]{figa5}\\
\end{array}$}
\end{center}
\vskip -0.10in
\caption{Approximate Posteriors via VGC methods}
\label{figtwodLN}
\end{figure}
In updating $(\bd{\mu}, \bd{C})$, {VGC-LN1} and {VGC-BP1} follow the scheme in \citep{titsias2014doubly} and neglect the stochasticity in the entropy term; while {VGC-LN2} and {VGC-BP2} are based on our scheme in \eqref{ourscheme}. Under the same learning rates, we define the relative mean square error (RMSE) of the copula parameter as ${R}(\rho)=\frac{(\hat{\rho}-\rho)^{2}}{\rho^2}$; both VGC-LN and VGC-BP results in Figure \ref{figcompVG} consistently show that our method leads to less noisy gradients and converges faster.
\begin{figure}[bpht]
\vskip -0.07in
\begin{center}
\footnotesize{
$\begin{array}{cccc}
\hspace{-0.45cm}\includegraphics[height=0.12\textwidth, width=0.131\textwidth]{figb6} &
\hspace{-0.45cm}\includegraphics[height=0.12\textwidth, width=0.131\textwidth]{figb7} &
\hspace{-0.45cm}\includegraphics[height=0.12\textwidth, width=0.131\textwidth]{figa6} &
\hspace{-0.45cm}\includegraphics[height=0.12\textwidth, width=0.131\textwidth]{figa7}\\
\end{array}$}
\end{center}
\vskip -0.12in
\caption{$\mathrm{RMSE}(\rho)$ of VGC-LN and VGC-BP vs. Iterations; Left two: $\rho=0.4$; Right two: $\rho=-0.4$}
\label{figcompVG}
\end{figure}
\subsection{Horseshoe Shrinkage}\label{secNIGG}
The horseshoe distribution \citep{carvalho2010horseshoe} can be represented in equivalent conjugate hierarchies \citep{neville2014mean} $ y|\tau\sim \cN(0,\tau)$, $\tau|\lambda\sim \mathrm{InvGa}(0.5, \lambda)$, $ \lambda \sim \mathrm{InvGa}(0.5, 1)$. Here we assume $y=0.01$ is the (single) observation. Denoting $\bd{x}=(x_{1}, x_{2})=(\tau, \gamma=1/\lambda)$, we implemented the VGC-BP algorithm ($k=10$) and VGC-LN algorithms (deterministic implementations\footnote{For gradient updates, we use a quasi-Newton strategy implemented in \cite{schmidt2012minfunc}.} are available in this special case). We compared them with two baselines: (\rmnum{1}) a Gibbs sampler ($1\times 10^6$ samples), and (\rmnum{2}) MFVB. From Figure \ref{fig1}, it is noted that the VGC methods with a full correlation matrix (VGC-LN-full, VGC-BP-full) are able to preserve the posterior dependence and alleviate the under-estimation of the posterior variance. VGC-LN-full leads to a higher ELBO than MFVB, and the gain is lost under the factorized assumption $\bd{\Upsilon}=\bd{I}$ (VGC-LN-diag), in which case the Gaussian copula reduces to the independence copula. The restriction of parametric margins is relaxed in VGC-BP. With refinement of the mixture weights, VGC-BP leads to a higher ELBO than VGC-LN. Since the Gaussian copula admits neither lower nor upper tail dependence, the posterior dependence it is able to preserve can be restrictive. It is a future research topic to explore other copula families that allow more complex posterior dependencies in variational copula inference.
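The only model-specific ingredients required by Algorithm \ref{alg:example} are $\ln p(\bd{y},\bd{x})$ and its gradient. For the horseshoe hierarchy above, with $\bd{x}=(\tau,\gamma)$ and $\gamma=1/\lambda$ (so that $\gamma\sim\mathrm{Ga}(0.5,1)$ a priori), these take the closed forms sketched below, up to an additive constant (Python; the expressions follow from the stated hierarchy after the change of variables and should be checked against one's own derivation):
\begin{verbatim}
import numpy as np

def log_joint(x, y=0.01):
    """log p(y, tau, gamma) up to a constant, x = (tau, gamma)."""
    tau, gam = x
    return (-2.0 * np.log(tau) - np.log(gam)
            - y**2 / (2.0 * tau) - 1.0 / (gam * tau) - gam)

def grad_log_joint(x, y=0.01):
    tau, gam = x
    d_tau = -2.0 / tau + y**2 / (2.0 * tau**2) + 1.0 / (gam * tau**2)
    d_gam = -1.0 / gam + 1.0 / (gam**2 * tau) - 1.0
    return np.array([d_tau, d_gam])
\end{verbatim}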
\begin{figure}
\begin{minipage}[b]{0.65\linewidth}
\centering
{\small\begin{tabular}{c}
\hspace{-0.35cm}\includegraphics[width=5.2 cm,height=5.2cm]{fig_nigg}
\end{tabular}}
\end{minipage}%
\begin{minipage}[b]{0.3\linewidth}
\centering%
\scalebox{.7}{\begin{tabular}{|l|l|}
\hline
Methods & ELBO \\ \hline
MFVB & -1.0778 \\ \hline
VGC-LN-full & -0.0634 \\ \hline
VGC-LN-diag & -1.2399 \\ \hline
VGC-BP & 0.3277\\ \hline
\end{tabular}} \par\vspace{-4pt}
\end{minipage}
\vskip -0.12in
\caption{(Left Panel) Approximated Posteriors (Shown in Log Space for Visualization Purpose); (Right Panel) comparison of ELBO of different variational methods}
\vskip -0.15in
\label{fig1}
\end{figure}
\subsection{Poisson Log-Linear Regression}\label{secPoLog}
We consider the tropical rain forest dataset \citep{moller2007modern}, a point pattern giving the locations of 3605 trees accompanied by covariate data giving the elevation. Resampling the data into a grid of $50\times 50$ m (with $u_{i}$ the covariate at the $i\mhyphen$th grid cell), the number of trees $y_{i}$ per unit area is modeled as $y_{i}\sim \mathrm{Poisson}(\mu_{i})$, $i=1,\hdots, n$, ${\rm log}(\mu_{i})=\beta_{0} + \beta_{1}u_{i}+\beta_{2}u_{i}^2$, $\beta_{0}\sim N(0, \tau)$, $\beta_{1}\sim N(0, \tau)$, $\beta_{2}\sim N(0, \tau)$, $\tau\sim \mathrm{Ga}(1,1)$. We denote $\bd{x}=(\beta_{0}, \beta_{1}, \beta_{2}, \tau)$, and choose $\Psi(\cdot)$ to be the CDF of $\cN(0,1)$ or $\mathrm{Exp}(1)$ accordingly. The implementation of VGC-BP leads to highly accurate marginal and pairwise posteriors (see Figure \ref{figpolog}), as compared to the MCMC sampler ($1\times 10^{6}$ runs) implemented in JAGS\footnote{\url{http://mcmc-jags.sourceforge.net/}}, used as the reference solution.
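For this model the two model-specific terms of Algorithm \ref{alg:example} again have simple closed forms; a sketch (Python), assuming $\tau$ enters $N(0,\tau)$ as the prior variance:
\begin{verbatim}
import numpy as np

def log_joint(x, y, u):
    """log p(y, x) up to constants, x = (b0, b1, b2, tau)."""
    b, tau = x[:3], x[3]
    U = np.column_stack([np.ones_like(u), u, u**2])
    eta = U @ b                                   # log mu_i
    return (np.sum(y * eta - np.exp(eta))
            - 1.5 * np.log(tau) - b @ b / (2.0 * tau) - tau)

def grad_log_joint(x, y, u):
    b, tau = x[:3], x[3]
    U = np.column_stack([np.ones_like(u), u, u**2])
    eta = U @ b
    g_b = U.T @ (y - np.exp(eta)) - b / tau
    g_tau = -1.5 / tau + b @ b / (2.0 * tau**2) - 1.0
    return np.append(g_b, g_tau)
\end{verbatim}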
\begin{figure}[bpht]
\vskip -0.02in
\begin{center}
\tiny{
$\begin{array}{cccc}
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige1a} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige2} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige3} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige4}
\\[-0.3em]
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige5} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige6a} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige7} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige8}
\\[-0.3em]
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige9} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige10} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige11a} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige12}
\\[-0.3em]
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige13} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige14} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige15} &
\hspace{-0.35cm}\includegraphics[height=0.09\textwidth, width=0.12\textwidth]{fige16a}
\\[-0.3em]
\end{array}$}
\end{center}
\vskip -0.12in
\caption{Univariate Margins and Pairwise Posteriors}
\vskip -0.15in
\label{figpolog}
\end{figure}
Interestingly, for non-conjugate models with unknown exact joint posteriors, VGC still provides Sklar's representation of the approximated posterior, including an analytical Gaussian copula and a number of univariate margins (summarized as univariate histograms if not in closed form). For further uses such as calculating sample quantiles, samples simulated from $q_{\mathrm{VGC}}(\bd{x})$ are independent and faster to generate than MCMC draws. The obtained posterior approximation could also improve the efficiency of Metropolis-Hastings (MH) samplers by replacing the MCMC prerun as a reasonable proposal \citep{schmidl2013vine}.
The proposed method is an automated approach of approximating full posteriors. It is readily applicable to a broad scope of latent Gaussian models with non-conjugate likelihoods. Compared with the integrated nested Laplace approximation (INLA) \citep{Rue09} and integrated non-factorized variational inference \citep{shaobo2013}, our approach does not need to discretize the space for non-Gaussian variables and thus does not suffer from the limits on the number of hyperparameters.
\vspace{-0.15cm}
\section{Discussions}
This article proposes a unified variational copula inference framework. In VGC, we have focused on the Gaussian copula family for simplicity; however, other more flexible forms such as the Gaussian mixture copula can be considered as well. To avoid the difficulty of specifying marginals for hidden variables, a nonparametric procedure based on Bernstein polynomials indirectly induces highly flexible univariate margins. \cite{Dtran2015} and \cite{Kucukelbir2015} could potentially benefit from our flexible margins, while our approach is likely to benefit from the vine copula decomposition of \citep{Dtran2015}, which allows richer and more complex dependencies, and from the automatic differentiation techniques applied in \cite{Kucukelbir2015}.
\subsubsection*{Acknowledgments}
This research was supported in part by ARO, DARPA,
DOE, NGA, ONR and NSF.
\clearpage
\section{Introduction}
Channelizing the electronic motion through confinement is the key to the future success of fabricating nano-scale electronic devices \cite{Cui2001}. One of the most appropriate ways of achieving it is to tailor the potential profile of electrons by constructing hetero-interfaces \cite{Mani2002}, superlattices \cite{Nadvornik} and thin films \cite{Fujimori2011}, where the confinement length is comparable to the de Broglie wavelength of the associated electron. Among the heterostructures and films, the oxide families are intriguing and exhibit novel quantum states due to collective phenomena by virtue of the interplay between spin, charge, and orbital degrees of freedom \cite{Hwang2012,Dagotto2005,Chakhalian2014}.
The widely investigated insulating oxide interfaces LaAlO$_3$/SrTiO$_3$ \cite{Ohtomo2004,Nakagawa2006,Karolina2009} and LaMnO$_3$/SrMnO$_3$ \cite{Bhattacharya2008,Nanda2008} produce a 2DEG to quench the polar catastrophe that arises due to the alternate stacking of positively and negatively charged layers along the La side and charge-neutral layers along the Sr side. A 2DEG can also be formed by quantizing a three-dimensional metallic state through a confinement potential. While examples are many in semiconducting heterostructures \cite{Zhong2015,Kjaergaard2016}, it is a rare occurrence in the family of correlated oxides. One example is an ultrathin SrVO$_3$ film (8 monolayers) deposited on a Nb:SrTiO$_3$ substrate \cite{Fujimori2011}. Here, the three-dimensional metallic V-$t_{2g}$ states are confined by a potential well formed by the Schottky barrier created at the Nb:SrTiO$_3$/SrVO$_3$ interface and the natural barrier at the SrVO$_3$/vacuum interface. Such orbital-selective quantization, which exploits the $d$-orbital anisotropy, forms the basic premise for the evolving area of orbitronics \cite{Tokura2003}, where electric currents are controlled through $d$ orbital states \cite{Tokura2000}. The natural extension of orbitronics is to spin-polarize the pre-existing conducting electrons by exploiting the spin anisotropy, which is one of the primary intents of this work.
To begin with, it is essential to have a source of spin-polarized conducting electrons, and in this context the double perovskite Sr$_2$FeMoO$_6$ (SFMO) has already been well established as a half-metallic system with a high Curie temperature (T$_C$ $\sim$ 450 K) \cite{Kobayashi1998} and a spin-polarization as large as 70\% \cite{Raghava2009}. The dispersive Mo-4$d$ ($xy$, $xz$, and $yz$) states are partially occupied in the spin-down channel, while a large band gap exists in the spin-up channel, creating a half-metallic system in which the electrons are mobile in all three dimensions. A quantum well structure can be designed to quantize the SFMO mobile electrons by tailoring a bicolor superlattice with the other constituent being an insulator. The rare ferromagnetic insulator La$_2$CoMnO$_6$ (LCMO) is an excellent choice as its T$_C$ is close to 230 K \cite{Goodenough2003, Bull2003} and it offers a minor in-plane lattice mismatch ($\sim$ 1.5 \%) when the superlattice is grown along the [001] direction.
Recent advances in state-of-the-art techniques such as molecular beam epitaxy and atomic layer deposition have paved the way to create such layered oxide superlattices. Stable nanometer-thick SFMO and LCMO films, grown using PLD and RF magnetron sputtering techniques, have already been reported in the literature \cite{Du2013,Hauser2011,Galceran2016,Iliev2007}. Experimentally, successful attempts have also been made to grow free-standing oxide films (e.g. VO$_2$ \cite{Li2016}, Fe$_3$O$_4$ \cite{Wu2016}, Pb(Zr,Ti)O$_3$ \cite{Jiang2017}) and interfaces (BiFeO$_3$/CoFe$_2$O$_4$ \cite{Amrillah2017}) using van der Waals heteroepitaxy techniques.
In this work, we examine (SFMO)$_2$/(LCMO)$_2$ superlattices in two different configurations, as shown in Fig. \ref{fig:Structure_SL_H_L_mechanism}, using DFT+$U$ calculations. In the first case, assuming the higher-symmetry tetragonal SFMO as the substrate, the in-plane symmetry of the superlattice is taken to be the same as that of SFMO, and we denote this structure SL-H. The second configuration (SL-L) carries the in-plane symmetry of the lower-symmetry monoclinic LCMO. In both configurations, the atomic positions are relaxed to obtain the ground state.
The electronic and magnetic ground states of both the SL-H and SL-L superlattices reveal no new magnetic ordering. However, a periodic quantum well with a depth close to 1 eV develops along the z-axis (growth direction) due to the difference in the chemical potentials of the constituents. As a consequence, there is an orbital-selective quantization of the fractionally occupied itinerant $t_{2g}$ states. The strong correlation further localizes these quantized states. Throughout this process the dispersive planar $xy$ state remains unchanged, which leads to the evolution of a two-dimensional spin-polarized electron gas (2D-SPEG) from a 3D-SPEG. The mechanism, as understood from the electronic structure calculations presented in this paper, is schematically illustrated in Fig. \ref{fig:Structure_SL_H_L_mechanism}. It completely differs from the mechanism of charge reconstruction involved in single-perovskite polar interfaces and hence opens up new avenues for synthesizing next-generation heterostructures out of non-polar correlated oxides in order to create 2DEGs for practical purposes.
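To make the confinement step of this mechanism concrete, the following minimal sketch (Python/NumPy) diagonalizes a one-dimensional finite-difference Hamiltonian for the out-of-plane motion in a periodic square well. The period, well depth, and free-electron mass used here are illustrative stand-ins rather than outputs of the DFT calculation; the in-plane motion simply adds a free dispersion on top of each discrete level, which is why the $xy$ channel stays conducting while the $z$ motion is quantized.
\begin{verbatim}
import numpy as np

hb2_2m = 3.81          # hbar^2/2m in eV*Angstrom^2 (free electron)
c, n = 15.8, 256       # assumed superlattice period (Angstrom), grid size
z = np.linspace(0.0, c, n, endpoint=False)
dz = c / n
V = np.where(z < c / 2, -0.96, 0.0)  # square well of the estimated depth
t = hb2_2m / dz**2
H = np.diag(2 * t + V)               # kinetic + potential, periodic BCs
for i in range(n):
    H[i, (i + 1) % n] = H[i, (i - 1) % n] = -t
E = np.linalg.eigvalsh(H)
print("lowest out-of-plane levels (eV):", np.round(E[:4], 3))
# each level E_m seeds a 2D subband E_m + hb2_2m*(kx**2 + ky**2)
\end{verbatim}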
\onecolumngrid
\begin{center}
\begin{figure}[H]
\includegraphics[angle=-0,origin=c,height=8.0cm,width=18.0cm]{Fig1.png}
\caption{Crystal structure of the (SFMO)$_2$/(LCMO)$_2$ superlattice and the mechanism of 2DEG formation. (a) The (SFMO)$_2$/(LCMO)$_2$ superlattice assuming the high-symmetry SFMO as the substrate (SL-H). (b) The same superlattice, but with the lower-symmetry LCMO as the substrate (SL-L). The lowering of the symmetry of the SL-L superlattice is due to the tilting and rotation of the octahedral complexes. (c) Schematic illustration of the quantum confinement and the formation of the spin-polarized two-dimensional electron gas in the superlattices. Here, the up-arrow and down-arrow represent the spin-up and spin-down DOS, respectively, and $t_{2g}$ denotes the triply degenerate ($xy$, $yz$, $xz$) states. In SFMO, it carries the character of Fe and Mo $d$ states, signifying stronger Fe-Mo hybridization. In LCMO, the strong correlation effect splits the Co-$t_{2g}$ states into two subbands, namely, the lower Hubbard band (LHB) and the upper Hubbard band (UHB). The potential well of the superlattice breaks the three-fold degeneracy into two-fold degenerate $xz$ and $yz$ states and a non-degenerate $xy$ state. While the former are quantized, the latter remains dispersive as in the bulk to form the 2DEG.}
\label{fig:Structure_SL_H_L_mechanism}
\end{figure}
\end{center}
\twocolumngrid
\section{Computational details}
Density functional theory (DFT) calculations are carried out using both the pseudopotential method with a plane-wave basis set, as implemented in the Quantum ESPRESSO simulation package \cite{Paolo2009}, and the full-potential linearized augmented plane-wave method with a basis set that includes local orbitals (FP-LAPW+lo), as implemented in the WIEN2k simulation package \cite{Blaha2001}. In both cases the PBE-GGA \cite{Perdew1996} exchange-correlation functional is employed and an $8\times8\times8$ Monkhorst-Pack k-mesh is used for the Brillouin zone integration.
The pseudopotential method is used for the structural optimization and for the calculation of the electrostatic potential in real space. For the structural relaxation, the kinetic energy cutoff for the plane waves is set to 30 Ry. The electron-ion interaction is treated within Vanderbilt ultrasoft pseudopotentials, for which the charge density cutoff is chosen to be 300 Ry. The tolerance for the Hellmann-Feynman force on each atom is taken as 20 meV/\AA.
The optimized structure obtained from the pseudopotential method has been further used for the calculation of the electronic and magnetic structure using the FP-LAPW method. The computational details for this method are as follows. To incorporate the effect of strong correlation, an effective onsite correlation parameter $U$ ($U_{eff}$ = $U-J$) is included through the rotationally invariant Dudarev approach \cite{Dudarev1998}. All the results in the paper are presented for $U$ = 3 eV. However, to examine the invariance of the mechanism, the results are also analyzed for a higher $U$ (= 5 eV). The LAPW basis includes the 5$d$ and 6$s$ orbitals of La; 5$s$ of Sr; 3$d$ and 4$s$ of Mn, Fe, and Co; 4$d$ and 4$s$ of Mo; and 2$s$ and 2$p$ of O. The $RK_{max}$ is taken to be 7.0, yielding 24235 plane waves for each k-point in the interstitial region. The principal components of the conductivity tensor ($\sigma_{\alpha\beta}$) are computed using semi-classical Boltzmann transport theory as implemented in the BoltzTraP code \cite{Madsen2006}. A highly dense non-shifted mesh with 32000 k-points is used to obtain a smooth interpolation of the bands and to compute the necessary derivatives required for the calculation of $\sigma_{\alpha\beta}$.\\
\section{Bulk Electronic Structure}
Bulk SFMO is a half-metallic ferrimagnet, where only the spin-down channel exhibits metallic behavior and the Fe spins are aligned antiparallel to the Mo spins \cite{Kobayashi1998,Sarma2000}. As Fig. \ref{fig:bulk_ES_GS_mechanism} shows, the Fermi level (E$_F$) in the spin-down channel is occupied by the Mo-dominated bonding states of the Mo-$t_{2g}\downarrow$ - Fe-$t_{2g}\downarrow$ hybridization. In the $d^5$ configuration of the high-spin Fe$^{3+}$ ion, the $t_{2g}\downarrow$ state is expected to be empty. Similarly, in the $d^1$ configuration of the Mo$^{5+}$ ion, the $t_{2g}\downarrow$ state is partially occupied while the $d\uparrow$ states are empty. However, the delocalized 4$d$ states hybridize significantly with the Fe-$t_{2g}\downarrow$ states to form a set of partially occupied dispersive bands. As a consequence, a three-dimensional spin-polarized electron gas is formed.
\onecolumngrid
\begin{center}
\begin{figure}[H]\hspace*{1cm}
\includegraphics[angle=0,origin=c,height=6.0cm,width=16.0cm]{Fig2.png}
\caption{Band structure (shown for the spin-down channel) and densities of states for SFMO and LCMO. The results are obtained using GGA+$U$ ($U$ = 3 eV). A four-formula-unit cell is used in order to have a Brillouin zone identical to that of the superlattice. The partially occupied $t_{2g}$ states in the spin-down channel form the 3DEG. LCMO exhibits an insulating ground state following the Mott mechanism.}
\label{fig:bulk_ES_GS_mechanism}
\end{figure}
\end{center}
\twocolumngrid
Double perovskite LCMO is a rare ferromagnetic insulator \cite{Baidy2011}. Its band structure and densities of states (DOS) are shown in Fig. \ref{fig:bulk_ES_GS_mechanism}. While Mn stabilizes in the 4+ charge state, leading to the $t_{2g}^3\uparrow$ $e_{g}^0\uparrow$ $t_{2g}^0\downarrow$ $e_{g}^0\downarrow$ configuration, Co stabilizes in the 2+ charge state, leading to the $t_{2g}^3\uparrow$ $e_{g}^2\uparrow$ $t_{2g}^2\downarrow$ $e_{g}^0\downarrow$ configuration. In the spin-up channel, the band gap is opened by the large crystal-field splitting of the Mn-$d$ states as well as the large spin-exchange splitting of both the Mn and Co $d$ states \cite{Zhu2012}. In the absence of the strong correlation effect, the $t_{2g}^2\downarrow$ configuration would have created a metallic state for the perfect cubic phase. However, with the tilting of the octahedra as well as the strong correlation effect, the $t_{2g}$ states are further split into an occupied lower Hubbard band and an unoccupied upper Hubbard band, opening a gap in the spin-down channel and making the system insulating \cite{Parida2017}. Our estimated exchange energies ($J$ = $E_{\uparrow\downarrow}-E_{\uparrow\uparrow}$) confirm that there is a strong ferromagnetic coupling between the Co and Mn spins ($J_{Co-Mn}\sim$ 10.11 meV) \cite{Lv2012} which overcomes the Co-Co ($J_{Co-Co}\sim-1.92$ meV) and Mn-Mn ($J_{Mn-Mn}\sim-1.52$ meV) antiferromagnetic couplings. The detailed mechanisms are illustrated in the appendix to further elucidate the half-metallic and ferromagnetic-insulating behaviors of SFMO and LCMO, respectively.
\section{Formation of Periodic Quantum Well Structure}
The growth of the (SFMO)$_2$/(LCMO)$_2$ superlattice, as shown in Fig. \ref{fig:Structure_SL_H_L_mechanism}, brings a potential mismatch between the SFMO and LCMO sides and hence creates a quantum well structure. To demonstrate it, we have estimated the variation of the macroscopic average of the electrostatic potential ($V^{MA}$) of bulk SFMO and LCMO as well as that of the SL-H superlattice as follows. First, the $xy$-planar average of the potential ($V^{PA}$) is obtained by averaging the raw three-dimensional potential ($V^{raw}$) \cite{Franciosi1996}:
\begin{equation}
V^{PA}(z)=\frac{1}{S}\int_sV^{raw}(x,y,z)dxdy.\\
\end{equation}
where $S$ is the area of the (001) plane of the unit cell. $V^{PA}$ is further averaged to obtain $V^{MA}$:
\begin{equation}
V^{MA}(z)=\frac{1}{c}\int^{z+c/2}_{z-c/2}V^{PA}(z^\prime)dz^\prime\\
\end{equation}
Here, $c$ is the length of one period. For the LCMO and SFMO slabs, the respective lattice parameters are taken as $c$. In the case of the superlattice, V$^{MA}$ is calculated using the $c$ lattice parameter of SFMO as well as that of LCMO, and the average of the two is taken to minimize the error at the interface \cite{Koberidze2016}.
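A minimal numerical sketch of this double averaging (Python/NumPy; the grid shapes and names are illustrative) is given below. For the superlattice, one would call it twice, with the SFMO and LCMO periods as window lengths, and average the two results as described above.
\begin{verbatim}
import numpy as np

def macroscopic_average(V_raw, period_pts):
    """V_raw: potential on an (nx, ny, nz) real-space grid.
    period_pts: length of one period c in grid points along z."""
    V_pa = V_raw.mean(axis=(0, 1))                 # Eq. (1)
    kernel = np.ones(period_pts) / period_pts
    nz = V_pa.size                                 # Eq. (2): periodic
    V_ma = np.convolve(np.tile(V_pa, 3), kernel,   # running mean over
                       mode="same")[nz:2 * nz]     # one period
    return V_pa, V_ma
\end{verbatim}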
\begin{figure}[H]
\begin{center}
\includegraphics[angle=0,origin=c,height=9.2cm,width=8.5cm]{Fig3.png}
\caption{(a) Planar average (V$^{PA}$) and macroscopic average (V$^{MA}$) potential of 4 unit cell thick SFMO and LCMO slabs with reference to the vacuum level. (b) V$^{PA}$ and V$^{MA}$ of the SL-H superlattice suggesting the formation of periodic quantum well structure.}
\label{fig:avg_potential}
\end{center}
\end{figure}
Fig. \ref{fig:avg_potential} shows V$^{PA}$ and V$^{MA}$ for the pure LCMO and SFMO slabs as well as for the SL-H superlattice. When compared with the vacuum level, the $V^{MA}$ of the SFMO slab is found to be $\sim$ 1 eV higher than that of the LCMO side. Hence, in the absence of any significant ionic displacements and breakdown of the planar geometry, the SL-H superlattice is expected to produce a periodic quantum well structure with a depth of $\sim$ 1 eV. Our structural relaxation of the SL-H superlattice shows that the non-planar displacements of the ions are of the order of 0.004 {\AA}, and therefore the layered geometry is maintained. Also, Fig. \ref{fig:avg_potential} (b) shows that the $V^{MA}$ of the superlattice in the ground state structure exhibits a periodic quantum well of depth $\sim$ 0.96 eV.
The spin-polarized 3DEG of the superlattice will now experience this periodic quantum well and also the strong correlation effect. Hence, new quantum states are expected to emerge, which we have examined by carrying out band structure calculations.
\section{Eigenstates reconstruction and formation of 2DEG}
In the spin-up channel, bulk SFMO and LCMO exhibit a band gap larger than the depth of the potential well (see Fig. \ref{fig:bulk_ES_GS_mechanism}). Therefore, in this spin channel, like the bulk, the superlattices also exhibit insulating behavior. Hence, our band structure analysis for the superlattice is restricted to the spin-down channel.
The $t_{2g}$-projected spin-down band structure of the SL-H superlattice within the independent electron approximation ($U$ = 0) is shown in Fig. \ref{fig:SL-H_u0}. Since the Mn-$t_{2g}\downarrow$ states lie far above E$_F$ due to the large exchange splitting, the effect of the potential well on them is inconsequential. For the remaining six transition metal elements (two each of Fe, Mo, and Co), the periodic potential well along $z$ breaks the three-fold degeneracy and splits the corresponding $t_{2g}\downarrow$ states into a planar $xy$ state and two-fold degenerate $xz$ and $yz$ states. The left panel of Fig. \ref{fig:SL-H_u0} highlights the $xy$-orbital-dominated bands. The two lower-lying (nearly) occupied parabolic bands (1, 2; blue) belong to the two Co atoms located in the lower potential region. Of the remaining four, two are partially occupied parabolic bonding bands (3, 4; magenta) and two are unoccupied anti-bonding bands (5, 6; cyan) resulting from the Fe-Mo $t_{2g}\downarrow-t_{2g}\downarrow$ interactions discussed for the bulk band structure (Fig. \ref{fig:bulk_ES_GS_mechanism}). Except for a minor shift in their energy levels, these bonding bands resemble those of the bulk ($U$ = 3 eV) band structure, which suggests that these states are delocalized and are not affected by the quantum well.
The right panel of Fig. \ref{fig:SL-H_u0} highlights the bands dominated by the ($xz$, $yz$) orbitals. Unlike the bands with in-plane $xy$ character, these bands, lying in the range E$_F$ $-$ 0.8 to E$_F$ + 1.2 eV, are found to be localized and discrete, which is a signature of quantization. Due to the degeneracy of the $xz$ and $yz$ states, there are six pairs of such bands (1$^{\prime}$ to 6$^{\prime}$ of Fig. \ref{fig:SL-H_u0} (right)). The lower, middle, and upper two pairs are predominantly contributed by the Co, Fe, and Mo atoms, respectively. However, a noticeable presence of Mo-$\{xz, yz\}$ character in the lower two pairs suggests that a new Mo-$t_{2g}$ - Co-$t_{2g}$ hybridization has taken place across the interface.
\onecolumngrid
\begin{center}
\begin{figure}[H]\hspace*{2cm}
\includegraphics[angle=-0,origin=c,height=4.7cm,width=12.0cm]{Fig4.png}
\caption{Spin-down band structure of the SL-H superlattice for $U$ = 0. The contributions of the planar orbitals (Mo, Fe, and Co $xy$) and the $z$-axis-oriented orbitals ($xz$ and $yz$) to the band structure are shown on the left and the right, respectively. The discrete pairs 1$^{\prime}$ to 6$^{\prime}$ are the outcome of the quantization by the periodic potential well (see Fig. \ref{fig:avg_potential}). The partially occupied bands, 3 and 4 of the left panel, form the spin-polarized 2DEG.}
\label{fig:SL-H_u0}
\end{figure}
\end{center}
\twocolumngrid
\begin{center}
\begin{table}[H]
\caption{Total energies of different magnetic configurations of the superlattice. In the bulk magnetic ordering, the spins of the transition metal cations in SFMO are antiparallel, whereas they are parallel in LCMO. In the C-AFM configuration, the intra-plane coupling between neighboring spins is antiferromagnetic, while the inter-plane coupling is ferromagnetic. In the G-AFM spin arrangement, both the intra- and inter-plane couplings between neighboring spins are antiferromagnetic. The A-AFM arrangement corresponds to intra-plane ferromagnetic coupling and inter-plane antiferromagnetic coupling. We find that there are no new magnetic phases and the superlattice inherits the spin arrangement of the respective bulk compounds.}
\vspace{0.5cm}
\label{tab:magnetic_ordering}
\begin{tabular}{ c c c c }
\hline
\multicolumn{2}{ c }{Interface magnetic orderings} & \multicolumn{2}{ c }{\hspace{0.2cm}$\Delta$E in eV}\\
\cline{3-4}
\multicolumn{2}{ c }{} & \hspace{0.7cm}SL-H & \hspace{0.7cm} SL-L\\
\hline
Bulk & \hspace{0.3cm}$\mathrm{(Co)_\uparrow(Mn)_\uparrow/(Fe)_\uparrow(Mo)_{\downarrow}}$ & \hspace{0.7cm}0 & \hspace{0.7cm}0\\
\hline
C-AFM & \hspace{0.3cm}$\mathrm{(Co)_\uparrow(Mn)_\downarrow/(Fe)_\uparrow(Mo)_\downarrow}$ & \hspace{0.7cm}0.82 & \hspace{0.7cm}0.19\\
\hline
G-AFM & \hspace{0.3cm}$\mathrm{(Co)_\uparrow(Mn)_\downarrow/(Fe)_\downarrow(Mo)_\uparrow}$ & \hspace{0.7cm}0.73 & \hspace{0.7cm}0.20\\
\hline
A-AFM & \hspace{0.3cm}$\mathrm{(Co)_\uparrow(Mn)_\uparrow/(Fe)_\downarrow(Mo)_\downarrow}$ & \hspace{0.7cm}0.09 & \hspace{0.7cm}0.10\\
\hline
\end{tabular}
\end{table}
\end{center}
The independent electron approximation does not provide the exact ground state, particularly in the case of oxides, as it inadequately accounts for the electron correlation in the system. The correlation effect can be included through the parametric Hubbard $U$ formalism. As the ground state electronic structure of bulk SFMO and LCMO is accurately estimated for $U$ = 3 eV, we have used the same value for the superlattices as well. Also, to determine the ground state magnetic configuration, several possible arrangements of the Co, Mn, Fe, and Mo spins are considered and the corresponding total energies are listed in Table \ref{tab:magnetic_ordering}. We find that there is no magnetic reconstruction and the bulk magnetic orderings of SFMO and LCMO constitute the magnetic ground state of the superlattice.
The spin-down band structure for the magnetic ground state of the SL-H superlattice is shown in Fig. \ref{fig:SL_H_I_L_ES_MLWF}(a). We find that, following the Mott mechanism, there is a significant repositioning of the bands with respect to the $U$ = 0 band structure. Of the two lower-lying Co-$xy$-dominated bands (1 and 2 of Fig. \ref{fig:SL-H_u0} (left)), the occupied one lowers its energy by roughly 1 eV and the fractionally occupied one raises its energy by approximately 1 eV to become unoccupied. However, the fractionally occupied itinerant Mo-Fe bonding $xy$ states remain unchanged. Similarly, in the case of the potential-well-quantized bands dominated by $xz$, $yz$ character, the lowest pair (1$^{\prime}$) is pushed further down and lies at $-$1.2 eV w.r.t. E$_F$. Also, there is a visible separation of 0.5 eV between the next two quantized pairs (2$^{\prime}$ and 3$^{\prime}$). While 2$^{\prime}$ is completely occupied, 3$^{\prime}$ is empty. These two quantized states are now dominated by Mo and Co $\{xz, yz\}$ characters. The upper three quantized states are less affected by $U$. In addition to the repositioning, the strong correlation effect further localizes the quantized states. The band dispersion, plotted in the interfacial reciprocal space ($k_x-k_y$ plane) (Fig. \ref{fig:SL_H_I_L_ES_MLWF} (b)), and the eigenstate-resolved DOS for the whole Brillouin zone (Fig. \ref{fig:SL_H_I_L_ES_MLWF} (c)) further confirm the quantization and localization of the $xz$- and $yz$-based bulk states and the presence of the unaffected itinerant bonding $xy$ states. The charge density plot of Fig. \ref{fig:SL_H_I_L_ES_MLWF} (d) provides a visual assessment of the orbital contributions of bands 3, 4, 2$^{\prime}$, and 3$^{\prime}$.
\onecolumngrid
\begin{center}
\begin{figure}[H]
\includegraphics[angle=-0,origin=c,height=9.0cm,width=18.0cm]{Fig5.png}
\caption{Strongly correlated electronic structure of the SFMO/LCMO superlattice. (a) The spin-down band structure of the SL-H superlattice, projecting the contributions of the in-plane ($xy$) and out-of-plane ($xz, yz$) orbitals onto the bands, as obtained using GGA$+U$ (= 3 eV). For the color code of the bands, refer to Fig. \ref{fig:SL-H_u0}. (b) The same spin-down band structure, but now plotted in the interfacial reciprocal space ($k_x-k_y$ plane) in the vicinity of E$_F$. (c) The spin-down DOS of the SL-H superlattice. The numbering of the bands identifies the discretization due to the combined effect of the periodic quantum well and the strong correlation. (d) (left to right) The charge density plots for the dispersive bands $\{3, 4\}$ and the quantized bands $2^{\prime}$ and $3^{\prime}$. The former further confirm that the itinerant electrons occupy hybridized states formed by the $xy$, $x$, and $y$ orbitals of the FeMoO$_4$ plane. The charge densities also verify that the quantized bands $2^{\prime}$ and $3^{\prime}$ are formed by the out-of-plane $xz$ and $yz$ orbitals. (e) and (f) represent the spin-down band structures of the intermediate-symmetry (SL-I) and lower-symmetry (SL-L) superlattices (see Fig. \ref{fig:Structure_SL_H_L_mechanism} (b)), respectively. While the lowering of the symmetry further discretizes the quantized bands, the partially occupied dispersive bands $\{3, 4\}$ are almost unaffected.}
\label{fig:SL_H_I_L_ES_MLWF}
\end{figure}
\end{center}
\twocolumngrid
To examine the robustness of the quantization and of the 2DEG, we have examined the ground state electronic structure of the lower-symmetry structure (SL-L, Fig. \ref{fig:Structure_SL_H_L_mechanism}(b)), where the superlattice is designed assuming growth on an LCMO substrate. Also, the electronic structure of an intermediate-symmetry configuration (SL-I), designed by taking the average of the LCMO and SFMO crystals, is calculated. Figs. \ref{fig:SL_H_I_L_ES_MLWF} (e and f) show the spin-down band structures of the SL-I and SL-L configurations. While the crystal structure of SL-H is tetragonal, it is monoclinic for SL-I and SL-L. With the lowering of the symmetry from tetragonal to monoclinic ($\beta \neq 90^{\circ}$), there is an intermixing of the $xy$ state with the $xz$ and $yz$ states through inter-site hybridization. This results in a discretization and minor localization of the planar $xy$-dominated bands, as can be observed from Figs. \ref{fig:SL_H_I_L_ES_MLWF} (e and f). However, the effect is very weak and can be neglected. Later, from Fig. \ref{fig:conductivity}, we will find that the electrical conductivity along the superlattice growth direction is negligible for all the superlattices, confirming that the electron conduction is confined to the $xy$ plane.
\begin{figure}[H]
\begin{center}
\includegraphics[angle=-0,origin=c,height=6.0cm,width=8.0cm]{Fig6.png}
\caption{The planar $xy$-projected bands (1 to 6) and the bands projected onto the $z$-axis-oriented $xz$ and $yz$ orbitals (1$^{\prime}$ to $6^{\prime}$) in the spin-down channel for the SL-H superlattice. The results are obtained for $U$ = 5 eV.}
\label{fig:SL_H_u5}
\end{center}
\end{figure}
To see whether the spin-polarized 2DEG and the quantized states remain invariant with respect to the strong correlation effect, we have further examined the electronic structure for higher values of $U$. The spin-down band structure of the SL-H superlattice for $U$ = 5 eV is shown in Fig. \ref{fig:SL_H_u5}. We find that the formation of the 2DEG is invariant. However, a higher value of $U$ further localizes the quantized states. In Appendix C, we have applied different $U$ values to the different transition metal elements and still find that this has no bearing on the formation of the 2DEG.
\section{Transport properties}
The formation of the spin-polarized 2DEG out of the bulk SFMO 3DEG through the confinement effect can be quantified by calculating the conductivity. For this, we have adopted semi-classical Boltzmann transport theory as implemented in the BoltzTraP code \cite{Madsen2006} and calculated the conductivity tensor ($\sigma$) from the first-order derivatives of the bands $\epsilon(k)$:
\begin{equation}
\sigma_{\alpha\beta}(\epsilon)=\frac{e^2\tau}{N}\sum_{i,\textbf{k}} v_\alpha(i,\textbf{k})v_\beta(i,\textbf{k})\frac{\delta(\epsilon-\epsilon_{i,\textbf{k}})}{d\epsilon},
\label{eqn:transport_eqn3}
\end{equation}
where $\tau$ is the relaxation time, $i$ is the band index, $v$ is the first-order derivative of $\epsilon_{i,\textbf{k}}$, and $N$ is the number of $\textbf{k}$ points sampled. The notations $\alpha$ and $\beta$ stand for the crystal axes. The temperature-dependent conductivity evaluated using Eq. \ref{eqn:transport_eqn3} is given below:
\begin{equation}
\sigma_{\alpha\beta}(T,\mu) =\frac{1}{\Omega}\int\sigma_{\alpha\beta}(\epsilon) \Big[-\frac{\partial f_\mu(T,\epsilon)}{\partial\epsilon} \Big]d\epsilon,
\label{eqn:transport_eqn}
\end{equation}
where $\Omega$ is the volume of the unit cell, $\mu$ (= E$_F$) is the chemical potential, and $f$ is the Fermi-Dirac distribution function.
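The two equations can be combined into a short numerical sketch (Python/NumPy). The band energies and group velocities on a uniform k-mesh are assumed as inputs (e.g., from a BoltzTraP-style interpolation), and constant prefactors such as $e^2$ are dropped:
\begin{verbatim}
import numpy as np

def sigma_over_tau(eps, vel, mu, T, volume, n_bins=500):
    """eps[i,k]: band energies (eV); vel[i,k,:]: group velocities on a
    uniform mesh of N k-points. Returns the 3x3 tensor of Eq. (4),
    up to constant prefactors (e^2), which are omitted here."""
    kB = 8.617e-5                                  # eV/K
    E = np.linspace(eps.min(), eps.max(), n_bins)
    dE = E[1] - E[0]
    idx = np.clip(((eps - E[0]) / dE).astype(int), 0, n_bins - 1)
    sig = np.zeros((3, 3, n_bins))                 # Eq. (3): delta -> bins
    for a in range(3):
        for b in range(3):
            np.add.at(sig[a, b], idx.ravel(),
                      (vel[..., a] * vel[..., b]).ravel())
    sig /= eps.shape[1] * dE
    x = np.clip((E - mu) / (2 * kB * T), -60, 60)
    mdf = 1.0 / (4 * kB * T * np.cosh(x)**2)       # -df/dE
    return (sig * mdf).sum(axis=-1) * dE / volume  # Eq. (4)
\end{verbatim}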
In Fig. \ref{fig:conductivity}, we have plotted $\sigma/\tau$ vs $E-E_F$ at room temperature for both the bulk and the superlattices. In bulk SFMO, the conductivities along all three principal axes are nearly the same, as it has partially occupied dispersive three-fold degenerate $t_{2g}$ states (see Fig. \ref{fig:bulk_ES_GS_mechanism}). In contrast, for the SL-H superlattice, the potential along $z$ restricts the electronic motion along [001]. Hence, $\sigma_{xx}/\tau$ and $\sigma_{yy}/\tau$ are finite, but $\sigma_{zz}/\tau$ is negligible. However, the magnitude of the conductivity along $x$ or $y$ is reduced by approximately two-thirds compared to the bulk. This is due to the fact that in SL-H the $xz$ and $yz$ orbitals are no longer dispersive; only the bonding dispersive band dominated by the planar $xy$ orbital contributes to the conductivity. The $\sigma/\tau$ vs $E-E_F$ plots for the SL-I and SL-L superlattices with reduced symmetry show behavior similar to that of SL-H, suggesting the robustness of the spin-polarized 2DEG against perturbations through lattice distortion.\\
\onecolumngrid
\begin{center}
\begin{figure}[H]
\includegraphics[angle=-0,origin=c,height=5.0cm,width=18.0cm]{Fig7.png}
\caption{Transport properties of bulk SFMO and the SFMO/LCMO superlattices. (a)-(d) The principal components of the electrical conductivity tensor at room temperature for bulk SFMO and the SL-H, SL-I, and SL-L superlattices, respectively. The results are obtained from Eq. \ref{eqn:transport_eqn} using semi-classical Boltzmann transport theory. The confinement potential restricts the electron motion along [001] and hence $\sigma_{zz}$ becomes negligible. The significant values of $\sigma_{xx}$ and $\sigma_{yy}$ imply two-dimensional mobility of the electrons and hence the formation of the spin-polarized 2DEG. Due to the $xy$ planar symmetry, $\sigma_{xx}$ and $\sigma_{yy}$ are the same in bulk SFMO and the SL-H superlattice. Minor distortions in the plane make them distinguishable in the SL-I and SL-L superlattices.}
\label{fig:conductivity}
\end{figure}
\end{center}
\twocolumngrid
\vspace*{-2cm}
\section{Conclusions}
In summary, using the DFT+$U$ method, we have shown that the magnetic metal-insulator superlattice Sr$_2$FeMoO$_6$/La$_2$CoMnO$_6$ creates a spin-polarized 2DEG (SP-2DEG). Our study provides an alternate quantization mechanism to intrinsically form a 2DEG, which is very different from the conventional engineering of polar hetero-interfaces to achieve the same. The mechanism involves the confinement of the spin-polarized mobile electrons by a periodic finite square well potential and the further localization of the quantized states through the strong correlation effect. This restricts the mobility of the electron gas to the plane perpendicular to the confinement direction. An experimental realization of such a superlattice would be an ideal platform to study several fundamental phenomena such as the intrinsic anomalous Hall effect and the Rashba effect. Since the bulk magnetic order is unaffected in this superlattice, it is expected to have a high Curie temperature as in the bulk. Therefore, the SP-2DEG formed here will be useful for spintronic applications.
\vspace*{-0.4cm}
\section{Acknowledgements}
The authors would like to thank HPCE, IIT Madras for providing the computational facility. This work is supported by Department of Science and Technology, India, through Grant No. EMR/2016/003791.
|
2,869,038,156,924 | arxiv | \section{Introduction}
Quantum cryptography has become one of the most fruitful and versatile commercial applications of quantum information. While classical encryption can in principle be compromised with a powerful enough computer, quantum encryption provides a platform where any eavesdropping attempt can be detected with a very high probability. There are several major schemes where quantum encryption is employed, such as: (i) quantum key distribution (QKD), where a random key is generated and securely shared between two parties and used later in classical encryption; (ii) quantum secure direct communication (QSDC), where a message is securely and directly transferred between two parties using a quantum algorithm, without the need for sharing a secure key or sending data over the classical channel except for detecting an eavesdropper; (iii) deterministic secure quantum communication (DSQC), where the message is also sent deterministically over a quantum channel with the help of sending data over the classical channel \cite{long2007}. While the experimental realization of QKD was achieved at least as early as 1992 \cite{bennett1992}, proof-of-concept experiments of quantum direct communication have been achieved only recently \cite{hu2016, sun2018,zhu2017,qi2019}.
There is a wide variety of proposals for each of these schemes, differing in the states of the photons used (entangled photons or single photons) and the type of the quantum channel (one-way or two-way). While the oldest QKD protocol (BB84), introduced by Bennett and Brassard in 1984 \cite{bb84}, uses single photons, there are other protocols that use entangled pairs of photons \cite{ekert91,bennett1992}. Similarly, there are numerous QSDC schemes that use entangled photons, usually in one of the Bell states \cite{zhu2006,bostrom2002,deng2003, cai2004,gao2019,li2020,zhou2020,zhou2020-2,xie2020}, and others which use single photons \cite{deng2004,hu2016}.
DSQC schemes also use either entangled photons \cite{shimizu1999,li2006,joy2017} or single photons \cite{beige2001,li2006,huang2012,jiang2017}. While there exist direct quantum communication protocols which require only a one-way quantum channel, such as \cite{deng2003}, many QSDC/DSQC protocols require two-way quantum channels where photons are sent back and forth between Alice and Bob (the two famous parties who know the laws of quantum mechanics very well and use them in order to secure their communication). This latter type requires storing the qubits for a long time using a quantum memory, which may be difficult to achieve due to their short coherence times, and also requires precise control of the timing of their manipulation. Overcoming these difficulties by implementing an atomic quantum memory \cite{zhang2017} or devising a new protocol that does not require a quantum memory \cite{sun2018} was only achieved very recently.
QKD can be implemented by sending single photons using only a single degree of freedom, i.e., using a two-dimensional Hilbert space, as in the BB84 protocol. For sending data in a deterministic manner using a one-way quantum channel, we need at least a four-dimensional Hilbert space \cite{beige2001,beige2002}. For example, in the protocol proposed by A. Beige et al. \cite{beige2002}, both the spatial and polarization degrees of freedom of single photons are used.
In DSQC/QSDC, we not only aim to detect an eavesdropper (let us call him Evan) with a very high probability, but also to prevent Evan from discerning a good part of the message before being detected \cite{deng2004}. One of the main ideas in this paper is that fulfilling the first aim actually facilitates the fulfillment of the second one, by sending an encrypted message while sending the key to decrypt this message only after the safety of the communication channel is verified. This can be done in several ways. For example, we can preprocess the message before sending it with a DSQC protocol using a symmetric cryptographic algorithm and send the crypto-key (using a similar DSQC protocol) only if the channel is safe. In this way, we can ensure that even if Evan discerned any part of the sent packet, he will not be able to decipher the message since the key will not be available to him. Another method to fulfill this aim is simply to shuffle the bits constituting the message in a random order and only send the information used to restore the order of each bit after ensuring the privacy of the channel.
In this paper, we present two protocols in which both the key and the encrypted message are sent over a quantum channel using pairs of single photons or entangled photons. Both protocols require a one-way quantum channel in addition to the classical channel and use similar pre- and postprocessing of the transmitted bits (Figure 1), but differ in the quantum encryption part (Figure 2). In section 2, we present the first protocol, using unentangled photons, and describe the classical preprocessing common to the two protocols. In section 3, we present the second protocol, using entangled photons. The two protocols are described using generic quantum circuits. Finally, in section 4, we analyze the robustness of these protocols against famous eavesdropping schemes.
\section{DSQC protocol without entanglement}
In this protocol, Alice encodes each bit by two photons (we will refer to photons as qubits henceforth) prepared in two different bases assigned to the two qubits randomly. For general qubits, a Hadamard gate (H) is applied to one of the two qubits, selected at random, after both have been encoded in the computational basis with the state of the classical bit. Therefore, `1' is encoded by either of the two-qubit states $|+1\rangle$ or $|1+\rangle$ and, similarly, `0' is encoded by either $|-0\rangle$ or $|0-\rangle$, randomly selected, where $|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ and $|-\rangle=\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle)$. In the case of photons, the two bases can be the rectilinear and diagonal polarizations. Bob, on the other hand, always measures the two qubits in the same basis, chosen at random for each pair (see Figure 2-a, b). By doing so, and assuming a noiseless channel, he ensures that at least one of the two qubits is measured in the correct basis. The measurement outcome of the other qubit will be completely random. In cases where his measurements of the two qubits agree, he knows for sure which bit was encoded by Alice, without the need for classical communication. For the other cases, Bob sends to Alice over the classical channel the locations of the pairs where his measurement outcomes differ. Alice, in turn, sends him over the classical channel her choices for these cases. Bob then finds out which one of the two qubits was measured without passing through a Hadamard gate (H) on either side, or with passing through H on both sides. In both cases, the measurement outcome of this qubit is the true classical bit encoded by Alice, since $\text{HH}=\mathbb{1}$.
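The round just described can be checked with a few lines of state-vector simulation (Python/NumPy). This is a sketch of the logic for one logical bit, not a model of the optical implementation:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
KET = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

def alice_encode(bit):
    q = [KET[bit].copy(), KET[bit].copy()]
    which = rng.integers(2)             # qubit that gets the H gate
    q[which] = H @ q[which]
    return q, which

def bob_measure(q):
    use_h = rng.integers(2)             # one random basis for both qubits
    outs = []
    for psi in q:
        if use_h:
            psi = H @ psi
        outs.append(int(rng.random() < abs(psi[1])**2))
    return outs, use_h

for _ in range(1000):
    bit = int(rng.integers(2))
    q, which = alice_encode(bit)
    outs, use_h = bob_measure(q)
    if outs[0] == outs[1]:
        decoded = outs[0]               # no classical help needed
    else:                               # Alice announces `which`
        decoded = outs[which] if use_h else outs[1 - which]
    assert decoded == bit
print("noiseless channel: every bit decoded correctly")
\end{verbatim}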
So far, we have introduced only the quantum encryption part of our DSQC protocol. In order to detect eavesdropping and to ensure that Evan cannot decode any part of the message before he is detected, more layers of complexity should be added at each level (see Figure 1). For example, Alice can insert a random subset of bits (redundancy check bits) into the main message at random locations and communicate her choices for these bits, together with their locations, to Bob in public at the end of the transmission. An eavesdropper intervening in the middle by performing any kind of measurement will spoil the encoding of the redundant qubit pairs. Moreover, in order to prevent Evan from discerning any sequence of bits before being detected, the packet is encrypted with a symmetric-key encryption algorithm before being sent to Bob. The key is generated at Alice's side and sent to Bob in the same manner at the end of the encrypted packet transmission, only if no eavesdropping is detected. Consequently, even if Evan could intercept the entire encrypted message by posing as Bob, he would not be able to get any useful information from it without the key used by Alice to encrypt the message. In other words, in order for Evan to get any part of the packet he needs to know both the exact key and the exact encrypted message without being detected, which is very improbable.
Let us now outline the complete algorithm in detail.
\begin{enumerate}
\item Alice divides the full message into small packets $M$, and computes a hash value $S$ for each packet, such as a cyclic redundancy check (CRC) \cite{peterson1961} or a checksum to detect errors in the transmission. Let us denote each of the new packets resulting after appending $S$ to $M$ as $C$.
\item Alice generates a random key $K$ and uses it to encrypt $C$ by a symmetric key algorithm \cite{delfs2007} to obtain a new packet $P$. One possibility is to use an error-correcting code such as \cite{moldovyan2017} in order to overcome errors due to noisy channels or imperfect photon detectors. For optimal security, a one-time-pad symmetric key whose length is at least as long as the message should be used. Other keys may not guarantee unconditional security of the transmission. Let us assume that we use Vernam's one-time pad \cite{vernam1926}: $P=K \oplus C$, where $\oplus$ indicates the bitwise XOR operation.
\item Alice adds a small number of random bits at random locations of $P$ as a redundancy check to obtain a new packet $T$ (a minimal code sketch of steps 1--3 follows this list).
\item Inside the quantum encoder, Alice encodes each bit of $T$ by two qubits in the computational basis $\{|0\rangle,\ |1\rangle\}$ according to the logical bit, before applying a Hadamard gate to one of the two qubits selected randomly (see Figure 2-a).
\item Bob receives each pair of qubits, randomly either applies Hadamard gates to both qubits or to neither, and records his measurement outcomes for each pair, as in Figure 2-b.
\item Bob sends to Alice over the classical channel the indices of the pairs where his measurement outcomes agree. These are the bits of $T$ which Bob could decode independently of Alice.
\item Alice sends to Bob over the classical channel her choices of the basis for the other pairs. Bob uses this information to decode the rest of $T$.
\item Alice sends to Bob the indices of the redundant bits added to $P$ and they compare their values of these bits over the classical channel. If the number of discrepancies between them is higher than a certain threshold determined by the noise of the channel, they conclude that an eavesdropper is intercepting the transmission and the transmission is aborted. Otherwise, $P$ is recovered from $T$ by removing the redundancy bits.
\item Alice proceeds with the transmission of the key. The key is fed into the quantum encoder and steps 4-7 are repeated for $K$.
\item Bob uses $K$ to decrypt $P$ in order to obtain $C$: $C=P\oplus K$. He computes the hash value from $M$ and compares it with the received one ($S$). In case of discrepancies, they conclude that either the channel is too noisy and the errors have corrupted the message/key, or the whole transmission is compromised.
\end{enumerate}
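As referenced in step 3, the following is a minimal sketch of the classical pre-processing (steps 1--3) in Python. CRC32 stands in for the hash $S$, and the key, the check bits, and their positions are drawn from standard-library randomness; a real implementation would need a cryptographic RNG and an authenticated record of the check-bit positions.
\begin{verbatim}
import os, random, zlib

def preprocess(message: bytes, n_check: int = 8):
    """Steps 1-3: append a CRC32 value S, one-time-pad encrypt with a
    fresh key K, then insert n_check random bits at random positions."""
    C = message + zlib.crc32(message).to_bytes(4, "big")     # step 1
    K = os.urandom(len(C))                                   # step 2
    P = bytes(c ^ k for c, k in zip(C, K))                   # P = C xor K
    bits = [(byte >> i) & 1 for byte in P for i in range(7, -1, -1)]
    positions = sorted(random.sample(range(len(bits) + n_check), n_check))
    for pos in positions:                                    # step 3
        bits.insert(pos, random.getrandbits(1))
    return bits, K, positions   # T as a bit list, the key, the check map
\end{verbatim}
Steps 8--10 on Bob's side simply invert these operations: strip the check bits at the announced positions, XOR with $K$, and verify the hash.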
\section{DSQC protocol with entanglement}
Here, we introduce a second protocol which is similar to the one presented in the previous section in terms of the classical preprocessing, but differs in the quantum encoding stage.
In this protocol, the message bits (the logical bits) are the control qubits of the second Z-gate in the circuit used by Alice, as shown in Figure 2-c. Alice also has the freedom to randomly send either two entangled qubits or two non-entangled qubits, depending on the random control qubit of her two-qubit controlled-Z gate (the third Z-gate in Alice's circuit). In the first case, when the controlled-Z gate is enabled, she encodes `1' by the state $\frac{1}{\sqrt{2}}(|-0\rangle-|+1\rangle)$ and `0' by the state $\frac{1}{\sqrt{2}}(|+0\rangle-|-1\rangle)$. The two states are verified to be entangled using the Peres-Horodecki criterion \cite{leinaas2006}. On the other hand, when Alice does not enable the controlled-Z gate, she encodes `1' by the state $|--\rangle$ and `0' by the state $|+-\rangle$. We note that both states are non-entangled, and the message in this case is encoded by the first qubit only, while the second qubit is redundant. The two-qubit controlled-Z gate can be implemented using the standard Toffoli and controlled-Z gates, as shown in Figure 2-e.
Bob, on his side, also has the freedom to insert a controlled-Z gate before he applies a Hadamard gate to the first qubit, as shown in Figure 2-d. If both Alice and Bob enable their controlled-Z gates, the two-qubit state directly before Bob's measurement will be
$\frac{1}{\sqrt{2}}(|10\rangle-|01\rangle)$ for `1' and $\frac{1}{\sqrt{2}}(|00\rangle-|11\rangle)$ for `0', i.e., Bob can distinguish the two message bits by detecting whether his two measurement outcomes agree or not. Interestingly, if neither Alice nor Bob inserts their controlled-Z gate, the two-qubit state directly before Bob's measurement will be the same as in the previous case, and Bob will use the same rule. On the other hand, if the choices of Alice and Bob do not agree, the states of the two qubits just before Bob's measurement will be $|1-\rangle$ for `1' and $|0-\rangle$ for `0', i.e., Bob will look at the measurement of the first qubit only to decode the logical bit. Therefore, Alice should communicate to Bob her random choices for the two-qubit controlled-Z gate, generated by the random number generator, at the end of the transmission of each packet.
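Since the full circuits are given only in Figure 2, the following sketch (Python/NumPy) does not attempt to reconstruct them; it simply prepares the four pre-measurement states quoted above and checks that the stated decoding rule (the parity of the two outcomes when the controlled-Z choices match, the first outcome when they do not) always recovers the logical bit:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
s = 1 / np.sqrt(2)
matched = {1: np.array([0, -s, s, 0]),       # (|10> - |01>)/sqrt(2)
           0: np.array([s, 0, 0, -s])}       # (|00> - |11>)/sqrt(2)
minus = np.array([s, -s])
mismatched = {1: np.kron([0.0, 1.0], minus), # |1->
              0: np.kron([1.0, 0.0], minus)} # |0->

def measure(state):
    p = np.abs(state)**2
    k = rng.choice(4, p=p / p.sum())
    return k >> 1, k & 1                     # outcomes of qubits 1 and 2

for _ in range(1000):
    bit = int(rng.integers(2))
    agree = bool(rng.integers(2))            # do the CZ choices match?
    q1, q2 = measure((matched if agree else mismatched)[bit])
    decoded = int(q1 != q2) if agree else q1
    assert decoded == bit
print("the decoding rule recovers the bit in all cases")
\end{verbatim}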
\section{Security analysis and discussion}
Let us imagine a typical eavesdropping scenario and analyze the quantum bit error rate (QBER) caused by it, assuming that perfect photon detectors are used by all sides. Let us consider first the quantum encoder without entanglement. A typical strategy Evan can follow after intercepting the two qubits is to behave as Bob, by measuring the two qubits in the same basis, re-encoding them as Alice would do, and sending them forward to Bob. This is called an intercept-resend attack. Let us also assume that Evan can listen to the classical channel without interrupting it. This means that he will know the correct values of the logical bits sent by Alice after the public exchange between Alice and Bob corresponding to these cases, as well as in the cases in which he could decode the logical qubits on his own. The two cases combined amount to 75\% of all bits, i.e., the mutual information between Alice and Evan ($I_{AE}$) is 0.75. But what about the errors his intervention will cause at Bob's side? Since Bob randomly applies the Hadamard gates to both qubits, he will get the same values measured by Evan half of the time and completely random values the other half. In the latter case, the random values will cause errors with probability 50\%. Therefore, the bit error rate caused by Evan's intervention, assuming a noiseless channel, is 25\%, similar to the QBER of the BB84 protocol for the same kind of attack. More complex attack scenarios may result in a lower QBER. We show in Fig. 3-a and 3-b the QBER and $I_{AE}$ obtained numerically by simulating Evan's attacks, using packets of length 10000 bits. The rate of Evan's intervention $\epsilon$ varies from 0 to 1. We can see that at $\epsilon=1$, i.e., when Evan intercepts all the qubits, we obtain the theoretical predictions justified earlier, QBER=0.25 and $I_{AE}=0.75$.
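This error rate is easy to reproduce with a classical Monte Carlo over the random basis choices. The sketch below (Python/NumPy) models the variant of the intercept-resend attack in which Evan measures both qubits in one random basis and resends them as measured:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def bob_error(bit, eps):
    """One qubit pair under an intercept-resend attack of rate eps.
    Returns True if Bob decodes the wrong bit."""
    which = rng.integers(2)                  # Alice's H choice
    b_h = rng.integers(2)                    # Bob's basis choice
    if rng.random() < eps:                   # Evan intercepts
        e_h = rng.integers(2)
        det = which if e_h else 1 - which    # qubit Evan reads truly
        e = [0, 0]
        e[det], e[1 - det] = bit, rng.integers(2)
        if b_h == e_h:
            outs = e                         # Bob reproduces Evan exactly
        else:
            outs = [rng.integers(2), rng.integers(2)]
    else:
        det = which if b_h else 1 - which
        outs = [0, 0]
        outs[det], outs[1 - det] = bit, rng.integers(2)
    if outs[0] == outs[1]:
        return outs[0] != bit
    return (outs[which] if b_h else outs[1 - which]) != bit

for eps in (0.0, 0.5, 1.0):
    q = np.mean([bob_error(int(rng.integers(2)), eps)
                 for _ in range(50_000)])
    print(f"eps = {eps:.1f}: QBER ~ {q:.3f}")    # expect eps/4
\end{verbatim}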
\begin{table*}[ht]
\centering
\begin{tabular}{@{}llclllll@{}}
\toprule
& Elsayed I & \multicolumn{1}{l}{Elsayed II} & Shimizu \cite{shimizu1999} & Beige \citep{beige2002} & Rong I \cite{rong2020-I} & Rong II \cite{rong2020-II} & Zou \cite{zou2014} \\ \midrule
Entangled photons & \multicolumn{1}{c}{} & \checkmark & \multicolumn{1}{c}{\checkmark} & & & \multicolumn{1}{c}{\checkmark} & \\
Single-way quantum channel & \multicolumn{1}{c}{\checkmark} & \checkmark & \multicolumn{1}{c}{\checkmark} & \multicolumn{1}{c}{\checkmark} & & & \\
Classical Bob & & \multicolumn{1}{l}{} & & & \multicolumn{1}{c}{\checkmark} & \multicolumn{1}{c}{\checkmark} & \multicolumn{1}{c}{\checkmark} \\
Quantum memory & & & & & & & \multicolumn{1}{c}{\checkmark} \\ \bottomrule
\end{tabular}
\caption{ \label{table-1} Comparison between different deterministic quantum communication protocols, including the two protocols presented in this paper. The criteria are whether the protocol uses single photons or entangled photons, a single-way or a two-way quantum channel, whether Bob applies quantum operations to the received photons before the measurement in the computational basis, and whether a quantum memory to store the photons is needed. }
\end{table*}
Since single-photon sources are often not ideal, an emitted light pulse supposed to contain one photon can carry more or fewer than a single photon. This leaves room for a smart eavesdropper to perform a photon-number-splitting attack, whereby he splits the light pulse if it carries more than one photon and stores one photon at his disposal \cite{brassard2000}. Evan can then listen to the classical communication between Alice and Bob in order to get any clue about the right measurement basis to apply to the stolen photon. Let us analyze the effect of this attack on our DSQC scheme without entanglement. If Evan succeeds in getting duplicates of every pair of qubits sent to Bob, he will wait for the classical communication between Alice and Bob to take place and listen to the choices Alice sends to Bob at certain locations. This scenario corresponds to 50\% of all transmitted qubit pairs. For the other 50\%, where Bob could decode the logical bit on his own without the need to communicate with Alice, Evan will have to do exactly what Bob did; that is, to measure the two qubits in a common, randomly chosen basis. In half of these cases, Evan will get identical measurement outcomes and thus decode the logical bit correctly. Therefore, in this eavesdropping scheme, Evan will be able to decipher overall 75\% of the message. This is quite a large ratio, but it also assumes ideal circumstances on Evan's side, such as the ability to store photon pairs for a long time and the ability to get duplicates of every photon pair transmitted from Alice to Bob.
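The 75\% figure can be verified with a short simulation of this idealized scenario (a sketch; Evan is assumed to hold perfect duplicates of every pair and to measure them only after listening to the classical channel):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

def evan_knows(bit):
    which = rng.integers(2)              # Alice's H choice
    b_h = rng.integers(2)                # Bob's basis choice
    det = which if b_h else 1 - which    # qubit Bob reads deterministically
    bob = [0, 0]
    bob[det], bob[1 - det] = bit, rng.integers(2)
    if bob[0] != bob[1]:                 # Alice announces `which`, so Evan
        return True                      # reads his stored copy exactly
    e_h = rng.integers(2)                # otherwise: measure like Bob
    det = which if e_h else 1 - which
    ev = [0, 0]
    ev[det], ev[1 - det] = bit, rng.integers(2)
    return ev[0] == ev[1]                # self-decoded only if they agree

frac = np.mean([evan_knows(int(rng.integers(2))) for _ in range(50_000)])
print(f"fraction of bits known to Evan ~ {frac:.3f}")   # expect 0.75
\end{verbatim}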
Let us now consider the same intercept-resend attack for the quantum encoder with entanglement. As before, Evan will intercept the two qubits, try to extract the maximum information, and resend them to Bob. While posing as Bob, Evan will randomly enable the controlled-Z gate, record his measurements, and then generate a two-qubit quantum state before he sends it to Bob. Evan will not, however, be able to decode the logical qubits before the end of the packet transmission, when Alice communicates her choices for the two-qubit controlled-Z gate to Bob. He will then be able to decode all the bits with 100\% success probability, but at what cost? Since Evan has to resend the qubits to Bob on time, before he succeeds in decoding the logical bits, he will have to assume random values for the message bits and Alice's random-number-generator bits and use these random bits in the circuit that generates the states he sends to Bob, causing unavoidable errors. We show in Fig. 3-c the numerical simulation of the QBER for this attack, and we notice that for a 100\% intervention rate, QBER=0.5.
Another attack strategy by Evan that will surprisingly leave no trace, i.e., cause no errors at Bob's side, is to measure the first qubit only in the $\{|-\rangle,|+\rangle\}$ basis. In this case, Evan will always decode correctly the logical bits sent via non-entangled states, while he will decode the cases where entanglement is used with only a 50\% probability of success. Therefore, the information Evan retrieves in this case amounts to only 75\% of the packet. This information will not be helpful to Evan, since the key is also encoded by the quantum encoder; therefore, Evan will not gain full access to either the key or the encrypted message. We performed a numerical simulation of $I_{AE}$ for this attack while varying the rate of Evan's intervention $\epsilon$ from 0 to 1. The results are presented in Fig. 3-d.
There is another class of secure direct communication protocols where, unlike in other protocols such as the ones presented in this paper, Bob is not a fully quantum agent. In these semi-quantum protocols, such as \cite{zou2014,rong2020-I,rong2020-II}, Bob does not have to perform complex quantum operations other than measuring the received photons in the computational basis or reflecting the photons back to Alice. In Table 1, we give a comparison between the two protocols presented in this paper, the semi-quantum protocols \cite{zou2014,rong2020-I,rong2020-II}, and two of the earliest DSQC protocols, namely the protocol of Beige et al. \cite{beige2002}, where single photons are used, and the protocol by Shimizu et al. \cite{shimizu1999}, which uses entangled photons. The latter protocol also intersperses the message with random check bits for detecting eavesdropping, like ours. We compare these seven protocols based on whether the protocol uses single photons or entangled photons, a single-way or a two-way quantum channel, whether Bob applies quantum operations to the received photons before the measurement in the computational basis, and whether a quantum memory to store the photons is needed.
In conclusion, it was shown that by combining the methods of classical cryptography and quantum encryption we can construct new protocols for deterministic secure quantum communication that encode both the key and the message with the same quantum algorithm. In the proposed scheme, we send the key through the quantum channel with the same quantum encryption technique we use for the message; it therefore represents an intermediate case between QSDC, where no key at all is used, and conventional DSQC techniques, where a key is sent over the classical channel. While for the quantum one-time-pad protocol \cite{deng2004} a security check is performed before the message is sent, here we perform the check concurrently while sending the encrypted message. The proposed schemes require neither a two-way quantum channel nor a quantum memory. They are in principle similar in nature to the deterministic algorithm proposed in \cite{beige2001,beige2002}, which uses the spatial and polarization degrees of freedom of single photons, where the message is encrypted by Alice using a secret crypto-key before being sent to Bob and random control bits aimed at detecting eavesdropping are inserted. A full security proof, the analysis of more complex attack strategies, and a study of the effects of imperfect detectors and channel losses are required to ensure the security and practicality of the proposed protocol.
The author thanks Prof. Mark Hillery for the hospitality of Hunter College of CUNY where this work was initiated.
\bibliographystyle{andp2012}
\section{Introduction}
Quantum cryptography has become one of the most fruitful and versatile commercial applications of quantum information. While classical encryption can in principle be compromised with a powerful enough computer, quantum encryption provides a platform where any eavesdropping attempt can be detected with a very high probability. There are several major schemes where quantum encryption is employed such as: (i) Quantum key distribution (QKD) where a random key is generated and securely shared between two parties and used later in classical encryption. (ii) Quantum secure direct communication (QSDC) where a certain message is securely and directly transferred between two parties using a quantum algorithm without the need for sharing a secure key or sending data over the classical channel except for detecting an eavesdropper. (iii) Deterministic secure quantum communication (DSQC) where the message is also sent deterministically over a quantum channel with the help of sending data over the classical channel \cite{long2007}. While the experimental realization of QKD has been achieved at least as early as 1992 \cite{bennett1992}, the proof-of-concept experiments of quantum direct communication has been achieved only recently \cite{hu2016, sun2018,zhu2017,qi2019}.
There is a wide variety of proposals for each of these schemes that differ in terms of the states of the photons used (entangled photons or single photons) and the type of the quantum channel (one-way or two-way channel). While the oldest QKD protocol (BB84) introduced by Bennett and Brassard in 1984 \cite{bb84} uses single photons, there are other protocols that use entangled pairs of photons \cite{ekert91,bennett1992}. Similarly, there are numerous QSDC schemes that use entangled photons, usually in one of Bell states (\cite{zhu2006,bostrom2002,deng2003, cai2004,gao2019,li2020,zhou2020,zhou2020-2,xie2020}) and others which use single photons \cite{deng2004,hu2016}.
DSQC schemes also use either entangled photons \cite{shimizu1999,li2006,joy2017} or single photons \cite{beige2001,li2006,huang2012,jiang2017}. While there exists direct quantum communication protocols which require a one-way quantum channel such as \cite{deng2003}, many QSDC/DSQC protocols require two-way quantum channels where photons are sent back and forth between Alice and Bob (the two famous parties who know the laws of quantum mechanics very well and use them in order to secure their communication). This later type requires storing the qubits for a long time using a quantum memory which may be difficult to achieve due to their short coherence time and requires also the precise control of the timing of their manipulation. Overcoming these difficulties by implementing an atomic quantum memory \cite{zhang2017} or devising a new protocol that does not require a quantum memory \cite{sun2018} was only achieved very recently.
QKD can be implemented by sending single photons using only a single degree of freedom, i.e., using a two-dimensional Hilbert space, as in BB84 protocol. For sending data in a deterministic manner using a one-way quantum channel, we need at least a four-dimensional Hilbert space \cite{beige2001,beige2002}. For example, in the protocol proposed by A. Beige et. al. \cite{beige2002} both the spatial and polarization degrees of freedom of single photons are used.
In DSQC/QSDC, we not only aim to detect an eavesdropper (let us call him Evan) with a very high probability, but also to prevent Evan from discerning a good part of the message before being detected \cite{deng2004}. One of the main ideas in this paper is that fulfilling the first aim actually facilitates the fulfillment of the second one, by sending an encrypted message while sending the key to decrypt this message only after the safety of the communication channel is verified. This can be done in several ways. For example, we can preprocess the message before sending it with a DSQC protocol using a symmetric cryptographic algorithm and send the crypto-key (using a similar DSQC protocol) only if the channel is safe. In this way, we can ensure that even if Evan discerned any part of the sent packet, he will not be able to decipher the message since the key will not be available to him. Another method to fulfill this aim is simply to shuffle the bits constituting the message in a random order and only send the information used to restore the order of each bit after ensuring the privacy of the channel.
In this paper, we present a scheme where both the key and the encrypted message are sent over a quantum channel using pairs of single photons or entangled photons. The two protocols require a one-way quantum channel in addition to the classical channel and use similar pre- and postprocessing of the transmitted bits (Figure 1), but differ in the quantum encryption part (Figure 2). In Section 2, we present the first protocol, using unentangled photons, and describe the classical preprocessing common to the two protocols. In Section 3, we present the second protocol, using entangled photons. The two protocols are described using generic quantum circuits. Finally, in Section 4, we analyze the robustness of these protocols against well-known eavesdropping schemes.
\section{DSQC protocol without entanglement}
In this protocol, Alice encodes each bit by two photons (we will refer to photons as qubits henceforth) encoded in two different bases assigned to the two qubits randomly. For general qubits, a Hadamard gate (H) can be applied to one of the two qubits, selected randomly, after both are encoded in the computational basis with the state of the classical bit. Therefore, `1' is encoded by either of the two-qubit states $|+1\rangle$ or $|1+\rangle$, and similarly `0' is encoded by either $|-0\rangle$ or $|0-\rangle$, randomly selected, where $|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ and $|-\rangle=\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle)$. In the case of photons, the two bases can be the rectilinear and diagonal polarizations. Bob, on the other hand, always measures the two qubits in the same basis, chosen randomly for each pair (see Figure 2-a, b). By doing so, and assuming a noiseless channel, he ensures that at least one of the two qubits is measured in the correct basis. The measurement outcome of the other qubit will be completely random. In cases where his measurements of the two qubits agree, he knows for sure which bit was encoded by Alice without the need for classical communication. For the other cases, Bob will send to Alice over the classical channel the locations of the pairs where his measurement outcomes differ. Alice, in turn, will send him over the classical channel her choices for these cases. Bob will then find out which one of the two qubits was measured without passing through a Hadamard gate (H) on either side, or with passing through H on both sides. In both cases, the measurement outcome of this qubit is the true classical bit encoded by Alice, since $\text{HH}=\mathbb{1}$.
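For concreteness, the following minimal Python sketch (our own illustration; the function and variable names do not come from the protocol specification) simulates this encoding and measurement logic on classical records, modeling a qubit measured in a mismatched basis as a fair coin flip:
\begin{verbatim}
import random

def alice_encode(bit):
    # One qubit, chosen at random, is prepared in the Hadamard
    # basis; the other stays in the computational basis.
    h_pos = random.randint(0, 1)
    return [(bit, pos == h_pos) for pos in (0, 1)]

def bob_measure(pair):
    # Bob applies H to both qubits or to neither, at random.
    bob_h = random.choice([True, False])
    outcomes = []
    for bit, in_h_basis in pair:
        if in_h_basis == bob_h:     # bases match: deterministic
            outcomes.append(bit)
        else:                       # bases differ: random outcome
            outcomes.append(random.randint(0, 1))
    return outcomes

o = bob_measure(alice_encode(1))
print(o, "decoded" if o[0] == o[1] else "needs Alice's basis info")
\end{verbatim}
Running the sketch repeatedly confirms the property used above: whenever Bob's two outcomes agree, the decoded value equals Alice's bit.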
So far, we have introduced only the quantum encryption part of our DSQC protocol. In order to detect eavesdropping and to ensure that Evan cannot decode any part of the message before he is detected, more layers of complexity are added at each level (see Figure 1). For example, Alice can insert a random subset of bits (redundancy check bits) into the main message at random locations and publicly communicate to Bob, at the end of the transmission, her choices for these bits together with their locations. An eavesdropper intervening in the middle by performing any kind of measurement will spoil the encoding of the redundant qubit pairs. Moreover, in order to prevent Evan from detecting any sequence of bits before being detected, the packet is encrypted with a symmetric-key encryption algorithm before being sent to Bob. The key is generated at Alice's side and sent to Bob in the same manner at the end of the encrypted packet transmission, only if no eavesdropping is detected. Consequently, even if Evan could intercept the entire encrypted message by posing as Bob, he would not be able to extract any useful information from it without the key used by Alice to encrypt the message. In other words, in order for Evan to get any part of the packet he needs to know both the exact key and the exact encrypted message without being detected, which is very improbable.
Let us now outline the complete algorithm in detail.
\begin{enumerate}
\item Alice divides the full message into small packets $M$, and computes a hash value $S$ for each packet, such as a cyclic redundancy check (CRC) \cite{peterson1961} or a checksum to detect errors in the transmission. Let us denote each of the new packets resulting after appending $S$ to $M$ as $C$.
\item Alice generates a random key $K$ and uses it to encrypt $C$ with a symmetric-key algorithm \cite{delfs2007} to obtain a new packet $P$. One possibility is to use an error-correcting code such as \cite{moldovyan2017} in order to overcome errors due to noisy channels or imperfect photon detectors. For optimal security, a one-time pad whose length is at least as long as the message should be used; other keys may not guarantee unconditional security of the transmission. Let us assume that we use Vernam's one-time pad \cite{vernam1926}: $P=K \oplus C$, where $\oplus$ denotes the bitwise XOR operation (a minimal sketch of this classical pre- and postprocessing is given after this list).
\item Alice adds a small number of random bits at random locations of $P$ as a redundancy check to obtain a new packet $T$.
\item Inside the quantum encoder, Alice encodes each bit of $T$ by two qubits in the computational basis $\{|0\rangle,\ |1\rangle\}$ according to the logical bit, before applying a Hadamard gate to one of the two qubits selected randomly (see Figure 2-a).
\item Bob receives each pair of qubits, randomly either applies Hadamard gates to both qubits or to neither, and records his measurement outcomes for each pair, as in Figure 2-b.
\item Bob sends to Alice over the classical channel the indices of the pairs where his measurement outcomes agree. These are the bits of $T$ which Bob could decode independently of Alice.
\item Alice sends to Bob over the classical channel her choices of the basis for the other pairs. Bob uses this information to decode the rest of $T$.
\item Alice sends to Bob the indices of the redundant bits added to $P$ and they compare their values of these bits over the classical channel. If the number of discrepancies between them is higher than a certain threshold determined by the noise of the channel, they conclude that an eavesdropper is intercepting the transmission and the transmission is aborted. Otherwise, $P$ is recovered from $T$ by removing the redundancy bits.
\item Alice proceeds with the transmission of the key. The key is fed into the quantum encoder and steps 4-7 are repeated for $K$.
\item Bob uses $K$ to decrypt $P$ in order to obtain $C$: $C=P\oplus K$. He computes the hash value from $M$ and compares it with the received one ($S$). In case of discrepancies, they conclude that either the channel is too noisy and errors have corrupted the message/key, or the whole transmission is compromised.
\end{enumerate}
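As a concrete illustration of the classical pre- and postprocessing in steps 1--3 and 10 (referred to in step 2 above), a minimal Python sketch follows; it substitutes a toy 8-bit checksum for a full CRC, and all names are our own:
\begin{verbatim}
import secrets

def checksum(bits):
    # Toy 8-bit checksum standing in for the CRC of step 1.
    return [int(b) for b in format(sum(bits) % 256, '08b')]

def one_time_pad(bits, key):
    # Vernam cipher: bitwise XOR with an equally long key (step 2).
    return [b ^ k for b, k in zip(bits, key)]

M = [1, 0, 1, 1, 0, 0, 1, 0]          # message packet
C = M + checksum(M)                   # append hash value S
K = [secrets.randbits(1) for _ in C]  # fresh one-time-pad key
P = one_time_pad(C, K)                # encrypted packet, sent quantumly

# ... quantum transmission of P, then of K, happens here ...

C2 = one_time_pad(P, K)               # step 10: decrypt
M2, S2 = C2[:len(M)], C2[len(M):]
assert S2 == checksum(M2)             # integrity check
\end{verbatim}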
\section{DSQC protocol with entanglement}
Here, we introduce a second protocol which is similar to the one presented in the previous section in terms of the classical preprocessing, but differs in the quantum encoding stage.
In this protocol, the message bits (the logical bits) are the control qubits of the second Z-gate in the circuit used by Alice, as shown in Figure 2-c. Alice also has the freedom to randomly send either two entangled qubits or two non-entangled qubits, depending on the random control qubit of her two-qubit controlled-Z gate (the third Z-gate in Alice's circuit). In the first case, when the two-qubit controlled-Z gate is enabled, she encodes `1' by the state $\frac{1}{\sqrt{2}}(|-0\rangle-|+1\rangle)$ and `0' by the state $\frac{1}{\sqrt{2}}(|+0\rangle-|-1\rangle)$. The two states are verified to be entangled using the Peres-Horodecki criterion \cite{leinaas2006}. On the other hand, when Alice does not enable the controlled-Z gate, she encodes `1' by the state $|--\rangle$ and `0' by $|+-\rangle$. Both states are non-entangled, and the message in this case is encoded by the first qubit only, while the second qubit is redundant. The two-qubit controlled-Z gate can be implemented using the standard Toffoli and controlled-Z gates, as shown in Figure 2-e.
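The entanglement of the two code states can be checked numerically. The following short NumPy sketch (our own illustration) writes the states in the computational basis and applies the Peres-Horodecki criterion, confirming that the partial transpose of each density matrix has a negative eigenvalue:
\begin{verbatim}
import numpy as np

def ket(v):
    v = np.array(v, dtype=complex)
    return v / np.linalg.norm(v)

def partial_transpose(rho):
    # Transpose the second qubit of a two-qubit density matrix.
    r = rho.reshape(2, 2, 2, 2)       # indices i, j, k, l of |ij><kl|
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

# Alice's entangled code states in the |00>,|01>,|10>,|11> basis.
psi1 = ket([1, -1, -1, -1])           # (|-0> - |+1>)/sqrt(2) -> '1'
psi0 = ket([1, -1,  1,  1])           # (|+0> - |-1>)/sqrt(2) -> '0'

for name, psi in [("'1'", psi1), ("'0'", psi0)]:
    rho = np.outer(psi, psi.conj())
    eigs = np.linalg.eigvalsh(partial_transpose(rho))
    print(name, "min PT eigenvalue:", eigs.min())  # < 0 => entangled
\end{verbatim}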
Bob, on his side, also has the freedom to insert a controlled-Z gate before he applies a Hadamard gate on the first qubit, as shown in Figure 2-d. If both Alice and Bob enable their controlled-Z gates, the two-qubit state directly before Bob's measurement will be
$\frac{1}{\sqrt{2}}(|10\rangle-|01\rangle)$ for `1' and $\frac{1}{\sqrt{2}}(|00\rangle-|11\rangle)$ for `0', i.e., Bob can distinguish the two message bits by detecting whether his two measurement outcomes agree or not. Interestingly, if neither Alice nor Bob inserts the controlled-Z gate, the two-qubit state directly before Bob's measurement will be the same as in the previous case, and Bob will use the same rule. On the other hand, if the choices of Alice and Bob do not agree, the states of the two qubits just before Bob's measurement will be $|1-\rangle$ for `1' and $|0-\rangle$ for `0', i.e., Bob will look at the measurement of the first qubit only to decode the logical bit. Therefore, Alice should communicate to Bob, at the end of the transmission of each packet, her random choices for the two-qubit controlled-Z gate generated by her random number generator.
\section{Security analysis and discussion}
Let us imagine a typical eavesdropping scenario and analyze the quantum bit error rate (QBER) it causes, assuming that perfect photon detectors are used by all sides. Let us consider first the quantum encoder without entanglement. A typical strategy Evan can follow after intercepting the two qubits is to behave as Bob by measuring the two qubits in the same basis, re-encoding them as Alice would do, and sending them forward to Bob. This is called an intercept-resend attack. Let us also assume that Evan can listen to the classical channel without interrupting it. This means he will know the correct values of the logical bits sent by Alice after the public exchange between Alice and Bob for the corresponding cases, in addition to the cases in which he could decode the logical bits on his own. The two cases combined amount to 75\% of all bits, i.e., the mutual information between Alice and Evan ($I_{AE}$) is 0.75. But what about the errors his intervention will cause at Bob's side? Since Bob randomly applies the Hadamard gates to both qubits, he will get the same values measured by Evan half of the time and completely random values the other half. In the latter case, the random values cause errors with probability 50\%. Therefore, the bit error rate caused by Evan's intervention, assuming a noiseless channel, is 25\%, similar to the QBER of the BB84 protocol under the same kind of attack. More complex attack scenarios may result in lower QBER. We show in Fig. 3-a and 3-b the QBER and $I_{AE}$ obtained numerically by simulating Evan's attacks using packets of length 10000 bits. The rate of Evan's intervention $\epsilon$ varies from 0 to 1. We can see that at $\epsilon=1$, i.e., when Evan intercepts all the qubits, we obtain the theoretical predictions justified earlier, QBER=0.25 and $I_{AE}=0.75$.
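These figures can be reproduced with a short Monte Carlo sketch such as the one below (our own illustrative code, independent of the simulation used for Fig. 3). It assumes Evan intercepts every pair and models a measurement in a mismatched basis as a fair coin flip:
\begin{verbatim}
import random

def trial():
    b = random.randint(0, 1)               # Alice's logical bit
    a = random.randint(0, 1)               # index of Alice's H qubit
    flags = [p == a for p in (0, 1)]       # per-qubit basis flags
    e = random.choice([True, False])       # Evan's common basis
    evan = [b if flags[p] == e else random.randint(0, 1)
            for p in (0, 1)]               # Evan measures, then resends
    g = random.choice([True, False])       # Bob's common basis
    bob = [evan[p] if e == g else random.randint(0, 1)
           for p in (0, 1)]
    # Bob decodes: agreement, else the qubit whose basis matched his.
    decoded = bob[0] if bob[0] == bob[1] else bob[flags.index(g)]
    return decoded != b

n = 100_000
print(sum(trial() for _ in range(n)) / n)  # approx 0.25
\end{verbatim}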
\begin{table*}[ht]
\centering
\begin{tabular}{@{}llclllll@{}}
\toprule
& Elsayed I & \multicolumn{1}{l}{Elsayed II} & Shimizu \cite{shimizu1999} & Beige \cite{beige2002} & Rong I \cite{rong2020-I} & Rong II \cite{rong2020-II} & Zou \cite{zou2014} \\ \midrule
Entangled photons & \multicolumn{1}{c}{} & \checkmark & \multicolumn{1}{c}{\checkmark} & & & \multicolumn{1}{c}{\checkmark} & \\
One-way quantum channel & \multicolumn{1}{c}{\checkmark} & \checkmark & \multicolumn{1}{c}{\checkmark} & \multicolumn{1}{c}{\checkmark} & & & \\
Classical Bob & & \multicolumn{1}{l}{} & & & \multicolumn{1}{c}{\checkmark} & \multicolumn{1}{c}{\checkmark} & \multicolumn{1}{c}{\checkmark} \\
Quantum memory & & & & & & & \multicolumn{1}{c}{\checkmark} \\ \bottomrule
\end{tabular}
\caption{ \label{table-1} Comparison between different deterministic quantum communication protocols, including the two protocols presented in this paper. The criteria are whether the protocol uses single photons or entangled photons, a one-way or a two-way quantum channel, whether Bob applies quantum operations on the received photons before the measurement in the computational basis, and whether a quantum memory to store the photons is needed. }
\end{table*}
Since single-photon sources are often not ideal, an emitted light pulse supposed to contain one photon can carry more or fewer than a single photon. This leaves room for a smart eavesdropper to perform a photon-number-splitting attack, whereby he splits the light pulse if it carries more than one photon and stores one photon at his disposal \cite{brassard2000}. Evan can then listen to the classical communication between Alice and Bob in order to get any clue about the right measurement basis for the stolen photon. Let us analyze the effect of this attack on our DSQC scheme without entanglement. If Evan succeeds in getting duplicates of every pair of qubits sent to Bob, he will wait for the classical communication between Alice and Bob to take place and listen to the choices Alice sends to Bob for certain locations. This scenario corresponds to 50\% of all transmitted qubit pairs. For the other 50\%, where Bob could decode the logical bit on his own without the need to communicate with Alice, Evan will have to do exactly what Bob did, that is, measure the two qubits in a similar, but random, basis. In half of these cases, Evan will get agreeing measurement outcomes and thus decode the logical bit correctly. Therefore, in this eavesdropping scheme, Evan will be able to decipher 75\% of the message overall. This is a substantial fraction, but it also assumes ideal circumstances on Evan's side, such as the ability to store photon pairs for a long time and the ability to duplicate every photon pair transmitted from Alice to Bob.
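Explicitly, the fraction of the packet Evan deciphers under these ideal assumptions decomposes as
\begin{equation*}
P_\text{Evan}
= \underbrace{\tfrac{1}{2}}_{\text{announced pairs}}
+ \underbrace{\tfrac{1}{2}\cdot\tfrac{1}{2}}_{\text{agreeing outcomes}}
= \tfrac{3}{4}.
\end{equation*}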
Let us now consider the same intercept-resend attack for the quantum encoder with entanglement. As before, Evan will intercept the two qubits, try to extract the maximum information, and resend them to Bob. While posing as Bob, Evan will randomly enable the controlled-Z gate, record his measurements, and then generate a quantum state of two qubits before he sends them to Bob. Evan will not, however, be able to decode the logical bits before the end of the packet transmission, when Alice communicates her choices for the two-qubit controlled-Z gate to Bob. He will then be able to decode all the bits with 100\% success probability, but at what cost? Since Evan has to resend the qubits to Bob on time, before he succeeds in decoding the logical bits, he will have to assume random values for the message bits and for Alice's random number generator bits and use these random bits in the circuit that generates the states he sends to Bob, causing unavoidable errors. We show in Fig. 3-c the numerical simulation of the QBER for this attack; we notice that for a 100\% intervention rate, QBER=0.5.
Another attack strategy by Evan that will surprisingly leave no trace, i.e., cause no errors at Bob's side, is to measure the first qubit only in the diagonal basis, $\{|-\rangle,|+\rangle\}$. In this case, Evan will always decode the logical bits sent via non-entangled states correctly, while he will decode the cases where entanglement is used with only a 50\% probability of success. Therefore, the information Evan retrieves in this case is only 75\% of the packet. This information will not be helpful to Evan, since the key is also encoded by the quantum encoder; therefore, Evan will not gain full access to either the key or the encrypted message. We performed a numerical simulation of $I_{AE}$ for this attack while varying the rate of Evan's intervention $\epsilon$ from 0 to 1. The results are presented in Fig. 3-d.
There is another class of secure direct communication protocols where, unlike the protocols presented in this paper, Bob is not a fully quantum agent. In these semi-quantum protocols, such as \cite{zou2014,rong2020-I,rong2020-II}, Bob does not have to perform complex quantum operations other than measuring the received photons in the computational basis or reflecting the photons back to Alice. In Table 1, we give a comparison between the two protocols presented in this paper, the semi-quantum protocols \cite{zou2014,rong2020-I,rong2020-II}, and two of the earliest DSQC protocols, namely the protocol of Beige et al. \cite{beige2002}, which uses single photons, and the protocol of Shimizu et al. \cite{shimizu1999}, which uses entangled photons. The latter protocol also intersperses the message with random check bits for detecting eavesdropping, like ours. We compare these seven protocols based on whether the protocol uses single photons or entangled photons, a one-way or a two-way quantum channel, whether Bob applies quantum operations on the received photons before the measurement in the computational basis, and whether a quantum memory to store the photons is needed.
In conclusion, we have shown that by combining the methods of classical cryptography and quantum encryption we can construct new protocols for deterministic secure quantum communication that encode both the key and the message with the same quantum algorithm. In the proposed scheme, we send the key through the quantum channel with the same quantum encryption technique we use for the message; it therefore represents an intermediate case between QSDC, where no key at all is used, and conventional DSQC techniques, where a key is sent over the classical channel. While for the quantum one-time pad protocol \cite{deng2004} a security check is performed before the message is sent, here we perform the check concurrently while sending the encrypted message. The proposed schemes require neither a two-way quantum channel nor a quantum memory. They are in principle similar in nature to the deterministic algorithms proposed in \cite{beige2001,beige2002}, which use the spatial and polarization degrees of freedom of single photons, where the message is encrypted by Alice using a secret crypto key before being sent to Bob and random control bits aimed at detecting eavesdropping are also inserted. A full security proof and the analysis of more complex attack strategies and of the effects of imperfect detectors and channel losses are required to ensure the security and practicality of the proposed protocols.
The author thanks Prof. Mark Hillery for the hospitality of Hunter College of CUNY where this work was initiated.
\bibliographystyle{andp2012}
|
2,869,038,156,925 | arxiv | \section{Introduction}
There is a growing interest in the community in making an embodied AI agent perform a complicated task following natural language directives. Recent studies of
vision-language navigation tasks (VLN) have made significant progress \cite{anderson2018vision,fried2018speaker,zhu2020vision}. However, these studies consider navigation in static environments, where the action space is simplified, and there is no interaction with objects in the environment.
To consider more complex tasks, a benchmark named ALFRED was developed recently \cite{shridhar2020alfred}. It requires an agent to accomplish a household task in interactive environments following given language directives. Compared with VLN, ALFRED is more challenging as the agent needs to (1) reason over a greater number of instructions and (2) predict actions from larger action space to perform a task in longer action horizons. The agent also needs to (3) localize the objects to manipulate by predicting the pixel-wise masks. Previous studies (e.g., \cite{shridhar2020alfred}) employ a Seq2Seq model, which performs well on the VLN tasks \cite{ma2019selfmonitoring}. However, it works poorly on ALFRED. Overall, existing methods only show limited performance; there is a huge gap with human performance.
In this paper, we propose a new method that leads to significant performance improvements. It is based on several ideas. Firstly, we propose to choose a single instruction to process at each timestep from the given series of instructions. This approach contrasts with previous methods that encode them into a single long sequence of word features and use soft attention to specify which instruction to consider at each timestep implicitly \cite{shridhar2020alfred,legg2020eccv,singh2020eccv}. Our method chooses individual instructions explicitly by learning to predict when the agent completes an instruction. This makes it possible to utilize constraints on parsing instructions, leading to a more accurate alignment of instructions and action prediction.
Secondly, we propose a two-stage approach to the interpretation of the selected instruction. In its first stage, the method interprets the instruction without using visual inputs from the environment, yielding a tentative prediction of an action-object sequence. In the second stage, the prediction is integrated with the visual inputs to predict the action to do and the object to manipulate. The tentative interpretation makes it clear to interact with what class of objects, contributing to an accurate selection of objects to interact with.
Moreover, we acquire multiple egocentric views of a scene as visual inputs and integrate them using a hierarchical attention mechanism. This gives the agent a wider field of view, leading to more accurate navigation. To be specific, converting each view into an object-centric representation, we integrate those for the multiple views into a single feature vector using hierarchical attention conditioned on the current instruction.
Besides, we propose a module for predicting precise pixel-wise masks of objects to interact with, referred to as the mask decoder. It employs the object-centric representation of the center view, i.e., multiple object masks detected by the object detector. The module selects one of these candidate masks to specify the object to interact with. In the selection, self-attention is applied to the candidate masks to weight them; the weighted masks are then combined with the tentative prediction of the action and object class and with the detector's confidence scores for the candidate masks.
The experimental results show that the proposed method outperforms all the existing methods by a large margin and ranks first in the challenge leaderboard as of the time of submission. A preliminary version of the method won the ALFRED Challenge 2020 \footnote{The ALFRED Challenge 2020 \href{https://askforalfred.com/EVAL}{https://askforalfred.com/EVAL}}. The present version further improved the task success rate in unseen and seen environments to 8.37\% and 29.16\%, respectively, which are significantly higher than the previously published SOTA (0.39\% and 3.98\%, respectively) \cite{shridhar2020alfred}.
\section{Related Work}
\subsection{Embodied Vision-Language Tasks}
Many studies have been recently conducted on the problem of making an embodied AI agent follow natural language directives and accomplish the specified tasks in a three-dimensional environment while properly interacting with it. Vision-language navigation (VLN) tasks have been the most extensively studied, which require an agent to follow navigation directions in an environment.
Several frameworks and datasets for simulating real-world environments have been developed to study the VLN tasks. The early ones lack photo-realism and/or natural language directions \cite{kempka2016vizdoom,ai2thor,wu2018building}. Recent studies consider perceptually-rich simulated environments and natural language navigation directions \cite{anderson2018vision,chen2019touchdown,hermann2020learning}.
In particular, since the release of the Room-to-Room (R2R) dataset \cite{anderson2018vision} that is based on real imagery \cite{chang2017matterport3d}, VLN has attracted increasing attention, leading to the development of many methods \cite{fried2018speaker,wang2019reinforced,ma2019selfmonitoring,tan2019learning,majumdar2020improving}.
Several variants of VLN tasks have been proposed. A study \cite{Nguyen_2019_CVPR} allows the agent to communicate with an adviser using natural language to accomplish a given goal.
In a study \cite{thomason2020vision}, the agent placed in an environment attempts to find a specified object by communicating with a human by natural language dialog.
A recent study \cite{suhr-etal-2019-executing} proposes interactive environments where users can collaborate with an agent by not only instructing it to complete tasks, but also acting alongside it.
Another study \cite{krantz2020beyond} introduces a continuous environment based on the R2R dataset that enables an agent to take more fine-grained navigation actions.
A number of other embodied vision-language tasks have been proposed such as visual semantic planning \cite{zhu2017visual,gordon2019should} and embodied question answering \cite{embodiedqa,gordon2018iqa,wijmans2019embodied,puig2018virtualhome}.
\subsection{Existing Methods for ALFRED}
As mentioned earlier, ALFRED was developed to consider more complicated interactions with environments, which are missing in the above tasks, such as manipulating objects. Several methods for it have been proposed so far. A baseline method \cite{shridhar2020alfred} employs a Seq2Seq model with an attention mechanism and a progress monitor \cite{ma2019selfmonitoring}, which is prior art for the VLN tasks. In \cite{singh2020eccv}, a pre-trained Mask R-CNN is employed to generate object masks. It is proposed in \cite{legg2020eccv} to train the agent to follow instructions and reconstruct them. In \cite{corona2020modularity}, a modular architecture is proposed to exploit the compositionality of instructions. These methods have brought about only modest performance improvements over the baseline. A concurrent study \cite{singh2020moca} proposes a modular architecture design in which the prediction of actions and object masks are treated separately, as with ours. Although it achieves notable performance improvements, the study's ablation test indicates that the separation of the two is not the primary source of the improvements.
Closely related to ALFRED, ALFWorld \cite{shridhar2021alfworld} has recently been proposed to combine TextWorld \cite{cote18textworld} and ALFRED, creating aligned environments that enable transferring high-level policies learned in the text world to the embodied world.
\section{Proposed Method}
The proposed model consists of three decoders (i.e., instruction, mask, and action decoders) with the modules extracting features from the inputs, i.e., the visual observations of the environment and the language directives. We first summarize ALFRED and then explain the components one by one.
\begin{figure*}[t!]
\vspace{-1.0em}
\centering
\includegraphics[width=\linewidth]{figures/model_v1.pdf}
\caption{
\textbf{Architecture overview of the proposed model.} It consists of the modules encoding the visual inputs and the language directives (Sec.~\ref{sec:feat_representation}), the instruction decoder with an instruction selector (Sec.~\ref{sec:inst_decoder}), the action decoder (Sec.~\ref{sec:action_decoder}), and the mask decoder (Sec.~\ref{sec:mask_decoder}).}
\vspace{-0.75em}
\label{fig_overview}
\end{figure*}
\subsection{Summary of ALFRED}
ALFRED is built upon AI2Thor \cite{ai2thor}, a simulation environment for embodied AI. An agent performs seven types of tasks in 120 indoor scenes that require interaction with 84 classes of objects, including 26 receptacle object classes.
For each object class, there are multiple visual instances with different shapes, textures, and colors.
The dataset contains 8,055 expert demonstration episodes of task instances.
They are sequences of actions, whose average length is 50, and they are used as ground truth action sequences at training time. For each of them, language directives annotated by AMT workers are provided, which consist of a goal statement $G$ and a set of step-by-step instructions, $S_{1},\ldots, S_{L}$. The alignment between each instruction and a segment of the action sequence is known. As multiple AMT workers annotate the same demonstrations, there are 25,743 language directives in total.
We wish to predict the sequence of agent's actions, given $G$ and $S_1,\ldots,S_L$ of a task instance.
There are two types of actions, navigation actions and manipulation actions. There are five navigation actions (e.g., \texta{MoveAhead} and \texta{RotateRight}) and seven manipulation actions (e.g., \texta{Pickup} and \texta{ToggleOn}). The manipulation actions accompany an object. The agent specifies it using a pixel-wise mask in the egocentric input image. Thus, the {\em outputs} are a sequence of actions with, if necessary, the object masks.
\subsection{Feature Representations} \label{sec:feat_representation}
\subsubsection{Object-centric Visual Representations}
Unlike previous studies \cite{shridhar2020alfred,singh2020eccv,legg2020eccv}, we employ the object-centric representations of a scene \cite{devin2018deep}, which are extracted from a pretrained object detector (i.e., Mask R-CNN \cite{he2017mask}). It provides richer spatial information about the scene at a more fine-grained level and thus allows the agent to localize the target objects better. Moreover, we make the agent look wider by capturing the images of its surroundings, aiming to enhance its navigation ability.
Specifically, at timestep $t$, the agent obtains visual observations from $K$ egocentric views. For each view $k$, we encode the visual observation by a bag of $N$ object features, which are extracted by the object detector. Every detected object is associated with a visual feature, a mask, and a confidence score. We project the visual feature into $\mathbb{R}^{d}$ with a linear layer, followed by a ReLU activation and dropout regularization \cite{srivastava2014dropout} to obtain a single vector; thus, we get a set of $N$ object features for view $k$, $V^{k}_{t} = (v^{k}_{t,1}, \ldots, v^{k}_{t,N})$. We obtain $V^{1}_{t}, \ldots, V^{K}_{t}$ for all the views.
\subsubsection{Language Representations}
We encode the language directives as follows. We use an embedding layer initialized with pretrained GloVe \cite{pennington2014glove} vectors to embed each word of the $L$ step-by-step instructions and the goal statement. For each instruction $i(=1,\ldots,L)$, the embedded feature sequence is inputted to a two-layer LSTM \cite{hochreiter1997long}, and its last hidden state is used as the feature $s_i\in\mathbb{R}^d$ of the instruction. We use the same LSTM with dropout regularization for all the instructions. We encode the goal statement $G$ in the same manner using an LSTM with the same architecture but different weights, obtaining $h_\text{G}\in\mathbb{R}^d$.
\subsection{Instruction Decoder} \label{sec:inst_decoder}
\subsubsection{Selecting Instructions}
Previous studies \cite{shridhar2020alfred,singh2020eccv,legg2020eccv} employ a Seq2Seq model in which all the language directives are represented as a {\em single sequence} of word features, and soft attention is generated over it to specify the portion to deal with at each timestep.
We think this method could fail to correctly segment the instructions over time, even with the employment of progress monitoring \cite{ma2019selfmonitoring}.
It also does not exploit two natural constraints on parsing the step-by-step instructions: they should be processed in the given order, and while the agent deals with one of them, the other instructions, especially the future ones, are of little importance.
We propose a simple method that can take the above constraints into account, which explicitly represents which instruction to consider at the current timestep $t$. The method introduces an integer variable $m_t(\in[1,L])$ storing the index of the instruction to deal with at $t$.
To update $m_t$ properly, we introduce a virtual action representing the {\em completion of a single instruction}, which we treat equally to the original twelve actions defined in ALFRED. Defining a new token \texttt{COMPLETE} to represent this virtual action, we augment each instruction's action sequence provided in the expert demonstrations so that it always ends with \texttt{COMPLETE}.
At training time, we train the action
decoder to predict the augmented sequences. At test time, the same decoder predicts an action at each timestep; if it predicts \texttt{COMPLETE}, this means completing the current instruction.
The instruction index $m_t$ is updated as follows:%
\begin{equation}
m_t=
\begin{cases}
m_{t-1} + 1,& \text{if } \mathrm{argmax}(p^\text{a}_{t-1}) = \texttt{COMPLETE}\\
m_{t-1}, & \text{otherwise},
\end{cases}
\end{equation}
where $p^\text{a}_{t-1}$ is the predicted probability distribution over all the actions at time $t-1$, which will be explained in Sec.~\ref{sec:action_decoder}.
The encoded feature $s_{m_t}$ of the selected instruction is used in all the subsequent components, as shown in Fig.~\ref{fig_overview}.
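In code form, the selector reduces to a few lines; the following sketch uses our own naming, with \texttt{s\_list} holding the encoded instruction features $s_1,\ldots,s_L$:
\begin{verbatim}
def select_instruction(m_prev, prev_action, s_list):
    # Advance to the next instruction only when COMPLETE is predicted.
    m = m_prev + 1 if prev_action == 'COMPLETE' else m_prev
    return m, s_list[m - 1]   # instructions are 1-indexed, as in the
                              # update rule above
\end{verbatim}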
\begin{figure}[!ht]
\centering
\includegraphics[width=1.0\linewidth]{figures/instruction_decoder.pdf}
\caption{An example illustrating how we reinitialize the hidden states of the two LSTMs in the instruction decoder by $s_{m_t}$ when $m_t = m_{t-1} + 1$ ($m_t = 4$).}
\label{fig:inst_decoder}
\vspace{-0.2cm}
\end{figure}
\subsubsection{Decoder Design}
As explained earlier, our method employs a two-stage approach for interpreting the instructions. The instruction decoder (see Fig.~\ref{fig_overview}) runs the first stage, where it interprets the instruction encoded as $s_{m_t}$ {\em without any visual input}. To be specific, it transforms $s_{m_t}$ into the sequence of action-object pairs without additional input. In this stage, objects mean the {\em classes} of objects.
As it is not based on visual inputs, the predicted action-object sequence has to be tentative. The downstream components in the model (i.e., the mask decoder and the action decoder) interpret $s_{m_t}$ again, yielding the final prediction of an action-object sequence, which is grounded on the visual inputs. Our intention with this two-stage approach is to increase prediction accuracy; we expect that using a prior prediction of (action, object class) pairs helps achieve more accurate grounding.
In fact, many instructions in the dataset, particularly those about interactions with objects, are sufficiently specific that they are uniquely translated into (action, object class) sequences with perfect accuracy, even without visual inputs. For instance, ``{Wash the mug in the sink}'' can be translated into (\texta{Put}, \texta{Sink}), (\texta{TurnOn}, \texta{Faucet}), (\texta{TurnOff}, \texta{Faucet}), (\texta{PickUp}, \texta{Mug}).
However, this is not the case with navigation instructions. For instance, ``{Go straight to the sink}'' may be translated into a variable number of repetitions of \texta{MoveAhead}; it is also hard to translate ``{Walk into the drawers}'' when it requires navigating to the left or right. Therefore, we deal with the manipulation actions and the navigation actions separately. In what follows, we first explain the common part and then the different parts.
Given the encoded feature $s_{m_t}$ of the selected instruction, the instruction decoder predicts the action and the object class to choose at $t$. To be precise, it outputs the probability distributions $p^\text{ia}_t(\in\mathbb{R}^{N_\text{a}})$ and $p^\text{io}_t(\in\mathbb{R}^{N_\text{o}})$ over all the actions and the object classes, respectively; $N_\text{a}$ and $N_\text{o}$ are the numbers of the actions and the object classes.
These probabilities $p^\text{ia}_t$ and $p^\text{io}_t$ are predicted separately by two LSTMs in an autoregressive fashion.
The two LSTMs are initialized whenever a new instruction is selected; to be precise, we reset their internal states as $h^\text{ia}_{t-1}=h^\text{io}_{t-1}=s_{m_t}$ for $t$ when we increment $m_t$ as $m_t=m_{t-1}+1$ (see the example in Fig.~\ref{fig:inst_decoder}). Then, $p^\text{ia}_t$ and $p^\text{io}_t$ are predicted as follows:
\begin{subequations}
\begin{align}
p^\text{ia}_{t} &= \mathrm{softmax}(W_\text{ia}\text{LSTM}(E_{\text{a}}(p^\text{ia}_{t-1}), h^\text{ia}_{t-1}) + b_\text{ia}),\\
p^\text{io}_{t} &= \mathrm{softmax}(W_\text{io}\text{LSTM}(E_{\text{o}}(p^\text{io}_{t-1}), h^\text{io}_{t-1}) + b_\text{io}),
\end{align}
\end{subequations}
where $W_\text{ia} \in \mathbb{R}^{N_\text{a} \times d}$,
$b_\text{ia}\in \mathbb{R}^{N_\text{a}}$,
$W_\text{io} \in \mathbb{R}^{N_\text{o} \times d}$,
and $b_\text{io}\in \mathbb{R}^{N_\text{o}}$ are learnable parameters;
$E_\text{a}$ maps the most likely action according to the last prediction $p^\text{ia}_{t-1}$ to its embedding vector, using a dictionary with $N_\text{a}\times d$ learnable parameters; $E_\text{o}$ does the same for the object classes. The predicted $p^\text{ia}_{t}$ and $p^\text{io}_{t}$ are transferred to the input of these LSTMs at the next timestep and are also inputted to the downstream components, the mask decoder and the action decoder.
Now, as they do not need visual inputs, we can train the two LSTMs in a supervised fashion using the pairs of instructions and the corresponding ground truth action-object sequences. We denote this supervised loss, i.e., the sum of the losses for the two LSTMs, by $\mathcal{L}_{\text{aux}}$. Although it is independent of the environment and we can train the LSTMs offline, we simultaneously train them along with other components in the model by adding $\mathcal{L}_{\text{aux}}$ to the overall loss. We think this contributes to better learning of instruction representation
$s_{m_t}$, which is also used by the mask decoder and the action decoder.
As mentioned above, we treat the navigation actions differently from the manipulation actions. There are three differences. First, we simplify the ground truth action sequence for the navigation actions if necessary. For instance, suppose an instruction ``{Turn left, go ahead to the counter and turn right}'' with a ground truth action sequence ``\texta{RotateLeft}, \texta{MoveAhead}, \texta{MoveAhead}, \texta{MoveAhead}, \texta{MoveAhead}, \texta{RotateRight}''. The repetition of \texta{MoveAhead} reflects the environment and cannot be predicted without visual inputs. Thus, by eliminating the repeated actions, we convert the sequence into the minimum-length one, ``\texta{RotateLeft}, \texta{MoveAhead}, \texta{RotateRight}'', and regard it as the ground truth sequence for training the instruction decoder. Second, as there is no accompanying object for the navigation actions, we use the object-class sequence ``\texta{None, None, None}'' as the ground truth. Third, in the case of navigation actions, we do not transfer the outputs $p^\text{ia}_{t}$ and $p^\text{io}_{t}$ to the mask decoder and the action decoder and instead feed constant (but learnable) vectors $p^\text{ia}_{\text{nav}} \in \mathbb{R}^{N_\text{a}}$ and $p^\text{io}_{\text{nav}} \in \mathbb{R}^{N_\text{o}}$ to them. As the instruction decoder learns to predict the minimum-length action sequences as above, providing such predictions would be harmful to the action decoder; we avoid this by feeding $p^\text{ia}_{\text{nav}}$ and $p^\text{io}_{\text{nav}}$.
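For reference, a minimal PyTorch-style sketch of the two autoregressive LSTMs is given below. The class and variable names are ours, shapes assume a batch dimension, and the internal states are reset to $s_{m_t}$ externally whenever a new instruction is selected (we reset both the hidden and cell states here for simplicity):
\begin{verbatim}
import torch
import torch.nn as nn

class InstructionDecoder(nn.Module):
    """Tentative (vision-free) action / object-class prediction."""
    def __init__(self, d, n_actions, n_objects):
        super().__init__()
        self.E_a = nn.Embedding(n_actions, d)   # action embeddings
        self.E_o = nn.Embedding(n_objects, d)   # object embeddings
        self.lstm_a = nn.LSTMCell(d, d)
        self.lstm_o = nn.LSTMCell(d, d)
        self.W_ia = nn.Linear(d, n_actions)
        self.W_io = nn.Linear(d, n_objects)

    def step(self, p_ia, p_io, state_a, state_o):
        # Feed back the most likely previous action / object class.
        h_a, c_a = self.lstm_a(self.E_a(p_ia.argmax(-1)), state_a)
        h_o, c_o = self.lstm_o(self.E_o(p_io.argmax(-1)), state_o)
        return (torch.softmax(self.W_ia(h_a), -1),
                torch.softmax(self.W_io(h_o), -1),
                (h_a, c_a), (h_o, c_o))
\end{verbatim}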
\subsection{Action Decoder} \label{sec:action_decoder}
The action decoder receives four inputs and predicts the action at $t$. The inputs are the encoded instruction $s_{m_t}$, the outputs $p_t^\text{ia}$ and $p_t^\text{io}$ of the instruction decoder\footnote{These are replaced with $p_{\text{nav}}^\text{ia}$ and $p_{\text{nav}}^\text{io}$ if $\mathrm{argmax}(p^\text{ia}_t)$ is not a manipulation action, as mentioned above.}, and the aggregated feature $v_t$ of the visual inputs, which is described below.
\subsubsection{Hierarchical Attention over Visual Features}
As explained in Sec.~\ref{sec:feat_representation}, we use the multi-view object-centric representation of visual inputs. To be specific, we aggregate $N\times K$ outputs of Mask R-CNN from $K$ ego-centric images, obtaining a single vector $v_t$. The Mask R-CNN outputs for view $k(=1,\ldots,K)$ are the visual features $(v^k_{t,1},\ldots,v^k_{t,N})$ and
the confidence scores
$(\rho^k_{t,1},\ldots,\rho^k_{t,N})$ of $N$ detected objects.
To do this feature aggregation, we employ a hierarchical approach, where we first search for the objects relevant to the current instruction in each view and then merge the features over the views to a single feature vector. In the first step, we compute and apply soft-attentions over $N$ objects for each view. To be specific, we compute attention weights
${\alpha}^{k}_{\text{s}} \in \mathbb{R}^N$ across
$v^{k}_{t,1},\ldots,v^{k}_{t,N}$
guided by $s_{m_t}$ as
\begin{equation}
\alpha^{k}_{\text{s},n} = \mathrm{softmax}(({v}^{k}_{t,n})^\top W^k_\text{s} s_{m_t}), \\
\end{equation}
where $W^k_\text{s} \in \mathbb{R}^{d \times d}$ is a learnable matrix, for $k=1,\ldots,K$.
We then apply the weights to the $N$ visual features multiplied with their confidence scores for this view, yielding a single $d$-dimensional vector as
\begin{equation}
{v}^{k}_{t} = \sum_{n=1}^{N} {\alpha}^{k}_{\text{s},n}
{v}^{k}_{t,n} \rho^k_{t,n},
\end{equation}
where $\rho^k_{t,n}$ is the confidence score associated with
$v^k_{t,n}$.
In the second step, we merge the above features $v^1_t,\ldots,v^K_t$ using
\emph{gated-attention}. We compute the weight $\alpha_{g}^k (\in \mathbb{R})$
of view $k(=1,\ldots,K)$ guided by $s_{m_t}$ as
\begin{equation}
\label{eq:gate-attn}
\alpha^{k}_{\text{g}} = \mathrm{sigmoid}(({v}^{k}_{t})^\top W_\text{g} s_{m_t}),
\end{equation}
where $W_\text{g}\in \mathbb{R}^{d \times d}$ is a learnable matrix.
Finally, we apply the weights to $\{v^k_t\}_{k=1,\ldots,K}$ to have the visual feature $v_t \in \mathbb{R}^d$ as
\begin{equation}
v_t = \sum_{k=1}^{K} {\alpha}^{k}_{\text{g}} {v}^{k}_{t}.
\end{equation}
As shown in the ablation test in the appendix,
the performance drops significantly when replacing the above gated-attention by soft-attention, indicating the necessity for merging observations of different views, not selecting one of them.
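To make the two attention steps concrete, the following PyTorch-style sketch (our own paraphrase of the equations above; the names and the use of einsum are ours) computes the aggregated visual feature $v_t$ from the per-view object features and confidence scores:
\begin{verbatim}
import torch

def aggregate_views(V, rho, s, W_s, W_g):
    # V:   (K, N, d) object features,  rho: (K, N) confidences
    # s:   (d,) current instruction feature
    # W_s: (K, d, d) per-view soft-attention matrices, W_g: (d, d)
    logits = torch.einsum('knd,kde,e->kn', V, W_s, s)
    alpha_s = torch.softmax(logits, dim=1)      # attention over N objects
    v_k = torch.einsum('kn,knd->kd', alpha_s * rho, V)  # per-view feature
    alpha_g = torch.sigmoid(torch.einsum('kd,de,e->k', v_k, W_g, s))
    return torch.einsum('k,kd->d', alpha_g, v_k)        # aggregated v_t
\end{verbatim}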
\subsubsection{Decoder Design}
The decoder predicts the action at $t$ from $v_t$, $s_{m_t}$, $p_t^\text{ia}$ and $p_t^\text{io}$. We employ an LSTM, which outputs the hidden state $h^\text{a}_{t}\in\mathbb{R}^d$ at $t$ from the previous state $h^\text{a}_{t-1}$ along with the above four inputs as
\begin{equation}\label{eq:action_pred}
h^\text{a}_{t} = \mathrm{LSTM}([v_t; s_{m_t}; p^\text{ia}_{t}; p^\text{io}_{t}], h^\text{a}_{t-1}),
\end{equation}
where $[;]$ denotes concatenation operation. We initialize the LSTM by setting the initial hidden state $h^\text{a}_0$ to $h_\text{G}$, the encoded feature of the goal statement; see Sec.~\ref{sec:feat_representation}. The updated state $h^\text{a}_{t}$ is fed into a fully-connected layer to yield the probabilities over the $N_\text{a} +1$ actions including \texttt{COMPLETE} as follows:
\begin{equation}
p^\text{a}_t = \mathrm{softmax}(W_\text{a} h^\text{a}_{t} + b_\text{a}),
\end{equation}
where $W_\text{a} \in \mathbb{R}^{(N_\text{a} + 1)\times d}$
and
$b_\text{a} \in \mathbb{R}^{N_\text{a} + 1}$.
We choose the action with the maximum probability as the predicted action. In the training of the model, we use the cross-entropy loss $\mathcal{L}_\text{action}$ computed between $p^\text{a}_t$ and the one-hot representation of the true action.
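A compact sketch of one decoder step in PyTorch-style Python follows (our own naming; note that the paper's notation keeps a single hidden state, whereas an LSTMCell also carries a cell state, and $h^\text{a}_0$ is set to $h_\text{G}$ outside this module):
\begin{verbatim}
import torch
import torch.nn as nn

class ActionDecoder(nn.Module):
    def __init__(self, d, n_actions, n_objects):
        super().__init__()
        # Input is the concatenation [v_t; s_mt; p_ia; p_io].
        self.cell = nn.LSTMCell(2 * d + n_actions + n_objects, d)
        self.head = nn.Linear(d, n_actions + 1)  # +1 for COMPLETE

    def forward(self, v_t, s_mt, p_ia, p_io, state):
        x = torch.cat([v_t, s_mt, p_ia, p_io], dim=-1)
        h, c = self.cell(x, state)
        p_a = torch.softmax(self.head(h), dim=-1)
        return p_a, (h, c)
\end{verbatim}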
\subsection{Mask Decoder} \label{sec:mask_decoder}
To predict the mask specifying an object to interact with, we utilize the object-centric representations
$V^c_t=(v^c_{t,1},\ldots,v^c_{t,N})$
of the visual inputs of the central view ($k=c$).
Namely, we have only to select one of the $N$ detected objects. This enables more accurate specification of an object mask than predicting a class-agnostic binary mask as in the prior work \cite{shridhar2020alfred}.
To do this, we first apply simple self-attention to the visual features $V^c_t$, aiming at capturing the relations between objects in the central view. We employ the attention mechanism inside the light-weight, single-head Transformer proposed in \cite{nguyenefficient} for this purpose, obtaining $\bar{\cal A}_{V^c_t}(V^c_t) \in \mathbb{R}^{N\times d}$. We then apply a linear transformation to $\bar{\cal A}_{V^c_t}(V^c_t)$
using a single fully-connected layer having weight $W\in \mathbb{R}^{d\times d}$ and bias $b\in \mathbb{R}^d$,
with a residual connection as
\begin{equation}
\hat{V^c_t} = \mathrm{ReLU}(\bar{\cal A}_{V^c_t}(V^c_t)\, W^\top +\mathbf{1}_{N} \cdot b^\top) + V^c_t,
\end{equation}
where $\mathbf{1}_N$ is the $N$-vector with all ones.
We then compute the probability $p^\text{m}_{t,n}$ of selecting $n$-th object from the $N$ candidates using the above self-attended object features along with other inputs $s_{m_t}$, $p^\text{ia}_{t}$, and $p^\text{io}_{t}$. We concatenate the latter three inputs into a vector $g^\text{m}_t = [s_{m_t}; p^\text{ia}_{t}; p^\text{io}_{t}]$ and then compute the probability as
\begin{equation}\label{eq:mask_pred}
p^\text{m}_{t,n} = \mathrm{sigmoid}((g^\text{m}_t)^\top W_\text{m} \hat{v}^{c}_{t,n}),
\end{equation}
where $W_\text{m}\in\mathbb{R}^{(d + N_\text{a} + N_\text{o}) \times d}$ is a learnable matrix. We select the object mask with the highest probability (i.e., $\mathrm{argmax}_{n=1,\ldots,N}(p^\text{m}_{t,n})$) at inference time.
At training time, we first match the ground truth object mask with the object mask having the highest IoU. Then, we calculate the BCE loss $\mathcal{L}_\text{mask}$ between the two.
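The selection step can be summarized by the following short sketch (our own naming), which mirrors the probability computation above and the argmax rule used at inference time:
\begin{verbatim}
import torch

def select_mask(V_hat, s_mt, p_ia, p_io, W_m):
    # V_hat: (N, d) self-attended object features of the center view
    # W_m:   (d + n_actions + n_objects, d) learnable matrix
    g = torch.cat([s_mt, p_ia, p_io], dim=-1)   # g^m_t
    p_m = torch.sigmoid(V_hat @ W_m.T @ g)      # (N,) selection probs
    return p_m.argmax()                         # index of the chosen mask
\end{verbatim}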
\section{Experiments}
\subsection{Experimental Configuration}
\paragraph{Dataset.}
We follow the standard procedure of ALFRED; 25,743 language directives over 8,055 expert demonstration episodes are split into the training, validation, and test sets. The latter two are further divided into two splits, called {\em seen} and {\em unseen}, depending on whether the scenes are included in the training set.
\paragraph{Evaluation metrics.}
Following ~\cite{shridhar2020alfred}, we report the standard metrics, i.e., the scores of Task Success Rate, denoted by \textbf{Task} and Goal Condition Success Rate, denoted by \textbf{Goal-Cond}. The \textbf{Goal-Cond} score is the ratio of goal conditions being completed at the end of an episode. The \textbf{Task} score is defined to be one if all the goal conditions are completed, and otherwise 0. Besides, each metric is accompanied by a path-length-weighted (PLW) score \cite{Anderson2018OnEO}, which measures the agent's efficiency by penalizing scores with the length of the action sequence.
\paragraph{Implementation details.}
\begin{table*}[t!]
\centering
\resizebox{1.00\textwidth}{!}{
\begin{tabular}{@{}laarrcaarr@{}}
\toprule
\multicolumn{1}{l}{\multirow{3}{*}{Model}}
& \mcp{4}{\textbf{Validation}} & & \mcc{4}{\textbf{Test}} \\
& \mcc{2}{\textit{Seen}} & \mcc{2}{\textit{Unseen}}
&
& \mcc{2}{\textit{Seen}} & \mcc{2}{\textit{Unseen}} \\
& \multicolumn{1}{b}{Task} & \multicolumn{1}{b}{Goal-Cond}
& \multicolumn{1}{c}{Task} & \multicolumn{1}{c}{Goal-Cond}
&
& \multicolumn{1}{b}{Task} & \multicolumn{1}{b}{Goal-Cond}
& \multicolumn{1}{c}{Task} & \multicolumn{1}{c}{Goal-Cond} \\
\cmidrule{1-5} \cmidrule{7-10}
\multicolumn{8}{l}{\bf Single view} \\
\multicolumn{1}{l}{\cite{shridhar2020alfred}} & $3.70$ ($2.10$) & $10.00$ ($7.00$) & $0.00$ ($0.00$) & $6.90$ ($5.10$) & & $3.98$ ($2.02$) & $9.42$ ($6.27$) & $0.39$ ($0.80$) & $7.03$ ($4.26$) \\[1pt]
\multicolumn{1}{l}{\cite{legg2020eccv}} &\multicolumn{1}{b}{-} & \multicolumn{1}{b}{-} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} & & ${3.85}$ (${1.50}$) & ${8.87}$ (${5.52}$) & ${0.85}$ (${0.36}$) & ${7.68}$ (${4.31}$) \\[1pt]
\multicolumn{1}{l}{\cite{singh2020eccv}} & ${4.50}$ (${2.20}$) & ${12.20}$ (${8.10}$) & ${0.70}$ (${0.30}$) & ${9.50}$ (${6.10}$) & & ${5.41}$ (${2.51}$) & ${12.32}$ (${8.27}$) & ${1.50}$ (${0.7}$) & ${8.08}$ (${5.20}$) \\[1pt]
\multicolumn{1}{l}{\cite{singh2020moca}} & ${19.15}$ (${13.60}$) & ${28.50}$ (${22.30}$) & ${3.78}$ (${2.00}$) & ${13.40}$ (${8.30}$) & & ${22.05}$ (${15.10}$) & ${28.29}$ (${22.05}$) & ${5.30}$ (${2.72}$) & ${14.28}$ (${9.99}$) \\[1pt]
\multicolumn{1}{l}{Ours (1 visual view)}& ${18.90}$ (${13.90}$) & ${26.80}$ (${21.90}$) & ${3.90}$ (${2.50}$) & ${15.30}$ (${10.90}$) & & $15.20$ ($11.79$) & $23.95$ ($20.27$) & $4.45$ ($2.37$) & $14.71$ ($10.88$) \\[1pt]
\cmidrule{1-5} \cmidrule{7-10}
\multicolumn{8}{l}{\bf Multiple views} \\
\multicolumn{1}{l}{Ours (5 visual views)} & {$\B{33.70}$ ($\B{28.40}$)} & $\B{43.10}$ ($\B{38.00}$) & $\B{9.70}$ ($\B{7.30}$) & $\B{23.10}$ ($\B{18.10}$) & & $\B{29.16}$ ($\B{24.67}$) & $\B{38.82}$ ($\B{34.85}$) & $\B{8.37}$ ($\B{5.06}$) & $\B{19.13}$ ($\B{14.81}$) \\[1pt]
\multicolumn{1}{l}{Ours (5 visual views)$^\diamond$} & ${14.30}$ (${10.80}$) & ${22.40}$ (${19.60}$) & ${4.60}$ (${2.80}$) & ${11.40}$ (${8.70}$) & & $12.39$ ($8.20$) & $20.68$ ($18.79$) & $4.45$ ($2.24$) & $12.34$ ($9.44$) \\[1pt]
\cmidrule{1-5}\cmidrule{7-10}
\multicolumn{1}{l}{Human} & \multicolumn{1}{b}{-} & \multicolumn{1}{b}{-} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} & & \multicolumn{1}{b}{-} & \multicolumn{1}{b}{-} & $91.00$ ($85.80$) & $94.50$ ($87.60$) \\
\bottomrule
\end{tabular}
}
\caption{\textbf{Task and Goal-Condition Success Rate.}
For each metric, the corresponding path weighted metrics are given in (parentheses).
The highest values per fold and metric are shown in \B{bold}. Our winning entry in the ALFRED Challenge 2020 is denoted with $^\diamond$ .
}
\label{tab:results}
\end{table*}
We use $K=5$ views:
the center view, \emph{up} and \emph{down} views with the elevation degrees of $\pm 15^{\circ}$, and \emph{left} and \emph{right} views with the angles of $\pm 90^{\circ}$.
We employ a Mask R-CNN model with ResNet-50 backbone that receives a $300 \times 300$
image and outputs $N = 32$
object candidates.
We train it, before training the proposed model, on 800K frames and the corresponding instance segmentation masks collected by replaying the expert demonstrations of the training set. We set the feature dimensionality $d=512$.
We train the model using imitation learning on the expert demonstrations by minimizing the following loss:
\begin{equation}
\mathcal{L} = \mathcal{L}_\text{mask} + \mathcal{L}_\text{action} +
\mathcal{L}_\text{aux}.
\end{equation}
We use the Adam optimizer with an initial learning rate of $10^{-3}$, which is halved at epochs 5, 8, and 10, and a batch size of 32 for 15 epochs in total. We use dropout with probability 0.2 for both the visual features and the LSTM decoder hidden states.
\subsection{Experimental Results} \label{sec:result}
\begin{table}[t!]
\centering
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{@{}ldcdcdcdc@{}}
\toprule
\multirow{2}{*}{Sub-goal}
& \multicolumn{2}{c}{\small\cite{shridhar2020alfred}}
& \multicolumn{2}{c}{\small\cite{singh2020moca}}
& \multicolumn{2}{c}{\textbf{Ours}}
\\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
& \multicolumn{1}{a}{Seen} & \multicolumn{1}{c}{Unseen}
& \multicolumn{1}{a}{Seen} & \multicolumn{1}{c}{Unseen}
& \multicolumn{1}{a}{Seen} & \multicolumn{1}{c}{Unseen}
\\
\hline
{Goto} & $51$ & $22$ & ${54}$ & $ {32}$ & $\B{59}$ & $ \B{39}$ \\
\hline
{Pickup} & $32$ & $21$ & ${53}$ & $ {44}$ & $\B{84}$ & $ \B{79}$ \\
{Put} & ${81}$ & ${46}$ & $62$ & $39$ & $\B{82}$ & $ \B{66}$ \\
{Slice} & $25$ & $12$ & ${51}$ & ${55}$ & $\B{89}$ & $ \B{85}$ \\
\hline
{Cool} & ${88}$ & ${92}$ & $87$ & $38$ & $\B{92}$ & $ \B{94}$ \\
{Heat} & ${85}$ & ${89}$ & $84$ & $86$ & $\B{99}$ & $ \B{95}$ \\
{Clean} & ${81}$ & $57$ & $79$ & $\B{71}$ & $\B{94}$ & $ {68}$ \\
{Toggle} & $\B{100}$ & ${32}$ & $93$ & $11$ & ${99}$ & $ \B{66}$ \\
\hline
{Average} & $68$ & $46$ & ${70}$ & ${47}$ &\B{87} & \B{74} \\
\bottomrule
\end{tabular}
}
\caption{
\textbf{Sub-goal success rate.} All values are in percentage. The agent is evaluated on the Validation set. Highest values per fold are indicated in \B{bold}.
}
\label{tab:res_subgoal}
\end{table}
Table~\ref{tab:results} shows the results. Our method shows significant improvements over the previous methods~\cite{shridhar2020alfred,legg2020eccv,singh2020eccv,singh2020moca} on all metrics. Our method also achieves better PLW (path length weighted) scores in all the metrics (indicated in the parentheses), showing its efficiency. Notably, our method attains an \textbf{8.37\%} success rate on the unseen test split, an approximately 20-fold improvement over the published result in \cite{shridhar2020alfred}. The comparatively high success rate in the unseen scenes indicates the method's ability to generalize to novel environments. Detailed results for each of the seven task types are shown in the appendix.
The preliminary version of our method, which won the international ALFRED Challenge 2020, performs below the present version. It differs in that $(p^\text{ia}_{t}, p^\text{io}_{t})$ are not forwarded to the mask decoder and the action decoder, and that the number of Mask R-CNN's outputs is set to $N=20$. It is noted that even with a single view (i.e., $K=1$), our model still outperforms~\cite{shridhar2020alfred,legg2020eccv,singh2020eccv} in all the metrics.
\paragraph{Sub-goal success rate.}
Following~\cite{shridhar2020alfred}, we evaluate the performance on individual sub-goals.
Table~\ref{tab:res_subgoal} shows the results.
It is seen that our method shows higher success rates in almost all of the sub-goal categories.
\subsection{Ablation Study}
We conduct an ablation test to validate the effectiveness of the components by incrementally adding each component to the proposed model. The results are shown in Table \ref{tab:ablation}.
\begin{table}[h!]
\begin{small}
\begin{adjustwidth}{-0.05cm}{}
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{@{}cccccc@{}}
\toprule
\multirow{3}{*}{Model} & \multicolumn{4}{c}{\bf Components} & \multicolumn{1}{c}{\bf Validation} \\
\cmidrule(r){2-5} \cmidrule(l){6-6}
& {Instruction} & {Two-stage} & {Multi-view} & Mask & \multicolumn{1}{c}{\multirow{2}{*}{Seen / Unseen}} \\
& {Selection} & {Interpretation} & {Hier. Attn} & Decoder \\
\cmidrule(r){1-5} \cmidrule(l){6-6}
1 & \xmark & \xmark & \xmark & \cmark & \multicolumn{1}{r}{2.8 / 0.5} \\
2 & \cmark & \xmark & \xmark & \cmark & \multicolumn{1}{r}{12.9 / 2.9} \\
3 & \cmark & \cmark & \xmark & \cmark & \multicolumn{1}{r}{18.9 / 3.9} \\%& \ \multicolumn{1}{r}{\Bl${9.51}$ (${-}$)} & \multicolumn{1}{r}{\Bl${}$ (${-}$)} & \Bl{${3.29}$ (${-}$)} & \multicolumn{1}{r}{\Bl${14.0}$ (${-}$)} \\%
4 & \cmark & \cmark & \xmark & \xmark & \multicolumn{1}{r}{3.8 / 0.7} \\%& \multicolumn{1}{r}{\Bl${9.51}$ (${-}$)} & \multicolumn{1}{r}{\Bl${}$ (${-}$)} & \Bl{${3.29}$ (${-}$)} & \multicolumn{1}{r}{\Bl${14.0}$ (${-}$)} \\%
5 & \cmark & \cmark & \cmark & \cmark & \multicolumn{1}{r}{33.7 / 9.7} \\
\bottomrule
\end{tabular}
}
\end{adjustwidth}
\label{tab:dataset_comparison}
\end{small}
\vspace{-0.5em}
\caption{
\textbf{Ablation study for the components of the proposed model.}
We report the success rate (Task score) on the validation seen and unseen splits. The {\xmark} mark denotes that a corresponding component is removed from the proposed model.
}
\vspace{-1em}
\label{tab:ablation}
\end{table}
\begin{figure*}[t!]
\includegraphics[width=\linewidth]{figures/cool_place_unseen.png}
\vspace{-0.2cm}
\caption{Our agent completes a \textbf{Cool} \& \textbf{Place} task ``\textit{Put chilled lettuce on the counter}" in an unseen environment.}
\label{fig:tasktype}
\vspace{-0.2cm}
\end{figure*}
The model variants 1-4 use a single-view input ($K=1$); they do not use multi-view inputs or the hierarchical attention method. Model 1 further discards the instruction decoder by replacing it with the soft-attention-based approach \cite{shridhar2020alfred}, which yields a different language feature $s_{\text{att}}$ at each timestep. Accordingly, $p^\text{io}_t$ and $p^\text{ia}_t$ are not fed to the mask/action decoders; we use $g^\text{m}_t = [s_\text{att}; h^\text{a}_t]$. These changes make the method almost unworkable. Model 2 retains only the instruction selection module, yielding $s_{m_t}$; it performs much better than Model 1. Model 3 adds the instruction decoder, which feeds $p^\text{io}_t$ and $p^\text{ia}_t$ to the subsequent decoders. It performs better than Model 2 by a large margin, showing the effectiveness of the two-stage method.
Model 4 replaces the mask decoder with the counterpart of the baseline method \cite{shridhar2020alfred}, which upsamples a concatenated vector $[g^m_t; v_t]$ by deconvolution layers. This change results in inaccurate mask prediction, yielding a considerable performance drop. Model 5 is the full model. The difference from Model 3 is the use of multi-view inputs with the hierarchical attention mechanism. It contributes to a notable performance improvement, validating its effectiveness.
\subsection{Qualitative Results}
\subsubsection{Entire Task Completion}
Figure \ref{fig:tasktype} visualizes how the agent completes one of the seven types of tasks. These are results for the unseen environment of the validation set. Each panel shows the agent's center view with the predicted action and object mask (if any) at different time-steps.
See the appendix for more results.
\subsubsection{Mask Prediction for Sub-goal Completion}
\begin{figure}[h!]
\begin{subfigure}{.45\columnwidth}
\centering
\includegraphics[width=\linewidth]{figures/vis1.png}
\caption{\cite{shridhar2020alfred}}
\label{fig:sfig1}
\end{subfigure} \hfill
\begin{subfigure}{.45\columnwidth}
\centering
\includegraphics[width=\linewidth]{figures/vis2.png}
\caption{Ours}
\label{fig:sfig2}
\end{subfigure}
\vspace{-0.1cm}
\caption{The prediction masks generated by Shridhar \emph{et al.} and our method, where the agents are moved to the same location to accomplish the {\bf Slice} sub-goal.}
\label{fig:vis_comparison}
\vspace{-0.2cm}
\end{figure}
Figure \ref{fig:vis_comparison} shows an example of the mask prediction by the baseline \cite{shridhar2020alfred} and the proposed method.
It shows that our method can predict a more accurate object mask when performing the {\bf Slice} sub-goal.
More examples are shown in the appendix.
Overall, our method shows better results, especially for difficult sub-goals such as \textbf{Pickup}, \textbf{Put}, and \textbf{Clean}, for which a target object needs to be chosen from a wide range of candidates.
\section{Conclusion}
This paper has presented a new method for interactive instruction following tasks and applied it to ALFRED. The method is built upon several new ideas, including the explicit selection of one of the provided instructions, the two-stage approach to the interpretation of each instruction (i.e., the instruction decoder), the object-centric representation of visual inputs obtained by hierarchical attention over multiple surrounding views (i.e., the action decoder), and the precise specification of objects to interact with based on the object-centric representation (i.e., the mask decoder). The experimental results have shown that the proposed method achieves superior performance in both seen and unseen environments compared with all existing methods. We believe this study provides a useful baseline framework for future studies.
\paragraph{Acknowledgments.}
This work was partly supported by JSPS KAKENHI Grant Number 20H05952 and JP19H01110.
{\small
\bibliographystyle{named}
\section{Introduction}
On 2013 September 29 a large solar filament rose smoothly, without any previous flare, vigorous dynamics, or significant brightening arcade. The filament lifted and afterwards created a two-ribbon flare, whose X-ray flux reached class C1.2 in the GOES 1--8~\AA\ band\footnote{\url{http://www.lmsal.com/solarsoft/last_events_20131001_2320/index.html}}. The eruption led to an Earth-directed halo coronal mass ejection (CME). The interplanetary plasma interacted with the Earth's magnetic field, creating a moderate geomagnetic storm with effects on the ground on 2013 October 2 \citep[the {\it Dst} geomagnetic index reached --75 nT; the storm classification follows][]{Gonzalez1994}\footnote{{\it Dst} data available at \url{http://wdc.kugi.kyoto-u.ac.jp/dst_provisional/201310/index.html}}. However, since the interplanetary medium and terrestrial effects are beyond the scope of this paper, they will be treated elsewhere. This eruption went unnoticed by flare early alerts and predictions, yet it produced a geomagnetic storm. It is therefore a very pertinent example, which can be used to obtain input for simulations and for solar and space weather predictions and warnings.
In this paper, we focus on the solar source and trigger of this filament eruption, which seems to be a supergranular-sized magnetic flux emergence underneath the filament. We investigate the reasons for this kind of undisturbed rise using Solar Dynamics Observatory (SDO) data from the Atmospheric Imaging Assembly (AIA) at various EUV wavelengths (AIA/SDO 304~\AA, 193~\AA, 211~\AA, and 94~\AA). Also, to explain the event, we obtained a close-up using Helioseismic and Magnetic Imager (HMI) magnetograms, continuum images, and linear polarization images. We also took advantage of the fully inverted HMI data in this work.
The review of \citet{Linker2003} states that only 20\% of CMEs are associated with a large flare. Flareless CMEs have been reported by \citet{Song2013}. \citet{Gosain2012} state that most of the filament eruptions related to CMEs pass undetected because of their low emission and, consequently, are not associated with flares. Therefore, relying only on flare occurrence may be misleading, since a region without flares can actually be CME-productive.
Some very recent and comprehensive summaries of the various models that can trigger the eruption of filaments and CMEs are given by \citet{Aulanier2010, Schmieder2013, Aulanier2014, Parenti2014} and references therein. The models include the "magnetic breakout model" \citep{Antiochos1998, Antiochos1999}; a highly twisted flux rope, as in the "torus instability" \citep{Torok2003, Torok2005, Kliem2006}; different instabilities, e.g. MHD \citep{Klimchuk1989, Sturrock1989}; catastrophic loss of equilibrium \citep{Forbes1991}; approaching polarities \citep{Forbes1995}; moving features and magnetic cancellation \citep{Ballegooijen1989, Forbes1991}; and the "tether cutting" model, implying reconnection in the arcades below the flux rope \citep{Moore1992, Moore2001}. Regarding magnetic flux emergence, filament eruptions due to flux emergence in the surroundings have been analysed in simulations by \citet{Chen2000}. The magnetic configuration of active and quiescent filaments and their CME productivity have been studied by \citet{Feynman1995}. Slow and fast eruptive phases have been studied by \citet{Sterling2005, Sterling2007}.
We consider the observational features of these models to choose the most plausible model for this phenomenon.
More particularly, different works are reviewed by \citet{Chen2011}: \citet{Zhang2008} studied events of flux emergence and cancellation 12 h prior to CME initiation, finding that 91\% are related. \citet{Wang1999} also found filament eruptions close to flux emergence regions. \citet{Jing2004} found that more than half of the eruptive filaments were related to flux emergence. \citet{Zhang2001} related relatively small flux cancellation areas to the filament eruption of the Bastille Day event.
However, many of these works were qualitative, using only the presence and closeness of the flux emergence without relating it to other magnitudes. Here we aim to describe the filament quantitatively, to assess its instabilities, and to describe how a flux emergence can influence a filament lift-off.
This paper is organized as follows: the observations are described in Section~\ref{s1}, covering the flux emergence and filament dynamics, along with the surrounding coronal holes. All calculations are also included in Section~\ref{s1}. The discussion and conclusions are presented in Sections~\ref{s2} and \ref{s3}, respectively.
\section{Observations of the emerging region and filament lift-off}\label{s1}
We used data provided by the space solar facility Solar Dynamics Observatory \citep[SDO,][]{Pesnell2012}. Data were acquired with the Atmospheric Imaging Assembly \citep[AIA,][]{Lemen2012} and the Helioseismic and Magnetic Imager \citep[HMI,][]{Scherrer2012}. The AIA multi-wavelength data comprise observations in 304 and 193~\AA, which sample the high chromosphere and transition region, and the corona, respectively. We also included AIA 211~\AA~data, which are useful as an intermediate-wavelength measurement since the filament opacity, compared to 304~\AA, is somewhat reduced. The AIA 94~\AA~data also probe the inner corona. The HMI data sample the Fe $\textsc{i}$ line at 6173~\AA~to generate line-of-sight (hereafter, LOS) photospheric magnetograms. This line samples a photospheric height of $\sim$100~km \citep{Fleck2011}. The data cadence for the general study is chosen to be 12~min from September 29 to October 1 for both the AIA and HMI instruments. In particular parts of the study, the cadence is reduced to 1~min or increased up to 180~min (3~h). Images are co-spatial and quasi-synchronized (within seconds). Pixel sampling for both instruments is 0\arcsec6.
All images are reduced with the SolarSoftWare (SSW) $\it{read\_sdo.pro}$ and $\it{aia\_prep.pro}$ routines. Next, the frames are corrected for rotation and aligned.
Figure~\ref{fig1} shows context images. The left panel shows the whole solar hemisphere in AIA 193~\AA, with the filament and two surrounding coronal holes. The right panel shows the filament in AIA 304~\AA\ as a dark, inverse-S-like, elongated structure located between two opposite polarities of NOAA AR11850 and the following active region. The filament was visible close to the trailing boundary of AR 11850, with its southern part close to its positive facular polarity, and the spine located at the polarity inversion line (PIL), with a facular negative polarity on the left and a positive one on the right. As seen in AIA 304 and 193~\AA\ during the almost five days prior to the eruption, the filament developed from a short structure (Sept 25, 21:00~UT) into an inverse-S-shaped, elongated one in just one day. This fully developed filament evolved within a dark area in 193~\AA, which is an open-field region. Its length grew from about 0.40$R_{\sun}$ on Sept 25 to $\sim$0.63$R_{\sun}$ two days later. The filament showed very active mass motion along the spine and barb. At this stage the southern part was more elevated than the northern part. The heights looked similar on Sept 28, 10:00~UT, and this situation seems to have reversed by Sept 28, 19:00~UT. The magnetic flux emergence started around Sept 29 at 00:00~UT, and its appearance of pushing the filament from below is conspicuous. This emergence displayed small pores in the continuum. There was a positive polarity where one of the barbs was located, and this region seemed to attach the filament to the surface. The apex of the filament, located somewhat north of the barb, rose around 21:00~UT. Eventually the filament produced a two-ribbon flare on Sept 29 at 21:43~UT, which reached a maximum GOES X-ray flux of C1.2. The southern leg detached early in the flare, and the northern leg produced a shower of coronal rain on Sept 29 at 23:29~UT, lasting around one hour. Thread-like features of the filament feet appeared to fall down.
The subsequent CME appeared in LASCO at 22:24~UT, with a plane-of-the-sky (POS) speed of $\sim$ 600 {${\rm km}\:{\rm s}^{-1}\:\!\!$}. The halo CME is reported in the CACTUS catalog \citep[\url{http://sidc.oma.be/cactus/catalog/LASCO/2_5_0/qkl/2013/09/latestCMEs.html}, ][]{Robbrecht2004, Robbrecht2009}. However, the speed reported in the LASCO catalog (\url{http://cdaw.gsfc.nasa.gov/CME_list/}) is 1179 {${\rm km}\:{\rm s}^{-1}\:\!\!$}. This discrepancy is due to the external CME shell, which is faster than the filament.
\begin{figure*}
\begin{center}
\includegraphics[width=0.52 \linewidth]{./context1.pdf}~~
\includegraphics[width=0.52 \linewidth]{./contextha2.pdf}
\caption{Context figures of the filament. The grid spacing is 15\mbox{$^{\circ}$}. {\it Left:} Image in AIA 193~\AA, on 2013 Sept 29, at 00:00~UT. The image is clipped from 50 to 5000 DN. The filament is located at 20$\mbox{$^{\circ}$}$N 15$\mbox{$^{\circ}$}$W, with two coronal holes surrounding it at coordinates 40$\mbox{$^{\circ}$}$N 20$\mbox{$^{\circ}$}$W (CH1) and 0$\mbox{$^{\circ}$}$N 7$\mbox{$^{\circ}$}$W (CH2).
{\it Right:} Context image showing a derotated AIA 304~\AA~image displaying the filament, the positive polarity (green), and the negative polarity (blue). The different colour shades are explained in the caption of Fig.~\ref{fig2}. The bipolar flux emergence is located at [250\hbox{$^{\prime\prime}$}, 200\hbox{$^{\prime\prime}$}], enclosed in a white square. A movie available in the online database shows the temporal evolution of the whole visible hemisphere from 21:00 to 22:09 UT in the AIA 304~\AA~channel. In the movie, the FOV of the right panel of this figure is marked by a black rectangle, and the emergence by a white square. The green and blue contours have the same meaning as in the right panel.}
\label{fig1}
\end{center}
\end{figure*}
\subsection{Magnetic flux emergence}
We selected AIA 304~\AA~images and HMI magnetograms with a cadence of 12 min, from Sept 29 00:00~UT to Oct 1 10:00~UT. We analysed AIA 304~\AA~close-up images with overlaid HMI magnetograms of the aforementioned filament barb (located at $\sim$ 18\mbox{$^{\circ}$} N, 15\mbox{$^{\circ}$} W in the left panel of Fig.~\ref{fig1}), which appears to be the location where the photospheric magnetic flux emergence takes place. The panels in Fig.~\ref{fig2} show the evolution through selected frames\footnote{The temporal evolution shown in Fig.~\ref{fig2} is available in the on-line edition.}. This barb was already close to a positive polarity on Sept 29 at 00:00~UT. A large magnetic supergranular extent then appeared, as seen in the HMI magnetograms. This large supergranular emergence exhibited mixed polarities at first; later on, the positive and negative polarities drifted, rearranging as ordinary network elements. Three hours later, the main negative polarity started developing a pore, and the positive polarities also started developing small pores in the following hours. Six hours later, this emergence extent, consisting of positive polarities {\it (green)} on the left and negative polarities {\it (blue)} on the right, had moved apart up to $\sim$ 25\hbox{$^{\prime\prime}$}\ (panel 1 in Fig.~\ref{fig2}), and it reached a maximum size of $\sim$ 33\hbox{$^{\prime\prime}$}~around 13:00 UT. At this moment, the negative polarity peaked at --1000~G, while the positive one reached about 800~G. Therefore, the magnetic field of both polarities increased from around +(-)200 to +(-)900~G in only six hours. On Sept 30, 11:24~UT, a negative polarity emerged and drifted to the right side with other negative-polarity elements. These magnetic rearrangements are visible for the whole period, and the loops in AIA 304~\AA~mark the polarity correspondence; however, no pronounced change in polarity is noticeable in the HMI magnetograms, nor is shear noticeable in the AIA 304 or 193~\AA~data. At the end of the sequence, the polarities diminished their flux and approached each other. The magnetic emergence seems to push the barb of the filament from below during the whole event.
In the second row of panels in Fig.~\ref{fig2}, we plot AIA 211~\AA~images sampled mainly by two lines: Fe~$\textsc{xiv}$, corresponding to $\log T \sim 6.30$, and Fe~$\textsc{xiii}$, corresponding to $\log T \sim 6.25$ \citep{ODwyer2010}. This wavelength turns out to be well suited to observing flux emergence patches under filaments, since the S/N is good, it samples high temperatures, and the opacity of the filament is reduced but not totally suppressed.
We also present evidence of an EUV counterpart of the flux emergence in 94~\AA\ in the third row of panels in Fig.~\ref{fig2}. These images sample a temperature range in the corona \citep{ODwyer2010} with two lines: Fe~$\textsc{x}$, corresponding to $\log T \sim 6.05$, which samples the inner quiet corona, and Fe~$\textsc{xviii}$, corresponding to $\log T \sim 6.85$, which samples the flaring range.
The region increased its intensity from 6 DN to 30 DN. In \citet{ODwyer2010}, the quiet-sun intensity is synthetically defined as 6 DN. Therefore, the intensity of this emerging region is five times that of the quiet sun, and 10\% of the maximum full-disk flare intensity in this period.
This observational fact may suggest rapid connectivity between the photospheric longitudinal and transverse fields and the corona, since the signal starts to increase from 01:00 UT onwards, as shown in Fig.~\ref{fig2} and in the movie added as on-line material.
After the flare, we can see that opposite magnetic polarities rearrange and sometimes approach each other and disappear, displaying a brightening between them as a cancellation signature. Among the post-flare panels, the last one is a very clear case.
Since we want to measure the magnetic flux emergence and check whether magnetic cancellation takes place, we compute the magnetic flux and magnetic flux density in this small area in HMI LOS magnetograms. We set a threshold of +(-)90~G, that is, $\approx$ 10$\sigma$ for 45-s magnetograms \citep{Liu2012}. This threshold is high enough to account for the flux computations in the inter-network and network while avoiding noise effects. In all computations, the filling factor is taken to be 1.
We calculated the signed (positive and negative) magnetic flux separately, as shown in the left panel of Fig.~\ref{fig3}. The magnetic flux has been corrected for the heliocentric angle ($\mu$).
The vertical bars mark the flare onset. The starting time is Sept 29 at 00:00~UT. The positive flux reaches a maximum value at $\Delta t$=600~min, but there is a plateau from $\Delta t$=600~min to $\Delta t$=960~min (from 10 to 16~h UT). Both positive and negative fluxes start to decrease around Sept 29 at 12:00~UT. The maximum unsigned flux is $\sim$3.4$\times$10$^{20}$ Mx, and the unsigned flux rate is 4.29$\pm$0.01 $\times$10$^{17}$~Mx min$^{-1}$. Calculating the emergence flux rate (up to the middle of the plateau) yields + 2.23$\pm$0.07 $\times$10$^{17}$~Mx min$^{-1}$; considering the first maximum instead, the rate is +2.60$\pm$0.08 $\times$10$^{17}$~Mx min$^{-1}$.
We performed the same operation for the negative polarity, obtaining --2.52$\pm$0.07 $\times$10$^{17}$~Mx min$^{-1}$.
For the negative polarity, a clear decay is seen in both the magnetic flux and the flux density, as shown in Fig.~\ref{fig3}. The decay rate of this negative area is 5.2$\pm$1.2 $\times$10$^{16}$~Mx min$^{-1}$, which is much slower than the emergence rate.
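For reproducibility, the core of these flux computations can be sketched in a few lines; the array names, the pixel area, and the exact order of thresholding and $\mu$-correction below are our own assumptions rather than the exact pipeline.
\begin{verbatim}
# Minimal numpy sketch of the signed-flux computation.
# blos: HMI LOS magnetogram (G); mu: heliocentric-angle cosine
# map; pix_area: pixel area in cm^2 (all assumed inputs).
import numpy as np

def signed_fluxes(blos, mu, pix_area, threshold=90.0):
    b = blos / mu                            # mu-corrected field
    pos = np.where(b >= threshold, b, 0.0)
    neg = np.where(b <= -threshold, b, 0.0)
    return pos.sum() * pix_area, neg.sum() * pix_area  # Mx

def flux_rate(t_min, flux):
    # emergence/decay rate as the slope of a linear fit (Mx/min)
    return np.polyfit(t_min, flux, 1)[0]
\end{verbatim}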
The signed magnetic flux densities, shown in the right panel of Fig.~\ref{fig3}, are both around $\sim$+(-)250~G and follow the same trend as the signed magnetic flux. Two peaks appear in the positive magnetic flux density some time after the flare, probably caused by recurrent magnetic intensification in the area (see Section~\ref{s2}).
To rule out any possible contribution from other areas to the flux emergence in the region, we also analysed three different areas where magnetic emergence is detected. The first is close to the main sunspot of the active region [700\hbox{$^{\prime\prime}$}, 90\hbox{$^{\prime\prime}$}]; this area was actually flare-productive on Sept 29 at 05:11 UT, with the onset of a C1.6 flare. In the active region, the positive flux decreased from 2$\times$10$^{21}$~Mx by about 25\%, and the negative flux increased from --1.5$\times$10$^{21}$~Mx by 16\%, with both fluxes becoming equal 2.5~h before the flare. Furthermore, we studied the areas around coordinates [400\hbox{$^{\prime\prime}$}, 150\hbox{$^{\prime\prime}$}] and [400\hbox{$^{\prime\prime}$}, 30\hbox{$^{\prime\prime}$}], corresponding to supergranular voids, where magnetic flux emergence is also observed. For these regions and the threshold set, the flux emergence rate is negligible.
We also computed the total magnetic flux of the whole AR, from September 24 00:00 UT to Sept 29 00:00 UT. The positive flux started at 0.9$\times$10$^{22}$~Mx, peaking on Sept 26 at 1.3$\times$10$^{22}$~Mx. The negative flux started at --1.1$\times$10$^{22}$~Mx, peaking at the same time. At the time of the flare, the fluxes were similar to those of Sept 24, keeping the decreasing trend. The flux of the emergence amounts to around 1\% of the total flux of the AR.
We also measured the expansion velocity of the emergence, with its two main opposite polarities, from Sept 29~01:00~UT to 13:00~UT with different methods. The first method uses the magnetic centroids, as detailed in \citet{Balmaceda2010}, with the distance computed between the positive and negative polarities above the threshold of +(-)90 G. With this method, the expansion velocity is 0.165$\pm$0.006~{${\rm km}\:{\rm s}^{-1}\:\!\!$}. The second method is visually measuring the distance between the two main polarities (taken as a diameter, from which the radius is obtained), yielding 0.100$\pm$0.003 {${\rm km}\:{\rm s}^{-1}\:\!\!$}. The third procedure is the one described in \citet{Palacios2012}: measuring the area, assuming it to be circular, and calculating the radius from it. This yields a velocity of 0.157$\pm$0.004 {${\rm km}\:{\rm s}^{-1}\:\!\!$}.
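A minimal sketch of the centroid-based method follows; the input interface (a list of magnetograms with known cadence and image scale) is an assumption.
\begin{verbatim}
# Sketch of the centroid-based expansion velocity (first method).
import numpy as np

def centroid(b, sign, threshold=90.0):
    # flux-weighted centroid of one polarity above the threshold
    mask = sign * b >= threshold
    w = np.abs(b[mask])
    ys, xs = np.nonzero(mask)
    return np.array([np.average(xs, weights=w),
                     np.average(ys, weights=w)])

def expansion_velocity(mags, dt_s, km_per_pix):
    # slope of a linear fit to the polarity separation, in km/s
    sep = np.array([np.linalg.norm(centroid(b, 1) - centroid(b, -1))
                    for b in mags]) * km_per_pix
    t = np.arange(len(mags)) * dt_s
    return np.polyfit(t, sep, 1)[0]
\end{verbatim}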
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\linewidth, angle=0.]{./supermap_multi_rgb_final_2.pdf}
\caption{{\it Top row}: frame sequence of AIA 304~\AA~images of the filament's barb {\it (darker~areas)} with the LOS magnetic field superimposed. Shades of green (from lighter to darker) mark contour levels of 90, 200, 500, 600, and 700 G, while shades of blue mark -90, -200, -500, -700, and -900 G for the negative polarities. The frame size is 100\hbox{$^{\prime\prime}$} $\times$ 100\hbox{$^{\prime\prime}$}. The sequence timing aims to show the magnetic flux emergence (panel 1), pre-flare (panel 2), flare (panel 3), and decay (panel 4), from left to right. {\it Middle and bottom rows}: same temporal sequence in AIA 211 and 94~\AA. A movie is issued as on-line material. }
\label{fig2}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.38]{./flux11.pdf}
\caption{{\it Left:} computed magnetic flux on positive patches (solid line), on negative patches (dashed line), signed magnetic flux (dotted-dashed line), unsigned magnetic flux (dotted line). A vertical line marks the time of the flare at 21:43~UT. {\it Right:} mean values of the magnetic flux density of the positive (solid) and negative (dashed) regions.}
\label{fig3}
\end{center}
\end{figure*}
\subsubsection{Linear polarization and transverse field of the flux emergence}
SDO/HMI provides full-disk imaging of the Stokes parameters, in the form of HMI full-disk Stokes spectrograms ({\it HMI.S{ \_}720s} data series). We used Stokes {\it U} and {\it Q} images to create linear polarization images. These images are created with six wavelength points \citep[line sampling of about 69 m\AA~from the line centre;][]{Schou2012, Centeno2011} in the Fe~$\textsc{i}$ line at 6173~\AA. Using the standard SSW routine for HMI/SP reduction, images are accumulated and processed. Profiles are extracted from the images and normalized by the red-most wavelength point in Stokes $I$ ($I_{6}$).
For each frame, we computed the linear polarization using the following formula \citep[e.g.][]{Solanki2010, Pillet2011}:
\begin{align}
L_{pol}=\frac{1}{6}\sum^{6}_{i=1}\frac{ \sqrt{U_{i}^{2}+Q_{i}^{2}}}{ I_{6}}
.\end{align}
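This formula translates directly into a few lines of code (a sketch; the array layout is an assumption).
\begin{verbatim}
# Direct transcription of the linear-polarization formula.
# stokes_u, stokes_q: (6, ny, nx) wavelength samples;
# i6: (ny, nx) red-most Stokes I image (assumed layout).
import numpy as np

def linear_polarization(stokes_u, stokes_q, i6):
    return np.mean(np.sqrt(stokes_u**2 + stokes_q**2), axis=0) / i6
\end{verbatim}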
At this stage, we also used HMI data inverted by the Milne-Eddington inversion code VFISV \citep{Borrero2011, Centeno2014}. These data consist of the magnetic field strength, azimuth, and inclination, from which we compute and plot the transverse field through $B\sin(\gamma)$, with $\gamma$ the inclination with respect to the LOS, as described in \citet{Hoeksema2014}. For a general comparison between the LOS magnetograms and the LOS magnetic field obtained by the weak-field approximation from the Stokes profiles in active regions, see \citet{Palacios2015}.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.415]{./map_pol1.pdf}
\includegraphics[scale=0.415]{./map_pol2.pdf}
\includegraphics[scale=0.415]{./map_linpol2.pdf}
\caption{{\it Left and centre:} AIA 94~\AA~images at different instants with the LOS magnetic field superimposed, with the same contours as in Fig.~\ref{fig2}. Orange and red contours mark levels of 180, 200, 250, 400, and 500 G in the transverse field. {\it Right:} linear polarization map at 03:48 UT, plotted in logarithmic colour scale.}
\label{figpol}
\end{center}
\end{figure*}
We studied the transverse-field magnetic flux density, using a threshold of 180 G, i.e. double that of the longitudinal field. The magnetic flux density is shown in Fig.~\ref{figpol}. The left and central panels show both the longitudinal (green and blue) and transverse (shades of orange) fields, over a 94~\AA~background, at two different instants: 03:48 and 12:00 UT. The rightmost panel shows the linear polarization at 03:48 UT, plotted in logarithmic colour scale. The maximum linear polarization is 1.5\%.
The structure of the transverse field in the blob is not uniform: it creates bridges between the main positive and negative polarities. In the first four hours, this situation is conspicuous. After that, the transverse field patches remain on the main polarities because of some inclination of the mainly vertical magnetic field. Occasional cosmic-ray showers alter the most sensitive part of the polarization images, i.e. the linear polarization, which translates into a noisy-looking transverse magnetic field.
\subsection{Filament rising}
We calculated the rising speed of the apex of the filament on de-rotated AIA 304~\AA~images, also provided in the on-line material, whose detailed reference frame is shown in the right panel of Fig.~\ref{fig1}. The cadence of this series is 1~min from 21:00 to 22:00~UT. We examined the previous frames from 20:00-21:00~UT, but the rising is negligible in that period. As shown in Fig.~\ref{fig4}, the filament kept a constant height during 22~min, and the lift-off took around 38~min. The lift-off of the filament started well before the flare onset at 21:43~UT. The height was estimated on de-rotated images, considering a fiducial point [340\hbox{$^{\prime\prime}$}, 340\hbox{$^{\prime\prime}$}] that lies on the axis defined by the middle point of the two-ribbon flare and the footpoints. The distance between the footpoints is around 0.8 $R_{\sun}$. The apex height was measured, although the southern part of the filament rose more actively during the first 20~min. These measurements are indicated with triangles in Fig.~\ref{fig4}. The visual error for the uncorrected height is taken as 3 pixels of a rebinned image of 1024 $\times$ 1024 pixels, which translates into 5220~km (not shown). We corrected for the projection of the heliocentric angle, i.e. dividing by the sine of the heliocentric angle, assuming the eruption follows the local surface normal direction, as shown in the plots of Fig.~\ref{fig4}. The mean velocity from the flare to the end at 22:00~UT is 188$\pm$5~{${\rm km}\:{\rm s}^{-1}\:\!\!$}, and the corresponding acceleration in this period is 0.065$\pm$0.007~{${\rm km}\:{\rm s}^{-2}\:\!\!$}. The maximum velocity is 312~{${\rm km}\:{\rm s}^{-1}\:\!\!$}. (Considering the whole 38-min rising period, the velocity is 115$\pm$5~{${\rm km}\:{\rm s}^{-1}\:\!\!$}~and the acceleration is 0.049$\pm$0.001~{${\rm km}\:{\rm s}^{-2}\:\!\!$}.)
For this event, we fit the ascent height to different functions, since this can shed light on the physical mechanism that triggered the eruption. We tried exponential, $\it{log-log}$, and parabolic functions, and evaluated them with the (unreduced) merit function $\chi^{2}$. The exponential fit to the uncorrected time-height data yields $\chi^{2}$=1.7, while the $\it{log-log}$ fit yields $\chi^{2}$=5.9. For the projection-corrected height, the fitting is even better: the exponential fit yields $\chi^{2}$=0.9, while the $\it{log-log}$ fit yields $\chi^{2}$=3.4. Parabolic fits give very large $\chi^{2}$.
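A sketch of such a fit is given below; the particular exponential parameterization and the error model are assumptions, not necessarily the exact functional forms used here.
\begin{verbatim}
# Exponential height-time fit and its (unreduced) chi^2.
# t: time (min); h: apex height (km); sigma: visual height error.
import numpy as np
from scipy.optimize import curve_fit

def exp_model(t, h0, a, tau):
    return h0 + a * np.exp(t / tau)

def fit_chi2(t, h, sigma):
    popt, _ = curve_fit(exp_model, t, h,
                        p0=(h[0], 1.0, 10.0), maxfev=10000)
    resid = (h - exp_model(t, *popt)) / sigma
    return popt, np.sum(resid**2)
\end{verbatim}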
\begin{figure*}
\begin{center}
\includegraphics[scale=0.33]{./veloc_lift_final.pdf}
\includegraphics[scale=0.33]{./veloc_lift_c2_c3x_final.pdf}
\caption{{\it Left:} height versus time of the filament apex. Two plots are shown: triangles represent the uncorrected height data, and the solid line indicates the height after correcting for the heliocentric angle projection. Error bars are computed considering the visual error on height and the angle correction. Time starts at 21:00~UT and covers 1~h. In the last 38~min, the height starts increasing. A vertical bar marks the flare onset at 21:43~UT. {\it Right}: height-time diagram, including LASCO C2 (diamonds) and C3 (asterisks) data.}
\label{fig4}
\end{center}
\end{figure*}
We also completed the study with SOHO \citep{Domingo1995} LASCO C2 and C3 level 1 data \citep{Brueckner1995}. After centring and rotating, we performed image base differences and then applied a Sobel operator for enhancement. We measured the apex in the filament images. The plate scale for C2 is 11.9\hbox{$^{\prime\prime}$} pix$^{-1}$, and for C3 it is 56\hbox{$^{\prime\prime}$} pix$^{-1}$. We show the whole time-height sequence for AIA 304, C2, and C3 in Fig.~\ref{fig4} (right). The $\chi^{2}$ for the whole sequence is 3.80.
\subsection{Potential field modelling and critical decay index for the filament region and emergence}\label{pfsss}
We used the Potential Field Source Surface package available in SSW \citep[{\it PFSS},][]{Schrijver2003} to assess the critical height of the filament through the decay index of the external field $n$, since we aim to know whether the filament was already unstable before the emergence or whether the emergence made it unstable. The underlying model for the decay index $n$ is the torus instability. The input of the {\it PFSS} model is a synoptic magnetogram. We used the highest-resolution input data, the available daily quasi-synoptic HMI magnetograms, obtained at 12:00 UT on Sept 25, 26, 28, and 29 (Sept 27 was not available). These synoptic maps were not derotated, so care had to be taken to identify the areas. We set the {\it PFSS} computation grid to 800 points in solar longitude, 400 in latitude, and 66 points in height, covering from 1$R_{\sun}$ to 2$R_{\sun}$. Therefore, the spatial sampling of the grid is 0.45\mbox{$^{\circ}$}~per grid point in longitude and latitude, and around 10600 km per grid point in height. We include a context image of the {\it PFSS} loop set for this region on Sept 29, 18:00 UT (top left panel of Fig.~\ref{pfss_cuts}). The rendered height is in agreement with the estimated filament height.
Then we estimated the decay index of the external field $n$, used recently in \citet[][and references therein]{Kliem2006, Torok2007, Zuccarello2014}, as
\begin{align}
n=-R\,\frac{\partial \ln(B_{ex})}{\partial R}
.\end{align}
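Numerically, $n$ can be evaluated on the radial grid by finite differences, as in the following sketch (assuming a 1-D radial sampling of the external-field strength).
\begin{verbatim}
# Decay index on a radial grid; b_ex is the external-field
# strength sampled at radii r (assumed 1-D arrays).
import numpy as np

def decay_index(r, b_ex):
    # n = -R d ln(B_ex) / dR via finite differences
    return -r * np.gradient(np.log(b_ex), r)
\end{verbatim}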
We generated datacubes of $n$ for Sept 25, 26, 28, and 29. We selected a box of 10$\times$65$\times$66 grid points, equivalent to 4\deg5 $\times$29\deg25 $\times$1$R_{\sun}$ (solar longitude, latitude, radial height). The $Z$ axis actually goes from $\sim$ 20 Mm to 700 Mm, since we removed the points at the very bottom. This final volume of $\sim$55~Mm $\times$ 357~Mm $\times$ 680~Mm includes the filament and the flux emergence. The $Y$ axis covers approximately the filament's length but, considering its shape, the filament cannot be taken as wholly lying in the selected solar-longitude plane. We clipped the $n$ values from 1.2 to 2.0. We cut this box along and across the exact location where the flux emergence happens, as shown in Fig.~\ref{pfss_cuts}.
The decay index $n$ is represented in shades of purple-blue in the volume where it is torus-unstable \citep[$n$ becomes a critical factor for instability, $n_{crit}$, when its value ranges from 1.3 to 1.5, as in][]{Zuccarello2014}. Considering the low impulsivity of the ejection, this threshold is also in agreement with \citet{Torok2007}.
During the previous days, some network points are rooted in the area, but the most important variations occur from Sept 28 12:00 UT to Sept 29 12:00 UT. The height of the filament may vary and is only marked for Sept 29: a hollow circle marks the projection-uncorrected height, and a cross the corrected height. On Sept 28, $n$ increased in the emergence area. Black areas are those where $n<$1.2, and they decreased from Sept 25 onwards. On Sept 26, a large network area also made $n$ increase, but it was not located at the same coordinates as the emergence.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.30]{./arcs_AR_final.pdf} \includegraphics[scale=0.182]{./YY_final.pdf}
\includegraphics[scale=0.23]{./scale_nn.pdf}
\includegraphics[scale=0.355]{./XX_final.pdf}
\caption{{\it Top left:} PFSS magnetic field rendering of the filament region for Sept 29 data. A low-resolution magnetogram was used, with the same FOV as Fig.~\ref{fig1} (right). {\it Top right:} transverse slice of the decay index $n$ across the volume that encloses the filament and the emergence. The estimated height of the filament for Sept 29 is marked with a black cross (corrected) or a small circumference (uncorrected). {\it Bottom:} longitudinal slice of the decay index $n$ along the volume that encloses the filament and the emergence, with the same symbols for the filament height. This figure corresponds to subsection~\ref{pfsss}.}
\label{pfss_cuts}
\end{center}
\end{figure*}
\section{Discussion}\label{s2}
The emergence of this relatively small-scale region follows the Hale-Nicholson law of polarities \citep[the same leading and trailing polarity in the same hemisphere,][]{Hale1919}; in this case, the negative polarity is leading and the positive is trailing, as in the leading AR11850. However, in the right panel of Fig.~\ref{fig1}, we notice a different magnetic configuration. The filament is located between the negative polarity of the next active region (to the left) and the positive polarity of AR11850, between which we can locate the polarity inversion line (PIL). Therefore, the emergence exhibits a normal configuration, but one inverse to the filament configuration, which can help the magnetic instability. The situation is similar to the simulations that originate eruptions in \citet{Chen2000}, where an opposite-polarity configuration emerges beneath or at the side of a filament. Observing in different coronal wavelengths, such as 193~\AA~and 94~\AA, there is no evidence of a large magnetic canopy over the filament in the form of bright loops. LASCO images also reveal a leading edge with the filament inside it. These images seem to be in agreement with the standard model of a twisted flux rope surrounded by a less twisted longitudinal magnetic field, but without a large-scale magnetic arcade. The inverse-S shape and the barbs would define it as having dextral chirality, and the longitudinal field of the filament would then point to the solar north direction. The twisted field lines would exhibit negative helicity.
We note that this filament configuration (dextral) follows the hemispheric pattern; however, exceptions to this rule have been reported \citep[e.g. see][and references therein]{Mackay2014}. Furthermore, under a full polarimetric analysis, the chirality definition may change, from a topological orientation of features to a real estimation of the magnetic field direction and sense \citep{Hanaoka2014}.
The barb was very close to the positive polarity even before the supergranular flux emergence, and it seems fundamental to the mass dynamics of the filament. Since the barb is very close to the emergence, it may be the point where the instability is transmitted.
The rate of flux emergence, about 2$\times$10$^{17}$~Mx min$^{-1}$, is at least one order of magnitude larger than those reported by \citet{Palacios2012}. However, the expansion velocity of the flux emergence, 0.16~{${\rm km}\:{\rm s}^{-1}\:\!\!$}, is much slower than in Palacios et al., where it is approximately 0.6~{${\rm km}\:{\rm s}^{-1}\:\!\!$}. In this case, the higher magnetic fields may slow down the upflowing plasma.
The expansion velocity calculated with centroids is similar.
While we expected dramatic changes in the photospheric magnetic flux, such as large reconnection features, we only found the bright blob in 94~\AA, which is also conspicuously bright in 211~\AA. The magnetic flux emergence lasts around 12 hours and smoothly decays, probably due to the reconnection that started shortly after the flux emergence was detected. We included spectropolarimetric data to clarify the magnetic structure by adding the transverse component during the whole emergence. We might expect a larger transverse field structure, but that may be smeared out because of the averaging over 12 minutes, or by some circular-to-linear cross-talk. Combined AIA 94~\AA~images and longitudinal magnetograms suggest a magnetic link between the emergence in the photosphere and its coronal counterpart, in the form of a blob brighter than the surroundings. This coronal counterpart is also visible in 304 and 193~\AA, but somewhat occulted by the barb due to its opacity. The blob is best seen in 211~\AA\ since the emergence patch is better seen through the filament barb, the latter being less opaque.
Compared to the simulations of \citet{Kusano2012} and \citet{Toriumi2013}, this emergence region resembles an OP-type configuration with azimuth angle $\phi_{e} \sim$ 160\mbox{$^{\circ}$}~and shear $\theta_{0}$ $\sim$ 60\mbox{$^{\circ}$}~($\phi_{e}$ is the angle between the PIL and the orientation of the bipolar region). With this shear and according to these simulations, the injected kinetic energy might be large enough to provoke an eruption. \citet{Bamba2013} states that OP-type configurations may influence the destabilization of a flux rope via reconnection.
We followed the evolution of the filament for about six days, and the phenomenon known as the "sliding door effect" \citep{Okamoto2008, Okamoto2009, Kuckein2012} is not evident, since the distance between the positive and negative facular boundaries remains the same, $\sim$220\hbox{$^{\prime\prime}$}. The filament was already there at the beginning of the study, and the part that grew from then on does not exhibit any distinctive pattern of the sliding door effect.
The development of pores has been reported by \citet{Kuckein2012} and is considered a tracer of filament emergence. However, \citet{Vargas2012} studied the event of \citet[][]{Okamoto2008, Okamoto2009} and considered these photospheric signatures as necessary but not sufficient evidence for supporting the emergence hypothesis. We also find the appearance of very small pores. However, this is probably more related to the intensification of the magnetic field \citep{Grossmann1998, Bellot2001, Danilovic2010} than to the characteristics of another emerging flux rope, since the magnetic field density increases from -200 to -1000 G. In the right panel of Fig.~\ref{fig3}, other peaks appear in the positive magnetic field density, probably caused by recurrent magnetic field intensification in the region.
Another point is that the CME speed was reported as 600 {${\rm km}\:{\rm s}^{-1}\:\!\!$}. When considering CME velocities, we should always keep in mind the solar escape velocity, 617 {${\rm km}\:{\rm s}^{-1}\:\!\!$}. Any difference from this velocity can be accounted for by the projection angle or by the kinetic energy injected into the CME.
Furthermore, we checked that CH1 becomes visually darker, which might be an indicator of more open field lines there. Some of these features will be treated in another paper.
Regarding the filament eruption, the current instability models may explain or rule out the mechanism that causes the eruption. In the review of, e.g., \citet{Aulanier2014}, different models are presented. The breakout model \citep{Antiochos1998, Antiochos1999} depends on significant shearing and involves reconnection above the flux rope \citep{Schmieder2013}. Reconnection plays a central role in some of these models \citep{Zuccarello2013}. Loss-of-equilibrium models \citep{Forbes1991, Forbes1995} may explain the slow filament rising \citep{Schmieder2013}. These models rely on photospheric motions, approaching polarities, or vortical motions \citep{Amari2003, Aulanier2010}. Another model to consider is "tether-weakening", with observational examples presented by \citet{Sterling2005, Sterling2007}. This weakening is usually due to emergence off the filament channel \citep{Moore1992}.
In addition to the observational signatures of the models, the height-time fitting may provide hints about the physical mechanism. In \citet{Schrijver2008}, the expansion is fit with different functional forms (polynomial, exponential, power-law, etc.) and evaluated with the merit function $\chi^{2}$. Time-height exponential fitting is also used for very rapid CMEs, as in the case of \citet{Gallagher2003}; \citet{Gosain2012} also uses exponential fitting. The time-height fittings presented here give better results for exponential functions than for any other, and even better for the projection-corrected height than for the uncorrected one. Indeed, an exponential ascent may be characteristic of different ideal instabilities, e.g. the torus instability in its first stages \citep{Schrijver2008, Gosain2012}. Observing the filament expansion and its shape at several solar radii may reveal a kink instability, as in \citet{Torok2005}.
We checked the decay index $n$ of the area from Sept 25 to 29, similar to the study by \citet{Zuccarello2014}. The parameter $n_{crit}$ is always between 1.3 and 1.5 in the filament and emergence volume, which may indicate a torus instability. Moreover, the lower part of the volume exhibits an important change of $n$ in the area corresponding to the flux emergence, which proves useful to identify ongoing instabilities. Importantly, $n_{crit}$ in the mentioned range may also imply an exponential-to-linear expansion \citep{Kliem2006}.
Having explored the main eruption models in the above paragraphs, ideal instabilities (torus and also kink) appear to be the most plausible mechanism. We may establish a possible scenario as follows: the filament is destabilized, and a large magnetic flux emergence below the filament spine can help the destabilization, lifting the filament against gravity and magnetic tension by increasing the magnetic pressure.
The filament may be located close to a neutral point, which might be metastable (the paler shade of blue in Fig.~\ref{pfss_cuts}, corresponding to the minimum of $n$ in the $Z$ direction). This point may allow the topology of the magnetic field to be maintained without 3-D reconnection with the lower part of the filament, and may help the torus instability to develop owing to the flux emergence beneath this point. These kinds of eruptions may be slow, lead to CMEs, and have one or several flux emergence patches underneath.
\section{Summary and conclusions}\label{s3}
This filament, located at the boundaries of active region AR11850, evolved during almost five days of observations and eventually erupted. On Sept 29, 00:00 UT, a supergranular flux area started emerging, and about 20~h later the filament started rising. The most conspicuous rising phase lasted about 38 min; the C1.2 flare onset occurred at 21:43 UT, while the two-ribbon arcade started developing around 15 min later.
The height-time profile is exponential, and the theoretical model that best fits this smooth ascent is the torus instability, plausibly related to a large magnetic flux emergence, as supported by the height-time profile and the succession of events (first the filament eruption and the flare afterwards).
\section*{Acknowledgements}
The authors want to acknowledge SDO/AIA and SDO/HMI Data Science Centers and Teams, and the GOES and LASCO teams. The LASCO catalog is a CME catalogue that is generated and maintained at the CDAW Data Center by NASA and The Catholic University of America in cooperation with the Naval Research Laboratory. SOHO is a project of international cooperation between ESA and NASA.
We would like to thank the Virtual Solar Observatory (VSO) and Helioviewer for data acquisition and visualisation, respectively. We also acknowledge the use of data from the WDC for Geomagnetism, Kyoto, and from LMSAL SolarSoft. This research has made use of NASA's Astrophysics Data System. We acknowledge funding from the Spanish project PPII10-0183-7802 from ``Junta de Comunidades de Castilla -- La Mancha'' and MINECO project AYA2013-47735-P. We thank the anonymous referee for useful comments.
\bibliographystyle{aa}
\section{Introduction}\label{intro}
\vspace{-0pt}
Multiclass classification is the problem of classifying data instances into one of three or more classes. In a typical learning process for multiclass classification, assuming that there are $K > 2$ classes, i.e. $Y= \{C_1,C_2,...,C_K\}$, and $n$ training instances, i.e. $S=\{\{x_1,y_1\},\{x_2,y_2\},...,\{x_n,y_n\}\}$, each training instance belongs to one of the $K$ different classes, and the goal is to construct a function $f(x)$ which, given a new data instance $x$, can correctly predict the class to which the new instance belongs. Multiclass classification problems are very common in the real world, with a variety of scenarios such as image classification~\cite{ciregan2012multi}, text classification~\cite{nigam2000text}, e-commerce product classification~\cite{schulten2001commerce}, and medical diagnosis~\cite{panca2017application}. Currently, one of the most widely-used solutions for multiclass classification is the family of decomposition methods\footnote{There are also some other efforts that try to solve the multiclass problem directly, like~\cite{bredensteiner1999multicategory,choromanska2015logarithmic,mroueh2012multiclass,weston1998multi, hsu2009multi,prabhu2014fastxml,si2017gradient,yen2016pd}. However, they are not as popular as decomposition methods and thus are not in the scope of this paper.}, which split a multiclass problem, or polychotomy, into a series of independent two-class problems (dichotomies) and recompose them using the outputs of the dichotomies in order to reconstruct the original polychotomy. In practice, the widespread use of decomposition methods is mainly due to their simplicity and easy adaptation to existing popular learners, e.g. support vector machines, neural networks, and gradient boosting trees.
There are several concrete realizations of decomposition methods, including One-Versus-All (OVA)~\cite{nilsson1965learning}, One-Versus-One (OVO)~\cite{hastie1998classification}, and Error-Correcting-Output-Code (ECOC)~\cite{dietterich1995solving}. In particular, OVA trains $K$ different base learners, where for the $i$-th learner the positive examples are all the instances in class $C_i$ and the negative examples are all those not in $C_i$; OVO trains $K\times(K-1) / 2$ base learners, one for each pair of classes. While OVA and OVO are simple to implement and widely used in practice, they have some obvious disadvantages. First, both OVA and OVO are based on the assumption that all classes are orthogonal and the corresponding base learners are independent of each other, which neglects the latent correlation among classes in real-world applications. For example, in an image classification task, the instances under the `{\em Cat}' class apparently yield stronger correlation to those under the `{\em Kitty}' class than to those under the `{\em Dog}' class. Moreover, the training of OVA and OVO is inefficient due to its high computation complexity when $K$ is large, leading to extremely high training cost on large-scale classification datasets.
ECOC-based methods, on the other hand, are theoretically preferable over both OVA and OVO since they can, to some extent, alleviate these disadvantages. More concretely, ECOC-based methods rely on a coding matrix, which defines a new transformation of instance labeling, to decompose the multiclass problem into dichotomies, and then recompose them in a way that decorrelates the dichotomies and corrects errors. Allowing different distances between different pairs of classes indeed enables ECOC-based methods to leverage the correlations among classes in the whole learning process. For example, if the coding matrix assigns $(1, 1, 1)$, $(1, 1, -1)$ and $(-1, -1, -1)$ to `{\em Cat}', `{\em Kitty}' and `{\em Dog}', respectively, the learned model can ensure a closer distance between instance pairs across `{\em Cat}' and `{\em Kitty}' than across `{\em Cat}' and `{\em Dog}'. Moreover, since the length of the code, which is also the number of base learners, can be much smaller than $K$, ECOC-based methods can significantly reduce the computation complexity compared with OVA and OVO, especially when the original class number $K$ is very large.
Given the delicate design of class coding, the performance of ECOC-based methods highly depends on the design of the coding matrix and the corresponding decoding strategy. The most straightforward way is to create a random coding matrix for class transformation with a Hamming decoding strategy. The accuracy of this simple approach, however, can be highly volatile due to its randomness. To address this problem, many efforts have been made to optimize the coding matrix. However, it is almost impossible to find an optimal coding matrix due to the complexity of the problem, and even finding a sub-optimal coding matrix is likely to be quite time-consuming. Such uncertainty and inefficiency in recognizing a sub-optimal coding matrix undoubtedly prevent the broader use of ECOC-based methods in real-world scenarios.
To address this challenge, we propose a new dynamic ECOC-based decomposition approach, named LightMC. Instead of using fixed coding matrix and decoding strategy, LightMC can dynamically optimize the coding matrix and decoding strategy, toward more accurate multiclass classification, jointly with the training of base learners in an iterative way.
To achieve this, LightMC takes advantage of a differentiable decoding strategy, which allows it to perform the optimization by gradient descent and guarantees that the training loss can be further reduced.
In addition to improving the final classification accuracy by obtaining a coding matrix and decoding strategy more beneficial to the classification performance, LightMC can significantly boost efficiency, since it saves the time otherwise spent searching for a sub-optimal coding matrix. As LightMC optimizes the coding matrix together with the model training process, it is not necessary to spend much time tuning an initial coding matrix; as shown by further empirical studies, even a random coding matrix can yield satisfying results.
To validate the effectiveness and efficiency of LightMC, we conduct experimental analysis on several public large-scale datasets. The results illustrate that LightMC can outperform OVA and existing ECOC-based solutions in terms of both training speed and accuracy.
This paper makes the following major contributions:
\begin{itemize}[leftmargin=12pt,itemsep=1pt,topsep=2pt]
\item We propose a new dynamic decomposition algorithm, named LightMC, that can outperform traditional ECOC-based methods in terms of both accuracy and efficiency.
\item We define a differentiable decoding strategy and derive an effective algorithm to dynamically refine the coding matrix by extending the well-known back propagation algorithm.
\item We conduct extensive experimental analysis on multiple public large-scale datasets to demonstrate both the effectiveness and the efficiency of the proposed decomposition algorithm.
\end{itemize}
The rest of the paper is organized as follows. Section~\ref{sec_background} introduces ECOC decomposition approaches and related work. Section~\ref{lightmc} presents the details of LightMC. Section~\ref{experiment} shows experimental results that validate our proposition on large-scale, publicly available multiclass classification datasets. Finally, we conclude the paper in Section~\ref{conclusion}.
\vspace{-0pt}
\section{Preliminaries}\label{sec_background}
\vspace{-0pt}
\subsection{Error Correcting Output Code (ECOC)} \label{sec_ecoc}
\vspace{-0pt}
ECOC was first introduced to decompose multiclass classification problems by
Dietterich and Bakiri~\cite{dietterich1995solving}. In this method, each class $k$ is assigned a codeword $\boldsymbol{M_k}$, where $M_{kj}$ represents the label of data from class $k$ when learning base learner $j$. All codewords can be combined to form a matrix $\boldsymbol{M}\in \{-1,1\}^{K\times L}$, where $L$ is the length of one codeword as well as the number of base learners. Given the outputs of the base learners $\boldsymbol{o} = \{o_1, o_2,\dots,o_L\}$, the final multiclass classification result can be obtained through a decoding strategy:
\begin{equation}
\label{original_decoding}
\hat{y}= \mathop{argmin}_k\, t_k, \textrm{\ \ where\ \ } t_k = \frac{1}{2}\sum_{j=1}^{L}{\vert M_{kj}-sgn(o_j) \vert}
\end{equation}
where $\hat{y}$ is the predicted class and $sgn$ is the sign function: $sgn(o)$ equals $1$ if $o \geq 0$ and $-1$ otherwise. This decoding strategy is also called Hamming decoding, as it makes the prediction by choosing the class with the lowest Hamming distance. Under such a decoding strategy, the coding matrix is capable of correcting a certain number of errors made by the base learners~\cite{dietterich1995solving}.
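For illustration, Eq.~(\ref{original_decoding}) translates directly into the following sketch (array-based, with our assumed data layout).
\begin{verbatim}
# Hamming decoding; M is the (K, L) coding matrix in {-1, 1}
# and o the length-L vector of base-learner outputs.
import numpy as np

def hamming_decode(M, o):
    s = np.where(o >= 0, 1, -1)              # sgn(o)
    t = 0.5 * np.abs(M - s).sum(axis=1)      # t_k for each class
    return int(np.argmin(t))                 # predicted class
\end{verbatim}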
ECOC-based methods yield many advantages over traditional decomposition approaches. First, the introduction of the coding matrix, which can encode different distances between different class pairs, enables us to integrate the correlation among classes into the classification modeling so as to further improve the classification accuracy. Moreover, since the code length $L$, i.e., the number of base learners, can be much smaller than the number of classes $K$, ECOC-based methods can be more efficient than OVA and OVO, especially when $K$ is very large.
It is obvious that the classification performance of ECOC-based methods highly depends on the design of the coding matrix. Nevertheless, the problem of finding the best coding matrix is NP-complete, as stated in~\cite{crammer2002learnability}. Thus, it is almost impossible to find an optimal coding matrix, and even finding a sub-optimal one is likely to be quite time-consuming. Such uncertainty and inefficiency undoubtedly prevent the broader use of ECOC-based methods in real-world applications.
\vspace{-0pt}
\subsection{Related work}\label{relatedwork}
\vspace{-0pt}
Recent years have witnessed many efforts attempting to improve ECOC-based decomposition methods. In particular, many existing studies focused on discovering a more appropriate coding matrix. For example, some efforts made hierarchical partitions of the class space to generate the corresponding code~\cite{baro2009traffic,pujol2006discriminant}; other studies explored genetic algorithms to produce coding matrices with good properties~\cite{garcia2008evolving, bautista2012minimal, bagheri2013genetic, bautista2014design}; moreover, a couple of efforts have demonstrated significant improvement of ECOC-based methods by using spectral decomposition to find a good coding matrix~\cite{zhang2009spectral} or by relaxing the integer constraint on the coding matrix elements so as to adopt a continuous-valued coding matrix~\cite{zhao2013sparse}. In the meantime, some previous studies turned to optimizing the decoding strategy by employing bagging and boosting approaches~\cite{hatami2012thinned, rocha2014multiclass} or by assigning deliberate weights to base learners for further aggregation~\cite{escalera2006ecoc}.
While these previous studies can improve ECOC-based methods in some sense, they still suffer from two main challenges:
1) \emph{Efficiency:} In order to increase multiclass classification accuracy, many previous works like~\cite{baro2009traffic,pujol2006discriminant} designed the coding matrix with a long code length $L$, ranging from $K-1$ to $K^2$, which leads to almost as many base learners as the models needed in OVA and OVO. Such a limitation makes existing ECOC-based methods very inefficient on large-scale classification problems.
2) \emph{Scalability:} In fact, most of the previous ECOC-based methods were studied on small-scale classification data, usually consisting of, for example, tens of classes and thousands of samples~\cite{zhao2013sparse}. To the best of our knowledge, there is no existing deep verification of the performance of ECOC-based methods on large-scale classification data. Meanwhile, such an investigation is quite difficult even in theory, since most of these methods cannot scale up to large-scale data due to their long code length and the expected great pre-processing cost.
Because of these major shortcomings, it is quite challenging to apply existing ECOC-based methods to real-world applications, especially large-scale multiclass classification problems.
\vspace{-0pt}
\section{LightMC}\label{lightmc}
\vspace{-0pt}
To address the major shortcomings of ECOC-based methods stated in Sec.~\ref{sec_background}, we propose a new multiclass decomposition algorithm, named LightMC. Instead of determining the coding matrix and decoding strategy before training, LightMC attempts to dynamically refine the ECOC decomposition by directly optimizing the global objective function, jointly with the training of base learners.
More specifically, LightMC introduces a new differentiable decoding strategy, which enables it to optimize the coding matrix and decoding strategy directly via gradient descent during the training of base learners. As a result, LightMC yields two-fold advantages: 1) \emph{Effectiveness:} rather than separating the design of the coding matrix and decoding strategy from base learner training, LightMC can further enhance ECOC-based methods in terms of classification accuracy by jointly optimizing the coding matrix, decoding strategy, and base learners; 2) \emph{Efficiency:} since the coding matrix is automatically optimized in the subsequent training, LightMC can significantly reduce the time cost of finding a good coding matrix before training.
In this section, we first introduce the overall training algorithm. Then, we present our new decoding model and derive the optimization algorithms for the decoding strategy and the coding matrix based on it. Moreover, we further discuss the performance and efficiency of LightMC.
\vspace{-0pt}
\subsection{Overall Algorithm}\label{overall_algorithm}
\vspace{-0pt}
\begin{figure}[H]
\centering
\includegraphics[height=1.1cm, width=11.8cm]{flowchart_simple.png}
\caption{The general learning procedure of LightMC.}
\label{flowchart}
\end{figure}
\begin{minipage}[t]{0.49\textwidth}
\begin{algorithm}[H]
\caption{LightMC}
\label{algo_train}
\begin{algorithmic}[1]
\State {\bfseries Input:} data $\boldsymbol{X}$, label $\boldsymbol{y}$, code length $L$, \\base learner $f$, max number of iterations (epochs) $T$, starting iteration $i_s$, base learner learning rate $\alpha$, decoding parameter $\boldsymbol{\Theta}$
\State Initialize a coding matrix $\boldsymbol{M}$
\State Initialize $\boldsymbol{\Theta}$ according to $\boldsymbol{M}$
\For{$i=1$ {\bfseries to} $T$}
\State Train base learners $f_1,f_2,...,f_L$ for a single iteration using $\boldsymbol{X},\boldsymbol{y},\boldsymbol{M}$ and $\alpha$
\If{base learner is not a boosting learner {\bfseries or} ($i\geq i_s$ {\bfseries and} $(i-i_s)\bmod{(1/\alpha)} = 0$)}
\State $\boldsymbol{o} = Predict(f_1, .., f_L)$
\State $\hat{\boldsymbol{y}} = Decode(\boldsymbol{\Theta}, \boldsymbol{o})$
\State {\bfseries TrainDecoding($\boldsymbol{\Theta}, \boldsymbol{y},\hat{\boldsymbol{y}}$)}
\State {\bfseries TrainCodingMatrix($\boldsymbol{M}, \boldsymbol{y},\hat{\boldsymbol{y}}$)}
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\end{minipage}
\hspace{1mm}
\begin{minipage}[t]{0.49\textwidth}
\begin{algorithm}[H]
\caption{TrainDecoding}
\label{algo_decoding}
\begin{algorithmic}[1]
\State {\bfseries Input:} parameter $\boldsymbol{\Theta}$, label $\boldsymbol{y}$, prediction $\hat{\boldsymbol{y}}$
\State Use mini-batch gradient descent to update $\boldsymbol{\Theta}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\caption{TrainCodingMatrix}
\label{algo_update}
\begin{algorithmic}[1]
\State {\bfseries Input:} coding matrix $\boldsymbol{M}$, label $\boldsymbol{y}$, prediction $\hat{\boldsymbol{y}}$
\State Compute $\boldsymbol{G}$: data-wise gradients
\State Compute $\boldsymbol{C}$: \#data for each class
\For{$i=1$ {\bfseries to} $L$} \Comment{For all codes}
\State {Compute $\boldsymbol{S}$: sum gradients at $i$th code for each class}
\For{$k=1$ {\bfseries to} $K$}
\State $\beta_k=S_k/C_k$
\State $M_{ki}=M_{ki}-\gamma_2\beta_k$
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
\end{minipage}
The general learning procedure of LightMC is summarized in Fig.~\ref{flowchart}. More specifically, before LightMC starts training, a coding matrix is first initialized by an existing ECOC-based solution. Then, to make full use of the training information from base learners, LightMC employs an alternating optimization algorithm, which alternates the learning of base learners with the coding and decoding optimization: when training base learners, the coding matrix and decoding strategy are fixed, and vice versa. This joint learning procedure runs repeatedly until the whole training converges.
Note that, instead of determining the coding matrix before training, LightMC develops an end-to-end solution to jointly train base learners and the decomposition model in an iterative way. The details of the LightMC algorithm can be found in Alg.~\ref{algo_train}. Within this algorithm, there are two essential steps: \textbf{TrainDecoding} optimizes the decoding strategy, with details revealed in Sec.~\ref{sec_traindecoding}; and \textbf{TrainCodingMatrix} optimizes the coding matrix, with details introduced in Sec.~\ref{codematrix_update}.
\vspace{-0pt}
\vspace{-5pt}
\subsection{New Differentiable Decoding Strategy: Softmax Decoding}\label{sec_traindecoding}
\vspace{-0pt}
To find the optimal coding and decoding strategies, it is necessary to optimize the global objective function directly. However, most existing decoding strategies are not differentiable, which prevents us from optimizing the global objective function directly via the widely-used back-propagation method. To remove this obstacle, it is critical to design a decoding strategy that is differentiable while preserving the error-correcting property.
A deep dive into the decoding strategy, i.e., Eq.~\ref{original_decoding}, discloses two non-differentiable functions: $sgn$ and $argmin$. As introduced in~\cite{escalera2010decoding}, $sgn$ can be removed directly, since the resulting distance function becomes the Manhattan (L1) distance, which still preserves the error-correcting property. In the meantime, $argmin$ can be replaced by the widely-used $softmax$, which approximates the hard assignment while producing continuous probabilities and is thus differentiable. More specifically, we first replace $argmin$ with $argmax$ by reversing the sign of $M_{kj}$. In this way, when the output of the $j$-th classifier $o_j$ equals $M_{kj}$, the distance attains its maximum instead of its minimum. After that, we replace $argmax$ with $softmax$ directly, and the whole decoding strategy becomes
\begin{equation}
\label{equ_new_coding}
\hat{\boldsymbol{y}}= softmax(\boldsymbol{t}), \textrm{\ \ where\ \ } t_k = \frac{1}{2}\sum_{j=1}^{L}{\vert -M_{kj}-o_j \vert},
\end{equation}
where $t_k$ denotes the similarity between the classifier outputs and the code of class $k$. Although the L1 distance is applied in the algorithm, the L2 distance or the other distance functions mentioned in~\cite{escalera2010decoding} are also applicable and should produce similar results. Note that, after all the transformations mentioned above, the decoding strategy assigns the highest score to the class closest to the output vector, which is exactly the error-correcting property~\cite{escalera2010decoding}.
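As a sanity check, the decoding of Eq.~\ref{equ_new_coding} takes only a few lines of numpy (a sketch; the max-shift inside the exponential is a standard numerical safeguard, not part of the formulation):
\begin{verbatim}
import numpy as np

def softmax_decode(M, o):
    # t_k = 0.5 * sum_j |-M_kj - o_j|, then softmax over classes.
    t = 0.5 * np.abs(-M - o).sum(axis=1)   # one score per class
    e = np.exp(t - t.max())                # shift for numerical stability
    return e / e.sum()                     # class probabilities y_hat
\end{verbatim}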
Such a differentiable error-correcting decoding strategy enables us to employ the widely-used gradient descent algorithm to optimize the decoding strategy directly. Before doing so, we notice that the new decoding function can be rewritten in the form of a single-layer softmax regression. Since the distance function in Eq.~\ref{equ_new_coding} satisfies
\begin{equation*}
\vert -M_{kj}-o_j \vert =
\begin{cases}
1-o_j, &M_{kj}=-1 \\
1+o_j, &M_{kj}=1
\end{cases}
= 1 + M_{kj}o_j ,
\end{equation*}
the decoding strategy can be rewritten as:
\begin{equation}
\label{equ_softmax}
\begin{split}
t_k = \frac{1}{2}\sum_{j=1}^L{(1 + M_{kj}o_j)} &= \frac{1}{2}(\boldsymbol{M_k}^{T}\boldsymbol{o}+L),
\textrm{\ \ let\ } \boldsymbol{\theta_k}=\boldsymbol{M_k}, b_k=L,\textrm{\ \ then we have}\\
\boldsymbol{\hat{y}} &= softmax(\boldsymbol{t}), \ \ t_k = \frac{1}{2}(\boldsymbol{\theta_k}^T\boldsymbol{o}+b_k)
\end{split}
\end{equation}
which has exactly the same form as a single-layer linear model with a softmax activation. As a result, we can use gradient descent to train the softmax parameters $\boldsymbol{\Theta}$, initialized by $\boldsymbol{M}$, in order to reduce the overall loss. For the convenience of derivative computation, we choose the multiclass cross entropy, which is commonly used together with the softmax function, as our loss function. The overall loss on a single data point can be formulated as
\begin{equation*}
J = -\sum_{k=1}^K{(1-y_k)\log(1-{\hat{y}_k})+y_k\log({\hat{y}_k})},
\textrm{\ \ where $\boldsymbol{\Theta}$ is updated by \ }\boldsymbol{\theta_k}^t = \boldsymbol{\theta_k}^{t-1} - \gamma_1 \frac{\partial J}{\partial \boldsymbol{\theta_k}^{t-1}},
\end{equation*}
where $\gamma_1$ is the learning rate and $\boldsymbol{y}$ is a one-hot vector transformed from the original label.
This optimization process is invoked as \textbf{TrainDecoding} in Alg.~\ref{algo_decoding}. As in ordinary gradient descent, the data are partitioned into mini-batches, each of which is used to calculate the gradients for a single round of updates. We can also apply L1/L2 regularization here to improve the generalization ability. Note that the validity of gradient descent guarantees that the overall loss decreases through the iterations, which ensures this algorithm is a valid method to refine the decoding strategy.
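For illustration, one mini-batch step of \textbf{TrainDecoding} might look as follows; this is a sketch assuming base-learner outputs $O$ of shape $(n, L)$, one-hot labels $Y$ of shape $(n, K)$, and the constant bias $b_k = L$, and it uses the standard softmax-plus-cross-entropy simplification in which the gradient with respect to the scores is $\hat{y}-y$.
\begin{verbatim}
import numpy as np

def train_decoding_step(Theta, b, O, Y, lr):
    # Scores s_k = 0.5 * (theta_k^T o + b_k) for a whole mini-batch.
    S = 0.5 * (O @ Theta.T + b)
    E = np.exp(S - S.max(axis=1, keepdims=True))
    Y_hat = E / E.sum(axis=1, keepdims=True)   # softmax probabilities
    G_s = (Y_hat - Y) / len(O)                 # dJ/ds, averaged over the batch
    Theta -= lr * 0.5 * (G_s.T @ O)            # chain rule through 0.5*theta^T o
    return Theta
\end{verbatim}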
\vspace{-0pt}
\subsection{Coding Matrix Optimization}\label{codematrix_update}
\vspace{-0pt}
Besides the decoding optimization, it is quite beneficial to optimize the coding matrix through the iterative training as well. We notice that, if the input $\boldsymbol{o}$ of the softmax decoding could also be updated via back propagation, we would be able to further lower the overall training loss. The corresponding update process can be defined as
$\boldsymbol{o}^t = \boldsymbol{o}^{t-1} - \gamma_2 \frac{\partial J}{\partial \boldsymbol{o}^{t-1}}$, where $\gamma_2$ is the learning rate.
However, $\boldsymbol{o}$ cannot be updated directly since it is the output of the base learners. Fortunately, optimizing the coding matrix $\boldsymbol{M}$ enables us to update $\boldsymbol{o}$ indirectly so as to further reduce the overall training loss.
As stated in Sec.~\ref{sec_ecoc}, $M_{kj}$ determines the label of the data belonging to class $k$ when they are used to train base learner $j$. If we assume that the base learners fit the given learning targets perfectly, then for any classifier $j$, its output for any data point $i$ belonging to class $k$ always satisfies $o_j^i=M_{kj}$. Thus, changes of $\boldsymbol{M}$ affect the targets of the base learners, and the output $\boldsymbol{o}$ of the base learners changes subsequently. Moreover, since the gradient $\frac{\partial J}{\partial M_{kj}}$ equals $G_{ij}=\frac{\partial J}{\partial o_j^i}$ in this situation, we can optimize $\boldsymbol{M}$ by gradient descent:
$M_{kj}^t = M_{kj}^{t-1} - \gamma_2 \frac{\partial J}{\partial M_{kj}^{t-1}} = M_{kj}^{t-1} - \gamma_2 G_{ij}$.
However, there is no perfect base learner in practice.
As a result, we cannot use the above solution to optimize $\boldsymbol{M}$ directly, since $\frac{\partial J}{\partial M_{kj}} \neq G_{ij}$ in general. Nevertheless, there are many data samples for a single class $k$; that is, for one $M_{kj}$, there are many $G_{ij}$ with $y_i = k$. So instead of using a single unstable gradient $G_{ij}$, we can use the average gradient over each class to obtain a more stable estimate of $\frac{\partial J}{\partial M_{kj}}$:
\begin{equation}\label{average_gradient}
\frac{\partial J}{\partial M_{kj}} \vcentcolon= \frac{1}{\vert \boldsymbol{\Omega_k} \vert}\sum_{i \in \boldsymbol{\Omega_k}}{G_{ij}}, \textrm{\ \ where \ }\boldsymbol{\Omega_k} = \{i|y_i=k\}
\end{equation}
This estimate can then be used to update the coding matrix. The optimization algorithm is described in Alg.~\ref{algo_update}; it is almost the same as normal back propagation, except that it uses the whole batch of data to calculate the average gradients before performing updates. This method is also empirically shown to be effective by our experiments in the next section, which means that, by optimizing the global objective function, the coding matrix can indeed be refined to reduce the loss as well as enhance the generalization capability.
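In code, Alg.~\ref{algo_update} reduces to a few lines (a sketch; $G$ holds the per-sample gradients $\partial J/\partial o_j^i$ of shape $(n, L)$, and $y$ holds integer class labels in $\{0,\dots,K-1\}$):
\begin{verbatim}
import numpy as np

def train_coding_matrix(M, G, y, lr):
    # Update each row of M with the class-averaged gradient
    # of Eq. (average_gradient): mean of G over Omega_k.
    for k in range(M.shape[0]):
        idx = (y == k)                    # Omega_k: samples of class k
        if idx.any():
            M[k] -= lr * G[idx].mean(axis=0)
    return M
\end{verbatim}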
\vspace{-0pt}
\subsection{Discussion}\label{lightmc_discussion}
\vspace{-0pt}
In the rest of this section, we further discuss the efficiency and performance of LightMC.
\begin{itemize}[leftmargin=12pt ,itemsep=1pt,topsep=1pt]
\item \emph{Efficiency:} Compared with existing ECOC-based methods, LightMC is more efficient, as it spends much less time finding a coding matrix before training. Meanwhile, it can still produce comparable performance, since the coding matrix is dynamically refined in the subsequent training. Moreover, LightMC only requires a little additional computation for the optimization, which is the same as the cost of a single-layer linear model and much smaller than the cost of powerful base learners such as neural networks and GBDT. The experimental results in the following section further demonstrate the efficiency of LightMC.
\item \emph{Mini-Batch Coding Optimization:} One shortcoming of Alg.~\ref{algo_update} is its inefficient memory usage, as it uses the full batch for each update. Fortunately, it is quite natural to switch to mini-batch updates, since the average gradients can be calculated over mini-batches as well.
\item \emph{Distributed Coding:} Binary coding is used in most existing ECOC-based methods. In contrast, LightMC employs distributed coding to perform the continuous optimization. Distributed coding, also called embedding, contains more information than binary coding~\cite{mikolov2013distributed,zhao2013sparse}, which enables LightMC to leverage more information about the correlations among classes.
\item \emph{Alternating Training with Base Learners:} As shown in Alg.~\ref{algo_train}, when the base learner is not a boosting learner, for example a neural network, LightMC can be invoked at each iteration (epoch). For boosting learners, LightMC is invoked starting from the $i_s$-th round and once per $1/\alpha$ rounds. The reason is that boosting learners have a \textit{learning rate} $\alpha$ that shrinks the model output at each iteration, so they need more iterations to fit the new training targets. Therefore, using a warm-up of $i_s$ rounds and invoking LightMC once per $1/\alpha$ rounds (e.g., every 10 rounds when $\alpha=0.1$) improves efficiency, since calling LightMC at every iteration is unnecessary.
\item \emph{Comparison with the Softmax Layer in Neural Networks:} The form of softmax decoding is similar to the softmax layer in neural networks. However, they are indeed different: 1) the softmax layer is effectively the same as OVA decomposition, and it does not use a coding matrix to encode the correlations among classes; 2) they use different optimization schemes: the softmax layer reduces the loss per sample, while softmax decoding optimizes the loss per class (see Eq.~\ref{average_gradient}). It is hard to say which one is better in practice for neural networks; some recent work even found that the accuracy is almost the same when using a fixed softmax layer~\cite{hoffer2018fix}. This topic, however, is out of the scope of this paper.
\end{itemize}
\vspace{-0pt}
\vspace{-5pt}
\section{Experiment}\label{experiment}
\vspace{-0pt}
\subsection{Experiment Setting}
\vspace{-0pt}
\begin{table}[t]
\centering
\vspace{-0pt}
\vspace{-5pt}
\caption{Datasets used for experiments.}
\label{datainfo}
\begin{tabular}{lccc}
\toprule
Dataset & \#class & \#feature & \#data\\
\midrule
News20~\cite{chang2011libsvm} & 20& 62,021& 19,928\\
Aloi~\cite{chang2011libsvm}& 1,000& 128& 108,000\\
Dmoz~\cite{yen2016pd}& 11,878& 833,484& 373,408\\
LSHTC1~\cite{yen2016pd}& 12,045& 347,255& 83,805\\
AmazonCat-14K~\cite{mcauley2015inferring, mcauley2015image} \footnote{\url{http://manikvarma.org/downloads/XC/XMLRepository.html}} & 14,588 \footnote{The number of classes is 3,344 after converting to the multi-class format.}& 597,940& 5,497,775\\
\bottomrule
\end{tabular}
\end{table}
In this section, we report the experimental results of our proposed LightMC algorithm. We conduct experiments on five public datasets, as listed in Table~\ref{datainfo}. From this table, we can see that the datasets span a wide range of sizes; the largest contains millions of samples, and several have more than ten thousand classes, which allows us to validate the scalability of LightMC. Among them, AmazonCat-14K is originally a multilabel dataset; we convert it to a multi-class one by randomly sampling one label per data point. As stated in Sec.~\ref{relatedwork}, to the best of our knowledge, this is the first time ECOC-based methods have been examined on such large-scale datasets.
For the baselines, we use OVA and the evolutionary ECOC proposed in~\cite{bautista2012minimal}. OVO is excluded from the baselines due to its extreme inefficiency of using $K\times(K-1)/2$ base learners. For example, on the \textbf{LSHTC1} data, OVO needs about 72 million base learners and is estimated to take about 84 days for a single experiment even if one base learner only costs 0.1 second. The initial coding matrix of LightMC is set to be exactly the same as that of the ECOC baseline to make them comparable. Besides, to assess the efficiency of LightMC, we add another LightMC baseline that starts from a random coding matrix, called LightMC(R). As for the code length $L$, a length of $10\log_2(K)$ was suggested in~\cite{allwein2000reducing}. Considering that our base learner is more powerful, we set $L$ to $\min(5\log_2(K-1)+1, K/2)$.
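To give a sense of scale, the resulting code lengths can be computed directly (an illustrative computation; values rounded):
\begin{verbatim}
import math

def code_length(K):
    # L = min(5*log2(K-1) + 1, K/2), as set in our experiments.
    return min(5 * math.log2(K - 1) + 1, K / 2)

for K in [20, 1000, 12045]:
    print(K, round(code_length(K)))
# 20 -> 10 (capped by K/2), 1000 -> 51, 12045 -> 69
\end{verbatim}
Hence $L$ stays about two orders of magnitude below $K$ on the largest datasets, which is the source of the efficiency gain over OVA.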
For all decomposition methods, we use LightGBM~\cite{ke2017lightgbm} to train the base learners. In all experiments, we set \textit{learning\_rate} ($\alpha$) to $0.1$, \textit{num\_leaves} (max number of leaves in a single tree) to $127$, and \textit{early\_stopping} (early stopping rounds) to 20. For AmazonCat-14K, we override \textit{num\_leaves} to 300 and \textit{early\_stopping} to 10; otherwise, it would take several weeks to run an experiment. Other parameters remain at their default values. Our experimental environment is a Windows server with two E5-2670 v2 CPUs (20 cores in total) and 256GB memory. All experiments run with multi-threading, and the number of threads is fixed to 20.
Regarding the parameters used by LightMC, the starting round $i_s$ is set to $30$, $\gamma_1$ to $0.1$, and $\gamma_2$ to $0.2$. The softmax parameters $\boldsymbol{\Theta}$ are trained for one epoch each time the optimization method is called.
\vspace{-0pt}
\vspace{-10pt}
\subsection{Experiment Result Analysis}
\vspace{-0pt}
\begin{table}[h]
\centering
\vspace{-0pt}
\caption{Comparison on test classification error, lower is better.}
\label{error_result}
\begin{tabular}{lrrrr}
\toprule
Dataset & OVA & ECOC & LightMC(R) & LightMC\\
\midrule
News20& 18.66\% & 20.82\% $\pm$ 0.33\%& 20.63\% $\pm$ 0.57\%& \textbf{18.63\% $\pm$ 0.37\%}\\
Aloi& 11.44\% & 10.72\% $\pm$ 0.12\%& 10.75\% $\pm$ 0.23\%& \textbf{9.75\% $\pm$ 0.12\%}\\
Dmoz& N/A & 55.87\% $\pm$ 0.34\%& 55.55\% $\pm$ 0.44\%& \textbf{53.95\% $\pm$ 0.25\%}\\
LSHTC1& N/A & 76.04\% $\pm$ 0.59\%& 76.17\% $\pm$ 0.73\%& \textbf{75.63\% $\pm$ 0.33\%}\\
AmazonCat-14K& N/A& 27.05\% $\pm$ 0.11\%& 26.98\% $\pm$ 0.21\%& \textbf{25.54\% $\pm$ 0.10\%}\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[hb]
\centering
\vspace{-0pt}
\caption{Training convergence time (average of multiple runs) and coding matrix searching time (the last column), in seconds.}
\label{timing_result}
\begin{tabular}{lrrrr|r}
\toprule
Dataset & OVA & ECOC & LightMC(R) & LightMC & \textit{Coding Matrix}\\
\midrule
News20& \textbf{71} & 120 & 133 & 100 & \textit{34} \\
Aloi& 1494 & 717 & 753 & \textbf{627} & \textit{201} \\
Dmoz& > 259k & 58,320 & 61,930 & \textbf{51,840} & \textit{13,233} \\
LSHTC1& > 86k & 5,796 & 5,995 & \textbf{5,690} & \textit{926} \\
AmazonCat-14K& > 969k & 332,280 & 354,480& \textbf{311,040} & \textit{48,715} \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\setlength\tabcolsep{3pt}
\vspace{-0pt}
\caption{Code distances between class pairs on the \textbf{News20} dataset at different iterations of LightMC.}
\label{dist_result}
\begin{tabular}{lrrrrrrrrr}
\toprule
Class Pairs & 0 & 50 & 100 & 150 & 200 & 300 & 400 & 500 & 1000 \\
\midrule
ibm.hardware, mac.hardware &98.9 & 97.4 & 96.8 & 95.9 & 95.1 & 93.1 & 91.1 & 89.3 &81.8 \\
mac.hardware, politics.mideast &120.9 & 135.5 & 136.4 & 136.8 & 140.8 & 145.6 & 149.5 & 152.7 &163.2\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[ht]
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=6.7cm, height=4.5cm]{aloi}
\caption{Aloi}\label{aloi_performance}
\end{subfigure}
\hspace{2mm}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=6.7cm, height=4.5cm]{LSHTC1}
\caption{LSHTC1}\label{LSHTC1_performance}
\vspace{-5pt}
\end{subfigure}
\caption{Convergence curves on \textbf{Aloi} and \textbf{LSHTC1} datasets.}
\vspace{-0pt}
\end{figure}
The experimental results are reported in Tables~\ref{error_result} and~\ref{timing_result}. The OVA error results on the \textbf{Dmoz}, \textbf{LSHTC1} and \textbf{AmazonCat-14K} datasets are not reported since their time costs are prohibitively high; we instead estimate their convergence times by using subsets of the original data.
From these two tables, we find that LightMC outperforms all the other methods in terms of both accuracy and convergence time. In particular, both ECOC and LightMC converge faster than OVA when $K$ is large. Furthermore, compared with ECOC, LightMC increases the accuracy by about 3\% relatively, and by 5.88\% in the best case on the \textbf{LSHTC1} dataset. As for speed, LightMC also uses less time than ECOC to converge. These results clearly indicate that LightMC can further reduce the overall loss by dynamically refining the coding and decoding strategies, as expected.
We can also see that, although it starts from a random coding matrix, LightMC(R) achieves accuracy comparable to that of ECOC. Despite its slower convergence, the total time cost of LightMC(R) is still much lower than that of ECOC, since ECOC spends an enormous amount of additional time finding a good coding matrix before training. This result further demonstrates the efficiency of LightMC: it can provide comparable accuracy without searching for a sub-optimal coding matrix before training.
To show more learning details, we plot the curves of the test error against the training time on the \textbf{Aloi} and \textbf{LSHTC1} datasets in Figs.~\ref{aloi_performance} and~\ref{LSHTC1_performance}, respectively. From Fig.~\ref{aloi_performance}, we can clearly see that the curve of LightMC always stays below the curves of the other two methods and converges earliest at the lowest point. Fig.~\ref{LSHTC1_performance} shows a slightly different pattern: LightMC and ECOC have similar accuracy and take comparable time to converge. However, LightMC still always stays below ECOC and converges 1,405 seconds earlier, which also indicates that LightMC succeeds in enhancing existing ECOC methods.
In addition, to illustrate the effect of LightMC in optimizing the coding matrix, we calculate the distances of some class pairs on \textbf{News20} over the optimized coding matrix. As shown in Table~\ref{dist_result}, the distance over the class pair (`{\em ibm.hardware}', `{\em mac.hardware}') is much smaller than that over (`{\em mac.hardware}', `{\em politics.mideast}'). Moreover, the distance over the former pair keeps decreasing along with the training of LightMC, while that over the latter keeps increasing due to the irrelevance of this class pair. This result empirically implies that LightMC optimizes the coding matrix in the right direction.
In summary, all these results illustrate the effectiveness and efficiency of LightMC. LightMC can not only empower existing ECOC-based methods but also achieve comparable classification accuracy in much less time, since it saves the time for finding a sound coding matrix before training. Moreover, LightMC can optimize the coding matrix in a better direction.
\vspace{-0pt}
\section{Conclusion}\label{conclusion}
\vspace{-10pt}
We propose a novel dynamic ECOC-based multiclass decomposition algorithm, named LightMC, to solve large-scale classification problems efficiently. To better leverage the correlations among classes, LightMC dynamically optimizes its coding matrix and decoding strategy, jointly with the training of base learners. Specifically, we design a new differentiable decoding strategy to enable direct optimization of the decoding strategy and the coding matrix. Experiments on public datasets with classes ranging from twenty to more than ten thousand empirically show the effectiveness and efficiency of LightMC. In the future, we plan to examine how LightMC works when replacing the softmax layer in neural networks.
\newpage
\bibliographystyle{unsrt}